Evaluation of Measurement Space Representations of Deep Multi-Modal Object Detection for Extended Object Tracking in Autonomous Driving
by Lino Antoni Giefer, Razieh Khamsehashari, Kerstin Schill
Abstract:
The perception capability of automated systems such as autonomous cars plays a crucial role in safe and reliable operation. With the continuously growing accuracy of deep neural networks for object detection on the one hand and the investigation of appropriate space representations for object tracking on the other, both essential parts of perception have received particular research attention in recent years. However, the early fusion of multiple sensors turns the determination of suitable measurement spaces into a complex, non-trivial task. In this paper, we propose the use of a deep multi-modal object detection network for the early fusion of LiDAR and camera data to serve as a measurement source for an extended object tracking algorithm on Lie groups. We develop an extended Kalman filter and model the state space as the direct product Aff(2) x R^6, incorporating second- and third-order dynamics. We compare the tracking performance of different measurement space representations (SO(2) x R^4, SO(2)^2 x R^3 and Aff(2)) to evaluate how our object detection network encapsulates the measurement parameters and the associated uncertainties. Our results show that the lowest tracking errors for single-object tracking are obtained when the measurement space is represented by the affine group. We therefore conclude that our proposed object detection network captures the intrinsic relationships between the measurement parameters, especially between position and orientation.
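To make the group-valued measurement space concrete, the standard homogeneous-matrix embedding of the planar affine group shows how a single Aff(2) element can couple position, orientation and extent. The sketch below, with its particular factorization of the linear part into a rotation and a diagonal scaling for the bounding box, is an illustrative assumption and not necessarily the exact parameterization used in the paper.

% Illustrative sketch (assumed parameterization): an Aff(2) measurement as a
% homogeneous 3x3 matrix coupling pose and extent of the tracked object.
\[
  G =
  \begin{pmatrix}
    A & \mathbf{t} \\
    \mathbf{0}^{\top} & 1
  \end{pmatrix}
  \in \mathrm{Aff}(2),
  \qquad
  A = R(\theta)
  \begin{pmatrix}
    \ell & 0 \\
    0 & w
  \end{pmatrix}
  \in \mathrm{GL}(2),
  \qquad
  \mathbf{t} = \begin{pmatrix} x \\ y \end{pmatrix},
\]
\[
  R(\theta) =
  \begin{pmatrix}
    \cos\theta & -\sin\theta \\
    \sin\theta & \cos\theta
  \end{pmatrix}
  \in \mathrm{SO}(2).
\]
% Position (x, y), heading theta and extent (l, w) live in one group element here,
% whereas a factorized space such as SO(2) x R^4 treats the rotation R(theta) and
% the vector (x, y, l, w) as independent components.

Under this reading, the comparison in the paper amounts to asking whether the detector's outputs and uncertainties are better described jointly on the group or independently on its factors.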
Reference:
Evaluation of Measurement Space Representations of Deep Multi-Modal Object Detection for Extended Object Tracking in Autonomous Driving (Lino Antoni Giefer, Razieh Khamsehashari, Kerstin Schill), In IEEE 3rd Connected and Automated Vehicles Symposium (CAVS), 2020.
BibTeX Entry:
@inproceedings{giefer2020evaluation,
  title={Evaluation of Measurement Space Representations of Deep Multi-Modal Object Detection for Extended Object Tracking in Autonomous Driving},
  author={Giefer, Lino Antoni and Khamsehashari, Razieh and Schill, Kerstin},
  booktitle={IEEE 3rd Connected and Automated Vehicles Symposium (CAVS)},
  year={2020},
  pages={1--6},
  organization={IEEE},
  doi={10.1109/CAVS51000.2020.9334646},
  url={https://doi.org/10.1109/CAVS51000.2020.9334646},
  abstract={The perception capability of automated systems such as autonomous cars plays a crucial role in safe and reliable operation. With the continuously growing accuracy of deep neural networks for object detection on the one hand and the investigation of appropriate space representations for object tracking on the other, both essential parts of perception have received particular research attention in recent years. However, the early fusion of multiple sensors turns the determination of suitable measurement spaces into a complex, non-trivial task. In this paper, we propose the use of a deep multi-modal object detection network for the early fusion of LiDAR and camera data to serve as a measurement source for an extended object tracking algorithm on Lie groups. We develop an extended Kalman filter and model the state space as the direct product Aff(2) x R^6, incorporating second- and third-order dynamics. We compare the tracking performance of different measurement space representations (SO(2) x R^4, SO(2)^2 x R^3 and Aff(2)) to evaluate how our object detection network encapsulates the measurement parameters and the associated uncertainties. Our results show that the lowest tracking errors for single-object tracking are obtained when the measurement space is represented by the affine group. We therefore conclude that our proposed object detection network captures the intrinsic relationships between the measurement parameters, especially between position and orientation.},
  keywords={proreta}
}