Sensor Fusion with Failure Robustness: A Deep Learning 3D Object Detection Architecture
Type
Master's thesis
Abstract
Autonomous driving is the task of navigating a vehicle without driver interaction. This requires accurate perception of the surroundings, including three-dimensional object detection. In this area, deep learning methods show strong results, typically using images, point clouds, or a fusion of both. These methods are usually evaluated with full sensor availability. However, in a safety-critical system, unexpected scenarios such as sensor failure must be accounted for. The objective of this thesis is to develop a deep learning architecture that is robust to the case where some sensors stop sending data.
The proposed architecture is inspired by leading LIDAR object detection models, which achieve high performance. To enable detections during LIDAR failure, the network learns to convert the 2D image into the same representation as the LIDAR data: an estimated 3D point cloud. The two point clouds are merged into a common representation, allowing the model to perform detection jointly and thus keep working if either sensor fails. The final contribution is a novel training procedure with simulated sensor failure.
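The thesis itself specifies the exact training procedure; as a rough illustration of the idea, the merging and the simulated failure can be sketched as follows. Assumptions not in the abstract: both modalities are represented as (N, 4) point-cloud arrays, failure is simulated per training sample by emptying one modality's cloud, and the per-sensor failure probability `p_fail` is a hypothetical parameter.

```python
import numpy as np

def merge_with_simulated_failure(lidar_points, pseudo_points, p_fail=0.2, rng=None):
    """Merge LIDAR and camera-estimated point clouds into one common
    representation, optionally simulating a sensor failure (illustrative
    sketch, not the thesis's exact procedure).

    lidar_points:  (N, 4) array, points measured by the LIDAR
    pseudo_points: (M, 4) array, 3D points estimated from the camera image
    p_fail:        probability of simulating failure for each sensor
    """
    rng = rng or np.random.default_rng()
    drop_lidar = rng.random() < p_fail
    drop_camera = rng.random() < p_fail
    # Never drop both modalities: the detector must always see some input.
    if drop_lidar and drop_camera:
        drop_lidar = rng.random() < 0.5
        drop_camera = not drop_lidar
    if drop_lidar:
        lidar_points = np.empty((0, lidar_points.shape[1]))
    if drop_camera:
        pseudo_points = np.empty((0, pseudo_points.shape[1]))
    # Common representation: a single concatenated point cloud, so the same
    # detection head runs regardless of which sensors delivered data.
    return np.concatenate([lidar_points, pseudo_points], axis=0)
```

Training the detector on such randomly degraded inputs is what would let a single model handle camera-only, LIDAR-only, and fused operation.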
The results show that the model is robust to sensor failure, reaching close to state-of-the-art performance for camera, LIDAR, and fusion with a single model. Additionally, on the KITTI dataset, the model outperforms three specialized versions trained on camera, LIDAR, and fusion respectively.
A video of the model’s detections can be viewed at youtu.be/_rKN_USMUoo.
Description
Subject/keywords
Deep Learning, Sensor Fusion, Object Detection, Sensor Failure, KITTI