Sensor Fusion with Failure Robustness: A Deep Learning 3D Object Detection Architecture
dc.contributor.author | Berntsson, Joakim | |
dc.contributor.author | Tonderski, Adam | |
dc.contributor.department | Chalmers tekniska högskola / Institutionen för fysik | sv |
dc.contributor.examiner | Mehlig, Bernhard | |
dc.contributor.supervisor | Mehlig, Bernhard | |
dc.date.accessioned | 2020-05-12T07:07:31Z | |
dc.date.available | 2020-05-12T07:07:31Z | |
dc.date.issued | 2019 | sv |
dc.date.submitted | 2019 | |
dc.description.abstract | Autonomous driving is the task of navigating a vehicle without driver interaction. This requires accurate perception of the surroundings, which includes three-dimensional object detection. In this area, deep learning methods show great results, and they usually use either images, point clouds, or a fusion of both. These methods are often evaluated with full sensor availability. However, in a safety-critical system, unexpected scenarios such as sensor failure must be accounted for. The objective of this thesis is to develop a deep learning architecture that is robust against the case where some sensors stop sending data. The proposed architecture is inspired by leading LIDAR object detection models, which achieve high performance. To be able to make detections during LIDAR failure, the network learns to convert the 2D image to the same representation as the LIDAR, in the form of an estimated 3D point cloud. The two point clouds are merged into a common representation, which allows the model to perform the detections jointly and thus work if either sensor fails. The final contribution is a novel training procedure with simulated sensor failure. The results show that the model is robust against sensor failure, reaching close to state-of-the-art performance for camera, LIDAR, and fusion with a single model. Additionally, on the KITTI dataset, the model outperforms three specialized versions trained on camera, LIDAR, and fusion respectively. A video of the model’s detections can be viewed at youtu.be/_rKN_USMUoo. | sv |
dc.identifier.coursecode | TIFX05 | sv |
dc.identifier.uri | https://hdl.handle.net/20.500.12380/300780 | |
dc.language.iso | eng | sv |
dc.setspec.uppsok | PhysicsChemistryMaths | |
dc.subject | Deep Learning | sv |
dc.subject | Sensor Fusion | sv |
dc.subject | Object Detection | sv |
dc.subject | Sensor Failure | sv |
dc.subject | KITTI | sv |
dc.title | Sensor Fusion with Failure Robustness: A Deep Learning 3D Object Detection Architecture | sv |
dc.type.degree | Examensarbete för masterexamen | sv |
dc.type.uppsok | H | |
local.programme | Complex adaptive systems (MPCAS), MSc |