Estimation of Lidar Point Clouds Based on Ultrasonic Sensors Using Deep Learning

dc.contributor.author: Hjalmarsson, Carl
dc.contributor.author: Jäghagen, Jesper
dc.contributor.department: Chalmers University of Technology / Department of Mathematical Sciences
dc.contributor.examiner: Andersson, Adam
dc.contributor.supervisor: Abramsson, Andreas
dc.date.accessioned: 2023-07-03T09:35:39Z
dc.date.available: 2023-07-03T09:35:39Z
dc.date.issued: 2023
dc.date.submitted: 2023
dc.description.abstract: Safety is a key area of research in the automotive industry. Car manufacturers dedicate substantial time and resources to making their cars safer by developing sensory systems such as Low Speed Perception (LSP). In this thesis, we explore the possibility of enhancing the contribution of ultrasonic sensors to LSP by leveraging deep learning to mimic data from the more expensive Lidar sensors. To do this, we draw inspiration from three deep learning approaches: image denoising, image segmentation, and image-to-image translation, resulting in five models: DAE(bin), DAE(mse), UNET(bin), UNET(ce), and an I2I cGAN. We train these models on three datasets, each containing paired ultrasonic and Lidar representations of a two-dimensional environment around the car. To measure the performance of these models, we develop an evaluation framework that assesses their ability to map ultrasonic object detections to corresponding Lidar detections. We find that the performance of all networks is highly dependent on the data representation. With a basic representation, consisting only of point detections and free space, all models fail to improve upon the baseline ultrasonic sensor score. With a sparse representation consisting of detection arcs, however, UNET(bin) succeeds in outperforming the baseline and mimicking the more accurate Lidar representation (see cover). Finally, when unknown areas of the environment are added to the sparse representation, both UNET(ce) and the cGAN outperform the baseline in most respects, and we see convergence towards the more realistic Lidar representation. The results show that ultrasonic sensor perception can indeed be enhanced using deep learning and Lidar reference data, and while there is still much room for improvement, we have shown that further research on this task holds potential.
dc.identifier.coursecode: MVEX03
dc.identifier.uri: http://hdl.handle.net/20.500.12380/306532
dc.language.iso: eng
dc.setspec.uppsok: PhysicsChemistryMaths
dc.subject: Low Speed Perception, Ultrasonic Sensor Perception, Deep Learning, Autoencoders, U-Net, Conditional Generative Adversarial Networks
dc.title: Estimation of Lidar Point Clouds Based on Ultrasonic Sensors Using Deep Learning
dc.type.degree: Master's Thesis
dc.type.uppsok: H
local.programme: Complex adaptive systems (MPCAS), MSc
local.programme: Engineering mathematics and computational science (MPENM), MSc
Original bundle

Name: Master_Thesis_Carl_Hjalmarsson_Jesper_Jäghagen_2023.pdf
Size: 3.78 MB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 2.35 KB
Format: Item-specific license agreed upon at submission