Estimating road-user position from a camera: a machine learning approach to enable safety applications
dc.contributor.author | Bharti, Karan | |
dc.contributor.department | Chalmers tekniska högskola / Institutionen för mekanik och maritima vetenskaper | sv |
dc.contributor.department | Chalmers University of Technology / Department of Mechanics and Maritime Sciences | en |
dc.contributor.examiner | Dozza, Marco | |
dc.contributor.supervisor | Rasch, Alexander | |
dc.contributor.supervisor | Pai, Rahul Rajendra | |
dc.date.accessioned | 2023-09-28T13:08:02Z | |
dc.date.available | 2023-09-28T13:08:02Z | |
dc.date.issued | 2023 | |
dc.date.submitted | 2023 | |
dc.description.abstract | Road-user interactions are a crucial aspect of transportation safety, especially for vulnerable road users (VRUs) such as pedestrians, cyclists, and motorcyclists, who are more susceptible to accidents and their associated consequences. Analyzing road-traffic interactions with a special focus on VRUs is therefore essential for developing active-safety algorithms, effective transportation policies, and safety measures. To this end, researchers have turned to naturalistic video data as a source of information for analyzing road-user interactions. Given the volume of such data, manual data reduction does not scale, which underscores the need for robust and efficient pipelines that use computer-vision algorithms to analyze large amounts of naturalistic video and support the understanding of traffic interactions. In response, this thesis applies computer vision and machine learning to automate video data reduction and improve the analysis of road-user interactions. Leveraging the accurate 3D spatial information from lidar and the detailed visual information from cameras, the thesis aims to develop a machine learning model that extracts kinematics, such as the distance and angle of detected VRUs (with a focus on pedestrians), from video files. The model was trained using lidar output as ground truth for distance and angle estimation. By developing algorithms capable of extracting position from video data, this thesis aims to streamline the analysis process, reducing manual effort and error-prone subjectivity. This work can support active-safety research in understanding road-user interactions and improving traffic safety. | |
dc.identifier.coursecode | MMSX30 | |
dc.identifier.uri | http://hdl.handle.net/20.500.12380/307121 | |
dc.language.iso | eng | |
dc.setspec.uppsok | Technology | |
dc.subject | Active Safety | |
dc.subject | Video Data Reduction | |
dc.subject | Machine Learning | |
dc.subject | Kinematics Extraction | |
dc.subject | Distance Estimation | |
dc.subject | Vulnerable Road Users | |
dc.subject | Camera Calibration | |
dc.title | Estimating road-user position from a camera: a machine learning approach to enable safety applications | |
dc.type.degree | Examensarbete för masterexamen | sv |
dc.type.degree | Master's Thesis | en |
dc.type.uppsok | H | |
local.programme | Mobility engineering (MPMOB), MSc |