Enhancing Emergency Operations with Sensor Fusion and AI

dc.contributor.author: Carlsson, Joel
dc.contributor.author: Grahovic, Denis
dc.contributor.department: Chalmers University of Technology / Department of Physics
dc.contributor.examiner: Karlsteen, Magnus
dc.contributor.supervisor: Karlsteen, Magnus
dc.date.accessioned: 2025-06-16T13:53:21Z
dc.date.issued:
dc.date.submitted:
dc.description.abstract: In recent years, society has experienced rapid technological advancements across all fields, pushing the boundaries of what was once thought possible. Emergency services, such as fire departments, have largely been left behind. Operating in hazardous, smoke-filled environments places firefighters at extreme risk, where no compromises can be made on the tools and technologies that support them. This project addresses that gap by exploring the development of a Multi-Environment Camera (MEV-Cam) system designed to aid firefighters during rescue operations in visually degraded environments. The study focuses on the fusion of LiDAR and thermal imaging, investigating how the sensors can leverage their strengths and complement each other's weaknesses to provide more reliable scene understanding. To address these challenges, four tailored datasets were created to simulate the intended application scenario, each containing synchronized data from the sensors. Fusion-based models were developed for enhanced 3D visualization. For scene understanding, deep learning models were applied to both LiDAR and thermal data, using architectures based on YOLO and PointNet. Additionally, pose estimation was explored using monocular visual odometry. A major focus of the project was on enhancing the raw collected data to ensure the sensors could be effectively used in this system. In parallel, a hardware prototype was developed to enable efficient data collection in real-world environments. Results show that airborne particles significantly degrade LiDAR performance, while thermal sensors remain relatively unaffected. However, sensor fusion can compensate for these limitations. Deep learning models demonstrated the ability to accurately interpret scene structure under degraded conditions. After effective data enhancement, the MEV-Cam system showed improved performance across its various modules.
While these results highlight the promise of the technology, they also reveal current limitations and suggest several innovative directions for future work. This project marks the beginning of a new effort to support life-saving operations by utilizing the latest sensor technology and AI.
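The LiDAR–thermal fusion described in the abstract relies on relating the two sensors' coordinate frames. A common approach, sketched below under assumed (not thesis-provided) calibration values, is to project LiDAR points into the thermal image with a pinhole camera model; the function name, intrinsics, and extrinsics here are illustrative placeholders.

```python
import numpy as np

def project_lidar_to_thermal(points_xyz, K, T):
    """Map Nx3 LiDAR points to thermal-image pixel coordinates.

    points_xyz: (N, 3) points in the LiDAR frame.
    K: (3, 3) thermal camera intrinsic matrix (assumed values).
    T: (4, 4) LiDAR-to-camera extrinsic transform (assumed values).
    Returns (M, 2) pixel coordinates and the mask of points in front
    of the camera.
    """
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])  # homogeneous coords
    cam = (T @ homog.T).T[:, :3]                      # points in camera frame
    in_front = cam[:, 2] > 0                          # keep positive-depth points
    uvw = (K @ cam[in_front].T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]                 # perspective divide
    return pixels, in_front

# Illustrative example: identity extrinsics and a guessed 640x480 intrinsic.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)
pts = np.array([[0.0, 0.0, 2.0],   # straight ahead -> image center
                [1.0, 0.0, 2.0],   # offset right
                [0.0, 0.0, -1.0]]) # behind the camera -> discarded
pix, mask = project_lidar_to_thermal(pts, K, T)
```

Once each LiDAR point has a pixel coordinate, the corresponding thermal intensity can be sampled and attached to the point cloud, which is one way the complementary strengths of the two sensors can be combined.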
dc.identifier.coursecode: TIFX05
dc.identifier.uri: http://hdl.handle.net/20.500.12380/309469
dc.language.iso: eng
dc.setspec.uppsok: PhysicsChemistryMaths
dc.subject: Sensor fusion, Emergency operation, Object detection, LiDAR sensor, Thermal sensor, Machine learning, Visual odometry, 3D segmentation
dc.title: Enhancing Emergency Operations with Sensor Fusion and AI
dc.type.degree: Master's Thesis
dc.type.uppsok: H
local.programme: Complex adaptive systems (MPCAS), MSc

Download

Original bundle
Name: Enhancing Emergency Operations with Sensor Fusion and AI.pdf
Size: 3.18 MB
Format: Adobe Portable Document Format
