Enhancing Emergency Operations with Sensor Fusion and AI
dc.contributor.author | Carlsson, Joel | |
dc.contributor.author | Grahovic, Denis | |
dc.contributor.department | Chalmers tekniska högskola / Institutionen för fysik | sv |
dc.contributor.department | Chalmers University of Technology / Department of Physics | en |
dc.contributor.examiner | Karlsteen, Magnus | |
dc.contributor.supervisor | Karlsteen, Magnus | |
dc.date.accessioned | 2025-06-16T13:53:21Z | |
dc.date.issued | ||
dc.date.submitted | ||
dc.description.abstract | In recent years, society has experienced rapid technological advancements across all fields, pushing the boundaries of what was once thought possible. Emergency services, such as fire departments, have largely been left behind. Operating in hazardous, smoke-filled environments places firefighters at extreme risk, where no compromises can be made on the tools and technologies that support them. This project addresses that gap by exploring the development of a Multi-Environment Camera (MEV-Cam) system designed to aid firefighters during rescue operations in visually degraded environments. The study focuses on the fusion of LiDAR and thermal imaging, investigating how the sensors can leverage their strengths and complement each other's weaknesses to provide more reliable scene understanding. To address these challenges, four tailored datasets were created to simulate the intended application scenario, each containing synchronized data from both sensors. Fusion-based models were developed for enhanced 3D visualization. For scene understanding, deep learning models were applied to both LiDAR and thermal data, using architectures based on YOLO and PointNet. Additionally, pose estimation was explored using monocular visual odometry. A major focus of the project was enhancing the raw collected data to ensure the sensors could be used effectively in this system. In parallel, a hardware prototype was developed to enable efficient data collection in real-world environments. Results show that airborne particles significantly degrade LiDAR performance, while thermal sensors remain relatively unaffected; sensor fusion, however, can compensate for these limitations. Deep learning models demonstrated the ability to accurately interpret scene structure under degraded conditions. After effective data enhancement, the MEV-Cam system showed improved performance across its various modules. While these results highlight the promise of the technology, they also reveal current limitations and suggest several innovative directions for future work. This project marks the beginning of a new effort to support life-saving operations by utilizing the latest sensor technology and AI. | |
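The core fusion idea the abstract describes, projecting LiDAR geometry into the thermal image so each 3D point can carry a temperature reading, can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual implementation: it assumes a pinhole thermal camera with a 3x3 intrinsic matrix K and a 4x4 LiDAR-to-camera extrinsic transform T obtained from calibration, and the function name colorize_points and all parameters are hypothetical.

```python
# Minimal sketch of LiDAR-thermal fusion via projection (illustrative;
# K, T, and colorize_points are assumed names, not from the thesis).
import numpy as np

def colorize_points(points_xyz, thermal_img, K, T):
    """Attach a thermal reading to each LiDAR point that projects
    inside the thermal image.

    points_xyz  : (N, 3) LiDAR points in the LiDAR frame
    thermal_img : (H, W) thermal image
    K           : (3, 3) thermal camera intrinsics
    T           : (4, 4) LiDAR-to-camera extrinsic transform
    """
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])  # (N, 4) homogeneous
    cam = (T @ homog.T).T[:, :3]                      # LiDAR -> camera frame

    in_front = cam[:, 2] > 0                          # keep points ahead of camera
    cam = cam[in_front]

    uv = (K @ cam.T).T                                # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]                       # normalize by depth

    h, w = thermal_img.shape[:2]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # inside image bounds

    temps = thermal_img[v[valid], u[valid]]           # per-point thermal values
    return points_xyz[in_front][valid], temps
```

One consequence of this design, consistent with the reported results: points lost to airborne-particle scattering simply drop out of the LiDAR cloud, while the thermal lookup still supplies intensity for whatever geometry survives, which is what lets the fused view degrade more gracefully than LiDAR alone.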
dc.identifier.coursecode | TIFX05 | |
dc.identifier.uri | http://hdl.handle.net/20.500.12380/309469 | |
dc.language.iso | eng | |
dc.setspec.uppsok | PhysicsChemistryMaths | |
dc.subject | Sensor fusion, Emergency operation, Object detection, LiDAR sensor, Thermal sensor, Machine learning, Visual odometry, 3D segmentation | |
dc.title | Enhancing Emergency Operations with Sensor Fusion and AI | |
dc.type.degree | Examensarbete för masterexamen | sv |
dc.type.degree | Master's Thesis | en |
dc.type.uppsok | H | |
local.programme | Complex adaptive systems (MPCAS), MSc |
Download
Original bundle
- Name: Enhancing Emergency Operations with Sensor Fusion and AI.pdf
- Size: 3.18 MB
- Format: Adobe Portable Document Format