Leveraging Large Language Models for advanced analysis of crash narratives in traffic safety research
| dc.contributor.author | Wang, Xixi | |
| dc.contributor.author | Carlino, Mattia | |
| dc.contributor.department | Chalmers tekniska högskola // Institutionen för mekanik och maritima vetenskaper | sv |
| dc.contributor.department | Chalmers University of Technology / Department of Mechanics and Maritime Sciences | en |
| dc.contributor.examiner | Thomson, Robert | |
| dc.contributor.supervisor | Kovaceva, Jordanka | |
| dc.date.accessioned | 2025-11-04T14:32:38Z | |
| dc.date.issued | 2025 | |
| dc.date.submitted | | |
| dc.description.abstract | Free-text crash narratives recorded in real-world crash databases have been shown to play a significant role in improving traffic safety. However, they remain challenging to analyze at scale due to unstructured writing, heterogeneous terminology, and uneven detail. The development of Large Language Models (LLMs) offers a promising way to extract information from narratives automatically by posing questions. Crash narratives nevertheless remain difficult for LLMs to analyze because the models lack traffic-safety domain knowledge. Moreover, relying on closed-source LLMs through external APIs poses privacy risks for crash data and often underperforms due to limited traffic knowledge. Motivated by these concerns, we studied whether smaller open-source LLMs can support reasoning-intensive extraction from crash narratives, targeting three challenging objectives: identifying the travel direction of the vehicles involved in the crash, identifying the manner of collision, and classifying the crash type in multi-vehicle scenarios that require accurate per-vehicle prediction. In the first phase of the experiments, we focused on extracting vehicle travel directions by comparing small LLMs with 8 billion parameters (Mistral, DeepSeek, and Qwen) under different prompting strategies against fine-tuned transformers (BERT, RoBERTa, and SciBERT) on a manually labeled subset of the Crash Investigation Sampling System (CISS) dataset. The goal was to assess whether models trained on a generic corpus could approach or surpass the performance of domain-adapted baselines. Results confirmed that fine-tuned transformers achieved the best accuracy; however, advanced prompting strategies, particularly Chain-of-Thought, enabled some LLMs to reach about 90% accuracy, showing that they can serve as competitive alternatives. For the second and third tasks, to bridge domain gaps, we applied Low-Rank Adaptation (LoRA) fine-tuning to inject traffic-specific knowledge. Experiments on the CISS dataset showed that our fine-tuned 3B models can outperform GPT-4o while requiring minimal training resources. Further analysis of LLM-annotated data showed that LLMs can both compensate for and correct limitations in manual annotations while preserving key distributional characteristics. The results indicate that advanced prompting techniques and fine-tuned open-source models are effective tools for large-scale traffic safety studies. | |
| dc.identifier.coursecode | LAD015 | |
| dc.identifier.uri | http://hdl.handle.net/20.500.12380/310706 | |
| dc.language.iso | eng | |
| dc.subject | crash narratives | |
| dc.subject | traffic safety | |
| dc.subject | large language models (LLM) | |
| dc.subject | open-source models | |
| dc.subject | fine-tuning | |
| dc.subject | low-rank adaptation (LoRA) | |
| dc.subject | prompt engineering | |
| dc.subject | BERT family (BERT, RoBERTa, SciBERT) | |
| dc.subject | information extraction | |
| dc.subject | CISS dataset | |
| dc.title | Leveraging Large Language Models for advanced analysis of crash narratives in traffic safety research | |
| dc.type.degree | Projektarbete, avancerad nivå | sv |
| dc.type.degree | Project Report, advanced level | en |
