Leveraging Large Language Models for advanced analysis of crash narratives in traffic safety research

dc.contributor.author Wang, Xixi
dc.contributor.author Carlino, Mattia
dc.contributor.department Chalmers tekniska högskola // Institutionen för mekanik och maritima vetenskaper (sv)
dc.contributor.department Chalmers University of Technology / Department of Mechanics and Maritime Sciences (en)
dc.contributor.examiner Thomson, Robert
dc.contributor.supervisor Kovaceva, Jordanka
dc.date.accessioned 2025-11-04T14:32:38Z
dc.date.issued 2025
dc.date.submitted
dc.description.abstract Free-text crash narratives recorded in real-world crash databases have been shown to play a significant role in improving traffic safety, but they remain challenging to analyze at scale due to unstructured writing, heterogeneous terminology, and uneven detail. The development of Large Language Models (LLMs) offers a promising way to extract information from narratives automatically through question answering. However, crash narratives remain difficult for LLMs to analyze because the models lack traffic safety domain knowledge. Moreover, relying on closed-source LLMs through external APIs poses privacy risks for crash data and often underperforms for the same reason. Motivated by these concerns, we study whether smaller open-source LLMs can support reasoning-intensive extraction from crash narratives, targeting three challenging objectives: extracting the travel direction of the vehicles involved in the crash, identifying the manner of collision, and classifying the crash type in multi-vehicle scenarios that require accurate per-vehicle prediction. In the first phase of the experiments, we focused on extracting vehicle travel directions by comparing small LLMs with 8 billion parameters (Mistral, DeepSeek, and Qwen) under different prompting strategies against fine-tuned transformers (BERT, RoBERTa, and SciBERT) on a manually labeled subset of the Crash Investigation Sampling System (CISS) dataset. The goal was to assess whether models trained on a generic corpus could approach or surpass the performance of domain-adapted baselines. Results confirmed that the fine-tuned transformers achieved the best accuracy; however, advanced prompting strategies, particularly Chain of Thought, enabled some LLMs to reach about 90% accuracy, showing that they can serve as competitive alternatives. For the second and third tasks, to bridge the domain gap, we applied Low-Rank Adaptation (LoRA) fine-tuning to inject traffic-specific knowledge. Experiments on the CISS dataset show that our fine-tuned 3B models can outperform GPT-4o while requiring minimal training resources. Further analysis of LLM-annotated data shows that LLMs can both compensate for and correct limitations in manual annotations while preserving key distributional characteristics. The results indicate that advanced prompting techniques and fine-tuned open-source models are effective tools for large-scale traffic safety studies.
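
To illustrate the prompting approach summarized in the abstract, the sketch below shows a Chain-of-Thought style query to a small open-source instruct model for extracting per-vehicle travel directions from a crash narrative. This is not the authors' code: the Hugging Face pipeline usage, the Mistral-7B-Instruct checkpoint, the example narrative, and the prompt wording are all illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): Chain-of-Thought prompting of a small
# open-source instruct model to extract per-vehicle travel directions.
# Requires the transformers and accelerate packages and access to the checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed checkpoint; any ~8B instruct model works
    device_map="auto",
)

# Placeholder narrative, not taken from the CISS data.
narrative = (
    "V1 was traveling northbound in the left lane when V2, traveling westbound, "
    "entered the intersection and struck V1 in the right side."
)

# Chain-of-Thought style prompt: ask the model to reason step by step before
# committing to a structured, easily parsed answer line.
prompt = (
    "You are a traffic safety analyst. Read the crash narrative and determine the "
    "travel direction of each vehicle.\n"
    f"Narrative: {narrative}\n"
    "Think step by step: identify each vehicle, find the phrase describing its "
    "movement, then map it to one of {north, south, east, west, unknown}.\n"
    "Finish with a line formatted exactly as: ANSWER: V1=<direction>, V2=<direction>"
)

output = generator(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"]
print(output)
```

In practice the final ANSWER line would be parsed with a simple string match so that predictions can be scored against the manually labeled subset.

For the fine-tuning phase, the next sketch shows how Low-Rank Adaptation can be attached to a small causal language model and trained on narrative-to-label examples. It assumes the Hugging Face peft, transformers, and datasets libraries, a placeholder ~3B checkpoint, and toy in-memory training examples; the thesis' actual data, labels, and hyperparameters are not reproduced here.

```python
# Minimal LoRA fine-tuning sketch (assumptions throughout): classify the manner of
# collision from a crash narrative with a ~3B open model. Checkpoint, label set,
# and the tiny in-memory dataset are placeholders, not the thesis data.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "Qwen/Qwen2.5-3B-Instruct"  # assumed ~3B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Low-Rank Adaptation: train small rank-decomposition matrices on the attention
# projections instead of updating all base-model weights.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base parameters

# Toy supervised examples: narrative followed by the target label text.
examples = [
    {"text": "Narrative: V1 rear-ended V2, which was stopped at a red light.\n"
             "Manner of collision: rear-end"},
    {"text": "Narrative: V1 crossed the centerline and struck V2 head-on.\n"
             "Manner of collision: head-on"},
]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

train_ds = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-sketch", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=1),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-sketch/adapter")  # saves only the LoRA adapter weights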
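```

Because only the adapter matrices are updated, this kind of setup needs far less memory and compute than full fine-tuning, which is consistent with the abstract's claim that the 3B models were adapted with minimal training resources.
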
dc.identifier.coursecode LAD015
dc.identifier.uri http://hdl.handle.net/20.500.12380/310706
dc.language.iso eng
dc.subject crash narratives
dc.subject traffic safety
dc.subject large language models (LLM)
dc.subject open-source models
dc.subject fine-tuning
dc.subject low-rank adaptation (LoRA)
dc.subject prompt engineering
dc.subject BERT family (BERT, RoBERTa, SciBERT)
dc.subject information extraction
dc.subject CISS dataset
dc.title Leveraging Large Language Models for advanced analysis of crash narratives in traffic safety research
dc.type.degree Projektarbete, avancerad nivå (sv)
dc.type.degree Project Report, advanced level (en)

Download

Original bundle (showing 1 - 1 of 1):
548572_Fulltext.pdf (9.09 MB, Adobe Portable Document Format)
