Self-supervised pre-training for drowsiness prediction

dc.contributor.author: Ekenstam, Felicia
dc.contributor.author: Ekener, Felicia
dc.contributor.department: Chalmers tekniska högskola / Institutionen för fysik (sv)
dc.contributor.department: Chalmers University of Technology / Department of Physics (en)
dc.contributor.examiner: Midtvedt, Daniel
dc.contributor.supervisor: Claesson, Anton
dc.date.accessioned: 2023-10-10T05:55:44Z
dc.date.available: 2023-10-10T05:55:44Z
dc.date.issued: 2023
dc.date.submitted: 2023
dc.description.abstract: Drowsiness experienced while driving poses significant dangers, often leading to severe or even fatal accidents. Consequently, the implementation of technical solutions to mitigate road incidents caused by drowsiness has become essential, with machine learning emerging as an increasingly widespread approach. However, a lack of high-quality data frequently hinders the development of neural networks, and this domain is no exception. Over the past decade, pre-training of neural networks using self-supervised learning techniques has proven an effective strategy for alleviating this problem in many cases, and such pre-training is the primary focus of this thesis. A BERT-like model was pre-trained, using a method inspired by Masked Language Modeling (MLM), on the task of predicting blink features. This was done by treating vectors of blink features, extracted from driving recordings, as token embeddings. The pre-trained model was then fine-tuned on the task of predicting drowsiness, and its performance was compared to that of a similar model that had not been pre-trained. The evaluation revealed that the pre-trained model outperformed a set of baseline measures in predicting vectors of blink features on the pre-training task. However, when transferring the knowledge gained from pre-training to drowsiness prediction, it did not achieve better performance than a model without pre-training. One plausible explanation for this observation is the discrepancy between the pre-training task and the drowsiness prediction task, which hinders effective knowledge sharing within the model. Several suggestions for future work are proposed, including optimization of the MLM method and the incorporation of additional pre-training tasks.
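The masked-prediction scheme the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: the class name `BlinkMLM`, the model sizes, the feature dimension, the masking ratio, and the random stand-in data are all assumptions. Sequences of blink-feature vectors are projected to token embeddings, a random subset of positions is replaced by a learned mask embedding, and a transformer encoder is trained to reconstruct the masked vectors.

```python
# Hypothetical sketch of MLM-style pre-training on blink-feature vectors.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class BlinkMLM(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)           # blink vector -> token embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned [MASK] embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_features)            # reconstruct the blink vector

    def forward(self, x, mask):
        # x: (batch, seq_len, n_features); mask: (batch, seq_len) boolean
        h = self.embed(x)
        h[mask] = self.mask_token                             # overwrite masked positions
        return self.head(self.encoder(h))

# One pre-training step on random stand-in data.
torch.manual_seed(0)
model = BlinkMLM()
x = torch.randn(4, 16, 8)                                     # 4 drives, 16 blinks, 8 features each
mask = torch.zeros(4, 16, dtype=torch.bool)
mask[:, ::4] = True                                           # mask every 4th blink (~25%)
pred = model(x, mask)
loss = nn.functional.mse_loss(pred[mask], x[mask])            # loss on masked positions only
loss.backward()
```

Fine-tuning would then reuse `embed` and `encoder` with a new head that maps the encoded sequence to a drowsiness label, mirroring the BERT pre-train/fine-tune recipe.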
dc.identifier.coursecode: TIFX05
dc.identifier.uri: http://hdl.handle.net/20.500.12380/307202
dc.language.iso: eng
dc.setspec.uppsok: PhysicsChemistryMaths
dc.subject: drowsiness
dc.subject: self-supervised learning
dc.subject: pre-training
dc.subject: fine-tuning
dc.subject: BERT
dc.title: Self-supervised pre-training for drowsiness prediction
dc.type.degree: Examensarbete för masterexamen (sv)
dc.type.degree: Master's Thesis (en)
dc.type.uppsok: H
local.programme: Complex adaptive systems (MPCAS), MSc

Download

Original bundle
Name: Master_Thesis_Ekener_Ekenstam.pdf
Size: 2.49 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.35 KB
Format: Item-specific license agreed to upon submission