Self-supervised pre-training for drowsiness prediction
dc.contributor.author | Ekenstam, Felicia | |
dc.contributor.author | Ekener, Felicia | |
dc.contributor.department | Chalmers tekniska högskola / Institutionen för fysik | sv |
dc.contributor.department | Chalmers University of Technology / Department of Physics | en |
dc.contributor.examiner | Midtvedt, Daniel | |
dc.contributor.supervisor | Claesson, Anton | |
dc.date.accessioned | 2023-10-10T05:55:44Z | |
dc.date.available | 2023-10-10T05:55:44Z | |
dc.date.issued | 2023 | |
dc.date.submitted | 2023 | |
dc.description.abstract | Drowsiness experienced while driving poses significant dangers, often leading to severe or even fatal accidents. Consequently, technical solutions that mitigate drowsiness-related road incidents have become essential, and machine learning is an increasingly widespread approach. However, the development of neural networks is frequently hindered by a lack of high-quality data, and this domain is no exception. Over the past decade, pre-training neural networks with self-supervised learning has proven an effective strategy for alleviating this problem in many cases, and such pre-training is the primary focus of this thesis. A BERT-like model was pre-trained, using a method inspired by Masked Language Modeling (MLM), on the task of predicting blink features; vectors of blink features, extracted from driving recordings, were treated as token embeddings. The pre-trained model was then fine-tuned on the task of predicting drowsiness, and its performance was compared to that of a similar model that had not been pre-trained. The evaluation revealed that the pre-trained model outperformed a set of baseline measures in predicting vectors of blink features on the pre-training task. However, when the knowledge gained from pre-training was transferred to drowsiness prediction, the model did not achieve better performance than one without pre-training. One plausible explanation for this observation is a discrepancy between the pre-training task and the drowsiness prediction task, which hinders effective knowledge sharing within the model. Several directions for future work are proposed, including optimization of the MLM method and the incorporation of additional pre-training tasks. | |
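To make the pre-training scheme in the abstract concrete, the following PyTorch sketch illustrates MLM-style masking applied to blink-feature vectors with a small transformer encoder. This is a minimal illustration under stated assumptions, not the thesis code: the class name BlinkMLM is hypothetical, and the feature dimension, masking ratio, and model sizes are assumptions.

import torch
import torch.nn as nn

class BlinkMLM(nn.Module):
    """BERT-style encoder that reconstructs masked blink-feature vectors (illustrative)."""
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        # Each blink-feature vector is projected to a token embedding.
        self.embed = nn.Linear(n_features, d_model)
        # Learned [MASK] embedding substituted at masked positions.
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        # Learned positional embeddings, as in BERT.
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Regression head predicts the original blink-feature vector.
        self.head = nn.Linear(d_model, n_features)

    def forward(self, x, mask):
        # x: (batch, seq_len, n_features) blink features; mask: (batch, seq_len) bool
        h = self.embed(x)
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        h = h + self.pos[:, : h.size(1)]
        return self.head(self.encoder(h))

model = BlinkMLM()
x = torch.randn(4, 20, 8)            # 4 recordings, 20 blinks, 8 features per blink (assumed)
mask = torch.rand(4, 20) < 0.15      # mask ~15% of positions, as in BERT's MLM
pred = model(x, mask)
loss = nn.functional.mse_loss(pred[mask], x[mask])  # regression loss on masked positions only
loss.backward()

For fine-tuning, the pre-trained encoder would be reused with a small classification head mapping the encoded blink sequence to a drowsiness label, while the regression head is discarded; the comparison model described in the abstract would share this architecture but start from random weights.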
dc.identifier.coursecode | TIFX05 | |
dc.identifier.uri | http://hdl.handle.net/20.500.12380/307202 | |
dc.language.iso | eng | |
dc.setspec.uppsok | PhysicsChemistryMaths | |
dc.subject | drowsiness | |
dc.subject | self-supervised learning | |
dc.subject | pre-training | |
dc.subject | fine-tuning | |
dc.subject | BERT | |
dc.title | Self-supervised pre-training for drowsiness prediction | |
dc.type.degree | Examensarbete för masterexamen | sv |
dc.type.degree | Master's Thesis | en |
dc.type.uppsok | H | |
local.programme | Complex adaptive systems (MPCAS), MSc |