Self-supervised pre-training for drowsiness prediction
Type
Master's Thesis
Abstract
Drowsiness experienced while driving poses significant dangers, often leading to
severe or even fatal accidents. Consequently, technical solutions that mitigate
drowsiness-related road incidents have become essential, with machine learning
emerging as an increasingly widespread approach. However, the development of
neural networks is frequently hindered by a lack of high-quality data, and this
domain is no exception. Over the past decade, pre-training neural networks with
self-supervised learning techniques has in many cases proven to be an effective
strategy for alleviating this problem, and this strategy is the primary focus of
this thesis.
A BERT-like model was pre-trained on the task of predicting blink features, using
a method inspired by Masked Language Modeling (MLM). This was done by treating
vectors of blink features, extracted from driving recordings, as token embeddings.
The pre-trained model was then fine-tuned on the task of predicting drowsiness,
and its performance was compared to that of a similar model that had not been
pre-trained.
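To make the approach concrete, below is a minimal sketch of MLM-style pre-training
in which blink-feature vectors are treated as token embeddings. This is not the
implementation described in the thesis: the layer sizes, masking ratio, framework
choice (PyTorch), and all names are illustrative assumptions. In the fine-tuning
stage, the pre-trained encoder would be reused with a classification head for
drowsiness prediction.

# Illustrative sketch (not the thesis implementation): MLM-style pre-training
# where blink-feature vectors are treated as token embeddings. Dimensions,
# the masking ratio, and all names are assumptions.
import torch
import torch.nn as nn

class BlinkMLM(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)            # blink vector -> "token embedding"
        self.mask_token = nn.Parameter(torch.zeros(d_model))   # learned [MASK] embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # BERT-like encoder stack
        self.head = nn.Linear(d_model, n_features)             # reconstructs masked blink vectors

    def forward(self, blinks, mask_ratio=0.15):
        # blinks: (batch, seq_len, n_features), a sequence of blink-feature vectors
        x = self.embed(blinks)
        masked = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        x = torch.where(masked.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.head(self.encoder(x)), masked

model = BlinkMLM()
blinks = torch.randn(4, 32, 8)                                  # dummy batch of blink sequences
pred, masked = model(blinks)
loss = nn.functional.mse_loss(pred[masked], blinks[masked])     # loss only on masked positions
loss.backward()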
The evaluation revealed that the pre-trained model outperformed a set of baseline
measures in predicting vectors of blink features on the pre-training task. However,
when the knowledge gained from pre-training was transferred to the drowsiness
prediction task, the model did not perform better than a model without pre-training.
One plausible explanation for this observation is the discrepancy between the
pre-training task and the drowsiness prediction task, which may hinder effective
knowledge sharing within the model. Several suggestions for future work are
proposed, including optimization of the MLM method and the incorporation of
additional pre-training tasks.
Subject/keywords
drowsiness, self-supervised learning, pre-training, fine-tuning, BERT