Question Answering In Conversational Context
Published
Author
Type
Master's thesis
Programme
Model builders
Journal title
ISSN
Volume title
Publisher
Abstract
In this era of digital technology, when people are busy with their daily lives, they look for ways to learn quickly with minimal effort. Today, people increasingly depend on machines to store and retrieve information. Soon, they will seek information conversationally: asking a machine questions and sustaining a dialogue grounded in what the conversation has already established. This thesis studies existing models built for this task on the popular QuAC (Question Answering in Context) dataset [1]. Furthermore, it aims to narrow the gap between the state-of-the-art F1 score of 64.1% (achieved by FlowQA [2] at the start of this thesis) and the human performance of 81.1%. We focused mainly on experimenting with FlowQA by (1) replacing its attention mechanism with multi-head attention and (2) integrating BERT (Bidirectional Encoder Representations from Transformers) [3]. Every experiment yielded a considerable increase in F1 score, with the highest, 66.4%, achieved by a novel combination of FlowQA and BERT in which contextualized word embeddings are obtained from the dialog history using a moving window. Moreover, this thesis also developed a model using BERT alone that delivered an accuracy of 43.4% on the QuAC dataset.
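To illustrate the multi-head attention mechanism mentioned in the abstract (the component substituted into FlowQA), the sketch below shows scaled dot-product attention split across heads in plain NumPy. It is a minimal, hypothetical illustration of the general technique, not the thesis implementation: head splitting is done by slicing rather than the learned linear projections a real model would use, and all dimensions are toy values.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(Q, K, V, num_heads):
    """Scaled dot-product attention computed independently per head.

    Q, K, V: (seq_len, d_model) arrays; d_model must divide evenly by num_heads.
    Returns the concatenated head outputs, shape (seq_len, d_model).
    """
    seq_len, d_model = Q.shape
    d_head = d_model // num_heads
    outputs = []
    for h in range(num_heads):
        # Slice each head's sub-space (a trained model would apply
        # learned projection matrices W_Q, W_K, W_V instead)
        q = Q[:, h * d_head:(h + 1) * d_head]
        k = K[:, h * d_head:(h + 1) * d_head]
        v = V[:, h * d_head:(h + 1) * d_head]
        scores = q @ k.T / np.sqrt(d_head)   # (seq_len, seq_len) similarities
        outputs.append(softmax(scores) @ v)  # weighted sum over the sequence
    return np.concatenate(outputs, axis=-1)

# Toy self-attention check: 4 tokens, model dimension 8, 2 heads
x = np.random.rand(4, 8)
out = multi_head_attention(x, x, x, num_heads=2)
print(out.shape)  # (4, 8)
```

Each head attends over the sequence in its own sub-space, which lets the model capture several relations (e.g. between a question and earlier dialog turns) in parallel before the outputs are concatenated.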
Description
Subject/keywords
Machine Learning, Deep Learning, Neural Networks, Machine Comprehension, QuAC, Question Answering, Transformer, BERT, FlowQA, Recurrent Neural Network (RNN), Natural Language Processing (NLP)