Evaluating word-vector generation techniques for use in semantic role labeling with recurrent LSTM neural networks

Type
Master's thesis
Programme
Computer science – algorithms, languages and logic (MPALG), MSc
Published
2017
Author
Toom, Daniel
Abstract
This thesis investigates how performance on a natural language processing task, semantic role labeling (SRL), is affected by how the input text is encoded. Four methods for encoding words as dense vectors of floating-point numbers are evaluated: noise-contrastive estimation (NCE), continuous bag-of-words (CBOW), skip-gram, and global vectors for word representation (GloVe). Vectors are generated from a corpus of Wikipedia articles under different settings of parameters such as vector length, context-window size, and training method. To evaluate the generated vectors, they are used as input to a bi-directional recurrent neural network with LSTM units. This network is trained on the standard CoNLL-2005 shared-task data set and evaluated on the associated test sets. The results show no large or consistent advantage to word encodings from any of the tried methods or parameter settings over the others; the SRL system used thus appears fairly robust with respect to the choice of input vectors. All methods do, however, generate vectors that outperform random vectors, indicating that pretraining word vectors has a positive effect. The labeling accuracy is close to, but slightly below, previously published state-of-the-art results obtained with a similar network on the SRL task.
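To make the word-vector methods named in the abstract concrete, the following is a minimal, illustrative sketch of skip-gram training with negative sampling (a simplification of the NCE objective). It is not the thesis's implementation; all function names, parameter values, and the toy corpus are assumptions chosen for brevity.

```python
import numpy as np

def train_skipgram(tokens, dim=16, window=2, neg=5, epochs=3, lr=0.05, seed=0):
    """Toy skip-gram with negative sampling; illustrative only, not the thesis code."""
    rng = np.random.default_rng(seed)
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    W_in = (rng.random((V, dim)) - 0.5) / dim   # target-word ("input") vectors
    W_out = np.zeros((V, dim))                  # context-word ("output") vectors
    ids = [idx[w] for w in tokens]
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for pos, centre in enumerate(ids):
            lo, hi = max(0, pos - window), min(len(ids), pos + window + 1)
            for ctx_pos in range(lo, hi):
                if ctx_pos == pos:
                    continue
                # one observed context word (label 1) plus `neg` random
                # negative samples (label 0), drawn uniformly for simplicity
                samples = [(ids[ctx_pos], 1.0)]
                samples += [(int(rng.integers(V)), 0.0) for _ in range(neg)]
                grad_in = np.zeros(dim)
                for out, label in samples:
                    score = sigmoid(W_in[centre] @ W_out[out])
                    g = lr * (label - score)        # gradient of log-loss
                    grad_in += g * W_out[out]
                    W_out[out] += g * W_in[centre]
                W_in[centre] += grad_in
    return {w: W_in[i] for w, i in idx.items()}

# Toy usage: each word maps to a dense vector of the configured length.
vectors = train_skipgram("the cat sat on the mat".split(), dim=8)
```

CBOW differs only in that the context vectors are averaged to predict the centre word, and GloVe instead fits vectors to global co-occurrence counts; in practice, vectors of this kind would be trained on a large corpus (here, Wikipedia) and then fed to the SRL network.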
Subject/keywords
Computer and Information Science