Decoding neural machine translation using gradient descent

dc.contributor.authorSnelleman, Emanuel
dc.contributor.departmentChalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers)sv
dc.contributor.departmentChalmers University of Technology / Department of Computer Science and Engineering (Chalmers)en
dc.description.abstractNeural machine translation is a recent approach to machine translation that uses an artificial neural network to learn and perform translations. Usually an encoder-decoder architecture is used: the encoder encodes the input sentence into a fixed-size vector meant to represent the meaning of the sentence, and the decoder then decodes that vector into a sentence with the same meaning in the target language. The decoder generates the translation sequentially, outputting a probability distribution over the known words at each step. From these probability distributions, words need to be chosen to form a sentence. The word chosen from any given distribution affects how the next distribution is generated, so it is important to choose the words well in order to obtain as good a translation as possible. In this thesis, a way of choosing the words using gradient descent is tested. Decoding using gradient descent is compared to some other decoding methods but gives no clear indication of working better than they do. However, some results indicate that gradient descent decoding might have potential if developed further.
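The abstract describes sequential decoding, where each chosen word is fed back into the decoder and shapes the next probability distribution. The following is a minimal sketch of that feedback loop with a toy decoder and greedy word selection (one of the baselines such a thesis would typically compare against); all weights, sizes, and function names here are hypothetical illustrations, not the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 4                      # toy vocabulary size and decoder state size
W = rng.normal(size=(d, d))      # hypothetical state-transition weights
E = rng.normal(size=(d, V))      # hypothetical input word embeddings
P = rng.normal(size=(V, d))      # hypothetical projection from state to vocab logits

def softmax(z):
    """Numerically stable softmax over vocabulary logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def step(state, word_id):
    """One decoder step: consume the previously chosen word, update the
    hidden state, and emit a probability distribution over the vocabulary."""
    state = np.tanh(W @ state + E[:, word_id])
    return state, softmax(P @ state)

def greedy_decode(max_len=6, bos=0):
    """Greedy decoding: pick the argmax word at each step. Because the
    chosen word is fed back into the next step, each choice changes all
    subsequent distributions -- the issue the abstract highlights."""
    state = np.zeros(d)
    word, out = bos, []
    for _ in range(max_len):
        state, probs = step(state, word)
        word = int(np.argmax(probs))
        out.append(word)
    return out

print(greedy_decode())
```

Gradient-descent decoding, as described in the abstract, would instead treat the word choices as continuous variables and optimize them jointly; the loop above only shows the sequential-choice baseline it is compared against.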
dc.subjectData- och informationsvetenskap
dc.subjectComputer and Information Science
dc.titleDecoding neural machine translation using gradient descent
dc.type.degreeExamensarbete för masterexamensv
dc.type.degreeMaster Thesisen
local.programmeComputer science – algorithms, languages and logic (MPALG), MSc