Decoding neural machine translation using gradient descent

Type

Master's thesis (Examensarbete för masterexamen)

Abstract

Neural machine translation is a recent approach to machine translation in which an artificial neural network learns to perform the translation, usually with an encoder-decoder architecture. The encoder encodes the input sentence into a fixed-size vector meant to represent the meaning of the sentence, and the decoder then decodes that vector into a sentence with the same meaning in the target language. The decoder generates the translation sequentially, outputting a probability distribution over the known words at each step of the sequence. From these probability distributions, words must be chosen to form a sentence, and the word chosen from any given distribution affects how the next distribution is generated. It is therefore important to choose the words well in order to obtain as good a translation as possible. In this thesis, a method of choosing the words using gradient descent is tested. Decoding using gradient descent is compared with some other decoding methods, but the results give no clear indication that it works better than they do. However, some results indicate that gradient descent decoding might have potential if developed further.
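
The thesis text is not reproduced here, so the following is only a minimal sketch of the general idea of decoding by gradient descent, not the thesis implementation: the discrete word choices are relaxed to continuous distributions over the vocabulary, those distributions are optimised by gradient descent to increase the score the decoder assigns to the resulting sentence, and the result is finally discretised by taking the most probable word at each position. The ToyDecoder, the sizes, and the scoring rule below are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, LEN = 50, 32, 64, 6   # toy sizes, purely illustrative

class ToyDecoder(nn.Module):
    """Stand-in for the decoder of an encoder-decoder NMT model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward_soft(self, word_probs, h0):
        # word_probs: (1, LEN, VOCAB) relaxed "soft" words.
        # A soft word is the expected embedding under its distribution.
        emb = word_probs @ self.embed.weight            # (1, LEN, EMB)
        hidden, _ = self.rnn(emb, h0)                   # (1, LEN, HID)
        return F.log_softmax(self.out(hidden), dim=-1)  # next-word log-probs

decoder = ToyDecoder()
for p in decoder.parameters():         # the model is fixed; only the words move
    p.requires_grad_(False)
h0 = torch.randn(1, 1, HID)            # stands in for the encoder's sentence vector

# Free variables: unnormalised scores (logits) for every position and word.
logits = torch.zeros(1, LEN, VOCAB, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    opt.zero_grad()
    probs = F.softmax(logits, dim=-1)                   # relaxed word choices
    log_p_next = decoder.forward_soft(probs, h0)
    # Expected log-probability the decoder assigns to the word at position
    # t+1 given the (soft) words at positions <= t: the sentence score.
    score = (probs[:, 1:, :] * log_p_next[:, :-1, :]).sum()
    (-score).backward()                                 # gradient ascent on the score
    opt.step()

# Discretise: keep the most probable word at every position.
translation = logits.argmax(dim=-1).squeeze(0).tolist()
print(translation)

In a real setting, ToyDecoder and h0 would be replaced by a trained NMT decoder and the encoder's sentence vector, and the optimised sentence would be compared against greedy or beam-search decoding, the standard ways of picking words step by step from the decoder's distributions.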

Subject / keywords

Computer and Information Science
