Exploring the Evolution of Multi-Agent Emergent Languages through Neural Iterated Learning
Type
Master's Thesis
Abstract
This study investigates the use of Neural Iterated Learning (NIL) for generating compositional languages, both in symbolic data settings and on more complex image datasets such as MNIST. Through experiments varying vocabulary size, message length, and penalization, we demonstrate that NIL can effectively produce structured communication systems, achieving high topological similarity and accuracy, particularly when using softmax for decision-making, REINFORCE as the optimization method, and an LSTM as the underlying network architecture. When a penalty on message length is introduced, increasing the length cost leads to more efficient agent communication, with shorter and more structured messages, supporting the Brevity Law. While NIL performs robustly in simpler settings, its ability to maintain compositionality declines as data complexity increases, as observed in the Colored MNIST experiments. The results indicate that compressible representations improve generalization. However, NIL's performance in high-dimensional contexts is sensitive to input complexity, requiring refined feature extraction and training setups for improved stability and efficiency.
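The topological (topographic) similarity the abstract reports is conventionally computed as the Spearman correlation between pairwise distances in meaning space (e.g. Hamming distance over attribute vectors) and pairwise distances in message space (e.g. edit distance over symbol sequences). A minimal, standard-library-only Python sketch — function names and distance choices are illustrative assumptions, not taken from the thesis:

```python
from itertools import combinations

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two symbol sequences.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def _ranks(xs):
    # Average ranks (ties share the mean rank), as used by Spearman correlation.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def topographic_similarity(meanings, messages):
    # Spearman correlation between pairwise meaning distances (Hamming over
    # attribute tuples) and pairwise message distances (edit distance).
    md, sd = [], []
    for i, j in combinations(range(len(meanings)), 2):
        md.append(sum(x != y for x, y in zip(meanings[i], meanings[j])))
        sd.append(levenshtein(messages[i], messages[j]))
    rm, rs = _ranks(md), _ranks(sd)
    n = len(rm)
    mean = (n + 1) / 2
    num = sum((a - mean) * (b - mean) for a, b in zip(rm, rs))
    den = (sum((a - mean) ** 2 for a in rm) * sum((b - mean) ** 2 for b in rs)) ** 0.5
    return num / den if den else 0.0
```

For a perfectly compositional language — each attribute mapped to its own symbol position — meaning distances and message distances are perfectly rank-correlated, so the measure reaches its maximum of 1.0.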
Subject/Keywords
Computer Science, Neural Iterated Learning, Natural Language Processing, Reinforcement Learning, Emergent languages, Language Evolution, Communication Efficiency, Compositionality, Feature Extraction, MNIST.
