Dynamic State Representation for Homeostatic Agents

Master's thesis (Examensarbete för masterexamen)

Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12380/256399
Download file(s):
256399.pdf (Fulltext, 3.6 MB, Adobe PDF)
Full metadata record
dc.contributor.author: Mäkeläinen, Fredrik
dc.contributor.author: Torén, Hampus
dc.contributor.department: Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers) [sv]
dc.contributor.department: Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers) [en]
dc.date.accessioned: 2019-07-03T14:58:28Z
dc.date.available: 2019-07-03T14:58:28Z
dc.date.issued: 2018
dc.identifier.uri: https://hdl.handle.net/20.500.12380/256399
dc.description.abstract: In a reinforcement learning setting, an agent learns how to behave in an environment through interactions. For complex environments, the explorable state space can easily become unmanageable, and efficient approximations are needed. The Generic Animat model (GA model), heavily influenced by biology, takes an approach that uses a dynamic graph to represent the state. This thesis is part of the Generic Animat research project at Chalmers that develops the GA model. In this thesis, we identify and implement potential improvements to the GA model and compare it to standard Q-learning and deep Q-learning. With the improved GA model we show that, in a state space larger than 2^32, we see substantial performance gains compared to the original model.
dc.language.iso: eng
dc.setspec.uppsok: Technology
dc.subject: Data- och informationsvetenskap
dc.subject: Computer and Information Science
dc.title: Dynamic State Representation for Homeostatic Agents
dc.type.degree: Examensarbete för masterexamen [sv]
dc.type.degree: Master Thesis [en]
dc.type.uppsok: H
Collection: Examensarbeten för masterexamen // Master Theses
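
The abstract compares the GA model against standard Q-learning as a baseline. For reference, below is a minimal sketch of the tabular Q-learning update and epsilon-greedy action selection; the function names, hyperparameters, and table layout are illustrative assumptions, not taken from the thesis, which only names the baseline algorithm.

```python
import random
from collections import defaultdict

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.99):
    """One tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Explore with probability epsilon, otherwise pick the greedy action."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

# Q is a table keyed by (state, action). For state spaces larger than 2**32,
# such a table becomes unmanageable, which is what motivates approximations
# like deep Q-learning or the GA model's dynamic graph state representation.
Q = defaultdict(float)
```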



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.