Learning Abstractions via Reinforcement Learning
dc.contributor.author | JERGÉUS, ERIK | |
dc.contributor.author | KARLSSON OINONEN, LEO | |
dc.contributor.department | Chalmers tekniska högskola / Institutionen för data och informationsteknik | sv |
dc.contributor.examiner | Dubhashi, Devdatt | |
dc.contributor.supervisor | Johansson, Moa | |
dc.date.accessioned | 2022-10-14T12:45:37Z | |
dc.date.available | 2022-10-14T12:45:37Z | |
dc.date.issued | 2022 | sv |
dc.date.submitted | 2020 | |
dc.description.abstract | In this paper we take the first steps in studying a new approach to the synthesis of efficient communication schemes in multi-agent systems trained via reinforcement learning. We combine symbolic methods with machine learning in what is referred to as a neuro-symbolic system. The agents are not restricted to using only the initial primitives: reinforcement learning is interleaved with steps that extend the current language with novel higher-level concepts, allowing generalisation and more informative communication via shorter messages. We demonstrate that this approach allows agents to converge more quickly on a small collaborative construction task. | sv |
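A minimal, purely illustrative sketch of the interleaved loop described in the abstract: reinforcement-learning episodes over the current language alternate with an abstraction step that adds higher-level concepts to it. Everything here (the primitive actions, the toy reward, and the names run_episode and extract_abstractions) is a hypothetical placeholder, not code from the thesis.

import random

# Hypothetical initial primitives for a small collaborative construction task.
PRIMITIVES = ["move_left", "move_right", "move_up", "move_down", "place_block"]

def run_episode(vocabulary):
    """One toy episode: a speaker samples a message from the current
    vocabulary; abstract (non-primitive) messages earn a higher reward,
    standing in for 'more informative communication via shorter messages'."""
    message = random.choice(vocabulary)
    reward = 1.0 if message not in PRIMITIVES else 0.5
    return message, reward

def extract_abstractions(history):
    """Toy stand-in for the language-extension step: once enough
    trajectories are collected, propose a new higher-level concept
    (DreamCoder-style compression of recurring primitive sequences)."""
    return ["build_row"] if len(history) >= 20 else []

vocabulary = list(PRIMITIVES)
history = []
for phase in range(3):
    # 1) Reinforcement-learning phase over the current language.
    for _ in range(20):
        history.append(run_episode(vocabulary))
    # 2) Abstraction phase: extend the language with novel concepts.
    for concept in extract_abstractions(history):
        if concept not in vocabulary:
            vocabulary.append(concept)

print(vocabulary)  # initial primitives plus any newly invented concepts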
dc.identifier.coursecode | DATX05 | sv |
dc.identifier.uri | https://hdl.handle.net/20.500.12380/305714 | |
dc.language.iso | eng | sv |
dc.setspec.uppsok | Technology | |
dc.subject | RL | sv |
dc.subject | MARL | sv |
dc.subject | multi-agent | sv |
dc.subject | DreamCoder | sv |
dc.subject | neuro-symbolic | sv |
dc.subject | abstraction | sv |
dc.subject | communication | sv |
dc.subject | AI | sv |
dc.title | Learning Abstractions via Reinforcement Learning | sv |
dc.type.degree | Examensarbete för masterexamen (Master's thesis) | sv |
dc.type.uppsok | H | |