Cooperative Inverse Reinforcement Learning - Cooperation and learning in an asymmetric information setting with a suboptimal teacher

Master's thesis (Examensarbete för masterexamen)

Download: 256256.pdf (Fulltext, 1.2 MB, Adobe PDF)
Type: Master's thesis (Examensarbete för masterexamen)
Title: Cooperative Inverse Reinforcement Learning - Cooperation and learning in an asymmetric information setting with a suboptimal teacher
Authors: Ek, Johan
Abstract: There exist many scenarios in which an artificial intelligence (AI) may have to learn from a human. One such scenario is when both have to cooperate but only the human knows what the goal is. This is the setting studied in cooperative inverse reinforcement learning (CIRL). The purpose of this report is to analyze CIRL when the human does not behave fully optimally and may make mistakes. The effect of different behaviours by the human is investigated, and two frameworks are developed: one for when there is a finite set of possible goals, and one for the general case where the set of possible goals is infinite. Two benchmark problems are designed to compare learning performance. The experiments show that the AI learns, but also that the human's behaviour has a large effect on learning. The experiments also highlight the difficulty of differentiating between the actual goal and other possible goals that are similar in some aspects.
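The report's own frameworks are not reproduced on this page, but the finite-goal case described in the abstract can be sketched as follows: the AI maintains a Bayesian belief over a small set of candidate goals and updates it from observed human actions, assuming the human is only approximately optimal (here modelled as Boltzmann-rational, a stand-in for a suboptimal teacher). All names, parameters, and values below (update_belief, boltzmann_likelihood, beta, q_per_goal) are hypothetical illustrations, not the thesis's actual implementation.

```python
import numpy as np

def boltzmann_likelihood(q_values, action, beta=2.0):
    """P(action | goal) when the human tends to pick better actions,
    but not always (a simple model of a suboptimal teacher)."""
    prefs = np.exp(beta * (q_values - q_values.max()))
    return prefs[action] / prefs.sum()

def update_belief(belief, q_per_goal, observed_action, beta=2.0):
    """Bayesian belief update over a finite set of candidate goals."""
    likelihoods = np.array([
        boltzmann_likelihood(q, observed_action, beta) for q in q_per_goal
    ])
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Toy example: 3 candidate goals, 4 actions, made-up Q-values per goal.
q_per_goal = [
    np.array([1.0, 0.2, 0.1, 0.0]),  # goal 0 strongly prefers action 0
    np.array([0.1, 1.0, 0.2, 0.0]),  # goal 1 strongly prefers action 1
    np.array([0.9, 0.3, 0.1, 0.0]),  # goal 2 is similar to goal 0
]
belief = np.ones(3) / 3              # uniform prior over the goals
belief = update_belief(belief, q_per_goal, observed_action=0)
print(belief)  # goals 0 and 2 stay hard to tell apart, mirroring the
               # difficulty of separating similar goals noted in the abstract
```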
Keywords: Computer and Information Science
Issue Date: 2018
Publisher: Chalmers University of Technology / Department of Computer Science and Engineering
URI: https://hdl.handle.net/20.500.12380/256256
Collection: Master Theses