Human intent-recognition system for safety-critical human-machine interactions

Type
Master's thesis
Program
Complex adaptive systems (MPCAS), MSc
Published
2020
Author
Künzler, Simon
Abstract
The aim of this thesis was to investigate the potential of eye-tracking technology to help recognize the intent of humans working with a machine under shared control. An experiment was designed to study the eye-gaze behaviour of test subjects while they manipulated a two degrees-of-freedom (DOF) SCARA robot. The subjects were given the task of maneuvering the end-effector of the robot through a sequence of LEDs located on the robot's action plane. The LED sequence was different for each experiment run and not known to the subjects before the start of each run. In the first step, eye-gaze data was collected while the robot was unactuated. The fixation point of the subjects' gaze was 4.5 times more likely to be in the proximity of the goal LEDs they intended to connect than on a point outside the intended area. In addition, when the subjects planned to move from one LED to the next, their gaze tended to fixate on the next LED between one and two seconds before reaching that position with the robot end-effector, depending on the distance the subject had to cover between the current LED and the next. After reaching the fixated position, the gaze shifted almost immediately (with a 0.1-0.2 s delay) onto the next LED, while movement onset was delayed by about 0.5 seconds. This information was then used to develop an algorithm to predict which LED a subject intends to reach. In a second set of tests, more data was collected, this time under shared control with the robot. The implemented algorithm was able to successfully identify the next goal LED in the subject's planned path and to provide assistance in the movement of the robot arm. How far ahead of time the goals were recognized depended on how soon the subjects' gaze shifted from a reached LED to their next planned goal LED. If a subject fixated on a goal LED 0.3 s before initiating the movement towards it, the robot was able to perform the whole movement between the LEDs. In most cases, the algorithm initiated the support halfway through the subject's planned motion. No significant differences in the subjects' gaze data could be identified between passive robot manipulation and shared control.
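To illustrate the kind of gaze-based goal prediction the abstract describes, the following is a minimal sketch of a dwell-time rule over LED proximity regions: the predicted goal is the LED whose neighbourhood the gaze has settled in for a sustained period. The function names, the proximity radius, and the dwell-time threshold are illustrative assumptions and are not taken from the thesis; this is not the thesis' actual algorithm.

```python
# Hypothetical sketch of gaze-based goal prediction; all names and thresholds are assumptions.
import math
from dataclasses import dataclass


@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # fixation point on the robot action plane (same units as LED positions)
    y: float


def predict_goal_led(samples, led_positions, radius=0.05, min_dwell=0.3):
    """Return the index of the first LED whose proximity region the gaze dwells in
    for at least `min_dwell` seconds, or None if no LED meets the criterion.

    `radius` and `min_dwell` are assumed values, not parameters from the thesis.
    """
    dwell_start = {}  # LED index -> time the current dwell in its region began
    for s in samples:
        for i, (lx, ly) in enumerate(led_positions):
            inside = math.hypot(s.x - lx, s.y - ly) <= radius
            if inside:
                dwell_start.setdefault(i, s.t)
                if s.t - dwell_start[i] >= min_dwell:
                    return i  # gaze has settled on this LED: predicted next goal
            else:
                dwell_start.pop(i, None)  # gaze left the region; reset its dwell timer
    return None


if __name__ == "__main__":
    leds = [(0.0, 0.0), (0.3, 0.1), (0.1, 0.4)]
    # Synthetic gaze trace: brief glance near LED 0, then a sustained fixation near LED 1.
    trace = [
        GazeSample(0.00, 0.01, 0.00),
        GazeSample(0.10, 0.02, 0.01),
        GazeSample(0.20, 0.29, 0.10),
        GazeSample(0.35, 0.30, 0.11),
        GazeSample(0.55, 0.31, 0.09),
    ]
    print(predict_goal_led(trace, leds))  # -> 1
```

In a shared-control setting, such a prediction would be recomputed on the live gaze stream and used to trigger robot assistance towards the predicted LED; how that assistance is blended with the human's input is outside the scope of this sketch.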
Subject/keywords
Human intent-recognition, Eye tracking, Shared-control, Human-robot shared manipulation