Human Interaction Solutions for Intuitive Motion Generation of a Virtual Manikin


Fulltext: 223202.pdf (6.64 MB, Adobe PDF)
Type: Examensarbete för masterexamen (Master's Thesis)
Title: Human Interaction Solutions for Intuitive Motion Generation of a Virtual Manikin
Authors: Caltagirone, Luca
Abstract: The aim of this work was to develop a motion capture algorithm for the Kinect sensor that provides robust, real-time tracking even in situations where the built-in Kinect algorithm performs poorly. The proposed method belongs to the family of model-based algorithms, which work by establishing correspondences between an acquired point cloud and a custom-built body model. Specifically, an extension of the point cloud registration algorithm known as iterative closest point (ICP) to articulated bodies was used in combination with the Gauss-Newton minimization algorithm. The virtual manikin IMMA (Intelligently Moving Manikins) enables ergonomic studies to be carried out in a simulated assembly line. To verify the motions of the solutions found by the simulation software, a motion capture system could be integrated into its platform. This could also facilitate the analysis of complex assembly operations that are difficult to solve with the current version of IMMA. Most commercially available motion capture devices are expensive, difficult to operate, and require specialized equipment and software. In recent years, however, Microsoft has introduced an RGB-D camera, the Kinect 1.0, which provides 3D information without the need for triangulation and also integrates a sophisticated motion tracking system based on machine learning algorithms. Unfortunately, this system performs rather poorly in the settings commonly found in assembly lines, where occlusions and self-occlusions are commonplace. By measuring the normalized residual error per point (NREP), one can determine how well the articulated iterative closest point (AICP) system matches the body model to the point cloud acquired from the sensor. The obtained results show that the AICP system is more robust than the Kinect algorithm, producing an NREP approximately seven times smaller on average over a set of 30 unconstrained motion sequences involving occlusions and self-occlusions. Furthermore, the developed system can track a wider range of motions over an extended range, making it a potentially better solution for motion capture in assembly lines.
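The abstract describes the AICP approach only at a high level. The sketch below is a minimal, self-contained illustration (not the thesis code) of two ingredients it mentions: closest-point correspondences between a sensor point cloud and a body model, and a residual-per-point measure in the spirit of NREP. For brevity it uses a simplified rigid alignment step (SVD/Procrustes) instead of the articulated Gauss-Newton update over joint parameters; all function names and the exact NREP definition are illustrative assumptions.

```python
# Minimal sketch, assuming a simplified rigid setting; the thesis instead
# updates joint angles of an articulated body model via Gauss-Newton.
import numpy as np
from scipy.spatial import cKDTree

def closest_point_correspondences(cloud, model):
    """For each sensor point, find the nearest model point (ICP data step)."""
    tree = cKDTree(model)
    dists, idx = tree.query(cloud)
    return model[idx], dists

def nrep(cloud, model):
    """Residual error per point, taken here as the mean closest-point
    distance; the thesis' exact normalization may differ."""
    _, dists = closest_point_correspondences(cloud, model)
    return dists.mean()

def icp_rigid_step(cloud, model):
    """One rigid alignment step (Procrustes via SVD), used as a stand-in
    for the articulated Gauss-Newton pose update."""
    matches, _ = closest_point_correspondences(cloud, model)
    mu_c, mu_m = cloud.mean(0), matches.mean(0)
    H = (cloud - mu_c).T @ (matches - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_c
    return cloud @ R.T + t                    # cloud moved toward the model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.normal(size=(500, 3))                           # stand-in body-model surface
    cloud = model + rng.normal(scale=0.05, size=model.shape) + 0.2  # noisy, offset scan
    for _ in range(10):
        cloud = icp_rigid_step(cloud, model)
    print("residual per point after alignment:", nrep(cloud, model))
```

In the articulated case, the rigid step would be replaced by a Gauss-Newton iteration that linearizes the closest-point residuals with respect to the manikin's joint parameters and solves the resulting normal equations at each ICP iteration.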
Keywords: Farkostteknik; Vehicle Engineering
Issue Date: 2014
Publisher: Chalmers tekniska högskola / Institutionen för tillämpad mekanik
Chalmers University of Technology / Department of Applied Mechanics
Series/Report no.: Diploma work - Department of Applied Mechanics, Chalmers University of Technology, Göteborg, Sweden: 2014:60
URI: https://hdl.handle.net/20.500.12380/223202
Collection: Examensarbeten för masterexamen // Master Theses
