Automatic View Compensation

Master's thesis

Type: Master's thesis
Title: Automatic View Compensation
Authors: Dahl, Rickard
Abstract: Controlling the luminance in the tunnel threshold zone is a challenging task. On the one hand, the luminance in the threshold zone must be high enough that a driver experiencing significant glare can detect a possible obstacle in the relatively dark threshold zone and stop in time to avoid an accident. On the other hand, the luminance should be kept as low as possible to reduce energy consumption. To control the luminance in the threshold zone, the glare experienced by the driver needs to be estimated. This is done by measuring the equivalent veiling luminance Lseq. Ideally, Lseq would be measured at the stopping distance, 1.5 m above the road surface and in the middle of the road, with the camera centered on the tunnel opening. In practice, however, the camera often needs to be mounted at a height of 5 to 7 m to avoid staining and vandalism, and it sometimes also needs to be placed at the side of the road. This camera positioning leads to a significant Lseq error. This thesis presents a view transformation method aimed at automatically reducing the Lseq error. The method is based on Depth Image Based Rendering, which requires a camera pose for the virtual view as well as a depth map for the original view. Since depth maps of a scene are rarely available, a monocular depth map estimation method is also proposed, based on detecting Maximally Stable Extremal Regions and classifying the regions according to detected line segments. The proposed view transformation method has been evaluated on a small number of scenes, with error reductions of 40 to 85%. Several issues remain for future work, including long computation times, frequent errors in the estimated depth maps, and errors in the centering of the diagram used for the Lseq computation.
Nevertheless, with a few minor adjustments the proposed approach could be implemented in the luminance measurement system and used in practice.
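The core warping step of Depth Image Based Rendering mentioned in the abstract can be sketched as follows. This is a minimal illustrative version, not the thesis's implementation: it assumes a pinhole camera with known intrinsics K, a per-pixel depth map for the original view, and a relative pose (R, t) to the virtual view, and it uses naive forward splatting with no hole filling or occlusion handling.

```python
import numpy as np

def dibr_warp(image, depth, K, R, t):
    """Warp a grayscale image into a virtual view via depth-image-based
    rendering: back-project each pixel to 3D using its depth, transform
    by the relative pose (R, t), and re-project with the intrinsics K.
    Illustrative sketch only: naive forward splatting, no hole filling."""
    h, w = depth.shape
    # Homogeneous pixel grid of the original view, shape (3, h*w).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    # Back-project each pixel to a 3D point in the original camera frame.
    pts = np.linalg.inv(K) @ pix * depth.ravel()
    # Move the points into the virtual camera frame and re-project.
    proj = K @ (R @ pts + t[:, None])
    uv = proj[:2] / proj[2]
    # Splat each source pixel onto its nearest target pixel.
    out = np.zeros_like(image)
    tu, tv = np.round(uv).astype(int)
    ok = (tu >= 0) & (tu < w) & (tv >= 0) & (tv < h)
    out[tv[ok], tu[ok]] = image.ravel()[ok]
    return out
```

With the identity pose (R = I, t = 0) the warp reproduces the input image exactly; in the setting of the thesis, (R, t) would describe the offset between the actual high, roadside camera and the ideal driver's-eye viewpoint.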
Keywords: Computer Vision and Robotics (Autonomous Systems)
Issue Date: 2018
Publisher: Chalmers University of Technology / Department of Electrical Engineering
Collection: Master's theses
