Automatic View Compensation

dc.contributor.author: Dahl, Rickard
dc.contributor.department: Chalmers tekniska högskola / Institutionen för elektroteknik [sv]
dc.contributor.department: Chalmers University of Technology / Department of Electrical Engineering [en]
dc.date.accessioned: 2019-07-03T14:49:24Z
dc.date.available: 2019-07-03T14:49:24Z
dc.date.issued: 2018
dc.description.abstract: Controlling the luminance in the tunnel threshold zone is a challenging task. On the one hand, the luminance in the threshold zone must be high enough that a driver experiencing significant glare can detect a possible obstacle in the relatively dark threshold zone and stop in time to avoid an accident. On the other hand, there is a demand to keep the luminance as low as possible to reduce energy consumption. To control the luminance in the threshold zone, the glare experienced by the driver must be estimated. This is done by measuring the equivalent veiling luminance Lseq. Ideally, Lseq would be measured at the stopping distance, 1.5 m above the road surface and in the middle of the road, with the camera centered on the tunnel opening. In practice, however, the camera often needs to be mounted at a height of 5–7 m to avoid staining and vandalism, and it sometimes also needs to be placed at the side of the road. This camera positioning leads to a significant Lseq error. In this thesis, a view transformation method aimed at automatically reducing the Lseq error is presented. The method is based on Depth Image Based Rendering, which requires a camera pose for a virtual view as well as a depth map for the original view. Depth maps of a scene are rarely available, however, and for this reason a monocular depth map estimation method is proposed. The depth estimation method detects Maximally Stable Extremal Regions and classifies the regions according to detected line segments. The proposed view transformation method has been evaluated on a small number of scenes, where it reduces the Lseq error by 40–85%. Several issues nevertheless require future improvement: a long computation time, a high occurrence of errors in the estimated depth maps, and errors in the centering of the diagram used for the Lseq computation. With these adjustments, the proposed approach could be implemented in the luminance measurement system and used in practice.
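The abstract's central quantity, the equivalent veiling luminance Lseq, is commonly approximated with the Stiles–Holladay formula, which sums the contribution of each glare source from its illuminance at the eye and its angular distance from the line of sight. As a minimal sketch (the function name and example values are illustrative, not taken from the thesis):

```python
def veiling_luminance(glare_sources):
    """Stiles-Holladay approximation of the equivalent veiling luminance.

    glare_sources: iterable of (E, theta) pairs, where E is the illuminance
    at the eye from a glare source in lux and theta is the angle in degrees
    between the viewing direction and the source (the approximation is
    typically used for angles of roughly 1-30 degrees).
    Returns L_seq in cd/m^2.
    """
    return sum(10.0 * E / theta**2 for E, theta in glare_sources)

# Example: two hypothetical bright patches near the tunnel opening.
print(veiling_luminance([(500.0, 5.0), (200.0, 10.0)]))  # → 220.0
```

In the thesis setting the pairs (E, theta) would come from the luminance camera image, which is why a mispositioned camera shifts every theta and biases the sum.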
dc.identifier.uri: https://hdl.handle.net/20.500.12380/255609
dc.language.iso: eng
dc.setspec.uppsok: Technology
dc.subject: Datorseende och robotik (autonoma system)
dc.subject: Computer Vision and Robotics (Autonomous Systems)
dc.title: Automatic View Compensation
dc.type.degree: Examensarbete för masterexamen [sv]
dc.type.degree: Master Thesis [en]
dc.type.uppsok: H
local.programme: Elektroteknik 300 hp (civilingenjör)
Original bundle: 255609.pdf (Fulltext, Adobe Portable Document Format, 42.3 MB)