Automatic View Compensation

Type

Examensarbete för masterexamen (Master's thesis)

Abstract

Controlling the luminance in the tunnel threshold zone is a challenging task. On the one hand, the luminance in the threshold zone needs to be high enough for a driver experiencing significant glare to detect a possible obstacle in the relatively dark threshold zone and stop in time to avoid an accident. On the other hand, there is a demand to keep the luminance as low as possible to reduce energy consumption. In order to control the luminance in the threshold zone, the glare experienced by the driver needs to be estimated. This is done by measuring the equivalent veiling luminance, Lseq. Ideally, Lseq would be measured at the stopping distance, 1.5 m above the road surface, in the middle of the road, with the camera centered on the tunnel opening. In practice, however, the camera often needs to be mounted at a height of 5–7 m to avoid staining and vandalism, and it sometimes also needs to be placed at the side of the road. This camera positioning leads to a significant Lseq error. In this thesis, a view transformation method aimed at automatically reducing the Lseq error is presented. The method is based on Depth Image Based Rendering, which requires a camera pose for the virtual view as well as a depth map for the original view. Depth maps of a scene are rarely available, however, and for this reason a monocular depth map estimation method is proposed. The depth estimation method is based on detecting Maximally Stable Extremal Regions and classifying the regions according to detected line segments. The proposed view transformation method has been evaluated on a small number of scenes, and the error reduction is as high as 40–85%. Several issues remain for future work, however, including a long computation time, frequent errors in the estimated depth maps, and errors in the centering of the diagram used for the Lseq computation. Nevertheless, with a few minor adjustments, the proposed approach can be implemented in the luminance measurement system and used in practice.
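To make the measured quantity concrete: the equivalent veiling luminance is commonly approximated with the classic Stiles–Holladay disability-glare formula, Lseq = 10 · Σ E_i / θ_i², where E_i is the illuminance at the eye from glare source i (in lux) and θ_i its angular distance from the line of sight (in degrees, valid roughly for 1°–30°). This is a minimal sketch of that standard approximation, not the thesis's exact measurement pipeline; the function name and input format are illustrative.

```python
def veiling_luminance(glare_sources):
    """Approximate the equivalent veiling luminance Lseq in cd/m^2.

    glare_sources: iterable of (E_i, theta_i) pairs, where E_i is the
    illuminance at the driver's eye from source i (lux) and theta_i is
    the angle between the viewing direction and that source (degrees,
    roughly 1-30 degrees for the formula to be valid).

    Uses the Stiles-Holladay approximation Lseq = 10 * sum(E_i / theta_i^2);
    the age- and eye-pigmentation terms of the full CIE general disability
    glare equation are omitted for simplicity.
    """
    return 10.0 * sum(e / (theta ** 2) for e, theta in glare_sources)
```

For example, a single source producing 100 lx at the eye, 5° off the line of sight, contributes 10 · 100 / 25 = 40 cd/m² of veiling luminance; contributions from multiple sources simply add.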
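The core of Depth Image Based Rendering is a per-pixel reprojection: back-project each pixel of the original view using its depth, move the resulting 3-D point into the virtual camera frame, and project it again. The following numpy sketch shows that warping step under simplifying assumptions (both views share the same intrinsics K, and no occlusion handling or hole filling is done); it is an illustration of the general technique, not the thesis's implementation.

```python
import numpy as np

def reproject(depth, K, R, t):
    """Forward-warp pixel coordinates of the original view into a
    virtual view (the geometric core of depth-image-based rendering).

    depth: (H, W) depth map in metres for the original camera.
    K:     (3, 3) intrinsic matrix, assumed shared by both views.
    R, t:  rotation (3, 3) and translation (3,) taking points from the
           original camera frame to the virtual camera frame.
    Returns an (H, W, 2) array of (u, v) coordinates in the virtual view.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    rays = pix @ np.linalg.inv(K).T        # back-project to normalised rays
    pts = rays * depth[..., None]          # scale by depth -> 3-D points
    pts_v = pts @ R.T + t                  # move into the virtual camera frame
    proj = pts_v @ K.T                     # project with the intrinsics
    return proj[..., :2] / proj[..., 2:3]  # perspective divide -> (u, v)
```

With the identity pose (R = I, t = 0) every pixel maps to itself, which gives a quick sanity check; in the compensation setting, (R, t) would encode the offset between the mounted camera (5–7 m up, at the roadside) and the ideal viewpoint at driver height.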

Subject / keywords

Computer Vision and Robotics (Autonomous Systems)
