Matching Traffic Objects recorded by Stereoscopic Cameras

Type
Master's thesis
Published
2020
Authors
Phung, Tommy
Areback, Stefan
Abstract
If several cameras film different but overlapping parts of a scene (in our case an intersection), one way to get an overview of the scene is to relate all the cameras to a single coordinate system. This can be done manually, using knowledge of the positions, relative to the different cameras, of objects that appear in the overlapping part of the filmed scene. However, it would be preferable (to save time, among other things) if this process could be automated using only information recorded by the cameras, in this case the positions, velocities and timestamps of the traffic objects they film. The suggested method for achieving this is Coherent point drift (CPD), fitted with Expectation maximization (EM). Once a common coordinate system has been found, the trajectories from the different cameras that correspond to the same traffic object still need to be merged into a single trajectory. Preferably, this too should be done using only the information recorded by the cameras (positions, velocities, timestamps). For finding such a merge, the presented method is a modified version of Longest Common Subsequence (LCSS) that takes the camera views and their overlap, represented as polygons, into account. CPD performs well for two cameras when LCSS is applied as a noise-reduction step, whereas for three cameras it gives an ambiguous solution. Matching trajectories for merging with LCSS performs well for both two and three cameras, although the merging method needs some additional calibration.
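As an illustration only, and not the authors' implementation, the Python sketch below outlines the two stages named in the abstract. It assumes the open-source pycpd package as a stand-in for the CPD/EM alignment, and a plain LCSS with spatial and temporal thresholds in place of the thesis' polygon-aware variant; the function names, the eps and delta thresholds, and the (t, x, y) trajectory format are illustrative assumptions.

# Minimal sketch of the two stages, under the assumptions stated above.
import numpy as np
from pycpd import RigidRegistration  # assumed choice of CPD implementation

def align_cameras(source_pts, target_pts):
    """Estimate a rigid transform mapping one camera's points onto another's via CPD."""
    reg = RigidRegistration(X=target_pts, Y=source_pts)
    transformed, (scale, rotation, translation) = reg.register()
    return transformed, scale, rotation, translation

def lcss_similarity(traj_a, traj_b, eps=1.0, delta=0.5):
    """LCSS score between two timestamped trajectories given as [(t, x, y), ...]."""
    n, m = len(traj_a), len(traj_b)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            ta, xa, ya = traj_a[i - 1]
            tb, xb, yb = traj_b[j - 1]
            # Two samples match if they are close in both time and space.
            if abs(ta - tb) <= delta and np.hypot(xa - xb, ya - yb) <= eps:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return dp[n, m] / min(n, m)  # normalised to [0, 1]

In such a pipeline, trajectories from the aligned cameras whose LCSS score exceeds some threshold would be treated as observations of the same traffic object and merged; the thresholding itself is part of the calibration the abstract refers to.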
Subject / keywords
Point set registration, Coherent point drift, Expectation maximization, Longest Common Subsequence, camera alignment