Multi camera extrinsic calibration

dc.contributor.author: Blanc, August
dc.contributor.department: Chalmers tekniska högskola / Institutionen för elektroteknik (sv)
dc.contributor.examiner: Hammarstrand, Lars
dc.contributor.supervisor: Hammarstrand, Lars
dc.contributor.supervisor: Stenborg, Erik
dc.date.accessioned: 2025-06-18T10:49:46Z
dc.date.issued: 2025
dc.date.submitted:
dc.description.abstract: Autonomous driving has the potential to increase traffic safety and enable greater automation in our society. These systems rely on high-precision sensor data, not least from cameras, in order to detect and map out the environment around a vehicle. In doing this, knowledge of how the sensors are mounted is of crucial importance. The mounting is typically calibrated in a static rig when a vehicle is manufactured, but after some driving the orientation of the sensors may change due to bumps and vibrations, requiring regular recalibration. This thesis presents an automated refinement method for the rotations of cameras mounted on a car. There are four fisheye cameras with a wide field of view that together can be used to form a 360° view of the surrounding ground; this is what you typically see on the display when parking a modern car. There are also three regular cameras, of which one looks forwards and two look backwards, angled slightly to the left and right, respectively. Given images taken by the cameras at the same time, we find pixel coordinates that correspond to the same real-world point for camera pairs with overlapping fields of view. Using this information, we then optimize all camera poses simultaneously, starting from an initial pose estimate. The performance is evaluated by two criteria. Firstly, we consider the surround-view images created by stitching together fisheye camera images projected onto the ground plane; in the overlap between cameras, the alignment of, e.g., lane markings and shadows in the ground plane indicates the quality of a calibration. Secondly, we manually annotate point correspondences for each overlapping camera pair in each scenario. These are used to calculate a quantitative error metric, essentially telling us how close the pixel coordinates are, for the given camera poses, to originating from the same real-world point. The results indicate that our method achieves an accuracy comparable to that of a static calibration rig. This is a very promising result, since the time- and space-consuming rig calibration could potentially be replaced by a calibration method that runs while the car is driving. Additionally, the fisheye camera calibration appears to achieve an accuracy comparable to other state-of-the-art methods, while also being applicable to a wide range of driving environments and allowing other cameras to be included in the calibration.
dc.identifier.coursecode: EENX30
dc.identifier.uri: http://hdl.handle.net/20.500.12380/309532
dc.language.iso: eng
dc.setspec.uppsok: Technology
dc.subject: Computer Vision
dc.subject: Extrinsic Calibration
dc.subject: Bundle Adjustment
dc.subject: Surround View System
dc.subject: Fisheye Camera
dc.subject: Rotation Optimization
dc.title: Multi camera extrinsic calibration
dc.type.degree: Examensarbete för masterexamen (sv)
dc.type.degree: Master's Thesis (en)
dc.type.uppsok: H
local.programme: Complex adaptive systems (MPCAS), MSc
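
The pose optimization described in the abstract above starts from an initial estimate and refines all camera rotations jointly from pixel correspondences between cameras with overlapping fields of view. The sketch below illustrates that idea in Python; it is not the thesis implementation. It assumes a pinhole model with known intrinsics K and fixed camera positions t, and that each correspondence lies on the ground plane z = 0; all function and variable names are hypothetical.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def ray_to_ground(K, R, t, uv):
    # Back-project pixel uv to the ground plane z = 0 (world frame).
    d = R @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    s = -t[2] / d[2]  # scale the ray so it reaches z = 0
    return t + s * d

def project(K, R, t, X):
    # Project world point X into the camera (pinhole model).
    x = K @ (R.T @ (X - t))
    return x[:2] / x[2]

def residuals(rotvecs, cams, matches):
    # cams: list of (K, t); matches: list of (i, j, uv_i, uv_j) pairing
    # a pixel in camera i with a pixel in camera j.
    Rs = [Rotation.from_rotvec(r).as_matrix() for r in rotvecs.reshape(-1, 3)]
    res = []
    for i, j, uv_i, uv_j in matches:
        X = ray_to_ground(cams[i][0], Rs[i], cams[i][1], uv_i)
        res.extend(project(cams[j][0], Rs[j], cams[j][1], X) - uv_j)
    return np.asarray(res)

# Joint refinement of all rotations, starting from the initial estimates:
# result = least_squares(residuals, rotvecs_init.ravel(),
#                        args=(cams, matches), loss="huber")

A robust loss such as Huber limits the influence of mismatched correspondences, which matters when the matches are found automatically rather than annotated by hand.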
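
The first evaluation criterion in the abstract stitches together fisheye images projected onto the ground plane and inspects the alignment in the overlap. As a simplified illustration of that projection step (again assuming a pinhole rather than a fisheye model, and with hypothetical names), one camera image can be warped to a top-down view via the homography between the image and the ground plane z = 0:

import cv2
import numpy as np

def bev_warp(img, K, R, t, m_per_px=0.02, bev_size=(1000, 1000)):
    # Warp a camera image onto the ground plane z = 0 (bird's-eye view).
    # R, t: camera-to-world rotation and camera position in the world frame.
    R_cw, t_cw = R.T, -R.T @ t                                   # world -> camera
    H_gc = K @ np.column_stack((R_cw[:, 0], R_cw[:, 1], t_cw))   # ground (m) -> pixel
    # BEV pixel -> ground metres, with the world origin at the image centre.
    S = np.array([[m_per_px, 0.0, -m_per_px * bev_size[0] / 2],
                  [0.0, m_per_px, -m_per_px * bev_size[1] / 2],
                  [0.0, 0.0, 1.0]])
    return cv2.warpPerspective(img, np.linalg.inv(H_gc @ S), bev_size)

Overlaying the warped outputs of adjacent cameras shows whether lane markings and shadows align in the overlap region; misalignment there is exactly what the surround-view criterion in the abstract picks up on.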

Download

Original bundle
Name: MasterThesisFinalReport.pdf
Size: 14.78 MB
Format: Adobe Portable Document Format