Scene Change Detection

Type
Master's thesis
Published
2022
Authors
Siddhant, Som
Swaathy, Sambath
Abstract
Scene Change Detection (SCD) identifies changes between images of the same scene taken at two different times, in most cases using pixel-level or point cloud approaches. Training a neural network for this task requires a large number of images with annotated changes, and annotating changes is a slow and costly process. State-of-the-art (SOTA) approaches to SCD, such as DR-TANet, rely on supervised transfer learning from large labeled datasets such as ImageNet. To overcome the challenges mentioned above, we introduce a self-supervised pretraining method that uses unlabeled datasets, based on the existing D-SSCD approach, which learns temporally consistent representations of a pair of images. This project investigates how these approaches can be trained and evaluated on available datasets using a loss function suited to SCD. We compare results for different percentages of labeled data across models and benchmark datasets, namely the Visual Localization CMU (VL_CMU_CD) and Panoramic Change Detection (PCD) datasets.
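
To illustrate the kind of temporal-consistency objective that such self-supervised pretraining relies on, the sketch below encodes the two temporal views of a scene and pulls their representations together. This is a minimal example, not the thesis implementation or the exact D-SSCD objective: the ResNet-18 backbone, the projection-head sizes, and the negative cosine-similarity loss are all assumptions made for illustration.

```python
# Minimal, illustrative sketch of temporal-consistency pretraining for SCD.
# Assumptions (not from the thesis): ResNet-18 backbone, a small projection
# head, and a negative cosine-similarity loss between the two temporal views.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class TemporalConsistencyModel(nn.Module):
    def __init__(self, proj_dim: int = 128):
        super().__init__()
        # Backbone without its classification head (assumed choice).
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        # Projection head mapping pooled features to the embedding space.
        self.projector = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(inplace=True), nn.Linear(256, proj_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x).flatten(start_dim=1)
        return self.projector(feats)


def temporal_consistency_loss(z_t0: torch.Tensor, z_t1: torch.Tensor) -> torch.Tensor:
    # Negative cosine similarity: maximizing agreement between the two
    # temporal views encourages representations that are stable over time.
    z_t0 = F.normalize(z_t0, dim=1)
    z_t1 = F.normalize(z_t1, dim=1)
    return -(z_t0 * z_t1).sum(dim=1).mean()


if __name__ == "__main__":
    model = TemporalConsistencyModel()
    img_t0 = torch.randn(4, 3, 224, 224)  # images of the same scenes at time t0
    img_t1 = torch.randn(4, 3, 224, 224)  # the same scenes at a later time t1
    loss = temporal_consistency_loss(model(img_t0), model(img_t1))
    loss.backward()
    print(f"pretraining loss: {loss.item():.4f}")
```

Note that a consistency loss alone can collapse to a constant representation; self-supervised methods in the literature typically add mechanisms such as stop-gradients, predictor heads, or redundancy-reduction terms to prevent this.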
Subject / keywords
Detection, pixel-level, slow, annotating