Scene Change Detection

Type

Master's thesis

Abstract

Scene Change Detection (SCD) identifies changes between images of the same scene taken at two different times, in most cases using pixel-level or point-cloud approaches. Training a neural network for this task requires a large number of images with annotated changes, and annotating changes is a slow and costly process. State-of-the-art (SOTA) approaches for SCD, such as DR-TANet, rely on supervised transfer learning from large labeled datasets such as ImageNet. To overcome the challenges mentioned above, we introduce a self-supervised pretraining method that uses unlabeled datasets, based on the existing D-SSCD approach, which learns temporally consistent representations of a pair of images. This project investigates these approaches, training and evaluating them on available datasets with a loss function suited to SCD. We compare results for different percentages of labeled data across models and benchmark datasets such as the Visual Localization CMU (VL_CMU_CD) and Panoramic Change Detection (PCD) datasets.
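The idea of learning temporally consistent representations can be sketched as a simple objective: embeddings of the same scene captured at two times should be pulled together, e.g. by maximising their cosine similarity. The following is a minimal, hypothetical numpy sketch of such a loss term; the function name and the assumption that inputs are per-image feature vectors are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def temporal_consistency_loss(z_t0, z_t1):
    """Hypothetical temporal-consistency objective: encourage the
    embedding of an image at time t0 to match the embedding of the
    same scene at time t1 (loss = mean of 1 - cosine similarity).

    z_t0, z_t1: (N, D) arrays of feature vectors for N image pairs.
    """
    # L2-normalise each feature vector so the dot product is a cosine.
    z_t0 = z_t0 / np.linalg.norm(z_t0, axis=1, keepdims=True)
    z_t1 = z_t1 / np.linalg.norm(z_t1, axis=1, keepdims=True)
    cos_sim = np.sum(z_t0 * z_t1, axis=1)  # per-pair cosine similarity
    return float(np.mean(1.0 - cos_sim))

# Identical embeddings give zero loss; orthogonal ones give loss 1.
z = np.array([[1.0, 0.0], [0.0, 1.0]])
print(temporal_consistency_loss(z, z))                      # → 0.0
print(temporal_consistency_loss(z, z[::-1]))                # → 1.0
```

Minimising such a term on unlabeled image pairs is what lets the pretraining step proceed without change annotations; only the final change-detection head then needs labeled data.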

Keywords

Detection, pixel-level, slow, annotating
