Importance Sampling in Deep Learning Object Detection - An empirical investigation into accelerating deep learning object detection training by exploiting informative data points
dc.contributor.author | Huang, Alexander | |
dc.contributor.author | Kvernes, Johannes | |
dc.contributor.department | Chalmers tekniska högskola / Institutionen för data och informationsteknik | sv |
dc.contributor.department | Chalmers University of Technology / Department of Computer Science and Engineering | en |
dc.contributor.examiner | Angelov, Krasimir | |
dc.contributor.supervisor | Yu, Yinan | |
dc.date.accessioned | 2023-06-21T09:16:42Z | |
dc.date.available | 2023-06-21T09:16:42Z | |
dc.date.issued | 2023 | |
dc.date.submitted | 2023 | |
dc.description.abstract | In recent years, the field of deep learning object detection has seen notable progress, largely fueled by the growth of available data. While the ever-growing amount of data has enabled complex object detection models to improve generalization performance and has made training less prone to over-fitting, the time required to reach the desired performance on such large datasets has been reported as an emerging problem in practice. This is certainly the case when each sample consists of high-resolution image data that can strain data-loading capacity. This thesis explores the possibility of leveraging importance sampling to accelerate gradient-based training of an object detection model. This avenue of research aims at biasing the sampling of training examples during training, in the hope of exploiting informative samples and reducing the amount of computation spent on uninformative, noisy, and out-of-distribution samples. While prior work shows compelling evidence for importance sampling in deep learning, it is consistently reported in the context of less complex tasks. In this work, we propose techniques that can be applied to single-stage anchor-free object detectors in order to adopt importance sampling to accelerate training. Our methods do not require modifications to the objective function and allow for a biased sampling procedure that remains consistent across runs and samples. Our results suggest that an uncertainty-based heuristic outperforms loss-based heuristics, and that object detection training can achieve a remarkable speed-up in terms of reaching the baseline's performance in fewer iterations, where the baseline samples the training examples uniformly without replacement. Furthermore, the empirical observations reported in this work indicate that, given an equal amount of training time, a higher final generalization performance can be achieved compared to the baseline. | |
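The biased sampling procedure described in the abstract can be illustrated with a minimal Python sketch. This is not the thesis's actual implementation: the stale-per-sample-loss scoring heuristic, variable names, and batch mechanics below are illustrative assumptions standing in for the loss- and uncertainty-based heuristics the abstract names.

    import numpy as np

    # Minimal sketch of importance sampling for minibatch selection: bias
    # sampling toward informative examples instead of drawing uniformly
    # without replacement (the baseline in the abstract). The scoring rule
    # here (the loss recorded the last time each example was seen) is a
    # hypothetical stand-in for the thesis's heuristics.

    rng = np.random.default_rng(0)
    num_samples = 10_000
    batch_size = 64

    # Per-sample importance scores, initialised optimistically so every
    # example gets visited at least once before its score decays.
    scores = np.full(num_samples, fill_value=1.0)

    def sample_batch():
        # Sampling probabilities proportional to the importance scores.
        probs = scores / scores.sum()
        return rng.choice(num_samples, size=batch_size, replace=False, p=probs)

    def update_scores(indices, losses):
        # Refresh the stale scores for the examples just trained on.
        scores[indices] = losses

    # One simulated training step: draw a biased batch, pretend to compute
    # per-sample losses, and update the score table.
    batch = sample_batch()
    fake_losses = rng.random(batch_size)
    update_scores(batch, fake_losses)

Note that, consistent with the abstract's claim of an unmodified objective function, this sketch only changes which examples are drawn; it does not reweight the loss of the sampled examples.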
dc.identifier.uri | http://hdl.handle.net/20.500.12380/306344 | |
dc.setspec.uppsok | Technology | |
dc.subject | Importance sampling | |
dc.subject | deep learning | |
dc.subject | object detection | |
dc.subject | accelerate training | |
dc.subject | sampling heuristic | |
dc.subject | gradient descent | |
dc.subject | loss | |
dc.subject | uncertainty | |
dc.subject | threshold-closeness | |
dc.title | Importance Sampling in Deep Learning Object Detection - An empirical investigation into accelerating deep learning object detection training by exploiting informative data points | |
dc.type.degree | Examensarbete för masterexamen | sv |
dc.type.degree | Master's Thesis | en |
dc.type.uppsok | H | |
local.programme | Data science and AI (MPDSC), MSc |