Training Binary Deep Neural Networks Using Knowledge Distillation

Master's thesis (Examensarbete för masterexamen)

File: Master_Thesis_Sofia_Lundborg.pdf (1.97 MB, Adobe PDF)
Bibliographical item details
Type: Master's thesis (Examensarbete för masterexamen)
Title: Training Binary Deep Neural Networks Using Knowledge Distillation
Authors: Lundborg, Sofia
Abstract: Binary networks can be used to speed up inference and make image analysis possible on less powerful devices. However, binarizing a network causes its accuracy to drop. This thesis investigates how the accuracy of a binary network can be improved by using knowledge distillation. Three different knowledge distillation methods were tested on various network types, and different architectures for the residual block in ResNet were suggested and tested. Tests on CIFAR10 showed a 1.5% increase in accuracy when using knowledge distillation, and tests on the ImageNet dataset showed an increase of 1.1%. The results indicate that the suggested knowledge distillation method can improve the accuracy of a binary network. Further testing, especially with longer training, is needed to verify the results. Nevertheless, knowledge distillation shows great potential for boosting the accuracy of binary networks.
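As context for the abstract, the sketch below illustrates two standard building blocks it refers to: sign binarization with a straight-through estimator and a Hinton-style knowledge distillation loss, written in PyTorch. The thesis evaluates three distillation methods and several residual-block variants, so this is a generic illustration of the ideas rather than the specific methods used in the work; the `temperature` and `alpha` values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients through only where |x| <= 1 (clipped identity)
        return grad_output * (x.abs() <= 1).float()


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Hinton-style distillation: a soft KL term against the teacher's
    softened outputs plus cross-entropy against the ground-truth labels.
    temperature and alpha are illustrative, not values from the thesis."""
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # The T^2 factor keeps the KD gradient scale comparable to the CE term
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```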
Keywords: deep neural networks; knowledge distillation; binary neural networks
Issue Date: 2020
Publisher: Chalmers tekniska högskola / Institutionen för fysik
URI: https://hdl.handle.net/20.500.12380/301202
Collection: Examensarbeten för masterexamen // Master Theses