Adversarial Black-Box Attacks in the Domain of Device Fingerprints
Master's thesis
Computer systems and networks (MPCSN), MSc
Network security products incorporate many different tools in order to secure large networks. State-of-the-art products often utilize machine learning to classify devices connected to a network and assign them different levels of trust without the need for authentication. These zero-configuration security mechanisms work similarly to image-classifying deep neural networks and are of interest to large organizations where many devices come and go every day. However, solutions leveraging the power of machine learning also inherit its vulnerability to adversarial samples. Previous work has shown that even in query-limited black-box scenarios, the most restrictive setting for an attacker, image classifiers are vulnerable to adversarial attacks that make use of specially crafted input vectors. This study shows that known attack techniques against image classifiers can be successfully reapplied to classifiers in the domain of device fingerprints in computer networks. We provide a proof of concept that previously discovered adversarial sampling techniques are applicable in the domain of device fingerprints by attacking a well-known commercial classifier. We show that across ten different devices, on average 9.9% of the adversarial samples were misclassified by the classifier. The most prominent of these devices had 36% of its adversarial samples misclassified. These results point to the need for more sophisticated training algorithms as well as the importance of not building solutions that rely on trusting device- or user-supplied data.
Keywords: Adversarial Machine Learning, Adversarial Samples, Black-Box Attack, Device Fingerprinting, Network Packet Sniffing, Network Security, Transferability