Is This Data Point in Your Training Set? Similarity-Based Inference Attacks: Performance Evaluation of Range Membership Attacks to Audit Privacy Risks in Machine Learning Models
Type
Master's Thesis
Abstract
Membership Inference Attacks (MIAs) pose a serious threat to the privacy of machine learning (ML) models by determining whether a specific data point was used during model training. A recent and powerful variant, the Range Membership Inference Attack (RaMIA), assesses privacy risks over a range of semantically similar data points. However, its practical application is limited by high query overhead, as it requires querying the target model for every sample in the range. This thesis proposes and evaluates a novel approach that overcomes this limitation by combining range queries with group-testing principles, reducing the number of queries sent to the target model without sacrificing attack performance and thereby making the attack stealthier. Instead of testing every sample, the method first groups similar data points based on their extracted features and then queries only a small number of strategically chosen representatives.
All experiments are conducted on the CIFAR-10 dataset, comparing the proposed method against the standard RaMIA baseline. The results demonstrate that RaMIA with group testing reduces the number of queries by 84% in a setting with 50 augmentations. This work reveals that even minor enhancements in query design and decoding strategy can lead to substantial gains in auditing efficiency. Moreover, we provide practical recommendations for tuning key hyperparameters and integrate our attack into the LeakPro framework for reproducibility and broader adoption in privacy auditing of ML models.
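The grouping-and-representative idea described above can be illustrated with a minimal sketch. The helper names extract_features and query_model, and the use of k-means to form the groups, are assumptions for illustration only; they are not the thesis's implementation or part of LeakPro's API.

import numpy as np
from sklearn.cluster import KMeans

def grouped_range_query(samples, extract_features, query_model, n_groups=8):
    """Group semantically similar range samples by their features and
    query only one representative per group, instead of every sample."""
    # extract_features and query_model are hypothetical placeholders for
    # a feature extractor and a black-box access to the target model.
    features = np.stack([extract_features(x) for x in samples])

    # Cluster the range into groups of similar points.
    kmeans = KMeans(n_clusters=n_groups, n_init=10).fit(features)

    scores = np.empty(len(samples))
    for g in range(n_groups):
        members = np.where(kmeans.labels_ == g)[0]
        # Pick the member closest to the cluster centroid as representative.
        dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[g], axis=1)
        rep = members[np.argmin(dists)]
        # One query covers the whole group (the group-testing step); the
        # simplest decoding shares the representative's score with the group.
        scores[members] = query_model(samples[rep])

    return scores  # per-sample membership scores from only n_groups queries

Under this sketch, a range of 50 augmentations covered by 8 groups issues 8 queries instead of 50, which is consistent with the 84% query reduction reported above.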
Keywords
Machine Learning, Membership Inference Attacks, Range Membership Inference Attacks, Group Testing, Privacy Auditing, Decoding
