Is This Data Point in Your Training Set? Similarity-based Inference Attacks: Performance Evaluation of Range Membership Attacks to Audit Privacy Risks in Machine Learning Models

dc.contributor.author: Nawrin Nova, Sifat
dc.contributor.author: Shubham, Saha
dc.contributor.department: Chalmers tekniska högskola / Institutionen för data och informationsteknik (sv)
dc.contributor.department: Chalmers University of Technology / Department of Computer Science and Engineering (en)
dc.contributor.examiner: Rhouma, Rhouma
dc.contributor.supervisor: Duvignau, Romaric
dc.date.accessioned: 2025-12-03T14:59:43Z
dc.date.issued: 2025
dc.date.submitted:
dc.description.abstract: Membership Inference Attacks (MIAs) pose a serious threat to the privacy of machine learning (ML) models by determining whether a specific data point was used during model training. A recent and powerful variant, the Range Membership Inference Attack (RaMIA), assesses privacy risk over a range of semantically similar data points. Its practical application is limited, however, by a high query overhead, since it requires querying the target model for every sample in the range. This thesis proposes and evaluates a novel approach that overcomes this limitation by combining range queries with group-testing principles, reducing the number of queries sent to the target model without sacrificing attack performance and thereby making the attack stealthier. Instead of testing every sample, the method first groups similar data points based on their extracted features and then queries only a small number of strategically chosen representatives. All experiments are conducted on the CIFAR-10 dataset, and the method's performance is compared against the standard RaMIA baseline. The results demonstrate that RaMIA with group testing reduces the number of queries by 84% in a setting with 50 augmentations. This work reveals that even minor enhancements in query design and decoding strategy can lead to substantial gains in auditing. Moreover, we provide practical recommendations for tuning key hyperparameters and integrate our attack into the LeakPro framework for reproducibility and broader adoption in privacy auditing of ML models.
dc.identifier.coursecode: DATX05
dc.identifier.uri: http://hdl.handle.net/20.500.12380/310799
dc.language.iso: eng
dc.setspec.uppsok: Technology
dc.subject: Machine Learning
dc.subject: Membership Inference Attacks
dc.subject: Range Membership Inference Attacks
dc.subject: Group Testing
dc.subject: Privacy Auditing
dc.subject: Decoding
dc.title: Is This Data Point in Your Training Set? Similarity-based Inference Attacks: Performance Evaluation of Range Membership Attacks to Audit Privacy Risks in Machine Learning Models
dc.type.degree: Examensarbete för masterexamen (sv)
dc.type.degree: Master's Thesis (en)
dc.type.uppsok: H
local.programme: Computer systems and networks (MPCSN), MSc
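
The approach summarized in the abstract above (group similar augmentations of a candidate point by their extracted features, then query the target model only on a few representatives) can be illustrated with a short Python sketch. This is not the thesis implementation, which is integrated into the LeakPro framework; the helper names extract_features and query_target_model, the KMeans-based grouping, and the max-score aggregation are all illustrative assumptions rather than the group-testing decoding described in the thesis.

    # Illustrative sketch only: reduce the per-range query cost of a RaMIA-style
    # attack by querying representatives of feature-space clusters instead of
    # every augmented sample. Names and aggregation are assumptions, not the
    # thesis method.
    import numpy as np
    from sklearn.cluster import KMeans

    def range_membership_score(augmentations, extract_features, query_target_model,
                               n_groups=8):
        """Score one range (a set of augmented versions of one candidate point).

        augmentations      : sequence of augmented inputs forming the range
        extract_features   : callable mapping the inputs to a 2-D feature matrix
        query_target_model : callable returning a per-sample membership signal
                             (e.g. a loss- or confidence-based score)
        n_groups           : query budget, i.e. number of representatives
        """
        # 1. Group semantically similar augmentations in feature space.
        features = np.asarray(extract_features(augmentations))
        n_groups = min(n_groups, len(features))
        labels = KMeans(n_clusters=n_groups, n_init=10,
                        random_state=0).fit_predict(features)

        # 2. Query only one representative per group: the sample closest to
        #    its group centroid.
        rep_scores = []
        for g in range(n_groups):
            idx = np.flatnonzero(labels == g)
            centroid = features[idx].mean(axis=0)
            rep = idx[np.argmin(np.linalg.norm(features[idx] - centroid, axis=1))]
            rep_scores.append(query_target_model(augmentations[rep]))

        # 3. Aggregate representative scores into one range-level score
        #    (a simple max here; the thesis instead decodes group-test results).
        return float(np.max(rep_scores))

Purely as an illustration of the query budget: querying 8 representatives out of 50 augmentations uses 16% of the baseline queries, which matches the 84% query reduction reported in the abstract; whether the thesis picks representatives this way is an assumption of this sketch.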

Download

Original bundle
Name: CSE 25-166 SS SN.pdf
Size: 4.05 MB
Format: Adobe Portable Document Format
