Deep Leakage in Federated Learning: Understanding Privacy Vulnerabilities

Type

Master's Thesis

Abstract

Deep Leakage (DL) attacks in Federated Learning (FL) have traditionally relied on FedSGD, yet real-world deployments commonly adopt FedAVG due to its reduced communication overhead. This study investigates the feasibility and limitations of executing DL attacks in a FedAVG setting. A custom FL framework was developed to support FedAVG and to adapt state-of-the-art DL techniques to operate on shared model weights instead of gradients. Experiments on the CIFAR-10 dataset revealed that while DL attacks are possible under FedAVG, their success diminishes as the amount of local training (batch size and number of epochs) increases, because the gradient approximations recovered from weight updates degrade. Additionally, model initialization strategies, dataset size, and image resolution significantly affect reconstruction quality. These findings highlight critical trade-offs between privacy and performance in FL systems, emphasizing the need for cautious design choices in real-world applications.
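The attack surface described above hinges on recovering an approximate gradient from a FedAVG weight update. A minimal NumPy sketch (illustrative only; the toy linear model, learning rate, and function names are assumptions, not the thesis implementation) shows why this approximation is exact after a single local step but degrades as local training grows:

```python
import numpy as np

def loss_grad(W, x, t):
    # Gradient of 0.5 * ||W @ x - t||^2 with respect to W
    return np.outer(W @ x - t, x)

def local_training(W, x, t, lr, steps):
    # Simulated FedAVG client: several local SGD steps on private data (x, t)
    for _ in range(steps):
        W = W - lr * loss_grad(W, x, t)
    return W

def approx_gradient(W_before, W_after, lr, steps):
    # Attacker-side estimate: treat the shared weight update as an averaged gradient
    return (W_before - W_after) / (lr * steps)

rng = np.random.default_rng(0)
W0 = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
x /= np.linalg.norm(x)          # normalised input keeps the toy dynamics stable
t = rng.standard_normal(3)
lr = 0.1

true_g = loss_grad(W0, x, t)    # the gradient a FedSGD attacker would observe
for steps in (1, 5, 20):
    W_T = local_training(W0, x, t, lr, steps)
    err = np.linalg.norm(approx_gradient(W0, W_T, lr, steps) - true_g)
    print(f"local steps={steps:2d}  approximation error={err:.4f}")
```

With one local step the estimate equals the true gradient exactly, since the update is a single SGD step; with more local steps the update averages gradients along a drifting trajectory, consistent with the finding that heavier local training weakens DL reconstructions.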

Subject/keywords

Federated Learning, FedAVG, Deep Leakage, Privacy Attack, Model Inversion, Gradient Approximation, Image Reconstruction, Robustness Analysis, Local Training, Data Leakage
