Evaluation of Privacy-protected Federated Learning

dc.contributor.author: BESHARATI, FARZAD
dc.contributor.author: CHAN, STEFAN
dc.contributor.department: Chalmers University of Technology / Department of Computer Science and Engineering
dc.contributor.examiner: Almgren, Magnus
dc.contributor.supervisor: Schiller, Elad
dc.date.accessioned: 2023-01-23T12:09:10Z
dc.date.available: 2023-01-23T12:09:10Z
dc.date.issued: 2023
dc.date.submitted: 2023
dc.description.abstract: Federated Learning is a form of distributed machine learning where training is performed at several participants and then aggregated on a central coordinator. However, federated learning itself is not secure and cannot guarantee confidentiality for the data of participants. The protection of privacy-sensitive data can, however, be achieved through Homomorphic Encryption methods such as the Cheon-Kim-Kim-Song scheme (CKKS). In this report, we study two federated learning methods, Federated Averaging (FedAvg) and Federated Learning Based on Dynamic Regularization (FedDyn), and apply CKKS to them to study the impact of securing such methods. We implement the federated learning methods with and without CKKS and also create a centralized implementation for comparison. Due to the nature of the Homomorphic Encryption scheme, a minor alteration to FedDyn was needed to secure it in our implementation. In addition, we implement selective global invocation through accuracy criteria. Evaluating the implementations shows that securing the federated learning methods increases the running and convergence time with only a minor difference in model performance. Unsecured FedAvg and FedDyn take on average 130 and 150 seconds per global round, compared to 510 and 700 seconds for secured FedAvg and FedDyn. While model performance may not be affected to a noticeable degree, the number of global rounds to convergence is higher for the secured federated methods than for the unsecured ones. Unsecured FedAvg and FedDyn take on average 27 and 21 global rounds to reach convergence, compared to 37 and 27 for secured FedAvg and FedDyn. Overall, privacy protection through Homomorphic Encryption can be applied to federated learning methods without major alteration or overall performance penalty, but at extra memory and time costs.
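The FedAvg aggregation step described in the abstract (clients train locally, the coordinator combines the results) can be sketched as a sample-size-weighted average of client model weights. This is a minimal illustrative sketch, not the thesis's implementation; the function and variable names are my own, and encryption (CKKS) is omitted.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg coordinator step: average client models, weighted
    by each client's number of local training samples.

    client_weights: list (one entry per client) of lists of
                    np.ndarray, one array per model layer.
    client_sizes:   list of local sample counts, one per client.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Weighted sum over clients for this layer's parameters.
        layer_avg = sum(
            (n / total) * w[layer]
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Two toy clients with a single one-dimensional "layer" each.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]
print(fedavg_aggregate(clients, sizes)[0])  # [2.5 3.5]
```

In a CKKS-secured variant, the coordinator would perform the same weighted sums on ciphertexts, since the scheme supports addition and multiplication on encrypted values; only the clients would hold the decryption key.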
dc.identifier.coursecode: DATX05
dc.identifier.uri: https://odr.chalmers.se/handle/20.500.12380/305945
dc.language.iso: eng
dc.setspec.uppsok: Technology
dc.subject: Machine Learning
dc.subject: Federated Learning
dc.subject: Encryption
dc.subject: Homomorphic Encryption
dc.subject: CKKS
dc.subject: FedAvg
dc.subject: FedDyn
dc.subject: Privacy
dc.subject: Evaluation
dc.title: Evaluation of Privacy-protected Federated Learning
dc.type.degree: Master's Thesis
dc.type.uppsok: H
local.programme: Computer science – algorithms, languages and logic (MPALG), MSc

Download

Original bundle
Name: CSE 22-152 Chan Besharati.pdf
Size: 1.49 MB
Format: Adobe Portable Document Format
