Evaluating the Performance of Federated Learning: A Case Study of Distributed Machine Learning with Erlang
Type
Master's thesis
Programme
Computer science – algorithms, languages and logic (MPALG), MSc
Published
2018
Authors
Nilsson, Adrian
Smith, Simon
Abstract
An alternative environment for distributed machine learning has recently been proposed under the name Federated Learning. In Federated Learning, a global model is learnt by aggregating models that have been optimised locally on the same distributed clients that generate the training data. Contrary to centralised optimisation, the clients in a Federated Learning setting can be very large in number and are characterised by challenges of data and network heterogeneity. Examples of clients include smartphones and connected vehicles, which highlights the practical relevance of this approach to distributed machine learning. We compare three algorithms for Federated Learning and benchmark their performance against a centralised approach in which all data resides on the server. The algorithms covered are Federated Averaging (FedAvg), Federated Stochastic Variance Reduced Gradient (FSVRG), and CO-OP. They are evaluated on the MNIST dataset using both IID and non-IID partitionings of the data. Our results show that, among the three federated algorithms, FedAvg trains the model with the highest accuracy regardless of how the data is partitioned. Our comparison between FedAvg and centralised learning shows that they are practically equivalent when IID data is used, but that the centralised approach outperforms FedAvg with non-IID data. We recommend FedAvg over FSVRG and see practical benefits in an asynchronous algorithm such as CO-OP.
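As context for the abstract, the following is a minimal Python sketch of the server-side aggregation step that FedAvg uses to combine locally optimised client models into a global model: a weighted average, with each client weighted by its number of local training examples (McMahan et al., 2017). The thesis itself studies Erlang implementations; the function name, array values, and dataset sizes below are illustrative assumptions, not taken from the thesis.

import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    # FedAvg server step: new global weights are the average of the
    # client weights, weighted by each client's local dataset size.
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Hypothetical usage: three clients, each holding a 2-parameter model.
clients = [np.array([0.1, 0.9]), np.array([0.3, 0.7]), np.array([0.2, 0.8])]
sizes = [100, 200, 50]  # local dataset sizes (illustrative)
global_model = fedavg_aggregate(clients, sizes)
print(global_model)  # broadcast to clients for the next training round

In a full training loop, this aggregation alternates with rounds of local stochastic gradient descent on each selected client, which is what distinguishes FedAvg from centralised optimisation over pooled data.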
Subject/keywords
Computer and Information Science