Please note: This master’s thesis presentation will be given online.
Xinda Li, Master’s candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Florian Kerschbaum
Federated Learning (FL) allows multiple participants to collaboratively train a deep learning model without sharing their private training data. However, due to its distributed nature, FL is vulnerable to various poisoning attacks: an adversary can submit malicious model updates that aim to degrade the joint model's utility. In this work, we formulate the adversary's goal as an optimization problem and present an effective model poisoning attack based on projected gradient descent. Our empirical results show that our attack degrades the global model's accuracy more than previous attacks.
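For illustration, below is a minimal NumPy sketch of one way a PGD-style poisoning attack can be instantiated. It assumes a logistic-regression global model and an adversary who maximizes the loss on clean data by gradient ascent while projecting the malicious update onto an L2 ball around a benign reference update (a common stealth constraint); the thesis's exact objective and constraint set may differ, and pgd_poisoned_update, benign_update, eps, and the other names are illustrative placeholders.

```python
# Sketch of a PGD-style model poisoning attack on a logistic-regression
# "global model". The adversary ascends the loss on clean data and projects
# the update onto an L2 ball of radius eps around a benign reference update
# (an assumed stealth constraint, not necessarily the thesis's).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_poisoned_update(w_global, benign_update, X, y, eps, lr=0.5, steps=100):
    """Craft a malicious update via projected gradient *ascent* on the loss."""
    update = benign_update.copy()
    for _ in range(steps):
        w = w_global + update                    # model after applying the update
        p = sigmoid(X @ w)
        grad_w = X.T @ (p - y) / len(y)          # gradient of cross-entropy loss
        update += lr * grad_w                    # ascend: degrade utility
        delta = update - benign_update           # project back onto the L2 ball
        norm = np.linalg.norm(delta)
        if norm > eps:
            update = benign_update + delta * (eps / norm)
    return update

# Toy usage: the poisoned update should raise the loss relative to benign.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (sigmoid(X @ w_true) > 0.5).astype(float)
w_global = np.zeros(10)
benign = 0.1 * w_true                            # stand-in for an honest update
malicious = pgd_poisoned_update(w_global, benign, X, y, eps=0.5)
```

Projecting back onto the ball after every ascent step is what keeps the crafted update close enough to honest behavior to slip past simple norm-based sanity checks.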
Motivated by this, we design a robust defense algorithm that mitigates existing poisoning attacks. Our defense leverages constrained k-means clustering and a small validation dataset that lets the server select the optimal updates in each FL round. We conduct experiments on three non-IID image classification datasets and demonstrate the robustness of our defense algorithm under various FL settings.
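The server-side selection step might look like the sketch below. Plain Lloyd's k-means stands in here for the constrained k-means used in the thesis (which additionally bounds cluster sizes), and select_updates, validate_fn, and the two-cluster choice are illustrative assumptions: the server clusters the clients' flattened updates and keeps the cluster whose averaged update scores best on the small validation set.

```python
# Sketch of validation-guided update selection at the server. Ordinary
# k-means is a simplified stand-in for constrained k-means; validate_fn is
# an assumed callback that scores a candidate model (e.g. validation accuracy).
import numpy as np

def kmeans(points, k=2, iters=50, seed=0):
    """Ordinary Lloyd's k-means over the clients' flattened updates."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):              # skip empty clusters
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def select_updates(updates, w_global, validate_fn, k=2):
    """Keep the cluster whose mean update yields the best validation score."""
    labels = kmeans(updates, k=k)
    best_score, best_update = -np.inf, None
    for j in range(k):
        members = updates[labels == j]
        if len(members) == 0:
            continue
        candidate = members.mean(axis=0)
        score = validate_fn(w_global + candidate)
        if score > best_score:
            best_score, best_update = score, candidate
    return best_update
```

Clustering first limits how far a group of colluding malicious clients can pull the chosen aggregate, while the validation score decides in favor of the cluster that actually helps the joint model.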