Seminar • Cryptography, Security, and Privacy (CrySP) • On the Security of Distributed Machine Learning

Monday, October 20, 2025 2:00 pm - 3:00 pm EDT (GMT -04:00)

Please note: This seminar will take place in DC 1304.

Ghassan Karame
Professor of Computer Science
Chair for Information Security
Ruhr-University Bochum


Distributed and decentralized platforms support transparency and enable open access and participation. Decentralization is expected to stimulate innovation and to positively impact the digital experience of many enterprises around the globe, e.g., in applications for payments, machine learning, and social networks.

In this talk, I will explore how decentralization influences the security of two prominent emerging decentralized ML platforms: Federated Learning and Decentralized Machine Learning. We start by analyzing the impact of training hyperparameters on the effectiveness of backdoor attacks and defenses in horizontal federated learning (HFL). More specifically, we show both analytically and by means of measurements that the choice of hyperparameters by benign clients not only influences model accuracy but also significantly impacts backdoor attack success. This stands in sharp contrast with the multitude of contributions in the area of HFL security, which often rely on custom, ad-hoc hyperparameter choices for benign clients, leading to more pronounced backdoor attack strength and a diminished impact of defenses. Our results indicate that properly tuning benign clients' hyperparameters, such as the learning rate, batch size, and number of local epochs, can significantly curb the effectiveness of backdoor attacks, regardless of the malicious clients' settings.
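
To make the setting concrete, the following minimal sketch (an illustrative toy example, not the speaker's implementation; the model, synthetic data, and hyperparameter values are assumptions) shows a single FedAvg round in which the benign clients' learning rate, batch size, and number of local epochs are the explicit knobs of local training that any backdoored update must survive after averaging.

# Toy FedAvg round with explicit benign-client hyperparameters (hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr, batch_size, epochs):
    """Plain mini-batch SGD on a linear regression loss; returns updated weights."""
    w = w.copy()
    n = len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

# Global model and per-client synthetic data.
d, n_clients = 5, 4
w_global = np.zeros(d)
data = [(rng.normal(size=(64, d)), rng.normal(size=64)) for _ in range(n_clients)]

# Benign clients' hyperparameters: the knobs the talk argues already act as an
# implicit defense when tuned properly (values here are arbitrary).
benign_hp = dict(lr=0.05, batch_size=16, epochs=2)

# One federated round: each client trains locally, the server averages (FedAvg).
# A malicious client would return a crafted update here; how much of its backdoor
# persists in w_global depends in part on the benign clients' settings above.
updates = [local_sgd(w_global, X, y, **benign_hp) for X, y in data]
w_global = np.mean(updates, axis=0)
print("global model after one round:", w_global)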

In our second work, we explore, for the first time, the robustness of distributed ML models that are fully heterogeneous in training data, architecture, scheduler, optimizer, and other model parameters. Supported by theory and extensive experimental validation using CIFAR10 and FashionMNIST, we show that such properly distributed ML instantiations achieve across-the-board improvements in accuracy-robustness trade-offs against state-of-the-art transfer-based attacks, improvements that cannot be realized by current ensemble or federated learning instantiations. For instance, our experiments on CIFAR10 show that for the Common Weakness attack, one of the most powerful state-of-the-art transfer-based attacks, our method improves robust accuracy by up to 40%, with a minimal impact on clean task accuracy.
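
As a rough illustration of what "fully heterogeneous" means, the sketch below (a hypothetical scikit-learn example; the dataset, model families, and averaging rule are assumptions, not the paper's setup) trains each node on its own data shard with a different architecture and optimizer, then aggregates predictions by averaging class probabilities; a transfer-based attacker would have to craft adversarial examples on surrogate models that fool this heterogeneous aggregate.

# Hypothetical heterogeneous distributed instantiation (not the paper's code).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Heterogeneity: disjoint data shards and different model families/optimizers per node.
shards = np.array_split(np.arange(len(X_train)), 3)
nodes = [
    LogisticRegression(max_iter=2000),           # linear model
    RandomForestClassifier(n_estimators=100),    # tree ensemble
    MLPClassifier(solver="adam", max_iter=500),  # small neural net with Adam
]
for model, idx in zip(nodes, shards):
    model.fit(X_train[idx], y_train[idx])

# Aggregate by averaging per-class probabilities across nodes; a transfer attack
# would be evaluated against this aggregated prediction.
probs = np.mean([m.predict_proba(X_test) for m in nodes], axis=0)
accuracy = (probs.argmax(axis=1) == y_test).mean()
print(f"clean accuracy of the heterogeneous aggregate: {accuracy:.3f}")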


Bio: Since November 2021, Ghassan has been a full Professor of Computer Science at the Ruhr-University Bochum (RUB), where he leads the Chair for Information Security. He is a Principal Investigator (PI) in the Cluster of Excellence CASA (Cyber Security in the Age of Large-Scale Adversaries) and, since October 2023, the Director (and a PI) of the Horst Goertz Institute for IT Security (HGI). Before joining RUB, he worked as an NEC Fellow and led the Security research group at NEC Labs in Germany. Prior to joining NEC Labs, he was a postdoctoral researcher at the Institute of Information Security at ETH Zurich, Switzerland.

He has held a PhD degree in Computer Science from ETH Zurich since 2011. He is interested in all aspects of security and privacy, with a focus on decentralized security and distributed machine learning security.