Please note: This PhD seminar will take place online.
Vasisht Duddu, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor N. Asokan
Machine learning (ML) models are susceptible to various security, privacy, and fairness risks. Adversaries with different characteristics (i.e., objectives, knowledge, and capabilities) can collude, exploiting one risk to amplify others. Existing work lacks a systematic framework for exploring collusion among adversaries and for studying the implications of their characteristics.
I present a framework covering collusion (a) between train- and test-time adversaries, and (b) among test-time adversaries. The framework identifies the factors that enable collusion, and I propose a guideline for conjecturing about the potential for collusion based on these enabling factors. I use this guideline to explain prior work, conjecture about unexplored collusions, and empirically validate two such cases. Finally, I discuss how adversaries’ characteristics influence the potential for collusion.