Please note: This PhD seminar will take place online.
Vasisht Duddu, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor N. Asokan
Machine learning (ML) models are susceptible to a range of risks to security, privacy, fairness, transparency, and safety. While substantial prior work explores defenses against individual risks, this is not sufficient for real-world ML models, which must protect against multiple risks simultaneously. Practitioners must therefore address the unintended interactions that emerge when protecting against multiple risks.
In this talk, I discuss two types of unintended interactions. First, I explore how defenses against one risk can inadvertently increase or decrease other, unrelated risks. I introduce a framework based on the conjecture that overfitting and memorization underlie these interactions, use it to identify two previously unexplored interactions, and empirically validate them. Second, I address conflicting interactions, which reduce the effectiveness of defenses when they are combined. I propose Def\Con, a technique for detecting such conflicts and assessing whether defenses can be effectively combined, and show that it is accurate, scalable, non-invasive, and general.