Vasisht Duddu, Sebastian Szyller and N. Asokan receive Distinguished Paper Award at 45th IEEE Symposium on Security and Privacy

Friday, June 21, 2024

PhD candidate Vasisht Duddu, Intel Labs research scientist Sebastian Szyller, and Professor N. Asokan have been honoured with a Distinguished Paper Award for their work titled “SoK: Unintended Interactions among Machine Learning Defenses and Risks.” Their paper was presented at the 45th IEEE Symposium on Security and Privacy, the premier forum for showcasing developments in computer security and electronic privacy.

“Congratulations to Vasisht, Asokan, and their colleague Sebastian on receiving a distinguished paper award,” said Raouf Boutaba, University Professor and Director of the Cheriton School of Computer Science. “Although considerable research has been conducted on security and privacy risks in machine learning models, further work is needed to understand how specific defences interact with other risks. Their award-winning systematization of knowledge paper offers a framework to identify and explain interactions between defences and risks, allowing them to conjecture about unintended interactions.”


L to R: Professor N. Asokan and PhD candidate Vasisht Duddu. Sebastian Szyller was unavailable for the photo.

Vasisht Duddu is pursuing a PhD at the Cheriton School of Computer Science. His research focuses on risks to security, privacy, fairness, and transparency in machine learning models. He designs attacks that exploit these risks, and defences that counter them, to better understand their interplay. He also works on ensuring accountability in machine learning pipelines to meet regulatory requirements.

N. Asokan is a Professor of Computer Science at Waterloo where he holds a David R. Cheriton Chair and serves as the Executive Director of the Waterloo Cybersecurity and Privacy Institute. Asokan’s primary research theme is systems security broadly, including topics like developing and using novel platform security features, applying cryptographic techniques to design secure protocols for distributed systems, applying machine learning techniques to security and privacy problems, and understanding and addressing the security and privacy of machine learning applications themselves.

Sebastian Szyller is a research scientist at Intel Labs. He works on various aspects of the security and privacy of machine learning. Recently, he has been working on model extraction attacks and defences, membership inference, and differential privacy. More broadly, he is interested in ways to protect machine learning models and data to enable robust and privacy-preserving analysis, in terms of both technical details and regulatory compliance. Sebastian was a visiting PhD student at Waterloo during fall 2022.

More about this award-winning research

Machine learning models are vulnerable to a variety of risks to their security, privacy and fairness. Although various defences have been proposed to protect against these risks individually, a defence that is effective against one risk may inadvertently increase susceptibility to others. Adversarial training to increase robustness, for example, also increases vulnerability to membership inference attacks.

Predicting such unintended interactions is challenging. A unified framework that clarifies the relationship between defences and risks can help researchers identify unexplored interactions and design algorithms with better trade-offs, and can help practitioners account for such interactions before deployment. Earlier work, however, was limited to studying a specific risk, defence or interaction rather than systematically examining their underlying causes. A comprehensive framework spanning multiple defences and risks to systematically identify potential unintended interactions was absent.

The research team addressed this gap by systematically examining various unintended interactions across multiple defences and risks. They hypothesized that overfitting and memorization of training data are the potential causes underlying these unintended interactions. An effective defence may induce, reduce or depend on overfitting or memorization, which in turn affects the model’s susceptibility to other risks. 
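The link between overfitting and membership inference can be made concrete with a small sketch. The code below is an illustration of this general idea, not the authors' method: it trains an unregularized logistic regression in a regime with more features than training points (so the model can memorize its training set), then runs a simple loss-threshold membership inference attack in the spirit of prior work by Yeom et al. All data, dimensions, and the threshold choice are illustrative assumptions; in particular, the threshold is chosen with knowledge of both loss distributions purely to expose the train/test loss gap that overfitting creates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: more features (d) than training points (n_train), so an
# unregularized model can interpolate -- i.e. memorize -- its training set.
d, n_train, n_test = 100, 30, 200
X_train = rng.normal(size=(n_train, d))
y_train = rng.integers(0, 2, n_train)
X_test = rng.normal(size=(n_test, d))
y_test = rng.integers(0, 2, n_test)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Plain logistic regression trained by gradient descent, no regularization,
# run long enough to drive the training loss near zero (overfitting).
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X_train @ w)
    w -= 0.5 * X_train.T @ (p - y_train) / n_train

def per_example_loss(X, y):
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

train_loss = per_example_loss(X_train, y_train)   # members
test_loss = per_example_loss(X_test, y_test)      # non-members

# Loss-threshold membership inference: guess "member" when the per-example
# loss falls below a threshold. Here the threshold sits between the two
# loss means, purely to illustrate the gap an overfit model leaks.
threshold = 0.5 * (train_loss.mean() + test_loss.mean())
tpr = (train_loss <= threshold).mean()   # members correctly flagged
fpr = (test_loss <= threshold).mean()    # non-members wrongly flagged
attack_accuracy = 0.5 * (tpr + (1 - fpr))
print(f"attack accuracy: {attack_accuracy:.2f}")  # typically well above 0.5
```

Because the model memorizes its training points, their losses are far lower than those of unseen points, and the attacker distinguishes members from non-members much better than a coin flip. A defence that increases memorization would widen this gap; one that reduces it (for example, strong regularization) would narrow it.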

Their study identified several factors — such as the characteristics of the training dataset, the objective function, and the model — that collectively influence a model’s propensity to overfit or memorize. These factors provide insight into a model’s susceptibility to different risks when a defence is employed.

Key contributions of the study

  1. Developed the first systematic framework for understanding unintended interactions through their underlying causes and the factors that influence them
  2. Conducted a comprehensive literature survey to identify known unintended interactions, situated them within the framework, and provided a guideline for using the framework to hypothesize about unintended interactions
  3. Identified previously unexplored unintended interactions for future research, used the framework to hypothesize two such interactions, and validated both empirically

For further details about this award-winning research, please see the paper: Vasisht Duddu, Sebastian Szyller, N. Asokan. SoK: Unintended Interactions among Machine Learning Defenses and Risks, 2024 IEEE Symposium on Security and Privacy, San Francisco, CA, 2024.
