Seminar • Machine Learning — New Advances in (Adversarially) Robust and Secure Machine Learning

Monday, February 8, 2021 12:00 PM EST

Please note: This seminar will be given online.

Hongyang Zhang, Postdoctoral Fellow
Toyota Technological Institute at Chicago

Deep learning models are often vulnerable to adversarial examples. In this talk, we will focus on the robustness and security of machine learning against adversarial examples. Defenses against such attacks fall into two categories: 1) empirical and 2) certified adversarial robustness.

In the first part of the talk, we will present the foundation of TRADES, our winning system in the NeurIPS'18 Adversarial Vision Challenge, in which we placed 1st out of 400 teams and 3,000 submissions. Our study is motivated by an intrinsic trade-off between robustness and accuracy: we provide a differentiable and tight surrogate loss that captures this trade-off using the theory of classification-calibrated losses. TRADES has achieved record-breaking performance on various standard benchmarks and challenges, including the adversarial benchmark RobustBench, the NLP benchmark GLUE, and the Unrestricted Adversarial Examples Challenge hosted by Google, and has motivated many new attack methods evaluated on our TRADES benchmark.
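(For readers unfamiliar with TRADES, the objective described above — a natural accuracy term plus a robustness regularizer that pushes the decision boundary away from the data — can be sketched roughly as follows. This is an illustrative, unofficial PyTorch sketch, not the speaker's released code; the function name trades_loss and all hyperparameter values are assumptions.)

import torch
import torch.nn.functional as F

def trades_loss(model, x, y, epsilon=8/255, step_size=2/255, num_steps=10, beta=6.0):
    """Rough sketch of a TRADES-style objective:
    cross-entropy on natural inputs + beta * KL(f(x) || f(x_adv)),
    where x_adv maximizes the KL term within an L_inf ball of radius epsilon."""
    model.eval()
    # Start from a small random perturbation around the natural input.
    x_adv = x.detach() + 0.001 * torch.randn_like(x)
    p_natural = F.softmax(model(x), dim=1).detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss_kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                           p_natural, reduction='batchmean')
        grad = torch.autograd.grad(loss_kl, x_adv)[0]
        # PGD step: maximize the KL term, then project back into the epsilon ball.
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)  # assumes inputs scaled to [0, 1]
    model.train()
    logits = model(x)
    loss_natural = F.cross_entropy(logits, y)            # accuracy term
    loss_robust = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                           F.softmax(logits, dim=1), reduction='batchmean')
    return loss_natural + beta * loss_robust              # beta controls the trade-off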

In the second part of the talk, to equip empirical robustness with certification, we study certified adversarial robustness via random smoothing in the L_infty threat model. On one hand, we show that random smoothing applied to the TRADES-trained classifier achieves state-of-the-art certified robustness when the L_infty perturbation radius is small. On the other hand, when the perturbation is large, i.e., when the radius does not shrink with the inverse of the input dimension, we show that random smoothing is provably unable to certify L_infty robustness for any random noise distribution. The intuition behind our theory reveals an intrinsic difficulty of achieving certified robustness by “random noise based methods,” and inspires new directions as potential future work.
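(As background on random smoothing, the following is a rough, unofficial Python sketch of Monte Carlo certification with Gaussian noise in the style of Cohen et al.; the L_infty radius here is obtained by the naive L_2-to-L_infty conversion, dividing by sqrt(d), which illustrates why smoothing-based L_infty certificates typically shrink with the input dimension — the dimension dependence the negative result above concerns. Function names, the Hoeffding confidence bound, and parameter values are illustrative assumptions, not the speaker's method.)

import numpy as np
from scipy.stats import norm

def certify_smoothed(f, x, sigma=0.25, n=1000, alpha=0.001, input_dim=3 * 32 * 32):
    """Monte Carlo sketch of Gaussian randomized smoothing.
    f: function mapping a single input array to a predicted class label.
    Returns (predicted class or None, certified L2 radius, naive L_inf radius)."""
    counts = {}
    for _ in range(n):
        noisy = x + sigma * np.random.randn(*x.shape)
        c = int(f(noisy))
        counts[c] = counts.get(c, 0) + 1
    top_class, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Hoeffding lower confidence bound on the top-class probability (crude but simple).
    p_lower = top_count / n - np.sqrt(np.log(1.0 / alpha) / (2.0 * n))
    if p_lower <= 0.5:
        return None, 0.0, 0.0  # abstain: cannot certify at this confidence level
    r_l2 = sigma * norm.ppf(p_lower)      # certified L2 radius for the smoothed classifier
    r_linf = r_l2 / np.sqrt(input_dim)    # naive L_inf radius: shrinks as the dimension grows
    return top_class, r_l2, r_linf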


Bio: Hongyang Zhang is a postdoctoral fellow at the Toyota Technological Institute at Chicago, hosted by Avrim Blum and Greg Shakhnarovich. He obtained his Ph.D. from the Machine Learning Department at Carnegie Mellon University in 2019, advised by Maria-Florina Balcan and David P. Woodruff.

His research interests lie at the intersection of the theory and practice of machine learning, robustness, and AI security. His methods won first place or ranked near the top in various competitions, including the NeurIPS'18 Adversarial Vision Challenge (all three tracks), the Unrestricted Adversarial Examples Challenge hosted by Google, and the NeurIPS'20 Challenge on Predicting Generalization in Deep Learning. He also authored a book on machine learning and computer vision in 2017.


To join this seminar on Zoom, please go to https://zoom.us/j/99807420032?pwd=YVBhWkxpOEJlYkZpeVdLOXNOcHBnZz09

Location
Online seminar
200 University Avenue West
Waterloo, ON N2L 3G1
Canada