Master’s Thesis Presentation • Machine Learning — Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness

Thursday, August 6, 2020 11:00 AM EDT

Please note: This master’s thesis presentation will be given online.

Ahmadreza Jeddi, Master’s candidate
David R. Cheriton School of Computer Science

Deep neural networks have achieved state-of-the-art performance across a wide variety of applications, and owing to this performance they are being deployed in safety- and security-critical systems. In recent years, however, deep neural networks have been shown to be very vulnerable to optimally crafted input samples called adversarial examples. Although adversarial perturbations are imperceptible to humans, especially in the domain of computer vision, they are remarkably successful at fooling strong deep models. This vulnerability limits the widespread deployment of deep models in safety-critical applications, and as a result adversarial attack and defense algorithms have drawn great attention in the literature.

Many defense algorithms have been proposed to counter the threat of adversarial attacks, and many of them rely on adversarial training, i.e., adding perturbations during the training stage (a minimal sketch of this idea follows). Alongside other defense approaches under investigation, there has been recent interest in improving adversarial robustness in deep neural networks by introducing perturbations during the training process. However, such methods use fixed, pre-defined perturbations and require significant hyper-parameter tuning, which makes them difficult to apply in a general fashion.
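To make the parenthetical concrete, here is a minimal PyTorch sketch of the classic training-time perturbation, FGSM-style adversarial example generation; the model, the cross-entropy loss, and the epsilon value are illustrative assumptions, not the specific configurations studied in this thesis.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Craft an FGSM adversarial example by stepping along the sign of the
    input gradient. `epsilon` and the loss are illustrative placeholders,
    not the exact setup evaluated in the thesis."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel in the direction that increases the loss,
    # then clamp back to the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Adversarial training then feeds such perturbed samples back into the ordinary training loop in place of, or alongside, the clean inputs; note that the perturbation budget epsilon is exactly the kind of fixed, pre-defined hyper-parameter the paragraph above argues against.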

In this work, we introduce Learn2Perturb, an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks. More specifically, we introduce novel perturbation-injection modules, incorporated at each layer, that perturb the feature space and increase uncertainty in the network. This feature perturbation is performed at both the training and the inference stages. Furthermore, inspired by the Expectation-Maximization approach, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the proposed Learn2Perturb method can produce deep neural networks that are 4–7% more robust under L∞ FGSM and PGD adversarial attacks, and that significantly outperform the state of the art against the L2 C&W attack and a wide range of well-known black-box attacks.
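As a rough illustration of the two ingredients described above, the following PyTorch sketch pairs a learnable noise-injection layer with an alternating update schedule. The module name, the Gaussian parameterization with a learnable per-channel scale, and the even/odd step schedule are assumptions made for illustration; the thesis's actual update rules, including any regularization on the noise parameters, are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbInject(nn.Module):
    """Hypothetical perturbation-injection module: adds zero-mean Gaussian
    noise with a learnable per-channel scale to a feature map. Kept active
    at both training and inference time, as described above."""

    def __init__(self, channels):
        super().__init__()
        self.sigma = nn.Parameter(torch.full((1, channels, 1, 1), 0.1))

    def forward(self, x):
        return x + torch.randn_like(x) * self.sigma.abs()

def train_step(model, x, y, opt_weights, opt_noise, step):
    """EM-inspired alternating update: optimize the network weights on even
    steps and the noise parameters on odd steps, rather than jointly."""
    loss = F.cross_entropy(model(x), y)
    opt_weights.zero_grad()
    opt_noise.zero_grad()
    loss.backward()
    (opt_weights if step % 2 == 0 else opt_noise).step()
    return loss.item()

# The two optimizers operate on disjoint parameter groups, e.g.:
#   noise_params  = [p for n, p in model.named_parameters() if "sigma" in n]
#   weight_params = [p for n, p in model.named_parameters() if "sigma" not in n]
#   opt_weights = torch.optim.SGD(weight_params, lr=0.1, momentum=0.9)
#   opt_noise   = torch.optim.SGD(noise_params, lr=0.01)
```

Splitting the parameters into two optimizers is one simple way to realize the "train the network and noise parameters consecutively" schedule: each backward pass computes gradients for everything, but only one group is stepped at a time.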

Location
Online presentation
