Theoretical Machine Learning Seminar

Generalizable Adversarial Robustness to Unforeseen Attacks

In recent years, substantial progress has been made in enhancing the robustness of models against adversarial attacks. However, two major shortcomings remain: (i) practical defenses are often vulnerable to strong “adaptive” attack algorithms, and (ii) current defenses generalize poorly to “unforeseen” attack threat models (those not used in training).

In this talk, I will present our recent results on tackling these issues. I will first discuss the generalizability of a class of provable defenses based on randomized smoothing to various Lp and non-Lp attack models. Then, I will present adversarial attacks and defenses for a novel “perceptual” adversarial threat model. Remarkably, the defense against the perceptual threat model generalizes well to many types of unforeseen Lp and non-Lp adversarial attacks.
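For readers unfamiliar with randomized smoothing, the core idea can be sketched briefly: a smoothed classifier predicts the majority vote of a base classifier under Gaussian input noise, which yields a certified L2 robustness radius around each input. The sketch below is a minimal illustration of this standard construction (in the style of Cohen et al.), not code from the speaker's work; the toy base classifier and all function names are hypothetical, and the proper confidence-interval bound on the top-class probability is omitted for brevity.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, seed=0):
    """Majority-vote prediction of the smoothed classifier
    g(x) = argmax_c P[f(x + N(0, sigma^2 I)) = c], with a certified
    L2 radius sigma * Phi^{-1}(p_top) when p_top > 1/2.
    (Illustrative sketch; a real certificate would use a statistical
    lower bound on p_top, e.g. Clopper-Pearson, rather than p_hat.)"""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n, x.shape[0]))
    preds = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(preds, return_counts=True)
    top = int(np.argmax(counts))
    # Empirical probability of the top class, clamped away from 1.0
    # so the inverse Gaussian CDF stays finite.
    p_hat = min(counts[top] / n, 1.0 - 1e-9)
    radius = sigma * NormalDist().inv_cdf(p_hat) if p_hat > 0.5 else 0.0
    return classes[top], radius

# Toy base classifier (hypothetical): label 1 iff the first coordinate
# is positive. Far from the decision boundary, the certified radius
# should be strictly positive.
f = lambda z: int(z[0] > 0)
label, radius = smoothed_predict(f, np.array([1.0, 0.0]))
```

The talk's first topic concerns how far such Gaussian-based certificates extend beyond the L2 setting to other Lp and non-Lp threat models.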

This talk is based on joint work with Alex Levine, Sahil Singla, Cassidy Laidlaw, Aounon Kumar and Tom Goldstein.

Date & Time

June 23, 2020 | 12:30pm – 1:45pm

Location

Remote Access Only - see link below

Speakers

Soheil Feizi

Affiliation

University of Maryland

Notes

We welcome broad participation in our seminar series. To receive login details, interested participants will need to fill out a registration form accessible from the link below. Upcoming seminars in this series can be found here.

Register Here