Microsoft Research | On the Adversarial Robustness of Deep Learning | @MicrosoftResearch | Uploaded December 2022 | Updated October 2024
Research Talk
Jun Zhu, Tsinghua University
Although deep learning methods have made significant progress on many tasks, it is widely recognized that current methods are vulnerable to adversarial noise. This weakness poses serious risks to safety-critical applications. In this talk, I will present some recent progress on adversarial attack and defense for deep learning, including theory, algorithms, and benchmarks.
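As a minimal illustration of the adversarial vulnerability the abstract refers to, the sketch below applies the fast gradient sign method (FGSM, one of the simplest adversarial attacks) to a toy logistic-regression model. All weights, inputs, and the perturbation size here are made-up illustrative values, and the talk itself covers far more than this one attack.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One-step FGSM: move x by eps in the sign of the input gradient
    of the logistic loss. For logistic regression, dL/dx = (p - y) * w."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + eps * si for xi, si in zip(x, sign)]

# Toy model and a correctly classified input (all values are illustrative).
w, b = [2.0, -3.0], 0.5
x, y = [1.0, 0.0], 1

x_adv = fgsm(w, b, x, y, eps=2.0)
print(predict(w, b, x))      # high probability: correct prediction
print(predict(w, b, x_adv))  # low probability: prediction flipped
```

On a deep network the same idea uses backpropagation to get the input gradient, and the perturbation can be small enough to be imperceptible while still flipping the prediction.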
Learn more about the Responsible AI Workshop: microsoft.com/en-us/research/event/responsible-ai-an-interdisciplinary-approach-workshop
This workshop was part of the Microsoft Research Summit 2022: microsoft.com/en-us/research/event/microsoft-research-summit-2022