Microsoft Research | On the Adversarial Robustness of Deep Learning | Uploaded December 2022 | Updated October 2024
Research Talk
Jun Zhu, Tsinghua University 

Although deep learning methods have made significant progress on many tasks, it is widely recognized that current methods are vulnerable to adversarial noise. This weakness poses serious risks to safety-critical applications. In this talk, I will present recent progress on adversarial attack and defense for deep learning, including theory, algorithms, and benchmarks.
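As a concrete illustration of the adversarial noise mentioned in the abstract, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard attack in this literature. It is not necessarily the method covered in the talk; the model, epsilon value, and data are placeholders.

```python
# Minimal FGSM sketch (illustrative only; model, epsilon, and data are placeholders).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x to increase the loss, within an L-infinity ball of radius epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Step in the direction of the sign of the loss gradient, then clip to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy example: a random "image" batch and an untrained linear classifier.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
```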

Learn more about the Responsible AI Workshop: microsoft.com/en-us/research/event/responsible-ai-an-interdisciplinary-approach-workshop

This workshop was part of the Microsoft Research Summit 2022: microsoft.com/en-us/research/event/microsoft-research-summit-2022

