IBM Research | How to make AI hack-proof @ibmresearch | Uploaded December 2021 | Updated October 2024.
Recently, we’ve seen an explosion of AI datasets and models that impact millions of people around the world each day, in ways big and small. During development of an AI model, conditions are carefully controlled to obtain the best possible performance. But in the real world, where models are ultimately deployed, conditions are rarely perfect and risks are abundant. Our research on adversarial robustness seeks out soft spots in popular machine learning techniques, simulating new attacks and developing mitigations, in order to design more robust models and algorithms. Learn more about the field of secure AI research here:
research.ibm.com/blog/securing-ai-workflows-with-adversarial-robustness
and explore the open source adversarial robustness toolkit below: art360.mybluemix.net
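One of the simplest attacks studied in this line of research, and one implemented in the Adversarial Robustness Toolbox, is the Fast Gradient Sign Method (FGSM): nudge every input feature a small step in the direction that increases the model's loss. The sketch below illustrates the idea on a toy linear classifier using only NumPy; the model, weights, and inputs are hypothetical examples, not part of the toolkit's API.

```python
import numpy as np

# Toy linear classifier: score(x) = w . x + b, label y in {-1, +1}.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def loss_grad_wrt_input(w, x, b, y):
    # Gradient of the logistic loss log(1 + exp(-y * score)) w.r.t. the input x.
    z = y * (w @ x + b)
    return -y * w / (1.0 + np.exp(z))

def fgsm(x, grad, eps):
    # FGSM: take a step of size eps in the sign direction of the loss gradient.
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.3, 0.2])  # correctly classified input (score > 0)
y = 1                            # true label

g = loss_grad_wrt_input(w, x, b, y)
x_adv = fgsm(x, g, eps=0.1)

# The perturbed input earns a lower score for the true class,
# even though it differs from x by at most 0.1 per feature.
print(w @ x + b, w @ x_adv + b)
```

Because the perturbation is bounded per feature, the adversarial input can look nearly identical to the original while still degrading the model's confidence, which is why defenses such as adversarial training retrain models on exactly these kinds of perturbed examples.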

