IBM Research | ICLR Paper: Sign-OPT: A Query-Efficient Hard-Label Adversarial Attack | Uploaded June 2020
For AI to be truly trustworthy, we also believe its integrity must be maintained. People need to feel confident that an AI system’s training and inference have not been manipulated in any way. IBM Research has been a pioneer of what we call “AI robustness,” which equips AI systems and deep neural networks (DNNs) with the ability to fight back against adversarial attacks.

This year at ICLR, a team of IBM and University of California, Los Angeles (UCLA) researchers, including myself, will present a paper that develops a new “Sign-OPT” approach for efficiently attacking a hard-label black-box model – a model whose underlying information is hidden from an attacker, which returns only its predicted label. In this work, our team found that our attack consistently required five to ten times fewer queries than current state-of-the-art approaches for generating adversarial examples.
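To make the core idea concrete, here is a minimal, illustrative Python sketch of the single-query sign estimate that drives Sign-OPT: it probes random perturbations of a search direction and uses one hard-label query per probe to tell whether the distance to the decision boundary grew or shrank. The function names, probe count, and step sizes below are placeholders for illustration, not the authors' implementation; `model_predict` stands in for whatever hard-label API the victim model exposes.

```python
import numpy as np

def is_adversarial(model_predict, x0, true_label, direction, distance):
    """One hard-label query: does stepping `distance` along `direction` flip the label?"""
    direction = direction / np.linalg.norm(direction)
    return model_predict(x0 + distance * direction) != true_label

def signopt_gradient_estimate(model_predict, x0, true_label, theta, g_theta,
                              num_probes=20, epsilon=1e-3, rng=None):
    """Estimate the gradient of g(theta), the boundary distance along direction theta,
    from the signs of its directional derivatives, one query per random probe."""
    rng = np.random.default_rng() if rng is None else rng
    grad_estimate = np.zeros_like(theta)
    for _ in range(num_probes):
        u = rng.standard_normal(theta.shape)          # random probe direction
        perturbed = theta + epsilon * u
        # If the perturbed direction still reaches an adversarial point at the
        # current boundary distance g(theta), that distance did not increase,
        # so the directional derivative along u is taken as negative.
        sign = -1.0 if is_adversarial(model_predict, x0, true_label,
                                      perturbed, g_theta) else 1.0
        grad_estimate += sign * u
    return grad_estimate / num_probes
```

In the paper, an estimate of this kind is plugged into an iterative descent on the search direction, which is why the attack needs so many fewer queries than methods that estimate boundary distances from scratch at every step.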

An adversarial attack aims to get a deployed AI system to misclassify data, undermining trust in the target model. This research was done to give “white-hat” hackers a more effective tool for testing the security of their organizations’ AI algorithms.

