IBM’s Adversarial Robustness Toolbox, an open-source AI security library, was released in April. Since then, developers have found some interesting uses for the tool.
In April, IBM launched an open-source library to help secure artificial intelligence systems.
They call it the Adversarial Robustness Toolbox (ART), and it is designed to help developers better protect AI systems and neural networks. The framework-agnostic library contains attacks, defenses, and benchmarks.
What are three key ways developers can leverage this new tool for improved AI security?

1. A Repository for Information
IBM wants ART to become the go-to source for AI developers. In order to foster collaboration and learning, they made it open-source.
But they also voiced concerns to ZDNet about the threats facing AI systems. To strengthen the AI community at large, IBM researchers decided to develop ART.
In a report from IBM, a spokesperson said:
“This emerging area of research looks at the best ways to attack and defend the AI systems we have come to rely upon before the bad guys do…”
The inspiration comes from IBM researchers discovering that existing tools did not defend AI systems sufficiently. With ART, developers and researchers can add the information they collect about AI systems.
From knowledge of vulnerabilities and attack methods to ways of defending against them, people can contribute relevant information to ART’s open-source repository.
2. Compare Attack and Defense Strategies
Due to its framework-agnostic design, ART benefits everyone developing AI systems.
You can test many aspects of AI system development, both defenses and attack methods. From vulnerabilities in TensorFlow models to attacks on Keras ones, ART aims to cover all of it.
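As a quick sketch of what that looks like in practice, the snippet below wraps a hypothetical Keras model in ART’s classifier interface. The toy network and sample data are assumptions for illustration only, and the module path (`art.estimators.classification`) reflects recent ART releases; older versions exposed the same wrapper under `art.classifiers`.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from art.estimators.classification import KerasClassifier

# ART's original Keras wrapper expects graph mode; with TensorFlow 2 you can
# either disable eager execution (as here) or use TensorFlowV2Classifier.
tf.compat.v1.disable_eager_execution()

# Hypothetical toy model: a tiny dense network for 28x28 grayscale images.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28, 1)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Wrap the native model. ART's attacks and defenses only ever see this wrapper,
# so the same test code applies whether the underlying model was built in
# Keras, TensorFlow, PyTorch, or another supported framework.
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# Hypothetical sample batch, just to show the shared predict() interface.
x_sample = np.random.rand(8, 28, 28, 1).astype(np.float32)
print(classifier.predict(x_sample).shape)  # (8, 10)
```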
ART supports defense methods like feature squeezing, label smoothing, and spatial smoothing. It also implements visual recognition attack methods like DeepFool, the Jacobian Saliency Map Attack, and the Fast Gradient Method.
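As a rough illustration of those building blocks (not an official IBM example), the sketch below crafts adversarial inputs with the Fast Gradient Method and then applies a feature-squeezing preprocessor before prediction. It uses a scikit-learn logistic regression on synthetic data purely to stay self-contained; the same calls apply to wrapped TensorFlow or Keras image classifiers, and module paths or parameter names (for example `estimator=` versus the older `classifier=`) may differ across ART versions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod
from art.defences.preprocessor import FeatureSqueezing

# Synthetic two-class data in [0, 1], standing in for real feature vectors.
rng = np.random.default_rng(0)
x_train = rng.random((200, 20)).astype(np.float32)
y_train = (x_train.sum(axis=1) > 10).astype(int)

# Train a plain scikit-learn model and wrap it for ART.
model = LogisticRegression(max_iter=1000).fit(x_train, y_train)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_train)

# Apply a feature-squeezing defense (reduce input precision) before prediction.
squeeze = FeatureSqueezing(clip_values=(0.0, 1.0), bit_depth=4)
x_adv_squeezed, _ = squeeze(x_adv)

clean_acc = (classifier.predict(x_train).argmax(axis=1) == y_train).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_train).mean()
def_acc = (classifier.predict(x_adv_squeezed).argmax(axis=1) == y_train).mean()
print(f"clean: {clean_acc:.2f}, adversarial: {adv_acc:.2f}, defended: {def_acc:.2f}")
```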
You can find a comprehensive list in the project’s GitHub repository, along with setup instructions.

3. Run Tests Like Model Robustness
ART allows researchers and developers to “test” their AI systems, too.
A research paper by Jan Hendrik Metzen, Mummadi Chaithanya Kumar, Thomas Brox, and Volker Fischer illustrates the stakes. It involves semantic image segmentation and how adversarial entities might exploit vulnerabilities.
“More severely, there even exist universal perturbations that are input-agnostic but fool the network on the majority of inputs…” says the paper’s abstract, available on arXiv.
Essentially, the experiment highlights how adversarial inputs can fool computer vision models while going unnoticed by humans. This is precisely the kind of attack IBM wants ART to help prevent, and it informs how developers can use ART moving forward.
You can test a variety of things, from model hardening to runtime detection and model robustness. After all, it is called the Adversarial Robustness Toolbox.
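To make that concrete, here is a rough, self-contained sketch of a basic robustness test: measuring how accuracy degrades as the adversarial perturbation budget grows. The scikit-learn model and synthetic data are illustrative stand-ins rather than an official IBM example, attack parameter names may vary between ART versions, and ART also ships higher-level utilities (such as adversarial-training defenses) for model hardening.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Illustrative stand-in data: two classes of 20-dimensional points in [0, 1].
rng = np.random.default_rng(42)
x = rng.random((300, 20)).astype(np.float32)
y = (x.sum(axis=1) > 10).astype(int)

# Fit a plain scikit-learn model and wrap it so ART's attacks can query it.
classifier = SklearnClassifier(
    model=LogisticRegression(max_iter=1000).fit(x, y),
    clip_values=(0.0, 1.0),
)

# Simple robustness test: accuracy on adversarial inputs as the allowed
# perturbation size (eps) grows. A robust model degrades slowly.
for eps in (0.0, 0.05, 0.1, 0.2):
    x_eval = x if eps == 0.0 else FastGradientMethod(estimator=classifier, eps=eps).generate(x=x)
    acc = (classifier.predict(x_eval).argmax(axis=1) == y).mean()
    print(f"eps={eps:.2f}  accuracy={acc:.2f}")
```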
ART’s interface keeps these kinds of tests relatively simple.
But one does wonder how ART will fit into IBM’s PowerAI platform in terms of the company’s own transparency. Perhaps the long-time tech conglomerate will contribute its own AI development insights as well.
ART won’t write code for you like Rice University’s AI, Bayou. But with this move, IBM has positioned both ART and itself at the forefront of AI system security.