In cybersecurity, the rise of AI raises as many hopes as concerns. For that reason, Google’s newly acquired “Kaggle” is pitting AI systems against each other in a series of security contests.
As the amount of data keeps growing, identifying fraudulent activities within a stream of legitimate actions is becoming increasingly difficult.
Current security systems are becoming less and less effective, and they will eventually be completely outdated as masses of data continue to pile up.
In the face of cyber attacks, which are increasingly numerous and sophisticated, many specialists are looking to the development of AI cybersecurity platforms as a solution.
AI systems can be trained to detect attacks in real time and even to predict fraudsters' true objectives. However, they're not foolproof.
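As a minimal sketch of what such training can look like, the snippet below fits an anomaly detector (an Isolation Forest from scikit-learn) on features of mostly legitimate activity and flags unusual new events. The feature matrix and the chosen contamination rate are hypothetical placeholders, not details from any specific platform.

```python
# Sketch: flagging anomalous events with an Isolation Forest (hypothetical data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))        # stand-in features for legitimate activity
detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

X_new = rng.normal(size=(5, 8))             # incoming events to screen
flags = detector.predict(X_new)             # -1 = suspected fraud, 1 = normal
print(flags)
```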
Vulnerability of AI Agents to “Adversarial Examples”
While AI promises to open up incredible future prospects in almost all fields, it is also a source of concern.
Recent research studies have demonstrated that it is possible, and even easy, to deceive AI algorithms.
AI's most glaring vulnerability is what experts call "adversarial examples," and there is currently no proven method for thwarting them.
"Adversarial examples" are inputs that have been modified so slightly that the changes are imperceptible to human observers. Yet they can mislead AI models into producing erroneous outputs.
Here's an example from a recent study: an attacker starts with an image of a panda and adds carefully crafted "noise" that leads the machine learning model to classify it as a gibbon with 99% confidence.
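To make the idea concrete, here is a minimal sketch of the gradient-based perturbation technique popularized by that study, the Fast Gradient Sign Method. It assumes a hypothetical pretrained PyTorch classifier `model` and a single input `image` with its correct `label`; it is an illustration of the technique, not the exact code from the paper.

```python
# Sketch: Fast Gradient Sign Method (FGSM) against a hypothetical classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Return a copy of `image` that looks unchanged to a human but fools `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # how wrong the model currently is
    loss.backward()                               # gradient of the loss w.r.t. pixels
    # Nudge every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The perturbation is bounded by `epsilon`, which is why the altered panda still looks like a panda to a human while the model's prediction flips.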
Imagine a self-driving car that doesn’t stop at a stop sign. It continues to roll through because it is fooled into thinking it’s on a freeway.
And that doesn't necessarily require a cyber attack: a malevolent individual could alter the sign with simple paint, stickers, or projections.
AI Systems Going Into a No-Holds-Barred Security Tournament
Acquired by Google earlier this year, Kaggle is a data science startup that hosts data science and AI competitions and awards prize money to the best teams.
For its NIPS 2017 (Neural Information Processing Systems) competition track, Kaggle has accepted five proposals, one of which comes from the Google Brain team.
Because of the serious challenge of adversarial examples, Google Brain is organizing an adversarial attacks-themed competition.
The “Competition on Adversarial Attacks and Defenses” is built around 3 sub-categories: Non-targeted Adversarial Attack, Targeted Adversarial Attack, and Defense Against Adversarial Attack.
Teams are invited to submit programs that address each of these three challenges. The competition will culminate in all attack systems being pitted against all defense systems. May the best AI win!
One proposed defense is quite simple: the custom-generated noise produces an image that is precisely "over-fitted" to the network. Adding random noise to that adversarial image destroys the over-fitting, and the original classification immediately returns. For example, re-sampling the image, down-scaling it, or simply altering all the pixel values by a small amount would break the exact patterns the attacker has found.
While the classifier is fooled by this precisely crafted "noise", it is very robust against real, random noise, which can easily drown out the attacker's modifications.
One possible implementation would be to make 10 copies of the input image, add a small amount of random noise to each, and then average the classification scores across all of them, as sketched below.
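Here is a rough sketch of that averaging defense, again assuming a hypothetical PyTorch classifier `model` that maps a batch of images to per-class scores; the function name, noise level, and number of copies are illustrative choices, not values from the article.

```python
# Sketch: classify by averaging scores over several randomly perturbed copies.
import torch

def noisy_average_predict(model, image, copies=10, sigma=0.05):
    """Average class scores of `model` over noisy copies of `image` (shape C x H x W)."""
    batch = image.unsqueeze(0).repeat(copies, 1, 1, 1)              # (copies, C, H, W)
    noisy = (batch + sigma * torch.randn_like(batch)).clamp(0, 1)   # add random noise
    with torch.no_grad():
        scores = model(noisy)                                       # (copies, num_classes)
    return scores.mean(dim=0)                                       # averaged class scores
```

Because the adversarial pattern is tuned to exact pixel values, the random perturbations tend to wash it out, while the averaged prediction for a clean image stays essentially unchanged.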