
Threat of AR Hacking Fixable With Machine Learning

Lumen Photos | Shutterstock.com

One of the growing concerns of augmented reality is user security.

In the Internet of Everything, AR and VR will play an integral role. From psychotherapy to sports, these tweaks of our reality help better ourselves and our experiences.

While some wholeheartedly welcome this integration, others voice vehement concerns about user security. But there is a secret weapon for combating these issues.

In what ways could machine learning improve the safety of AR for users?

The Major Caveat of Immersive AR Apps

Augmented reality apps and tech are more prolific than ever. Though numbers have dipped since release, Pokémon Go, one of the most popular AR apps, still has an estimated 30 million daily users.

Digital Journal released a strongly worded op-ed in October railing against AR tech security measures, citing an MIT article as its primary source. But Franziska Roesner, Assistant Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, has been raising these issues for years.

Along with co-authors Tadayoshi Kohno and David Molnar, Roesner penned a 2012 paper titled “Security and Privacy for Augmented Reality Systems”.

The paper asks practical questions about the integration of AR into daily life.

Soon, you will have text and speech automatically translated for you in real time as if it were a magic magnifying glass. You will have driving directions displayed on the road in front of you. The world will be at once easier to navigate and less mysterious. 

What’s so scary about that?

Well, the actual worry is that someone could potentially tamper with that tech with something like malware. Because you are actively following directions given by the tech with physical actions like driving your car, the implications of hacked AR include physical harm. But that’s not the only negative that could come of it.

At some point, as we navigate a new reality where everyone around us uses AR to find their way and record their experiences, our likeness and actions will be recorded by others. Others’ actions will be recorded by our devices. In a world where a digital signature can unlock everything we own, at what point are we taking in too much information?

Regardless of the nature of the AR (app for smartphone, wearable tech, etc.) this sensory “hacking” is understandably worrisome. Luckily, other institutions are also researching practical concerns regarding augmented reality tech.

How Weak IoT Security Affects AR Users

This concern isn’t new, either. Sci-fi stories and media have grappled with the idea, as in this clip from Ghost in the Shell: Stand Alone Complex.

Warning: this clip contains blood and gunfire.

Concerns extend beyond someone “hacking your eyes”, of course. Data breaches and identity protection are paramount, too. AR devices have access to your location, your surroundings, and everything on your phone or tablet. Roesner also raises the point that other people’s devices are recording you.

What happens to this data? Does the app developer or hardware developer own it? If so, can they sell it to third parties? What measures will they take to protect users?

These are all relevant questions still being asked of many current technologies–including VR. As we have seen numerous times in the last few years, our data is never 100% safe. The severe Equifax breach reinforced this fact most recently.

How Machine Learning Can Address AR Security Concerns

The onus may be on the developers of AR tech to take appropriate security measures on the micro level. But, on the macro level, there is a way to mitigate concerns of security when it comes to augmented reality.

We already have dozens of programs to scan for and block malware on our desktop PCs and laptop computers. Who is to say that we can’t extrapolate the same processes into our AI-managed AR platforms?

As suggested in the Digital Journal op-ed, if we can teach AI to recognize malware and potentially harmful code, we can address security concerns more readily. Similar checks for “red flags” can determine whether or not you can access the app. These functions could be incorporated into the virtual assistants we already have.
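What might such a “red flag” check look like in practice? Below is a minimal, hypothetical sketch in Python: a rule-based risk scorer that gates an app’s access based on the permissions it requests. The permission names, weights, and threshold here are all invented for illustration; a real system would stand a trained classifier over far richer signals (code signatures, network behavior, sensor usage patterns) in place of this hand-written table.

```python
# Hypothetical sketch: a toy "red flag" scorer that decides whether an
# AR app should be allowed to run, based on the permissions it requests.
# The weights below are invented for illustration only.

RISK_WEIGHTS = {
    "camera": 1,
    "read_contacts": 2,
    "record_audio": 3,
    "overlay_display": 4,  # drawing over the user's view is high-risk in AR
    "send_sms": 5,
}

def risk_score(requested_permissions):
    """Sum the risk weights of the permissions an app requests."""
    return sum(RISK_WEIGHTS.get(p, 0) for p in requested_permissions)

def allow_app(requested_permissions, threshold=6):
    """Grant access only while the combined risk score stays below the threshold."""
    return risk_score(requested_permissions) < threshold

# A basic AR app that only needs the camera and display overlay passes;
# one that also wants to send texts and record audio gets blocked.
print(allow_app(["camera", "overlay_display"]))                      # True
print(allow_app(["overlay_display", "send_sms", "record_audio"]))    # False
```

In a virtual-assistant setting, a check like this could run automatically before the assistant launches an AR app, replacing the static weight table with a model retrained as new malware samples appear.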

Turn-Key Solution or Potential for Greater Breaches?

Alexa, Siri, Cortana, and others could all incorporate anti-malware software for improved cybersecurity. But this integration doesn’t free systems from hacking vulnerabilities.

Arguably, even if it makes hijacking AR apps more difficult, it could grant hackers more access to other apps, data, and information.

What is the best way to mitigate cybersecurity concerns regarding augmented reality tech?


Juliet Childers

Content Specialist and EDGY OG with a (mostly) healthy obsession with video games. She covers industry buzz including VR/AR, content marketing, cybersecurity, AI, and more.
