
What Will It Take for Us to Trust Robots?

Andrey_Popov / Shutterstock.com


Thanks to deep learning, AI will adapt based on its experience. With the ability to improve and adapt, machine learning systems will undoubtedly face opportunities to make unethical decisions. As a result, we will need to impose legal and ethical limits.

The continual progress of artificial intelligence promises new opportunities. More than any other technological concept, AI is met with enthusiasm and apprehension in equal measure. Fears range from job-killing automation to dystopian robot overlords; promised advancements range from better video game AI to machines that can accurately diagnose a patient from mountains of data in seconds.


At some point, AI is going to be entrusted with making decisions. Whether it is a system that monitors the integrity of infrastructure like bridges and buildings or a defense system tasked with intercepting ballistic missiles, machines will have the opportunity to make emergency decisions for us. Before that happens, we must lay out an ethical framework for thinking machines.

A Definition of AI Comes Before Any Regulation

As early as 1942, Isaac Asimov introduced the “Three Laws of Robotics” in his short story “Runaround.” If you’ve seen or read I, Robot, you will also recognize these basic guidelines:

“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

These laws seem like an ideal safeguard against hypothetical robotic terrorists, but it is not as simple as it sounds. To put a face on this multifaceted concept, regulations must first do the work of defining exactly what artificial intelligence is.

Creating universal moral laws that operate in any context is a difficult task. There must be laws governing the manufacture of robots, and others governing the actions of thinking robots themselves. Furthermore, robot owners will have to abide by a certain set of ethical guidelines regarding the use of their property, much as people must respect animal rights.

Perhaps it will take a RoboCop-like machine to spur the regulatory conversation. What would we do about delinquent or criminal robots? In such a case, who would be held responsible: the robot, the manufacturer, or the owner?

First of all, we will not be dealing with the ubiquitous bipedal humanoids of Asimov’s imagining.

AI is taking many forms and functions: virtual assistants that live in our homes, devices, and vehicles; self-driving cars; chatbots; and even smart prosthetics tasked with interpreting brain signals.

As a result, defining AI will require plenty of forethought, because there is currently no consensus on a legal definition of the term. This might have to do with our inability to agree on a definition of “human intelligence” in the first place.

How to Overcome Distrust of Robots

Are we ready to entrust robots with the management of our daily lives? To what extent can we trust them? Letting an app suggest a restaurant is one thing; hiring a tin-can babysitter to look after our children is another.

If you’ve seen Gabe Ibáñez’s 2014 sci-fi film Automata, then you’re familiar with the issue of accountability when it comes to AI decision-making. Just as the manufacturer in the film insures the behavior of its creations, real-world regulatory policy regarding AI will help to alleviate irrational concerns.

The more robots behave like humans, the more we will trust them. Social navigation in robotics is a prerequisite for the full integration of robots into society. Researchers are working on systems that enable robots to navigate autonomously while respecting social conventions, in the belief that this will minimize distrust of robots.
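
To make the idea concrete, here is a minimal sketch in Python of one common “socially aware” planning technique: a standard grid search in which cells near people incur an extra “proxemics” penalty, so the robot prefers routes that keep a comfortable distance. This is an illustration of the general approach, not any specific research system; the grid size, the people’s positions, and the cost weights (PEOPLE, SOCIAL_RADIUS, SOCIAL_WEIGHT) are all made-up values.

# Minimal sketch of socially aware path planning (illustrative, not a
# real research system): Dijkstra search on a grid where stepping close
# to a person adds a "proxemics" penalty to the path cost.
import heapq
import math

GRID_W, GRID_H = 10, 10
PEOPLE = [(4, 4), (6, 2)]   # assumed positions of humans on the grid
SOCIAL_RADIUS = 3.0         # radius (in cells) of each person's comfort zone
SOCIAL_WEIGHT = 5.0         # how strongly intrusion is penalized

def social_cost(cell):
    """Extra cost for stepping near a person (grows as distance shrinks)."""
    x, y = cell
    cost = 0.0
    for px, py in PEOPLE:
        d = math.hypot(x - px, y - py)
        if d < SOCIAL_RADIUS:
            cost += SOCIAL_WEIGHT * (SOCIAL_RADIUS - d) / SOCIAL_RADIUS
    return cost

def plan(start, goal):
    """Dijkstra over the grid; each step costs 1 plus the social penalty."""
    frontier = [(0.0, start)]
    came_from = {start: None}
    best = {start: 0.0}
    while frontier:
        cost, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if not (0 <= nx < GRID_W and 0 <= ny < GRID_H):
                continue
            new_cost = cost + 1.0 + social_cost(nxt)
            if nxt not in best or new_cost < best[nxt]:
                best[nxt] = new_cost
                came_from[nxt] = cur
                heapq.heappush(frontier, (new_cost, nxt))
    # Walk back from the goal to reconstruct the chosen route.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

print(plan((0, 0), (9, 9)))  # the route tends to bend away from the two people

Tuning SOCIAL_WEIGHT trades path length against personal space: a higher weight makes the robot take longer detours rather than crowd anyone, which is exactly the kind of human-legible behavior that builds trust.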

The AI debate should bring together scientists, academics, robotics experts, and tech developers to guarantee a diversity of opinions and perspectives in developing principles of common interest around AI. The latest example is the Beneficial AI 2017 conference, held January 5th through 8th in Asilomar, California; a previous edition was held in Puerto Rico in 2015.



Zayan Guedim

Trilingual poet, investigative journalist, and novelist. Zed loves tackling the big existential questions and all things quantum.

Comments (3)
  1. amrou March 05 at 9:28 pm GMT

    But it is programmed and will be given its orders by humans

    • Zayan Guedim (Author) March 05 at 10:18 pm GMT

      Hi,
      Deep learning enables machines to teach and program themselves. They would be able to escape human control, hence the justified fear.
      Thanks.

    • Brett Forsberg March 06 at 5:07 pm GMT

      You’re absolutely right. How could that change? Is it possible that some clandestine, self-replicating probe could eventually forget its directives? Maybe the programming could be damaged somehow. Would this always lead to the machine failing? What if its “mutation” caused it to evolve in some way?
