Technology

Scientists Call For Ban on Killer Robots

Denis Starostin / Shutterstock.com


The phrase “killer robot” often conjures images of mindless T-800 models sent from the future to hunt down members of the human resistance. Others might picture the philosophical, poetic Ultron, obsessed with destroying the Avengers.

Whichever killer robot comes to mind, it usually comes with a futuristic timeline of 30 to 50 years. So we don’t have to worry about it now, right?

Wrong — Killer robots are already here.

Also known as Lethal Autonomous Weapons Systems (LAWS), these AI-powered killing machines have become a cause for concern. Why, you ask?

Scientists argue that an autonomous system can never be perfect and could malfunction in unpredictable ways, posing a grave danger to civilians. Ethics experts also believe it is wrong for machines to take human lives without human intervention.

As a result, several non-governmental organizations, including the American Association for the Advancement of Science, are calling for a ban on the development of weapons controlled by artificial intelligence. Among the prominent individuals calling for a worldwide prohibition of killer robots is Human Rights Watch’s Mary Wareham.

In a statement to the BBC, she said:

“We’re not talking about walking, talking terminator robots that are about to take over the world. Our concern is much more imminent: conventional weapons systems with autonomy.”

In other words, today’s killer robots don’t match Hollywood’s portrayals. Instead, they’re the conventional weapons you’re used to, powered by artificial intelligence.

These include military aircraft that can take off, fly, and land on their own as well as pocket-sized drones.

Ryan Gariepy, Chief Technology Officer of Clearpath Robotics, also supports the ban. Not only has the company denounced the use of AI systems for warfare, but it has also pledged never to develop them.

“An autonomous system cannot decide to kill or not to kill in a vacuum. The de-facto decision has been made thousands of miles away by developers, programmers, and scientists who have no conception of the situation the weapon is deployed in,” Gariepy said.

The idea of a killer robot also raises a question of legal liability. Simply put, who is responsible when a machine decides to take a human life?

According to Peter Asaro of the New School in New York, machines are not moral agents. As such, they cannot be held responsible for making decisions of life and death.

“So it may well be that the people who made the autonomous weapon are responsible,” Asaro told the BBC.

Be that as it may, not everyone supports an outright ban on AI-controlled weapons systems. The United States and Russia are among several countries opposed to a ban on the so-called killer robots.



Sumbo Bello

Sumbo Bello is a creative writer who enjoys creating data-driven content for news sites. In his spare time, he plays basketball and listens to Coldplay.
