Technology

Researchers Pledge Not to Develop Autonomous AI Weapons

Autonomous AI weapons are a very real threat in future conflicts. To counter this, the world's leading AI companies and experts have signed a pledge refusing to be involved in their creation. | Image By Josh McCann | Shutterstock


Artificial intelligence researchers from around the world have pledged not to take part in developing lethal autonomous AI weapons.

On Tuesday, the Future of Life Institute (FLI) announced that over 2,400 artificial intelligence researchers from 36 countries and 160 companies signed a pledge stating that they won’t participate in the development, trade, or manufacture of lethal autonomous AI weapons.

“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI,” FLI, a Boston-based volunteer-run research and outreach organization, said in a statement.

“In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”

Companies and organizations that signed the pledge include Google DeepMind, Element AI, Lucid.ai, and the European Association for Artificial Intelligence. Tesla CEO Elon Musk, DeepMind co-founder Shane Legg, GoodAI CEO Marek Rosa, and Stuart Russell, director of the UC Berkeley Center for Intelligent Systems, are among the individual signatories.

“We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody,” Anthony Aguirre, a UC-Santa Cruz professor and pledge signatory, said in a statement.

Back in 2015, Musk reportedly donated $10 million to an FLI research program centered on ensuring that artificial intelligence will be beneficial to people. Last year, Musk, together with Demis Hassabis and Mustafa Suleyman, called on the United Nations through FLI to regulate the development of autonomous AI weapons systems.

Do you believe that autonomous AI weapons pose a threat to humanity?


Chelle Fuertes

Chelle is the Product Management Lead at INK. She's an experienced SEO professional as well as UX researcher and designer. She enjoys traveling and spending time anywhere near the sea with her family and friends.
