
Two Measures to Contain the Race for Hostile Super AI

One way to stop hostile super AI from rising is to make it hard to enter the AI arms race. Two researchers now propose measures to achieve this.


Artificial intelligence is likely on its way to acquiring superhuman cognitive and even physical abilities. Several scientists, philosophers, and leading techpreneurs think it’s only a matter of time before super AI takes over.

We have seen grave warnings about the risks of superhuman AI from world-leading figures.

The late astrophysicist Stephen Hawking predicted the emergence of superhumans and warned that we have already let the AI genie “out of the bottle.” He believed, however, that we could still contain this genie and leverage its powers.

A couple of books from our list, The Best Artificial Intelligence Books You Need to Read, speak of the dangers of hostile superintelligent AI.

Philosopher Nick Bostrom includes superintelligence as one of the major existential risks threatening to wipe out humanity. Physicist Max Tegmark thinks AI safety is “the most important conversation of our time.” Tesla and SpaceX CEO Elon Musk also puts super AI high up on the list of existential risks.

Pre-emptive Strike Against Hostile Super-AI

The warnings above make the scenario of hostile superintelligent entities taking on humans seem less like science fiction.

We need to do something before it’s too late!

But who can stop the rise of super AI if it seems inevitable? The answer: the scientists, researchers, and engineers working on AI, along with regulators and business leaders.

They can nip superintelligence risk in the bud. But how?

Countering super AI’s potential offensives against humans should start with this question: how would super AI become hostile in the first place? Would humans purposefully make it so, or would AI develop hostility toward humans on its own?

Either way, humans would be to blame: for purposefully creating hostile AI, or for creating an environment conducive to it.

Some, like Nick Bostrom, think the rise of superintelligence comes with two main challenges. While the first is mostly a technical challenge, the second is more of a political one.

First, we have to ensure that self-aware AI’s objectives are aligned with those of humans, to avoid conflicts of interest that could be life-threatening for us. Second, we have to contain the ethical and societal implications that super AI may bring about: AI shouldn’t widen social and economic inequalities to the benefit of a small elite. That’s the political issue.

To prevent an AI apocalypse, two researchers propose an intriguing approach.

Make it Hard to Join the AI Arms Race

Wim Naudé is a Professorial Fellow at the Maastricht Economic and Social Research Institute on Innovation and Technology (UNU-MERIT) in the Netherlands, and Nicola Dimitri is a Professor of Economics at the University of Siena in Italy.

The two researchers recently published a paper titled “The race for artificial general intelligence: implications for public policy.”

In the paper, they raise the question of hostile super AI. Essentially, they propose closing off the AI research market and making it less competitive. Making it harder to enter the AI arms race means fewer competing parties, and fewer competitors wouldn’t feel compelled to “cut corners” on safety to win the race.

As for how governments can lessen the competition, the pair suggest two tools: public procurement and AI taxes.

“Governments could also offer to buy a less-than-best version of super-AI, effectively creating a ‘second prize’ in the arms race and stopping it from being a winner-takes-all competition… As for taxes, governments could set the tax rate on the group that invents super-AI according to how friendly or unfriendly the AI is. A high enough tax rate would essentially mean the nationalization of the super-AI,” the researchers explain.

The proposed measures would reduce the pressure on the AI race participants. For one thing, raising R&D funds would be easier with fewer applicants. Such an incentive system would also encourage them to coordinate and cooperate.

That said, these measures seem more like palliative than curative medicine.

For now, artificial intelligence still depends on the skills of human coders. It’s when AI cuts those strings that it becomes Artificial General Intelligence (AGI), or super AI, and it’ll likely reach that stage no matter how many players take part in the game. There will also be those who try to gatecrash the party. When AI gets there, whenever and however it does, how would we ensure it doesn’t harbor inherent hostility, or won’t develop it later?

Read More: Experts Say We Need To Start Over To Build True Artificial Intelligence



Zayan Guedim

Trilingual poet, investigative journalist, and novelist. Zed loves tackling the big existential questions and all-things quantum.
