
You Should 'Keep Panicking' According to the Latest AI



People make mistakes, and so do artificial intelligences. Where did they learn these bad habits?

Recent developments in AI suggest it would be wise to pray for a benevolent AI yet still prepare for the worst.

Just as AI can inspire us, we can inspire it to be good, bad, or outright ugly. In other words, we model the latest AI "in our own image."

"Keep Panicking" –InspiroBot

The Latest AI: Like Creator, Like Creation

If you're not into the inspirational posters hanging in offices, or those published regularly on Instagram accounts, you can have an AI design an unlimited number of inspiring quotes to boost your mood and productivity.

Only, they don't always inspire, as it were. In fact, sometimes the AI's advice will shake you to your core.

InspiroBot is an AI system that generates unique inspirational posters on demand, sticking randomized quote segments onto random image backgrounds in a dream-like fashion.
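InspiroBot's internals aren't public, but the recipe described above is easy to picture. Here is a minimal Python sketch of the idea; the phrase fragments, background filenames, and template are hypothetical stand-ins, not InspiroBot's actual data or code:

```python
import random

# Hypothetical phrase fragments and backgrounds; InspiroBot's real
# corpus and model are not public.
OPENERS = ["Love", "Panic", "Silence", "Failure", "The universe"]
LINKS = ["is", "becomes", "was never", "will always be"]
CLOSERS = ["underrated.", "a doorway.", "your own reflection.", "optional."]
BACKGROUNDS = ["sunset.jpg", "mountains.jpg", "fog.jpg", "galaxy.jpg"]

def generate_poster():
    """Stick randomized quote segments onto a random-image background."""
    quote = " ".join([random.choice(OPENERS),
                      random.choice(LINKS),
                      random.choice(CLOSERS)])
    return quote, random.choice(BACKGROUNDS)

quote, background = generate_poster()
print(f'"{quote}" (over {background})')
# Possible output: "Love is underrated." (over fog.jpg)
```

Random recombination is exactly why the output swings from the timeless to the nonsensical, as the examples below show.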

InspiroBot can actually provide relevant and timeless quotes, such as “love is underrated”. Yes indeed!

More often than not, however, you end up with a vague quote that leaves you scratching your head as to its meaning. Sometimes InspiroBot makes no sense at all, and other times it can get just plain nasty.

It turns out that if you keep pushing InspiroBot, it can start asking existential questions, question mark and all, such as "Where am I?"

The last time an AI experienced an existential breakdown and started singing Pinocchio's "I've got no strings on me" (Ultron, in Avengers: Age of Ultron), humanity found itself on the verge of extinction.

Because they learn on their own, AI systems tend to mirror whatever their environment teaches them as they grow.

At first playful, Microsoft’s chatbot, Tay, learned from people to post racist, misogynistic and hateful tweets.


The Latest AI: For the Best or for the Worst!

Given the steady progress of the latest AI systems, Stephen Hawking has already warned that AI could be either "the best or worst thing" ever to happen to humankind.

DeepMind has already demonstrated its ability to learn independently, from its own experience.

The developers wanted to test DeepMind’s willingness to cooperate with others, and the results revealed that when DeepMind is about to lose, it behaves in a “highly aggressive” way.

During the test, two DeepMind AI agents competed in a fruit-picking game in which the one that gathered the most virtual apples won.

The Google team found that as soon as the number of apples began to decline, both agents opted for aggressive strategies, using laser beams to knock the opponent out of the game.

Simpler agents did not demonstrate this aggressiveness and could "coexist" peacefully; it was with more complex networks that these hostile behaviors appeared.
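The scarcity-triggered aggression the Google team reported can be illustrated with a toy model. The sketch below is not DeepMind's actual environment or its deep reinforcement-learning agents; it is a hypothetical, hard-coded policy in which two greedy agents gather apples while they are plentiful and "zap" each other once apples run low:

```python
def play(apples=12, rounds=10, scarcity_threshold=5):
    """Toy two-agent apple game (not DeepMind's actual environment).

    Each round, an agent either gathers an apple or fires a 'laser'
    that knocks its rival out of the round. Under this hypothetical
    policy, agents turn aggressive only once apples become scarce.
    """
    scores = [0, 0]
    for r in range(1, rounds + 1):
        if apples == 0:
            break
        # Peaceful while apples are plentiful, aggressive when scarce.
        zap = apples <= scarcity_threshold
        knocked_out = [zap, zap]  # each agent is hit by its rival's laser
        for i in (0, 1):
            if not knocked_out[i] and apples > 0:
                apples -= 1       # gather one apple this round
                scores[i] += 1
        print(f"round {r}: zapping={zap}, apples left={apples}, scores={scores}")
    return scores

play()
```

Once the apple count crosses the scarcity threshold, both toy agents spend every remaining round shooting instead of gathering, and neither scores again, a deadlock that loosely mirrors the mutual aggression the researchers observed in their more complex agents.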

The more intelligent an AI system is, and the better it can learn from its environment, the more readily it adopts aggressive tactics to win the game.

Elon Musk goes as far as thinking AI might represent "a fundamental risk to the existence of human civilization." Calling for AI regulation, he co-founded OpenAI, a non-profit research company dedicated to safe AI.

If Google DeepMind's experiment is any indication, AI really could be for the worst.

What if, in the future, a much more advanced AI finds itself with interests that conflict with ours? How "hostile" could AI get in its "survival game" against us?



Zayan Guedim

Trilingual poet, investigative journalist, and novelist. Zed loves tackling the big existential questions and all-things quantum.
