
It's Time to Stop Trusting the Verge on Their Google Articles


This article is an opinion piece.

The Verge published a post titled “It’s time to stop trusting Google search already,” which follows another article from last month attacking Google for surfacing a widely shared 4chan thread as a trending story.

Healthy skepticism and reasoning should be our default mode. Don’t trust the internet, don’t trust TV, don’t trust your government. Why trust Google in the first place? And if we do trust it, why isn’t The Verge’s article calling out the education system that failed to teach us that skepticism?

Here’s my issue: if you are going to ask Google’s search algorithms to take moral positions, what are you really asking for? It’s almost as if The Verge expects an algorithm to tell actual truth from the slightest falsehood. Even if an algorithm could, should we want that? And even if Google could manually moderate all results, should we want that? I’d say no.

The very notion that a supreme set of algorithms or artificial intelligence should determine truth on your behalf is an open invitation to technocracy.

Throughout most of human history, we have lived under circumstances where government and religion were the ultimate enforcers of truth.

Case in point: many of Isaac Newton’s writings were kept hidden until 1960 out of fear that this physicist and father of the Enlightenment would be viewed as a heretic.

The famous philosopher Socrates, likewise, was sentenced to death for impiety against the gods of Athens and for corrupting the youth, when really he was punished for pressing well-known figures of his society to back up their beliefs.

Here in the Americas in 2017, people are still silenced when others deem their speech offensive or hateful. For example, our Canadian neighbors will fine or jail people for speech deemed hateful, and a Richard Dawkins event in California was canceled over his past comments about Islam.

As it turns out, it’s very hard for humans to anticipate the outcome of our actions ahead of time, even when we try to use our best judgment.

Moreover, if we humans are not that skilled at discerning truth or knowing good from evil, why do we demand that artificial intelligence exercise superior judgment? Artificial intelligence is, after all, created in our image and trained to carry our biases.

Do journalists understand that, barring deplatforming or human moderation, creating scalable algorithms that only show moderated content would require an AGI (Artificial General Intelligence)? This is still the stuff of sci-fi. Today’s AI is incredibly dumb, and to achieve AGI, we will likely have to start from scratch.

In other words, blogs like The Verge are seemingly asking tech companies for the yet-impossible while, at the same time, laying culpability for hate speech on these companies rather than on the people who create and share it.

The Diversity Problem in Artificial General Intelligence

What if we did achieve an AGI with the equivalent of human consciousness in a Kubernetes container that we could scale on AWS, and it could judge every article and claim on the web for fairness, truth, and hatefulness?

Would this solution work? I am skeptical.

First, if this were to work, we’d need to solve the diversity problem with AI. What if the AI decides it leans ideologically toward totalitarianism, and we let this AGI curtail our speech?

Now, you say, that should never be allowed! While an extreme example, we are now back at suggesting human judgment should influence the AGI. However, which human possesses the ultimate measure of morality and rationality? Which human would be qualified enough? Not a single one!

Fei-Fei Li, Chief Scientist of AI/ML at Google Cloud, believes that a humane AI should be diverse enough to understand all humans. | Image via TED.com

That a genuinely humane AI should understand ALL humans, not just one, is the position of Fei-Fei Li, the brilliant Chief Scientist of AI/ML at Google Cloud.

Suppose we achieve a truly humane AGI. Would it satisfactorily fulfill what The Verge is calling for in its article? Not at all, because we would run into the AGI supremacy problem.

Now, we should point out that Robertson’s final conclusion in The Verge doesn’t substantiate her call for a more appropriate, more judgmental algorithm. Instead, she writes, “But when something like search screws up, we can’t just tell Google to offer the right answers. We have to operate on the assumption that it won’t ever have them.”

Essentially, that last line is what I’m arguing. My concern is with another facet of her argument, where she writes, “We have to hold these systems to a high standard.”

We have to hold ourselves to a high standard because the algorithm showing a ‘Top Story’ or ‘Popular on Twitter’ snippet that contains hate speech or mistruths isn’t failing at its purpose. The algorithm successfully shows what’s popular, whether it is fake news or the most widely accepted truth. It’s up to us as humans to know the difference.

The Supremacy Problem in Artificial General Intelligence

If we were to demand that tech firms like Google create a more humane algorithm that doesn’t “screw up,” I would assert that such an AGI would need not only to be democratized but also to be regularly reset and retrained from scratch, to avoid the formation of a fundamentalist AGI doctrine. If we do not allow for such a process, we risk what I will call the ‘supremacy problem’ in Artificial General Intelligence.

To avoid the supremacy problem with AGI, we must not allow it to become immortal so long as humans remain mortal.

If we allow a single mortal entity to create and own an AGI that knows good and evil, and this AGI, being a machine, is immortal, then we have a man-created-God scenario. Such an artificial intelligence would always learn more, become increasingly set in its judgment, and would never die. In my opinion, this is the stuff of which nightmares (and movies) are made.

Man has never ceased trying to make Gods in his own image. In the end, if we succeed in creating an AGI, we might regret finally succeeding.

If we allow a limited number of entities to create and own such AGIs (if, say, Google could create an algorithm that judges right from wrong for us), then men would have created a true pantheon of supreme AGIs.

In Canada, the government already decides what constitutes tolerable speech; if it ever equips an AGI to enforce that, the supremacy problem will not be far from reality.

When it comes to the question of mortality, leave it to Elon Musk to have already thought of a way around it: an “AI-human symbiote.” However, when we die, does our AGI live on? Does it go to AGI heaven, or is it reincarnated?

Unless the AGI resets or is sequestered in another realm, his solution does not solve the AGI supremacy problem for everyone, because we would more likely bisect humanity once some people choose to live mortal human lives instead of living as AI-human symbiotes.

On some dystopian future day, the symbiotes may visit a Truman Show-style zoo of proto-symbiotes (that would be you and me).

Deplatforming at Scale is Free Speech Prevention

Deplatforming is sometimes justified; there is a time and place for everything. But since we are discussing deplatforming in the context of internet utilities, I want to examine the dangers of systematic deplatforming at scale.

Since the technology to stop ‘fake news’ (1) is still in the realm of sci-fi and (2) would simultaneously amount to what has been described as the greatest risk to civilization, we have to resort to old-school methods of speech prevention to achieve what The Verge says Google should. One such old-school method is called deplatforming.

Deplatforming means taking away the platform for speech deemed inappropriate. It is a preventive move against free, yet inappropriate, speech. On a micro scale, this is like covering your own ears with your hands. On a macro scale, it is others putting their hands over your ears.

On a vast scale, when the one doing it is a government or a technological public utility relied on by billions of people, such as Google or Facebook, this amounts to free speech prevention.

In an ongoing assault on free speech, China increasingly deplatforms its citizens by blocking access to platforms of expression that don’t deplatform, censor, or moderate their users’ speech.

Now, companies in the United States like Facebook, Twitter, Reddit, and Google are under pressure to do the same.

The net effect of massive deplatforming is like neutering your cat and saying you support her right to become pregnant.

As a kid, I was bullied in school with both hurtful words and physical violence, so I know that words can harm more than blows. However, I do not believe that systematic and massive deplatforming is the solution.

I was not denied a platform to speak up about this violence to my teachers. I was not given a gag order to protect those whose reputation was at stake.

Calling for such stifling of speech creates an easy tool for would-be authoritarians to silence opposition.

Deplatforming was Hollywood’s favorite tool to stop artists from speaking out about sexual abuse.

Deplatforming women, political opposition, ideologies, religious apostates, and deviants of every sort. Let’s break through the newspeak and call it what it can easily become: blanket censorship and a danger to our cherished freedoms.

The Human Moderation Option

So what can technology companies do?

Human moderation, rather than algorithms, is one potential solution. We established that current AI is too dumb to discern truth from lies, and that future AGI solutions capable of moderation are quite possibly dangerous.

Are journalists assuming that a Google employee should moderate and supervise every single search result?

I will humor that idea.

Google et al. could start charging us money for using search engines and hire operators to connect us with the information we seek. The operators, possessing general intelligence and humane morality, could filter the news to the best of their judgment.

At 3.5 billion searches a day, Google could end unemployment for a good portion of the world’s population by using human moderators to filter its search volume. | Telephone switchboard | Image via Wikimedia Commons
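A back-of-the-envelope check, using my own assumed numbers rather than anything Google has published: if each operator vetted one search every 30 seconds over an eight-hour shift, that is 960 searches per operator per day, so 3.5 billion daily searches would require roughly 3.6 million full-time moderators, before accounting for breaks, time zones, languages, or appeals.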

It’s fair to say that would not work either.

However, what if we crowdsourced opinions and let the users vote? Now we are talking!

Human Moderation Take #2: A Fair Voting System?

Many social platforms are based on the up- and downvoting of content. This straightforward but brilliant system was popularized by early social news sites such as Digg and Reddit and later mainstreamed by Facebook’s thumbs up.

Highly upvoted content is promoted to others, while highly downvoted content falls by the wayside. High-trafficked, controversial content simultaneously has both high upvotes and high downvotes.
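To make “promoted” versus “controversial” concrete, here is a minimal sketch in Python. It is my own illustration with hypothetical function names, not any platform’s actual code: the ranking score uses the lower bound of the Wilson score interval, an approach Reddit has publicly credited for its comment sorting, and the controversy score mirrors the formula from Reddit’s open-sourced codebase. Take both as one reasonable design, not a definitive spec.

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true upvote ratio.

    Ranks items by how confident we are that they are genuinely liked,
    so a post at 5/5 upvotes does not outrank one at 90/100.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n  # observed upvote ratio
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

def controversy(upvotes: int, downvotes: int) -> float:
    """High only when a post draws BOTH many upvotes and many downvotes."""
    if upvotes <= 0 or downvotes <= 0:
        return 0.0
    magnitude = upvotes + downvotes
    balance = min(upvotes, downvotes) / max(upvotes, downvotes)
    return magnitude ** balance

print(round(wilson_lower_bound(90, 10), 2))  # ~0.83: confidently promoted
print(round(wilson_lower_bound(5, 0), 2))    # ~0.57: promising, less certain
print(round(controversy(500, 480)))          # ~744: highly controversial
```

With these two signals, a front page can promote posts the community is confident about while surfacing, rather than hiding, the ones that split the room.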

For this system to work well and be fair, I came up with a few requirements. Some points may be open to debate or incomplete, but I’m willing to start the conversation; a code sketch of how a platform might enforce a few of these rules follows the list. The requirements are:

  • (A) Voters are mortal, so opinions change over time and the status quo can be challenged.
  • (B) Votes are democratized: everyone has an equal vote.
  • (C) Votes are anonymous, since our behavior under the expectation of privacy is closer to our authentic selves than when we risk being exposed for our opinions.
  • (D) Voters are human: votes come from people, not bots.
  • (E) Neither users nor platform owners manipulate votes.
  • (F) Sponsored, boosted, and promoted posts are always identified as such, and sponsors are not allowed to influence the votes.
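And, as promised, here is a minimal sketch of how rules B, C, and D might be enforced in code. This is illustrative Python with hypothetical names, not any platform’s real system; rules A, E, and F are matters of policy and auditing more than of data structures. Note also that the salted-hash trick below only pseudonymizes voters, which is weaker than true anonymity; I’m assuming that suffices for illustration.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class VotingSystem:
    secret_salt: str                              # server-side only, never logged
    tallies: dict = field(default_factory=dict)   # item_id -> net score
    seen: set = field(default_factory=set)        # (voter_hash, item_id) pairs

    def cast(self, voter_id: str, is_verified_human: bool,
             item_id: str, value: int) -> bool:
        """Record one up (+1) or down (-1) vote; return True if counted."""
        # Rule D: votes come from people, not bots. How humanity is
        # verified (CAPTCHA, account age, ...) is left abstract here.
        if not is_verified_human or value not in (+1, -1):
            return False
        # Rule C: store only a salted hash of the voter's identity, so
        # individual opinions are not exposed if the tally database leaks.
        voter_hash = hashlib.sha256(
            (self.secret_salt + voter_id).encode()).hexdigest()
        # Rule B: one person, one vote per item; every vote weighs the same.
        key = (voter_hash, item_id)
        if key in self.seen:
            return False
        self.seen.add(key)
        self.tallies[item_id] = self.tallies.get(item_id, 0) + value
        return True

polls = VotingSystem(secret_salt="rotate-me-regularly")
print(polls.cast("alice", True, "post-42", +1))    # True: counted
print(polls.cast("alice", True, "post-42", +1))    # False: duplicate (rule B)
print(polls.cast("bot-77", False, "post-42", -1))  # False: unverified (rule D)
```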

Facebook

Facebook is a big offender in my view: it already violates rules C and F, faces mounting accusations of censorship outside the U.S., and is now trying to review content manually.

The American public had a right to know that specific Facebook posts were paid propaganda, but Facebook hasn’t been so transparent about boosted posts.

Moreover, if you secretly liked Hillary while living in Trump land, would you upvote content if your friends could see your likes? Studies have demonstrated that being watched changes your behavior, without you even knowing it. (For a Facebook fix, check out this community post.)

Google Search

Google has been testing user moderation in search for many years and is now under pressure from two sides: more voices call on it to moderate content, while the company is increasingly accused of censorship.

Google allows users to report inappropriate search predictions, edit the knowledge graph, edit the map data, and so forth.


Given the stake Google has in protecting search results from manipulation, and its constant fight against SEOs seeking to game rankings, imagine how quickly rule E would be violated if Google allowed users to influence search rankings directly.

For Google, the news section, trending stories, and social media results are troublesome to deal with. Regular organic search works pretty well, so long as the results are not being manipulated by humans.

What about self-manipulation or moderation of search results? The European Union just fined Google $2.7 billion for skewing search results in its own favor, yet Germany is now threatening social media companies with fines if they don’t remove certain content. Damned if you do; well, you know the rest.

So what is the difference between Germany and China, when both governments want to enforce their idea of good and evil online?

To the Credit of Reddit

Reddit as a platform implements nearly all of my list of requirements for a fair system of free speech, and so I give it high marks.

However, Reddit is also a victim of its own popularity and has become a battleground, as large organized networks have sought to exploit the forum’s fair voting system.

This kind of system exploitation has occurred on both the left and the right of the U.S. political spectrum. For example, in the past year, tens of thousands of pro-Trump users organized to mass-upvote news favorable to the right.

As a result, they took over Reddit’s front page. This forced Reddit to curb the power of pro-Trump users over r/all, because it turned off people who disagreed; and if the election was any indication, that is roughly half of the country.

I can imagine this was a challenging situation for Reddit, a for-profit business. In trying to save their company, they were forced to bend rules B and E, and related cases even led to apologies from Reddit’s CEO.

The left has responded in kind with a tactic dubbed ‘brigading’: the organized mass-downvoting of content. At one point, someone went as far as compiling a list of Reddit users who posted on pro-Trump subreddits, and conspiracy theories ensued when users complained that they were being targeted and deplatformed on various subreddits.

Other tactics involved bots, as the screenshot below illustrates:

Reddit being manipulated by a bot. | Via Imgur

This manipulation is clearly against the terms of service, and it remains a constant battle for Reddit. Despite the fair criticism it faced, I have to give Reddit credit.

Reddit is a collection of smaller free speech zones and mostly succeeds at achieving a fair voting system, so long as people remain in their zones.

In some ways, Reddit’s system reflects that of a democracy. Even if Reddit fails at moderating the common areas with the highest standards, at least it does not prohibit users from venturing into subreddits where their views can be challenged.

And if that is not good enough, you can always try voat.co or other alt-media sites.

In Conclusion

Deplatforming on a massive scale is tantamount to censorship, and no current AI is smart enough to know good and evil. If we ever create an AGI smart enough to know good and evil, we will have to deal with the supremacy problem; if we never do, we are back to the equivalent of the human moderation option.

Of all the technology available today, a fair user voting system is the closest we can get to a balanced system of online expression. However, user voting is tough to get right, and even if you get it right, it is prone to manipulation and hard to keep fair.

The difficulty of foreseeing all the consequences of our actions reminds me of a magic lamp story. A man wished for so much gold that he would never have to work another day in his life. His wish was fulfilled: he choked under a pile of gold so high it towered above his home. I suppose the best part of being dead is that you don’t have to go to work anymore.

Moral of the story? Be careful what you wish for!

This opinion article is sure to spark strong opinions. Please share them freely in the comment section.
