
Viral AI Tool ImageNet Roulette Criticized for Being Racist

ImageNet Roulette, an AI project designed to understand how machine learning systems view humans, is now being criticized for being racist.

Image courtesy of Pixabay

For the last couple of days, online users have been criticizing an AI tool, ImageNet Roulette, for using racial slurs to describe people. But can artificial intelligence be racist?

What is ImageNet Roulette and How Does it Work?

ImageNet Roulette was developed by artificial intelligence researcher Kate Crawford and artist Trevor Paglen. The purpose of the AI tool is simple: to help us see technology for what it is.

With this goal in mind, the developers trained the ImageNet Roulette algorithm on the photos of humans in ImageNet, a dataset that includes over 14 million photographs of humans and objects.

So, how does it all work?

Well, users who visit the site simply upload a photo for the AI to identify. ImageNet Roulette then labels the face using one of the 2,833 subcategories of people that exist within ImageNet’s taxonomy.
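In rough terms, that labeling step works like any image classifier: the model scores the uploaded photo against every person subcategory and returns the best match. The sketch below illustrates the idea only; it is not ImageNet Roulette's actual code, and the labels and scores are hypothetical stand-ins for the 2,833 real subcategories.

```python
# Minimal sketch of the labeling flow described above (hypothetical,
# not ImageNet Roulette's real implementation).

from typing import Dict

# A tiny stand-in for ImageNet's 2,833 "person" subcategories.
PERSON_LABELS = ["weatherman", "widower", "pilot", "adult male"]

def classify_person(scores: Dict[str, float]) -> str:
    """Return the highest-scoring person label, as a classifier would
    after scoring an uploaded photo against every subcategory."""
    return max(scores, key=scores.get)

# Simulated model output for one uploaded photo.
fake_scores = {"weatherman": 0.12, "widower": 0.05,
               "pilot": 0.71, "adult male": 0.12}
print(classify_person(fake_scores))  # prints "pilot"
```

The key point is that the model can only ever answer with one of the labels humans put into the taxonomy, which is why the quality of those labels matters so much.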

While the exercise was fun for some users, others found it disconcerting. The AI’s labels ranged from seemingly harmless words such as “weatherman,” “widower,” “pilot,” and “adult male” to outright racial slurs.

A reporter for the Guardian wrote:

“I don’t know exactly what I was expecting the machine to tell me about myself, but I wasn’t expecting what I got: a new version of my official Guardian headshot, labeled in neon green print: ‘gook, slant-eyed.’ Below the photo, my label was helpfully defined as ‘a disparaging term for an Asian person.’”

How Can Racist Humans Create Biased Machines?

It turns out that was the exact outcome Crawford and Paglen had in mind when they created the machine.

ImageNet Roulette is based on a flawed dataset that was labeled by imperfect, underpaid human workers. As a result, its output reflects the thoughts and beliefs of the people who created the dataset.
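The mechanism is easy to see in a toy example (the data here is made up, not drawn from ImageNet): a simple nearest-neighbor "model" just echoes whatever label the closest training example was given, so a biased label supplied by an annotator resurfaces verbatim at prediction time.

```python
# Toy illustration: a 1-nearest-neighbor model reproduces whatever
# labels its human annotators supplied, biased or not.
# All features and labels below are invented for illustration.

def nearest_label(training, query):
    """Return the label of the training point closest to the query."""
    return min(training, key=lambda item: abs(item[0] - query))[1]

# (feature, label) pairs as annotated by imperfect humans;
# one annotation is deliberately flawed.
training_data = [(0.1, "pilot"), (0.9, "offensive slur")]

print(nearest_label(training_data, 0.85))  # echoes the flawed label
```

No part of the model is "racist" on its own; it simply has no way to produce better answers than the labels it was trained on.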

In a statement, Crawford and Paglen wrote:

“Looking at the images in this collection, and seeing how people’s personal photographs have been labeled, raises two essential questions: where are the boundaries between science, history, politics, prejudice and ideology in artificial intelligence? And who has the power to build and benefit from these systems?”

ImageNet Roulette demonstrates how we can unintentionally feed our political beliefs into a technical system. In other words, the machine became racist and misogynistic because the humans who labeled its training data were.

AI still requires vast amounts of data to make intelligent decisions. But when fed biased information, it can also be used to twist facts to serve the interests of a few individuals.

Read More: New Artificial Intelligence Technique to Forecast Volcanic Eruptions


Sumbo Bello

Sumbo Bello is a creative writer who enjoys creating data-driven content for news sites. In his spare time, he plays basketball and listens to Coldplay.
