
How Machine Learning Trains AI to be Sexist (by Accident)

SelimBT | Shutterstock.com

Without question, the world is moving toward automation and artificial intelligence, and these technologies pick up some of our most deeply ingrained prejudices.

While scenarios like The Matrix and I, Robot might seem far-fetched, AIs really do pick up behaviors from us humans. As a result, some AIs adapt usefully to new information. Others, regrettably, can become racist, murderous, and/or sexist.

Can we prevent AIs from learning negative behaviors and traits?

The Fifth Element | Giphy

Leeloo Dallas Multipass or How AIs Learn 

If you’re a fan of science fiction, you’ve no doubt seen the movie The Fifth Element. Milla Jovovich plays a perfect being created from a strand of alien DNA who learns about humanity and our history after she is “born”.

Artificial intelligences learn in much the same way, from videos, images, words, and data of all kinds. Because they absorb knowledge this way, the lessons can get a bit muddled; human history is muddled, too.

As we saw with Microsoft’s Tay bot, humans can manipulate anything with machine learning capabilities if they really want to. But in this instance, nobody manipulated how or what the AI learned.

A University of Washington research team studied how computer vision algorithms handle gender prediction. Trained on a classic image data set of the kind commonly used in prediction experiments, the neural network assumed that the people performing traditionally “female” tasks in the images were women.

You know the kind of tasks: cooking and the like. The problem is that the image could show a balding man in a kitchen and the AI would still predict a woman. With any predictive algorithm, mitigating these biases matters.
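To see how this kind of data-driven bias arises, consider a toy predictor that labels gender purely from the majority statistics of its training scenes. The following is a minimal, hypothetical sketch in Python, not the study’s actual model or data; the scene names and counts are invented for illustration.

```python
from collections import Counter

# Toy labeled examples of (scene, gender). The data is deliberately skewed:
# most kitchen scenes happen to show women. All counts are invented.
train = (
    [("kitchen", "woman")] * 40 + [("kitchen", "man")] * 10 +
    [("garage", "man")] * 45 + [("garage", "woman")] * 5
)

# Tally how often each gender appears in each scene.
by_scene = {}
for scene, gender in train:
    by_scene.setdefault(scene, Counter())[gender] += 1

def predict(scene):
    """Predict gender as the majority label seen for this scene."""
    return by_scene[scene].most_common(1)[0][0]

# Anyone photographed in a kitchen gets labeled "woman",
# balding man or not; the bias comes straight from the data.
print(predict("kitchen"))  # -> woman
```

In a data set skewed like this, the majority statistic is the bias; the model never gets a reason to look at the person instead of the room.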

These biases emerged from one of five major areas in which machine learning acquires bias:

  • Bias through interaction
  • Emergent bias
  • Data-driven bias
  • Conflicting goals bias
  • Similarity bias

Of course, these kinds of biases aren’t new, and people are already working to address them. Unfortunately, this particular outcome reveals something far more troubling.

Sexist Predictions Magnified into Misclassification

It’s bad enough that, in the training images, women are 33 percent more likely than men to appear cooking in a kitchen. But the skewed data isn’t the biggest concern. The problem is that the neural network amplifies these biases, leading to even more misclassification.

MIT Technology Review reports: “So, trained on that data set, an AI was 68 percent more likely to predict a woman was cooking and did so even when an image was clearly of a balding man in a kitchen.”
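One common way to quantify this effect is to compare how often “cooking” pairs with “woman” in the training labels versus in the model’s predictions. Below is a rough sketch assuming a simple co-occurrence bias score; the study’s exact metric may differ, and the counts are chosen only to mirror the figures quoted above, not taken from the paper.

```python
def gender_ratio(woman_count, man_count):
    """Fraction of cooking images attributed to women."""
    return woman_count / (woman_count + man_count)

# Training labels: women appear in cooking images 33 percent more often
# than men, e.g. 400 images vs. 300 (400/300 = 1.33). Illustrative counts.
data_bias = gender_ratio(400, 300)    # about 0.57

# Model predictions on the same 700 images: "woman" 68 percent more often
# than "man", e.g. 439 vs. 261 (439/261 = 1.68). Illustrative counts.
model_bias = gender_ratio(439, 261)   # about 0.63

amplification = model_bias - data_bias
print(f"dataset bias:  {data_bias:.2f}")
print(f"model bias:    {model_bias:.2f}")
print(f"amplification: {amplification:+.2f}")  # positive means bias amplified
```

A positive amplification score means the trained network is more skewed than the data it learned from, which is exactly the misclassification problem described above.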

This data set is just one of many image catalogs used to train machine learning algorithms. Imagine how high that percentage could climb if the network learned from more data sets with the same skew.

Why Is This a Problem?

It might not be a huge issue if this algorithm is used for targeting social media ads. However, if this algorithm is used in predictive crime software, that amplification could turn problematic — even deadly.

An Equitable Future for AI & Humans

Researchers at prestigious institutions such as MIT, Stanford, and Harvard have expressed concern. Biases based on gender, ethnicity, and other criteria are all in play. After all, given how machine learning currently works, machine bias is human bias.

Some say we should establish an AI watchdog to guard against unfair and discriminatory practices. Others in the scientific community have taken a more scholarly approach to the matter.

The AI Now Initiative is dedicated to determining the long-term social implications and effects of artificial intelligences (and their potential biases). Focused on liberties, bias, inclusion, automation, labor, and other issues shaping AI’s future, the research group aims to work across multiple disciplines to better understand the social impacts of humans on AI and vice versa.

Would you choose an AI watchdog as a solution to machine learning bias, or something else?

Juliet Childers

Content Specialist and EDGY OG with a (mostly) healthy obsession with video games. She covers industry buzz including VR/AR, content marketing, cybersecurity, AI, and more.
