
Researchers Propose a Way to Reduce Bias in Computer Vision

Image credit: Zapp2Photo / Shutterstock.com

Computer scientists at Princeton and Stanford University have devised a way to address the problem of bias in computer vision. It involves improving the training data itself.

ImageNet, a database of over 14 million images, plays a crucial role in computer vision. It serves as the source of training data for machine learning algorithms that classify or recognize elements within the photos.

ImageNet’s scale is so vast that the team had to automate image collection and crowdsource image annotation. Predictably, this approach to building the database has unintended consequences, and one of them is bias.

Olga Russakovsky, an assistant professor of computer science at Princeton, explained:

“Computer vision now works really well, which means it’s being deployed all over the place in all kinds of contexts. This means that now is the time for talking about what kind of impact it’s having on the world and thinking about these kinds of fairness issues.”

In a new paper, the ImageNet team systematically identified non-visual concepts and offensive categories and proposed their removal. These include racial and sexual categorizations in ImageNet’s person categories.

Also, the team designed a tool that lets users specify and retrieve image sets of people balanced by age, gender expression, or skin color. That way, algorithms can classify people’s faces and activities in images more fairly.
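
The paper describes the goal of that tool rather than its code, but a minimal sketch helps show what demographic rebalancing of a training set could look like. Everything below, including the annotation field names ("age", "gender_expression", "skin_color"), the sampling strategy, and the `rebalance` function, is a hypothetical illustration, not the ImageNet team's actual interface.

```python
# Minimal sketch of demographic rebalancing of a training set.
# Field names and sampling strategy are assumptions for illustration,
# not the ImageNet team's API.
import random
from collections import defaultdict

def rebalance(images, attribute, per_group, seed=0):
    """Return a subset with roughly equal images per value of `attribute`.

    images: list of dicts, e.g.
        {"path": "img_001.jpg", "age": "adult",
         "gender_expression": "feminine", "skin_color": "dark"}
    attribute: which annotation to balance on.
    per_group: how many images to keep for each attribute value.
    """
    groups = defaultdict(list)
    for img in images:
        groups[img[attribute]].append(img)

    rng = random.Random(seed)
    balanced = []
    for value, group in groups.items():
        # Keep everything if the group is small; otherwise sample down.
        take = group if len(group) <= per_group else rng.sample(group, per_group)
        balanced.extend(take)
    return balanced

# Example use (hypothetical data): draw up to 100 images per skin-color
# annotation for a single ImageNet category.
# subset = rebalance(category_images, attribute="skin_color", per_group=100)
```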

“There is very much a need for researchers and labs with core technical expertise in this to engage in these kinds of conversations,” said Russakovsky.

ImageNet and Biases in Computer Vision

Back in 2009, a group of computer scientists from Stanford and Princeton launched ImageNet as a database for academic researchers.

After that, the team wanted to encourage other researchers to build better computer vision algorithms using ImageNet. So, they created the ImageNet Large Scale Visual Recognition Challenge.

So, how did the fairness issue arise?

According to reports, some of the biases in ImageNet come from the pipeline used to build the database.

The image categories came from WordNet, an older database of English words used for natural language processing research. While the words are clearly defined in verbal terms, many of them do not translate well into visual imagery.

For example, a term that describes an individual’s religion tends to retrieve only the most distinctive image results. And this could potentially lead to algorithms that perpetuate stereotypes.
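
To make the WordNet connection concrete, here is a small sketch that enumerates sub-categories of "person" using NLTK's WordNet corpus. The article does not say the team used NLTK, so treat the tooling as an assumption; the point is only that every descendant synset of "person", visual or not, was a candidate ImageNet category.

```python
# Sketch of where ImageNet's labels come from: WordNet "synsets".
# Uses NLTK's WordNet corpus purely as an illustration; the article does
# not specify this tooling. Requires: pip install nltk; nltk.download("wordnet")
from nltk.corpus import wordnet as wn

person = wn.synset("person.n.01")

# Every transitive hyponym (sub-category) of "person" was a candidate
# ImageNet category, including terms that are well defined verbally but
# have no reliable visual signature.
descendants = list(person.closure(lambda s: s.hyponyms()))
print(len(descendants), "sub-categories under 'person'")
for synset in descendants[:10]:
    print(synset.name(), "-", synset.definition())
```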

The crowdsourcing process may have contributed even more to the bias.

After collecting an extensive set of candidate images, the team paid workers to verify them. This step also introduced biases and inappropriate categorizations.

“When you ask people to verify images by selecting the correct ones from a large set of candidates, people feel pressured to select some images, and those images tend to be the ones with distinctive or stereotypical features,” said lead author Kaiyu Yang, a graduate student in computer science.

Now, the ImageNet team is updating its hardware and database to include the rebalancing tool developed in the research. The team is also filtering out the problematic person categories.

Read More: Viral AI Tool ImageNet Roulette Criticized For Being Racist

Sumbo Bello

Sumbo Bello is a creative writer who enjoys creating data-driven content for news sites. In his spare time, he plays basketball and listens to Coldplay.
