
New AI System Can Detect and Highlight Help Speech

Researchers from Carnegie Mellon University developed an AI system that highlights help speech to fight hateful content online.

OneSideProFoto / Shutterstock.com


A team of researchers has developed an AI system to counter hate speech by detecting and highlighting help speech.

According to the United Nations, the Rohingya people are one of the most persecuted minorities in the world.

Not only are they denied citizenship under the 1982 Myanmar nationality law, but they are also denied freedom of movement and access to state education. What’s more, Rohingya refugees are subject to online attacks in the form of hate speech.

Unfortunately, many Rohingya are not proficient in global languages such as English, and they have limited access to the internet. Since they spend the bulk of their time simply trying to survive, Rohingya refugees can’t just log into Twitter to post their own content.

Now researchers at Carnegie Mellon University's Language Technologies Institute (LTI) have found a way to give voice to the voiceless.

They’re using artificial intelligence to analyze hundreds of thousands of comments on social media sites. The researchers hope to identify and highlight help speech to counter the hateful content.

Ashiqur R. KhudaBukhsh, a post-doctoral researcher in the LTI who conducted the study, said:

“Even if there’s lots of hateful content, we can still find positive comments.”

By finding and highlighting these positive comments, the researchers hope to make the internet a safer and healthier place.

Here’s how the system works.

Using AI To Find Relevant Help Speech On Social Media Sites

In what they’re calling the first AI-focused analysis of the Rohingya refugee crisis, the researchers analyzed over 250,000 comments from YouTube.

The language model uses previous examples to predict the words that are likely to occur in a given sequence. That way, the system can understand what social media users are trying to say.

While recent improvements in language models make an analysis of this magnitude possible, the CMU team also innovated further. They devised a way to apply these models to the short social media texts common in South Asia.
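A toy next-word predictor gives a feel for what such a language model does. This is an illustrative bigram sketch, not the CMU team's actual model; the sample corpus and every word in it are made up:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for previously seen comments (illustrative only;
# the CMU system trains far larger models on real social media text).
corpus = [
    "please help the refugees",
    "we must help the rohingya",
    "help the rohingya people",
    "support the refugees now",
]

# Count bigrams: how often each word follows the previous one.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, or None if the word is unseen."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("help"))  # "the" — it follows "help" in every example above
```

Real systems replace these raw bigram counts with neural models, but the principle is the same: previous examples determine which words are likely to occur next in a given sequence.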

An initial random sampling of the YouTube comments suggested that about 10 percent of them were positive. However, when the researchers used their method to search specifically for help speech, the share of positive comments among the retrieved results rose to 88 percent.
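To see why targeted search can raise the positive share so sharply compared with random sampling, here is a minimal sketch on synthetic data. The comments, labels, and keyword scorer are all invented for illustration; the real study used learned language models, not a keyword list:

```python
import random

random.seed(0)

# Synthetic pool: 10% help speech (label 1), 90% other (label 0).
comments = [("help the rohingya", 1)] * 100 + [("unrelated chatter", 0)] * 900
random.shuffle(comments)

HELP_WORDS = {"help", "support", "protect"}  # illustrative scorer only

def score(text):
    """Count help-related words in a comment."""
    return sum(word in HELP_WORDS for word in text.split())

# Random sampling: positives show up at roughly the 10% base rate.
sample = comments[:100]
print(sum(label for _, label in sample) / len(sample))

# Targeted search: rank all comments by score and take the top 100.
ranked = sorted(comments, key=lambda c: score(c[0]), reverse=True)[:100]
print(sum(label for _, label in ranked) / len(ranked))  # 1.0 on this toy data
```

The ranking step concentrates the rare positive comments at the top of the list, which is why a retrieval-style method can surface far more help speech than manual random inspection.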

This indicates that the CMU team’s method could substantially reduce the manual process of finding texts on social media, KhudaBukhsh said.

But, there’s a downside to finding pro-Rohingya texts on social media. According to the researchers, some “help” text contains hateful language against the alleged persecutors.

For example, a YouTube comment reads: Antagonists of the Rohingya are “really kind of like animals not like human beings, so that’s why they genocide innocent people.”

So, while the method may reduce the manual process, the AI system still requires human judgment, the researchers concluded.

Read More: Using Machine-Learning Algorithm to Spot Online Trolls


Sumbo Bello

Sumbo Bello is a creative writer who enjoys creating data-driven content for news sites. In his spare time, he plays basketball and listens to Coldplay.
