
New AI Tool Can Detect and Counter Online Abuse

UVgreen / Shutterstock.com

Researchers are developing an artificial intelligence tool that could help fight online abuse, especially on social media.

Researchers at the University of Sheffield are developing an AI tool to detect and counter online abuse, particularly on social media.

Social media allows us to communicate quickly and easily with family and friends.

In addition to sharing opinions and beliefs, we can broadcast conversations widely enough for them to go viral. Unfortunately, abusers exploit that same reach.

Online abuse can take several forms, from cyberbullying and trolling to hate speech. Recently, these actions have sparked public outrage, and people now expect social media platforms to do more to tackle the issue.

In a statement to the press, Professor Kalina Bontcheva, a researcher in the University of Sheffield’s Department of Computer Science, said:

“There has been a huge increase in the level of abuse and hate speech online in recent years, and this has left governments and social media platforms struggling to deal with the consequences.”

So, Bontcheva collaborated with a colleague at Simon Fraser University in Canada, Wendy Hui Kyong Chun, to address the issue. Together, they’re developing artificial intelligence and natural language processing (NLP) methods to tackle abuse and hate speech online.

Here’s how it works

Developing an NLP and AI Tool to Fight Online Abuse

The current moderation systems on social media are not without biases. Because they rely on rigid definitions and detection of abusive language, these systems can create new forms of discrimination.

So Bontcheva and colleagues will examine the AI methods currently used to detect online abuse and hate speech. Then, they’ll use the results to develop an effective and unbiased algorithm.

Along with being context-aware, the system will also respect language differences within communities based on gender, sexuality, race, and ethnicity.
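To see why context-blindness matters, here is a toy illustration of our own (not the Sheffield researchers’ code): a rigid keyword filter flags any message containing a listed word, so it cannot tell a direct insult apart from self-deprecation or quotation. The lexicon and function names are invented for this sketch.

```python
# A deliberately naive, context-blind moderation filter.
# Illustrative only -- not the system described in the article.
ABUSE_LEXICON = {"idiot", "stupid", "trash"}

def rigid_flag(text: str) -> bool:
    """Flag a message if any lexicon word appears, ignoring context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not ABUSE_LEXICON.isdisjoint(words)

print(rigid_flag("You are an idiot"))                # True: a direct insult
print(rigid_flag("I felt like an idiot for missing the bus"))  # True: self-deprecation, a false positive
print(rigid_flag("Have a great day"))                # False
```

The second call shows the failure mode the researchers describe: the filter flags harmless self-directed language exactly as it flags abuse, and the same rigidity penalizes community-specific language use. A context-aware model would instead weigh who is addressed and how the term is used.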

The project focuses on two areas — the gaming industry and messages directed at politicians on social media. However, the researchers intend to make the new tools open source to empower other users.

Bontcheva noted:

“We are developing novel AI and NLP methods to address the problem while also developing a substantial program of training for academics and early career researchers to build capacity and expertise in this key area of research.”

The project was funded by UK Research and Innovation (UKRI) as one of 10 UK-Canada projects to support the responsible development of AI.

Read More: New MIT Automated System can Update Wikipedia Articles


Sumbo Bello

Sumbo Bello is a creative writer who enjoys creating data-driven content for news sites. In his spare time, he plays basketball and listens to Coldplay.
