
New Algorithm Can Distinguish Cyberbullies on Twitter

Image courtesy of Shutterstock

Researchers have developed a new machine-learning algorithm that can identify cyberbullies on Twitter with 90 percent accuracy.

Harmful actions on social media are often ambiguous. They usually take the form of a seemingly superficial comment or criticism. As a result, researchers have found it challenging to develop tools that can effectively detect such actions.

So, a team of researchers from Binghamton University, including computer scientist Jeremy Blackburn, decided to address this issue. They analyzed the behavioral patterns of abusive Twitter users and compared them with those of their non-abusive counterparts on the platform.

In a statement about the project, Blackburn said:

“Our research indicates that machine learning can be used to detect users that are cyberbullies automatically, and thus could help Twitter and other social media platforms remove problematic users.”

Here’s how the Binghamton team developed the machine learning algorithm.

Developing an Algorithm to Identify Cyberbullies on Twitter

The researchers first built crawlers – programs that collect data from Twitter through a variety of mechanisms. That way, they were able to collect user information from the platform.

This information included tweets, profiles, and other social-network data, such as who each user follows and who follows them.
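The kind of per-user snapshot the crawlers assembled can be sketched as follows. This is an illustrative data structure only – the field names and the `collect_user`/`fetch` functions are assumptions for exposition, not the Binghamton team's actual code or the Twitter API:

```python
from dataclasses import dataclass, field

# Hypothetical record for the per-user data a crawler might gather:
# tweets, profile details, and follow relationships.
@dataclass
class UserSnapshot:
    handle: str
    tweets: list = field(default_factory=list)
    profile: dict = field(default_factory=dict)
    following: set = field(default_factory=set)
    followers: set = field(default_factory=set)

def collect_user(handle, fetch):
    """Build a snapshot using a caller-supplied fetch(endpoint, handle)
    function, so the data source (API, scraper, test stub) is pluggable."""
    return UserSnapshot(
        handle=handle,
        tweets=fetch("tweets", handle),
        profile=fetch("profile", handle),
        following=set(fetch("following", handle)),
        followers=set(fetch("followers", handle)),
    )
```

Keeping the fetch mechanism pluggable mirrors the article's note that the crawlers used "a variety of mechanisms" to collect data.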

Next, the researchers performed sentiment analysis as well as natural language processing on the tweets. They also conducted various social network analyses of the connections between users.
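To give a flavor of what sentiment analysis does, here is a tiny lexicon-based scorer. The study used full sentiment-analysis and NLP toolkits, not a hand-picked word list like this one; the lexicons below are purely illustrative:

```python
# Minimal illustrative sentiment lexicons (not from the study).
NEGATIVE = {"hate", "stupid", "ugly", "die"}
POSITIVE = {"love", "great", "thanks", "nice"}

def sentiment_score(tweet: str) -> int:
    """Score a tweet as (#positive words - #negative words).
    Positive scores suggest friendly text, negative scores hostile text."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

Per-tweet scores like these become one of many features a machine-learning model can use to separate abusive users from ordinary ones.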

Using these data, the team developed algorithms to automatically classify offensive online behavior into two categories – cyberbullying and cyberaggression.
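The distinction between the two categories can be sketched with a simple rule of thumb: bullying is typically repeated and targeted, while aggression can be a one-off outburst. The actual system is a trained machine-learning classifier over many behavioral features; the thresholds below are assumptions made only to illustrate the two-way split:

```python
def label_user(tweet_scores, targets_one_account):
    """Illustrative two-category labeler (not the paper's trained model).
    tweet_scores: per-tweet sentiment scores (negative = hostile).
    targets_one_account: whether hostile tweets repeatedly aim at one user."""
    negatives = [s for s in tweet_scores if s < 0]
    if not negatives:
        return "normal"
    # Repeated hostility aimed at a single target reads as bullying;
    # scattered hostility reads as aggression.
    if len(negatives) >= 3 and targets_one_account:
        return "cyberbullying"
    return "cyberaggression"
```

A real classifier would learn such boundaries from labeled data rather than hard-coding them.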

The algorithm was able to identify abusive users on Twitter with 90 percent accuracy. Whether a user sent death threats or made racist remarks, the algorithm could spot all kinds of harassing behavior on the platform.

With that said, the researchers admitted that the system is reactive. That means it does not inherently prevent bullying; it only identifies it so the platform can take action.

Blackburn noted:

“And the unfortunate truth is that even if bullying accounts are deleted, even if all their previous attacks are deleted, the victims still saw and were potentially affected by them.”



Sumbo Bello

Sumbo Bello is a creative writer who enjoys creating data-driven content for news sites. In his spare time, he plays basketball and listens to Coldplay.
