Researchers at the California Institute of Technology may have found a more efficient way to spot online trolls using a machine-learning algorithm.
Since the beginning of the web, online trolls have been a source of distress to civil internet users. They exist to inflame any discussion, even igniting personal attacks on other social media users for sharing their views.
While trolling starts with making others angry, it could escalate into bullying, psychologically-damaging harassment, or even death threats.
Expectedly, social media platforms monitor online interactions to prevent online harassment. The process requires the rapid detection of offensive or harassing social media posts.
However, current methods of analyzing social media data are far from adequate. They rely either on uninterpretable automation or on a static set of keywords that quickly becomes outdated.
Maya Srikanth, a junior at Caltech, explained:
“It isn’t scalable to have humans try to do this work by hand, and those humans are potentially biased. On the other hand, keyword searching suffers from the speed at which online conversations evolve.”
So, the Caltech team demonstrated how a GloVe (Global Vectors for Word Representation) model could help discover new and relevant keywords.
Using GloVe to Identify Online Trolls
GloVe is a word-embedding model. It represents words as points in a vector space, where the distance between two words reflects their semantic or linguistic similarity.
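The idea of measuring word relatedness as distance in a vector space can be sketched with a toy example. The snippet below uses invented four-dimensional vectors and cosine similarity purely for illustration; real GloVe embeddings are trained on large corpora and typically have 50 to 300 dimensions.

```python
import math

# Toy "embeddings" for illustration only -- these values are invented,
# not actual GloVe vectors.
embeddings = {
    "metoo":            [0.90, 0.80, 0.10, 0.00],
    "supportsurvivors": [0.85, 0.75, 0.20, 0.10],
    "notsilent":        [0.80, 0.70, 0.15, 0.05],
    "weather":          [0.00, 0.10, 0.90, 0.80],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: values near 1.0
    mean the words point in almost the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(word, k=2):
    """Return the k words whose vectors lie closest to `word`'s vector."""
    query = embeddings[word]
    scores = [(other, cosine_similarity(query, vec))
              for other, vec in embeddings.items() if other != word]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:k]

# The related hashtags rank far above the unrelated word "weather".
print(nearest("metoo"))
```

This nearest-neighbor lookup is how a keyword search could stay current: starting from one seed term such as "metoo", the model surfaces the newly emerging terms that cluster around it.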
For example, when the team searched Twitter for uses of #MeToo in conversations, they discovered a cluster of related hashtags. These include “SupportSurvivors,” “ImWithHer,” and “NotSilent.”
Aside from showing how individual conversations are related to a topic of interest, GloVe also provides context. Thanks to the machine-learning model, the researchers understood how social media users use certain words.
For example, they noted that users on a Reddit forum dedicated to misogyny used the word “female” in close association with the words “sexual,” “negative,” and “intercourse.” Meanwhile, in Twitter posts about the #MeToo movement, the same word was associated with “companies,” “desire,” and “victims.”
Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences, said:
“It was an eye-opening experience about just how ugly trolling can get. Hopefully, the tools we’re developing now will help fight all kinds of harassment in the future.”
The researchers hope that their proof-of-concept could one day inspire a more powerful tool to spot online harassment.