Training Algorithms to Spot Online Trolls
Thu, October 21, 2021


Photo by: rawpixel via 123RF

Social media has been a godsend for many, but like all good things, it has a dark side: online trolls. Many users have been subjected to some form of trolling on social media. Trolls can be found in almost every corner of the web, aiming to create conflict on social media sites by making controversial statements. They start fights or upset people by posting off-topic or offensive messages in online communities.

Most of us have seen comments like “you don’t know what you’re talking about,” “just shut up,” or “you don’t belong here” on a viral post. Online trolls purposely say something controversial to get a rise out of other users. Visit a conservative or liberal Facebook page and there’s a good chance you’ll see endless, chaotic arguments, many of them built on wild and unwarranted opinions.

According to Time, an American weekly news magazine and news website published in New York City, online trolls have been steadily upping their game throughout the years. For instance, trolls invaded several Facebook memorial pages of deceased users back in 2011 to mock their deaths. In 2012, feminist Anita Sarkeesian received bomb threats at speaking engagements, doxing threats, rape threats, and more after starting a Kickstarter campaign to fund a series of YouTube videos chronicling misogyny in video games.

Today, social media is more toxic than ever. Users encounter trolls who not only post mean comments and start arguments but also harass people. A Pew Research Center survey revealed that 70% of Internet users aged 18 to 24 reported experiencing harassment, while 26% of women in that age group said they’d been stalked online. A separate 2014 study published in the psychology journal Personality and Individual Differences found that approximately 5% of Internet users who self-identified as trolls scored extremely high on the dark tetrad of personality traits: narcissism, psychopathy, Machiavellianism, and sadism.

But, above all, trolling is a political fight. Many politicians use online trolls to polarize public perception and spread fake news. It has become a powerful tool to promote racism, misogyny, and conservative views: trolls derisively call their adversaries “social-justice warriors,” and trolling has become the alt-right’s version of political activism. With online trolls growing in number every year, social media will remain a toxic place. But artificial intelligence may hold the key to hunting these trolls down.

Spotting Online Trolls Through Algorithms

The Internet can feel like a hellscape run by trolls, who flood online communities with toxic arguments and opinions. To address this issue, researchers at Caltech are using AI to detect trolls online more effectively. “The field of AI research is becoming more inclusive, but there are always people who resist change. It was an eye-opening experience about just how ugly trolling can get. Hopefully, the tools we’re developing now will help fight all kinds of harassment in the future,” Caltech researcher Anima Anandkumar said.

According to Technology Org, an online site that publishes information about various science and technology topics, the team presented the study in December 2019 at the Conference on Neural Information Processing Systems in Vancouver, Canada. They showed that machine learning algorithms can monitor online social media conversations as they evolve. In this way, social media platforms could have an effective, automated way to spot online trolling.

One of the major reasons trolling continues to spread is the lack of effective prevention. In most cases, automated systems look for negative posts or keywords and flag them to either be handled by human moderators or dealt with automatically. This approach has proven ineffective, judging by the number of trolls still proliferating across social media platforms.
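To make that limitation concrete, here is a minimal sketch of the kind of keyword-based flagging the article describes. The keyword list, function name, and example comments are hypothetical illustrations, not taken from any real moderation system.

```python
# Minimal sketch of static keyword-based flagging. The keyword list is a
# hypothetical illustration, not from any real moderation system.

FLAGGED_KEYWORDS = {"shut up", "you don't belong here", "idiot"}

def flag_for_review(comment: str) -> bool:
    """Flag a comment for moderators if it contains a known keyword."""
    text = comment.lower()
    return any(keyword in text for keyword in FLAGGED_KEYWORDS)

# The weakness: sarcasm, new slang, and shifting meanings slip through,
# because the list is static while the conversation keeps evolving.
print(flag_for_review("Just shut up, you don't know anything"))  # True
print(flag_for_review("Brand-new slang the list has never seen"))  # False
```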

“It isn’t scalable to have humans try to do this work by hand, and those humans are potentially biased. On the other hand, keyword searching suffers from the speed at which online conversations evolve. New terms crop up and old terms change the meaning, so a keyword that was used sincerely one day might be meant sarcastically the next,” said Michael Alvarez, a professor of political science at Caltech and co-author of the study.

Thus, the Caltech researchers used a model called Global Vectors for Word Representation, or GloVe, which does a better job of learning new keywords than older systems. According to Science Times, an online site that covers the latest research, discoveries, and scientific breakthroughs for science enthusiasts, GloVe helps the system learn the context of online text and detect the actual meaning of a troll’s post by analyzing not only specific keywords but also the words related to them. For instance, searching for “#MeToo” surfaces related hashtags like “SupportSurvivors,” “ImWithHer,” and “NotSilent.”
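As a rough illustration of how GloVe-style embeddings surface related terms, the sketch below loads a pretrained GloVe vector file and ranks the vocabulary by cosine similarity to a query word. The file name glove.6B.100d.txt refers to the GloVe project’s standard pretrained release and must be downloaded separately; the query word is an arbitrary example, not from the study.

```python
# Sketch: find the terms whose GloVe vectors sit closest to a query word.
# Assumes the standard pretrained file "glove.6B.100d.txt" (one word
# followed by its vector per line), downloaded from the GloVe project.
import numpy as np

def load_glove(path):
    """Read a GloVe text file into a word -> vector dictionary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.split()
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

def nearest_neighbors(vectors, query, k=5):
    """Rank the vocabulary by cosine similarity to the query word."""
    q = vectors[query]
    q = q / np.linalg.norm(q)
    scores = {
        w: float(v @ q / np.linalg.norm(v))
        for w, v in vectors.items() if w != query
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

vectors = load_glove("glove.6B.100d.txt")
print(nearest_neighbors(vectors, "harassment"))
```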

Aside from that, GloVe can also reveal how these keywords are used within a sentence or phrase. In a Reddit forum dedicated to misogyny, for example, the word “female” was used in close association with the words “sexual,” “negative,” and “intercourse.”

Predicting Online Trolling Before It Happens

Through Caltech’s new study, it will become easier for moderators to flag toxic and abusive comments. But AI can also spot online trolls before they attack. Dr. Srijan Kumar, a postdoctoral research fellow in computer science at Stanford University, is using AI to counteract trolling by addressing online misbehavior. Dr. Kumar develops statistical analysis, graph mining, embedding, and deep learning-based methods to characterize what normal behavior looks like; this technique, already in use on the Indian e-commerce platform Flipkart, can then identify abnormal or malicious behavior.
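The article does not detail Dr. Kumar’s models, so the following is only a generic sketch of the underlying idea, learning what normal behavior looks like and flagging deviations, using scikit-learn’s IsolationForest on invented per-user activity features.

```python
# Generic sketch of "characterize normal behavior, then flag deviations".
# This illustrates the idea only; it is NOT Dr. Kumar's actual method.
# The per-user features (posts/day, reply fraction, mean rating given)
# and all data below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_users = rng.normal(loc=[5.0, 0.4, 3.8], scale=0.5, size=(500, 3))
trolls = rng.normal(loc=[40.0, 0.9, 1.2], scale=0.5, size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_users)          # learn what "normal" activity looks like

print(model.predict(trolls))     # -1 marks a user as anomalous
```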

According to Fortune, an American multinational business magazine headquartered in New York City, Dr. Kumar can spot online trolling before it happens by characterizing users’ behavior and detecting bad actors before they hurt other users. He uses a method called REV2, which analyzes the user-review-product graph to identify fraudsters and then compares suspect reviews against previously identified cases of fake reviewers.
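As a heavily simplified sketch of the mutually recursive idea behind REV2, the toy code below updates each user’s fairness, each product’s goodness, and each rating’s reliability until they stabilize: unreliable ratings from unfair users count for less. The ratings data, update formulas, and [-1, 1] rating scale here are illustrative simplifications, not the published algorithm.

```python
# Toy sketch of a REV2-style fixed-point computation: fairness (users),
# goodness (products), and reliability (ratings) reinforce one another.
# Data and formulas are simplified illustrations; ratings lie in [-1, 1].
from collections import defaultdict

ratings = {("alice", "p1"): 1.0, ("bob", "p1"): 0.9,
           ("shill", "p1"): -1.0, ("alice", "p2"): -0.8,
           ("shill", "p2"): 1.0}

fairness = defaultdict(lambda: 1.0)            # per user, in [0, 1]
goodness = defaultdict(float)                  # per product, in [-1, 1]
reliability = {edge: 1.0 for edge in ratings}  # per rating, in [0, 1]

for _ in range(20):  # iterate until the scores settle
    for p in {p for _, p in ratings}:
        edges = [e for e in ratings if e[1] == p]
        goodness[p] = (sum(reliability[e] * ratings[e] for e in edges)
                       / sum(reliability[e] for e in edges))
    for (u, p), score in ratings.items():
        agreement = 1 - abs(score - goodness[p]) / 2
        reliability[(u, p)] = (fairness[u] + agreement) / 2
    for u in {u for u, _ in ratings}:
        edges = [e for e in ratings if e[0] == u]
        fairness[u] = sum(reliability[e] for e in edges) / len(edges)

# Users who consistently rate against consensus end up with low fairness.
print(sorted(fairness.items(), key=lambda kv: kv[1]))
```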

Also, a recent study by Dr. Kumar showed that it is possible to accurately predict when one online community will attack, harass, or troll another. “Thus, I created a deep learning-based model that uses the text and community structure to predict, with high accuracy, if a community is going to attack another. Such models are of practical use, as it can alert the community moderators to keep an eye out for an incoming attack,” he said.
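The published model is a deep learning architecture; the sketch below only illustrates the shape of the task, combining a text signal with a community-structure signal to classify whether an attack will follow. The features, the toy data, and the simple logistic-regression classifier are all hypothetical stand-ins.

```python
# Shape-of-the-task sketch: text features + community-structure features
# -> "attack coming?" classifier. All names and data are hypothetical;
# the published model is a deep learning architecture, not this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def features(text_embedding, member_overlap):
    """Concatenate a mean post embedding with a structural feature
    (fraction of users active in both communities)."""
    return np.append(text_embedding, member_overlap)

# Toy training set: 200 community pairs with 16-dim text vectors,
# labeled 1 if an attack followed. Purely synthetic for illustration.
X = np.stack([features(rng.normal(size=16), rng.random())
              for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:1]))  # estimated probability of an attack
```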

AI and algorithms can do wonders to help social media platforms not only flag toxic comments and arguments but also take down online trolls.

Photo by: Tero Vesalainen via 123RF