|A 2019 study showed that 53% of UK adult internet users reported seeing hateful content online in 2018, higher than the 47% in 2017. Of those who witnessed online hate, fewer than half took action / Photo by: Łukasz Stefański via 123RF|
Hate has become an unavoidable part of the internet. Although much of it could be prevented, many people still engage in online hate speech for various reasons. A 2019 study showed that 53% of UK adult internet users reported seeing hateful content online in 2018, up from 47% in 2017. Of those who witnessed online hate, fewer than half took action.
Research conducted by the Anti-Defamation League, a nonprofit that tracks and fights anti-Semitism, revealed that 53% of Americans reported being subjected to hateful speech and harassment in 2018. About 37% reported severe attacks such as sexual harassment and stalking. "This is an epidemic and it has been far too silent. We wanted to understand the extent of it and the impact of it," said Adam Neufeld, ADL's vice president of innovation and strategy.
The ADL survey found that 1 in 5 respondents had been subjected to physical threats online. Nearly as many encountered sexual harassment (18%), stalking (18%), or sustained harassment (17%). According to USA Today, an online site that delivers current local and national news, sports, entertainment, finance, technology, and more, some of those surveyed said that they were targeted because of their identity.
Respondents attributed the online abuse they experienced to their gender identity (20%), race or ethnicity (15%), sexual orientation (11%), religion (11%), occupation (9%), and disability (8%). Unfortunately, the effects of online hate can be long-lasting: according to the survey, 38% of respondents curtailed or changed their online habits as a result.
Online hate speech can target anyone, regardless of background. Some of the most recent attacks have been directed at the Rohingya people, who have faced decades of systematic discrimination, statelessness, and violence in Myanmar.
Online Hate Speech Against the Rohingya People
For decades, the Rohingya community has suffered tremendous abuses at the hands of Myanmar's security forces. Thousands have made perilous journeys out of Myanmar to escape the abuse and violence inflicted by their government. The latest mass departure began in August 2017, when an estimated 745,000 Rohingya, including more than 400,000 children, fled into Cox's Bazar, a town on the southeast coast of Bangladesh.
According to BBC News, an operational business division of the British Broadcasting Corporation responsible for the gathering and broadcasting of news and current affairs, militants of the Arakan Rohingya Salvation Army (ARSA) launched deadly attacks on more than 30 police posts that month, triggering the world's fastest-growing refugee crisis. Most of the refugees reached Bangladesh, seeking shelter and setting up camp with little to no access to aid, safe drinking water, food, shelter, or healthcare.
|For decades, the Rohingya community has suffered tremendous abuses at the hands of Myanmar's security forces. Thousands have made perilous journeys out of Myanmar to escape the abuse and violence inflicted by their government / Photo by: CAPTAIN RAJU via Wikimedia Commons|
While international organizations and many countries joined forces to provide basic assistance to the Rohingya community, the victims' suffering did not end there; it extended to the internet. In 2018, reports documented more than 1,000 posts, comments, and pornographic images attacking the Rohingya and other Muslims on Facebook. A UN investigator reported that Facebook was being used to incite violence and hatred against the Muslim minority group. The platform, she said, had "turned into a beast." Researchers and human rights activists stated that the platform was being used in Myanmar to promote racism and hatred of Muslims, in particular the Rohingya.
David Madden, a tech entrepreneur who worked in Myanmar, said he had warned Facebook many times. In 2015, he told Facebook officials that its platform was being exploited to foment hatred. "It couldn't have been presented to them more clearly, and they didn't take the necessary steps," Madden said.
How AI Can Help
Fortunately, artificial intelligence can help. Recently, researchers from Carnegie Mellon University in the US introduced an AI system that can rapidly analyze thousands of comments on social media and identify the fraction that defend or sympathize with voiceless groups. The aim is to counter hate speech directed at minorities such as the Rohingya community. With this tool, human social media moderators have the option to highlight this "help speech" in comment sections.
"Even if there's lots of hateful content, we can still find positive comments," Ashiqur R. KhudaBukhsh, a post-doctoral researcher who conducted the research, said.
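The idea of surfacing supportive comments rather than only removing hostile ones can be illustrated with a toy sketch. The keyword lists, function names, and scoring rule below are entirely hypothetical; the CMU system uses trained machine-learning models, not a fixed word list:

```python
# Toy illustration of surfacing supportive ("help speech") comments.
# The keyword lists and scoring heuristic are hypothetical stand-ins
# for the trained classifiers the researchers actually used.

SUPPORTIVE = {"support", "welcome", "help", "solidarity", "refuge", "safe"}
HOSTILE = {"hate", "leave", "attack"}

def help_speech_score(comment: str) -> int:
    """Crude score: supportive keyword hits minus hostile keyword hits."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & SUPPORTIVE) - len(words & HOSTILE)

def highlight_help_speech(comments, threshold=1):
    """Return the comments a moderator might surface first."""
    return [c for c in comments if help_speech_score(c) >= threshold]

comments = [
    "We should welcome and support the refugees",
    "They should leave",
    "Stay safe, we stand in solidarity with you",
]
print(highlight_help_speech(comments))  # keeps the two supportive comments
```

Even this crude filter shows the workflow the researchers describe: rank or flag positive comments so moderators can amplify them amid a flood of hostile content.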
The team believes that this is the first AI-focused analysis of the Rohingya refugee crisis. According to International Business Times, an online site that provides comprehensive content around the most important business, economic, and political news from around the world, the researchers used the AI system to search for anti-war "hope speech" with the help of a large dataset. They reported that the results were 88% positive.
The researchers also developed a tool that can apply the model to short social media texts from South Asia. "Short bits of text, often with spelling and grammar mistakes, are difficult for machines to interpret. It's even harder in South Asian countries, where people may speak several languages and tend to 'code switch,' combining bits of different languages and even different writing systems in the same statement," the study reported.
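One reason code-switched text is hard for machines is that a single short message can mix writing systems. A minimal sketch of spotting this, using Unicode character names as a rough proxy for script (a hypothetical heuristic, not the researchers' method):

```python
# Hypothetical sketch: flag possible code-switching by counting how many
# Unicode writing systems a short text mixes. Real systems need far more
# than this, but it shows why such text confuses naive tokenizers.
import unicodedata

def scripts_used(text: str) -> set:
    """Collect the first word of each letter's Unicode name
    (e.g. 'LATIN', 'BENGALI') as a rough proxy for its script."""
    found = set()
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            if name:
                found.add(name.split()[0])
    return found

def looks_code_switched(text: str) -> bool:
    """Flag text that mixes two or more writing systems."""
    return len(scripts_used(text)) >= 2

print(scripts_used("hello বন্ধু"))          # Latin plus Bengali characters
print(looks_code_switched("hello বন্ধু"))   # True
print(looks_code_switched("hello friend"))  # False
```

A message that trips this check would need language identification per span, not per document, before any hate- or help-speech classifier could be applied reliably.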
The team believes that by finding and highlighting positive comments toward the Rohingya community, they can help make the internet a safer, healthier place, complementing efforts to detect and eliminate hostile content or ban the trolls responsible for online hate. The system shows that the technology can help protect minorities.
|Recently, researchers from Carnegie Mellon University in the US introduced an AI system that can rapidly analyze thousands of comments on social media and identify the fraction that defend or sympathize with voiceless groups / Photo by: Kittipong Jirasukhanont via 123RF|