|It was reported that Facebook's AI and machine learning detected some prohibited content before the moderators identified it / Photo Credit: Shutterstock|
Facebook has been people’s go-to social media platform since it was launched in 2004. Reports show that as of Q3 2018, the platform had more than 2.375 billion monthly active users. Worldwide, 26.3% of the online population use Facebook, with people uploading 136,000 photos and updating their status 293,000 times per minute. The social media platform has enabled communication and conversation between people. It has also become an avenue where we can get the latest news.
However, Facebook has also faced numerous scandals. For instance, a whistleblower revealed in March 2018 that political consulting firm Cambridge Analytica had harvested Facebook data from more than 50 million users, a figure Facebook later revised to 87 million. The company failed to tell those users that their privacy had been violated. According to TRT World, an online site that aims to provide new perspectives on world events to a global audience, Cambridge Analytica used the information to advance its political agenda.
Facebook has also become a platform for spreading hate in Sri Lanka. In March 2018, the Sri Lankan government banned Facebook to quell anti-Muslim riots after fake news fuelled the violence. “On my instructions, my secretary has discussed with officials of Facebook, who have agreed that its platform will not be used for spreading hate speech and inciting violence,” Sri Lanka’s President Maithripala Sirisena said. Aside from that, the platform’s algorithms were found to surface anti-Muslim comments prominently.
Last year, the UN condemned Facebook for its role in inciting genocide in Myanmar. The country’s military used the platform to spread fake news and incite hatred towards Rohingya Muslims. It was reported that there had been more than 1,000 instances of posts in Burmese attacking Rohingyas and Muslims generally.
With fake news and hateful content spreading on Facebook, experts see this as an opportunity to ‘clean’ the platform. Mike Schroepfer, the company’s chief technology officer, stated that technology is the only way to prevent bad actors from taking advantage of the service. Artificial intelligence could greatly help.
Identifying 96.8% of Prohibited Content
While Facebook has rules and regulations meant to keep users from spreading fake news or hateful comments, enforcing them has not been very effective: moderating billions of users is extremely difficult. According to Wired, a monthly American magazine that focuses on how emerging technologies affect culture, the economy, and politics, algorithms have proved capable of helping to police Facebook.
While the company has been successful in detecting and blocking pornography and nudity, its existing software struggles to decode text. That is where AI comes in: Facebook needs systems capable of understanding the shifting nuances of more than 100 different languages in order to detect global terrorist propaganda, bullying and harassment, violence and graphic content, and more.
Earlier this year, Facebook released its Community Standards Enforcement Report. The report showed that Facebook’s AI and machine learning proactively detected 96.8% of violating content in six of the nine policy areas it tracks, before a human spotted it. This is up from 96.2% in Q4 2018. According to VentureBeat, the leading source for the latest technology news, Facebook identified 65% of the more than four million hate speech posts it removed, an increase from 24% just over a year ago and 59% in Q4 2018.
The report also showed that the decrease in the overall amount of illicit content viewed on Facebook is due to algorithmic improvements. For instance, the company was able to take action on about 900,000 pieces of drug sale content, about 83.3% of which were detected proactively by its AI models. Aside from that, 69.9% of about 670,000 pieces of firearm sale content on Facebook were detected before content moderators or users encountered it.
“By catching more violating posts proactively, this technology lets our team focus on spotting the next trends in how bad actors try to skirt our detection. [We] continue to invest in technology to expand our abilities to detect this content across different languages and regions,” Facebook vice president of integrity Guy Rosen said.
How Facebook’s AI Looks for Bad Content
Today, Facebook can detect and take action on prohibited posts faster than its human team can. But how does the AI do it? According to MIT Technology Review, a global media company that aims to bring about better-informed and more conscious decisions about technology through authoritative, influential, and trustworthy journalism, the company has been training its machine learning systems to identify and label objects in videos.
The company is also using two main approaches to search for dangerous content. The first employs neural networks that look for the features and behaviors of known objects; the second labels what is found with varying percentages of confidence. Based on those confidence scores, the systems can decide to remove content when they see problematic images or behavior. This is possible because the neural networks have already been trained on a combination of pre-labeled videos from human reviewers, reports from users, and more.
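The confidence-scoring step described above can be sketched in a few lines of code. Note that this is a simplified illustration, not Facebook's actual system: the labels, thresholds, and function names here are hypothetical, and real production pipelines are far more complex.

```python
# Hypothetical sketch of a confidence-threshold moderation decision.
# A classifier is assumed to have already produced per-label confidence
# scores between 0.0 and 1.0; labels and thresholds are illustrative.

REMOVE_THRESHOLDS = {
    "nudity": 0.90,
    "graphic_violence": 0.85,
    "hate_speech": 0.95,  # language is harder to judge, so a stricter bar
}

def moderation_decision(scores):
    """Return ('remove', label) if any score clears its threshold,
    ('review', label) for borderline cases routed to a human moderator,
    or ('allow', None) otherwise."""
    for label, score in scores.items():
        threshold = REMOVE_THRESHOLDS.get(label)
        if threshold is None:
            continue  # unknown label: no policy attached
        if score >= threshold:
            return ("remove", label)
        if score >= threshold - 0.25:
            return ("review", label)  # uncertain: send to a human reviewer
    return ("allow", None)

print(moderation_decision({"nudity": 0.97}))            # -> ('remove', 'nudity')
print(moderation_decision({"hate_speech": 0.80}))       # -> ('review', 'hate_speech')
print(moderation_decision({"graphic_violence": 0.10}))  # -> ('allow', None)
```

The middle "review" band reflects the division of labor the article describes: high-confidence cases are handled automatically, while ambiguous ones still go to human moderators.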
However, getting AI to truly understand language remains one of the biggest challenges. This explains why human reviewers are still needed alongside AI for this kind of job. Facebook still relies on people to report the overwhelming majority of bullying and harassment posts that break its rules. So while Facebook is now using AI to help regulate the platform, users still need to be vigilant about the posts they see online.
|The company has been training its systems to detect objects that may be prohibited for the platform / Photo Credit: Shutterstock|