Facebook AI Head Reveals How Algorithms Are Monitoring Facebook Content
Mon, November 29, 2021

Facebook’s AI lab can effectively flag nudity and violence in images and videos on the platform / Photo Credit: Shutterstock

Facebook founder Mark Zuckerberg has always insisted that the massive social network is a force for good. While there is some truth to this, the platform has not always lived up to that claim: over the past few years, the company has been called out for issues such as exploiting user data and failing to curb harmful content. Many organizations have urged the company to provide more transparency about its policies.

For instance, in April 2019 the Anti-Defamation League encouraged Facebook “to explain how hate content spreads on the platform and how their policies are enforced in ways consistent with both internal standards and with the ethical standards of civil society.” That same month, Zuckerberg stated that a team of 200 people was focused on flagging and deleting terrorist content, and that the company’s algorithms had become capable of handling such content.

“I think we have capacity in 30 languages that we are working on. And, in addition to that, we have a number of AI tools that we are developing… that can proactively go flag the content,” he said. 

Jerome Pesenti, Facebook’s head of AI, stated that the company has made a lot of progress. According to the Observer, a media site that covers culture, real estate, media, politics, and the entertainment and publishing industries, Facebook’s AI lab has become effective at flagging nudity and violence in images and videos on the platform by learning to recognize graphic content. The lab has also recently made breakthroughs in language understanding. At the same time, Facebook is working on tools to detect deepfake videos. Pesenti stated that his team is “trying to be proactive about it.”

Nonetheless, he acknowledged that the company still has a long way to go and that AI and deep learning have significant limitations. “We are very, very far from human intelligence, and there are some criticisms that are valid: It can propagate human biases, it’s not easy to explain, it doesn’t have common sense, it’s more on the level of pattern matching than robust semantic understanding,” he added.