Deepfakes Will Only Get Worse in 2020
Wed, April 21, 2021


Deepfakes are hard to spot because there’s no simple algorithm that can automatically spot AI-edited content / Credits: meyer_solutions via Shutterstock


Deepfakes, fake media generated with artificial intelligence, are the latest challenge facing tech companies and social media sites. Many politicians and public figures have been the subjects of deepfakes, which aim to spread misinformation and manipulate public opinion. For instance, a video of Facebook CEO Mark Zuckerberg circulated on social media in which he appeared to claim he had “total control of billions of people’s stolen data, all their secrets, their lives, their futures.” The video was fake; Zuckerberg never said anything of the sort.

While many researchers are working to stop the circulation of deepfakes and make them easier to spot, these AI-generated media are incredibly hard to moderate. That is because they belong to a broad category of AI-edited photos and videos, and if tech companies policed that whole category, plenty of harmless content would be swept up as well. “If you take ‘deepfake’ to mean any video or image that’s edited by machine learning then it applies to such a huge category of things that it’s unclear if it means anything at all,” said Tim Hwang, former director of the Harvard-MIT Ethics and Governance of AI Initiative.

Given this problem, it would not be surprising if deepfakes got worse in 2020, but social media sites are still working to address AI-generated fake content. According to The Verge, an American technology news and media network, Facebook recently announced moderation policies that cover deepfakes.

According to Facebook, it will remove “manipulated misleading media” that has been “edited or synthesized” using AI or machine learning “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” Unfortunately, its detection systems still need improvement, because there is no simple algorithm that can automatically spot AI-edited content.