|The keyword “deepfake” has attracted between 1 million and 10 million searches every month / Photo by lightwise via 123RF|
MIT recently held a tech conference with an unexpected special guest: Russian President Vladimir Putin. However, the figure who appeared on-screen during the event wasn’t actually Putin but a “deepfake,” an artificial intelligence-manipulated video or audio clip that looks and sounds just like the real thing. Hao Li, the developer behind the Putin deepfake, said it wasn’t intended to trick people. Instead, Li wanted to offer a glimpse into the current state of deepfake technology.
Deepfakes have been circulating on social media for the past few years. They have been used to show Facebook CEO Mark Zuckerberg appearing to admit to controlling “billions of people’s stolen data,” and to show a “Game of Thrones” character apologizing for the show’s disappointing final season.
Li stated that the technology will be perfected in just two to three years. “There will be no way to tell if it’s real or not, so we have to take a different approach,” he said.
Deeptrace B.V.’s 2018 report “The State of Deepfakes: Reality Under Attack” focused on how deepfakes are used to generate harmful synthetic video, images, and audio. Since 2017, the number of webpages returned by a Google search for "deepfake," including pages hosting related videos, has grown rapidly. The report also showed that worldwide interest in the term peaked at the maximum Google Trends score of 100.
Alongside its popularity in mainstream media, the keyword “deepfake” has attracted between 1 million and 10 million searches every month. The first sector to experience large-scale impact from the spread of deepfakes has been the online adult industry, driven by consumer demand for face swapping in a pornographic context. The top 10 adult websites hosting such content have more than 1,790 deepfake videos combined.
Deepfakes Are Extremely Dangerous
Deepfakes utilize generative adversarial networks (GANs), which pit two machine learning models against each other. The first model, the generator, trains on a data set and produces video forgeries, while the second, the discriminator, attempts to detect them. The generator’s output is refined until the discriminator can no longer tell forgery from reality. Producing a believable deepfake typically requires a large set of training data.
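The adversarial loop described above can be sketched in miniature. This is only an illustrative toy, not an actual deepfake system: it trains a linear “generator” against a logistic “discriminator” on 1-D numbers rather than deep convolutional networks on video frames, but the tug-of-war between the two models is the same idea.

```python
import math
import random

# Toy GAN sketch (assumption: 1-D Gaussian "real data" stands in for
# real video frames; actual deepfakes use deep networks on images).
# Generator: fake = a*z + b.  Discriminator: D(x) = sigmoid(w*x + c).

random.seed(0)

def sigmoid(s):
    s = max(-30.0, min(30.0, s))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-s))

a, b = 1.0, 0.0                    # generator parameters
w, c = 0.0, 0.0                    # discriminator parameters
lr, batch = 0.05, 32
real_mean, real_std = 4.0, 1.25    # the "real data" distribution

for step in range(3000):
    real = [random.gauss(real_mean, real_std) for _ in range(batch)]
    noise = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [a * z + b for z in noise]

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    gw = gc = 0.0
    for x in real:
        d = sigmoid(w * x + c)
        gw += -(1 - d) * x
        gc += -(1 - d)
    for x in fake:
        d = sigmoid(w * x + c)
        gw += d * x
        gc += d
    w -= lr * gw / (2 * batch)
    c -= lr * gc / (2 * batch)

    # Generator update: adjust (a, b) so the discriminator rates fakes as real.
    ga = gb = 0.0
    for z, x in zip(noise, fake):
        d = sigmoid(w * x + c)
        ga += -(1 - d) * w * z
        gb += -(1 - d) * w
    a -= lr * ga / batch
    b -= lr * gb / batch

# After training, generated samples should cluster near the real data's mean.
gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(10000)) / 10000
print(f"generator output mean after training: {gen_mean:.2f}")
```

The generator never sees the real data directly; it only learns from the discriminator's verdicts, which is what lets a GAN forge convincing output once the discriminator can no longer tell the two apart.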
Recently, Deeptrace Labs, creator of a service designed to identify deepfakes, released a report showing that AI-created fake videos can be extremely dangerous. It found that the number of deepfakes on the internet has doubled over the last year, to 14,678 across several streaming platforms and porn sites. It also found that 96 percent of the subjects of these fake videos are women, mostly celebrities, whose images are often turned into sexual fantasies without their consent.
According to Vox, an American news and opinion website, the report revealed that 96 percent of all deepfake videos were pornographic and nonconsensual. The top four adult websites hosting deepfakes received a combined 134 million views on such videos, and a full 100 percent of the videos’ subjects on these websites were women. This suggests that deepfakes are primarily used to satisfy sexually driven fantasies and degrade women. These fake videos, however, target not only women but also politicians.
|According to Forbes, deepfake technology is ready to be weaponized not only for the 2020 US election but also for future elections / Photo by scyther5 via 123RF|
Marco Rubio, the Republican senator from Florida and 2016 presidential candidate, has called deepfakes the modern equivalent of nuclear weapons. In the past, threatening a country required aircraft carriers, long-range missiles, and nuclear warheads. Today, all it takes is access to a nation's internet infrastructure, electrical grid, and banking system, plus the ability to produce a very realistic fake video that could undermine elections.
The report also acknowledged that deepfakes can seriously disrupt the political landscape. According to Forbes, a global media company focusing on business, investing, technology, entrepreneurship, leadership, and lifestyle, the technology is ready to be weaponized not only for the 2020 US election but also for future elections and political discourse. Deepfakes have the power to further divide a nation and distract voters from the real issues that matter.
Google Releases Large Dataset of Deepfakes for Researchers
Recently, Google released a large dataset of visual deepfakes to aid researchers in deepfake detection efforts. Medianama, the premier source of information and analysis on digital policy in India, reported that the thousands of deepfake videos were produced using paid, consenting actors. Researchers need to feed detection systems large numbers of deepfakes in order to train and test automated detection tools. The effort was created in collaboration with Google's Jigsaw technology incubator, the Technical University of Munich, and the University Federico II of Naples.
Google used publicly available, state-of-the-art, automatic deepfake algorithms—Deepfakes, Face2Face, FaceSwap, and NeuralTextures—to transform the 28 actors’ faces in quotidian settings. “We firmly believe in supporting a thriving research community around mitigating potential harms from misuses of synthetic media, and today’s release of our deepfake dataset in the FaceForensics benchmark is an important step in that direction,” the tech giant said.
According to Naked Security, an online site that features computer security news, opinion, advice, and research from anti-virus experts, the dataset now contains more than 3,000 deepfake videos. Pairs of actors were selected randomly, and deep neural networks then swapped the face of one actor onto the head of the other, giving researchers realistic forgeries with which to develop detection methods.