Illusion 100: The Dangerous Spread of Deepfakes In Today's Society
Wed, April 21, 2021


Camera apps installed on mobile phones are becoming more sophisticated: they can elongate your legs, remove pimples, add animal ears and other effects, and even create fake videos convincing enough that anyone might think they are real /Photo by: antb via Shutterstock

Camera apps installed on mobile phones are becoming more sophisticated: they can elongate your legs, remove pimples, add animal ears and other effects, and even create fake videos convincing enough that anyone might think they are real, wrote Grace Shao of CNBC, a business and financial market news platform. The technology used to produce these fake videos is called deepfakes, and it has become more accessible to the masses.

John Villasenor, nonresident senior fellow in Governance Studies at the Center for Technology Innovation at the Brookings Institution, a Washington-based public policy organization, said that because deepfakes are becoming more sophisticated and accessible, they are now “raising a set of challenging policy, technology, and legal issues.”

What Are Deepfakes? 

The term “deepfake” is a blend of “deep learning” and “fake” and refers to synthetic media produced with artificial intelligence. It can also be defined as “fraudulently authored material,” according to Robert Anzalone of the business news outlet Forbes. Deepfake content “superimposes a manufactured image over a real source image” to produce a synthetic model. The creator can make the model say or perform anything they desire.

In simple terms, deepfakes are falsified videos made by deep learning, explained Paul Barrett, adjunct professor of law at New York University. Deep learning, in turn, is “a subset of AI”: an arrangement of algorithms capable of learning and making intelligent decisions on their own. A deep learning system can produce a counterfeit by studying photographs and videos of a victim from different angles.

The system then mimics the individual’s behavior and speech patterns. The danger is that “the technology can be used to make people believe something is real when it is not.”
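
To make that mechanism concrete, the sketch below illustrates the shared-encoder, dual-decoder setup that many face-swap deepfakes are built on: one encoder learns a common “face code” from footage of both people, each decoder learns to reconstruct only its own person, and at swap time a frame of person A is decoded with person B’s decoder. This is only a minimal illustration in Python with PyTorch; the model sizes, image size, and training details are assumptions, not any specific production system.

# Minimal sketch of the shared-encoder / dual-decoder idea behind many
# face-swap deepfakes. All shapes and training details are assumptions
# for illustration; real systems use far larger models and datasets.
import torch
import torch.nn as nn

IMG = 64  # assumed square face-crop size

def make_encoder():
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * IMG * IMG, 512), nn.ReLU(),
        nn.Linear(512, 128),              # shared latent "face code"
    )

def make_decoder():
    return nn.Sequential(
        nn.Linear(128, 512), nn.ReLU(),
        nn.Linear(512, 3 * IMG * IMG), nn.Sigmoid(),
        nn.Unflatten(1, (3, IMG, IMG)),
    )

encoder = make_encoder()
decoder_a = make_decoder()   # learns to reconstruct person A's face
decoder_b = make_decoder()   # learns to reconstruct person B's face

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    """One step: each decoder learns to rebuild its own person's faces."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# After training, the "swap": encode a frame of person A, decode it with
# B's decoder, yielding B's face with A's pose and expression.
with torch.no_grad():
    frame_of_a = torch.rand(1, 3, IMG, IMG)   # stand-in for a real video frame
    swapped = decoder_b(encoder(frame_of_a))

Because both decoders read from the same latent code, the swapped output keeps the pose and expression captured from person A while rendering person B’s appearance, which is why gathering footage “from different angles” matters so much to the quality of the forgery.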

Villasenor told CNBC that the technology can also be used to undermine a political candidate’s reputation by making them “appear to say or do things that never actually occurred.” Deepfakes are a powerful tool for people who might want to use misinformation to influence an election. Hence, deepfakes are one of many elements of misinformation campaigns, in media form or otherwise, designed to fool the general public.

The Spread of Deepfakes and Misinformation

Kathryn Harrison of the DeepTrust Alliance told Anzalone that misinformation campaigns extend beyond video. For instance, the Facebook Transparency Report found that 2.2 billion bogus Facebook accounts were taken down between January and March 2019 “due to an increase in automated scripted attacks.” Harrison also pointed out that the average person fails to recognize 40% of deepfake videos as fake. Each year, over 3.6 trillion YouTube views originate from deepfake videos.

Unfortunately, deepfakes can be hard to detect and identify. This type of content is getting better and better at fooling people into thinking it is genuine, which can weaken society’s capability to govern itself by clouding our objectivity with misinformation. An Amsterdam-based company called Deeptrace Labs proposes a deep learning monitoring system for detecting fraudulent media.
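
On the detection side, one common way to picture what such a monitoring system does is a binary classifier that scores each face crop as real or fake. The sketch below is a generic Python/PyTorch illustration under that assumption, not Deeptrace’s actual system; the architecture, names, and threshold are hypothetical.

# Generic sketch of a deepfake detector: a small CNN that outputs the
# probability that a face crop is synthetic. Purely illustrative.
import torch
import torch.nn as nn

class FakeFaceDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # single logit: "how fake is this crop?"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

detector = FakeFaceDetector()

def is_probably_fake(face_crop: torch.Tensor, threshold: float = 0.5) -> bool:
    """Score one 3xHxW face crop; flag it if the fake probability passes the threshold."""
    with torch.no_grad():
        prob_fake = torch.sigmoid(detector(face_crop.unsqueeze(0))).item()
    return prob_fake > threshold

# Example call with a random stand-in image (an untrained model scores ~0.5).
print(is_probably_fake(torch.rand(3, 64, 64)))

In practice such a classifier must be retrained constantly, because each improvement in generation erodes the subtle artifacts the detector learned to spot, which is the arms race the rest of this article describes.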

The firm published a report on the issue in September 2019. The report, titled “The State of Deepfakes,” said Deeptrace’s goal is to safeguard individuals and organizations from the harmful impacts of deepfakes, as stated by Giorgio Patrini on Deeptrace Labs’ news website. The report also highlighted that the majority of AI-generated videos are pornographic, with faces of celebrities “imposed on the authentic images.” These deepfakes mainly target female actors.

Unfortunately, deepfakes can be hard to detect and identify. This type of content is getting better and better at fooling people into thinking it is genuine, which can weaken society’s capability to govern itself / Photo by: Wachiwit via Shutterstock

Detecting Deepfakes Is a Struggle

Harrison asserted, “Today’s efforts to tackle the problem of deepfakes and misinformation are siloed and fractured.” Each firm is trying to address the issue for its own platforms and use cases, but no single company owns the “end-to-end life cycle of digital content.” Hence, technical and societal solutions must be formulated to address this problem.

Harrison added, “It takes an extensive ecosystem to put in place the guardrails to begin to tackle these challenges.” The issue of deepfakes is also more complex than video alone and goes much deeper. What about fake accounts on Twitter or manipulated public information sources?

Employing A Multi-Disciplinary Approach

Nevertheless, there are universities dedicated to studying this problem across disciplines and societal perspectives. Large corporations like Facebook and Microsoft will be working with top universities across the US to create a database of deepfake videos for research, as reported by Elizabeth Culliford of Reuters, an international news organization.

Academic research has also examined the legal, psychological, technical, political, and sociological implications of deepfakes. However, it is difficult to halt the proliferation of misinformation, and it is even harder to detect the telltale signs of a deepfake video as the technology becomes more advanced.

Hence, Harrison believes it is important for various industries and disciplines to work together to combat fake information. Even if AI technologies can help detect deepfakes, the question remains: “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?”

Deepfakes leverage AI and deep learning to ruin someone’s reputation or influence an outcome. Identifying which content is manipulated becomes more challenging as deepfake technology becomes more advanced. Detection techniques need to catch up, pushing every stakeholder to participate in a virtual arms race. 

Large corporations like Facebook and Microsoft will be working with top universities across the US to create a database of deepfake videos for research / Photo by: VDB Photos via Shutterstock