Tech Giants Use AI to Detect and Tackle Online Child Abuse
Wed, April 21, 2021

Photo by: Katarzyna Białasiewicz via 123RF

Every year, thousands of children across the world face abuse, neglect, and other incredibly difficult experiences from people who are supposed to take care of them. Many of them grow up carrying the trauma caused by those childhood experiences, putting their health and future at risk. Unfortunately, this is incredibly common.

The 28th edition of the Child Maltreatment Report, released by the Children’s Bureau at HHS' Administration for Children and Families (ACF), revealed that an estimated 674,000 children were determined to be victims of maltreatment in 2017. About 74.9% were neglected, 18.3% were physically abused, and 8.6% were sexually abused. The youngest children were the most vulnerable to maltreatment: children in their first year of life had the highest victimization rate, at 24.2 per 1,000 children of the same age in the national population.

However, child abuse does not only happen inside the household. With the rise of social media, predators have gained a wide platform for finding victims. According to the Center for Public Impact, a not-for-profit organization that works with governments, public servants, and other changemakers to reimagine government, there were 18.4 million reports of suspected online abuse of children in 2018, roughly 6,000 times the number reported in 1998. An investigation also found 45 million online images and videos depicting children being sexually exploited.

The Internet Watch Foundation reported that 39% of child sexual abuse material (CSAM) online depicts victims under the age of 10, and that 43% of that material shows acts of extreme sexual violence. It has also been reported that 99% of CSAM goes undetected and that only 1% of sex trafficking victims are rescued. Social media sites have compounded the problem by giving predators platforms where they can connect with peers and share “best practices” for their crimes.

Many social media platforms are also responsible for these materials circulating on the Internet. For instance, reports show that Facebook Messenger was responsible for nearly 12 million reports of CSAM in 2018. While technology and social media have empowered predators to commit their crimes, they can also be part of the solution. Thus, many law enforcement agencies and organizations are turning to artificial intelligence for better prevention, detection, and prosecution of these crimes.

Google’s New AI Tool to Fight Child Sexual Abuse

The online demand for images and videos of child sexual abuse has been increasing for years. Reports show that 105,047 URLs hosting child abuse content were removed from the internet in 2018, and each of these sites contained thousands of pictures and videos of CSAM. The Internet Watch Foundation (IWF), a UK-based charity, reported that it had stripped 477,595 web pages containing abuse images from the web. Unfortunately, this isn’t enough to control the rise of CSAM online.

To address this, Google released a free AI tool in 2018 for NGOs and its industry partners to assist human moderators and ease some of the burden of detecting CSAM online. The tool would help them review up to 700% more material than before. “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users,” Susie Hargreaves, CEO of the IWF, said.

According to Silicon Republic, a technology, science, and start-up news site, the AI tool works by going through all of the flagged images; its deep neural networks push the ones it believes are the highest priority for review to the top of the moderator’s list. Confirmed material is then added to an ever-growing database of child sexual abuse imagery so that the AI and other tools can filter it out automatically in the future.
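The triage step described above can be sketched in a few lines: flagged items are scored by a classifier and sorted so the highest-priority ones reach moderators first. This is a minimal illustration only; Google's actual model and API are not public, and `classifier_score` is a hypothetical stand-in for the neural network's output.

```python
# Sketch of a moderator triage queue. `classifier_score` is a hypothetical
# stand-in for a neural network's priority score (higher = more urgent);
# the real tool's model and interface are not public.

def triage(flagged_items, classifier_score):
    """Return flagged items ordered from highest to lowest priority."""
    return sorted(flagged_items, key=classifier_score, reverse=True)

# Toy usage: hard-coded scores stand in for model output.
scores = {"img_a.jpg": 0.12, "img_b.jpg": 0.97, "img_c.jpg": 0.54}
queue = triage(list(scores), scores.get)
print(queue)  # ['img_b.jpg', 'img_c.jpg', 'img_a.jpg']
```

The point of the design is that the classifier never removes a human from the loop; it only reorders the review queue so limited moderator time goes to the most likely matches first.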

Since the tool assists moderators by sorting flagged images and videos, the review process is much quicker than before. Fred Langford, deputy CEO of the IWF, stated that the software would “help teams like our own deploy our limited resources much more effectively.” “At the moment we just use purely humans to go through the content and say, ‘yes,’ ‘no.’ This will help with triaging,” he added.

Photo by: bennymarty via 123RF

Microsoft’s Project Artemis

Google is not the only tech giant working to stop CSAM online. Microsoft recently announced an AI tool named “Project Artemis” that aims to detect online child abusers. The tool was developed in collaboration with The Meet Group, Roblox, Kik, and Thorn. Development of the grooming detection technique began in November 2018, and the announcement marks 14 months of technical and engineering progress led by Dr. Hany Farid, a leading academic.

According to The Verge, an American technology news and media network, Artemis works by recognizing specific words and speech patterns and flagging suspicious messages for review by a human moderator. The moderator then determines whether to escalate the situation to the police or other law enforcement officials. If a review uncovers a request for child sexual exploitation or for images of child abuse, the National Center for Missing and Exploited Children is notified immediately.
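The first stage of that pipeline, pattern-based flagging for human review, can be sketched as below. The patterns and the flagging logic here are purely illustrative assumptions; Project Artemis's real technique rates conversations with a risk score and is not public.

```python
import re

# Illustrative sketch of pattern-based message flagging: messages matching
# any risk pattern are queued for human review. The patterns below are
# invented examples; the real system's signals are not public.
RISK_PATTERNS = [
    re.compile(r"\bhow old are you\b", re.IGNORECASE),
    re.compile(r"\bdon'?t tell (your )?(mom|dad|parents)\b", re.IGNORECASE),
]

def flag_for_review(messages):
    """Return the subset of messages that match any risk pattern."""
    return [m for m in messages if any(p.search(m) for p in RISK_PATTERNS)]

chat = ["hey, how old are you?", "nice weather today"]
print(flag_for_review(chat))  # ['hey, how old are you?']
```

As with Google's tool, the automated stage only narrows the queue; the decision to escalate to law enforcement stays with a human moderator.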

Photo by: llcv2 via 123RF

Tools like this are important because they help set an industry standard for what the detection and monitoring of predators should look like. Elizabeth Jeglic, a professor of psychology at John Jay College of Criminal Justice in New York, noted that child predators have much more immediate access to victims when lurking in online chat rooms.

“Within 30 minutes they may be talking sexually with a child. In person it’s harder to get access to a child, but online a predator is able to go in, test the waters and if it doesn’t work, go ahead and move on to the next victim,” she said. 

Julie Cordua, CEO of the nonprofit tech organization Thorn, acknowledged that the tool has its limitations, but said it is still a huge step in the right direction toward detecting the online grooming of children by sexual predators and saving more children’s lives.

The AI tools developed by Google and Microsoft are strong examples of how tech companies can help stop child abuse online and save innocent lives from sexual predators.