New AI Tool Identifies Fake News on Social Media
Thu, April 22, 2021

Researchers from the University of Waterloo developed an AI tool that aims to help social media networks and news organizations remove fake content. (Credit: georgejmclittle via 123RF)

Technological advances over the past several years have transformed the communications landscape. Fake news stories flood nearly every social media platform, making it difficult for people to tell what is true and what is false. News organizations also struggle to debunk disinformation, since false content spreads quickly on the Internet. This is where artificial intelligence comes in.

Researchers from the University of Waterloo developed an AI tool that aims to help social media networks and news organizations remove fake content. The study, "Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection," was presented last December at the Conference on Neural Information Processing Systems (NeurIPS) in Vancouver.

Chris Dulhanty, the graduate student who led the project, said the study represents one effort in a larger body of work to mitigate the spread of disinformation. "We need to empower journalists to uncover the truth and keep us informed," he said.

According to TechXplore, an online site that covers the latest advances in engineering, electronics, and technology, the researchers used a large dataset created for a 2017 scientific competition called the Fake News Challenge. The tool uses deep-learning algorithms to determine whether a claim is supported by other posts and stories on the same subject.

The team was motivated to create the tool by the proliferation of online posts and news stories fabricated to deceive or mislead readers. To train the system, the AI algorithms were shown tens of thousands of claims, each paired with stories that either supported or did not support them, so the model could learn to judge whether a given claim is backed by related reporting. The resulting tool can detect fake news with 90% accuracy.
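To make the claim-versus-story setup more concrete, here is a minimal sketch of stance detection on a (claim, article) pair. It assumes the four stance labels used in the Fake News Challenge (agree, disagree, discuss, unrelated) and a generic pretrained BERT checkpoint from the Hugging Face transformers library; the Waterloo team's exact architecture, fine-tuning procedure, and hyperparameters are not described in this article.

```python
# Sketch of BERT-based stance detection on a (claim, article) pair.
# Assumes the Fake News Challenge stance labels and an off-the-shelf
# bert-base-uncased model; in practice the classification head would be
# fine-tuned on tens of thousands of labeled claim/story pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["agree", "disagree", "discuss", "unrelated"]  # FNC-1 stance classes

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

def predict_stance(claim: str, article_body: str) -> str:
    # Encode the claim and the article body as a single sentence pair,
    # truncating long articles to BERT's 512-token input limit.
    inputs = tokenizer(claim, article_body, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_stance(
    "A new AI tool flags disinformation on social media.",
    "Researchers at the University of Waterloo presented a stance-detection "
    "model for automatic disinformation assessment."
))
```

A claim that most related stories label "disagree" or "unrelated" would be flagged for human verification rather than removed outright, which matches how the researchers describe the tool's role.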

"It augments their capabilities and flags information that doesn't look quite right for verification,” Alexander Wong, a professor of systems design engineering at Waterloo, said.