AI Is Progressing But It's Lacking In One Area
Mon, April 19, 2021

False advertisement of AI is akin to snake oil / Photo Credit: Andrey Suslov (via Shutterstock)


In a recent presentation, Arvind Narayanan, an associate professor of computer science at Princeton, likened the false advertisement of AI to “snake oil,” as reported by Matt Asay of TechRepublic, an online trade publication. It’s not that there are no real, useful applications of AI; rather, “Much of what's being sold as 'AI' today is snake oil--it does not and cannot work,” he asserted.

Narayanan said, “AI experts have a more modest estimate that Artificial General Intelligence or Strong AI is about 50 years away, but history tells us that even experts tend to be wildly optimistic about AI predictions.” To him, AI performs well in perception, a category that includes content identification, medical diagnosis from scans, speech-to-text, facial recognition, and deepfakes. In this category, AI is “already at or beyond human accuracy” and is continuing to get better and faster.

Narayanan noted that AI also performs well in automating judgment, a category that includes hate speech detection, content recommendation, spam detection, detection of copyrighted material, and automated essay grading. Humans, he said, have “some heuristic” in their minds for these tasks, such as recognizing whether a message is spam, and AI can learn that heuristic if given enough labeled examples. AI will never be perfect in this area, though, since it involves judgment and people can “disagree about the correct decision.”
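The idea of learning a human heuristic from labeled examples can be sketched with a toy naive Bayes spam classifier. This is an illustrative sketch, not anything from Narayanan's talk; the training data and function names are invented, and a real system would train on a large corpus rather than six hand-written messages.

```python
from collections import Counter
import math

# Toy labeled examples standing in for a large corpus of (text, label) pairs.
TRAIN = [
    ("win free money now", "spam"),
    ("free prize click now", "spam"),
    ("cheap meds win big", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
    ("project status report attached", "ham"),
]

def train(examples):
    """Count word frequencies per label -- the 'heuristic' learned from examples."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive Bayes with add-one smoothing: return the more probable label."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    total_docs = sum(label_counts.values())
    scores = {}
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # log prior
        n_words = sum(word_counts[label].values())
        for word in text.split():
            # Smoothed log likelihood of each word under this label.
            score += math.log((word_counts[label][word] + 1) / (n_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("free money prize", word_counts, label_counts))    # -> spam
print(classify("monday team meeting", word_counts, label_counts))  # -> ham
```

The point of the sketch mirrors Narayanan's: the classifier encodes no rule about what spam is; it merely absorbs human judgments from examples, which is also why it inherits human disagreement about borderline cases.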

Where AI fails, however, is in predicting social outcomes. Examples include predicting criminal recidivism, terrorist risk, at-risk kids, and job performance, as well as predictive policing. When we rely on this kind of pseudo-AI to predict social outcomes, we run into the problem that we cannot explain its predictions.

This doesn’t mean that AI is not good for society; of course it is, and in the future it will find its way into every application and industry. To Narayanan, AI only becomes harmful when we misapply it to predict social outcomes without being able to explain why a person was fired or arrested.