Why Humans Don't Trust AI
Thu, October 21, 2021

Artificial intelligence systems continue to turn huge amounts of complex, unstructured data into actionable insights. Over the next couple of decades, AI is expected to drive more significant medical advances, help manage the complexity of the global economy, and sharpen the analysis of climate change as the volume of data that can be gathered and analyzed keeps growing.

A report by PwC, the global professional-services network, estimated that AI could contribute up to $15.7 trillion to the global economy by 2030. In a survey of more than 1,000 US business executives at companies utilizing AI, 27% had already implemented AI in multiple areas, 22% were investigating its use, 20% were planning to deploy it enterprise-wide, 16% had run pilot projects within discrete areas, and 15% were planning to deploy it in multiple areas.

However, the progress of AI has stoked fear, particularly in the workforce. PwC's international jobs automation study estimated that while less than 3% of jobs were at risk of automation by 2020, that share could reach as high as 30% by the mid-2030s. For now, executives in the survey believe the technology isn't taking away jobs in their organizations: twice as many said AI will lead to an increased headcount (38%) as said it will lead to job cuts (19%).

Distrust

Just like any other technology, AI can make mistakes because, despite its rapid progress, it is still very much in its infancy. A report by Pega, a software company, found that consumers don't fully understand the benefits of AI and are therefore more likely to trust a real person to make decisions. According to Fintech News, a publication covering financial technology, this lack of trust in AI can hurt both the consumer's digital experience and a brand's reputation.

The report revealed that only 25% of consumers would trust a decision about their qualification for a bank loan made by an AI system over one made by a real person. Dr. Rob Walker, vice president for decisioning and analytics at Pega, said that consumers trust people more than the technology because they prefer speaking to a human.

Most of the respondents also believe that AI lacks morality and empathy. Over 56% of customers are not convinced it is possible to develop machines that behave morally, only 12% agree that AI can tell the difference between good and evil, and just 12% believe they have ever interacted with a machine that showed empathy.

“What’s needed is the ability for AI systems to help companies make ethical decisions. To use the same example, in addition to a bank following regulatory processes before making an offer of a loan to an individual it should also be able to determine whether or not it’s the right thing to do ethically,” Dr. Walker said. 

Additionally, the 2017 PwC CEO Pulse survey revealed that 76% of respondents saw the potential for bias and a lack of transparency as impediments to AI adoption in their enterprise, while 73% said governance and rules were needed to control it. Chief among the concerns is that robots will take jobs and leave people without the opportunity to earn a living. Autonomous vehicles, for instance, would threaten professions such as taxi and truck driving while also disrupting the wider automotive industry.

According to Electronic Design, a US trade magazine for the electronic design industry, another concern is that robots could become more intelligent than humans and "take over." Researchers are now exploring ways to make such technologies understandable to the public and keep them under human control. For instance, the US defense agency DARPA launched its Explainable AI project, while OpenAI, a not-for-profit research company, is working towards "discovering and enacting the path to safe artificial general intelligence."
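To make the idea of explainability concrete: one common technique is a "global surrogate," where a simple, interpretable model is trained to mimic a black-box model's predictions so that its structure can be inspected in the black box's place. The sketch below illustrates this with scikit-learn on synthetic data; it is a minimal illustration of the general technique, not an implementation of DARPA's or OpenAI's work.

```python
# A minimal sketch of a "global surrogate" explanation, assuming
# scikit-learn is installed. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for, say, loan-application features.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# The opaque model whose decisions we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree on the black box's *outputs*, not the true labels,
# so the tree approximates the black box's decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.1%}")

# The tree is human-readable; its rules serve as a rough explanation.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)]))
```

A high fidelity score suggests the surrogate's rules are a reasonable proxy for the black box; a low one means the explanation cannot be trusted, which is itself useful information.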

Building Trust in AI

Unfortunately, the systems and innovations that AI has helped create could fail to gain adoption if this distrust continues. According to The Next Web, a technology news website, companies must take the first step to earn the trust they need to push forward, trying several approaches to encourage consumers to trust their AI products and services.

One way to do this is by making AI systems transparent, explainable, ethical, properly trained on appropriate data, and free of bias. That would be a major improvement: most commercially available AI systems today are opaque black boxes, offering users little visibility into the underlying data, processes, and logic that lead to a system's decisions. Building trust in AI also means improving bias detection and mitigation, so data scientists and developers should continue to refine bias metrics, notions of fairness, and algorithms for detecting and mitigating bias.
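As an illustration of what a bias metric can look like in practice, the sketch below computes the disparate-impact ratio, the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group, on invented loan-approval data. The 0.8 threshold follows the widely cited "four-fifths rule"; the groups, decisions, and function name are all hypothetical.

```python
# A minimal sketch of one common bias metric: the disparate-impact ratio.
# All data below is invented; a real audit would use the model's actual
# decisions together with applicants' protected-attribute labels.

def disparate_impact(outcomes, groups, unprivileged, privileged, favorable=1):
    """P(favorable | unprivileged) / P(favorable | privileged)."""
    def rate(group):
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in members if o == favorable) / len(members)
    return rate(unprivileged) / rate(privileged)

# Hypothetical loan decisions (1 = approved) for applicants in two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups    = ["A", "B", "A", "A", "B", "B", "A", "B", "A", "B"]

ratio = disparate_impact(decisions, groups, unprivileged="B", privileged="A")
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50

# The "four-fifths rule" flags ratios below 0.8 as potential adverse impact.
if ratio < 0.8:
    print("potential bias: group B receives favorable outcomes far less often")
```

Metrics like this are only a starting point; different notions of fairness (equalized odds, calibration, and so on) can conflict, which is part of why bias mitigation remains an active research area.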

Indeed, there’s still so much to do and improve to make sure that AI could work with high accuracy and without bias. It’s up to the researchers and developers to ensure these because people’s trust is critical in AI’s success. 

Indeed, there’s still so much to do and improve to make sure that AI could work with high accuracy and without bias / Photo by: pathdoc via Shutterstock