“Explainable AI”: Google’s New Tool to Explain the Complexities of Machine Learning
Mon, April 19, 2021

Photo by: Mike MacKenzie via Flickr

Artificial intelligence and machine learning have become an increasingly large part of our daily lives, and the decisions and predictions made with these technologies have grown far more consequential than in years past. Forbes, a global media company focusing on business, investing, technology, entrepreneurship, leadership, and lifestyle, reported that 82% of enterprises adopting AI and machine learning have seen a financial return on their investment. Reports also predicted that the business value created by these technologies would reach $3.9 trillion in 2022.

While many of us have come across AI and machine learning, few people have much visibility into how these systems actually work. Machine learning, in particular, continues to baffle many. Experts describe it as a branch of AI that automates analytical model building: it is based on the idea that systems can learn from data, identify and analyze patterns, and make decisions with minimal human intervention.

Machine learning was born from the theory that computers can learn to perform specific tasks without being explicitly programmed to do so. The machine learning we know today, however, differs from its early incarnations mainly because of newer computing technologies. Its central idea is simple: as models are exposed to new data, they adapt independently, learning from previous computations to produce reliable, accurate decisions and results.
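To make that "learn from data, adapt to new data" loop concrete, here is a minimal sketch using scikit-learn. The dataset is synthetic and the model choice is arbitrary; this is only an illustration of the general idea, not any particular production system.

```python
# Minimal sketch: a model that learns from data and keeps adapting
# as new batches arrive, without being reprogrammed for the task.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(n=200):
    # Hypothetical data: two features, label depends on their sum plus noise.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return X, y

model = SGDClassifier(random_state=0)

# Initial training on the first batch of data.
X0, y0 = make_batch()
model.partial_fit(X0, y0, classes=np.array([0, 1]))

# As new data arrives, the model updates itself incrementally.
for _ in range(5):
    X_new, y_new = make_batch()
    model.partial_fit(X_new, y_new)

# The trained model now makes predictions on unseen examples.
X_test, y_test = make_batch(50)
print("accuracy on new data:", model.score(X_test, y_test))
```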

Photo by: Pop Nukoonrat via 123RF

Why It Is Important to Understand Machine Learning

Machine learning matters today for many reasons, not least because it can solve complicated real-world problems. It has had a large, positive impact on businesses, driven mainly by dramatic advances in data storage and computing power. Yet not everyone appreciates or understands its benefits, and we need to understand how it works before we can trust the decisions AI systems make.

This lack of knowledge, and of opportunities to learn, has left a huge number of people distrustful of AI, especially as algorithms become more complicated. According to BecomingHuman.ai, an online site that features news, information, and tutorials on AI, machine learning, deep learning, big data, and what they mean for humanity, fear of mistakes, undetected bias, and misunderstanding is growing. Grasping the gist of the extremely complex mathematical functions behind AI systems can therefore help people understand what the technology can do.

One reason AI and machine learning are difficult to understand is people’s tendency to mystify them; most of the time, these technologies are associated with superhuman intelligence. Many companies around the world have also used and abused the term “AI” to ride its hype wave, over-hyping and over-promising on technologies they don’t fully understand.

Introducing “Explainable AI”

To fully trust AI and machine learning, people must understand how these systems work and be given transparent explanations and reasons for the decisions they make. Tech giant Google has been working toward this goal for several years. It launched the What-If Tool last year, for instance, to make algorithms more transparent: the tool provides an easy-to-use interface for probing black-box classification and regression machine learning models.

Recently, Google introduced Explainable AI, a new tool that aims to help humans grasp the complexities of machine learning by explaining how and why a model reaches its conclusions. According to ZDNet, a business technology news website published by CBS Interactive, the tool explains an algorithm’s outcome to the user by quantifying how much each feature in the dataset influenced the result; every feature contributes something to the model’s overall prediction.
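The idea of quantifying each feature’s influence on a single prediction can be sketched independently of Google’s implementation (which relies on techniques such as integrated gradients and Shapley-value-based attributions). The toy example below uses a crude swap-against-a-baseline comparison instead; the "credit" features, data, and model here are entirely hypothetical and serve only to illustrate what a per-feature attribution looks like.

```python
# Illustrative sketch: measure how much each feature pushed one prediction,
# by replacing that feature with a baseline value and observing the change.
# This is a simplified stand-in for the attribution methods real tools use.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical "credit" features: income, debt, years of credit history.
feature_names = ["income", "debt", "history_years"]
X = rng.normal(size=(500, 3))
y = (1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

baseline = X.mean(axis=0)          # reference point for comparisons
example = X[0]                     # the single prediction we want to explain
p_example = model.predict_proba([example])[0, 1]

print(f"predicted approval probability: {p_example:.3f}")
for i, name in enumerate(feature_names):
    perturbed = example.copy()
    perturbed[i] = baseline[i]     # swap one feature back to the baseline
    p_perturbed = model.predict_proba([perturbed])[0, 1]
    # Positive attribution: this feature pushed the prediction up.
    print(f"{name:>15}: attribution {p_example - p_perturbed:+.3f}")
```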

In credit scoring, for example, Explainable AI can show which factors weighed on a score so users can understand why a given algorithm came up with a particular decision. "If you're using AI for credit scoring, you want to be able to understand why the model rejected a particular application and accepted another one," said Thomas Kurian, CEO of Google Cloud. "Explainable AI allows you, as a customer, who is using AI in an enterprise business process, to understand why the AI infrastructure generated a particular outcome."

Photo by: The Pancake of Heaven via Wikimedia Commons

With the help of the What-If Tool, Explainable AI can give data scientists the insight they need to debug model performance and improve datasets or model architectures. The explanations Google’s new tool provides don’t reveal any fundamental relationships in the data sample or the underlying population, but they do reflect the patterns the model found in the data.

According to SiliconANGLE, a media company that provides news, commentary, and analysis on the technology industry, Tracy Frey, director of strategy for Google Cloud AI, said the company is striving to make the most useful and straightforward explanation methods available to users while being transparent about the AI systems’ limitations.

Stefan Hoejmose, Head of Data Journeys at Sky, likewise emphasized that understanding how models arrive at their decisions is critical to the use of AI in our society. “We are excited to see the progress made by Google Cloud to solve this industry challenge. With tools like What-If Tool and feature attributions in AI Platform, our data scientists can build models with confidence and provide human-understandable explanations,” Hoejmose said.

Such efforts will no doubt one day clear the cloud of uncertainty that many people still feel about AI.