The Black Box and the Transparency Paradox of AI
Wed, April 21, 2021



 

More companies are using machine learning models to make decisions such as who is approved for a loan, offered a job, or admitted to a university, said Dr. Markus Noga of Digitalist Magazine, a news platform covering cloud, mobile, big data, and more. Algorithms are also leveraged to recommend a movie to watch, an apartment to rent, or a person to date. But in recent years, academics and practitioners alike have called for greater transparency into how AI models work, according to Andrew Burt of Harvard Business Review, a general management magazine. 

Transparency can help address concerns about fairness, discrimination, and trust, all of which have received increased attention lately. For example, Apple’s new credit card came under fire for allegedly sexist lending models, and Amazon scrapped an AI hiring tool after it was found to discriminate against women. Yet transparency creates problems of its own, which is why it can be called AI’s “transparency paradox.” 

 

Black Box AI 

The black box problem is not a new phenomenon, but its relevance has grown as machine learning solutions become more powerful and sophisticated. Models can now outperform humans at complex tasks such as transcribing speech, classifying images, and translating from one language to another. “And the more sophisticated the model, the lower its explainability level,” noted Noga. 

In some machine learning use cases, the black box issue is moot because users have no alternative if they want to leverage the model’s intelligence. If no simpler, more explainable model can do a specific job, such as translating a text from Chinese to English, the only choice is to use the opaque model or to translate the text yourself. In other use cases, we simply don’t care about an algorithm’s transparency. 


 

Let’s say we have a model that selects the 10,000 most promising customer prospects from a list of millions, or chooses the best product to recommend to each customer. These examples leave humans out of the picture because it would be too effort-intensive to check everything manually. Explainability is an active area of research, and Noga describes five approaches. The first is to use simpler models, sacrificing accuracy for explainability. The second is to combine a simple model with a sophisticated one: the sophisticated model provides the recommendation while the simpler model provides the rationale, although the two models can disagree. A minimal sketch of this second approach follows below. 
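To make the second approach concrete, here is a minimal sketch assuming a scikit-learn setup; the dataset, model choices, and feature names are illustrative stand-ins, not taken from Noga’s article.

```python
# Hypothetical sketch: a sophisticated model makes the predictions, and a
# shallow decision tree trained to mimic it supplies human-readable rationales.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# The sophisticated, harder-to-explain model provides the recommendation.
sophisticated = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The simple surrogate is trained on the sophisticated model's outputs,
# not on the original labels, so its rules approximate the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, sophisticated.predict(X))

# The surrogate's rules serve as the rationale; as noted above, it can
# disagree with the sophisticated model on some inputs.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
agreement = (surrogate.predict(X) == sophisticated.predict(X)).mean()
print(f"Surrogate agrees with the black box on {agreement:.1%} of inputs")
```

The agreement score makes the caveat measurable: wherever the surrogate and the sophisticated model diverge, the rationale no longer reflects the actual recommendation.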

The third approach is to inspect intermediate model states, for example visualizing learned features to explain an image classification. The fourth is to use attention mechanisms that direct “attention” toward the most important parts of the input. Finally, the fifth approach is to modify the inputs: the model is run on perturbed variants of the input, and the results are relayed to the user, as sketched below.
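The fifth approach can be illustrated with a short sketch; the snippet below assumes a scikit-learn-style classifier and invented feature names, and simply reports how the predicted probability shifts when each feature of a single input is replaced by a baseline value.

```python
# Hypothetical input-perturbation explanation: knock out one feature at a
# time and relay how much the model's output changes for a single input.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
black_box = GradientBoostingClassifier().fit(X, y)  # the opaque model

def perturbation_effects(model, x, baseline):
    """Score each feature by how much replacing it with a baseline value
    changes the predicted probability of the positive class."""
    base_prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    effects = {}
    for i, name in enumerate(feature_names):
        variant = x.copy()
        variant[i] = baseline[i]  # run the model on a modified input
        variant_prob = model.predict_proba(variant.reshape(1, -1))[0, 1]
        effects[name] = base_prob - variant_prob
    return effects

# Relay the results to the user, largest effect first.
effects = perturbation_effects(black_box, X[0], baseline=X.mean(axis=0))
for name, delta in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {delta:+.3f}")
```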

The Transparency Paradox

Generating more information about AI might create real benefits, but it may also create new risks. Hence, organizations need to think about how they are going to manage the risks of AI, “the information they’re generating about these risks, and how that information is shared and protected.” A research paper by Dylan Slack and colleagues, posted on Cornell University’s arXiv.org, an open-access archive of e-prints, showed how variants of LIME and SHAP, two popular techniques for explaining black box algorithms, could be fooled. 

With LIME, a paper by Marco Tulio Ribeiro and his team, also on arXiv.org, explained how an otherwise opaque image classifier recognized various objects in an image. For example, an acoustic guitar was recognized by its bridge and parts of its fretboard, while a Labrador Retriever was identified by the facial features on the right side of its face. LIME, and explainable AI as a whole, has been lauded as a breakthrough for making “opaque algorithms more transparent.” 
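For readers who want to see what invoking LIME looks like in practice, here is a minimal sketch assuming the open-source lime package and a scikit-learn model on tabular data; it is an illustrative stand-in, not the image-classification setup from Ribeiro’s paper.

```python
# Hypothetical LIME usage on tabular data (pip install lime scikit-learn).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, fits a simple local model to the black box's
# responses, and reports the features that most influenced this prediction.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```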

But LIME and SHAP can be attacked: their explanations can be intentionally manipulated, leading people to lose trust in the model and in its explanations. There are other potential risks associated with AI transparency. Reza Shokri and his team wrote on arXiv.org that exposing more information about machine learning algorithms can make them more susceptible to attacks, and Smitha Milli and colleagues, also on arXiv.org, found that entire algorithms can be stolen based on their explanations alone. 

These studies show that being transparent about a model’s inner workings may render its security less effective or expose a firm to more liability. In other words, all data carries risks. 

Action Plans for Firms

Companies need to acknowledge that there are risks associated with transparency and incorporate those risks into a broader risk model that governs “how to engage with explainable models and the extent to which information about the model is available to others.” 

Organizations should also engage with lawyers as early as possible when creating and deploying AI. Involving legal departments helps create an open yet legally privileged environment, allowing companies to probe models for every possible vulnerability “without creating additional liabilities.” Most importantly, organizations must recognize that security is a growing concern in AI: more security issues and bugs will be discovered as AI adoption spreads.  

AI transparency is a paradox, but that doesn’t mean companies should give up on it. Instead, the downsides of transparency should be incorporated into a risk model that accounts for the possibility that entire algorithms can be stolen or their explanations manipulated.
