Addressing AI Bias and Promoting a More Inclusive Society
Thu, April 22, 2021

Most commercial AI models use labeled training data to teach the system how to behave. / Photo by: metamorworks via Shutterstock

 

Bias is an unavoidable aspect of life, as it is “the result of the necessarily limited view of the world that any single person or group can achieve,” according to Craig S. Smith of New York City-based newspaper The New York Times. However, social bias can be amplified by AI in dangerous ways. 

For example, existing gaps in promoting and employing women and people of color in the workplace can widen if biases are unintentionally programmed into the AI or if the machine learns to discriminate, as noted by Catalyst, a global non-profit that helps women accelerate into leadership. While AI has the potential to make the decision-making process more efficient and less biased, it is not truly a “clean slate.” Remember, it is “only as good as the data that powers it.” 

Causes of Bias In AI

There is no single root cause of AI bias, but researchers can account for several known sources when developing and training machine-learning models, said Josh Feast in general management magazine Harvard Business Review. One such source is a skewed or incomplete training data set, in which certain demographic categories are missing or underrepresented. Models developed on that data fail to scale properly when applied to data that contains those missing categories. So, if female speakers comprise only 10% of your training data, your model will likely produce more errors for female speakers.
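As a rough illustration of how that kind of under-representation shows up, the sketch below trains a simple classifier and then measures its error rate separately for each demographic group, which is the sort of check that exposes the gap described above. The file name, column names, and model choice are assumptions for illustration, not details from the article.

```python
# Minimal sketch: compare a model's error rate across demographic groups.
# File name, column names, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("speech_samples.csv")      # hypothetical data set
X = data[["pitch", "duration"]]               # hypothetical features
y = data["label"]                             # 0/1 target
group = data["speaker_gender"]                # demographic attribute

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# A group that is under-represented in the training data typically shows up
# here as a noticeably higher error rate on the test data.
for g in g_test.unique():
    mask = (g_test == g).to_numpy()
    error_rate = (pred[mask] != y_test[mask].to_numpy()).mean()
    print(f"{g}: {(g_train == g).mean():.0%} of training data, "
          f"error rate {error_rate:.2%}")
```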

Most commercial AI models use labeled training data to teach the system how to behave. The humans who create those labels carry their own biases and can unintentionally encode them into the models. Because machine-learning models are trained to estimate those labels, any misclassification of, or unfairness towards, a particular gender in the labels can carry through into the model. The features and modeling techniques chosen can also introduce bias.
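One simple way to catch label bias before training is to audit how the human-assigned labels are distributed across a sensitive attribute. The short sketch below does that; the file and column names are illustrative assumptions.

```python
# Minimal sketch: audit human-assigned labels for skew across a sensitive
# attribute before training. Column names are illustrative assumptions.
import pandas as pd

labeled = pd.read_csv("labeled_training_data.csv")   # hypothetical labeled set

# Positive-label rate per group. A large gap can indicate bias encoded by
# the annotators rather than a real difference in the underlying behavior.
rates = labeled.groupby("gender")["label"].mean()
print(rates)
print("Gap in positive-label rate:", rates.max() - rates.min())
```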

For example, text-to-speech, automatic speech recognition, and speech-to-text technologies have performed worse for female speakers than for male speakers. It turns out that the way the speech was modeled and analyzed was more accurate for taller speakers with low-pitched voices and longer vocal cords. Speakers with these characteristics are typically male, which makes the technology less accurate for people with higher-pitched voices, who are mostly female.

 

A skewed or incomplete training data set occurs when demographic categories are lacking in the training data, thereby causing AI bias. / Photo by: whiteMocca via Shutterstock

 

How Can AI Bias Be Minimized? 

Addressing the causes of AI bias and formulating solutions to curb it is not black and white, but there are ways for developers and executives to minimize gender bias. Executives should hire more women and workers from diverse backgrounds with technical skills in the AI field. It is also best to use roughly as many female samples as male samples in the training data.

This way, more perspectives are added to the pool of data, enabling developers to train AI that more accurately reflects a diverse and inclusive society. By collecting more training data associated with sensitive groups, we can also apply modern machine-learning de-biasing techniques that penalize prediction errors and impose additional penalties for producing unfair outcomes.
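Those "additional penalties" can take several forms; one common pattern is to add a fairness term to the training loss. The sketch below, which assumes binary labels and a binary sensitive attribute, penalizes both prediction errors and the gap in average predicted scores between the two groups. The function name, penalty form, and weight `lam` are illustrative, not a specific published method.

```python
# Minimal sketch of a de-biasing penalty added to the training loss.
# Assumes binary labels y and a binary sensitive attribute `group`.
import torch

def train_with_fairness_penalty(X, y, group, lam=1.0, epochs=200, lr=0.1):
    """X: (n, d) float features, y: (n,) 0/1 labels, group: (n,) 0/1 attribute."""
    model = torch.nn.Linear(X.shape[1], 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()

    for _ in range(epochs):
        optimizer.zero_grad()
        logits = model(X).squeeze(1)
        scores = torch.sigmoid(logits)

        # Standard penalty on prediction errors.
        loss = bce(logits, y.float())

        # Extra penalty for unfairness: the gap in average predicted score
        # between the two groups (a demographic-parity style term).
        gap = scores[group == 1].mean() - scores[group == 0].mean()
        loss = loss + lam * gap.abs()

        loss.backward()
        optimizer.step()
    return model
```

Increasing `lam` trades a small amount of raw accuracy for a smaller gap between groups, which is the basic tension these de-biasing techniques manage.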

Homogeneous AI teams and researchers may not notice that bias has been built into a model, and that oversight carries through into the AI they create and train. Diversity, by contrast, reduces groupthink and improves team decision-making, allowing teams to make more thorough and faster decisions.

To illustrate, Olga Russakovsky, an assistant professor in the Department of Computer Science at Princeton University and a co-founder of the AI4ALL foundation, stated that her team is doing several things to rebalance the ImageNet data set so that it more accurately reflects the world at large, Smith reported. The ImageNet data set, curated for object recognition in 2009, houses over 14 million images.

So far, Russakovsky and her colleagues have sifted through 2,000 categories to remove images that may be deemed offensive. They are currently designing an interface that allows the community to flag additional images or categories as offensive. Through the interface, everyone can have a "voice in this system."

Scientists Need to Rewire Their Thinking

Interestingly, Timnit Gebru, a research scientist on Google's ethical AI team and a co-founder of Black in AI, argued that the cultural attitude of objectivity and meritocracy among scientists should be changed.

For instance, people from non-marginalized groups end up taking the credit meant for individuals from marginalized groups, and money gets spent on "initiatives" instead. This is why institutions bring in the wrong people to talk about how AI impacts our society: they are privileged and famous, and can therefore bring in more money that further benefits the already privileged.

According to Gebru, science has to be reframed to help us understand the world's social dynamics, since most radical change occurs at the societal level. Science, she says, is currently taught from "no one's point of view." Hence, there is a need for more interdisciplinary work.

Unity In Diversity

She asserted that we need governing bodies, principles, and standards, as well as people who vote on having algorithms checked. Beyond that, future research should consider adding "a broader representation of data" to help expand our understanding of how to handle diversity, said Feast.

Bias is a part of life, making it impossible to be objective all the time. But if AI starts to amplify bias, whether unintentionally or intentionally, something is wrong. Scientists, executives, and developers must widen their perspectives and take a more interdisciplinary approach to promoting diversity.