New AI Framework to Prevent Machines From Having Biases
Sat, April 17, 2021

The benefits of artificial intelligence are proving to be practically infinite. By leveraging AI development, businesses can accelerate their digital innovation and progress, which is why many enterprises are investing in AI technologies today. Forbes, a global media company focusing on business, investing, technology, entrepreneurship, leadership, and lifestyle, reported that improving customer experiences drives the majority of enterprises (62%) to invest in new AI technologies, followed by improving product innovation (59%) and achieving greater operational excellence (55%).

While AI has massively transformed industries and made most people’s lives better, the technology also faces a major problem: racial and gender bias. AI algorithms learn from the training data they are given; when that data is incomplete, flawed, or biased, the algorithm quickly learns to discriminate too. Recent reports have shown how the technology has been unfair to women and minorities. For instance, facial recognition technology misidentifies dark-skinned women far more often than light-skinned men.

A new study published by a New York University research center concluded that the lack of diversity in the AI field has reached “a moment of reckoning,” perpetuating gender and racial biases. The Guardian, a British daily newspaper, reported that the field is overwhelmingly white and male: more than 80% of AI professors are men, while only 15% of AI researchers at Facebook and 10% of AI researchers at Google are women.

“The industry has to acknowledge the gravity of the situation and admit that its existing methods have failed to address these problems. The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of reevaluation,” author Kate Crawford said. 

The lack of diversity in AI not only discriminates against marginalized groups but also concentrates an increasingly large amount of power and capital in the hands of a select few. Unfortunately, the issue only becomes more difficult to solve with each passing year.

AI’s Diversity Crisis

A recent study uncovered large gender and racial biases in AI systems used by several tech giants, including Amazon, Microsoft, and IBM. The systems’ task was to guess the gender of a face. The researchers revealed that every company’s system performed significantly better on male faces than on female faces: error rates were no more than 1% for lighter-skinned men but climbed to 35% for darker-skinned women. The systems even failed to correctly classify famous Black women such as Serena Williams, Michelle Obama, and Oprah Winfrey.

Earlier this year, UNESCO released a report showing that AI virtual assistants fuel gender bias. Because virtual assistants are designed with female names and voices, the report argues, they reproduce discriminatory stereotypes, reinforcing the role of women as secondary and submissive to men. Similarly, a recruitment algorithm developed by Amazon to sort resumes for job applications displayed gender bias: reports showed that the algorithm downgraded resumes that contained the word “women’s” or a reference to women’s colleges.


While bias in AI algorithms often originates in flawed training data, it still boils down to how an AI developer frames a scenario or system: the developer’s perceptions define what data is collected and how it is used. Tess Posner, the chief executive officer of AI4ALL, stated that there should be a growing effort to increase transparency around how algorithms are built, and that explaining how these algorithms work is necessary for fixing the diversity problems in AI.

“The core of the problem is whether market forces are going to be sufficient for this to be fixed. It’s going to take effort at all stages of AI and take change at cultural and procedural levels to solve this,” Posner said. 

How AI Can Be Fair to All

AI has done many wonders, from voice assistants and 3D printing to self-driving cars. However, certain issues have also emerged with its prominence, particularly bias. To mitigate this “undesirable behavior” in AI systems, researchers have come up with a framework. 

Recently, researchers from Stanford University and the University of Massachusetts Amherst developed an algorithmic framework designed to guarantee that AI won’t misbehave. The framework uses “Seldonian” algorithms, named after Hari Seldon, the protagonist of Isaac Asimov’s “Foundation” series, to train an AI application to avoid bias. According to TechXplore, an online site that covers the latest engineering, electronics, and technology advances, the study works from the idea that “unsafe” or “unfair” outcomes or behaviors can be defined mathematically. If they can, then developers can build algorithms that learn from data how to avoid those unwanted results with high probability.
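In broad terms, this amounts to a high-probability behavioral constraint. A sketch of the general form, using standard notation rather than anything quoted from the article:

```latex
\Pr\big( g_i(\theta) \le 0 \big) \ge 1 - \delta_i
```

Here \theta is the trained solution, each g_i is a user-defined measure of unwanted behavior (positive when the behavior occurs, such as a gap in error rates between demographic groups), and \delta_i is the acceptable probability of failing the constraint.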

"We show how the designers of machine learning algorithms can make it easier for people who want to build AI into their products and services to describe unwanted outcomes or behaviors that the AI system will avoid with high probability,” author Philip Thomas, an assistant professor of computer science at the University of Massachusetts Amherst, said. 

The researchers stated that the Seldonian architecture allows developers to define their own operating conditions to prevent systems from crossing certain thresholds while training or optimizing. This could go a long way toward keeping AI systems from harming or discriminating against humans. Thomas also stated that the new AI framework will make it easier for developers to build behavior-avoidance instructions into all sorts of algorithms.
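To make the idea concrete, here is a minimal sketch in Python of the kind of “safety test” such a framework performs before returning a trained model. The names are hypothetical, and the Hoeffding bound is just one standard way to obtain a high-confidence bound; none of this is the researchers’ actual code:

```python
import numpy as np

def hoeffding_upper_bound(samples, delta, lo=-1.0, hi=1.0):
    """(1 - delta)-confidence upper bound on the mean of samples in [lo, hi]."""
    n = len(samples)
    return np.mean(samples) + (hi - lo) * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def safety_test(candidate_theta, g_hat, safety_data, delta=0.05):
    """Return the candidate solution only if, with confidence 1 - delta,
    the unwanted-behavior measure is at most zero on average; otherwise
    return None ("No Solution Found").

    g_hat(theta, example) should return a value in [-1, 1] that is
    positive when the example exhibits the unwanted behavior.
    """
    samples = [g_hat(candidate_theta, example) for example in safety_data]
    if hoeffding_upper_bound(samples, delta) <= 0.0:
        return candidate_theta  # certified safe with high probability
    return None  # refuse to return a potentially unsafe model
```

In the published framework, the training data is likewise split so that a candidate solution is optimized on one portion and then checked against a held-out safety set; if the high-confidence bound cannot certify the constraint, the algorithm returns no solution rather than a potentially biased one.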

It’s no wonder that people get scared of AI, not only because of how it’s usually portrayed in fiction but especially because of its demonstrated biases. Scientists and researchers should redouble their efforts to find solutions that address the issue.
