|Bias in artificial intelligence is no secret. In 2016, the US news organization ProPublica reported that COMPAS, an algorithm widely used in the country to guide sentencing by predicting the likelihood of criminal offense, was racially biased / Photo by: Monsit Jangariyawong via 123RF|
Bias in artificial intelligence is no secret. In 2016, the US news organization ProPublica reported that COMPAS, an algorithm widely used in the country to guide sentencing by predicting the likelihood of criminal offense, was racially biased. This is considered one of the most notorious cases of AI prejudice: the algorithm predicted that Black defendants posed a higher risk of recidivism than they actually did, while making the opposite error for white defendants.
During the same year, the Human Rights Data Analysis Group reported that PredPol, an algorithm designed to predict when and where crimes will take place, could lead to the unfair targeting of certain neighborhoods. Reports showed that the algorithm had repeatedly sent authorities to neighborhoods with a high proportion of racial minorities, regardless of the true crime rate in those areas.
A 2019 study revealed large gender and racial bias in AI systems sold by tech giants like Amazon, Microsoft, and IBM. According to Time, an American weekly news magazine and news website, the findings showed that all of the companies' systems performed substantially better on male faces than on female faces. Error rates were no more than 1% for lighter-skinned men, but soared to 25% for darker-skinned women. The AI systems also failed to correctly identify the faces of Serena Williams, Oprah Winfrey, and Michelle Obama.
These examples show just how prevalent AI bias is, so it should come as no surprise that most industries face the issue, including education. AI is now widely used in the education sector: companies like Carnegie Learning and Fuel Education have been applying it to K-12 learning, helping students and instructors gain more knowledge through modern, device-based ways of teaching. However, AI in education faces its own set of challenges, particularly biased algorithms.
How Does AI Bias Affect Education?
Experts believe that education could be the next great frontier for predictive technology, especially as more and more companies pay attention to it. A 2017 study revealed that a staggering $1.7 billion was invested in AI-based learning companies. This is exciting, since children stand to benefit the most from it. However, just like other AI technologies, predictive systems are not perfect.
A 2018 study of Guamanian students discovered bias in a predictive algorithm. According to Analytics India Magazine, an online site that covers technological progress in analytics, artificial intelligence, data science, and big data in India, the algorithm's predictions of students' likelihood of passing tests were skewed, treating different groups of Guamanian students differently.
It’s also important to understand why algorithms are biased. It all goes back to the kind of data that developers feed into them. The more bad data an algorithm is fed, the more biased it will be. This can be a problem with both the number of people and the types of student information used in an algorithm’s training: an algorithm is more likely to be biased if it is trained on data in which people of certain skin colors are underrepresented.
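The effect of underrepresentation in training data can be illustrated with a deliberately simplified sketch. Everything here is hypothetical — the groups, labels, and the naive "model" (which just predicts the overall majority label) are invented for illustration — but it shows how a model fit to skewed data can look accurate overall while failing badly on the smaller group:

```python
from collections import Counter

# Hypothetical training set of (group, label) pairs.
# Group "A" is heavily over-represented; group "B" is rare
# and has a different label distribution.
train = [("A", 0)] * 85 + [("A", 1)] * 5 + [("B", 1)] * 8 + [("B", 0)] * 2

# A deliberately naive "model": always predict the overall majority label.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def error_rate(samples):
    """Fraction of samples the majority-label model gets wrong."""
    return sum(label != majority_label for _, label in samples) / len(samples)

group_a = [s for s in train if s[0] == "A"]
group_b = [s for s in train if s[0] == "B"]

print("majority label:", majority_label)              # 0
print(f"error rate, group A: {error_rate(group_a):.0%}")  # 6%
print(f"error rate, group B: {error_rate(group_b):.0%}")  # 80%
```

Because group A dominates the training data, the model's shortcut of predicting the majority label looks nearly harmless for group A but is wrong most of the time for group B — the same pattern, in miniature, as the skin-color disparities described above.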
“So, in this case, there's nothing wrong with the data, and there's nothing wrong with the model. What's wrong is that ingrained biases in society have led to unequal outcomes in the workplace, and that isn't something you can fix with an algorithm,” Dr. Rumman Chowdhury, consulting company Accenture’s lead for responsible AI, said.
AI can also create problems in its applications for assessing students’ prior and ongoing learning, placing students in appropriate subject levels, individualizing instruction, and scheduling. Unfortunately, such algorithms often fail to account for the experiences of students from low-income backgrounds and minority groups, who may have relatively thin achievement records. This not only shows a clear AI bias but also hinders those students’ growth.
|Experts believe that education can be the next great frontier for predictive technology, especially when more and more companies are paying attention to it. A 2017 study revealed that a staggering $1.7 billion were invested in AI-based learning companies / Photo by: dolgachov via 123RF|
AI Can Worsen Racial Inequity in Schools
Many students of color are discriminated against every day in schools, especially in the US. They go to school knowing it may be another day of being bullied. Unfortunately, AI can make this worse. According to Brookings, a nonprofit public policy organization that aims to conduct in-depth research leading to new ideas for solving problems facing society at the local, national, and global levels, children from Black and Latino or Hispanic communities would face greater inequalities if we go too far toward digitizing education.
These inequalities could become prevalent if developers don’t consider how to check the inherent biases in the (mostly white) data they feed AI systems. This not only worsens AI bias but also amplifies biases in the real world. It will only make racism harder for society to stop, since systemic racism and discrimination are already embedded in our educational systems. Thus, developers need to intentionally build AI systems through a lens of racial equity.
AI also threatens to replace human teachers with software; some US districts have already turned to online platforms due to teacher shortages. This will make students struggle more, because they will lack trained human teachers who not only know the subject matter but also know and care about their students.