ML algorithms can only be as good as the data you feed them. (Photo: metamorworks via Shutterstock)
As AI and machine learning let companies move faster with fewer employees, they also expose those companies to a new kind of risk, writes Irina Farooq in Forbes. AI and machine learning algorithms can only be as good as the data you feed them. If they are trained on data that contains biased examples of certain groups of people, or too few examples of them, they will produce biased results and decisions.
Show an algorithm enough images of women in kitchens, and it will associate women with kitchens when generating assumptions and decisions. The parameters an algorithm considers are still chosen by people, and the developers and data scientists choosing them may not realize they are feeding unconsciously biased parameters into the system. We don't know, for example, what parameters drove the Apple Card's credit determinations. But if they included annual income without accounting for joint property ownership and tax filings, women, who still earn roughly 80.7 cents for every man's dollar, would be at a huge disadvantage.
Efforts to establish equal protection under the law have aimed at preventing conscious human bias. The problem with AI is that it reproduces our unconscious biases faster and more efficiently, without a moral conscience or concern for public relations. In the Apple Card case, the algorithm "meted out credit decisions" without a human being's higher-order thinking to flag the stark differences between "credit limits offered to women versus men on the whole." Like many AI and machine learning systems, Apple Card's is a black box, meaning there is no framework to trace its training or its decision making. That is a serious problem for corporations and society alike.
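The red flag described here, a stark gap in credit limits between groups, is straightforward to check once decisions are logged. Below is a minimal sketch of such an audit; all figures and the 0.8 threshold (echoing the "four-fifths rule" used in US disparate-impact analysis) are hypothetical and for illustration only:

```python
# Hypothetical audit: compare average credit limits granted to two groups.
# The numbers below are invented; a real audit would use the lender's
# actual decision logs.

def disparity_ratio(limits_a, limits_b):
    """Ratio of mean credit limit for group A to mean limit for group B."""
    mean_a = sum(limits_a) / len(limits_a)
    mean_b = sum(limits_b) / len(limits_b)
    return mean_a / mean_b

women = [4000, 5500, 3000, 4500]      # hypothetical limits offered
men = [9000, 12000, 8000, 11000]      # hypothetical limits offered

ratio = disparity_ratio(women, men)   # 4250 / 10000 = 0.425
if ratio < 0.8:  # illustrative threshold
    print(f"Possible disparate impact: ratio = {ratio:.2f}")
```

The point is not the specific metric: a human reviewing even this crude summary statistic would have seen what the black-box system did not.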
It is therefore important to create a framework that traces the "lineage of an algorithm's training," and to ensure that machine learning-enabled data platforms have the infrastructure for transparency, governance, and repeatability. Flaws in the system that produce unconscious biases need to be caught and corrected before they become the subject of a Twitter storm.
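What "tracing the lineage of an algorithm's training" might mean in practice: recording, at training time, enough metadata to later reproduce and audit the model. The sketch below is a hypothetical record; the field names are illustrative and not taken from any particular platform:

```python
# Hypothetical lineage record captured at training time.
# Field names are illustrative, not from a specific ML platform.
import datetime
import hashlib
import json

def record_lineage(dataset_path, dataset_bytes, features, hyperparams, code_version):
    """Build an auditable snapshot of what a model was trained on and how."""
    return {
        "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "dataset_path": dataset_path,
        # Hash of the training data, so a later audit can confirm
        # exactly which data the model saw.
        "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        "features": features,          # inputs the model was allowed to use
        "hyperparams": hyperparams,
        "code_version": code_version,  # e.g. a git commit hash
    }

entry = record_lineage(
    "s3://example-bucket/applications.csv",        # hypothetical path
    b"applicant_id,income,limit\n...",             # bytes of the training file
    ["income", "credit_history_years"],
    {"model": "gbm", "max_depth": 4},
    "git:abc1234",
)
print(json.dumps(entry, indent=2))
```

With records like this kept alongside every deployed model, the question "why did the algorithm decide this, and on what data?" has a traceable answer rather than a shrug at a black box.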