|Noel Sharkey, the Sheffield University robotics/AI pioneer, believes that algorithms are infected with biases / Credits: olegdudko via 123RF|
It’s no secret that algorithms can harbor biases. Prof. Noel Sharkey, the Sheffield University robotics and AI pioneer and a leading figure in the global campaign against “killer robots,” has stated that algorithmic decision-making cannot be fair or trusted because the algorithms themselves are infected with bias. Among the examples he cited are systems that help determine who should get bail and who should go to jail.
Another area where bias could infect algorithms is the growing development of autonomous weapons, or “killer robots.” Some reports have revealed that these weapons could potentially select individual targets on their own, identifying individual faces via screens from bases thousands of miles away. “Now the new idea that you could send autonomous weapons out on their own, with no direct human control, and find an individual target via facial recognition is more dangerous,” he said.
Nonetheless, Sharkey still believes that some innovations are worth the risk, which is why he recently called on experts and companies to impose strict regulation on all decision-making algorithms. “There should be a moratorium on all algorithms that impact on people’s lives. Why? Because they are not working and have been shown to be biased across the board,” he said. He suggested that AI decision-making systems should be tested in the same way that new pharmaceutical drugs are rigorously checked.
According to The Guardian, a British daily newspaper, Sharkey has spoken with the biggest global social media and computing corporations, such as Google and Microsoft, about the innate bias problem, and he stated that all of them are aware of it. Sharkey believes algorithms must be subjected to the same rigorous testing as any new drug, which means testing these systems on millions of people until they show no major inbuilt bias.