Why Humans Should Be Held Accountable for AI and Algorithm Errors
Wed, April 21, 2021

Australian human rights commissioner Ed Santow suggested that people need to be held accountable for the mistakes AI and algorithms make on their behalf / Credits: Alexander Limbach via Shutterstock

Modern technologies, particularly artificial intelligence, are everywhere, and there seems to be no stopping them from becoming an ever-larger part of our society. However, as companies and industries invest more heavily in AI, experts are concerned about the harm these systems can cause. For instance, a recent report by Australia's human rights body called for a halt to legislation enabling a facial recognition system and criticised the use of automated debt notices.

Experts stated that facial recognition technology should be carefully regulated because it can threaten human rights when issues such as false positives arise. False positives could lead to people being wrongly arrested and detained. As reported in The Guardian, a British daily newspaper, the recommendation is that wherever AI will be used, a cost-benefit analysis and public consultation should take place before it is brought in.

People should also be able to understand AI-led decisions in non-technical terms. The discussion paper contains 29 proposals, including a complete review of AI use in government and the creation of an AI commissioner who would guide government agencies on how to implement AI effectively.

Australian human rights commissioner Ed Santow thus suggested that people need to be held accountable for the mistakes AI and algorithms make on their behalf. “It should prompt a process of making sure people’s basic human rights are protected. That’s the whole point of having statements of compatibility with human rights accompanying any new bill – it throws up any problem before they come to pass,” he said.

In addition, the commission suggested that legislation should establish that a person is ultimately responsible and legally liable for decisions made by AI. Santow believes there must be a chain of legal liability as well as human oversight and intervention. “You need to make sure the human is properly empowered to identify when things might be going wrong and intervene to correct them,” he said.