Roboethics: Should We Hold Robots and AI Accountable?
Sun, April 18, 2021

Roboethics refers to the code of conduct that “robotic engineer designers must implement” in the AI of a robot. / Photo by: Sarah Holmlund via Shutterstock

 

Artificial entities like AI and robots may one day hold legal personas and rights just like humans, wrote Nick Easen of Raconteur, a content and news platform for business decision-makers. Julian David, chief executive of the industry body techUK, explained that AI is already making a profound impact on “most aspects of our lives.” As the technology continues to evolve, it raises ethical and legal questions that need to be addressed.

Amazon, Facebook, and IBM are legal entities. These firms enjoy many of the same privileges as citizens: they have the right to defend themselves in court and the right to freedom of speech. Does that mean Google’s algorithms, Amazon’s Alexa, and IBM’s AI engine Watson might be given new responsibilities and rights, or perhaps “qualify for a new status in law”? The ethical debate over robots and AI is on!

Robot Ethics Create More Problems

American writer Isaac Asimov developed the Three Laws of Robotics in the 1940s, first introducing them in the short science fiction story “Runaround,” as reported by Susan Fourtane of Interesting Engineering, a website dedicated to engineering, technology, and science.

Asimov’s first law of robotics states that a robot may not harm a human being or, through inaction, allow a human to come to harm. The second law says a robot must obey the orders given by humans unless doing so would conflict with the first law. Lastly, a robot must protect its own existence as long as doing so does not conflict with the first two laws.
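To make the precedence among the three laws concrete, here is a minimal illustrative sketch, not drawn from Asimov or the article, that encodes them as an ordered series of checks. The Action fields and the should_perform function are hypothetical simplifications, meant only to show how each law yields to the one above it.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would the action injure a human being?
    allows_human_harm: bool  # would it let a human come to harm through inaction?
    obeys_human_order: bool  # does it carry out an order given by a human?
    protects_self: bool      # does it safeguard the robot's own existence?

def should_perform(action: Action) -> bool:
    """Check the Three Laws in priority order before performing an action."""
    # First Law (highest priority): never harm a human, or allow harm through inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders, except where that would conflict with the First Law.
    if action.obeys_human_order:
        return True
    # Third Law (lowest priority): otherwise, act only to protect the robot's own existence.
    return action.protects_self

# The Second Law outranks the Third: an order that endangers the robot
# but harms no human is still carried out.
risky_order = Action(harms_human=False, allows_human_harm=False,
                     obeys_human_order=True, protects_self=False)
print(should_perform(risky_order))  # True
```

Even this toy encoding hints at the difficulty the article describes: real situations rarely reduce to clean yes-or-no flags, which is one reason the laws generate more debate than guidance.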

On paper, the laws sound great. Eight decades later, however, the Three Laws of Robotics have sparked more conflict than consensus, with roboticists, philosophers, and engineers still debating machine ethics.

What Is Roboethics? 

Also known as machine ethics, roboethics refers to the code of conduct that “robotic engineer designers must implement” in the AI of a robot. Roboticists must ensure that autonomous systems demonstrate “ethically acceptable behavior in situations” where robots or other autonomous systems interact with humans.

Ethical issues will only multiply as robotics becomes more advanced. Pawel Lichocki, Peter H. Kahn Jr., and Aude Billard wrote in “The Ethical Landscape of Robotics” that various ethical issues have surfaced in two sets of robotic applications: service and lethal use. Service robots are developed to coexist and interact peacefully with humans, while lethal robots are created to be deployed on the battlefield and fight alongside the military.

The authors cited computer scientist Noel Sharkey and roboethicist Ronald Arkin in their research. They quoted Sharkey: “The cognitive capabilities of robots do not match that of humans.” Lethal robots are therefore more unethical, Sharkey added, because they can commit mistakes more easily than humans. Arkin countered: “Although an unmanned system will not be able to perfectly behave in the battlefield, it can perform more ethically than human beings.”

A Case Example: The Attempt to Grant Robots Legal Personhood

In 2017, the European Parliament considered creating a new legal status: electronic personhood. It wanted to make AI and robots “so-called e-personas with responsibilities.” It justified the initiative by saying that an AI, an algorithm, or a robot could then be held accountable if things went wrong. In response, 156 AI specialists from 14 nations condemned the move in an open letter.

Sharkey stated that it makes no sense to hold an AI or a robot responsible for its own output, considering it does not even understand what it is doing. “Humans are responsible for computer output,” he emphasized. If robots did take responsibility for their own actions, then companies might shirk their responsibilities to consumers and potential victims. Making an AI or a robot a legal entity would have far-reaching repercussions in law.

However, the idea behind the European Parliament’s proposal was not to grant human rights to robots. Rather, it was about treating AI as a machine with human backing, one that is accountable in law.

 

The European Parliament wanted to make AI and robots “so-called e-personas with responsibilities.” / Photo by: Sebastien Decoret via 123rf

 

The Importance of Roboethics in the Future

Roboethics will play a more significant role in the future as more sophisticated robots and artificial general intelligence (AGI) become integral to everyday life. Hence, the ethical and social debate surrounding robots will become “increasingly important.”

Some people may argue that robots will help build a better world for humans. Others may object to treating robots as moral agents, since they are not designed to be moral decision-makers. Current laws are not even prepared to incorporate AI into the legal framework. In fact, Matt Hervey, head of AI at multinational law firm Gowling WLG, is skeptical that the law is ready, since AI is already used in a “number of applications.” Unfortunately, laws have not kept abreast of technological change, and lawmakers may have to consider passing new laws that cater to AI and robots.

Still, Humans Must Exercise Responsibility 

Perhaps robots could become moral agents with moral responsibility in the years to come. But for now, engineers and designers must assume accountability for the ethical consequences of their creations.

Robots and AI are mere products of human intelligence and creativity. Hence, blaming a robot for any mishap is simply not logical. Unless, of course, robots start to think and act on their own.