Death by Algorithm: AI Can Accidentally Start a War, Former Engineer Warns
Wed, April 14, 2021

The era of machine-driven warfare is approaching as military technologies continue to advance / Photo Credit: alexandersikov via 123rf

A new era of machine-driven warfare is fast approaching. Wars will be faster, more high-tech, and less human than ever before, driven by robots.

Over the past few years, experts have warned that robots could eventually be turned against us. That sounds like something out of a sci-fi movie, but recent advances in military technology point in that direction. Killer robots, also called fully autonomous weapons, would be able to select and engage targets without meaningful human control. While fully autonomous weapons have not yet been fielded, reports show that at least 381 partly autonomous weapons and military robotics systems have been deployed or are under development in 12 countries, including the US, the UK, Israel, France, and China.

According to The Guardian, a British newspaper, research by the International Data Corporation projects that global spending on robotics will roughly double from $91.5 billion in 2018 to $188 billion in 2020, bringing full autonomy closer to realization. The Stockholm International Peace Research Institute attributed the surge in artificial intelligence and robotics investment among major military powers to the US's plan to modernize its army and ensure its strategic superiority across the globe.

Steven Walker, director of the US Defense Advanced Research Projects Agency, said, “We very much acknowledge that we’re in a competition with countries like China and Russia.” The agency’s 2018 budget was increased by 27 percent.

The Age of Killer Robots

Experts in machine learning and military technology describe killer robots as machines that would target and kill without a “human in the loop.” The proposed weapons are not humanoid robots but mostly drones, and the concept is nothing new. According to The Conversation, a not-for-profit media outlet that publishes articles by academics and researchers, machine-gun-wielding robots were deployed in Iraq as early as 2007.

Technologies that perform a task without direct human intervention have also existed for a very long time. The proximity fuse, for instance, was developed during World War II to detonate artillery shells at a predetermined distance from their target; it proved more effective precisely because it took the human out of the loop. So the question of whether we should use autonomous systems has, in a sense, already been answered: we already use them, and they take many forms.

The technology to replace humans with an algorithm, particularly in military operations, is on its way much faster than we thought. Stuart Russell, a computer science professor at UC Berkeley and leading AI researcher, said, “Technologically, autonomous weapons are easier than self-driving cars. People who work in the related technologies think it’d be relatively easy to put together a very effective weapon in less than two years.”

A killer robot would use an algorithm to decide when and where to fire. Experts say such a system could carry a fixed list of people it can target and fire only when its recognition model is highly confident of a match. Alternatively, the algorithm could be trained to target anyone holding something visually identifiable as a gun in a war zone.
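The decision rule the experts describe — act only on a listed target and only above a confidence threshold — can be sketched abstractly. This is a purely illustrative example of confidence-gated classification; the function name, threshold value, and inputs are hypothetical and not drawn from any real system.

```python
# Illustrative sketch of a confidence-gated decision rule, as described
# in the text. All names and values here are hypothetical.

CONFIDENCE_THRESHOLD = 0.99  # act only when the model is highly confident


def should_engage(match_score: float, on_fixed_list: bool) -> bool:
    """Return True only if the target is on the fixed list AND the
    classifier's confidence exceeds the threshold."""
    return on_fixed_list and match_score >= CONFIDENCE_THRESHOLD


# Even a 95%-confident match of a listed target is rejected:
print(should_engage(0.95, True))    # False
print(should_engage(0.995, True))   # True
print(should_engage(0.995, False))  # False
```

The sketch also shows why critics worry: the entire moral weight of the decision collapses into a single numeric threshold chosen by a programmer.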

Despite the terrifying moral implications, militaries have expressed strong interest in weapons that target people without human intervention. Drones in use today are limited in where they can operate, suffer some lag time, and are somewhat vulnerable. Killer robots could change all of that: a military could launch thousands or even millions of fully autonomous weapons without needing thousands or millions of people to operate them.

One striking argument is that some military experts claim killer robots would be more ethical than humans. Humans sometimes deliberately target innocent people, commit war crimes, or kill those who have surrendered; they also get stressed, fatigued, and confused, and end up making mistakes. Killer robots, on the other hand, “follow exactly their code.” In his 2018 book “Army of None: Autonomous Weapons and the Future of War,” former Army Ranger and Pentagon defense expert Paul Scharre emphasized, “Unlike human soldiers, machines never get angry or seek revenge.”

Killer Robots Could Cause Mass Atrocities

Experts have also expressed serious doubts that fully autonomous weapons could meet the standards of international law, including the rules of proportionality, distinction, and military necessity. Worse, such machines threaten the fundamental right to life and the principle of human dignity.

Former Google engineer Laura Nolan resigned in protest after being assigned to a project to dramatically enhance US military drone technology. She has since warned that killer robots could accidentally start the next world war, and has demanded that any weapons not operated by humans be banned before they threaten to trigger World War Three. According to the Evening Standard, a free London daily newspaper, Nolan said AI could do “calamitous things” of its own accord.

Robotic machines could threaten the fundamental right to life and the principle of human dignity / Photo Credit: alexandersikov via 123rf