Can AI Killer Robots Reduce Collateral Damage?
Sat, April 17, 2021

AI can be destructive. It can be a tool to kill people and destroy countries. This kind of technology is popularly known as lethal autonomous weapons or AI killer robots / Photo by: Dick Thomas Johnson via Wikimedia Commons

Companies across many industries are investing in artificial intelligence to improve their businesses, automate tasks, and increase sales. However, AI can also be destructive: it can become a tool to kill people and devastate countries. Weapons built on this kind of technology are popularly known as lethal autonomous weapons, or AI killer robots.

Based on recent trends in AI and robotics, many experts believe it is only a matter of time before AI killer robots become fully operational. In 2018, governments from across the world met at the UN in Geneva to discuss how to regulate lethal autonomous weapons systems (LAWS). These AI-powered ships, tanks, planes, and guns could fight the wars of the future without any human intervention. While fully autonomous weapons do not yet exist, many high-ranking military officials have stated that robots will be widespread in warfare within a matter of years.

Reports showed that about a dozen countries, including China, France, Israel, the UK, and the US, have deployed at least 381 partly autonomous weapon and military robotics systems. What makes these technologies so appealing to militaries is that they can replace a human operator with an algorithm that decides whom to target or where to shoot. “Technologically, autonomous weapons are easier than self-driving cars. People who work in the related technologies think it’d be relatively easy to put together a very effective weapon in less than two years,” Stuart Russell, a computer science professor at UC Berkeley and leading AI researcher, said.

While many countries show general enthusiasm for these AI killer robots, some nations strongly oppose them. Statista, a German online portal for statistics, reported that South Korea registered the strongest opposition, with about 74% of respondents siding against killer robots. Other countries with little to no support for these weapons include Germany (72%), Spain (65%), Mexico (64%), China (60%), and France (59%).

A 2019 survey by market research company Ipsos revealed that 61% of adults across 26 countries oppose the use of lethal autonomous weapons systems, 22% support it, and 17% are not sure. Of the respondents who opposed, 66% said they believe lethal autonomous weapons systems cross a moral line because machines should not be allowed to kill, while 54% said they feel this way because such weapons are “unaccountable.”

Can These Weapons Protect People?

Militaries are leading the development of these lethal autonomous weapons despite the technology’s terrifying moral implications. According to Vox, an American news and opinion website owned by Vox Media, autonomous weapons open up a world of new capabilities: they can make crucial decisions about when a drone should fire or what it should target next. These weapons could also do away with the way drones currently transmit and receive information from their base, a link that takes time, limits where the military can operate, and leaves the drones vulnerable.

“Because you don’t need a human, you can launch thousands or millions of [autonomous weapons] even if you don’t have thousands or millions of humans to look after them. They don’t have to worry about jamming, which is probably one of the best ways to protect against human-operated drones,” Toby Walsh, a professor of artificial intelligence at the University of New South Wales and an activist against lethal autonomous weapons development, said. 

Walsh added that the most interesting argument for killer robots is that they could be more ethical than humans. After all, humans sometimes commit war crimes, deliberately targeting innocents or killing people who’ve surrendered. They can also get fatigued, stressed, and confused, and end up making mistakes. 

Militaries are leading the development of these lethal autonomous weapons despite the technology’s terrifying moral implications / Photo by: David Seaford via 123RF

Lethal autonomous weapons, by contrast, are programmed to carry out their tasks with accuracy and efficiency. In his 2018 book “Army of None: Autonomous Weapons and the Future of War,” Pentagon defense expert and former US Army Ranger Paul Scharre argued that, unlike humans, machines never get angry or seek revenge.

Countries are also investing in lethal autonomous weapons to better protect civilians. According to the Bulletin of the Atomic Scientists, a nonprofit organization focused on science and the global security threats posed by accelerating technological advances, these weapons could target enemy fighters more precisely and efficiently and could deactivate themselves if they do not detect the intended target. They could thus reduce the risks inherent in more intensive attacks, such as a traditional air bombardment.

Dr. Larry Lewis, the Director of the Center for Autonomy and Artificial Intelligence at CNA, analyzed more than 1,000 real-world incidents in which civilians were killed. He found two general kinds of mistakes that cause civilian deaths in combat: either military personnel miss indicators that civilians are present, or civilians are mistaken for combatants and attacked in that belief. Experts believe AI could be used to help avert both kinds of mistake.

Campaign to Stop LAWS

Many countries are working to ban or regulate killer robots, given the dangers they may pose in the future. According to Human Rights Watch, an international non-governmental organization that conducts research and advocacy on human rights, lethal autonomous weapons are now seen as one of the top existential threats facing the planet, and states are coming together to create a new treaty prohibiting lethal autonomous weapons systems.

For instance, the Alliance for Multilateralism initiative was launched by France and Germany, together with dozens of foreign ministers, at the high-level United Nations General Assembly in September 2019. The initiative identified killer robots as one of six “politically relevant” issues requiring an urgent multilateral response. Since 2014, the Convention on Conventional Weapons (CCW) has held eight meetings on killer robots, where participants agreed that some form of human control over the use of force must be retained. In fact, about 30 countries are vigorously promoting a ban treaty as essential to stigmatizing the removal of human control from weapons systems.

While lethal autonomous weapons could help militaries with their missions and even save civilians, the fact remains that there is something frightening about a technology that can decide to kill on its own. It could introduce new kinds of mistakes that endanger people’s lives.

Many countries are working to ban or regulate killer robots, given the dangers they may pose in the future / Photo by: Aleksandr Papichev via 123RF