Weaponization of AI: The Battle Between Morality and Technology
Fri, December 3, 2021


AI can make our lives more efficient, but it can also be leveraged to create autonomous weapons. / Photo by: taa22 via 123rf

 

Not long ago, today's AI capabilities would have seemed inconceivable, notes Seth Colaner of VentureBeat, a technology news website. Now we are captivated by AI's ability to change our world for the better. But we ought to ask ourselves: Is AI good or evil? Is it a weapon or a tool? As the technology advances, we face complex questions about the delicate balance between benefit and harm, and perhaps even between security and morality.

Frankly, AI is both a weapon and a tool. It can make our lives more efficient, but it can also be leveraged to create autonomous weapons. Can you imagine an AI executing a warfare task without human involvement? With the right algorithms, programs, and sensors, that scenario is becoming part of our reality, says Jayshree Pandya of the business news platform Forbes.

AI As A Weapon

The rapid evolution of AI, machine learning, and deep learning drives innovation and intensifies developers' quest for AI chips. It also means AI is slowly revolutionizing warfare, with nations eager to build automated weapon systems made possible by AI. The more states individually and collectively race for a "competitive advantage" in science and technology, the more the weaponization of AI becomes an inevitable phenomenon.

Consider an analogy with guns and hammers: both can be used as tools or as weapons. What matters most is "who's wielding the object," what they intend to do with it, "to whom or for whom, and why." In AI, the analogy spans autonomous military weapons on one end and robotic process automation (RPA) on the other. Just thinking about AI-enabled military-grade weapons is downright terrifying: their purpose is to annihilate enemies more efficiently, even as we try to minimize casualties.

We need to envision what algorithmic warfare would look like. Developing AI-powered autonomous weapons is one thing; using them in algorithmic warfare to inflict harm on other states and on humans is another. AI as a weapons system is already evident, from "fire-and-forget" missile systems to stationary systems that automate routine tasks such as equipment maintenance and the deployment of surveillance robots and drones.

Certainly, autonomous weapons systems promise to reduce the operating costs of weapons platforms through "a more efficient use of manpower," boosting a system's speed, accuracy, precision, reach, and coordination on the cyberspace, geospace, and space (CGS) battlefield. Still, the economic, legal, security, and societal intricacies need to be understood and evaluated.

Even so, for now it is humans who pull the trigger, release the arrow, and press the button. Should we give a machine the power to decide who lives and who dies?

 

Autonomous weapons systems are seen as providing opportunities to minimize the operating costs of weapons systems through "a more efficient use of manpower." / Photo by: Rafael Ben-Ari via 123rf

 

The Murky Middle Ground 

This is where the debate over AI escalates. Facial recognition technology can diagnose genetic disorders or screen for potential victims of human trafficking. Law enforcement can leverage this AI technology to track down a terrorist, and the same technology can augment our online shopping experience. Yet it can also be weaponized.

Now, let's talk about how this technology is weaponized. The New York Police Department, for example, has abused its facial recognition technology to apprehend a suspect. Even when authorities use the technology lawfully, people may still perceive it as a weapon and live in fear of being watched.

Alarmingly, the same kind of technology is employed in profiling Uighur Muslims in China, made possible in part by Microsoft's research, and the tech giant has been criticized for it. At Build 2019, Tim O'Brien, general manager of AI programs at Microsoft, said the company adhered to the approach of "We're going to do the right things, but the government needs to get involved, in partnership with us, to build a regulatory framework."

At first glance, that sounds pragmatic and responsible. But Colaner posed an eye-opening question: "Does that mean Microsoft won't even entertain the possibility that it shouldn't create a technology just because it can?" If so, the firm is simply removing itself from the debate about whether a particular technology should exist at all.

Programmers and tech companies should both play a role in this debate. Programmers write the code that determines how an AI behaves. What if they intentionally or accidentally program an autonomous weapon to operate in violation of current (or future) international humanitarian law? Who should be held responsible?

Organizations, for their part, need to consider how they will govern the technologies they develop, and whether some technologies should be released to the public at all or remain confined to the research lab.

Future Challenges to Cybersecurity 

Algorithms are not immune to bugs, malware, or manipulation; security risks are present everywhere. When one AI goes to war with another, cybersecurity failures will compound the risks such systems already pose to the future of humanity.

Indeed, AI is both a tool and a weapon. We live in a world where states are already engaged in an AI arms race. Yet AI still offers real benefits so long as it is not abused. It may sound idealistic, but tech firms, governments, and the public must work together to ensure AI technologies are used for the good of humanity.