|AI killer robots can accidentally cause mass atrocities or start a war / Credits: Usa-Pyon via Shutterstock|
Last year, Laura Nolan, a former senior Google software engineer, warned that a new generation of autonomous weapons, widely known as artificial intelligence killer robots, could accidentally cause mass atrocities or start a war. Nolan was known for resigning from the company in protest after being assigned to a project to dramatically enhance US military drone technology. Her warning has reignited public debate about these weapons.
Experts and international organizations are also paying attention to these AI killer robots. Last September, France and Germany launched an “Alliance for Multilateralism” initiative at the high-level United Nations General Assembly. Along with several foreign ministers, they identified killer robots as one of six “politically relevant” issues requiring an urgent multilateral response. A growing number of states now see autonomous weapons as among the top existential threats facing the planet.
This is not the first time we have heard of AI killer robots. For the past few years, countries including China, Israel, South Korea, Russia, the US, and the UK have been developing fully autonomous weapons, which would go a step beyond today's armed drones that are remotely piloted by a human. According to the World Report 2020 by Human Rights Watch, an international non-governmental organization that conducts research and advocacy on human rights, UN Secretary-General António Guterres expressed his concern that “killer robots could take the place of soldiers.”
Guterres called weapons that have the power and discretion to take a human life “morally repugnant and politically despicable.” Criticisms and concerns about AI killer robots are also affecting military acquisition and development. For instance, many defense planners are becoming increasingly reluctant to budget millions of dollars for autonomous weapons systems.
Experts from several companies have thus called for the regulation of AI killer robots, warning that algorithms inevitably reflect various social biases. If wrongly applied to weapons, these biases could cause people with certain profiles to be targeted disproportionately.