Tackling Lethal Autonomous Weapon Systems (LAWS) — through regulation, not bans
by Setu Bandh Upadhyay, Research Fellow, The Dialogue, New Delhi
Every time a new Boston Dynamics video comes out, the Internet reacts in two ways: amazement at the achievements of humanity and, simultaneously, fear of future robot overlords. Boston Dynamics is an advanced American robotics company that has produced some spellbinding marvels of robotics in recent times.
It has been working closely with the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense to develop military strategic support robots like BigDog and LS3, which are capable of carrying ammunition and supplies for troops across rough terrain (Murphy 2018).
In addition to developing strategic support robots for military use, Boston Dynamics has also been exploring techniques to develop what are often dubbed Lethal Autonomous Weapon Systems (LAWS), which have generated both excitement and fear among people.
Take Atlas, the bipedal humanoid robot, for instance. The recent video of Atlas doing parkour (an obstacle-course discipline with roots in military obstacle training) was incredibly fascinating, but it was also intimidating. Given the aggressive pace at which Boston Dynamics is developing these technologies, it should be no surprise if, in the next Boston Dynamics video, Atlas does not just mimic but surpasses the physical capabilities of military soldiers.
Many see that as an imminent threat to human survival on this planet. And perhaps for good reason.
In fact, experts have argued that the competition for AI superiority could become the most probable trigger for a third world war (Rundle 2015). After all, over the last ten years, the number of American troops deployed on foreign soil has fallen by 90 percent while drone strikes have increased ten-fold (Delman 2016).
So, why not ban the production and deployment of autonomous weapon systems and save humanity from the catastrophic effects of what has been called the third revolution in warfare, after gunpowder and nuclear arms?
Bans don’t usually work
In 2013, an international coalition of non-governmental organizations called the Campaign to Stop Killer Robots advocated a complete ban on fully autonomous weapons. More recently, earlier this year, a large group of ethics professors and human rights advocates called for a similar ban (Tangermann 2019).
However, these and many similar calls for a ban on LAWS have been all but rejected by the most significant military powers around the world (Gayle 2019). Like the United States, countries including Israel, Russia, China, the United Kingdom, and South Korea all have LAWS projects underway (Atherton 2018).
Proponents of LAWS often cite their potential for cheap mass production, which could dramatically reduce national military budgets. They also argue that AI-powered autonomous weapons will reduce accidental casualties by making fewer mistakes, since they are not vulnerable to human error.
However, those who endorse the development and adoption of autonomous weapons in national defense strategies conveniently ignore the flip side: just imagine if these cost-effective, highly efficient war machines fell into the wrong hands. The resulting havoc would be beyond imagination (Markoff 2017).
Experts argue that even though an outright ban is the most desirable outcome, it might prove ineffective in actually curbing the use of LAWS, an argument supported by the failure of weapons bans throughout history (Crootof 2014). Hence, strong regulation appears to be a more viable way to tackle the irresponsible use of LAWS (Gubrud 2014).
Regulation would allow the development of automation technology in warfare to be monitored and controlled, and would foster compliance by nation-states with established, agreed-upon principles upholding the responsible use of LAWS (Melzer 2013).
So, how should we go about regulating LAWS on the international plane?
Before answering that question, it would be a good idea to refine our understanding of LAWS and how they work.
Decrypting LAWS
LAWS can find and engage targets without any human intervention, deciding who lives and who dies without a shred of moral deliberation. This lack of empathy and compassion makes LAWS excellent weapons but is also ethically problematic. Hence, the concept of an "ethical governor" has been proposed: a technological layer that imposes ethical constraints on these deadly autonomous systems before they engage a target (Docherty 2012).
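To make the idea concrete, below is a minimal, purely illustrative sketch of how an ethical governor might sit between target selection and weapon release. Every name, rule, and threshold in it is a hypothetical placeholder; a real governor would need to encode the laws of armed conflict far more richly than a handful of boolean checks.

```python
from dataclasses import dataclass


@dataclass
class Engagement:
    """A proposed engagement, as assessed by the weapon's targeting system."""
    target_is_verified_combatant: bool
    protected_site_nearby: bool      # e.g., a hospital or school in the blast radius
    expected_civilian_harm: float    # estimated probability of civilian casualties
    human_authorization: bool        # has a human operator signed off?


def ethical_governor(e: Engagement, harm_threshold: float = 0.01) -> bool:
    """Return True only if every hard ethical constraint is satisfied.

    The governor sits between target selection and weapon release: if any
    rule fails, the engagement is vetoed and escalated to a human operator.
    """
    if not e.target_is_verified_combatant:
        return False  # distinction: never engage unverified targets
    if e.protected_site_nearby:
        return False  # respect protected sites
    if e.expected_civilian_harm > harm_threshold:
        return False  # proportionality: expected harm must stay below the threshold
    if not e.human_authorization:
        return False  # meaningful human control over the final decision
    return True


# A proposal that lacks human sign-off is vetoed, whatever else is true.
proposal = Engagement(True, False, 0.001, human_authorization=False)
assert ethical_governor(proposal) is False
```

The structural point of the sketch is that the governor is a veto layer: it can only block a proposed engagement or escalate it to a human, never initiate one.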
Moreover, LAWS are programmed to engage targets without concern for their own survival, and this absence of the self-preservation instinct makes them even more dangerous by enabling them to undertake suicide missions. On the bright side, however, the "stoical" nature of LAWS could benefit many humanitarian interventions, for example in a hostage rescue, where concern for the hostages' lives must outweigh the rescuer's concern for its own survival. But should such ethical quandaries surrounding LAWS be resolved by a simple cost-benefit analysis?
Many have also questioned who is liable for acts committed by LAWS. Since the weapon makes its own decisions, liability for accidental casualties caused by its use can be tossed around among the weapon's inventors, its operators, and sometimes the weapon itself, making it difficult to hold anyone accountable (Jacobson 2017).
Now that we understand LAWS better, let’s look at how we can effectively regulate them to tackle the catastrophic risks that they pose to humanity.
How to regulate LAWS?
Countries like the United Kingdom and the United States have argued that the Geneva Conventions are sufficient to deal with LAWS, suggesting there is no need for a special international treaty or convention. They point to Article 36 of Additional Protocol I to the Geneva Conventions, which obliges each state party, in the study, development, acquisition, or adoption of "a new weapon, means or method of warfare", to determine whether its use would be prohibited by international law. To this end, the Article obliges state parties to the Conventions to develop internal review mechanisms.
However, some parties to the Conventions have raised concerns over the dangers posed by the open-ended language of Article 36. For instance, the Article does not specify how state parties should frame their internal review mechanisms, leaving them at liberty to design these mechanisms as they like (Boulanin and Verbruggen 2017).
Hence, there is a fear that ill-motivated parties to the Conventions might frame their internal mechanisms in ways that allow LAWS to bypass scrutiny for corrupt ends.
What we need, therefore, is a treaty that provides uniform legal and ethical principles to regulate the development, deployment, and use of LAWS and thereby tackle the multitude of risks associated with them. These principles should broadly focus on:
(1) Defining provisions in International law for the use of LAWS by nation-states for self-defense.
(2) Defining the rules for armed conflict using LAWS within the ambit of International law to ensure the restricted use of LAWS.
(3) Establishing precise boundaries for acceptable levels of automation and mandating limits on remote warfare to preserve the human factor in the use of LAWS.
(4) Obliging nation-states to maintain algorithmic transparency in the LAWS that they deploy and use in an armed conflict (a sketch of what such a transparency record might look like follows this list).
(5) Delineating the principles for the attribution of liability in case of civilian casualties and collateral damage caused by the use of LAWS in an interstate armed conflict, and a clear distinction between state liability and individual liability in all such cases.
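As a purely illustrative example of what the transparency obligation in point (4) could mean in practice, the sketch below shows a hypothetical, hash-chained decision log that a LAWS might be required to maintain for post-hoc review. The field names and the chaining scheme are assumptions for illustration, not drawn from any existing standard.

```python
import hashlib
import json
from datetime import datetime, timezone


def append_decision_record(log: list, decision: dict) -> dict:
    """Append a decision record, chained to the previous record by hash.

    Each record commits to its predecessor's hash, so any later alteration
    of an earlier record breaks the chain and becomes detectable on review.
    """
    prev_hash = log[-1]["record_hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,  # e.g., sensor inputs, model version, action taken
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record


# Example: log two decisions; the second record commits to the first.
audit_log: list = []
append_decision_record(audit_log, {"model": "v1.3", "action": "hold_fire"})
append_decision_record(audit_log, {"model": "v1.3", "action": "escalate_to_human"})
print(audit_log[1]["prev_hash"] == audit_log[0]["record_hash"])  # True
```

Making such logs tamper-evident matters because they would need to carry evidentiary weight in the liability proceedings contemplated in point (5).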
To sum up
The potential dangers of using "killer robots" in interstate armed conflicts, if left unaddressed, will have drastic implications for the world, raising the possibility of perpetual war between countries and countless casualties.
Experts and civil society groups have responded with calls for a blanket ban on autonomous weapons.
However, historically, such bans on weapons haven’t really worked.
The focus should therefore be on building an international consensus among national governments on how these weapons can exist in compliance with international law.
This consensus on the international plane can be fostered through discussions and dialogue between national governments, technical experts, jurists, and policy practitioners under the umbrella of international organizations like the International Committee of the Red Cross (ICRC).
International consensus on why, when, and how autonomous weapons should be deployed by nation-states could lead to the formulation of a global regulatory framework promoting the responsible use of LAWS, thereby protecting humanity from their catastrophic effects.
REFERENCES
Atherton, Kelsey D (2018). Are Killer Robots the Future of War? Parsing the Facts on Autonomous Weapons. The New York Times.
Boulanin, Vincent, and Maaike Verbruggen (2017). Article 36 Reviews: Dealing with the Challenges Posed by Emerging Technologies. Stockholm International Peace Research Institute.
Crootof, Rebecca (2014). The Killer Robots Are Here: Legal and Policy Implications. Cardozo Law Review 36: 1837–1916.
Delman, Edward (2016). Has Obama Started More Wars Than He’s Ended? The Atlantic.
Docherty, Bonnie (2012). Losing Humanity: The Case against Killer Robots. Human Rights Watch.
Gayle, Damien (2019). UK, US and Russia among Those Opposing Killer Robot Ban. The Guardian.
Gubrud, Mark (2014). Stopping Killer Robots. Bulletin of the Atomic Scientists 70(1): 32–42.
Jacobson, Barbara R. (2017). Lethal Autonomous Weapon Systems. DiploFoundation.
Markoff, John (2017). Fearing Bombs That Can Pick Whom to Kill. The New York Times.
Melzer, Nils (2013). Human Rights Implications of the Usage of Drones and Unmanned Robots in Warfare. Directorate-General for External Policies. European Parliament.
Murphy, Mike (2018). Boston Dynamics Is Going to Start Selling Its Creepy Robots in 2019. Quartz.
Rundle, Michael (2015). Musk, Hawking Warn of “inevitable” Killer Robot Arms Race. Wired UK.
Tangermann, Victor (2019). Scientists Call for a Ban on AI-Controlled Killer Robots. Futurism.