Alex Leveringhaus, Ethics and Autonomous Weapons (London: Macmillan Publishers Ltd., 2016), 136.
Ethics and Autonomous Weapons, written by a political philosopher, clearly assesses the ethical and legal ramifications of Lethal Autonomous Weapons Systems (LAWS) and presents a novel ethical argument against these weapons. The book is among the first academic treatments of the emerging debate on autonomous weapons. It is divided into four chapters, each broken down into sections that fit logically into the topic of the chapter, and the defining parts of each chapter maintain a sense of continuity throughout the book. The book is well referenced and each chapter is complemented by a thorough bibliography. Alex Leveringhaus presents a broad analysis of the ethical, legal, and technological challenges posed by emerging weapons technologies, especially remote-controlled and autonomous targeting systems. He tactfully answers questions such as: what is new in the autonomous weapons debate, and why are some activists arguing for a ban? The book also provides a philosophical and ethical perspective on autonomous weapons. Powerful states that can potentially be agents of intervention already have an arsenal of remote weaponry at their disposal that makes boots on the ground unnecessary. The North Atlantic Treaty Organisation’s (NATO) campaign in Kosovo, as well as the more recent UN-backed operations in Libya, illustrates this point: both campaigns relied largely on air power, and there were no boots on the ground during the combat phase. (p. 15)
The book observes that technology has changed the character of warfare, and that the demands of warfare have in turn made technological innovation possible. This emerging weapons technology is known as “machine autonomy,” and there is now a debate on the implications of machine autonomy for weapons. Although various campaigns to ban autonomous weapons were already underway, the possibility of such a ban was first discussed at the United Nations (UN) in Geneva in 2014.
The book shows two things. First, autonomous weapons are a complex phenomenon that gives rise to difficult questions about how such weapons should be conceptualised. Second, weapons technology merits the attention of ethicists. The analysis provided in the book shows that the development of new weapons is often justified by states on humanitarian grounds. Better weapons are often said to have the capacity to ‘humanise’ armed conflict, protect soldiers’ lives, and minimise the impact of armed conflict on civilians. Enhanced compliance with the relevant ethical and legal frameworks is often the most important ethical argument in favour of certain weapons.
The second chapter of the book provides an in-depth analysis of the concept of an autonomous weapon. It considers two crucial questions: What is a weapon? And what is an autonomous weapon? Drones are generally described as unmanned systems, but this label is misleading because humans are involved in the operation of a drone: it is controlled by an operator via remote control. Autonomous weapons, by contrast, are often classified as “out-of-the-loop” systems: once programmed, the operator is taken out of the loop and the machine accomplishes its tasks without further human guidance. Compared to autonomous weapons, drones are “in-the-loop” systems because the human is directly involved in the operation of the machine. It has been claimed that autonomous weapons are distinctive because they are capable of decision-making. (p. 53).
The author further discusses two models: the “Generating Model,” which contends that, in the future, artificial agents may be capable of generating targeting decisions; and the “Execution Model,” according to which the operator assesses whether certain potential targets are indeed legitimate, and the artificial agent, once programmed, executes the targeting decision by looking for targets that fit the criteria set out in its orders. (p. 56). Leveringhaus bases his conclusions on the requirements of International Humanitarian Law (IHL), also known as jus in bello. He explains that the criteria of IHL pose major problems for the Generating Model. Beginning with the principle of distinction, it is difficult to see how an artificial agent could determine whether a human person is a legitimate target or not: from the perspective of an artificial agent, a child with a toy gun, an illegitimate target, may look very similar to a legitimate target, such as a fully armed combatant. The Execution Model faces serious problems of its own. Artificial agents can be used to execute targeting decisions, but they may not be able to reliably identify the intended target. One argument holds that removing human beings from the theatre of war makes warfare more humane; if true, the prospect of humane warfare without humans makes autonomous weapons attractive to just war theorists. However, the author notes that operators will still be required to programme autonomous weapons, so humans will continue to play a crucial role in armed conflict.
The advocates of autonomous weapons believe that these weapons are desirable because they minimise wrongdoing. The author counters this argument by citing the infamous Kandahar massacre, perpetrated by US Army Staff Sergeant Robert Bales in 2012. Bales, acting without the knowledge of his superiors, murdered sixteen civilians and wounded six; nine of the murdered civilians were children. Bales had gone rogue, acting outside of a combat mission, having left his camp without authorisation from his superiors. Now imagine that, in the near future, a soldier, for reasons similar to Bales’, was intent on killing as many civilians as possible. (p. 65). War is a collective business at all levels, but “this does not mean that one should neglect the role of the individual in it.” (p. 72).
The third chapter turns from conceptual to normative issues and addresses two questions. First, it examines normative arguments in favour of autonomous weapons and offers a detailed analysis of the humanitarian justifications for their development. Second, it asks whether the deployment of autonomous weapons leads to responsibility gaps. The author argues that there are situations in which no one can be held responsible for the use of force by an autonomous weapon. If responsibility cannot be assigned to a machine, then it is likely to be assigned to the operator of an autonomous weapon, yet operators appear in the first stages of the causal chain that leads to the application of force to a target, not the final stage.
The fourth chapter outlines a novel argument against the use of autonomous weapons to target humans: the argument from “Human Agency.” This chapter is less concerned with the effects of autonomous weaponry on responsibility and instead develops objections to autonomous weapons that are not directly related to responsibility. The core of the argument from Human Agency is that there are morally relevant differences between human agency and the artificial agency of machines. The argument concedes that there are systems that can distinguish human persons from animals as well as objects; the border robots deployed in the Demilitarised Zone between North and South Korea may serve as an example. (p. 120). However, the distinction criterion demands that belligerent parties distinguish between legitimate and illegitimate human targets, not just between human and non-human targets, and it is hard to see how a machine could be programmed to comply with this principle. If the argument from Human Agency is sound, there are strong ethical reasons against autonomous weapons that could be deployed directly against humans.
The concluding chapter provides a brief assessment of the regulation of autonomous weaponry. The author asks whether researchers in robotics, Artificial Intelligence (AI), and intelligent systems design have a duty not to make their expertise available to the military, in order to prevent the creation of autonomous weapons and the problems such weapons pose.
The book raises the question of whether a ban on autonomous weapons is needed. The author’s position is less clear when he turns to Carl von Clausewitz’s famous dictum that ‘war is the continuation of politics by other means.’ Pacifists could argue that machine autonomy, or any other type of technology, must never be used for military purposes, but the problem is that it is unclear how such a ban could be enforced. He concludes that banning LAWS is a non-ideal approach. A ban would be problematic because there are cases where the deployment of an autonomous weapon can potentially satisfy IHL. Such systems may be capable of distinguishing a missile from an airplane, a tank from some other vehicle, or a submarine from some other type of vessel. One might imagine a sophisticated robot that could autonomously track and destroy enemy robots, or a drone that is capable of autonomously engaging enemy drones and shooting them down. None of these deployments would be illegal, and it is hard to see why they should be unethical; a ban on autonomous weapons in these kinds of cases seems misplaced. There is nothing in the design of autonomous weapons as such that would be illegal. The author believes that whether these weapons are desirable depends on their use.
Another argument the author endorses is that weapons are needed to achieve certain important political goals. Nevertheless, one would expect designers who participate in weapons research to bear in mind the risks posed by certain types of weaponry, and to try to mitigate them through sound design. The book also suggests that designers must work closely with those charged with developing a standard of care, and that they should be aware of the ethical and legal frameworks that regulate weapons technology.
The author does a good job of giving the reader a clear sense of the ethical issues arising from autonomous weapons. Although only a small number of autonomous weapons systems exist as yet, the technology could soon make military action easier for some countries and thus lead to more killing. Pakistan advocates a ban on autonomous weapons, believing that they pose challenges to IHL. Pakistan was the first country to call for a ban on LAWS and is the most active proponent of a preemptive ban under the Convention on Certain Conventional Weapons (CCW). Pakistan is also the first member of the Non-Aligned Movement (NAM) group to serve as president of a CCW Review Conference (RevCon): Pakistan’s disarmament representative, Ambassador Tehmina Janjua, presided over the CCW’s Fifth RevCon in December 2016, where a number of states expressed support for a ban. In 2017, the Sri Lankan government also supported the establishment of a Group of Governmental Experts (GGE) on LAWS and the elevation of the dialogue on LAWS to a state-driven formal process. Argentina, China, and Peru have also expressed support for a ban on these weapons.
This year, states will meet again to discuss the future of LAWS, at a meeting chaired by India’s representative on disarmament, Ambassador Amandeep Singh Gill. The move is an important step towards a preemptive prohibition of killer robots before they increase the risk of civilian casualties. Human Rights Watch, along with robotics and AI researchers, has also called for a ban on LAWS.