[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]
If the ongoing process of deliberation under the auspices of the United Nations is to result in new law limiting the autonomy of weapon systems, at some point there will need to be lawyers involved. Court was in session on Wednesday afternoon, and the usual rules governing the admissibility of evidence and arguments were in force.
As the debate began to ramp up about three years ago, a handful of law professors, notably Michael Schmitt and Jeffrey Thurnher (of the U.S. Naval War College) and Kenneth Anderson and Matthew Waxman (of American University and Columbia University, respectively), emerged among the first serious public advocates for autonomous weapons. If you’ve ever read law review articles, you know that they are heavy on arguments from precedent; new situations are to be adjudicated in terms of arguments and judgments from the past. For these lawyerly defenders of killer robots, the main question seems to be whether autonomous weapons are prohibited by law that was written before the possibility of machines making lethal decisions, in place of soldiers or police, was even considered — outside of science fiction.
The legal debate is complicated by the distinctions between domestic law, international human rights law, and international humanitarian law (IHL, which nowadays is essentially synonymous with the “law of war” or the “law of armed conflict”). The convention under which these talks are being conducted is an IHL treaty, which might mean that even if it were to result in a broad ban on killer robots in interstate warfare, they might remain legal for intrastate use by police and by militaries in civil war.
The principles of IHL are rooted in the proposition that warfare is lawful if it is just. According to longstanding tradition in just war theory, a war is only just if there is a just reason to go to war (jus ad bellum) and if the war itself is conducted justly (jus in bello). Over time, these abstract principles have been given definition in the form of treaties and other instruments of international law.
At least in theory, jus ad bellum is addressed today primarily by the UN Charter, which proscribes the use of force except in two circumstances: 1) The Security Council has determined that force should be used, or 2) You are under armed attack, and the Security Council hasn’t had time to take action (such as ordering you to surrender). In practice, the Security Council acts when it’s able to, and almost every nation that chooses to go to war claims to be under attack, and invokes Article 51 (the right of self-defense).
Jus in bello comprises principles that are addressed in a number of treaties, the most important and comprehensive of which are the Geneva Conventions of 1949 and their 1977 Additional Protocols. These treaties enshrine and embody a number of principles, the most important of which, in the debate over killer robots, have been 1) distinction, the principle that armed forces must “at all times distinguish” between combatants and civilians, and 2) proportionality, the principle that commanders considering an attack must assess expected harm to civilians, and weigh this against anticipated military gains. There is no particular formula for proportionality, except what a “reasonable” person would decide. A third principle often included under jus in bello is that of humanity, which is usually taken to mean the avoidance of causing suffering that is not needed to achieve military objectives, but actually has deeper roots in the recognition of common humanity — all men are brothers, y’know?
Distinction and proportionality don’t only affect the conduct of armies; they also have implications for weapons. Specifically, a weapon that by its nature is incapable of being directed at military objectives and avoiding civilians is considered inherently indiscriminate, and thus effectively banned. Weapons that cause unnecessary suffering may be deemed inhumane. Many argue that both of these are true of nuclear weapons, but the argument was tested in the International Court of Justice in 1996, and did not carry the day.
The principal argument that has been advanced by the Campaign to Stop Killer Robots is that computers today do not have — and for the immediate future will not have — capabilities adequate to comply with the jus in bello requirements of distinction and proportionality. The principal counterargument of autonomous weapons proponents is that we don’t know what computers may be capable of in the future; they might ultimately be able to exercise distinction and judge proportionality even better than humans, assuming that “better” is always well-defined.
Lawyers for killer robots like to argue that some nations, especially the United States, already conduct legal reviews of new weapons to determine their consistency with the laws of war. (These are known as “Article 36” reviews.) The United States, and countries which share its aversion to binding arms control, have suggested an increased emphasis on such legal reviews — conducted internally and not subject to public or international oversight, of course — as an alternative to any new law.
This was the case argued by William Boothby, a former Deputy Director of Legal Services for the Royal Air Force (U.K.), and now an associate fellow on “emerging security challenges” at an outfit called the Geneva Centre for Security Policy. (In fact, Boothby runs an intensive four-day course on “Weapons Law and the Legal Review of Weapons” at the Centre, which he was keen to announce to the entire conference. The course is intended for “lawyers, diplomats and other officials”; the next one runs in December and, if you act now, registration is only 750 Swiss francs and includes meals! It’s hard for some folks to turn down a chance for free advertising, I guess.)
Boothby was dismissive of the concept of “meaningful human control,” which he asserted was incompatible with autonomous weapons systems. “That may be obvious,” he informed us, “but I do believe it is worth stating.” We agree. That is the point.
|One of Jensen’s slides|
His co-panelist Eric Talbot Jensen of Brigham Young University had much to say about balloons and submarines. The Hague Convention of 1899 had imposed a moratorium on dropping bombs from balloons, which became moot when the First World War filled the skies with airplanes. Submarines were originally regarded with horror because no quarter could be given to the hundreds who would drown when a ship was sunk, yet submarine warfare was eventually normalized because it was so militarily effective. The point of all this seemed to be that, since these early efforts to preemptively ban emerging technology weapons failed, we should not bother to try stopping killer robots today. (Never mind the fact that other weapons technologies have, through both developing norms and international treaties, been limited with great success.)
Jensen then described his vision of
an autonomous weapon … able to determine which civilian in the crowd has a metal object that might be a weapon, able to sense an increased pulse and breathing rate amongst the many civilians in the crowd, able to have a 360 degree view of the situation, able to process all that data in milliseconds, detect who the shooter is, and take the appropriate action based on pre-programmed algorithms….
I doubt this narrative had quite the effect that Jensen was hoping for. In a response statement to the plenary, another NGO representative described it as “a chilling example of what some may be thinking.”
The third lawyer on the panel was Kathleen Lawand, representing the International Committee of the Red Cross. She did a good job of being evenhanded as she ran down a list of legal criteria that the use of an autonomous weapon would have to meet. In answer to “the general question of whether or not AWS are unlawful,” her thoughtful answer was that “it depends.” She certainly brought up quite a few reasons to doubt that they would pass muster.
Listening to this rather desiccated discussion, it occurred to me that until now, essentially all lawyerly debate about autonomous weapons has been conducted on the assumption that it is entirely a matter of jus in bello, perhaps because all previous debates on the legality of weapons have been entirely within this domain of the law of war. After all, nobody had ever had to consider before that a weapon itself might decide to start a war, unjustly.
This suddenly appeared to me as a door back out to the real world, where we are less concerned about legal correctness and more about things like human dignity, freedom, and survival. Why, I wondered, weren’t any of these things legal issues? Do none of them have any place in the law, or in the room that afternoon deciding the future of humanity and killer robots?
Minutes later, after considerable wrangling with my ICRAC colleagues, we had a statement prepared, just in time for us to be called on to read it to the plenary. I’ll simply quote it here:
This discussion has been directed almost entirely to considerations of law derived from the principle of jus in bello. We appear to be overlooking, or excluding, considerations of jus ad bellum that arise from the use of autonomous weapons systems. It is in this context that those considerations also typically discussed as matters of international peace and security may be considered to have implications under the law of armed conflict.
|Juergen Altmann reading the ICRAC statement on jus ad bellum, with Noel Sharkey looking on|
We are concerned about the destabilization and chaos that may be introduced into the international system by arms races and the appearance of new, unfamiliar threats. In addition, we are concerned, as scientists, about what may happen when nations with an uneasy relationship field increasingly complex, autonomous systems in confrontation with one another. We know that the interactions of such systems are unpredictable for two reasons.
The first is the inherent error-proneness of complex software even when it is engineered by a single co-operative team. The second is that, in reality, these interacting systems will have been developed by non-cooperating teams, who will do their utmost to maintain secrecy and to ensure that their systems will exploit every opportunity to prevail once hostilities are understood to have commenced or, perhaps, are believed to be imminent. Once hostilities have begun, it may become very difficult for humans to intervene and to reestablish peace, due to the high speed and complexity of events. Neither side would want to risk losing the battle once it had begun.
Do these considerations have no implications for the legality of autonomous weapons? Can we consider a war that has been initiated as a result of needless political or military instability, or due to the unpredictable interactions of machines, or escalated out of human control due to the high speed and complexity of events, and not for any human moral or political cause, to be a just war?