[First post in a series covering the UN’s 2015 conference on killer robots. See all posts in the series here.]

Over the next week, I’ll be blogging from Geneva, where 118 nations (if they all show up) will be meeting to discuss “Lethal Autonomous Weapons Systems” (LAWS) and, you know, the fate of humanity. You may have seen headlines about the United Nations trying to outlaw killer robots, which is a bit inaccurate. First of all, the UN can’t actually outlaw anything; Security Council resolutions are supposed to have the force of law on matters of international peace and security, but apart from attempts to shackle miscreants like Iraq, Iran, and North Korea, the Security Council has never tried to impose arms control on the major military powers, most of which can just veto its resolutions anyway. In any case, that point is moot here, because this meeting is taking place under a subsidiary of the UN, the Convention on Certain Conventional Weapons (CCW), whose full name is actually longer and even more boring-sounding than that but has something to do with “excessively injurious” or “indiscriminate” weapons. As an aside, I note that “excessively injurious” weapons are the ones that don’t kill you, not the ones that do. But delegating the issue of autonomous weapons to the CCW has more to do with the notion that stupid killer robots, like land mines, would be unable to distinguish civilians from combatants, hence “indiscriminate.”

[Photo: The author (on the right)]

This will actually be the second CCW meeting on LAWS, which is a nice acronym but doesn’t have any official definition. The first meeting, held in 2014, was attended by at least 80 nations, which is very good for a treaty organization whose typical meeting was described by a colleague of mine as “start late, nobody wants to say anything, routine business announcements, and adjourn early.” The 2014 LAWS meeting was nothing like that. The room was packed, expert presentations were listened to intently in both the main sessions and the side events, and dozens of countries plus a handful of NGOs made statements. The highlight of the entire week was a statement by the Holy See (Vatican): “… weighing military gain and human suffering… is not reducible to technical matters of programming.” (You can read the full Vatican statement here or listen to it here.) The nadir had to be when the U.S. delegation asserted that the Obama administration’s 2012 policy directive to the military on Autonomy in Weapon Systems sets an example for the rest of the world. Another low point was the closing statement from U.S. State Department legal advisor Stephen Townley, in which he reasserted the same position, adding with condescension that “it is important to remind ourselves that machines do not make decisions.” Oookay, nothing to worry about then, now that we know that autonomy in weapon systems is actually impossible.

Full disclosure: I am a member of one of those NGOs, the International Committee for Robot Arms Control, part of the Campaign to Stop Killer Robots, a multinational coalition led by Human Rights Watch. I don’t speak for them; in fact, I am liable to say things that higher-ups in the hierarchy don’t want to hear (but should listen to, IMHO). But at least you know where I stand (and where I will sit in the big room), in case you were still wondering. I’m grateful to my colleagues on Futurisms for inviting me to blog here, although they may not agree with everything (or anything) I say, either, so please don’t call in drone strikes on them; if anything I say arouses your human capacity for violence, let me be the martyr.

Another preview post to come tomorrow, and then more over the next week as the meeting proceeds.

1 Comment

  1. Looking forward to your blogging, seeing some sharper definitions and perhaps some draft wording for items that might be included in a Protocol VI.