Should We Ban Killer Robots? – Deane Baker

Introduction

If you haven’t yet watched the short film Slaughterbots on YouTube, you really should do so now. I mean it – stop reading immediately and watch the video before going any further. You won’t regret it: Slaughterbots is short and impressively well executed. Besides, what I say below contains spoilers.

Slaughterbots was created by the Future of Life Institute in conjunction with Stuart Russell from the University of California at Berkeley. The film garnered over 350,000 views on YouTube in the first four days after its release, and was reported on by a large range of news outlets, from CNN to the Telegraph. The fictional near-future scenario depicted in this film in vivid Hollywood thriller style is both entertaining and scary, but is scripted with serious intent. As Russell explains at the end of the video, Slaughterbots is intended to help us see that, while AI’s ‘potential to benefit humanity is enormous, even in defence’, we must nonetheless draw a line. Ominously, he warns us that ‘the window to act is closing fast’. The key issue is that ‘[a]llowing machines to choose to kill humans will be devastating to our security and our freedom’ (Sugg 2017).

The film opens with a Steve Jobs-like figure speaking on stage at the release of a new product. Only, instead of the next generation of iPhone, the product is a weapon – a tiny autonomous quadcopter loaded with three grams of shaped explosives, and which combines artificial intelligence (AI) and facial recognition technology to lethal effect. After proudly explaining that ‘its processor can react 100 times faster than a human’, the Steve Jobs of Death demonstrates his creation. We watch as he throws it into the air, and it then buzzes autonomously, like an angry hornet, over to its designated target – in this case a humanoid dummy. After latching parasitically onto the forehead of this simulated enemy soldier, the drone fires its charge, neatly and precisely destroying the simulated brain within, to the applause of the adoring crowd. If that were not demonstration enough, a video then plays on the giant screen, showing a group of men in black fatigues in an underground car park. The mosquito-like buzzing of the quadcopter causes the men to scatter in fear, only to be killed one by one as the tiny drones identify, track and engage them, detonating their charges with firecracker-like pops. ‘Now that is an airstrike of surgical precision’, says Mr Death-Jobs. As if sensing the concern that is building as we watch, he is quick to reassure his audience: ‘Now trust me, these were all bad guys.’ (Of course, we don’t trust him one tiny bit.) Our concern only increases as he tells us that ‘they can evade … pretty much any countermeasure. They cannot be stopped.’ Another video rolls on the big screen, this one depicting a huge cargo aircraft that excretes thousands of these tiny drones, while we are informed that ‘[a] 25 million dollar budget now buys this – enough to kill half a city. The bad half.’ (Just the bad half – yeah, riiiight.) ‘Nuclear is obsolete’, we are told. This new weapon offers the potential to ‘take out your entire enemy, virtually risk-free’. 
What could possibly go wrong?

At that point the film cuts across to a fictional news feed that’s designed to help us see the dirty reality behind the advocacy and smooth assurances presented by the Steve Jobs of Death. The weapon has fallen into the wrong hands. An attack on the US Capitol Building has killed eleven senators – all from ‘just one side of the aisle’. TV news reports that ‘the intelligence community has no idea who perpetrated the attack, nor whether it was a state, group, or even a single individual’. We witness the horror of a mother’s Voice over Internet Protocol (VoIP) call to her student-activist son that ends with his clinical killing by one of the micro drones, as swarms of them hunt down and murder thousands of university students at twelve universities across the world. The TV talking heads inform us that investigators are suggesting that the students may have been targeted because they shared a video on social media ostensibly ‘exposing corruption at the highest level’. Then, suddenly, we’re back on stage with Mr Death-Jobs, who tells us: ‘Dumb weapons drop where you point. Smart weapons consume data. When you can find your enemy using data, even by a hashtag, you can target an evil ideology right where it starts.’ He points to his temple as he speaks, so that we are left in no doubt as to just where that starting point is.

It’s all very chilling, and it taps into some of our deepest fears and emotions. Weapons like tiny bugs that attach to your face just before exploding – creepy. Shadowy killers (states? terrorists? hyper-empowered individuals?) striking at will against helpless civilians for reasons we don’t fully understand – frightening. People targeted on the basis of data gathered from social media – terrifying.

Slaughterbots was released to coincide with, and influence, the first of the 2017 Geneva meetings of the delegates working under the auspices of the United Nations’ Convention on Conventional Weapons (CCW) to decide, on behalf of the international community, what (if anything) should be done about the emergence of lethal autonomous weapons systems (LAWS).1 The year 2017 was the first year of formal meetings of the Group of Governmental Experts (GGE) on LAWS, though it followed on the heels of three years of informal meetings of experts tied to this process. At the time of writing, this international process continues. In addition to the state delegates to these meetings, a range of civil society groups are also represented, most notably the coalition of non-governmental organizations (NGOs) known as the Campaign to Stop Killer Robots. Originally launched in April 2013 on the steps of Britain’s Parliament as the Campaign to Ban Killer Robots, it was ‘the Campaign’ (as it is commonly known) that hosted the viewing of Slaughterbots at the 2017 GGE meeting in Geneva.

Slaughterbots certainly provided a significant boost to the Campaign’s efforts to secure a ban on lethal autonomous weapons (or, failing a ban, to otherwise ‘stop’ these weapons). Unfortunately, the emotive reaction generated by the film is in large part the result of factors that are entirely irrelevant to the issue at hand: the question of autonomous weapons.

Remember what Russell identified as the key issue? ‘Allowing machines to choose to kill humans’. If you have time, watch the film again, and ask yourself this question throughout: what difference would it make to the scary scenarios in the film if, instead of the drones selecting and engaging their targets autonomously, a human being seated in front of a computer somewhere was watching through the drone’s cameras and making the final call on who should or should not be killed? I don’t mean just pressing the ‘kill’ button every time a red indicator flashes up on his or her screen – let’s assume he or she takes the time to (say) check a photo and make sure that the person being killed is definitely on the kill list. To use a key term at the centre of the debate (which I will examine in depth in chapter 2), in this mental ‘edit’ of the film, a person is maintaining ‘meaningful human control’.

In this alternative, imagined version, AI would still be vitally important in that it would allow the tiny quadcopters to fly, enable them to navigate through the corridors of Congress or Edinburgh University, and so on. But there are no serious suggestions that we should try to ban the use of AI in military autopilot and navigational systems, or even that we should ban military platforms that employ AI in order to carry out no-human-in-the-loop evasive measures to protect themselves. So that’s not relevant to the key question at hand.

What about the nefarious uses to which these tiny drones are put in the film? It is, without question, deeply morally problematic, abhorrent even, that students should be killed because they shared or ‘liked’ a video online; but the fact that the targeting data were sourced from social media is an issue entirely independent of whether the final decision to kill this student or that was made by an algorithm or by a human being. Also irrelevant is the fact that autonomous weapons could in principle be used to carry out unattributed attacks: the same is true of a slew of both sophisticated and crude military capabilities, from cyberweapons to improvised explosive devices (IEDs), and even to antiquated bolt-action rifles. In short, a ban on autonomous weapons – even if adhered to – would make essentially no material difference to the frightening scenarios depicted in Slaughterbots.

There are real and important questions that need to be asked and answered about LAWS. But in order to make genuine progress we will need to disentangle those questions from the red herrings thrown up by Slaughterbots and, indeed, by many contributors to the debate. This book seeks to take steps in that direction by trying to give a clear answer to the question raised by the Campaign at its formation: should we ban these ‘killer robots’? As campaigners rightly point out, this is a choice we have made before, in the case of other kinds of weapons systems: the international community has successfully negotiated treaties and agreements that have resulted in bans on military capabilities, including bans on chemical and biological weapons, antipersonnel landmines, and even blinding lasers. There’s much that could be said about the process of securing such a ban, and what avenues might be available for doing so and to what effect, but that is not the question in focus here. Rather, this book is about whether or not we should ban LAWS.

To give you the bottom line up front, my answer to this question is in the negative. I hope to show here that the central considerations that have been raised in support of the view that we should ban (or in some other, undefined sense, ‘stop’) these systems are not, when put under scrutiny, ultimately convincing. This does not mean I think there should be no controls or constraints on the development and employment of LAWS; there certainly should be. Indeed, I have had the privilege of working alongside a group of international experts to try to outline a first attempt at a set of guiding principles for the international community now titled ‘Guiding Principles for the Development and Use of LAWS’. But that is not the focus of this book. Instead, my argument here is focused on showing that we do not in fact have compelling reasons to ban ‘killer robots’.

A Definition

Before proceeding, I do, of course, have to clarify what this phenomenon is that is the focus of our investigation. While ‘killer robots’ is much racier than ‘lethal autonomous weapons’, we are on firmer ground with the latter terminology; so, going forward, that is what I will generally use. So then, what exactly is a lethal autonomous weapon? There is, as yet, no universally accepted definition, and some parties to the debate have been accused (perhaps with some justification) of playing definitional games. There is, however, growing acceptance of the definition put forward by the International Committee of the Red Cross (ICRC), according to which an autonomous weapon is

[a]ny weapon system with autonomy in its critical functions. That is, a weapon system which can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention. (ICRC 2016, pp. 11–12, n. 2)

Despite being widely accepted, this definition is not without shortcomings. In particular, some of the terminology is, arguably, loaded. ‘Select’ carries an implication of deliberate cognitive activity, which may not be an appropriate description of how many autonomous weapons do or will function; ‘discern’ or ‘identify’ would be a more neutral alternative. Likewise, ‘attack’ is a loaded term in this context, given the importance of the question of the point at which human agency is relevant; ‘engage’ would, again, be a more neutral alternative. Nonetheless, for the purposes of this volume, I will take it that the ICRC definition is a sufficiently accurate description of the phenomenon under consideration to enable us to weigh up whether or not a ban is necessary.

1. The formal name of the group is the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE LAWS) of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW).
