FOUR

Sitting on a Hot Frying Pan

NATIONAL SECURITY IMPLICATIONS OF AI

The potential benefits of AI also bring potential problems and challenges. These challenges are tactical, like ensuring AI capabilities are used wisely and fairly as intelligence indicators. They are also strategic, such as the risks that come with technology and arms races. This chapter introduces the reader to some of these risks.

THE LAW OF UNINTENDED CONSEQUENCES AND THE CHALLENGE OF MACHINE-HUMAN INTERFACE

Technology does not always work as intended. Here we are not just talking about R & D miscues and mistakes, but about deployed systems. Some examples are apocryphal; Icarus’s wings did not work as intended. Most examples are not apocryphal. The principal attack torpedo in the navy’s pre-WWII inventory tended not to explode on contact and, in some cases, returned (it is thought) to strike the submarine from which it was launched. America’s first attempt to launch a satellite after Sputnik ended in an explosion on the launch pad. The Thresher, the Scorpion, and Apollo 13 are testimony to technology that worked well and then failed to work as intended. Let’s add Columbia and Challenger to this list. Most recently, we have the Boeing 737 Max crashes and the role of social media in propagating video of the Christchurch terrorist killings as illustrations of technology that failed, did not work as intended, or overwhelmed the capacity of human operators to successfully interface with it. There are also examples of technological emergencies that were supposed to take place but did not, like the Y2K crisis, which, some predicted and the government feared, might lead to a cyber-meltdown at the turn of the century.

There is hubris in thinking AI will work precisely as intended. Remember, many current narrow AI applications depend on classifying data and probability assessments about that data. Recall M. L. Cummings’s description of driverless cars.

Given this immense problem of computation, in order to maintain safe execution times for action, a driverless car will make best guesses based on probabilistic distributions. In effect, therefore, the car is guessing which path or action is best, given some sort of confidence interval.1

But what if the assessment is wrong? With a car, the consequences are presumably finite and limited. Where AI is enabling cyber- or kinetic weapons systems or warning systems, the consequences of AI failure could be catastrophic. Is the Amazon shopping algorithm correct every time? Does the Google search engine always provide a link that answers the question?
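
To make the point concrete, consider a minimal sketch, in Python, of the kind of probabilistic decision Cummings describes: a classifier assigns probabilities to candidate actions, the system acts on the most probable one, and a confidence threshold determines when it should defer instead. The function name, the threshold, and the numbers are illustrative assumptions, not any vehicle’s actual control logic.

```python
# A minimal sketch (illustrative only): act on the most probable classification,
# but defer when confidence falls below an assumed threshold.
CONFIDENCE_THRESHOLD = 0.90  # assumed value, not taken from any real system

def choose_action(action_probabilities: dict) -> str:
    """Pick the most probable action, or hand off when the 'best guess' is weak."""
    best_action = max(action_probabilities, key=action_probabilities.get)
    confidence = action_probabilities[best_action]
    if confidence < CONFIDENCE_THRESHOLD:
        # The model is guessing with low confidence: defer rather than act.
        return "defer_to_human_or_safe_fallback"
    return best_action

# The classifier favors "continue_straight," but only at 62 percent confidence.
print(choose_action({"continue_straight": 0.62, "brake": 0.28, "swerve_left": 0.10}))
# -> defer_to_human_or_safe_fallback
```

The sketch also shows where the question bites: set the threshold too low and a wrong guess gets through; set it too high and the system surrenders the speed that made it attractive in the first place.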

The risk with strong AI, commentators such as Nick Bostrom believe, is that you may have only one shot to get the technology right, especially once the AI-enabled technology, capacity, or weapon is introduced to the internet. It is one thing if it fails to perform its task. But what if that task is lethal or linked to critical infrastructure? What if it does not shut down as intended or directed? Or continues to pursue its tasks through other machines, even when commanded to stop or disconnected from the internet? Experts express skepticism about how realistic these scenarios are. But how far-fetched is it to ask whether AI will work as intended? What if Stuxnet, for example, had been introduced on the internet rather than in a confined system, and what if it could not only attack a Siemens-designed supervisory control and data acquisition (SCADA) system but was also designed to identify any firewall defenses and rewrite its code to find an entrance?

One lesson from the Cold War is that arms races place pressures on states and actors to produce, deploy, and match technologies before they are ready, tested, and foolproof. The “Get It Right Once” risk might occur in a different kind of setting, where AI is a critical component in a tactical or strategic warning system—for example, a naval system designed to identify and prevent a missile attack, or a satellite system designed to warn of a missile launch.

One way to address unintended consequences is to ensure there is a human in the decision loop to act as a circuit breaker. This seems particularly imperative where weapons are concerned. Here, one of the questions is where to put the human: in, on, or out of the loop. Again, we have the centaur’s dilemma. However, part of the allure and advantage of AI-enabled systems is their speed and ability to respond instantaneously, whether to swarms, cyberattacks, missile launches, or stock trades. The tension is between taking full advantage of the AI capacity and providing a human air gap in the system, on or in the loop, that will slow it down and introduce both human judgment and human frailty. There are military and arms race pressures pushing, perhaps inexorably, toward out-of-the-loop constructions. In cyberspace, there is the added tactical necessity of providing instantaneous defense against cyberattacks.
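
One way to picture the in, on, and out-of-the-loop distinction is a hedged sketch like the following, in which the same AI recommendation is gated differently depending on the mode. The mode names, the veto window, and the helper functions are hypothetical; no fielded system or doctrine is being described.

```python
# Hypothetical sketch of the in/on/out-of-the-loop distinction; the mode names,
# veto window, and helper functions are illustrative assumptions only.
import time

def execute(action: str) -> None:
    print(f"executing: {action}")

def human_approves(action: str) -> bool:
    # Stand-in for an operator's affirmative fire/no-fire style decision.
    return input(f"approve '{action}'? [y/N] ").strip().lower() == "y"

def decide(recommended_action: str, mode: str, veto_window_seconds: float = 5.0) -> None:
    if mode == "in_the_loop":
        # The human must approve before anything happens: maximum judgment, least speed.
        if human_approves(recommended_action):
            execute(recommended_action)
    elif mode == "on_the_loop":
        # The system proceeds automatically unless the human intervenes in time.
        print(f"executing '{recommended_action}' in {veto_window_seconds}s unless vetoed")
        time.sleep(veto_window_seconds)  # a real system would poll for a veto here
        execute(recommended_action)
    elif mode == "out_of_the_loop":
        # Fully autonomous: maximum speed, no human circuit breaker.
        execute(recommended_action)

decide("engage_incoming_swarm", mode="on_the_loop", veto_window_seconds=2.0)
```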

In such settings, risks may come in threes.

1 Misunderstanding: AI appears to work as intended, but the human in the loop does not understand the results.

2 Fumbled Interface: The human in the loop does not have time to process or act on the results. Or,

3 Inaccuracy: The AI-enabled system and results are not accurate.

Sydney Freedberg and Matt Johnson illustrate some of these risks with reference to Air France flight 447, which crashed in 2009 in the South Atlantic Ocean en route from Brazil to France. Apparently, the pilots were unable to transition from autopilot to manual flight during an in-flight emergency caused when external airspeed sensors likely froze and erroneously signaled the autopilot computer that the aircraft was losing speed and at risk of stalling. The pilots had only seconds to assess the situation and respond. Before they could do so, the aircraft apparently stalled and plunged into the ocean. The authors also cite a 2003 friendly-fire incident in Iraq involving a Patriot battery. The technology worked, but when it passed control back to the humans in the loop to make a fire or no-fire decision, the operators were not sure what they were seeing and made an erroneous choice to fire at friendly forces.2 The USS Vincennes incident, involving the shooting down of a civilian Iranian airliner by an Aegis cruiser in 1988, is often cited as an illustration of a technology that worked—the data was correct regarding the speed, direction, and climb of the aircraft—but in the moment, human actors misread the data, perceiving an inbound aircraft on an attack azimuth. Scholars speculate that time pressure—and, perhaps, the commander’s aggressive disposition—compounded the interface challenge.3

The examples continue. In 2018 and 2019, following the crashes of two Boeing 737 Max aircraft, safety officials determined that new software designed to prevent stalls, known as the Maneuvering Characteristics Augmentation System (MCAS), was at issue. The software was added because the Max’s more powerful engines were mounted in a different place on the airframe than in previous models. The investigations continue; however, flight data indicates that the pilots on both aircraft struggled to fly the aircraft manually after the MCAS activated in an erroneous effort to prevent a stall. Investigations also indicate the pilots were not trained on the new software and may not have even known of its existence. They further indicate that different configurations of the software were sold to different airlines and that certain safety features, such as a second stall warning indicator and a warning light when the indicators were in conflict, were “extras” that cost-conscious airlines did not purchase. Finally, it appears that Boeing rushed the 737 Max to market to compete with the new fuel-efficient Airbus A320neo. In the process, Boeing may have persuaded internal and external safety officials, including the FAA, that the aircraft was using existing technology rather than new technology requiring trials and FAA certification.4 Arms races, like market races, create incentives to cut safety and security corners.

In a different manner, following the terrorist attacks on two mosques in Christchurch, New Zealand, in March 2019, social media outlets struggled to prevent the uploading of videos of the attack despite algorithms designed to identify and remove violent content and specific efforts to remove the Christchurch video. Small modifications in speed, content, and length, it turned out, fooled the corrective algorithms and overwhelmed the capacity of human reviewers to intercede. Meanwhile, push algorithms were automatically “recommending” the video to platform users with a propensity to view violent content. Media estimated that 300,000 copies of the video, or portions of it, were uploaded to the internet despite active human and technical efforts to prevent its spread, efforts that blocked an estimated 1.2 million attempts to upload the video.5 Three points emerge. First, the safety algorithms were, it seems, easily fooled. Second, the human part of the centaur—the humans in the loop—could not keep pace and were overwhelmed by the push algorithms operating on autopilot. Third, offense beat defense; it was easier to upload the video than to take it down.
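
A deliberately simplified sketch illustrates why small modifications can defeat naive matching: an exact-hash blocklist catches a byte-identical re-upload but misses the same file trimmed or re-encoded even slightly. Real platforms use perceptual hashing and machine-learning classifiers rather than exact hashes, so this is an analogy for the brittleness at issue, not a description of any platform’s pipeline.

```python
# Simplified analogy, not any platform's actual pipeline: exact-hash blocklisting
# catches a byte-identical copy but misses even a trivially modified one.
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    return hashlib.sha256(video_bytes).hexdigest()

blocklist = {fingerprint(b"ORIGINAL-ATTACK-VIDEO")}   # hypothetical blocked content

def is_blocked(upload: bytes) -> bool:
    return fingerprint(upload) in blocklist

print(is_blocked(b"ORIGINAL-ATTACK-VIDEO"))          # True: exact copy is caught
print(is_blocked(b"ORIGINAL-ATTACK-VIDEO-TRIMMED"))  # False: a small change slips through
```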

Now connect these risks to existential weapons. In 1979, in an incident documented by the National Security Archive6 and recounted in Robert Gates’s 1996 book From the Shadows, President Carter’s national security advisor Zbigniew Brzezinski

… was awakened at three in the morning by [military assistant William] Odom, who told him that some 250 Soviet missiles had been launched against the United States. Brzezinski knew that the president’s decision time to order retaliation was from three to seven minutes. Thus, he told Odom he would stand by for a further call to confirm Soviet launch and the intended targets before calling the president. Brzezinski was convinced we had to hit back and told Odom to confirm that the Strategic Air Command was launching its planes. When Odom called back, he reported that 2,200 missiles had been launched. It was an all-out attack. One minute before Brzezinski intended to call the president, Odom called a third time to say that other warning systems were not reporting Soviet launches. Sitting alone in the middle of the night, Brzezinski had not awakened his wife, reckoning that everyone would be dead in half an hour. It had been a false alarm. Someone had mistakenly put military exercise tapes into the computer system.7

History, in a way, would repeat itself four years later. In a case often cited by AI skeptics, Soviet lieutenant colonel Stanislav Petrov was serving as a watch officer at a Soviet early-warning command center. Alarms indicated the launch of an American first strike against the Soviet Union. Petrov was skeptical. The pattern on the warning display did not look like what he anticipated a first strike would look like. There were too few missiles. It was Petrov’s duty as senior officer of the watch to call the Kremlin to trigger the Politburo’s response, perhaps a nuclear exchange. He stalled. As recounted in his 2017 obituary in the New York Times (quoting a BBC Russian service interview thirty years after the incident),

I had all the data [to suggest there was an ongoing missile attack]. If I had sent my report up the chain of command, nobody would have said a word against it. There was no rule about how long we were allowed to think before we reported a strike. But we knew that every second of procrastination took away valuable time; that the Soviet Union’s military and political leadership needed to be informed without delay. All I had to do was to reach for the phone; to raise the direct line to our top commanders—but I couldn’t move. I felt like I was sitting on a hot frying pan.8

The warning system screamed alert; data pointed to an attack, but with an anomalous pattern. Petrov’s military and bureaucratic training might have driven any doubts and decisions up the chain of command. But Petrov’s intuition cautioned against doing so. He took the risk of waiting and was right. But not every lieutenant colonel is a Petrov and not every national security advisor a Brzezinski.

With AI, there likely will be less time to think and adjust—that is, if the system is intended to augment human decision, as opposed to displacing it altogether, as in the case of an autonomous and automatic system. Recall that what AI is less good at than humans is situational awareness and judgment, which depend on context, experience, and intuition. It is also more likely that an AI-enabled system will instill greater confidence in its operators than Cold War–era technology, and with good reason. In the abstract, would you be more likely to trust a Soviet-era early warning system or IBM’s Watson? Watson, of course. Moreover, narrow AI is best at—and better than humans at—pattern recognition and identification, which is what early warning detection is all about. But let’s put the question another way: would you be willing to trust Watson—to bet your life and the survival of humanity on Watson—without first having a trained, calm, and rational officer assess the results?

Brzezinski not only knew enough to verify the initial warning; he appears to have calmly waited as the minutes ticked by for not one, but two verification inquiries. In other words, he was in the loop, and he waited in the loop as the clock ticked down. Note as well that, according to the National Security Archive, the problem was not the mistaken use of an exercise tape but the loading of software into NORAD’s computers. As a result, “The information on the display simultaneously appeared on the screens at [Strategic Air Command (SAC)] headquarters and the National Military Command Center …” thus, in a circular manner, confirming the attack with two sources.9

No doubt informed by these incidents, the United States has stated that it will not deploy weapons systems without a human in the loop, unless an opponent does so first, creating a tactical or strategic advantage. In short, the United States issued a no-first-use pledge. Other states have not provided similar negative security assurances. Arms race pressures to deploy better and faster systems and to maximize the advantages of AI may prompt governments to remove the human from the loop. One immediate question is whether decisionmakers should embed in policy or law a prohibition on the use of autonomous warning systems linked to kinetic weapons and especially nuclear weapons.

The risk of unintended consequences is compounded by three counterintelligence risks. The first risk is from supply-chain contamination or sabotage. In this scenario, the AI-enabled system does not work as intended because an opponent has altered the circuitry or code involved by introducing faulty hardware or software. The second risk comes in the form of AI used as a weapon, perhaps through supply-chain weakness or through first-mover advantage, causing an opponent’s system to fail. Finally, AI presents new capabilities to spoof an opponent by disguising an attack, camouflaging attribution, or engaging in false flag operations. As the DOD Roadmap report notes, “This problem is especially apparent in unmanned systems, which by their very nature have an elevated reliance on information systems to function safely, effectively, and consistently.”10

Finally, a technology arms race is likely to prompt a parallel espionage race. The faster the race, the greater the effort to collect information on that race and to curtail an opponent’s advantage through theft and espionage. Knowledge of an opponent’s capabilities and intentions can be stabilizing, as in the case where actual knowledge debunks perceptions of a bomber or missile gap and thus deters unnecessary additional arms expenditure. However, it can also be destabilizing where it leads to increased, if not rampant, efforts to penetrate and steal an opponent’s knowledge and capabilities, which in the case of AI may also lead to uncertainty over the integrity of any ensuing AI function. In this race, one of military and economic espionage, the United States may find itself at an asymmetric disadvantage, depending on how it approaches the subject of economic espionage in law and policy.11

FOREIGN RELATIONS IMPACT

AI will have foreign relations impact in at least seven ways. First, it will affect global stability in known and unknown ways. Former secretary of the treasury and Harvard economist Larry Summers predicts that, on a global basis, we “may have a third of men between the ages of twenty-five and fifty-four not working by the end of this half century.” This would represent a higher unemployment rate than during the Depression, affecting political as well as economic stability and potentially leading to mass migration and military conflict.12

Second, a global AI economy will potentially reorder our understanding of north-south divides, as well as so-called third-world, second-world, and first-world orders, perhaps by exacerbating those divides. AI capacity may redefine the nature and number of superpowers, a phrase first coined to capture the advent of nuclear weapons.

Third, AI could alter and shuffle the relative power of small but technologically sophisticated states in terms of economic, political, and military power. Let’s call this the AI Singapore Effect. To the extent AI comes to influence, or perhaps transform, the nature of military power, it may give smaller, less populous states the military capacity to fight above their weight or to do so in an asymmetric manner.

Fourth, so long as military power, intelligence capacity, and economic stability depend on AI, supply-chain security will increase in importance. The import and export control regimes and like-minded regimes will increase in importance as well. AI-enabled technology will only be as good or as reliable as the individual components that comprise AI-enabled systems, such as transistors, circuitry, and software. A failure to advance AI capacities could have devastating security impact. So, too, could reliance on an AI-enabled capacity penetrated by an adversary. Policymakers might ask whether it is time for an Australia Group to address the dissemination of AI technologies on a like-minded basis.

Fifth, AI presents asymmetric opportunities for non-state actors just as it creates opportunities for state actors. Unlike nuclear weapons, which present a triad of obstacles to their acquisition (fissile material, a delivery vehicle, and a warhead), AI is potentially accessible to almost every actor at a relatively inexpensive price, depending on how it is defined and what it is used for. Consider, for example, that the autonomous commercial vehicle or remotely piloted aerial delivery system can also be used as a mobile IED or bomb. Likewise, the Stuxnet code jumped the rails, was publicly identified by a private actor, and was reused by non-state actors.13 AI-driven cyber-tools could be as well.

Sixth, AI may help authoritarian regimes better track and control their populations and retain power. AI algorithms are the censor’s tool on the internet. Gregory Allen, coauthor of the Belfer/IARPA study “Artificial Intelligence and National Security,” describes how facial recognition is used:

Snapchat uses AI-enabled facial recognition technology to allow teenagers to send each other funny pictures. China uses the same technology in support of domestic surveillance. Jaywalk across a street in Shenzhen, and you’re liable to have your face and name displayed on a screen nearby, along with a police reminder that “jaywalkers” will be captured using facial recognition technology.14

With AI, advantage goes to the authoritarian regime and to law enforcement.

Finally, AI will lead to an arms race.

ARMS RACE RISKS AND IMPERATIVES

In 2015, leading AI researchers signed an open letter expressing concern about an AI arms race.

Many arguments have been made for and against autonomous weapons; for example, replacing human soldiers by machines is good because it reduces casualties for the owner but bad because it lowers the cost threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.15

So long as states perceive security advantage from AI and, equally important, existential disadvantage from an opponent’s AI capacity, they will feel compelled to keep pace and respond in kind or better. An AI “arms race” is not only inevitable, it is already here.

As the Belfer study indicates, and others have observed, AI is a potentially transformative military technology on par with nuclear weapons and aircraft. It is also a transformative security technology, offering not just military advantage but economic, intelligence, and decisional benefits. AI offers potentially decisive advantage to any breakout power in the field. That makes this a technology race and not just an arms race. For the United States, AI presents the classic opportunity to offset, that is, to compensate for, China’s advantages in geographic proximity to disputes arising in the Pacific region and numeric advantages in manpower and certain conventional weapons. It also offers the United States the allure of a panacea or technological fix to the challenge of maintaining a large standing military in a cost-conscious democracy.

But offset works both ways. For China, AI-enabled weapons, like swarms, hold the prospect of offsetting America’s naval advantages, especially in aircraft carriers, submarines, and surface ships, while AI-enabled cyber-weapons may mitigate U.S. technological advantages in cyberspace. AI also offers China an equal seat at the national security table, where it was not the equal of the United States or the Soviet Union during a Cold War dominated by nuclear weapons. Commentators note that AI-enabled systems are particularly suited to the vast maritime domain of the Pacific, including so-called areas of denial in the Western Pacific, such as the South China Sea.

For Russia, AI-enabled weapons and active measures present new tools to assert “great power” influence, or at least to compete beyond its economic and military means. Russia’s willingness to use cyber-tools to interfere in the 2016 and other U.S. elections makes Russia’s interest in AI problematic. So does Russia’s “asymmetric” willingness to ignore or violate cyber-norms and laws.16 Given its potential to transform a nation’s physical and economic security, no power great or regional can afford to fall too far behind.

This arms race is also a race between systems and the relative security advantages of each. China has the advantage of centralized control and purpose. It also has an enormous and growing cache of data. In 2017, China announced a US$150 billion state-driven program for AI development with a goal of becoming the leading AI economic and security power by 2030. The development plan overtly states as an objective “first-mover advantage” in the development of AI.17 In this system, the government can channel AI applications to security applications without restriction, just as it can control and access data at the national level for security purposes.

The United States has the advantage of creative dispersion fueled by financial incentive and relative regulatory freedom. In 2019, six of the seven leading AI companies were in the United States.18 But it should not be lost on American observers that Baidu, Tencent, and other companies are active partners in China’s efforts. Moreover, as commentators note, China has had its AI Sputnik moment. This occurred when AlphaGo beat world champion Lee Sedol in 2016 and then China’s best player, Ke Jie, in 2017. China noticed and watched. The United States did not. At least one former senior government official has stated that if this were the Cold War, we would be losing.19 Others offer optimism. Like China, Russia has the advantage of authoritarian focus, as well as the element of surprise that comes with low expectations and a willingness to operate outside of expected norms. It also has the flexibility and freedom of action that come from having less stake in the stability and viability of the international economic system and its norms. Certain AI applications offer great promise for a government willing to interfere through cyber means in the democratic and economic institutions of other states, such as Estonia (2007), Georgia (2008), and the United States (2016).

There have been other arms races in history. Indeed, policymakers and decisionmakers will search for historical as well as legal metaphors to define and address the questions presented by an AI race. The most obvious metaphor is the nuclear arms race during the Cold War. There are strong similarities, including to the time before nuclear weapons, when scientists and governments raced to harness the power of the atom, not quite sure how, when, or to what end and ultimate result. AI, like nuclear weapons in the 1950s, also has the potential to transform military doctrine, spending, and policy. As with the nuclear arms race, until that doctrine is set, understood, and stable, the world may be less stable.

However, there are many differences between AI and nuclear weapons. For one, AI is not a weapon; it is a range of capacities that can be used to enable weapons, robots, and autonomous vehicles, and for other purposes. Perhaps it is more like atomic energy, which has both peaceful and military purposes. Two things that do seem more alike than different are the potential for AI to transform military strategy, as nuclear weapons did before it, and the absence of a framework, let alone an agreed framework, to address the legal, moral, and ethical issues presented. Because security entails both physical safety and the preservation of our values, we should create such a framework now. That is what it means to both support and defend the Constitution. While nuclear weapons provide the most apt metaphors, arms control and the law of armed conflict (LOAC) offer additional lessons from which to assess AI.

As discussed in chapters 8 and 9, the threshold question now is which lessons are most apt. Policymakers, lawyers, and ethicists should ask:

What lessons can be learned from the Cold War arms race?

What principles from arms control and LOAC might or should apply to AI?

What strategic and tactical doctrine should apply to AI-enabled weapons, weapons systems, and warning components?

What are the opportunity costs of an AI arms race?

AN INCREASE IN CONFLICT?

A greater selection of automated and unmanned kinetic and cyber-weapons could change the policy calculus for employing military force, as UAVs have done. Commentators debate whether AI-enabled warfare will increase the risk of conflict by reducing its “cost,” at least to the initiator of action. The calculus may change, the argument goes, because force may be used with less risk to U.S. military personnel and less collateral civilian harm. The same argument was, and is, made with respect to UAVs. Of course, international law and policy are reciprocal. Thus, if this assessment is correct, AI-enabled weapons may increase the frequency with which such weapons are used against the United States and not just by the United States.

It is also possible that AI-enabled weapons may reduce the risk of conflict by increasing its costs, or by changing the military balance of power. For example, swarm technology may make surface ships, and especially aircraft carriers, vulnerable in ways that will limit how military power is projected across oceans and from offshore. Likewise, some war game modeling indicates that, with AI-enabled weapons, casualties in a Pacific war between the United States and China could be in the hundreds of thousands within days.20 To avoid such a costly conflict, the United States may be less inclined than before to defend Taiwan with naval power in the Taiwan Strait. If this alternative calculus is correct, it will necessitate changes in policy or changes in deterrence strategy, most likely both. Whether this is a good policy outcome is a different question from whether it makes conflict more or less likely. AI-enabled, or lethal, AWS could dramatically increase rather than decrease the cost of conflict.

Just as AI may augment the U.S. security toolbox, it will augment the adversary’s toolbox. Thus, policymakers need to anticipate new weapons, new threats, and new uses for AI whether those uses are contemplated by the United States or not. We have seen this with Russian information operations in cyberspace, where much of the action is occurring below the level of armed conflict. These are not remarkable insights, but they dominate the literature on AWS, along with debate over whether and when a human should or must be engaged in the decision to use lethal force generally or against specific targets.

DECISIONMAKING PATHOLOGIES

Just as AI will introduce new capabilities and risks, it will also exacerbate existing national security decisionmaking pathologies, especially those associated with the rapidity of decision and with secrecy. “Pathology” is defined here as a factor or condition that undermines optimal decisionmaking. Good process—meaning timely, contextual, and meaningful process—along with calm leadership, is the antidote to these decisional pathologies. However, these are mitigating antidotes, not eliminating immunizations. Five prevalent pathologies are:

 Speed

 Secrecy

 Incomplete or missing information

 A focus on the immediate, and

 The national security imperative

Issues of cognitive bias also come into play, especially in how decisionmakers analyze information, assess history, and apply doctrinal perspectives. AI has the potential to exacerbate, or mitigate, each of these pathologies.

Bureaucratic Speed and Machine Speed

Let’s focus on the rapidity of national security decisionmaking, because speed is a signature attribute of most AI applications. It is also a factor that can aggravate most of the underlying AI risks. Indeed, it is certain to do so, as it has in cyberspace. Because security actors cannot risk an adversary gaining a first-mover advantage, decisionmakers may feel pressure to change the way decisions are made, to take shortcuts, and, perhaps, to remove humans from the decisional loop.

Rapid decisions are endemic to national security. The compulsion and necessity for speed come from several factors. In the case of real-world events, the need for speed is intuitive. If you are reacting to or seeking to influence events, your timeline is dictated by those events and not by optimal considerations of process, factual development, and policy deliberation. Moreover, opponents may choose moments of distraction or prior commitment to act, presenting additional challenges and further minimizing the capacity and time to respond. The intelligence process seeks to prevent surprise and to provide early warning, and thus to extend decisional timelines and opportunities to mitigate. But intelligence is an imperfect instrument for reasons of scope, capability, and the simple difficulty of predicting or seeing actions that are designed to be hidden.

The media cycle also feeds the need, or at least the pressure, to act with speed. This started as the “CNN effect,” the impact of a twenty-four-hour news cycle driving a parallel response cycle. Rather than being digested and responded to in a deliberate manner, each event or story now seemingly necessitates an immediate response before someone else controls the narrative. Executive actors buy into the cycle so as not to appear to be in a reactive mode or indifferent, and so that their version or understanding of the story is told. Alas, and of course, to those who draft and review government press guidance, the CNN days seem like the good old days of thoughtful and timely reflection. The media cycle is further compressed by the advent of new platforms and a decline in the societal ethic of what it means to produce and espouse fact-based news on either side of the microphone. Let’s call this last factor the “Moynihan effect,” after Senator Daniel Patrick Moynihan, who coined the phrase “You are entitled to your own opinion, but not your own facts.” Not anymore. Moynihan sensed what was coming. But Moynihan passed away in 2003, before the advent of Twitter, Facebook, and social media gave every person a media platform. It was also before push algorithms and information bots. Executive actors, including the president, may feel compelled to respond to every tweet and post, and not just to inquiries from established media outlets, if they are not already affirmatively using these platforms to shape the news.

Technology has also changed the decisionmaking timeline, and not just in the handling of public communications. The most dramatic manifestation of this trend is in the realm of cyber-operations. Cyber-tools used for hacking, crime, espionage, and information operations are instantaneous in effect. That means that to be effective, technology-based defenses must be instantaneous or proactive. In cyberspace, decisionmakers are always on the clock and this clock runs on milliseconds.

Technology has influenced the necessity of decisional speed in other dramatic ways as well, including, for example, the use of algorithm-based hedge fund trading that can lead to instantaneous or flash market crashes, without human decision, intent, or action. This was the case with the 2010 Flash Crash, an event cited by many AI analysts. The crash was AI-driven. The need for machine speed led traders to remove human decisionmakers from the operational loop, because humans could not compute the marginal gains or losses from fractional trades fast enough to compete against algorithms making the same trades.21 Humans are involved in the decision chain—in writing the codes that inform algorithms that drive the trades—but the codes are opaque to public inspection and regulation, even if the economic and potential national security consequences are not. The market fell almost a thousand points in under fifteen minutes before largely recovering, in part because a different algorithm triggered a pause in trading on the Chicago Mercantile Exchange.
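
The dynamic can be caricatured in a few lines: trading decisions execute at machine speed with no human in the loop, while a separate circuit-breaker rule, standing in loosely for the kind of pause logic the exchange applied, halts trading after a rapid drop. The thresholds, the price series, and the momentum rule are invented for illustration.

```python
# Toy illustration with invented numbers: machine-speed trading plus a separate
# circuit-breaker rule that pauses trading after a rapid drop.
PAUSE_THRESHOLD = 0.05   # assumed: pause after a 5 percent drop within the window
WINDOW = 5               # assumed: look back over the last 5 price ticks

def simulated_feed() -> list:
    # Hypothetical price series ending in a sudden machine-speed sell-off.
    return [100.0, 100.1, 100.2, 100.1, 100.3, 100.2, 100.1, 100.0, 96.0, 92.0]

def circuit_breaker(prices: list) -> bool:
    """Trip if the price has fallen sharply within the recent window."""
    window = prices[-WINDOW:]
    return (max(window) - window[-1]) / max(window) >= PAUSE_THRESHOLD

def algo_decision(price: float, moving_average: float) -> str:
    # A crude momentum rule standing in for proprietary, opaque trading code.
    return "sell" if price < moving_average else "buy"

prices = []
for tick, price in enumerate(simulated_feed()):
    prices.append(price)
    if len(prices) >= WINDOW and circuit_breaker(prices):
        print(f"tick {tick}: circuit breaker tripped; trading paused")
        break
    print(f"tick {tick}: {algo_decision(price, sum(prices) / len(prices))}")
```

Note that no human appears anywhere in this loop; the only brake is another algorithm, which is the architecture the flash crash exposed.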

In ironic fashion, bureaucracy can also create its own necessity for speed. It is ironic because “bureaucracy” is associated with layered delay. Here, delay and speed work together. The need for speed originates when one part of the bureaucracy takes too much time, necessitating that another part act with too little. This may occur naturally, for example, when it takes time to identify the expert or process in the government best able to address the subject at hand. The issue may be out of the ordinary or unpracticed, as illustrated by the government’s response to the 2010 Deepwater Horizon BP oil spill in the Gulf of Mexico or the 2015 response to the West African Ebola crisis. The executive branch had not prepared for, nor practiced, a response to these specific, or even these general, types of crises. In such situations, by the time a national decision is framed and presented, if there is one to be made at all, there is “no time left to make it.” This does not excuse the delay; it explains it.

More commonly, bureaucratic delay necessitating rapid decision derives from bureaucratic function. Bureaucratic actors put off the near term in favor of the immediate, until the near term becomes the immediate. This may happen when key staff actors are not responsible, accountable, or identifiable within the decisionmaking process; in other words, they are critical, without feeling the burden and responsibility of being critical. Think here of a deadline to transfer aid, or provide a report, or make a speech. So long as responsibility is anonymous or diffuse, the staff actor or agency has incentive to hold the matter until the last minute, at which point the actor or agency will urgently convey the matter up the chain of command for immediate decision.

The legislative cycle has come to operate in a similar way, on a perpetual delay-speed cycle. The Congress sits on an issue for months, and then rushes to complete a funding or policy task at the last minute, using the leverage of a real-world deadline to help create the political necessity and cover for acting. That this delay is artificial, or self-induced, does not change the imperative to act with speed, or some might say haste, when the decisionmaker or institution with authority is finally presented with options.

Finally, bureaucratic speed can be necessitated by false deadlines, of the sort that occur when decisionmakers want to get something done on their timeline. This may occur for convenience—“I want the proposal before I go on my trip.” Or it may be used to drive bureaucracy—“If I do not get this proposal by Friday, I will fire you.” All of which is not necessarily good or bad, but it does sometimes explain the necessity for speed.

Machine speed is altogether different from bureaucratic speed. Machine decisions are instantaneous. In many cases, they are also preset in software and algorithms, or pre-delegated based on human choice and decision. Depending on how it is applied, AI can mitigate the impact of speed on decisionmaking as well as exacerbate it, and do so in profound ways. A decisional process that is not ready for these impacts may not reap the advantages of AI capacity and may eschew the use of a valuable tool. However, decisionmakers may swing in the other direction and rely on AI-driven actions when human judgment and decision are needed.

There are added risks generated by machine speed. Because AI moves so quickly—and must, if it is to be effective—decisionmakers in some situations may have less time or no time (or perceive that they have less time or no time) to respond. This can drive policymakers to rapid decisions or to defer to automatic responses, which may or may not be optimally tailored to actual events or situational facts. Imagine an instantaneous Schlieffen Plan. This is, of course, already an existing reality in the realm of cybersecurity and cyber-operations. In cyberspace, there is risk in waiting to respond to an attack while facts are gathered, attribution is confirmed, and options are identified. This paradigm may push responses toward automatic defensive options and away from offensive-defensive or offensive responses that might more effectively stop attacks and serve to deter future ones.

EXISTENTIAL THREAT?

Some commentators believe that AI in the potential form of superintelligent artificial intelligence (SAI) presents an existential threat to humanity. Others place SAI in the realm of science fiction, finding it an overwrought distraction from the real and immediate security and commercial applications and implications of AI. However, given the media attention afforded to the topic, security and legal generalists ought to understand the argument and its nomenclature. In 2017, while touring Yandex—one of Russia’s leading AI labs—Vladimir Putin was recorded asking the CEO, “When will it eat us?”22 The question received media mockery. But AI specialists would know the question arose from the debate about AI as an existential threat. They would further understand that the concern presented was not the risk that the AI system might one day eat its developers, but the reality that the president of Russia was immersed in AI at this level of detail.

The most visible proponents of the existential-threat school are Tesla CEO Elon Musk and the late Cambridge astrophysicist Stephen Hawking. In a widely quoted speech, Hawking concluded, “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which. That is why, in 2014, I and a few others called for more research to be done in this area.”23

Musk, in turn, has stated, “AI is a fundamental existential risk for human civilization, and I do not think people fully appreciate that … [AI is] the scariest problem. I think by the time we are reactive in AI regulation, it’s too late.… I keep sounding the alarm bell.”24

On the question of existential threat, an outside observer might divide the AI community into the following three camps.

The end of the human era. The journalist James Barrat states his thesis in the title of his book Our Final Invention. The premise of the book, and this school of thought, is that scientists, governments, and industry are inexorably marching toward the creation of superintelligent artificial intelligence. The motivation to do so varies. There is the promise of curing cancer and other diseases. There is the allure of making money. There is the prospect of immortality. There is also the sense, as there was with nuclear weapons, that some scientists and engineers simply cannot stop pushing to the edge of the possible. The thesis further posits that humankind will lose control of its own invention, however benign or noble the initial intent.

AI will become humankind’s last invention, because SAI-enabled machines will optimize and maximize whatever it is they are initially programmed to do. But having achieved SAI, and thus the capacity to outthink their inventors, they will rewrite their instructions to override the “off switch,” or to hide their ability to do so, until they have propagated through the internet to survive beyond their immediate source of power and connection to the outside world. This scenario is characterized in different ways with varying anthropomorphic effect.

One version of this scenario is the Bostrom Paper Clip Optimizer.25 The paper clip is chosen because there is nothing inherently good or bad about a paper clip, or a paper clip machine. However, having achieved SAI, the machine rewrites its code to optimize paper clip production. Thus, using the internet and its superior knowledge, the optimizer diverts all sources of energy to its paper clip efforts. Next, of course, it converts all sources of carbon into energy. Humans are made of carbon, and thus, eventually, the machine programs other machines to capture humans and turn them into carbon energy. The paper clip machine is not evil; it is just good at what it does.

Friendly and unfriendly AI—The fork in the road. This camp includes scientists, businessmen, and commentators who believe AI could go either way. AI could be a force for good like no other; it could help find a cure for cancer, solve climate change, and alleviate hunger and poverty. Or it could be a force of harm. AI could become unfriendly because of unintended or unanticipated effects, like the paper clip optimizer, or, more likely, because humans program AI to do unfriendly things. The fork in the road is at the root of Hawking’s and Musk’s concerns. It also informs Nick Bostrom’s concerns. Bostrom postulates that “an extremely good or an extremely bad outcome is more likely than a more balanced outcome.”26

Keep calm and carry on. The third camp is largely the province of governments and technology companies. It acknowledges the risk but embraces a fundamental confidence that AI will be a force for good and will ultimately evolve in a positive manner under human control. This view is captured in IBM senior scientist Murray Campbell’s 2016 response to Hawking and Musk:

I definitely think it’s overblown. It’s worthwhile to think about these research questions around AI and ethics, and AI and safety. But it’s going to be decades before this stuff is really going to be important. The big danger right now, and one of IBM’s senior VPs has stated this publicly, is not following up on these technologies.27

This view is also captured in the Stanford 100 Year Study of AI, which concludes:

While the study panel does not consider it likely that near-term AI systems will autonomously choose to inflict harm on people, it will be possible for people to use AI-based systems for harmful as well as helpful purposes.28

Contrary to more fantastic predictions for AI in the popular press, the study panel found no cause for concern that AI is an imminent threat to humankind.29

This is almost identical to the view of Ryan Calo: “My own view is that AI does not present an existential threat to humanity, at least not in anything like the foreseeable future.”30

There are seeds of caution in terms like “near-term,” “imminent,” and “at least not in anything like the foreseeable future.” One reason this camp is optimistic about AI is that its members do not believe AI will work entirely as anticipated and thus be as omnipresent and efficient at making paper clips, or whatever it is programmed to do, as some have forecast. Again, IBM engineer Murray Campbell:

When was the last time somebody walked into your office and posed a perfectly well-formed, unambiguous question that had all of the information in it required to give a perfectly formed, unambiguous answer? It just does not happen in the real world.31

But if confidence comes from a lack of perfection, we find ourselves back at the first risk identified in this chapter, the risk that AI will not function as intended.

TAKEAWAYS

AI comes with great promise and potential risk. While much of the popular commentary focuses on “existential risk,” one suspects that some of the doomsday rhetoric is motivated by a desire to generate discussion and avert worst-case scenarios, not necessarily to predict them. Let’s focus on the real, known, and immediate risks:

1 Technology rarely works entirely as intended, at least at the outset. Scientists in the weapons field and others have not demonstrated a long track record of self-regulation when peril and promise converge on the road to knowledge. Moreover, so long as AI in some form and in some manner holds out the prospect of military advantage, including existential military advantage, national security actors will strive to keep pace. They cannot risk doing otherwise.

2 AI may first drive decisionmakers to act quickly, too quickly, perhaps automatically, based on percentages, models, and potential false positives, without time for reflection and the sort of slow thinking that also should inform national security. This is the centaur’s dilemma.

3 AI-enabled weapons and trip wires may increase the risk of mistaken war as well as intended war. This risk already exists in cyberspace, but heretofore it has been contained to cyberspace. AI has the potential to combine the risk of Cold War nuclear first strikes, real and perceived, with the immediacy of cyber-operations. When AI enables weapons across the spectrum from space to sea, it has the potential to place global warfare on a hair trigger. That trigger may be an AI-enabled maritime weapon intended to detect the approach of an offensive swarm, or it may be an AI-enabled counter-battery weapon at the Korean DMZ or on the Golan Heights that is programmed to respond before it is too late to defend.

One purpose of law and process is to mitigate these risks while maximizing opportunities to reap the benefits of technology.
