
THREE

The Perfect Sentinel

NATIONAL SECURITY APPLICATIONS

How might AI influence national security? One response: What won’t it influence? To understand AI’s potential impact, it helps to briefly consider AI’s application to intelligence and military operations.

INTELLIGENCE

AI’s potential as a national security tool is most evident with intelligence. Here, narrow AI’s capacity to outperform humans in pattern recognition and anomaly detection, and the speed with which it can do so, presents an obvious application of existing and emerging technology. Think of intelligence concepts such as “connecting the dots” and the “mosaic theory”—that is what AI is all about. Or imagine the computational capability to search the entirety of the internet for threats in real time, along with the capacity to connect those threats to purchase and travel records as well as phone numbers. AI also provides the capacity to immediately analyze information, or a story, trace its origin to the head of the cyber-stream, and distinguish fact from fiction or something in between.

AI software or AI-enabled systems can, or will, perform the following intelligence tasks:

 Persistent surveillance

 Image recognition, including facial recognition

 Link analysis

 Voice recognition

 Sorting

 Aggregation, a.k.a. fusion

 Political prediction

 Policy modeling

 Translation

 Deviation and anomaly detection

 Cyber-detection, attribution, and response

Now consider these capacities from the standpoint of the five intelligence tools (collection, analysis, covert action, counterintelligence, and liaison), as well as with respect to homeland security.

Collection

AI tools offer myriad additional collection capabilities or, in current vernacular, attack surfaces, based on the existing ubiquity of smartphones, CCTV, computers, and the IoT (Internet of Things). Media reports indicate that the Chinese government is using AI facial recognition algorithms to calculate social credit scores based on internet activity and infractions like jaywalking, and that there are now over 800,000 CCTV cameras in Beijing alone. The Chinese government also reportedly uses AI to track the movements and associations of its ethnic-minority, largely Muslim Uighur population outside the Uighur Autonomous Region.

Privacy is relative, but the use of AI for surveillance is not just an authoritarian phenomenon. “At least seventy-four … countries are engaging in AI-powered surveillance, including many liberal democracies.”1 According to the Government Accountability Office, “since 2011, the FBI has logged more than 390,000 facial-recognition searches of federal and local databases, including state DMV databases” with access to 641 million face photos. “The FBI has said its system is 86 percent accurate at finding the right person if a search is able to generate a list of fifty possible matches.”2 Here is what Chief Justice Roberts wrote in the Carpenter case (discussed in chapter 6), holding 127 days of cell tower data inadmissible in a criminal case without a search warrant:

A majority of this Court has already recognized that individuals have a reasonable expectation of privacy in the whole of their physical movements.… when the government tracks the location of a cell phone, it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.… Unlike with the GPS device in Jones, police need not even know in advance whether they want to follow a particular individual, or when.3

Now consider the IoT, the interconnection between electronic devices and sensors designed and connected, in theory, to make our lives easier (and to make it easier to monetize our data with advertising and sales). Domestic sensors are everywhere: remote door locks, alarms, and cameras; the television; the refrigerator; the printer; home assistants; and so on. All of this produces collectable data. Corporations and social media platforms know this, because they installed the sensors. Governments know this, too. Have you ever had the eerie experience of having a conversation with someone about the need to purchase a grocery or toiletry item only to find moments later that your iPhone has received a series of advertisements for such items? There is a reason that cell phones are not permitted in many government buildings. They can be used as passive listening devices, or receivers. They can also be used to track, either directly as a beacon or by pattern analysis of roaming cell tower signals. Most cell phone users know their cell phones are registering with cell towers as they move, but many people tend to forget or ignore the implications. Perhaps others simply accept the ubiquitous nature of cellular emanations, just as one necessarily accepts the ubiquitous nature of one’s image-capture on CCTV cameras in London and many other urban areas and malls. Because all this data can now be stored in the Cloud, it can also be stored indefinitely without overwhelming the capacity of privately owned hardware or business mainframes.

In short, AI algorithms and link analysis enable governments to aggregate and search information as never before on the internet, Dark Web, Dark Net, and the sensors of our everyday lives. The legal policy question is how much authority the government should have to collect and aggregate data, and subject to what right and left boundaries regarding its storage, use, and transfer.

Analysis

There is an intelligence maxim that there is either too much information to analyze or too little. So it seems. The latter is illustrated by the challenge of determining intent within the leadership circle of a closed authoritarian regime, such as North Korea’s or the Soviet Politburo, or one with a small footprint, such as a terrorist cell. The former is illustrated by just about everything else, but for the sake of brevity, consider the complexity of analyzing Open Source Intelligence from the internet and Dark Web. In short, AI allows analysts to derive meaning from data.

One might think that the dilemma of too much information is a recent phenomenon. The challenge is not new; the scale is. Sherman Kent, one of the architects of the intelligence analysis discipline, wrote in the 1950s about the volume of information potentially available to analyze and the necessity of knowing when and where to put the human in the informational loop. The volume of data is exponentially greater today. Users upload some 500 hours of video to YouTube every minute. “More than 2 billion people now visit the site at least once a month. It would take 100,000 years to watch it all at a single sitting.”4 For those with less time, it would take 951 years to view a single day’s uploads. Pick your metaphor; analysts talk about noise-to-signal ratios or finding a needle in a haystack.

Analysts used to measure the amount of data collected with reference to the number of Libraries of Congress. Today a common unit of measure is the petabyte, the equivalent of the holdings of seven Libraries of Congress. By the time this book is published, information may be routinely sorted into exabytes. That is a 1 followed by eighteen zeroes. Intelligence specialists have also spoken for years of the mosaic theory of intelligence and, after 9/11, the necessity of connecting the dots, the process of piecing together diffuse bits of information to create a greater whole for the purpose of informing, warning, and predicting. Data mining and AI algorithms offer a solution through link analysis.
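
To make the idea of link analysis concrete, the sketch below builds a small graph of invented entities (people, phone numbers, accounts) and asks whether, and through whom, two of them are connected. It is purely illustrative; the library (networkx), the entities, and the records are assumptions made for the example, not a description of any operational system.

```python
# A minimal "connecting the dots" sketch: link analysis over a graph of
# entities. All entities and records below are invented for illustration.
import networkx as nx

records = [
    ("person_A", "phone_1"), ("phone_1", "phone_2"), ("phone_2", "person_B"),
    ("person_B", "account_9"), ("account_9", "shell_company_X"),
    ("person_C", "phone_3"),  # unconnected to the others
]

G = nx.Graph()
G.add_edges_from(records)

# Is person_A linked to shell_company_X, and through which intermediaries?
if nx.has_path(G, "person_A", "shell_company_X"):
    print(nx.shortest_path(G, "person_A", "shell_company_X"))

# Which entities cluster together (the "mosaic")?
for component in nx.connected_components(G):
    print(sorted(component))
```

At scale, the same operations run over billions of records rather than six, which is precisely where machines outpace analysts.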

AI is an intelligence force multiplier. Properly constituted, algorithms can detect anomalous trade or travel patterns. If one were tracking sanctions enforcement and evasion, for example, algorithms can find, aggregate, sort, and identify anomalous patterns in the transfer of goods based on bills of lading, bank transfers, shipment weights, routes, and all the other data that lies beyond the capacity of human fingertips to collect and analyze in real time or near real time.5 Recall the earlier description of the gorilla in the crowd experiment. AI-enabled machines do not miss gorillas walking across the room, or out-of-place bills of lading. Likewise, AI algorithms can convert IoT data into pattern-of-life analysis, revealing one’s friends, place of worship, diet, schools, and time of entry to and from the home and the refrigerator.
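
A minimal sketch of the sanctions example might look like the following, using a standard anomaly-detection algorithm (an isolation forest) to flag a shipping record whose weight, value, and route depart from the norm. The records, features, and contamination setting are invented for illustration and are not drawn from the book’s sources.

```python
# A minimal anomaly-detection sketch in the spirit of the sanctions example:
# flag shipping records whose weight/value/route profile departs from the norm.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: declared weight (tons), declared value ($k), route length (days)
shipments = np.array([
    [12.0, 300, 14], [11.5, 290, 13], [12.3, 310, 14], [11.8, 305, 15],
    [12.1, 295, 14], [ 2.0, 900, 35],   # an out-of-pattern shipment
])

model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(shipments)   # -1 marks an anomaly, 1 marks normal

for row, label in zip(shipments, labels):
    if label == -1:
        print("flag for analyst review:", row)
```

The machine does not decide what the anomaly means; it surfaces the out-of-place bill of lading so the human can ask why it is there.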

Finally, AI can translate foreign print, broadcast, and social media instantly, giving analysts new access to open-source information. This task used to be performed laboriously “by hand” by the Foreign Broadcast Information Service (FBIS). AI is instant FBIS, but with limitations. One need only ask Siri or Alexa a question involving a foreign phrase, or ask it to track down a Jabberwocky word, to realize narrow AI has limitations with accented speech, double entendre, and children’s speech.

Covert Action

The capacity of AI to convert symbolic language (coded numbers) into natural language, along with its capacity to recognize and distinguish patterns, makes AI a tool of choice not only to identify voices, but to mimic voices and alter images. Moreover, this can be done with real-life precision with images or recordings known as deep fakes. As is often the case, the capacity found its first public manifestation with pornography and revenge pornography, with digital editors grafting one person’s face onto another person’s body. However, it does not take much imagination to realize this capacity has potential, perhaps already realized, to enable some of the traditional tools of covert action, in American parlance, or active measures, in Russian parlance, including disinformation, false flag operations, and propaganda. The Cold War press placement is today’s video feed. If one wants to discredit or blackmail an official, why go to the trouble of setting up a honey trap when one can “obtain” the same result with a virtual deep fake? What is more, deep fakes work against the incorruptible as well as the predisposed and susceptible.

AI also makes cyber-weapons more effective by helping to find zero days, enhancing the speed of response and counterresponse, and better disguising the attributable characteristics of the attacker. The February 2018 indictment of thirteen named Russian agents for interfering in the 2016 U.S. presidential election illustrates an aggressive use of cyber-instruments as covert tools. The Russian government operated a 24/7 bot farm, spreading false flag information not just about the presidential candidates but also seeking to suppress the vote in African American communities. The Russian efforts extended across social media platforms—Twitter, Instagram, Facebook. One Russian social media account had over a hundred thousand followers. The policy question is whether governments should establish norms against the use of deep fakes, as many state legislatures are now doing with deep fake pornography.

In a military context, lawful efforts to deceive the enemy are called ruses. The Trojan horse was a ruse. So were the subterfuges used by the Allies to keep the Germans guessing as to where the D-Day landings would occur. AI will allow military forces to engage in such ruses more effectively, perhaps dangerously so. AI might be used to disable an opponent’s air defense system, or perhaps turn it against an opponent’s own aircraft. AI can be used to mimic the voices of commanders, and realistically so. And AI can be used to mimic a nation’s leaders to sow confusion at home and undermine morale. All of which heightens the need for sound encryption and active counterintelligence. Moreover, as American political observers know, disinformation need not be clever or well crafted to sow confusion or leave the public uncertain as to what to believe or not to believe.

Counterintelligence

AI will have, and no doubt has had, two immediate counterintelligence (CI) impacts. First, as already noted, it can aggregate information and identify patterns in financial, physical, and digital behavior along with anomalies in that behavior indicative of insider threats. Consider how quickly your credit card company knows when your card is used out of pattern, whether you are traveling overseas or purchasing gas on a long-distance road trip. Such tools might help identify a Snowden or a Manning accessing information outside their normative responsibilities, or an Ames or a Hanssen spending money in new ways or beyond their apparent means. Would the government rely on AI alone to make these connections? One hopes not. The potential for false positives and spoofing is too great. But it is an immediate tool for the centaur to use to vindicate or corroborate. However, CI cuts both ways. AI may make it easier for an adversary to identify a case officer or an asset not careful with his or her own electronic footprint, fingerprint, facial print, or credit trail. Likewise, if AI enables counterintelligence, it also enables the internal police to track citizens more effectively within authoritarian states—counterintelligence of a different sort.

Second, it is a transformative technology. AI assets are an intelligence target presenting a CI challenge commensurate with their importance, but with a twist. Because the majority of AI R & D is academic and corporate, those laboratories are intelligence targets in ways they have not been before. Stated more pointedly, we can expect federally funded research and development centers (FFRDCs), university research centers, and corporations like Google and Facebook to become perpetual adversarial targets requiring new efforts to spot and counter technical and human penetration, all within cultures new to, if not resistant to, security and personnel safeguards.

Likewise, data used for machine learning and link analysis will take on added importance as an espionage target. Consider how the SF-86 data stolen from OPM could be used for machine learning. The ubiquitous Chinese effort to collect genomic data6 might serve a secondary purpose of providing data for machine learning. Nor should we be surprised when, without additional law or regulation, genomic data collected for one purpose, like family ancestry, is sold and makes its way into learning-enabled machines for further analysis and later pattern recognition.

One question for security specialists is how to deploy AI-enabled machines down to the tactical level in a manner that mitigates the risk of counterintelligence penetration or loss. Another question is where we, as a nation, should draw the left and right boundaries of data collection for CI and other purposes. More particularly, should the government collect (or purchase) private data for AI development and what responsibility and role should the government play in protecting data held in the private sector?

Liaison

Liaison is the intelligence term used to describe the sharing of information between nations and, in particular, intelligence services. Intelligence liaison is an expected activity between allies, as reflected in the so-called Five Eyes—an intelligence alliance between the United States, the United Kingdom, Canada, Australia, and New Zealand. However, intelligence liaison also occurs between like-minded services or momentarily like-minded services, for example, two generally hostile services sharing information on a common adversary or need. Liaison is an essential intelligence tool and multiplier, because it can provide access to information not otherwise available to a party based on location, access, or means.

Liaison can also create risk and controversy in at least four ways. First, information is rarely shared out of good will alone. Horse trading can be part of the process, especially outside of routinized liaison arrangements. Second, intelligence liaison does not always align with the stated and overt values of the governments involved. Especially when dealing with unsavory governments, liaison requires careful assessment of reputational and diplomatic risk weighed against security benefit. It also requires an assessment as to how U.S. information will be used, including whether it will be used in violation of U.S. laws. Third, because liaison is an intelligence activity, in U.S. practice it receives less policy appraisal and legal oversight. (Depending on one’s perspective, this can be a good or bad thing.) There have been notable instances when one part of the government has been engaged in diplomatic condemnation at the same time the intelligence arm is engaged in liaison, presenting, at best, mixed public perceptions of government intent and purpose. For example, within months of Secretary of State Colin Powell declaring the actions of the Sudanese government in Darfur to be genocide, the CIA director was meeting with his Sudanese counterpart in Langley, surprising many policymakers and diluting the U.S. message.7

Finally, information provided through liaison channels is harder to validate and confirm, because the service providing information is often hesitant, if not opposed, to identifying its sources and methods or subjecting them to third-party validation. This risk is illustrated by the aptly code-named “Curveball,” a German intelligence asset who provided erroneous (perhaps intentionally false) information on Iraqi weapons of mass destruction (WMD) prior to the U.S. invasion of Iraq in 2003.8

How might AI affect liaison? First, it may add pressure on value-matching as authoritarian regimes seek data about their citizens or their citizens’ movements and communications overseas. Second, validating liaison information—for example, the identity of a person placed on a watch list or information on a terrorist target list—will be more difficult, if not impossible, where the information is based on AI input without access to the underlying data and algorithms concerned. Third, AI will increase the intelligence advantages of states that already enjoy an advantage in technical means of collection or data set access. Thus, it will also increase their value as potential liaison partners.

Homeland Security

In no area is AI more likely to have immediate intelligence impact than with homeland security. Narrow AI is well suited to many core homeland security tasks, including cybersecurity, public health, border security, and counterterrorism. That is because narrow AI is especially suited to detecting anomalous travel, unseen connections, public health warnings, patterns, and indicators, as well as to using facial recognition to find specific individuals. However, with promise comes challenge. AI can help generate what Chief Justice Roberts referred to as “near perfect surveillance,” which may be helpful for contact tracing during a pandemic, but aggravates concerns about AI and privacy and places new stress on old legal doctrines. As described in chapter 6, the domestic use of AI presents First, Fourth, Fifth, and Sixth Amendment issues, including those involving algorithmic bias, distinguishing between U.S. persons and other persons, and data collection, use, and retention.

MILITARY APPLICATIONS

Nowhere is AI more likely to transform security than in the area of military planning, operations, and weapons design and employment.9 A 2017 Belfer Center study prepared for the Intelligence Advanced Research Projects Activity (IARPA) concluded that AI is likely to be as transformative a military technology as aviation and nuclear weapons were before it. The Department of Defense agrees. The department has made AI a centerpiece of its innovation strategy. DOD has identified a number of areas where AI “has massive potential,” including command and control (C2), navigation, perception, obstacle detection, and swarm behavior and tactics.10 In June 2018, bureaucracy followed concept as the department established a Joint Artificial Intelligence Center (JAIC) to facilitate and coordinate the integration of AI across DOD. The National Security Commission on Artificial Intelligence’s interim report states that “a recent estimate suggested there are over six hundred active AI projects across DOD.”11

The Department of Defense is not alone in considering AI a military game-changer. A 2015 open letter from AI researchers states, “the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”12 Non-state actors are known to have sought, and in a few instances used, unmanned aerial vehicles as weapons. This is an obvious path.13 The question is what role AI will play in increasing the efficacy of such weapons and expanding their use.

Commentators focus on the potential of AI to enable lethal autonomous weapons systems (LAWS, also known as AWS). The Russian military, for example, is testing a robot named FEDOR, which can fire weapons and carry heavy loads. Defense Department vernacular generally refers to LAWS, but also, with almost as much frequency, to RAS—robotic and autonomous systems. Much of the research is intended to make existing weapons platforms better, like fire support systems for tanks and electronic warfare modules for aircraft. Militaries have used some form of AWS for years, such as the Aegis ship defense system, the Phalanx, and the Counter-Rocket, Artillery, and Mortar (C-RAM) system.14 If a heat-seeking (infrared-homing) missile is an autonomous weapon once fired, then such weapons are nothing new: the Sidewinder air-to-air missile has been deployed with U.S. aircraft since 1956. So what is new? What is transformative?

A number of militaries, including the U.S. military, are experimenting with AI-enabled “swarms.” This is not a secret. The TV show 60 Minutes did a segment titled “The Coming Swarm” in August 2017. What is secret is the trajectory of progress, capacity, date of deployment, and potential doctrinal uses. Swarms can be composed of unmanned formations of aerial, vehicular, maritime, or submarine platforms—or, in current vernacular, unmanned aerial vehicles (UAV), unmanned ground vehicles (UGV), unmanned maritime vehicles (UMV), and unmanned underwater vehicles (UUV). Swarms thus illustrate how robotics, autonomy, and AI come together to create new capabilities. Swarms can also be programmed to work in coordination with, or independent of, the command of human operators in manned vehicles or operating remotely. (Recall from the previous chapter the use of AI to play capture-the-flag.)
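
The coordination itself can emerge from surprisingly simple rules. The sketch below is a classic flocking (“boids”) simulation: each agent adjusts its velocity using only cohesion, alignment, and separation relative to the others, and coordinated swarm behavior results. The parameters and two-dimensional setting are illustrative assumptions, not a description of any fielded system.

```python
# A minimal flocking ("boids") sketch: simple local rules -- cohesion,
# alignment, separation -- produce coordinated swarm behavior.
import numpy as np

rng = np.random.default_rng(0)
N = 50
pos = rng.uniform(0, 100, size=(N, 2))   # positions of N agents
vel = rng.uniform(-1, 1, size=(N, 2))    # velocities

def step(pos, vel, dt=1.0):
    # Cohesion: accelerate gently toward the swarm's center of mass.
    cohesion = (pos.mean(axis=0) - pos) * 0.01
    # Alignment: nudge each velocity toward the mean heading.
    alignment = (vel.mean(axis=0) - vel) * 0.05
    # Separation: push away from agents closer than a minimum distance.
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    too_close = (dist < 5.0) & (dist > 0)
    separation = (diff / dist[..., None] * too_close[..., None]).sum(axis=1) * 0.1
    vel = vel + (cohesion + alignment + separation) * dt
    # Cap speed so the swarm stays stable.
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > 2.0, vel / speed * 2.0, vel)
    return pos + vel * dt, vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("swarm spread:", pos.std(axis=0))
```

The point is not the particular rules but the design choice they illustrate: no central controller directs each platform; coordination is a property of the code each platform carries.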

Imagine chaff fired from the side of an aircraft designed to fool incoming missiles. But this chaff is not composed of metal fragments. It consists of AI-enabled pods that can maneuver around incoming missiles before deciding whether to attack the missile(s), lead them into the ground, or perhaps direct them back to their point of origin. This is based on AI, because the system depends on instantaneous sensory input, calculation, and adjustments too complex and too rapid for a human to make in the moment, let alone a human sitting in the cockpit of an aircraft contending with the pressures of combat. (Think flying driverless cars.)

Now switch from defense to offense and imagine the capability of a swarm—hundreds, perhaps thousands of flying objects called birds, projectiles, or robots—attacking naval vessels or airfields, like the kamikazes at Okinawa but in sync and without the moral and supply chain challenges of recruiting and expending pilots. AI also offers offset potential to U.S. adversaries, who may see in swarm technology and LAWS an inexpensive way to neutralize America’s superiority, if not dominance, in surface warfare capacity. It is possible as well that AI not only serves to offset U.S. advantages, but to reset the calculus of naval and aerial warfare in the same way the advent of the Dreadnought class battleship in 1906 made existing battleships obsolete and thus effectively zeroed out Britain’s numeric battleship advantage over Germany.15

Swarms are not the only offensive and defensive weapons uses for AI. The Defense Department Unmanned Systems Integrated Roadmap describes AI-enabled systems as the perfect wingman:

Unmanned Systems with integrated AI, acting as a wingman or teammate, with lethal armament could perform the vast majority of the actions associated with target identification, tracking, threat prioritization, and post-attack assessment while tracking the position and ensuring the safety of blue-force assets—minimizing the risk to its human teammates.16

Further, whatever is occurring in the cyber-domain will be “enhanced” by AI. AI can be used, if it is not already in use, both to enable cyber-combat and to cloak attribution. If AI can be used to run stock market trading platforms that automatically buy and sell based on finite differences in price, AI can be used to increase the speed and sophistication of cyber-offense and defense. It can also be used to spoof such attacks or mask attribution.

Here definitions are important, as are distinctions between what is autonomous, what is automated, and what is used to augment human capacity. In current DOD vernacular:

An automated system is one that automatically responds or acts without human decision or input.

Autonomy is defined as the ability of any entity to independently develop and select among different courses of action to achieve goals based on the entity’s knowledge and understanding of the world, itself, and the situation.17

An autonomous system is one that can operate on its own, but not necessarily without human input or direction.

Augmentation, in turn, is the process by which a human and autonomous system work together, with the autonomous system augmenting human capacity.18

A 2018 report written by the UN Institute for Disarmament Research makes an additional and important definitional distinction. “Intelligence is a system’s ability to determine the best course of action to achieve its goals. Autonomy is the freedom a system has in accomplishing its goals.”19 In short, AI can enable weapons and weapons systems to identify and engage targets in all domains rapidly, continuously, simultaneously, and sequentially, calculating distances, angles, numbers, and response choices in ways humans could not, or could not do quickly enough.

One question is whether such systems will be, or should be, empowered to do so autonomously, without affirmative human activation, choice, and decision, based on programming and sensors alone. The United States initially took the position that with lethal autonomous systems “we will always have a human being in the loop.”20 Defense Directive 3000.09 now refers in more opaque fashion to the “exercise of appropriate levels of human judgment,” a less precise formulation. “In the loop” generally means that an autonomous system is programmed or designed to perform its task, such as firing a weapon, only upon human direction or command. Human “on the loop” generally refers to a system where a human is supervising the machine’s use, for example, the targeting process, and can intervene at any time during the cycle. Human “out of the loop” means the system is free to operate, for example, select and engage targets, without subsequent human decision, supervision, or intervention. The terms deceive, as a human is involved in writing the code and designing the system in the first place, whether it is ultimately described as one with a human in, on, or out of the loop. Moreover, what it means to “supervise” a system on the loop may vary widely, depending on the nature and speed of engagement.
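
As a thought aid, the distinctions can be expressed as three oversight modes governing whether a machine recommendation may proceed. The sketch below is illustrative only; the names, functions, and logic are assumptions made for the example and do not reflect Defense Directive 3000.09 or any actual system.

```python
# A minimal sketch of "in the loop," "on the loop," and "out of the loop"
# as distinct oversight modes in a hypothetical engagement workflow.
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human must approve each engagement"
    ON_THE_LOOP = "human supervises and may veto"
    OUT_OF_LOOP = "system acts on sensors and programming alone"

def authorize(engagement, mode, human_approved=None, human_veto=False):
    """Return True if the engagement may proceed under the given mode."""
    if mode is Oversight.IN_THE_LOOP:
        return human_approved is True        # no action without a positive decision
    if mode is Oversight.ON_THE_LOOP:
        return not human_veto                # proceeds unless the supervisor intervenes
    return True                              # out of the loop: code and sensors decide

# The same machine recommendation under each oversight regime.
rec = {"target": "hostile emitter", "confidence": 0.97}
print(authorize(rec, Oversight.IN_THE_LOOP))                   # False: awaiting approval
print(authorize(rec, Oversight.ON_THE_LOOP, human_veto=True))  # False: vetoed
print(authorize(rec, Oversight.OUT_OF_LOOP))                   # True
```

Even in this toy form the author’s caveat holds: a human wrote the rule for every mode, including the “out of the loop” branch.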

According to the 2018 DOD Roadmap report, “DOD does not currently have an autonomous weapon system that can search for, identify, track, select, and engage targets independent of a human operator’s input.”21 Of course, this sentence is rife with ambiguity, as one does not know whether the caveat derives from a singular verb (identify, track, select …) or whether DOD intends to have such a fully autonomous system. However, we should anticipate that potential opponents may seek such a system, which is one reason why the United States has reserved the right to respond in automated fashion to an opponent’s use of automated AI as a weapon.

A second question involving autonomous systems is who should be responsible (and held to account) for what the software does or does not do. We know from Stuxnet, the malware discovered in 2010 that attacked and destroyed centrifuges in Iran’s Natanz nuclear enrichment facility, how a cyber-weapon might be employed, what it can accomplish, and how it might jump the rails, as well as how it might be repurposed and used by others, even when originally designed and intended to remain air-gapped. What, then, might fully autonomous weapons do and how, if at all, should the law seek to cabin that potential?

AI will also make it easier to test weapons. As discussed in part II, the law of armed conflict requires that new weapons, as well as the means and methods of warfare, be tested for compliance with the law of armed conflict prior to deployment. AI capacities can be used to model this activity, where actual testing is unfeasible, unreliable, or incomplete. Nuclear weapons, which are already tested through modeling, are an obvious example. But so are cyber-weapons and swarms. Just as pilots train through simulation, AI can simulate variables that will affect weapon performance.

While much of the public attention is on LAWS, many of AI’s capabilities are intended to enable and augment logistical, administrative, intelligence, and decisionmaking capacities. AI is a military force multiplier in at least six interlocking ways. First, it can enable machines, whether or not shaped to look like animate robots, to perform inherently dangerous tasks, like bomb detection and disposal. Depending on the capability and the task, it may perform these tasks better than humans, and certainly more safely (for the operating humans). Second, just as an AI-enabled economy may automate repetitive tasks performed by humans, AI may have the same effect on military personnel requirements. AI-enabled machines, for example, may eliminate or reduce the need for personnel to provide meal and laundry services, along with some of the costs associated with these tasks, like health and retirement benefits. (However, not all AI applications are manpower-neutral or -reductive. In the U.S. Air Force, it takes about ten people to operate one large UAV.22 Put another way, as reported in the Washington Post, “it takes up to four drones to provide 24-hour coverage for a single combat air patrol. Although the aircraft are unmanned, they require lots of personnel to fly them by remote control and provide support on the ground—about 400 to 500 people for each combat air patrol.”23)

Third, AI may provide for capacity that does not already exist or exists at scale at the tactical level, such as an intelligence capability to fuse data or translation. This could offer significant advantage in counterterrorism and counterinsurgency contexts, where the support, or at least neutrality, of local populations is essential and miscommunication disastrous.

Fourth, AI-enabled systems can perform tasks not only faster than humans, but near-instantly, based on their capacity to compute, sort, and structure. Thus, algorithms can model the best methods to transport and deliver logistics, considering weather, fuel, urgency, and any other factors that might take an inordinate amount of time for humans to calculate. Consider how a Waze or GPS navigational system could assist logisticians in planning delivery routes and medical evacuation. Even better, imagine planning D-Day or the 1990 Desert Shield deployment with algorithms that can identify to the second optimum airlift and sealift schedules to address needs and contingencies, and instantly adjust these schedules every time the weather shifts, or an ally adjusts its end force commitments. The Defense Department describes how AI will also enable logistics delivery: “Elevated levels of autonomy in unmanned systems will allow for leader-follower capabilities, where trailing semiautonomous vehicles follow a designated vehicle in logistics convoy operations.”24
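
A toy version of that replanning problem can be written as a shortest-path computation over a supply network whose edge costs blend transit time with a weather penalty, recomputed whenever conditions change. The network, nodes, costs, and penalties below are invented for illustration, not drawn from any actual planning system.

```python
# A minimal route-replanning sketch: shortest path over a supply network,
# recomputed when weather changes an edge's cost. All values are invented.
import networkx as nx

def build_network(weather_penalty):
    G = nx.DiGraph()
    # (from, to, base transit hours)
    legs = [("depot", "port", 6), ("port", "fwd_base", 8),
            ("depot", "airfield", 5), ("airfield", "fwd_base", 10)]
    for u, v, hours in legs:
        G.add_edge(u, v, weight=hours + weather_penalty.get((u, v), 0))
    return G

# Plan, then instantly replan when a storm degrades the sea leg.
scenarios = [("clear", {}), ("storm over sea lane", {("port", "fwd_base"): 12})]
for label, penalty in scenarios:
    G = build_network(penalty)
    route = nx.shortest_path(G, "depot", "fwd_base", weight="weight")
    hours = nx.shortest_path_length(G, "depot", "fwd_base", weight="weight")
    print(label, route, hours)
```

In clear weather the sea route wins; when the storm penalty is applied, the plan shifts to the airfield leg. Scaled up to thousands of legs and constraints, this is the replanning that would take human planners days.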

Fifth, AI, in many cases, is already better than humans at sorting vast amounts of information, characterizing that information, linking that information, and making predictions based on that information. In other words, it can bring to the military decisionmaker instantaneous sources of intelligence and intelligence analysis, while also spotting anomalies and patterns predictive of risk or attack. This also allows AI applications to enable realistic simulated training for pilots and other military actors. It can also be used in war games and exercises. If an AI-enabled computer can play Go or chess, it can simulate a military opponent on a tactical or strategic level.

Sixth, AI-optimized machines are less prone to types of human error brought on by fear or fatigue. Consider the targeting process, where AI’s impact on intelligence and weapons comes together. AI software can truncate hours of surveillance video from a UAV feed into minutes, ensuring key facts and events are observed. Combine this with AI-enabled sensors and pattern recognition, and one sees one purpose of DOD Project Maven: “computer vision … that autonomously extracts objects of interest from moving or still imagery.”25 Such a processing system can eliminate the sorts of human mistakes that come with fatigue and repetition. It can also mitigate the cognitive tendency to focus on the mission and immediate objective—for example, target the enemy—with the unintended consequence of excluding other variables from view—such as the collateral behavior around a target.
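
The video-triage idea can be sketched in a few lines: run a detector over each frame and keep only the time segments in which something of interest appears. The detector below (`detect_objects`) is a hypothetical stand-in for a trained computer-vision model, as are the frame rate and confidence threshold; the point is the triage logic, not any particular model or program.

```python
# A minimal video-triage sketch: reduce hours of footage to the segments
# containing detections. `detect_objects` is a placeholder for a real model.
from typing import Callable, List, Tuple

def triage(frames: List, detect_objects: Callable, fps: int = 30,
           threshold: float = 0.8) -> List[Tuple[float, float]]:
    """Return (start_sec, end_sec) segments where detections exceed threshold."""
    segments, start = [], None
    for i, frame in enumerate(frames):
        hit = any(score >= threshold for _, score in detect_objects(frame))
        if hit and start is None:
            start = i / fps                    # open a segment of interest
        elif not hit and start is not None:
            segments.append((start, i / fps))  # close it when detections stop
            start = None
    if start is not None:
        segments.append((start, len(frames) / fps))
    return segments

# Usage with a stand-in detector that "sees" a vehicle only in frames 100-160.
fake_detector = lambda f: [("vehicle", 0.9)] if 100 <= f <= 160 else []
print(triage(list(range(300)), fake_detector))   # roughly seconds 3.3 to 5.4
```

The machine never tires of frame 40,000; the analyst reviews only the minutes that matter.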

In conclusion, if you want to know how militaries might use AI beyond weapons systems, imagine how an AI-enabled machine might address five of the factors identified above—risk, repetition, fear, fatigue, and speed—if a machine can be “taught” to perform the task in question. That is AI. Here the benefit derives not from the ability of the machine to act like a human or with human intelligence, but expressly from the fact that the machine does not do so. In this sense, AI is the perfect sentinel or wingman, one that does not fall asleep on post, talk, or show fear. AI is the sentry that actually observes the Second General Order: “To walk my post in a military manner, keeping always alert, and observing everything that takes place within sight or hearing.”

TAKEAWAYS

AI will transform intelligence and military systems and capabilities. It will also change the nature of the national security toolbox. AI will influence national security decisionmaking in multiple ways. First, if more intelligence, and more accurate intelligence, helps policymakers better predict threats and thus better deter threats before they materialize, then AI’s capacity to identify, fuse, and connect intelligence streams is a positive development, as long as this capability is used and used wisely. Likewise, any mechanism that can more accurately and rapidly distinguish between real and fake information, and between noise and signals, should contribute to better decisions. In theory, intelligence also contributes to stability by reducing the risk of miscalculation and misperception, at least where the intelligence is accurate and understood. Of course, more intelligence also means more noise, and thus a necessity to adjust any intelligence process to account for this challenge.

If used effectively, AI will also help decisionmakers model and predict potential policy outcomes, just as Deep Blue rapidly modeled Garry Kasparov’s potential chess moves and countermoves. AI will allow Red Team and Blue Team testing of policy proposals in real time. But these are potential benefits. They only become actual benefits if the policy process is changed or adjusted to effectively provide this input at the tactical, agency, and national level.

Policymakers, technologists, and lawyers should consider the following points and questions. The sooner the better.

1 What law and process should apply to the collection of data from the IoT and other sources for intelligence purposes? What are the left and right boundaries of conduct? If the law is different at home and abroad, how is it different? Should the U.S. government collect foreign data overtly and clandestinely as the Chinese government does for the purpose of developing and training AI applications? What law and policy should apply to the collection, storage, retention, and use of data generally?

2 AI is brittle with respect to its own situational awareness, but nimble in identifying leads and links involving people in public places and subjects such as sanctions enforcement and proliferation. One intelligence process question is when to rely on AI results outright, when to use them to augment human judgment, and when to ignore them altogether. Is the government relying on AI for these purposes? If so, in accordance with what standards? What role does AI and what role do humans play in analyzing and using AI outputs? What process exists to validate the outputs after the fact?

3 CI is as important as AI. AI systems offer multiple attack surfaces and weakest link vulnerabilities to penetration and co-option. Policymakers must spend as much time on CI as they do on AI, and triple whatever time is being spent now.

4 Many states are legislating to prohibit the use of deep fakes. Should the Congress do so as well? Should it prohibit deep fakes using certain images, for example, those of public officials, and what are the First Amendment implications of doing so? What policy limits, if any, should the government place on the use of deep fakes for intelligence purposes? Should the government seek to establish domestic or international redlines now?

5 Does military doctrine exist for using AI-enabled systems, including in LAWS? How is human-in-the-loop, on-the-loop, and out-of-the-loop defined and implemented with respect to each application? In the absence of general principles of responsibility and accountability, is a specific official designated as responsible and accountable for each application?

6 How has the government defined “reliability” in the context of code, algorithms, and programs? Is that definition understood and accepted at the policy, legal, and technical level? Should the definition vary depending on the application?

7 Does the government have an independent process, such as that found in Civil Liberties and IG offices, for validating the accuracy and reliability of AI applications? Testing for bias? Do the relevant staff have the technical, policy, and legal skills to perform these functions and an appropriate level of access?

In short, is government process nimble enough and “ready enough” to keep pace with AI developments in law and doctrine; in how it contracts; in how it recruits and retains personnel; and in how it uses AI to inform decisions? This book is intended to help policymakers address these questions.
