Striking Power, by John Yoo
Economists call it “creative destruction.”1 Robots are replacing factory workers. Online news sites are displacing newspapers. Passengers are abandoning taxis and summoning part-time drivers with cell phones. Household appliances and security systems are operating on home networks.
New technologies are having an impact beyond the workplace and household. Presidents George W. Bush, Barack Obama, and Donald J. Trump, for example, have ordered robots to kill individuals with precision-guided missiles from the sky. Unmanned aerial vehicles (UAVs) are leading the way for even greater technological innovations in war. The same high-speed computer systems can accelerate financial markets or disrupt national economies. Robotics and precision mapping can automate transportation, even passenger cars. They can also control pilotless aircraft that strike specific buildings or individuals. The same technologies that can assemble and deliver a book, a piece of furniture, or a sophisticated appliance to a customer within days are also enhancing military “productivity,” which means fewer soldiers can kill or incapacitate more of the enemy at lower cost.
Technologies often transcend their original purpose. The cell phone initially freed people to make voice calls without the physical tether of telephone wires. Engineers next added cameras and data communications to the handheld phone. Users could now record and send pictures of controversial police actions, repressive crowd-control measures, or riots. Phones can now distribute these pictures to millions of strangers, before a journalist on the scene could write an eyewitness account. Users can also receive, as well as transmit, a stream of text, data, and information that is rearranging social relationships, consumer activity, travel, and entertainment. A world that is wired allows a vastly wider and more consequential range of communication than telephone calls.
So it is with war. Instead of ending armed conflict, technological advances have expanded it. World War II came to an abrupt end shortly after the United States dropped two atomic bombs on Japan. Many concluded that science had created a weapon so devastating, rational statecraft could never use war as a tool again. “Military alliances, balances of power, Leagues of Nations, all in turn failed, leaving the only path to be by way of the crucible of war. The utter destructiveness of war now blocks out this alternative,” said even General Douglas MacArthur, no pacifist he, on the deck of the USS Missouri during the Japanese surrender. “We have had our last chance. If we will not devise some greater and more equitable system, Armageddon will be at our door.”2 Surely the United Nations would ensure that nations never again looked to settle their differences by resorting to war. It was not to be. Responding to those who hoped that the end of monarchy spelled the end of tyranny, Edmund Burke warned: “Wickedness is a little more inventive.”3 So it has proven in the decades after 1945. The major powers have not waged an all-out conflict, thanks, perhaps, to the very awfulness of the nuclear weapons that ended the last one. But, in the meantime, smaller armed conflicts and civil wars have together taken millions of lives.4
During the Cold War, many of these conflicts were viewed as “proxy wars.” In the 1950s, the United States led an international action against North Korea’s invasion of South Korea, because the Soviet Union and then Communist China supported Pyongyang. Starting in the early 1960s, the United States began committing troops to defend South Vietnam from North Vietnamese infiltration on the same theory. In the 1980s, the United States supported Afghan guerillas resisting the Soviet-backed government. Proxy wars allowed the great powers to continue their competition, but at less risk of nuclear war.
Even as the Cold War thawed, conflicts continued to break out. In 1991, the United States and its allies mobilized 600,000 troops to drive Saddam Hussein’s forces from Kuwait. By 2003, another American-led coalition toppled Saddam’s regime in Baghdad with a little over a third of that force. In 2001, an even smaller fraction of that force, working with local insurgents, removed the Taliban from power in Afghanistan. In 2011, the United States, Britain, and France helped overthrow Libyan dictator Muammar Gaddafi without any ground troops at all, simply by providing focused air support to Libyan rebel forces. This was the same strategy that NATO had used a decade earlier, when it ran an intense bombing campaign to stop Serbian dictator Slobodan Milosevic’s ethnic cleansing in Kosovo.
Most of these interventions did not produce permanent peace. Air attacks cannot control territory. Yet nations may still want to deploy force, whether for self-defense, to defend allies, to prevent human rights catastrophes, or to gain advantage. After almost two decades of inconclusive war in the Middle East, however, pessimists say that western states confront a choice between committing massive ground forces or standing on the sidelines. Smaller conventional forces have met with frustration in achieving strategic aims.
New technologies promise an alternative. Robotics, cyber, and space weapons can reduce the size of ground forces needed to wage war. They can withdraw human soldiers from the battlefield while making attacks more precise and deadly. They can allow nations to coerce each other without inflicting the same level of casualties and destruction as in the past. They can reach far beyond borders to pick out terrorists or selectively destroy WMD sites. They can reduce the costs that discourage western nations from stopping humanitarian disasters or civil wars. While armed conflict will continue as a feature of the human condition, it might now come at lower cost, for a shorter time, and with less violence.
Some critics do not share this optimism. They fear that because these new technologies will reduce the costs of military intervention, force will become a more attractive option in international relations. Philip Alston, a United Nations special human rights expert, argues against drones because “they make it easier to kill without risk to a State’s forces.”5 U.S. practice may further violate international law because it uses robotic weapons to attack terrorists off of any recognized battlefield, which Alston believes is tantamount to killing civilians in peacetime. Even if this analysis is correct, it is no reason to reject new technologies. Nations that are able to deploy advanced technologies will not see the virtue in risking the lives of more of their troops as an alternative. Nations are unlikely to agree to treaties to limit these technologies until they are more certain of their impact on war and the balance of power. Moreover, these new methods of warfare may serve wider humanitarian concerns that are more significant than the legality of killing off-battlefield terrorists. Because drone strikes and cyber attacks can strike with more precision, they reduce death and destruction among civilians and even among combatants. If nations can use advanced technology to disrupt the financial or transportation networks of their rivals, they may achieve the goal of war—coercion of the enemy—with far less bloodshed than a focus only on military targets.
Meanwhile, new capacities may actually lead to less destructive wars by giving nations more options to resolve their disputes, or, better yet, more information that prevents conflicts from occurring in the first place. Armed conflict often results from miscalculation. Sometimes, aggressors doubt the resolve of potential opponents to commit force against them. Saddam Hussein, for example, seems to have assumed his seizure of neighboring Kuwait would trigger no serious opposition.6 States may also resort to force because they do not trust the resolve of potential allies to protect them. In part, Israel launched its preemptive war on its Arab neighbors in 1967 for this reason. Robotic and cyber weapons provide nations with signals to convey information about their resolve or their trustworthiness. Reducing uncertainty in war will help nations to negotiate their differences with less need for armed conflict. New weapons offer more opportunity to reach settlements with less death and destruction.
In this chapter, we will briefly describe the military revolution in technology and its benefits. We will describe the current framework of the laws of war and its refusal to accommodate new forms of combat. History shows that technological improvements produce advances in warfare just as they bring economic development. Law has proven ill-equipped to slow military progress until well after weapons are first used and better understood. We conclude by explaining that the security demands of the twenty-first century will create even more demand for the deployment of new military technologies, which can help respond to threats to international stability with reduced costs and harms. Those who would prohibit or limit new weapons may well encourage conflict that is far more brutal and destructive.
The Revolution in Military Affairs
Unmanned Predator and Reaper drones rove the skies above the Middle East and Africa. They hover over a target for days and launch Hellfire missiles on a moment’s notice. Robots on the battlefield below breach doors in house-to-house searches and detonate improvised explosive devices commonly used by terrorists and guerrillas. UAVs take off and land on aircraft carriers while others perform reconnaissance and strike missions. Future advances will bring armed sentry robots, autonomous armored vehicles, and automatic missile and artillery fire. Soon, unmanned surface vessels may deploy on the high seas and close to shore, and unmanned undersea vehicles beneath the waves.
Combat is not just moving toward the robotic, it is also becoming ethereal. During its 2008 Georgia incursion, Russia became the first nation to deploy cyber attacks on enemy command, control, and communications systems to augment a ground invasion.7 To delay the Iranian nuclear program, the United States and Israel allegedly launched the Stuxnet virus to damage centrifuges engaged in uranium enrichment.8 China has stolen large databases of U.S. government personnel information in addition to penetrating the networks of U.S. defense contractors, airlines, and technology companies.9 Russia has allegedly hacked into databases and email systems of the U.S. Departments of Defense and State, as well as those of the Democratic National Committee and the 2016 campaign of presidential candidate Hillary Clinton.10
These examples illustrate the dramatic advances in weapons technology over the last two decades, which observers sometimes refer to as the “revolution in military affairs.”11 The United States now fields thousands of UAVs for both reconnaissance and attack. Armed with stealth technology, these robots gather intelligence around the clock and launch immediate attacks in trouble spots around the world. In the future, the most advanced ground- and sea-based armed forces will employ remote-controlled units, such as sentries, light armor, and littoral naval vessels. Advances in missile technology and precision targeting will allow the United States to field a conventional global-strike capability that can hit any target in the world within an hour. Some experts even predict that autonomous weapons systems will soon be able to act free of direct human control.12
Some hope the revolution in military affairs will reduce the destruction of war. A nation will place fewer soldiers in harm’s way when remote-controlled combatants are available. Precision-guided weapons, directed by clearer real-time intelligence, will inflict less death and destruction on soldiers and military assets. With drones available, for example, nations will no longer need to resort to World War II- or Vietnam-era bombing runs to destroy arms factories or oil installations. Precision-strike technology may also shorten war by targeting an opponent’s leadership and strategic vulnerabilities, as the U.S. did in the 1991 Persian Gulf War and the 2003 Iraq invasion. Future technology could also reduce harm to civilians—one of the central aims of the law of war—by tightly concentrating the use of force on its intended targets.
Critics, however, worry that advances in weapons could increase conflict by making war easier to begin. If a nation can simply press a button and destroy a target without risking its own personnel, it will choose a military response more often. United Nations officials give voice to these growing worries. “The expansive use of armed drones by the first States to acquire them, if not challenged, can do structural damage to the cornerstones of international security and set precedents that undermine the protection of life across the globe in the longer term,” declares Christof Heyns, the U.N.’s Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions.13 States can use drones and other technology to launch attacks far from conventional battlefields in ways that escape immediate detection, and perhaps even responsibility. Ultimately, pinpoint strikes will continue to blur any clear line between war and peace.
Whether outside observers applaud or deplore it, technology has driven an evolution of “war” from a clash of national armies on a battlefield to its current, multifaceted, and decentralized forms of conflict. Technology has played a role both in the rise of non-state actors and in helping states formulate responses to these non-conventional security threats. The September 11, 2001 attacks and the evolving sophistication of the Islamic State of Iraq and Syria (ISIS) show that states no longer have a monopoly on armed conflict. These groups have used modern communications, transportation, financial, and social networks to operate across state borders and carry out attacks in the home cities of the West, from Paris and Nice to San Bernardino and Boston. In formulating strategic responses, states have also relied on advanced technology, using high-tech surveillance and strike systems to gain better intelligence and effect precision attacks without the need for large conventional forces. In contrast to the wars of the twentieth century, which concentrated highly destructive forces on discrete battlefields, technology is now dispersing less acutely destructive forces over a broader span. Though technology has contributed to the reach of non-state terrorist groups, it can also assist nations in fighting them.
Cyber warfare, which is even easier to begin and more difficult to prevent, presents yet another form of unconventional conflict driven by technology. Internet attacks can cause real-world destruction and harm, or they can simply interfere with another nation’s communications, financial, or information networks. A cyber attack, for example, could cause a flood by disabling the control mechanisms for a dam or could trigger an explosion by causing a power plant to malfunction. As Russia demonstrated in its invasions of Georgia in 2008 and Ukraine in 2014, nations can also use cyber weapons to support a conventional armed attack.14 Cyber weapons can replace conventional weapons to commit sabotage, as was done through the Stuxnet virus aimed at the Iranian nuclear program. Or governments can use the Internet to steal significant military or intelligence information, such as weapons designs or strategic plans, which appears to be occurring with increasing frequency between the United States and China. “China is using its cyber capabilities to support intelligence collection against the U.S. diplomatic, economic, and defense industrial base sectors that support U.S. national defense programs,” the U.S. Defense Department stated in a 2016 report to Congress.15 “The accesses and skills required for these intrusions are similar to those necessary to conduct cyberattacks.”
In this way, robotics and cyber weapons can exert force that does not necessarily kill or destroy tangible objects, but nonetheless is overtly hostile. Governments and scholars are not always clear about when such attacks meet the legal standards for an armed attack. For example, the new United States Law of War Manual, issued by the Department of Defense in 2015, declares that the existing laws of war should apply to what it calls “cyber operations.”16 But it then concedes that the rules here are “not well-settled” and are “likely to continue to develop.” The United States even takes the position that it may not have a position. The Manual declares that it does not “preclude the [Defense] Department from subsequently changing its interpretation of the law.”17
Indeed, the uncertainty as to how to classify these attacks has played itself out in nations’ inconsistent responses to acts of robotic and cyber warfare. States have sometimes treated them as a form of espionage or covert action, refusing to consider the resulting damage as an act of war. China’s theft of the U.S. Office of Personnel Management database did not prompt U.S. force in response, nor did North Korea’s hacking of Sony’s electronic files. Iran took no overt military action in response to the Stuxnet virus. American drones execute dozens of strikes in countries—such as Yemen, Somalia, Afghanistan, and Pakistan—without sparking any military reaction. And yet, it seems clear that an armed response to such attacks would be appropriate in at least some contexts. Cyber attacks that disable key military command structures or critical civilian networks might very well be regarded as acts of war, along with attacks by robots and drones that kill or injure human beings or destroy property on a large scale.
The rules of war must evolve to keep pace with technology. Some nations demand an inflexible approach to the law of armed conflict because they hope that law can suppress war. Nations with minimal armed forces or weak strategic positions may support legal rules that inhibit other states from asserting potential advantages. Other states may oppose new technologies in the hope of preserving the advantages derived from their current forces. Still others may want to preserve military opportunities for low-tech, asymmetric tactics—those favored by guerrillas, insurgents, and terrorists. Many scholars and international officials may support an inflexible approach to the rules of war from intellectual comfort with the old way of doing things. It is to these problems that we now turn.
Frozen Law in a Changing World
Even as technology advances, legal and political leaders remain reluctant to embrace the use of new weapons. Despite their advantages, these new weapons have become the subject of a broad campaign to limit or even prohibit them. The claim is that these military advances violate the rules governing civilized warfare. Advocates for today’s law of war, known as International Humanitarian Law (IHL) to specialists,18 have used multilateral treaties to construct a set of rules that depart from the realities of modern war. They now hope to freeze into place this new IHL, which tends to favor guerrillas, terrorists, and insurgents over western nations, and conventional ground combat over technology and innovation.
The laws of war, however, more appropriately change through natural evolution than through artificial codification. They have long depended on the customs and traditions followed by states at war, which have usually decided on regulation after experience with weapons, not before. An evolutionary approach concedes that we do not currently know all the implications of these new weapons. We do not have full information on their characteristics and consequences, or the factual circumstances of their use. Rather than imposing rigid rules, the customary laws of war have usually adopted flexible standards—such as reasonableness in the selection of targets—that allow future decisionmakers to judge the legality of force in their own circumstances. They place more faith in future leaders, commanders, and judges to come to better conclusions in reviewing the use of force after the fact than in the prescience of today’s treatymakers. An approach built on flexible standards allows nations to gather more knowledge about the effects of new weapons, under conditions of deep uncertainty, before reaching fundamental decisions of policy.
In this respect, customary rules on the use of force resemble a common-law standard, such as the classic legal norm of reasonableness. A standard such as reasonableness allows judges to consider the totality of the circumstances before ruling on whether a defendant’s actions were legal. A strict rule, however, such as contributory negligence, imposes a clear norm that reduces liability to a single factor and precludes the influence of later circumstances. Rules reduce decision costs because they are clear and easy to apply, they create legal certainty and predictability, and they require less gathering of information. Rules, however, prevent a nuanced application of law to facts and so often result in inequitable outcomes. Standards demand higher decision costs because of the need for more information and time for consideration. Standards produce greater uncertainty and unpredictability, but they more often produce the better answer. A rule gives more power to the legislators who write the norm earlier and narrow the discretion of future officials, while a standard places more trust in the competence and knowledge of later decisionmakers.19 By following custom, the law of war accepts that the lawfulness of the use of force depends far more on the circumstances, that later officials will have greater access to information and experience, and that it is more important to get right answers than fast answers.
Many international leaders and scholars would replace the millennia-old, customary approach to the rules of war with instant law—with strict rules rather than standards. Nations launched an ambitious movement to codify new rules of military operations in 1977 with Additional Protocol I to the Geneva Conventions (AP I).20 Controversially, AP I promoted two significant changes to the laws of war. First, it elevated non-state actors, such as independence movements and guerrillas, to the same status as nations with conventional armed forces. Second, it attempted to reduce the discretion of combatants to use force by expanding the definition of civilian targets that were to be off limits to combat. Because of these policies, the United States defied the majority of other nations and refused to ratify the treaty. In his message to the Senate withdrawing AP I, President Ronald Reagan declared that the Protocol was “fundamentally and irreconcilably flawed,” and that its problems were “so fundamental in character that they cannot be remedied through reservations.” He therefore had “decided not to submit the Protocol to the Senate in any form.”21 Chief among these flaws, President Reagan observed, was AP I’s “grant [of] combatant status to irregular forces even if they do not satisfy the traditional requirements to distinguish themselves from the civilian population and otherwise comply with the laws of war.” Reagan recognized the political symbolism of his action, characterizing it as “one additional step, at the ideological level so important to terrorist organizations, to deny these groups legitimacy as international actors.”
AP I demonstrates the pitfalls of replacing the evolutionary approach to war with instant, inflexible legislation. AP I took form before the advent of desktop computers, the Internet, cell phones, global positioning satellites, and cruise missiles. That era’s political circumstances were equally different. Nations were still drafting the text of AP I when North Vietnam conquered South Vietnam. The U.S. and U.S.S.R. dominated world politics and economics, half of Europe was forced into the Warsaw Pact, and the Third World, as it was then known, was still emerging from the throes of decolonization. The collapse of the Soviet Empire, the rise of China, the advent of Islamic extremism, and the spread of global terrorism were yet to come.
Given these significant changes in the world since the mid-1970s, AP I’s provisions are growing hopelessly out of touch with the practice of the states that actually fight wars. Nations themselves realize this. In 1998, for example, a conference in Rome negotiated a treaty to establish the International Criminal Court (ICC).22 Its drafters drew extensively on AP I to define the “war crimes” subject to prosecution. Neither the United States nor many other major powers, including Russia, China, India, Turkey, Indonesia, Egypt, Iran, Israel, and Syria, ratified that treaty. The ICC has so far reached convictions in only a handful of cases, none of them dealing with actions by western armies or with forces outside of Africa.
Nevertheless, AP I remains influential. Even at the time, the United States conceded that much of the treaty merely restated accepted practices.23 It remains the most comprehensive statement of rules for the conduct of military operations. Commentaries on the law of armed conflict, including those by American scholars, assume that provisions of AP I are solid evidence of what the law of war now requires—if not by treaty, then as a matter of “customary law,” which is binding on all states.24
It is important to understand what a significant break AP I is with the history and practice of the law of war. Historically, the laws of war represented customary law, which was established by the actual practice of states over long periods of time. Nations, for example, have long followed a principle of discriminating between combatants and civilians on the battlefield, but had never declared the rule in a general treaty before. States established the rule over centuries through the norms that they consistently followed in wartime. Their applications of the standard of discrimination in different factual circumstances provided guidance for future cases. AP I represents a wholly different approach. It assumes that interested nations can simply legislate the rules of armed conflict by treaty, rather than practice. It assumes that the treatymakers in 1977 could determine the best application of the rules to future circumstances, as opposed to those who fight the wars then.
This view attempts to transform treaty language into instant “custom.” If many states have ratified a treaty, it must represent customary law just as much as, if not more than, universal conduct. Many advocates claim the promises of nations create law more firmly than the practices of nations.25 This is not just an academic exercise, but has become the opinion of international tribunals. For example, the International Court of Justice (ICJ) stated in its 1996 advisory opinion on nuclear weapons, “Extensive codification of humanitarian law and the extent of the accession of the resultant treaties . . . have provided the international community with a corpus of treaty rules the great majority of which . . . reflected the most universally recognized humanitarian principles.” It concluded that IHL treaties, as solidified into custom, “indicate the normal conduct and behaviour expected of States,” presumably whether they had ratified the agreements or not.26 Under this view, the United States can be bound by the rules set out in a treaty it has never ratified because its provisions can be regarded as customary law.27 But those rules remain what they were in the mid-1970s because “customary law” is impervious to contrary practice, even though much military action over the last four decades has not followed AP I.
Domestic analogies reveal the peculiarity of this approach. On domestic statutory questions, courts and executive branch agencies regularly adapt legal principles to fit new factual circumstances. Some of these exercises, of course, can stoke controversy when the application of old legal rules to new technologies is disputed (and disputable). In 2015, for example, the Federal Communications Commission (FCC) prohibited Internet service providers from imposing different charges for carrying different types of content. The FCC subjected Internet service to the same regulatory framework as long-distance telephone service, even though Congress could never have imagined the Internet at the time it enacted the Communications Act of 1934. To be sure, the Commission was divided, as was the U.S. Court of Appeals for the D.C. Circuit, which ultimately upheld the FCC regulation.28 A new majority on the FCC has announced its intention to repudiate the Obama effort to regulate twenty-first century technology with an 80-year-old law.
The tension between adaptation and innovation becomes even more acute when cases turn on the meaning of constitutional provisions now centuries old. The framers of the Bill of Rights could not have envisaged modern technology. It was left to the Supreme Court to decide that First Amendment guarantees of free speech and press did not apply to television broadcasters as they do to newspapers. Yet the Supreme Court later concluded that the Fourth Amendment’s protection against unreasonable searches and seizures prohibited police from using thermal imaging detectors to identify drug-growing operations and GPS tracking devices to track a suspect’s movements without a warrant.29 Even though there may be disagreement among both tribunals and society about the application of a particular law to new circumstances, legislators often deliberately write laws with generality so that future judges and lawyers can adapt the law’s purpose to new circumstances.
Clinging to 1970s understandings of the law of war presents much greater difficulties than waiting for Congress or the constitutional amendment process to update legal rules. Unlike domestic law, which enjoys the enforcement of the judicial and executive branches, international law has no central institution capable of applying a uniform understanding of the law throughout the world. Because international law cannot punish rule breakers, states that violate the civilized laws of war will seize an advantage in armed conflict. Some states, for example, may keep to earlier understandings of the laws of war, others to a 1970s understanding, and still others may choose to skirt, subvert, or defy the rules outright. Those who honor the old rules will be at a disadvantage when fighting those who do not. It is hard to see that as a gain for international law. As Winston Churchill protested, “I do not see why we should have all the disadvantages of being the gentleman while they have all the advantages of being the cad.”30 Disorder, tyranny, and intimidation will have greater sway if western nations shrink from defending the postwar system because old rules make it too hard for them to fight.
Yet many scholarly commentators and government officials still tend to view the laws of war in quite formalistic ways. They rely on textual provisions of AP I, U.N. resolutions, and even dicta found in ICJ rulings and advisory opinions. From a fabric of words, they stitch together a protective suit that will supposedly shield us from foreign attacks (now unlawful, so our enemies cannot penetrate our rhetorical armor) and against foreign condemnation (because we wear the protective armor of “law”). But this pick-and-choose approach cannot work when confronted by new circumstances. In the past few years, for example, major academic publishers have produced several books on the legal limits of cyber war.31 Scholars have published dozens of long, scholarly articles on this subject.32 As these works acknowledge, however, the world has never seen anything that could rightly be described as “cyber war.” These works cannot describe the actual practice of states, which can then coalesce into “customary law,” because no practice yet exists.
Instead, these commentators and officials simply imagine the way that existing rules might apply to new technologies. They have little of substance to work with. Scholarly studies on the laws of war, published in the second decade of the twenty-first century, are of little help. They assume that the relevant rules are those codified in AP I, before the emergence of the Internet, email, and the information revolution. Nations never reconvened to rewrite the treaty to address new technologies. These commentators believe that these rules should govern only because they are most familiar. Their views, however, will have little purchase because they do not arise from the strategic needs and military capacities that do so much to determine how nations behave in times of conflict.
We are not arguing for a world without law. It will not be easy to decide what rules should prevail or what applications would be most feasible and desirable. We are arguing for rules that respond to the circumstances of war in the twenty-first century and the opportunities presented by new technologies. To argue in this way is not radical or extreme. It is entirely traditional. The law of war as laid down in the 1970s was not the law as it was understood in the 1940s. The law of the 1940s was not the law of 1914. As law in all areas regularly does, the law of war has continually adapted to new technologies and new circumstances, when old means no longer serve necessary ends.
War, Law, and Weapons
War and law are inextricably intertwined. As mankind has discovered new technologies and developed more effective institutions, it has brought invention to war. But nations did not then develop legal codes to impose on armed conflict. Instead, their consistent behavior over time gave rise to general principles that could guide leaders and combatants in the next war. Wars come first and the law follows, rather than the other way around. Rules limited, but did not prevent, the use of force by nations to coerce other nations.
The direct relationship between innovation and war is nothing new. In the ancient world, the evolution from bronze to iron tools and the discovery of more productive means of agriculture allowed cities to deploy larger, trained armies. Economic surplus allowed states to support warriors who specialized in combat. Progress in animal breeding made possible first the chariot and then large cavalry formations. The emergence of market institutions and effective government allowed China in the East and the Romans in the West to manufacture iron weapons on a larger scale, train and deploy bigger armies, and administer larger territories.33 Whereas Sparta and its allies fielded an army of no more than 30,000 in the Peloponnesian War,34 the Roman imperial army under Augustus reached 250,000 troops and hit a high of perhaps 450,000 under Caracalla—numbers that Europe would not see again for more than a millennium.
In the Middle Ages, advances in technology, though slow, still prompted changes in warfare. As armor improved, mounted knights supported by rural towns prevailed. A few hundred knights controlled southern Italy and Sicily; a few thousand in the First Crusade successfully invaded and held Jerusalem.35 But the invention of the crossbow in the eleventh century (along with improvements in the longbow) led to the weakening of knightly superiority in Europe; in China, where these weapons came into existence much earlier, mounted knights never held the upper hand. Progress in shipbuilding and navigation led to the replacement of human-powered triremes with wind-powered men-of-war. The invention of gunpowder made possible artillery and siege weapons, and professional militaries equipped with small arms. Military historian Victor Davis Hanson has argued that the Western nations became dominant because their innovative societies, capitalist and proto-democratic at the beginning of the modern world, more quickly adapted and deployed new technologies to war.36
Nevertheless, the slow development of human societies kept military affairs relatively static. Despite the evolution of weaponry from the ancient and medieval worlds to the Renaissance, tactics and strategy did not significantly change. Horses still provided mobility on land and wind drove ships at sea. Armies and navies still fought at close quarters within eyesight of each other. Generals could move their forces only short distances because of the limits of transportation technology and logistics. Firearms and artillery increased the casualties in these confrontations, but not their distance or speed. Alexander the Great would have recognized the formations, tactics, and strategies of Napoleon Bonaparte or even Robert E. Lee.
As military technology evolved at this slow pace, the rules of warfare did not change much. In the ancient world, law imposed few limits on combat, and those that prevailed seemed to hinge more on fidelity to the gods than to man. It is not recorded whether the victims of the first iron weapons or the first war chariots demanded an international convention to outlaw their use. But if there were demands for a ban on such weapons, they did not succeed. In medieval times, there were repeated efforts to ban the crossbow. Pope Urban II, better known for urging knights to embark on the First Crusade, also called for the repudiation of this weapon. A Byzantine princess denounced it as “a truly diabolical machine.” A few decades later, in 1139, the Second Lateran Council urged a formal ban. The Holy Roman Emperor Conrad III decreed that use of the weapon should be punished as a capital crime.37
These protestations went unheeded because the law could not prevent armies from seizing the battlefield advantages offered by these new weapons. In a world where mounted nobles were the decisive military force, the crossbow was a disruptive weapon. It could launch arrows with sufficient force to burst through armor. It threatened to displace the lifelong training and valor of knights and nobles with a devastating mechanism, typically wielded by artisans or peasants (who were otherwise prohibited from carrying arms). The ban failed simply because the crossbow proved too valuable in winning battles.38 With the right training, companies equipped with the longbow—a weapon already mentioned in the Bible—could devastate mounted nobles, as English kings and their well-trained peasants proved repeatedly in the Hundred Years War. Efforts to ban the longbow also failed; so too with the arquebus, forerunner of the musket, which appeared in the sixteenth century.39 Some commanders treated wielders of this new weapon as, in effect, war criminals who should be killed at once.40 But advanced nations would not stop deploying soldiers trained in the new weapons; too many commanders insisted on retaining the advantages that so effectively altered the balance of power in their favor.
War’s nature only began to shift significantly with the profound economic changes of the nineteenth century. Before the Industrial Revolution, mankind made no significant gains in productivity. The distribution of wealth in the world depended more on the size of a nation’s population than on its output per person. In 1000, for example, Western European GDP was $11 billion (in 1990 dollars); by 1500 it had risen to only $44 billion (less than doubling every century); and in 1700 it reached $81 billion. But the Industrial Revolution and the emergence of market capitalism in the nineteenth century broke humanity out of this Malthusian trap. Advances in agricultural and industrial productivity, due to technology, management, and political and legal systems, allowed for stunning increases in wealth and economic growth. Western European GDP exploded to $367 billion by 1870 and $840 billion by 1913.41 The United States grew even faster. Its economy started at only $527 million in 1700. By 1870, the U.S. economy had reached $98 billion, and by 1913 it had quintupled to $500 billion. China’s economy, which did not experience a nineteenth-century Industrial Revolution, actually shrank for most of that period.
The innovations that made these steep gains in economic growth possible also enabled far more lethal armed forces. Combat during the U.S. Civil War from 1861 through 1865 gave a hint of what was to come. Industrial production permitted larger, better-equipped armies. Union and Confederate armies could throw much larger weights of bullets and bombs with greater accuracy over longer distances than ever before. Railroads allowed for the swift movement of men and supplies. The telegraph permitted faster, clearer communications. Wooden naval vessels, which had depended on the winds for more than two thousand years, evolved into warships protected by armor and driven by steam.
War’s exponential growth in size and destructiveness triggered the first efforts at regulation. During the Civil War, President Lincoln issued General Orders No. 100, the Instructions for the Government of Armies of the United States in the Field—the first official public code of the laws of war. Article 15 of the Code set out the wide means available to nations at war:
Military necessity admits of all direct destruction of life or limb of armed enemies, and of other persons whose destruction is incidentally unavoidable in the armed contests of the war; it allows of the capturing of every armed enemy, and every enemy of importance to the hostile government, or of peculiar danger to the captor; it allows of all destruction of property, and obstruction of the ways and channels of traffic, travel, or communication, and of all withholding of sustenance or means of life from the enemy; of the appropriation of whatever an enemy’s country affords necessary for the subsistence and safety of the army, and of such deception as does not involve the breaking of good faith either positively pledged, regarding agreements entered into during the war, or supposed by the modern law of war to exist. Men who take up arms against one another in public war do not cease on this account to be moral beings, responsible to one another and to God.42
Following this logic, an army at war could impose blockades and sieges and even bombard a city, so long as its reduction had military value. Still, Lincoln’s General Orders No. 100 also put into written terms the need to shield civilians, where possible, from the harshness of war. Article 22 declared that the Union armies would respect a distinction between “the private individual belonging to a hostile country and the hostile country itself, with its men in arms.” But civilian immunity from hostilities would run only as far as military necessity allowed. “The unarmed citizen is to be spared in person, property, and honor as much as the exigencies of war will admit.”
Francis Lieber, a Prussian immigrant and advisor to the Lincoln administration, drafted the rules. He believed that the modern age had made war into a contest between mass armies rather than individuals.43 Lieber did not think that the customs of international law should ban most methods of war, so long as the destruction was no “greater than necessary.” Accepting conflict as a permanent feature of international affairs, Lieber believed that fiercer wars were more humane because they were shorter, an idea shared by Clausewitz and Machiavelli. Therefore, the laws of war allowed almost any destruction that advanced the goals of the war, and the use of “those arms that do the quickest mischief in the widest range and in the surest manner.”44
The Industrial Revolution and the rise of mass production equipped armies of draftees with highly lethal, yet relatively cheap, standardized weapons. In World War I, rifles accurate over long distances became commonplace. The British Lee-Enfield rifle could fire twelve rounds per minute with accuracy at 600 meters from a ten-round magazine; the British Vickers machine gun could fire 450-600 rounds per minute at a range of 4,000 meters. Artillery became much more significant due to the larger number of pieces, their range and accuracy, and the use of high-explosive shells. Airplanes and tanks made their first appearance in World War I. Dreadnoughts used oil engines, displaced 16,000 tons, and mounted fifteen-inch guns that could hit targets twenty miles away. Submarines entered into widespread use for the first time. Western armies unleashed the first weapons of mass destruction, chemical and biological agents that killed or incapacitated on the battlefield.
Modern industrial production, transportation, communication, and logistics produced even larger armies. By the end of World War I, Russia had mobilized about 12 million soldiers, Germany 11 million, Great Britain 8.9 million, France 8.4 million, Austria-Hungary 7.8 million, Italy 5.6 million, and the U.S. 4.35 million. Modern weaponry’s longer range and destructiveness gave the advantage to defensive warfare in trenches, which inflicted staggering casualties on these new large armies. In the Battle of the Somme, from July 1 to November 18, 1916, both sides suffered more than 1 million killed or wounded—the British Army lost 57,470 on the first day alone, the worst day in its history. Overall casualties dwarfed any previous war in human history: the Allied Powers lost 5 million killed, 12.8 million wounded; the Central Powers lost 8.5 million killed, 21 million wounded. By comparison, in the Napoleonic Wars, France lost 371,000 killed, 800,000 wounded. Its allies lost similar numbers: the British, 312,000; Austria, 376,000; Russia, 289,000; and Prussia, 134,000. Efficiency did not stop with the production of consumer goods; it extended even to the business of killing.
World War II exploited transportation advances to expand the use of air power and armored vehicles, which returned the combat advantage to the offense. Casualty levels climbed again, but within World War I ranges: the Soviet Union lost about 9 million in combat deaths, Germany 5 million, Japan about 2.5 million, the U.S. 407,000, the United Kingdom 384,000, with total worldwide combat deaths ranging from 21-25 million. But World War II witnessed a phenomenon that perhaps had not appeared since the Thirty Years War: massive civilian deaths (about 30 million) in numbers that exceeded military ones. The advent of the atomic bomb at the war’s end raised the specter of even greater civilian casualties in the future.
International law could not stop the spread of technological progress to the machines of war. This has been the lesson of history. Lieber’s Code did not prohibit the Union blockade of the South, the burning of Atlanta, or Sherman’s march to the sea, nor did it prevent the introduction of new weapons such as modern rifles, trenches, and artillery. In World War I, the Allies demanded that German submarines surface and give ships the opportunity to off-load civilians before attacking, a practice that would have sacrificed the element of surprise. Germany ultimately refused, which handed Woodrow Wilson the official rationale to bring the United States into World War I on the side of Great Britain and France. The Washington Naval Conference of 1922 sought to limit large battleships and maintain a rough balance of maritime power between the great powers, but Japan evaded the rules while the locus of sea power shifted instead to aircraft carriers. Nations used chemical weapons in World War I, signed a treaty to ban them in 1925, but have used them in conflicts since.
We are not arguing against all forms of cooperation during armed conflict. Nations have applied general custom to limit the use of weapons that cause unnecessary suffering or superfluous damage and destruction, depending on the factual context. The general principles of Lieber’s Code still guide the conduct of war, even as new technologies transform weapons, tactics, and strategy. Nations have the freedom to use military force in ways that advance the objectives of the war, so long as they minimize harm to civilians as best they can.
Instead, we question the idea that nations should look to formal treaties and rules to produce lasting limits on war. Despite the recent deterioration in the Syrian civil war, nation-states have generally refrained from the use of chemical weapons against each other since the end of World War I. They have followed the Geneva Conventions on prisoners of war, though not consistently. Nations have observed other norms in the breach, chief among them the immunity of civilian populations and resources from attack. World War II not only saw the aerial bombing of cities and the nuclear attacks on Japan, but the years since have seen precision targeting of terrorists off the battlefield, attacks on urban infrastructure, and the acceptance of high levels of collateral damage among civilians. International lawyers and diplomats may proclaim that nations follow universal rules, either because of morality or a sense of legal obligation, but the record of practice tells a far different story. Efforts to impose more specific and demanding rules, such as limiting targeted drone attacks, banning cyber attacks, or requiring human control of robotic weapons, will similarly fail because they cannot take into account unforeseen circumstances, new weapons and military situations, and the immediate exigencies of war. Just as new technology led to increases in economic productivity, so too has it allowed nations to make war more effectively.
Nations will readily adhere to humanitarian standards when they gain a benefit that outweighs the cost, as when protecting enemy prisoners of war secures reciprocal protection for a nation’s own soldiers taken captive by the enemy. Limitations on the use of weapons will follow a similar logic. Nations will be most inclined to respect legal restraints on new weapons when their use by both sides would leave no one better off or would provide little advantage. Cyber and robotic weapons do not bear the same features as the weapons where legal bans have succeeded, as with use of poison gas on the battlefield. Cyber and robotic weapons need not inflict unnecessary suffering out of proportion to their military advantages, as do poisoned bullets or blinding lasers. Rather, these weapons improve the precision of force and thereby reduce human death and destruction in war.
Nor have these new weapons technologies yet sparked a useless arms race. Nuclear weapons eventually became ripe for arms control because larger stockpiles provided marginal, if any, benefits due to the destructive potential of each weapon and the deterrence provided by even a modest arsenal. Mutual reductions could leave both sides in the same position as they were before the agreement. Today, the marginal cost of nuclear weapons for the U.S. and Russia so outweighs their marginal benefit that it is not even clear that a binding international agreement is needed to reduce their arsenals. Russia, for example, reduced its arsenal below New START’s ceilings of 1,550 nuclear warheads and 700 strategic launchers even before the U.S. approved the deal.45 The United States likely would have reduced its forces to those levels even if the Senate had refused to consent to the treaty, a position the executive branch also took in 2002 with the Treaty of Moscow’s deep reduction in nuclear weapons. Today’s new weapons do not yet bear these characteristics. The marginal gains in deploying these weapons will likely be asymmetric across nations insofar as some nations will experience much greater gains in military capability by developing cyber and drone technology. Put differently, prohibition or regulation of these new weapons will not have equal impacts on rival nations. Indeed, we do not even now have enough information to understand which nations will benefit and which will not, which makes any form of international ban even less likely.
Nuclear weapons are the exception that proves the rule. Their unique characteristics and deterrent value make them suitable for international cooperation to limit their use. But the twentieth century has otherwise shown that technological advances, and the increases in military effectiveness that have followed, have outpaced law. Efforts to prevent the introduction of new weapons have failed because the weapons themselves initially advantage early adopters. Legal regulation will not emerge until nations have gained significant information about how the technology, and constraints on its use, may affect them. In the absence of specific agreements, nations will still follow the customary rules of war, which provide general principles of reasonableness to apply to new circumstances, such as traditional prohibitions against wanton destruction or unnecessary suffering. It is to the new world of war that we now turn.
Static Law for a Changing World?
The laws of war have not kept pace with the rapid change in weapons technology. Efforts to freeze war in place by adopting an inflexible legal approach may lead to a failure of the current framework of war or its rewriting by nations less friendly to the Western international order. Another set of changes in war, also spurred by changes in politics, economics, and technology, is placing further pressure on the idealized AP I and U.N. Charter vision of the international order. In this new century, the classic paradigm of war between nation-states with disciplined militaries has slowly given way to a more chaotic world in which terrorist organizations, regional guerrillas, and ethnic or religious groups conduct equally violent hostilities. The great majority of casualties now come from civil wars and disputes within states, rather than wars between states. Today, the great powers use their militaries to threaten or intervene against smaller states, rather than in direct battles against each other. To grapple with these problems, the laws of war should permit both the wider use of new weapons and the more frequent use of force.
The geopolitical order of the nineteenth century was determined largely by the military and economic strength of nations, not international agreements. Though legal treatises at that time still embraced the traditional view that a just war requires a legitimate casus belli, many causes were regarded as “good” and there was little enforcement of the just war requirement. Nations often went to war to enlarge their territory, as the United States did in the Mexican War of 1846-48, or to prevent others from expanding their influence, as when Great Britain and France fought Russia in the Crimean War of 1853-56. European nations constructed a Concert of Europe to strike a balance between the great powers, with war as the final mechanism to ensure that no state grew too powerful. Even when a state invoked transparently contrived claims to justify war for other reasons, as Bismarck’s Prussia did to unify Germany, no outside power helped the victims. Wars were brief in time and limited in scope.
The breadth of destruction wrought by World War I, however, prompted nations to attempt a rewrite of the international order. Rather than rely on a balance of power and contending alliances, nations would seek their security through a peace treaty that established a scheme of “collective security” guaranteeing every state its “political independence and territorial integrity” against aggression.46 They established an international forum to help resolve disputes, the League of Nations, then tried to outlaw war in the Kellogg-Briand Pact, allowing (supposedly) a resort to force only in self-defense. But the League failed to take effective action in response to either Japanese aggression against China or Italian aggression against Ethiopia. It could not draw in the two most powerful nations in the world, the United States and the Soviet Union, to cooperate to maintain international order. When Britain and France declared war in response to Germany’s aggression against Poland, no one even bothered to consult the League.
After the failure of the League to keep the peace, the victors renewed and extended their commitment to collective security. Maintaining the League’s guarantee of political independence and self-determination for all member states, the United Nations Charter banned war except in cases of self-defense. Force would remain the province of the Security Council, along with coercive measures short of war, such as economic sanctions. Article 42 declares that the Security Council “may take such action by air, sea, or land forces as may be necessary to maintain or restore international peace and security. Such action may include demonstrations, blockade, and other operations by air, sea, or land forces of Members of the United Nations.”47 Article 51, however, contained the great exception for self-defense: “Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security.”48
Article 51 itself does not define self-defense. Nonetheless, well-regarded legal commentators insist that the U.N. Charter allows force only when the Security Council cannot intervene successfully to counter a cross-border invasion.49 This logic mimics domestic criminal law, which allows victims to physically resist a threat of deadly harm only if the police cannot prevent the violence. This doctrine does not, however, reflect the practice of most U.N. states. States have used armed force against each other hundreds of times since 1945. The Security Council has authorized only a handful of them due to the veto power of any of the five permanent members of the Council (U.S., Russia, Britain, France, and China). Moreover, the U.N. Charter does not simply authorize the Security Council to respond to “aggression.” It also authorizes the Council to act against “breaches of the peace” and “threats to the peace,” which implicitly acknowledges that threats to a nation’s security go beyond an actual cross-border “armed attack.”50
Prior to 1945, international law recognized that nations could respond to lesser threats with measures short of all-out war. These measures might run from mere diplomatic protest to various coercive actions.51 The U.N. Charter also recognizes this tool of traditional statecraft. It allows the Security Council to authorize “measures not involving the use of armed force” against threatening states. Such tactics include “complete or partial interruption of economic relations and of rail, sea, air, postal, telegraphic, radio . . . communications.”52 The Charter also provides that the Security Council may deploy “air-force contingents for combined international enforcement action” when “urgent military measures” are required.53 This provision evidently contemplates the deployment of such “contingents” in operations that are independent of a land invasion (which could rarely be organized on an “urgent” basis and is not provided for in the Charter).
For all of its peaceful aspiration, however, the U.N. Charter has been ineffective in constraining state military behavior as the political and technological circumstances of the world have evolved. The Security Council did not reach agreement on establishing an international bomber force. The Council has rarely invoked most of its other coercive powers. Of course, that is not because the world has experienced peace. Instead, the differing agendas of the permanent members of the Security Council have paralyzed the U.N., leaving it to nations to resolve disputes in other ways.54 While disputes have sometimes erupted into war, they have not yet escalated into a World War III. Some scholars believe that the “Long Peace” of the postwar world emerged from the balance between the U.S. and the U.S.S.R., while others argue that nuclear weapons imposed caution on all the great powers.55 We cannot be sure that even that reprieve will endure, now that Pakistan, North Korea, and perhaps others have acquired nuclear weapons.
Even as the prospect of general war has receded, other forms of hostilities short of all-out war have emerged. Formal declarations of war have become rare—the U.S. Congress last enacted a formal declaration of war in 1942 (against Bulgaria). Perhaps that is partly because the U.N. Charter seems to prohibit resorting to “war.” But formal “war” has also become harder to define because hostilities often fall short of open armed conflict, and formal peace treaties rarely mark their end. While the Second World War ended with unconditional surrender, Cold War disagreements prevented any general or comprehensive peace treaties among all participants. In Korea, a “temporary” armistice remains in place to this day, as with Israel and its neighbors (except Egypt and Jordan). North Vietnam ended its war in a unilateral annexation without awaiting any other nation’s approval.
The lines between war and peace (or conflict and truce) have become even more blurred in recent years, as major conflicts have involved non-state actors or states acting in disguised ways. If the Russian army had launched tank columns and regular infantry against the Baltic states, it would almost certainly have triggered a NATO military response and even a conventional interstate war. But the Russian invasion of Crimea in 2014 was much more ambiguous. Moscow sent special agents to seize strong points, enlisted local support, and disguised the underlying aggression with claims about local consent to what quickly became a fait accompli.56 Russia has continued the challenge by supporting local “militia” in eastern Ukraine who resist the authorities in Kiev, ostensibly in the interests of regional autonomy. Military analysts call this mix of covert operations combined with limited conventional military force a hybrid war, which often does not provoke the sort of response that would meet a full-scale invasion.57
New forms of irregular warfare do not just benefit revisionist states such as Russia; even small groups can now wage hostilities that generate instability and even seize territory. ISIS emerged from terror groups that had challenged the government established in Baghdad after Saddam’s overthrow. Terrorism has become another way to exert pressure without risking a direct trial of arms against an organized army. In the hands of ISIS, terrorism took on some of the characteristics of guerilla warfare, using surprise attacks and terrorizing civilians to take and hold land, people, and resources.58 In Lebanon and Syria, outside states such as Iran have made the challenges even more difficult by supporting armed militias with arms, training, and even troops. A regular army can defeat guerillas, but that has usually required a long struggle. Great powers, like the French in Algeria by the late 1950s, often lose patience with a lengthy struggle.
Hybrid war and guerrilla or terrorist insurgencies present the same set of strategic challenges for nations. Hybrid war tactics discourage initial intervention of an outside power by making the challenge seem limited, partial, and small stakes. Terror and guerilla tactics discourage a major power from making a long-term commitment for fear of a slow drain in low-intensity combat. Terrorists and guerrillas disperse to prevent a government from focusing its forces in conventional battle, and they seek cover behind the civilian population. They target civilians, even their own supporters, to intimidate them into remaining on their side. They aim to goad a government’s regular forces into making wider attacks on civilians, which may have the effect of driving civilians away from the state. The power with the greatest military muscle is not necessarily the most capable contestant. If the only answer to such irregular tactics is “war,” then a major power may decide not to respond.
States have also developed their own means of coercing opponents with tactics short of conventional war, such as annoying or harassing adversaries through intermediaries, an accepted military tactic that the U.N. Charter apparently does not endorse. As early as the American Revolutionary War, the British stirred up frontier Indians to attack American settlers. An independent United States winked at pirate attacks on Spanish commerce as a way of assisting independence movements in Latin America. In the nineteenth century, states deployed even more active measures. When small countries defaulted on loan agreements or seized or abused their nationals, major powers were often unwilling to declare war, either because the injuries were small or because they feared intervention by other powers. They instead resorted to “gunboat diplomacy.” Demands for redress were backed by a show of naval force, sometimes by small-scale bombardment. Nineteenth-century scholars of international law called it “pacific reprisal.”59 It was not war but a form of retaliation for the sake of coercion.
The U.N. Charter authorizes states to use force in self-defense “if an armed attack occurs.” But we have something close to pacific reprisals in the current policy of drone missile attacks on terrorists not only in Afghanistan but also in Pakistan, Yemen, and Libya. While the United States claims that it is acting in “self-defense” when conducting these strikes, it cannot credibly claim it is repelling an ongoing attack. Such strikes are consistent with the U.N. Charter only if we expand the concept of self-defense to include anticipation of an attack, even one that may not be imminent. In other words, the United States might claim that anticipatory self-defense allows preemptive strikes when the probability of an attack is small, but the potential for destruction is high. Or the United States and its allies must admit that they are engaging in preventive war designed to nip challenges to international security in the bud, even when there is no immediate claim to self-defense.
We are not arguing that war between states is disappearing, as some utopian writers do. Great power competitions continue to plague international politics, and those rivalries can still break out into conflict. Nuclear weapons may reduce the scope of hostilities, but the limits imposed by the superpower balance have eroded with the disappearance of the Soviet Union. The world is returning to a less orderly state of affairs. If the lines between war and peace blurred during the Cold War, they have become even less distinct today. Nations rely even more on measures short of full-blown armed conflict to coerce each other. New forms of international actors, such as terrorist groups, use force at a level that does not provoke a full-scale military response. Technological advances may provide western nations with a broader spectrum of coercive responses. The rules imposed by the U.N. Charter and AP I cannot meaningfully govern these changes.
Responding to these new facts of war with new military technologies will provoke objections from international lawyers. They insist on highly restrictive rules for defining legitimate targets in war, such as AP I’s prohibition on the targeting of civilian property.60 Legalists can also invoke the AP I requirement that even attacks on otherwise legitimate “military objectives” are unlawful, if they “may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects . . . which would be excessive in relation to the concrete and direct military advantage anticipated.”61
These requirements of distinction and proportionality may make sense in a conventional war in which the main aim is to defeat the enemy’s army on the battlefield. But they make less sense if the aim is to affect the political calculus of enemy leaders. That was surely what the drafters of the U.N. Charter had in mind when they authorized interruption of sea or radio communication or even bombing as an “urgent military measure.” These tools built upon the blockades and economic embargoes that the Allies had used to deadly effect in World Wars I and II. Such actions seek not to defeat an enemy’s armed forces, but to increase the costs on its society and economy. Much of that pain would fall upon civilians. Intercepting all “postal, telegraphic, radio, and other means of communication” imposes a sense of isolation, which might undermine civilian confidence in the target country’s leaders. If it seemed reasonable for the Council to undertake such measures, it seems reasonable for member states to have those options on their own.
By the 1990s, nations regularly resorted to economic sanctions, such as those against Haiti, Serbia, and Iraq. They were never confined to military objects, but included civilian goods and services, such as oil and banking. The resulting pain primarily struck civilians.62 Sanctions on Serbia in the early 1990s, for example, produced an “economic meltdown,” in which unemployment and extreme poverty engulfed half the population and average income actually dropped by fifty percent.63 If the Council hoped to induce governments to change their positions, it was by threatening them with domestic disorder as food and other civilian necessities became more scarce and more costly.
U.N. Charter rules, however, may not justify even armed attacks on military targets if the purpose is coercion. AP I insists that “attacks” must be launched solely at “military objectives.”64 When there is no purpose to incapacitate the target state’s military capacity, there may be no “military objective” at all. In the 1980s, the United States accused Libya of involvement in a terror attack on American soldiers in a Berlin nightclub. Libya had not invaded American territory. Its terrorism was not ongoing, though it might have been repeated. The Reagan administration retaliated by bombing Tripoli. It took care to say that the bombs were aimed at Libyan military installations, including a civilian site where the Libyan dictator Muammar Gaddafi was known to meet with top military commanders. Gaddafi “was not personally immune from the risks of exposure to a legitimate attack,” stated Abraham Sofaer, legal advisor at the U.S. State Department. “He was and is personally responsible for Libya’s policy of training, assisting, and utilizing terrorists in attacks on U.S. citizens, diplomats, troops, and facilities.”65 Nothing achieved by the bombing would have made it substantially more difficult for Libya to organize future terror attacks in Europe or elsewhere.
The U.S. attack on Libya also skirted the conventional understanding of discrimination and proportionality. One can see the point by thinking about “collateral damage.” Close members of Gaddafi’s family were killed in that attack. They were civilians who took no part in military affairs. AP I does not make clear how much incidental loss of life among civilians would have to occur before an attack would be “excessive” in relation to the “concrete and direct military advantage” achieved by attacking Gaddafi’s meeting place. A critic of the U.S. attacks might argue that the military advantage was so remote and speculative that it could not justify any incidental harm to civilians. A defender of the strike could respond that the air attacks might deter Gaddafi from pursuing further terror attacks.66 The U.S. gained a concrete and direct military advantage by deterring Libya from future international terrorist attacks on U.S. troops. Its limited strikes achieved an objective that otherwise might have demanded far greater attacks, with more loss of life and destruction. Again, they reveal the growing incompatibility between the formal rules of international humanitarian law, spun together out of the U.N. Charter and AP I, and the demands for coercive, limited uses of force in today’s world.
Two trends now seem to be converging. On the one hand, the underlying architecture of international politics is becoming more disordered. Instability is spreading throughout the world, in Eastern Europe, East Asia and Central Asia, North Africa, and the Middle East. The European Union has not developed any military capacity of its own, while NATO is under more internal stress than ever before. Meanwhile, insurgent or revanchist forces have found ways to project intimidating force without the risk of full-scale military invasion. We face hybrid war in Eastern Europe, terror campaigns in Western Europe, and the construction of new islands to extend maritime claims in the South China Sea.
Part of the response may be new weapons technologies, but only if they are accompanied with new thinking on how and where they can be used. The most important characteristic of new technologies, in cyber, drone, and robotic weapons, is the capacity for remarkable degrees of precision. It was once possible to claim that bombs aimed at “military objectives” were only incidentally working “collateral damage” on civilian objects. Now, military technology gives us the capacity to strike with precision, which means destroying relatively little beyond intended targets. New technologies may offer a compelling response to the challenges of our time by allowing western nations to respond to the provocations of authoritarian aggressors or reach out to strike terrorists far removed from a battlefield.
We are not claiming that new weapons will, by themselves, resolve every challenge and deliver us to a new era of stability and peace. Every weapon, even supposedly autonomous or robotic ones, requires human guidance and strategy in the background. We may misjudge our challenges or our opportunities. We may underestimate the resolve of enemies or overrate the immediate threats they pose. Technology does not make statecraft obsolete. It simply offers more tools and options.
Embracing new technologies does not require us to believe in literal magic bullets that will render confrontational opponents supine after one volley. Nor does relaxing current understandings of the laws of war require that belief. The point is to provide alternatives to avoid the choice between all-out war and fatalistic resignation. The aim of many interventions would not be so much to disable the military capacity of the opposing side as to indicate the Western capacity and willingness to impose costs.
Short of completely incapacitating the opposing side, even large-scale war is a tacit bargaining situation, as Thomas Schelling pointed out more than fifty years ago.67 Part of the bargaining may involve inflicting harm on an opponent to signal readiness to do so on a larger scale. It may not be feasible to penetrate the delusions of the most crazed, megalomaniacal dictator—but even sobering those in his circle may be helpful. At any rate, most tyrants have concerns about preserving themselves. Signaling, as we will argue later on, is an important element of military exchanges. One might think of new technologies as providing us the capacity to communicate with more exclamation points, and to indicate that our enemies cannot rely on the protections afforded by highly restrictive interpretations of the laws of war.
Conclusions
By the end of 1862, Union armies had been struggling for almost two years against Confederate armies in the American Civil War. On December 1, President Lincoln offered a new strategy in his message to Congress. He proposed a constitutional amendment, authorizing federal compensation to states that abolished slavery over the next four decades. His message concluded with this memorable admonition: “The dogmas of the quiet past are inadequate to the stormy present. . . . As our case is new, so we must think anew and act anew. We must disenthrall ourselves, and then we shall save our country.”68
It was an offer of peace through compromise. As Lincoln may have expected, the offer was not accepted. Perhaps that gave Lincoln the confidence—or the political support in the North—to proceed with an alternative approach. A month later, on his own authority as commander-in-chief, President Lincoln proclaimed the emancipation of all slaves in all states “in rebellion against the United States.”69
Infuriating Southerners, the Emancipation Proclamation cut off hopes for a compromise peace. But it also meant that Southern states would have to retain more military units to guard the home front, thus depleting manpower available to the main Confederate armies. Many slaves were encouraged to escape, undermining agricultural production. Many escaped slaves then reinforced Union strength as laborers or soldiers. Those who remained often provided valuable intelligence to advancing Union armies. Making the war a battle over slavery helped deter European powers from offering support to the Confederacy. Though an extreme and risky measure, the Emancipation Proclamation proved to be a highly effective tactic.
Lincoln’s Emancipation Proclamation should remind us of this fundamental truth: Conflict stimulates new thinking. To suppress the Southern rebellion, the North harnessed its industrial prowess to deploy a number of historic innovations, from ironclad ships to repeating rifles. President Lincoln was personally involved in promoting these technical innovations. But he remained mindful that war is, above all, a political, not a technical, undertaking.
Lincoln’s words describe the central argument of our book: “The dogmas of the quiet past are inadequate to the stormy present.” We are living through a revolution in military affairs as fundamental as the emergence of vast armies and mass-produced arms in Lincoln’s time. Military cyber units—the U.S. recently elevated its cyber command to a par with its regional combatant commanders—launch viruses to harm an enemy’s military capacity and disrupt its economic and communications networks. Unmanned robots patrol the skies hunting for individual terrorist leaders with air-to-ground missiles. Massive computing power, instant communications, and precise satellite reconnaissance bring any location on earth within one hour of a global strike missile. Much discussion of military affairs is now constrained by anachronistic understandings of the “law of war,” just as Napoleonic approaches infected early thinking about the Civil War. We want to expand the debate over war today by rethinking the prevailing dogmas. As our case is new, we must think anew and act anew. We must disenthrall ourselves.