Introduction
It would be simplistic to view the United States’ rise to world dominance as following the European model characterized by the drives of private finance capital. One must do more than merely read John Hobson and V. I. Lenin to perceive the dynamics of U.S. diplomacy over the past eight decades. The United States has achieved its global position through novel policies that were not anticipated by economists writing prior to World War I, or indeed prior to the 1970s.
One lesson of U.S. experience is that the national diplomacy, embodied in what now is called the Washington Consensus, is not simply an extension of business drives. It has been shaped by overriding concerns for world power (euphemized as national security) and economic advantage as perceived by American strategists quite apart from the profit motives of private investors. Although the roots of imperialism and its diplomatic rivalries always have been economic in character, these roots – and especially their tactics – are not the same for all nations in all periods.
To explain the principles and strategies at work, this book describes how the United States’ ascent to world creditor status after World War I resulted from the unprecedented terms on which its government extended armaments and reconstruction loans to its wartime allies. In administering these Inter-Ally debts, U.S. Government aims and objectives were different from those of the private sector investment capital on which Hobson and Lenin had focused in their analysis of Europe’s imperial conflicts. The United States had a unique perception of its place and role in the world, and hence of its self-interest.
The United States’ isolationist and often messianic ethic can be traced back to the 1840s, although Republicans expressed it in a different way from Democrats. (I describe this social philosophy in my 1975 survey of Economics and Technology in 19th-Century American Thought.) Spokesmen for American industrialists prior to the Civil War – the American School of political economy led by Henry Carey, E. Peshine Smith and their followers – believed that their nation’s rise to world power would be achieved by protecting their economy from that of Britain and other European nations. The objective was to create nothing less than a new civilization, one based on high wages as a precondition for achieving even higher productivity. The result would be a society of abundance rather than one whose cultural and political principles were based on the phenomenon of scarcity.
The idea that America needed an ever-receding western frontier was voiced by Democrats motivated largely by the Slave Power’s desire to expand cotton cultivation southward, while promoting westward territorial expansion to extend wheat-growing to provide food. The Democratic Party’s agenda was to expand foreign trade by reducing tariffs and relying largely on food and raw materials exports to buy manufactures from abroad (mainly from Britain). By contrast, Republican protectionists sought to build up a domestic market for manufactures behind tariff walls. The party’s industrial advocates focused on technological modernization in the eastern urban centers.
Whereas the Democratic Party was Anglophile, Republican strategists had a long history of Anglophobia, above all in their opposition to British free trade doctrines, which dominated the nation’s religious colleges. It was largely to promote protectionist doctrines that state land-grant colleges and business schools were created after the Civil War. In contrast to the economic theories of David Ricardo and Thomas Malthus, these colleges described America as a new civilization, whose dynamics were those of increasing returns in agriculture as well as industry, and the perception that rising living standards would bring about a new social morality. The protectionist Simon Patten was typical in juxtaposing American civilization to European society wracked by class conflict, pauper labor and a struggle for foreign markets based on reducing wage levels. Teaching at the University of Pennsylvania from the 1890s through the 1910s, Patten’s students included such future luminaries as Franklin Roosevelt’s brains-truster Rex Tugwell and the socialist Scott Nearing.
Europe’s imperial rivalries were viewed as stemming from its competing princely ambitions and an idle landed aristocracy, and from the fact that its home markets were too impoverished to purchase industrial manufactures of the type that were finding a ready market in the United States. To Republican nationalists the United States did not need colonies. Its tariff revenues would better be spent on internal improvements than on vainglorious foreign conquests.
This attitude helps explain America’s belated commitment to World War I. The nation declared war in 1917 only when it became apparent that to stay out would entail at least an interim economic collapse as American bankers and exporters found themselves stuck with uncollectible loans to Britain and its allies. Reflecting the ideological and moral elements in America’s entry, President Wilson viewed the nation’s political and cultural heritage as stemming largely from England. He was a Democrat, and a southerner to boot, whereas most of the leading Republican intellectuals, including Patten, Thorstein Veblen and Charles Beard, felt a closer kinship to Germany. That nation was after all in much the same position as the United States in seeking to shape its social evolution by state policy to build a high-income, technologically innovative economy, marked by government leadership in social spending and the financing of heavy industry.
This social philosophy helps explain America’s particular form of isolationism before and after World War I, and especially the government’s demand to be repaid for its wartime loans to its allies. U.S. officials insisted that the nation was merely an associate in the war, not a full ally. Its $12 billion in armaments and reconstruction loans to Europe were business transactions more than contributions to a common effort. America saw itself as economically and politically distinct.
The dilemma of U.S. economic diplomacy in the interwar years
The United States, and specifically its government, emerged from the war not only as the world’s major creditor, but a creditor to foreign governments with which it felt little brotherhood. It did not see its dominant economic position as obliging it to take responsibility for stabilizing world finance and trade. If Europe wished to channel its labor and capital to produce armaments instead of paying its debts, and if it persisted in its historical antagonisms – as evidenced by the onerous Treaty of Versailles imposed on Germany – the United States need feel no obligation to accommodate it.
The government therefore did not seek to create a system capable of extending new loans to foreign countries to finance their payments to the United States, as it was to do after World War II. Nor did it lower its tariffs so as to open U.S. markets to foreign producers as a means of enabling them to pay their war debts to the U.S. Treasury. The United States rather wished to see Europe’s empires dissolved, and did not mind seeing imperial governments stripped of their wealth, which tended to be used for military purposes with which few Americans sympathized. The resulting failure to take the lead in restructuring the world economy and to perceive the financial and commercial policy obligations inherent in the United States’ new economic status rendered its war credits uncollectible.
Economically, the U.S. attitude was to urge European governments to reduce their military spending and/or living standards, to permit their money to flow out and their prices to fall. In this way, it was hoped, world payments equilibrium might be re-established even in the face of rising American protectionism and full payment of the Inter-Ally debts that were the legacy of the Great War.
This was not a clearly thought-out position or a realistic one, but many leading Europeans shared these attitudes. In trying to cope with the international financial breakdown of the 1920s, their governments were advised by anti-German writers such as Bertil Ohlin and Jacques Rueff, who insisted that Germany could repay its assessed reparations if only it would submit to sufficient austerity.
The parallel with monetarist Chicago School attitudes towards today’s debtor economies is appallingly obvious. This view of international payments adjustment was as self-defeating in the 1920s as the IMF’s austerity programs are today. By insisting on repayment of its allies’ war debts in full, and by simultaneously enacting increasingly protectionist tariffs at home, the U.S. Government made repayment of these debts impossible.
Private investors traditionally had been obliged to take losses when debtors defaulted, but it became apparent that the U.S. Government was not about to relinquish its creditor hold on the Allies. This intransigence obliged them to keep tightening the screws on Germany.
To review the 1920s from today’s vantage point is to examine how nations were not acting in their enlightened self-interest but in an unquestioning reaction against obsolete economic attitudes. The orthodox ideology carried over from the prewar era was anachronistic in failing to recognize that the world economy emerged from World War I shackled with debts far beyond its ability to pay – or at least, beyond the ability to pay except on conditions in which debtor countries merely would borrow the funds from private lenders in the creditor nation to pay the creditor-nation government. U.S. bankers and investors lent money to German municipalities, which turned the dollars over to the central bank to pay reparations to the Allies, which in turn used the dollars to pay their war debts to the U.S. Treasury. The world financial system thus was kept afloat simply by intergovernmental debts being wound down by a proportional build-up in private sector and municipal debts.
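The mechanics of this circular flow can be sketched in a few lines of code. The figures below are illustrative placeholders rather than the historical magnitudes; only the direction of the three flows follows the description above.

```python
# Illustrative sketch of the 1920s triangular debt flow described above.
# All quantities are arbitrary stand-ins; only the flow directions matter.

def run_round(state, loan):
    """One round of the circuit: U.S. investors lend to German
    municipalities, Germany pays reparations to the Allies, and the
    Allies pay war debts to the U.S. Treasury with the same dollars."""
    state["german_private_debt_to_us"] += loan  # private/municipal debt builds up
    state["german_reparations_due"] -= loan     # reparations paid down
    state["allied_war_debt_to_us"] -= loan      # Inter-Ally debts paid down

state = {
    "german_private_debt_to_us": 0.0,  # owed to U.S. bankers and bondholders
    "german_reparations_due": 10.0,    # owed by Germany to the Allies
    "allied_war_debt_to_us": 10.0,     # owed by the Allies to the U.S. Treasury
}

for year in range(5):
    run_round(state, loan=1.0)

print(state)
# Intergovernmental debts wind down only as fast as private debts build up;
# the circuit stays afloat until private lending stops, as it did after 1929.
```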
The ensuing débâcle introduced a behavioral difference from the processes analyzed by Hobson, Lenin and other theorists of prewar world diplomacy. In the nineteenth century Britain took on the position of world banker in no small measure to provide its colonies and dependencies with the credit necessary to sustain the international specialization of production desired by British industry. After World War I, the U.S. Government pursued no such policy. An enlightened imperialism would have sought to turn other countries into economic satellites of the United States. But the United States did not want European exports, nor were its investors particularly interested in Europe after its own stock market outperformed those of Europe.
The United States could have named the terms on which it would have supplied the world with dollars to enable foreign countries to repay their war debts. It could have specified what imports it wanted or was willing to take. But it did not ask, or even permit, debtor countries to pay their debts in the form of exports to the United States. Its investors could have named the foreign assets they wanted to buy, but private investors were overshadowed by intergovernmental financial agreements, or the lack of them, enforced by the U.S. Government. On both the trade and financial fronts the U.S. Government pursued policies that impelled European countries to withdraw from the world economy and turn within.
Even the United States’ attempt to ameliorate matters backfired. To make it easier for the Bank of England to pay its war debts, the Federal Reserve held down interest rates so as not to draw money away from Britain. But low interest rates spurred a stock market boom, discouraging U.S. capital outflows to European financial markets.
America’s failure to recycle the proceeds of its intergovernmental debt receipts into the purchase of European exports and assets was a failure to perceive the implicit strategy dictated by its unique position as world creditor. European diplomats spelled out the required strategy clearly enough in the 1920s, but the U.S. Government’s economic isolationism precluded it from collecting its intergovernmental debts. Its status as world creditor proved ultimately worthless as the world economy broke into nationalist units, each striving to become independent of foreign trade and payments, and from the U.S. economy in particular. In this respect America forced its own inward-looking attitude on other nations.
The upshot was the breakdown of world payments, competitive devaluations, tariff wars and international autarky that characterized the 1930s. This state of affairs was less an explicit attempt at imperialism than an inept result of narrowly legalistic and bureaucratic intransigence regarding the war debts, coupled with a parochial domestic tariff policy. It was just the opposite of a policy designed to establish the United States as the world’s economic center based on a reciprocity of payments between creditor and periphery, a complementarity of imports and exports, production and payments. A viable U.S.-centered world economic system would have required some means of enabling Europe to repay its war debts. What occurred instead was isolationism at home, prompting drives for national self-sufficiency abroad.
One can find cases throughout history in which seemingly logical paths of least resistance have not been followed. In most such cases the explanation is to be found in leadership looking backward rather than forward, or to narrow rather than broad economic and social interests. Although it certainly was logical in the 1920s for private U.S. investors to extend their power throughout the world, the financial policies pursued by the U.S. Government (and to a lesser extent by other governments) made this impossible. The Government narrowly construed America’s national self-interest in terms of the Treasury’s balance sheet, putting this above the cosmopolitan tendencies of private financial capital. This forced country after country to withdraw from the internationalism of the gold exchange standard and to abandon policies of currency stability and free trade.
The burden of Britain’s war debts impelled it to convene the Ottawa Conference in 1932 to establish a system of Commonwealth tariff preferences. Germany turned its eyes inward to prepare for a war to seize by force the materials which it could not buy under existing world conditions. Japan, France and other countries were similarly stymied. Depression spread as the world financial crisis was internalized in one country after another. As world trade and payments broke down utterly, the fascist and national socialist governments of Italy and Germany became increasingly aggressive. Governments throughout the world responded to falling incomes and employment by vastly extending their role in economic affairs, prompting Keynes to proclaim the end of laissez-faire.
The Great Depression extinguished private capital throughout the world, just as intergovernmental capital had been extinguished by the shortsightedness of governments seeking to derive maximum economic benefit from their financial claims on other governments. This poses the question of why such debts were allowed to become so problematic in the first place.
Britain’s agreement to begin paying its war debts to the United States no doubt was inspired largely by its world creditor ideology of maintaining the “sanctity of debt.” Yet this policy no longer was appropriate in a situation where Britain, along with continental Europe, had become an international debtor rather than a creditor. There was little idea of adjusting the traditional ideology concerning the sanctity of debts to their realistic means of payment.
The Great Depression and World War II taught governments the folly of this attitude, although they were to lose it again with regard to Third World and Eastern Bloc debts within a few decades of the close of World War II.
American plans for a postwar “free trade imperialism”
Since 1945, U.S. foreign policy has sought to reverse foreign state control over economic policies generally, and attempts at economic self-reliance and independence from the United States in particular.
As U.S. diplomats and economists theorized during 1941–45 over the nation’s imminent role as dominant power in the postwar world, they recognized that it would emerge from the war by far the strongest national economy, but would have to be a major exporter in order to maintain full employment during the transition back to peacetime life. This transition was expected to require about five years, 1946–50. Foreign markets would have to replace the War Department as a source of demand for the products of American industry and agriculture. This in turn required that foreign countries be able to earn or borrow dollars to pay the United States for these exports.
This time around it was clear that the United States could not impose war debts on its Allies similar to those that had followed World War I. For one thing, the Allies had been stripped of their marketable international assets. If they were obliged to pay war debts to the United States, they would have no remaining funds to buy American exports. The U.S. Government therefore would have to provide the world with dollars, by government loans, private investment or a combination of both. In exchange, it would be entitled to name the terms on which it would provide these dollars. The question was, what terms would U.S. economic diplomats stipulate?
In January 1944 the annual meeting of the American Economic Association was dominated by proposals for postwar U.S. economic policy. “For the first time in many decades,” wrote J. B. Condliffe of the Carnegie Endowment for International Peace, “– indeed for the first time since the very earliest years of the infant republic – attention is now being paid by soldiers and political scientists, but little as yet by economists, to the power position of the United States in the modern world. This attention is part of the reexamination of national policy made necessary by the fact that this war has shown the folly of complacent and self-centered isolationist theories and attitudes.”1 Such an examination should not be thought of as Machiavellian or evil, Condliffe urged, but as a necessity if U.S. ideals were to carry real force behind them.
A central theme of the meeting was the relative roles that government and business would play in shaping the postwar world. In a symposium of former presidents of the American Economic Association on “What Should be the Relative Spheres of Private Business and Government in our Postwar American Economy?” most respondents held that the distinction between private business and government policy was becoming fuzzy, and that some degree of planning was needed to keep the economy working at relatively full employment.
This did not necessarily imply a nationalist economic policy, although that seemed to be an implicit long-term tendency. Speaking on “The Present Position of Economics,” Arthur Salz observed that “government and economics have drawn close together and live in a real and, to a large extent, in a personal union. While formerly the economist made his reputation by constructive[ly] criticizing governments, he is now hand and glove with them and has become the friend and patron of the government machinery whose severest critic he once was.”2
The problem of government/private sector relations was put in most rigorous form by Jacob Viner, the laissez-faire theoretician from the University of Chicago. His speech on “International Relations between State-Controlled National Economies” challenged the idea that private enterprise “is normally unpatriotic, while government is automatically patriotic.” National economic planning was inherently belligerent, he warned, and the profit motive would be the best guarantee against the waste and destruction of international conflict. Corporations could not go to war, but governments found in war the ultimate expression of their drives for power and prestige. Viner concluded hopefully: “The pattern of international economic relations will be much less influenced by the operation of national power and national prestige considerations in a world of free-enterprise economies than in a world of state-operated national economies.”3
This was just the opposite of socialist theory, which assumed that national governments were inherently peaceful, except when goaded by powerful business cartels. Hobson had insisted that “The apparent oppositions of interests between nations . . . are not oppositions between the people conceived as a whole; they are oppositions of class interests within the nation. The interests of America and Great Britain and France and Germany are common,”4 although those of their individual manufacturers and exporters were not.
The war debts and reparations after World War I had brought into question this generality. According to Viner’s laissez-faire view, the tendency for conflict among nations – and hence the chances of war – would be greater rather than smaller in a world of state-controlled economies. Looking back on the experience of the 1930s in particular, he found that “The substitution of state control for private enterprise in the field of international economic relations would, with a certain degree of inevitability, have a series of undesirable consequences, to wit: the injection of a political element into all major international economic transactions; the conversion of international trade from a predominantly competitive to a predominantly monopolistic basis; a marked increase in the potentiality of business disputes to generate international friction,” and so forth. From this perspective national rivalries as conceived and carried out by governments were inherently more belligerent than commercial rivalries among private exporters, bankers and investors.
Viner did not, however, cite the U.S. Government’s own behavior in the 1920s. Inverting the Hobson–Lenin view of international commercial rivalries, his view had little room for such phenomena as IT&T’s involvement in Chile in the early 1970s to oppose Allende’s socialism, Lockheed’s bribery scandals in Japan or other international bribery of foreign and domestic officials, or even presidential campaign promises to protectionist interests such as those made by Richard Nixon to America’s dairy and textile industries in 1968 and again in 1972. In Viner’s eyes government planning itself was the problem, an autonomous force grounded in the inherently nationalistic ambitions of political leaders. No room was acknowledged for planning even of the kind that had led American industry to achieve world leadership from the end of the U.S. Civil War in 1865 to the end of World War I under a program of industrial protectionism and active internal improvements. “Insofar as, in the past, war has resulted from economic causes,” Viner insisted,
it has been to a very large extent the intervention of the national state into the economic process which has made the pattern of international economic relationships a pattern conducive to war . . . socialism on a national basis would not in any way be free from this ominous defect . . . economic factors can be prevented from breeding war if, and only if, private enterprise is freed from extensive state control other than state control intended to keep enterprise private and competitive . . . War, I believe, is essentially a political, not an economic phenomenon. It arises out of the organization of the world on the basis of sovereign nation-states . . . This will be true for a world of socialist states as for a world of capitalist states, and the more embracing the states are in their range of activities the more likely will be the serious friction between states. If states reduce to a minimum their involvement in economic matters, the role of economic factors in contributing to war will be likewise reduced.5
It seemed to many observers that U.S. officials were structuring the IMF and World Bank to enable countries to pursue laissez-faire policies by insuring adequate resources to finance the international payments imbalances that were anticipated to result from countries opening their markets to U.S. exporters after the return to peace. Special reconstruction lending would be made to war-torn Europe, followed by development loans to the colonies being freed, and balance-of-payments loans to countries in special straits so that they would not need to resort to currency depreciation and tariff barriers. It was believed that free trade and investment would settle into a state of balanced international trade and payments under the postwar conditions being created under U.S. leadership. Bilateral foreign aid would serve as a direct inducement to governments to acquiesce in the United States’ postwar plans, while ensuring the balance-of-payments equilibrium that was a precondition for free trade and an Open Door to international investment.
When President Truman insisted, on March 23, 1946, that “World trade must be restored – and it must be restored to private enterprise,” this was a way of saying that its regulation must be taken away from foreign governments that might be tempted to try to recover their prewar power at the expense of U.S. exporters and investors. America’s laissez-faire stance promoted the United States as the center of a world system vastly more extensive and centralized, yet also more flexible, less costly and less bureaucratic than Europe’s imperial systems had been.
Given the fact that only the United States possessed the foreign exchange necessary to undertake substantial overseas investment, and only the U.S. economy enjoyed the export potential to displace Britain and other European rivals, the ideal of laissez-faire was synonymous with the worldwide extension of U.S. national power. It was recognized that American commercial strength would achieve the government’s underlying objective of turning foreign economies into satellites of the United States. The objectives of U.S. exporters and international investors thus were synonymous with those of the government in seeking to maximize U.S. world power, and this was best achieved by discouraging government planning and economic statism abroad.
The laissez-faire ideology that American industrialists had denounced in the nineteenth century, and that the U.S. Government would repudiate in practice in the 1970s and 1980s, served American ends after World War II. Europe’s industrial nations would open their doors and permit U.S. investors to buy in to the extractive industries of their former colonies, especially into Near Eastern oil. These less developed regions would provide the United States with raw materials rather than working them up into their own manufactures to compete with U.S. industry. They would purchase a rising stream of American foodstuffs and manufactures, especially those produced by the industries whose productive capacity had expanded greatly during the war. The resulting U.S. trade surplus would provide the foreign exchange to enable American investors to buy up the most productive resources of the world’s industry, mining and agriculture.
To the extent that America’s export surplus exceeded its private sector investment outflows, the balance would have to be financed by growth in dollar lending via the World Bank, the Export-Import Bank and related intergovernmental aid-lending institutions. Under the aegis of the U.S. Government, American investors and creditors would accumulate a growing volume of claims on foreign economies, ultimately securing control over the non-Communist world’s political as well as economic processes.
This idealized model never materialized for more than a brief period. The United States proved unwilling to lower its tariffs on commodities that foreigners could produce less expensively than American farmers and manufacturers; it lowered them only on commodities that did not threaten vested U.S. interests. The International Trade Organization, which in principle was supposed to subject the U.S. economy to the same free trade principles that it demanded from foreign governments, was scuttled. Private U.S. investment abroad did not materialize to the degree needed to finance foreign purchases of U.S. exports, nor were IMF and World Bank loans anywhere near sufficient to buoy up the payments-deficit economies.
The result was that much of Europe’s remaining gold was stripped by the United States, as was that of Latin America in the early postwar years. By 1949 foreign countries were all but faced with the need to revert to the protectionism of the 1930s to prevent an unconscionable loss of their economic independence. The U.S. Treasury accumulated three-fourths of the world’s gold, denuding foreign markets of their ability to continue buying U.S. exports at their early postwar rates. Britain in particular floundered in a virtually bankrupt position with its overvalued pound sterling, having waived its right to devalue or protect its Sterling Area in exchange for receiving the 1946 British Loan from the U.S. Treasury. Other countries were falling into similar straits. America’s payments surplus position thus was threatening its prospective export potential.
In these circumstances U.S. economic planners learned what European, Japanese and OPEC diplomats subsequently have learned. Beyond a point, a creditor and payments surplus status can be decidedly uncomfortable.
It was in America’s enlightened self-interest to return some of Europe’s gold. What private investors failed to recycle abroad, the government itself would have to do via an extended foreign aid program, perhaps under the emerging Cold War’s military umbrella.
There were two potential obstacles to this strategy. First was the drive by foreign economies to regain a modicum of balance-of-payments equilibrium and to promote their own self-sufficiency through protectionism and other nationalist economic policies. This tendency was muted, however, as Britain led Europe’s march into the U.S. orbit. This seemed to preempt any drives that continental Europe might have harbored toward achieving economic autonomy from America.
The other major obstacle to U.S. Government plans for the postwar world did not derive from foreign countries, but from Congress. Despite the overwhelming domestic benefits to be gained from foreign aid, Congress was unwilling to extend funds to impoverished countries as outright gifts, or even as loans beyond a point. The problem was not that it failed to perceive the benefits that would accrue from extending further aid, after the pattern of the British Loan and the subsequent Marshall Plan. It was that Congress gave priority to domestic spending programs. What was at issue was not an abstract cost-benefit analysis for humanity at large, or even one of overall U.S. long-term interests, but one of parochial interests putting their local objectives ahead of foreign policy.
America embarks on a Cold War that pushes its balance of payments into deficit
As matters turned out, the line of least resistance to circumvent this domestic obstacle was to provide Congress with an anti-Communist national security hook on which to drape postwar foreign spending programs. Dollars were provided not simply to bribe foreign governments into enacting Open Door policies, but to help them fight Communism, which might threaten the United States if not nipped in the bud. This red specter was what had turned the tide on the British Loan, and it carried Marshall Aid through Congress, along with most subsequent aid lending down through the present day. Congress would not appropriate funds to finance a quasi-idealistic worldwide transition to laissez-faire, but it would provide money to contain Communist expansion, conveniently defined as being virtually synonymous with the spreading poverty that nurtured seedbeds of anti-Americanism.
The U.S. Government hoped to keep its fellow capitalist countries solvent. U.S. diplomats remembered the 1930s well enough to recognize that economies threatened with balance-of-payments insolvency would move to insulate themselves, foreclosing U.S. trade and investment opportunities accordingly. As the Council on Foreign Relations observed in 1947:
In public and Congressional debate, the Administration’s case centered on two themes: the role of the [British] loan in world recovery, and the direct benefits to the country from this Agreement. American self-interest was established as the motivation . . . The Administration made a persuasive argument by pointing out what would happen without the loan. Britain would be forced to restrict imports, make bilateral trade bargains, and discriminate against American goods. . . . With the loan, things could be made to move in the other direction.6
Former U.S. Ambassador to Britain Joseph Kennedy was among the first to urge U.S. credits for that nation, “largely to combat communism.” He even urged an outright gift, on the ground that Britain was for all practical purposes broke.
Tension with Russia helped the loan, playing a considerable part in offsetting political objections and doubts of the loan’s economic soundness. Anti-Soviet sentiment had risen throughout the country, since Winston Churchill, speaking at Fulton [Missouri] on March 5 [1946], had proposed a “fraternal association” of English-speaking nations to check Russia . . . Now . . . his idea seemed to be a decisive factor in determining many Congressmen to vote for the loan . . . Senator Barkley said, “I do not desire, for myself or for my country, to take a position that will drive our ally into arms into which we do not want her to be folded.”
Speaker of the House Sam Rayburn endorsed this position. It was to become the political lever to extract U.S. foreign aid for the next two decades. International policies henceforth were dressed in anti-Communist garb in order to facilitate their acceptance by non-liberal congressmen whose sympathies hardly lay with the laissez-faire that had afforded the earlier window dressing for the government’s postwar economic planning.
The problem from the government’s point of view was that the U.S. balance of payments had reached a surplus level unattained by any other nation in history. It had an embarrassment of riches, and now required a payments deficit to promote foreign export markets and world currency stability. Foreigners could not buy American exports without a means of payment, and private creditors were not eager to extend further loans to countries that were not creditworthy.
The Korean War seemed to resolve this set of problems by shifting the U.S. balance of payments into deficit. Confrontation with Communism became a catalyst for U.S. military and aid programs abroad. Congress was much more willing to provide countries with dollars via anti-Communist or national defense programs than by outright gifts or loans, and after the Korean War U.S. military spending in the NATO and SEATO countries seemed to be a relatively bloodless form of international monetary support. In country after country, military spending and aid programs provided a reflux of some of the foreign gold that the United States had absorbed during the late 1940s.
Within a decade, however, what at first seemed to be a stabilizing economic dynamic became destabilizing. The United States, the only nation capable of financing a worldwide military program, began to sink into the mire that had bankrupted every European power that experimented with colonialism. America’s Cold War strategists failed to perceive that whereas private investment tends to be flexible in cutting its losses, being committed to relatively autonomous projects that must earn a satisfactory rate of return year after year, this is not the case with government spending programs, especially national security programs that create vested interests. Such programs are by no means as readily reversible as those of private industry, for military spending abroad, once initiated, tends to take on a momentum of its own. The government cannot simply say that national security programs have become economically disadvantageous and therefore must be curtailed. That would imply they were pursued in the first place only because they were economically remunerative – that human lives were sacrificed for the narrow motives of economic gain, even if national gain. What began as pretense became a new reality.
The new characteristics of American financial imperialism
If the United States had continued to run payments surpluses, if it had absorbed more foreign gold and dollar balances, the world’s monetary reserves would have been reduced. This would have constrained world trade, and especially imports from the United States. A U.S. payments surplus thus was incompatible with continued growth in world liquidity and trade. The United States was obliged to buy more foreign goods, services and capital assets than it supplied to foreigners, unless foreign countries could augment their monetary reserves with non-U.S. currencies.
What was not grasped was the corollary implication. Under the key-currency dollar standard the only way that the world financial system could become more liquid was for the United States to pump more dollars into it by running a payments deficit. The foreign dollar balances being built up as a result of foreign military and foreign aid spending in the 1950s and 1960s were, simultaneously, debts of the United States.
At first, foreign countries welcomed their surplus of dollar receipts. At the time there was no doubt that the United States was fully capable of redeeming these dollars with its enormous gold stock. But in autumn 1960 a run on the dollar temporarily pushed up the price of gold to $40 an ounce. This was a reminder that the U.S. balance of payments had been in continuing and growing deficit for a decade, since the Korean War. It became clear that just as the U.S. payments surplus had been destabilizing in the late 1940s, so in the early 1960s a U.S. payments deficit beyond a point likewise would be incompatible with world financial stability.
The run on gold had followed John Kennedy’s victory in the 1960 presidential election, a campaign waged largely around a rather demagogic debate over military preparedness. It seemed unlikely that the incoming Democratic administration would do much to change the Cold War policies responsible for the U.S. payments deficit.
Growing attention began to be paid to the difference between domestic and international money. Apart from metallic coinage, domestic currency is a form of debt, but one that nobody really expects to be paid. Attempts by governments to repay their debts beyond a point would extinguish their monetary base. Back in the 1890s high U.S. tariffs produced a federal budget surplus that obliged the Treasury to redeem its bonds, causing a painful monetary deflation. But in the sphere of international money and credit, most investors expect debts to be paid on schedule.
This expectation would seem to doom any attempt to create a key-currency standard. The problem is that international money (viewed as an asset) is simultaneously a debt of the key-currency nation. Growth in key-currency reserves accumulated by payments-surplus economies implies that the nation issuing the key currency acts in effect, and even in reality, as an international borrower. To provide other countries with key-currency assets involves running into debt, and to repay such debt is to extinguish an international monetary asset.
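The paradox can be restated as a two-line balance-sheet identity. The following is a minimal sketch with stylized quantities; the point is only that reserve assets held abroad and the key-currency country’s debt are a single entry viewed from two sides.

```python
# Stylized sketch of the key-currency paradox described above: the world's
# reserve asset and the issuing country's debt are the same entry.

key_currency_debt = 0.0  # liabilities of the key-currency country
foreign_reserves = 0.0   # reserve assets accumulated by surplus countries

def issuer_runs_deficit(amount):
    """The key-currency country pays out more than it takes in:
    its debt and world monetary reserves grow together."""
    global key_currency_debt, foreign_reserves
    key_currency_debt += amount
    foreign_reserves += amount

def issuer_repays(amount):
    """Repaying the debt extinguishes the very reserves it created."""
    global key_currency_debt, foreign_reserves
    key_currency_debt -= amount
    foreign_reserves -= amount

issuer_runs_deficit(100.0)
issuer_repays(40.0)
assert key_currency_debt == foreign_reserves  # the identity always holds
print(foreign_reserves)  # 60.0 -- world liquidity shrinks with each repayment
```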
This debt character of the world’s growing dollar reserves hardly had been noticed by foreign governments that needed them in the 1950s to finance their own foreign trade and payments. But by the early 1960s it became clear that the United States was approaching the point at which its debts to foreign central banks soon would exceed the value of the Treasury’s gold stock. This point was reached and passed in 1964, by which time the U.S. payments deficit stemmed entirely from foreign military spending, mainly for the Vietnam War.
It would have required a change in national consciousness to reverse the military programs that had come to involve the United States in massive commitments abroad. America seemed to be succumbing to a European-style imperial syndrome, and was in danger of losing its dominant world position in much the way that Britain and other imperial powers had done, weighed down by the cost of maintaining its worldwide empire. And just as World Wars I and II had bankrupted Europe, so the Vietnam War threatened to bankrupt the United States.
If the United States had followed the creditor-oriented rules to which European governments had adhered after World Wars I and II, it would have sacrificed its world position. Its gold would have flowed out and Americans would have been obliged to sell off their international investments to pay for military activities abroad. This was what U.S. officials had demanded of their allies in World Wars I and II, but the United States was unwilling to abide by such rules itself. Unlike earlier nations in a similar position, it continued to spend abroad, and at home as well, without regard for the balance-of-payments consequences.
One result was a run on gold, whose momentum rose in keeping with sagging military fortunes in Vietnam. Foreign central banks, especially those of France and Germany, cashed in their surplus dollars for U.S. gold reserves almost on a monthly basis.
Official reserves were sold to meet private demand so as to hold down the price of gold. For a number of years the United States had joined other governments to finance the London Gold Pool. But by March 1968, after a six-month run, America’s gold stock fell to the $10 billion floor beyond which the Treasury had let it be known that it would suspend further gold sales. The London Gold Pool was disbanded and informal agreement (i.e., diplomatic arm-twisting) was reached among the world’s central banks to stop converting their dollar inflows into gold.
This broke the link between the dollar and the market price of gold. Two prices for gold emerged, a rising open-market price and the lower “official” price of $35 an ounce at which the world’s central banks continued to value their monetary reserves.
Three years later, in August 1971, President Nixon made the gold embargo official. The key-currency standard based on the dollar’s convertibility into gold was dead. The U.S. Treasury bill standard – that is, the dollar-debt standard based on dollar inconvertibility – was inaugurated. Instead of being able to use their dollars to buy American gold, foreign governments found themselves able to purchase only U.S. Treasury obligations (and, to a much lesser extent, U.S. corporate stocks and bonds).
As foreign central banks received dollars from their exporters and commercial banks that preferred domestic currency, they had little choice but to lend these dollars to the U.S. Government. Running a dollar surplus in their balance of payments became synonymous with lending this surplus to the U.S. Treasury. The world’s richest nation was enabled to borrow automatically from foreign central banks simply by running a payments deficit. The larger the U.S. payments deficit grew, the more dollars ended up in foreign central banks, which then lent them to the U.S. Government by investing them in Treasury obligations of varying degrees of liquidity and marketability.
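Traced step by step, the circuit looks as follows. The $50 figure is a stand-in chosen to echo the cumulative deficit discussed below, not a measured quantity.

```python
# A toy trace of the dollar-recycling circuit described above.

us_payments_deficit = 50.0               # net dollars spent abroad by the U.S.
exporter_dollars = us_payments_deficit   # received by foreign exporters
central_bank_dollars = 0.0
central_bank_treasuries = 0.0

# Exporters prefer their own currency, so they sell the dollars to their
# central bank in exchange for domestic money.
central_bank_dollars += exporter_dollars
exporter_dollars = 0.0

# With gold convertibility closed off, the only liquid and "safe" outlet
# for the central bank's dollars is U.S. Treasury debt.
central_bank_treasuries += central_bank_dollars
central_bank_dollars = 0.0

# The deficit has financed itself: the same dollars return as a forced loan.
assert central_bank_treasuries == us_payments_deficit
print(central_bank_treasuries)  # 50.0
```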
The U.S. federal budget moved deeper into deficit in response to the guns-and-butter economy, inflating a domestic spending stream that spilled over into more imports, more foreign investment and yet more foreign military spending to maintain the hegemonic system. But instead of U.S. citizens and companies being taxed or U.S. capital markets being obliged to finance the rising federal deficit, foreign economies were obliged to buy the new Treasury bonds being issued. America’s Cold War spending thus became a tax on foreigners. It was their central banks that financed the costs of the war in Southeast Asia.
There was no real check to how far this circular flow could go. For understandable reasons foreign central banks did not wish to go into the U.S. stock market and buy Chrysler, Penn Central or other corporate securities. This would have posed the kind of risk that central bankers are not supposed to take. Nor was real estate any more attractive. What central banks need are liquidity and security for their official reserves. This is why they traditionally had held gold, as a means of settling their own deficits. To the extent that they began to accumulate surplus dollars, there was little alternative but to hold them in the form of U.S. Treasury bills and notes without limit.
This shift from asset money (gold) to debt money (U.S. Government bonds) inverted the traditional relationships between the balance of payments and domestic monetary adjustment. Conventional wisdom prior to 1968 held that countries that ran deficits were obliged to part with their gold until they stemmed their payments outflows by increasing interest rates so as to borrow more abroad, cutting back government spending and restricting domestic income growth. This is what Britain did in its stop–go policies of the 1960s. When its economy boomed, people bought more imports and spent more abroad. To save the value of sterling from declining, the Bank of England raised interest rates. This deterred new construction and other investment, slowing the economy down. At the government level, Britain was obliged to give up its dreams of empire, as it was unable to generate a large enough private sector trade and investment surplus to pay the costs of being a major world military and political power.
But now the world’s major deficit nation, the United States, flouted this adjustment mechanism. It announced that it would not let its domestic policies be “dictated by foreigners.” This go-it-alone policy had earlier led it to refrain from joining the League of Nations after World War I, and from playing the international economic game according to the rules that bound other nations. It had joined the World Bank and IMF only on the condition that it was granted unique veto power, which it also enjoyed as a member of the United Nations Security Council. This meant that no economic rules could be imposed that U.S. diplomats judged did not serve American interests.
These rules meant that, unlike Britain, the United States was able to pursue its Cold War spending in Asia and elsewhere in the world without constraint, as well as social welfare spending at home. This was just the reverse of Britain’s stop–go policies or the austerity programs that the IMF imposed on Third World debtors when their balance of payments fell into deficit.
Thanks to the $50 billion cumulative U.S. payments deficit between April 1968 and March 1973, foreign central banks found themselves obliged to buy all of the $50 billion increase in U.S. federal debt during this period. In effect, the United States was financing its domestic budget deficit by running an international payments deficit. As the St. Louis Federal Reserve Bank described the situation, foreign central banks were obliged “to acquire increasing amounts of dollars as they attempted to maintain relatively fixed parities in exchange rates.”7 Failure to absorb these dollars would have led the dollar’s value to fall vis-à-vis foreign currencies, as the supply of dollars greatly exceeded the demand. A depreciating dollar would have provided U.S. exporters with a competitive devaluation, and also would have reduced the domestic currency value of foreign dollar holdings.
Foreign governments had little desire to place their own exporters at a competitive disadvantage, so they kept on buying dollars to support the exchange rate – and hence, the export prices – of Dollar Area economies. “The greatly increased demand for short-term U.S. Government securities by these foreign institutions resulted in lower market yields on these securities relative to other marketable securities than had previously been the case,” explained the St. Louis Federal Reserve Bank. “This development occurred in spite of the large U.S. Government deficits that prevailed in the period.” Thanks to the extraordinary demand by central banks for government dollar-debt instruments, yields on U.S. Government bonds fell relative to those of corporate securities, which central banks did not buy.
This inverted the classical balance-of-payments adjustment mechanism, which for centuries had obliged nations to raise interest rates to attract foreign capital to finance their deficits. In America’s case it was the balance-of-payments deficit that supplied the “foreign” capital, as foreign central banks recycled the dollar outflows – that is, their own dollar inflows – into Treasury securities. U.S. interest rates fell precisely because of the balance-of-payments deficit, not in spite of it. The larger the balance-of-payments deficit, the more dollars foreign governments were obliged to invest in U.S. Treasury securities, financing simultaneously the balance-of-payments deficit and the domestic federal budget deficit.
The stock and bond markets boomed as American banks and other investors moved out of government bonds into higher-yielding corporate bonds and mortgage loans, leaving the lower-yielding Treasury bonds for foreign governments to buy. U.S. companies also began to buy up lucrative foreign businesses. The dollars they spent were turned over to foreign governments, which had little option but to reinvest them in U.S. Treasury obligations at abnormally low interest rates. Foreign demand for these Treasury securities drove up their price, reducing their yields accordingly. This held down U.S. interest rates, spurring yet further capital outflows to Europe.
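The price–yield mechanics at work can be illustrated with a simple discount instrument. The prices below are invented; the inverse relation between price and yield is general.

```python
# Stylized illustration of the yield effect described above: extra
# central-bank demand bids up Treasury prices, compressing their yields.

def discount_yield(price, face=100.0):
    """One-period yield of a simple discount instrument."""
    return (face - price) / price

print(f"{discount_yield(95.0):.2%}")  # 5.26% before the extra demand
print(f"{discount_yield(97.5):.2%}")  # 2.56% after demand bids the price up
# Lower Treasury yields push private investors into higher-yielding corporate
# bonds and mortgages, and capital abroad -- feeding the very payments
# deficit that generated the central-bank demand in the first place.
```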
The U.S. Government had little motivation to stop this dollar-debt spiral. It recognized that foreign central banks hardly could refuse to accept further dollars, lest the world monetary system break down. Not even Germany or the Allies had thought of making this threat in the 1920s or after World War II, and they were not prepared to do it in the 1960s and 1970s. It was generally felt that such a breakdown would hurt foreign countries more than the United States, thanks to the larger role played by foreign trade in their own economic life. U.S. strategists recognized this, and insisted that the U.S. payments deficit was a foreign problem, not one for American citizens to worry about.
In the absence of the payments deficit, Americans themselves would have had to finance the growth in their federal debt. This would have had a deflationary effect, which in turn would have obliged the economy to live within its means. But under circumstances where growth in the national debt was financed by foreign central banks, a balance-of-payments deficit was in the U.S. national interest, for it became a means for the economy to tap the resources of other countries.
All the government had to do was to spend the money to push its domestic budget into deficit. This spending flowed abroad, both directly as military spending and indirectly via the overheated domestic economy’s demand for foreign products, as well as for foreign assets. The excess dollars were recycled to their point of origin, the United States, spurring a worldwide inflation along the way. A large number of Americans felt they were getting rich from this inflation as incomes and property values rose.
Figure 1 shows that foreign governments financed the entire increase in publicly held U.S. federal debt between the end of World War II and March 1973, and were still doing this throughout the 1990s. (How the system ended up after that time is outlined in my sequel to this book, Global Fracture.) The process reached its first crisis during 1968–72, peaking in the inflationary blowout that culminated in the quadrupling of grain and oil prices in 1972–73. Of the $47 billion increase in the net public debt – the publicly held federal debt, that is, the gross public debt less what the government owes to its own Social Security and other trust funds and to the Federal Reserve System – during this five-year period, foreign governments financed $42 billion.
This unique ability of the U.S. Government to borrow from foreign central banks rather than from its own citizens is one of the economic miracles of modern times. Without it the war-induced American prosperity of the 1960s and early 1970s would have ended quickly, as was threatened in 1973 when foreign central banks decided to cut their currencies loose from the dollar, letting them float upward rather than accepting a further flood of U.S. Treasury IOUs.
How America’s payments deficit became a source of strength, not weakness
This Treasury bill standard was not at first a deliberate policy. Government officials tried to direct the private sector to run a balance-of-payments surplus capable of offsetting the deficit on overseas military spending. This was the stated objective of President Johnson’s “voluntary” controls announced in February 1965. Banks and direct investors were limited as to how much they could lend or spend abroad. U.S. firms were obliged to finance their takeovers and other overseas investments by issuing foreign bonds so as to absorb foreign-held dollars and thereby keep them out of the hands of French, German and other central banks.
Figure 1 Ownership of U.S. Public Debt, 1945–76
Source: Hudson, Global Fracture (New York: Harper & Row, 1977).
But it soon became apparent that the new situation possessed some unanticipated virtues. As long as the United States did not have to pay in gold to finance its payments deficits after 1971 (in practice, after 1968), foreign governments could use their dollars only to help the Nixon Administration roll over the mounting federal debt year after year.
This inspired a reckless attitude toward the balance of payments that U.S. officials smilingly called one of benign neglect. The economy enjoyed a free ride as the payments deficit obliged foreign governments to finance the domestic federal debt. When foreign governments finally stopped supporting the dollar in 1971, its exchange rate fell by 10 per cent. This reduced the foreign exchange value of foreign-held dollar debt accordingly, above and beyond the degree to which inflation was eroding its value. But American companies that had invested abroad saw the dollar value of their holdings rise by the degree to which the dollar depreciated.
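The two-sided arithmetic of the devaluation can be made explicit. The reserve holding and exchange rate below are hypothetical round numbers; only the 10 per cent devaluation figure comes from the text.

```python
# Stylized arithmetic for the two-sided devaluation effect described above.

reserves_usd = 10.0         # billions of dollars held by a foreign central bank
old_rate = 4.00             # hypothetical units of local currency per dollar
new_rate = old_rate * 0.90  # the dollar falls 10 per cent

# The foreign central bank's loss, measured in its own currency:
loss_local = reserves_usd * (old_rate - new_rate)
print(loss_local)  # 4.0 billion local-currency units written off the reserves

# The mirror image: a U.S.-owned foreign asset worth 100 local units is
# now worth more dollars than before.
print(100 / old_rate)           # 25.0 dollars before devaluation
print(f"{100 / new_rate:.1f}")  # 27.8 dollars after
```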
What was so remarkable about dollar devaluation – that is, an upward revaluation of foreign currencies – is that far from signaling the end of American domination of its allies, it became the deliberate object of U.S. financial strategy, a means to enmesh foreign central banks further in the dollar-debt standard. What newspaper reports called a crisis actually was the successful culmination of U.S. monetary strategy. It might be a crisis of Europe’s political and economic independence from the United States, but it was not perceived to be a crisis of domestic U.S. economic policy.
A financial crisis usually involves a shortage of funds resulting in a break in the chain of payments somewhere along the line. But what occurred in February and March 1973 was just the reverse, a plethora of dollars that inflated rather than deflated the world monetary system. In this respect that year’s runs on the dollar were like the competitive devaluations of the 1930s, fed by U.S. official pronouncements of further devaluation to come. The Federal Reserve System expanded the money supply at a rapid pace and held down interest rates.
From the 1920s through the 1940s the United States had demanded concessions from foreign governments by virtue of its creditor position. It would not provide them with foreign aid and military support unless they opened their markets to American exports and investment capital. U.S. officials made similar demands in the 1960s and 1970s, but this time by virtue of their nation’s payments-deficit status! They refused to stabilize the dollar in world markets or control U.S. deficit-spending policies unless foreign countries gave special treatment to favor American exports and investments. Europe was told to bend its agricultural policy to guarantee U.S. farmers a fixed share of Common Market food consumption, to relax its special trade ties with Africa, and to proffer special aid to Latin America with the intention that the latter region would pass on the money to U.S. creditors and exporters.
The United States thus achieved what no earlier imperial system had put in place: a flexible form of global exploitation that controlled debtor countries by imposing the Washington Consensus via the IMF and World Bank, while the Treasury bill standard obliged the payments-surplus nations of Europe and East Asia to extend forced loans to the U.S. Government. Against dollar-deficit regions the United States continued to apply the classical economic leverage that Europe and Japan were not able to use against it. Debtor economies were forced to impose economic austerity to block their own industrialization and agricultural modernization. Their designated role was to export raw materials and provide low-priced labor whose wages were denominated in depreciating currencies.
Against dollar-surplus nations the United States was learning to apply a new, unprecedented form of coercion. It dared the rest of the world to call its bluff and plunge the international economy into monetary crisis. That is what would have happened if creditor nations had not channeled their surplus savings to the United States by buying its Government securities.
Implications for the theory of imperialism
The thesis of this book is that one must look not so much to the corporate sector as to U.S. Government pressure on central banks and on multilateral organizations such as the IMF, World Bank and World Trade Organization to find the roots of modern international economic relations. Already in the aftermath of World War I, but especially since the end of World War II, intergovernmental lending and debt relationships among the world’s central banks have overshadowed the drives of private sector capital.
At the root of this new form of imperialism is the exploitation of governments by a single government, that of the United States, via the central banks and multilateral control institutions of intergovernmental capital rather than via the activities of private corporations seeking profits. What has turned the older forms of imperialism into a super imperialism is that whereas prior to the 1960s the U.S. Government dominated international organizations by virtue of its preeminent creditor status, since that time it has done so by virtue of its debtor position.
Confronted with this transformation of postwar economic relations, the non-Communist world seemed to have little choice but to move toward a defensive regulation of foreign trade, investment and payments. This objective became the crux of Third World demands for a New International Economic Order in the mid-1970s. But the United States defeated these attempts, in large part by a strengthening of its military power.
By the time the European Community and Japan began to assert their autonomy around 1990, the United States dropped all pretense of promoting the open world economy it had insisted on creating after World War II. Instead it demanded “orderly marketing agreements” to specify market shares on a country-by-country basis for textiles, steel, autos and food, regardless of “free market” developments and economic potential abroad. The European Common Market was told to set aside a fixed historical share of its grain market for U.S. farmers, except in conditions where U.S. shortages might develop, as occurred in summer 1973 when foreign countries were obliged to suffer the consequences of U.S. export embargoes. This abrogated private-sector contracts, destabilizing foreign economies in order to stabilize that of the United States.
In sum, U.S. diplomats pressed foreign governments to regulate their nations’ trade and investment to serve U.S. national objectives. Foreign economies were to serve as residual markets for U.S. output over and above domestic U.S. needs, but not to impose upon these needs by buying U.S. commodities in times of scarcity. When world food and timber prices exceeded U.S. domestic prices in the early 1970s, American farmers were ordered to sell their output at home rather than export it.
The United States thus imposed export controls to keep down domestic prices while world prices rose. In order that prices retain the semblance of stability in the United States, foreign governments were asked to suffer shortages and inflate their own economies. The result was a divergence between U.S. domestic prices and wages on the one hand, and worldwide prices and incomes on the other. The greatest divergence emerged between the drives of the U.S. Government in its worldwide diplomacy and the objectives of other governments seeking to protect their own economic autonomy. Protectionist pressures abroad were quickly and deftly defeated by U.S. diplomacy as the double standard implicit in the Washington Consensus was put firmly in place.
When the prices of U.S. capital goods and other materials exceeded world prices, for instance, the World Bank was asked (unsuccessfully) to apportion its purchases of capital goods and materials in the United States so as to reflect the United States’ 25 per cent subscription share of the Bank’s stock. Japan was asked to impose “voluntary controls” on its imports of U.S. timber, scrap metal and vegetable oils, while restricting its exports of textiles, iron and steel to the United States. U.S. Government agencies, states and municipalities also followed “buy American” rules.
All this was moving in just the opposite direction from what Jacob Viner, Cordell Hull and other early idealistic postwar planners had anticipated. In retrospect they look like “useful fools” who failed to perceive who actually benefits from ostensibly cosmopolitan liberalism. In this regard today’s laissez-faire and monetarist orthodoxy may be said to play the academic role of useful foolishness as far as U.S. diplomacy has been concerned. Reviewing the 1945 rhetoric about how postwar society would be structured, one finds idealistic claims emanating from the United States with regard to how open world trade would promote economic development. But this has not materialized. Rather than increasing the ability of aid borrowers to earn the revenue to pay off the debts they have incurred, the Washington Consensus has made aid borrowers more dependent on their creditors, worsened their terms of trade by promoting raw materials exports and grain dependency, and forestalled needed social modernization such as land reform and progressive income and property taxation.
Even as U.S. diplomats were insisting that other nations open their doors to U.S. exports and investment after World War II, the government was extending its regulation of the nation’s own markets. Early in the 1950s it tightened its dairy and farm quotas in contravention of GATT principles, providing the same kind of agricultural subsidies which U.S. negotiators subsequently criticized the Common Market for instituting. Today (2002) nearly half of U.S. agricultural income derives from government subsidy.
World commerce has been directed by an unprecedented intrusion of government planning, coordinated by the World Bank, IMF and what has come to be called the Washington Consensus. Its objective is to supply the United States with enough oil, copper and other raw materials to produce a chronic over-supply sufficient to hold down their world price. The exception to this rule is for grain and other agricultural products exported by the United States, in which case relatively high world prices are desired. If foreign countries still are able to run payments surpluses under these conditions, as have the oil-exporting countries, their governments are to use the proceeds to buy U.S. arms or invest in long-term illiquid, preferably non-marketable U.S. Treasury obligations. All economic initiative is to remain with Washington Consensus planners.
Having unhinged Britain’s Sterling Area after World War II, U.S. officials created a Dollar Area more tightly controlled by their government than any prewar economy save for the fascist countries. As noted above, by the mid-1960s the financing of overseas expansion of U.S. companies was directed to be undertaken with foreign rather than U.S. funds, and their dividend remission policies likewise were controlled by U.S. Government regulations overriding the principles of foreign national sovereignty. Overseas affiliates were told to follow U.S. Government regulation of their head offices, not that of governments in the countries in which these affiliates were located and of which they were legal citizens.
The international trade of these affiliates likewise was regulated without regard either for the drives of the world marketplace or the policies of local governments. U.S. subsidiaries were prohibited from trading with Cuba or other countries whose economic philosophy did not follow the Washington Consensus. Protests by the governments of Canada and other countries were overridden by U.S. Government pressure on the head offices of U.S. multinational firms.
Matters were much the same in the financial sphere. Although foreign interest rates often exceeded those in the United States, foreign governments were obliged to invest their surplus dollars in U.S. Treasury securities. The effect was to hold down U.S. interest rates below those of foreign countries, enabling American capital investments to be financed at significantly lower cost (and at higher price/earnings ratios for their stocks) than could be matched by foreign companies.
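The valuation effect can be sketched with the standard perpetuity formula, using hypothetical rates rather than actual period averages. Capitalizing a constant earnings stream E at the going interest rate r gives a price P = E/r, so the price/earnings ratio is simply 1/r:

$$\frac{P}{E} = \frac{1}{r}: \qquad r_{\text{US}} = 5\% \;\Rightarrow\; \frac{P}{E} = 20, \qquad r_{\text{foreign}} = 8\% \;\Rightarrow\; \frac{P}{E} = 12.5$$

On these assumed figures an American firm could issue stock valued at twenty times earnings to acquire a foreign rival priced at twelve and a half times earnings – a differential created not by superior efficiency but by the forced recycling of dollars into U.S. Treasury securities.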
The U.S. economy thus achieved a comparative advantage in capital-intensive products not through market competition but by government intrusion into the global marketplace, both directly and via the Bretton Woods institutions it controlled. This intrusion often aimed at promoting the interests of U.S. corporations, but the underlying motive was the perception that the regulated activities of these companies promoted U.S. national interests, above all the geopolitical interests of Cold War diplomacy with regard to the balance of payments.
Today’s source of financial instability as compared to that of the 1920s
In the 1920s and 1930s the world suffered from a shortage of liquidity. Nations sought to export goods and services, not import them. The objective was to earn dollars. How different things had become by the early 1970s, when the great problem was how to cope with the surplus of world liquidity resulting from enormous dollar inflows into nearly every economy. The U.S. Government spent dollars without constraint, while private U.S. investors bought up foreign companies and the population bought more imports than the nation exported to other countries.
Even Communist countries began to aim at running trade deficits in order to increase imports. Today, Europe and East Asia struggle to dispose of their surplus dollars with as little loss as possible as they recycle the U.S. balance-of-payments deficit into world capital markets, through which these dollars end up back in the United States. The result has been a global financial bubble.
America’s shift from a creditor to a debtor strategy of world economic domination in the 1960s and 1970s reversed the kind of global relationships that had characterized the 1920s. At that time it was the U.S. balance-of-payments surplus on government account that threw the world economy off track. Since the 1960s it has been the U.S. payments deficit that has done so, initially stemming from the government’s overseas military spending. During the 1950s, 1960s and 1970s this military spending was responsible for the entire U.S. payments deficit.
Most economic models neglect the role that such spending and its consequent balance-of-payments deficits have played in the transformation of twentieth-century international finance. The world dollar surplus initially was catalyzed by U.S. overseas military spending in Asia, starting with the Korean War in 1950–51. It was this spending that inverted America’s balance-of-payments position from surplus to deficit, forced it off gold in 1971, and induced a debtor-oriented international financial policy vis-à-vis the rest of the world – the policy from which foreign economies have not been able to extricate themselves even today.
The new deficit strategy was accompanied by rising commercial protectionism and investment regulation – just the opposite of the philosophy that characterized early postwar U.S. policies, and continues in a vestigial manner to color much of today’s anachronistic economic rhetoric. The shape of economic development in one economy after another has become a function of intergovernmental negotiation and diplomacy in ways not anticipated a half-century ago. Even Russia’s privatizations were a product of U.S. diplomatic pressure, not a natural evolutionary development.
Rather than U.S. overseas military spending being designed simply to protect and extend private sector exports and investments, just the opposite set of priorities emerged in the 1960s and 1970s. U.S. foreign trade and investment were regulated increasingly to finance America’s world military and diplomatic system. To finance the Cold War in Southeast Asia, U.S. banks and corporations were regulated in their foreign lending and investment activities, the IMF was all but broken up, GATT was gutted, and the system of free trade for which the United States ostensibly fought in World War II (and in its subsequent Cold War confrontation with Russia and China) was pushed aside.
The U.S. deficit is still disrupting the world, but its character has shifted from a military focus to one of insisting that foreign economies supply the consumer goods and investment goods that the domestic U.S. economy no longer is supplying as it postindustrializes and becomes a bubble economy, while buying American farm surpluses and other surplus output. In the financial sphere, the role of foreign economies is to sustain America’s stock market and real estate bubble, producing capital gains and asset-price inflation even as the U.S. industrial economy is being hollowed out.
The United States’ attempt to limit its payments surpluses in the 1920s by holding down its interest rates vis-à-vis those of Britain worked to inflate the stock market bubble that broke in 1929. Today, America’s trade deficit is pumping dollars into the central banks of East Asia and Europe, to be recycled into the U.S. capital markets, creating a new form of financial bubble. The Plaza Accord of 1985 and the Louvre Accord of 1987 obliged Japan’s central bank to lower interest rates and inflate a bubble economy that burst within five years, leaving Japan a financial wreck, unable to challenge America as U.S. strategists had feared in the 1980s.
Both in the 1920s and today the U.S. payments imbalance grew so large as to split the world economy asunder, culminating in a statist reaction in one region after another. But today’s government policies abroad ultimately are controlled by U.S. Government planners and the Washington Consensus they impose via the international organizations they dominate. The demand for free trade and dollarization of foreign debts is essentially a demand by the U.S. Government that other governments remain passive rather than adopting U.S.-style market regulation.
What is ironic is how short a period it took – just 25 years, from 1945 to 1970 – for the United States to invert its professed wartime idealism and build a double standard into the world “marketplace.” By the 1970s the United States was insisting that West Germany revalue the Deutschmark and relend its dollar reserves to the U.S. Treasury as the price for keeping U.S. troops on German soil. Similar economic coercion occurred vis-à-vis Saudi Arabia, Kuwait and Iran to buy U.S. arms with the dollar proceeds of their oil exports, and between America and Japan. Even vis-à-vis the Soviet Union the U.S. Government set out to negotiate bilateral agreements for the Soviet Union to spend the $10 billion anticipated proceeds from its natural gas exports to the United States exclusively on U.S. products. Such agreements recall the blocked-currency agreements developed by Hjalmar Schacht for Nazi Germany in the 1930s.
The drive to privatize public enterprises, ostensibly a move to get governments out of economic affairs, is a product of U.S. Government pressure (often wielded via the IMF and now increasingly by the World Bank) on debtor countries. The destruction of public sector initiative in countries selling off their public utilities and the rest of their public domain has not been matched by domestic U.S. policies, but is rather their mirror image. It is the kind of policy against which the U.S. Government itself protested in 1972–73 when Europe, OPEC and other creditors sought to use their creditor position to buy control of major American companies and key resources, and to dictate government policy at least to the extent of restraining international profligacy.
The public domains of debtor countries are passing into the hands of global finance capital, including that of Europe and Asia, plugged into an international system controlled and shaped by the Washington Consensus. American pension funds, mutual funds, vulture funds, hedge funds and other institutional investors and speculators have come to dominate Europe’s stock markets and, since the 1997 Asian crash, have been appropriating those of the Far East. Stock markets in the former Communist economies and Third World are now dominated by the shares of the hitherto public domain that has been sold to institutional financial investors in the United States and other leading payments-surplus economies. The proceeds from these sales have been spent to pay interest accruals on debts taken on from consortia organized by the IMF and World Bank for projects that turn out not to be as self-amortizing as they were promised to be.
So we are brought back to the question of how conscious this system was. When did it become a deliberate policy rather than merely an ad hoc official opportunism in the game of international diplomacy?
To begin with, the United States paved the way by demanding that it be given veto power in any multilateral institution it might join. This power enabled it to block other countries from taking any collective measures to assert their interests as these might be distinct from U.S. economic drives and objectives.
I believe that at first the use of the U.S. payments deficit to get a free ride was a case of making a virtue out of necessity. But since 1972 it has been wielded as an increasingly conscious and deliberately exploitative financial lever.
What is novel about the new state capitalist form of imperialism is that it is the state itself that is siphoning off economic surpluses. Central banks are the vehicle for balance-of-payments exploitation via today’s dollar standard, not private firms. What turns this financial key-currency imperialism into a veritable super imperialism is that the privilege of running free deficits belongs to one nation alone, not to every state. Only the credit-creating center’s central bank (and the international monetary institutions its diplomats control) is able to create its own credit to buy up the assets and exports of foreign financial satellites.
On the other hand, there is nothing unique to capitalism about this mode of imperialism. Soviet Russia exerted control over the rule-making bodies of trade, investment and finance to exploit its fellow COMECON countries. Controlling the pricing and payments system of trade under conditions of rouble inconvertibility, Russia obtained the economic surpluses of Central Europe much as the United States had exploited its fellow capitalist economies by issuing inconvertible dollars. Russia established the terms of trade with its satellites in a way highly favorable to itself, as the United States has done vis-à-vis Third World countries, although Russia exported fuels and raw materials and the United States grain and high-technology manufactures. But viewed abstractly as a body of tactics, state capitalist and bureaucratic-socialist imperialism seemed to be approaching one another in their mutual resort to intergovernmental instrumentalities. Like the United States, the Soviet Union brandished a military sword at its allies.
As Jacob Burckhardt observed over a century ago, “the state incurs debts for politics, war, and other higher causes and ‘progress’ … The assumption is that the future will honor this relationship in perpetuity. The state has learned from the merchants and industrialists how to exploit credit; it defies the nation ever to let it go into bankruptcy. Alongside all swindlers the state now stands there as swindler-in-chief.”8
A century ago national states were permitted to exploit only their own citizens by creating money and credit. The unique feature of this new system is that governments in Europe and Asia, the Third World and the former Soviet sphere may now tap the wealth of their citizens, only to be tapped in turn by the imperial American center, which defies the world’s creditor central banks to burst the international financial bubble and let the most open economies fall into bankruptcy. The U.S. economy remains the most self-reliant and hence readily able to insulate itself from any European and Asian breakdown, but the financial sector remains most highly leveraged, as it was in the 1920s. Suppose that in the 1980s and 1990s, when Japan and continental Europe had built up hundreds of billions in dollar claims on the United States, they had behaved in the way that America acted as creditor in the 1920s vis-à-vis Britain and its other World War I Allies. Japan and Europe would have insisted that the United States sell off its major industrial companies at distress prices, and even the contents of its art museums. This is what America asked Britain to do. It was the classical prerogative of creditor powers. It was how General de Gaulle played his cards in the 1960s.
But neither Japan nor Europe outside of France played their creditor card. Japan behaved as if it were a debtor country, accepting a U.S. request that its government artificially lower interest rates in 1984 and 1986 as its contribution to the U.S. presidential and congressional campaigns. The result was to induce Japan’s economy to run deeply into debt, creating a financial bubble that ended up obliging it to sell off its commanding heights to the Americans, even though the United States was itself a debtor to Japan. The United States thus played both sides of the creditor/debtor street.
The way to break such financial dependency is to do what America itself did as the world’s major debtor: default. This is what Europe did in 1931. But rather than taking this path, Third World countries (following the lead of General Pinochet’s Chile and Mrs. Thatcher’s Britain) have agreed to sell off their public utilities, fuel and mineral rights and other parts of their public domain. They are playing by the classical creditor rules, while America itself plays by new debtor rules against Europe and Asia. The euro for its part has not been created as a political reserve currency, but only as a unit of account to function as a satellite currency to the dollar. Russia’s rouble likewise has been dollarized.
The upshot has been to create a system in which the dollar is artificially supported by central bank capital flows offsetting those of the private sector. Capital movements in turn have become the byproduct of increasingly unstable, top-heavy stock and bond markets. It is these capital movements – mainly debt service for many countries – that determine currency values in today’s world, not relative commodity prices for exports and imports. The classical adjustment mechanism of interest-rate and price changes thus has been unplugged by the Washington Consensus.
The world’s need for financial autonomy from dollarization
The Washington Consensus would not be so problematic if America used its free ride to put in place productive capital that yields future earnings. Unfortunately, it has pursued the less productive policy of maintaining an imperial military and bureaucratic superstructure that imposes dependency rather than self-sufficiency on its client countries. This is what makes the international system parasitic, in contrast to the implicitly productive and profitable private enterprise imperialism depicted prior to World War I by critics and advocates alike. Far from being the engine of development that Marx, Lenin and Rosa Luxemburg imagined the imperialism of Europe’s colonialist powers to be in their day, the United States has drained the financial resources of its industrial Dollar Bloc allies while retarding the development of indebted Third World raw materials exporters and, most recently, the East Asian “Tiger Economies” and the formerly Soviet sphere. The fruits of this exploitation are not being invested in new capital formation, but dissipated in military and civilian consumption, and in a financial and real estate bubble.
The early imperial system was expected to grow stronger and stronger until it culminated in armed conflict, economically developing the periphery in the process. The tendency of today’s Washington Consensus, by contrast, is to retard world development by loading down the economies of almost every country with dollar-denominated debt, and to require America’s own dollar debts as the medium for settling payments imbalances in every region. The upshot is to exhaust the system until local economies assert their own sovereignty and let the chips fall where they may.
In today’s world the form of breakdown is likely to be financial, not military. Vietnam showed that neither the United States nor any other democratic nation ever again can afford the foreign-exchange costs of conventional warfare, although the periphery still is kept in line by American military initiatives, most recently in Yugoslavia and Afghanistan. The lesson is that peace will be maintained by governments refusing to finance the military and other excesses of the increasingly indebted imperial power.
Yet Europe, Japan and some Third World countries have made only feeble attempts to regain control of their economic destinies since 1972, and since 1991 even Russia has relinquished its fuels and minerals, public utilities and the rest of the public domain to private holders. Its overhead in acquiescing to the Washington Consensus has been to sustain a capital flight of about $25 billion annually for the past decade. Asian and Third World countries have permitted their debts to be denominated in dollars, despite the fact that their domestic revenues accrue in local currencies. This creates a permanent balance-of-payments outflow: the privatization sell-offs provided governments with enough hard currency to keep current on their otherwise bad dollarized debts, but the privatized enterprises now demand future interest and dividend remittances, while the state must tax labor rather than these enterprises.
This is a system that cannot last. But what is to take its place?
If foreign economies are to achieve financial independence, they must create their own regulatory mechanisms. Whether they will do so depends on how thoroughly America has succeeded in making irreversible the super imperialism implicit in the Washington Consensus and its ideology.
Financial independence presupposes a political and even cultural autonomy. The economics curriculum needs to be recast away from Chicago School monetarist lines on which IMF austerity programs are based and the Harvard-style economics that rationalized Russia’s privatization disaster.
Money and credit always have been institutional products of national economic planning, not objective givens dictated by nature. The pretense that monetarist policies are technocratic masks the degree to which the financial austerity programs enforced by the IMF and World Bank serve U.S. trade and investment objectives, and incidentally those of Western Europe and East Asia with regard to the terms of trade between creditor and debtor economies.
A great help to promoting the Washington Consensus has been its control over the academic training of central bankers and diplomats so as to remove the dimension of political reality from the analysis of international trade, investment and finance. Economists assume, for instance, that the gains from trade are shared fully and equally. But in practice the U.S. Government has announced that its economy must get the best of any bargain, just the opposite of the situation portrayed by academic trade theorists and the idealistic assumptions of international law. Although the preambles to most international agreements contain promises of commercial reciprocity, the U.S. Government has pressed foreign countries to reduce their tariff barriers while increasing its own non-tariff barriers, getting by far the best of an unequal bargain.
The trade theory promoted by the monetarist Washington Consensus neglects the degree to which countries that have let their development programs be steered by the World Bank have fallen into chronic deficit status. Economics students seeking to explain this problem get little help from their textbooks, whose logic ignores the defining characteristics of global affairs over the past thirty years. This hardly is surprising, as the criterion by which the economics discipline calls theories scientific is simply whether their hypothetical and abstract assumptions are internally consistent, not whether they are realistic.9
The tactics by which global credit flows are controlled are a secret that U.S. financial diplomats are not interested in broadcasting. But without such a study being given a central place in the academic curriculum, the minds of central bankers and money managers throughout the world will be inculcated with a narrow-minded view of finance that misses the dimension of national geo-economic strategy, the failures of IMF austerity programs, the dangers of dollarizing foreign economies and the free-ride character of key-currency standards.
The required study would show that in place of the competing national imperialisms that existed before World War I, only one major imperial power now exists. And instead of disposing of financial surpluses abroad as in Hobson’s and Lenin’s day, the U.S. Treasury draws in foreign resources, even as its American investors buy up controlling shares of the recently privatized commanding heights of French, German, Japanese, Korean, Chilean, Bolivian, Argentinian, Canadian, Thai and other economies, capped by that of Russia.
The above view of U.S. financial imperialism differs not only from the traditional economic determinist view, but also from the anti-economic, idealistic (or “national security”) rationale. Economic determinists have tended to neglect the full range of economic and political impulses in world diplomacy, and have limited themselves to those drives directly concerned with maximizing the profits of exporters and investors. This view by itself fails to note the drive for national military and overall economic power as a behavioral system that may conflict with the aim of promoting the wealth specifically of large international corporations.
On the other hand, “idealistic” writers (Samuel Flagg Bemis, A. A. Berle and so forth) have satisfied themselves simply with demonstrating the many non-economic motives underlying international diplomacy. They imagine that if they can show that the U.S. government often has been impelled by many non-economic motives, no economic imperialism or exploitation occurs.
But this is a non sequitur. It is precisely the United States’ drive for world power to maximize its own economic autonomy (whether viewed simply as an expression of “national security” or something more expansionist in character) that led it to innovate its parasitic tapping of the world economy through such instrumentalities as the IMF and World Bank. Its military-induced payments deficit led it to flood the world with dollars and absorb foreign countries’ material output, increasing its domestic consumption levels and ownership of foreign assets – the commanding heights of foreign economies, headed by privatized public enterprises, oil and minerals, public utilities and leading industrial companies. This again is just the opposite of the traditional view of imperialism, which asserts that imperialist economies seek to dispose of their domestic surpluses abroad.
The key to understanding today’s dollar standard is to see that it has become a debt standard based on U.S. Treasury IOUs, not one of assets in the form of gold bullion. While applying creditor-oriented rules against Third World countries and other debtors, the IMF pursues a double standard with regard to the United States. It has established rules to monetize the deficits the United States runs up as the world’s leading debtor – debts owed above all by the U.S. Government to foreign governments and their central banks. The World Bank pursues its own double standard by demanding privatization of foreign public sectors, while financing dependency rather than self-sufficiency, above all in the sphere of food production. While the U.S. Government runs up debts to the central banks of Europe and East Asia, U.S. investors buy up the privatized public enterprises of debtor economies. Yet while imposing financial austerity on these hapless countries, the Washington Consensus promotes domestic U.S. credit expansion – indeed, a real estate and stock market bubble – untrammeled by America’s own deepening trade deficit.
The early twenty-first century is witnessing the emergence of a new kind of centralized global planning. It is not by governments generally, as anticipated in the aftermath of World War II, but mainly by the U.S. Government. Its focus and control mechanisms are financial, not industrial. Unlike the International Trade Organization envisioned in the closing days of World War II, today’s WTO promotes the interests of financial investors in ways that transfer foreign gains from trade to the United States rather than uplift world labor.