ABANDONMENT OF THE QUEST — A PATH WITH NO HEART
With the seventeenth century begins the incredible spectaculum of modernity — both fascinating and nauseating, grandiose and vulgar, exhilarating and depressing, tragic and grotesque — with its apocalyptic enthusiasm for building new worlds that will be old tomorrow, at the expense of old worlds that were new yesterday; with its destructive wars and revolutions spaced by temporary stabilizations on ever lower levels of spiritual and intellectual order through natural law, enlightened self-interest, a balance of powers, a balance of profits, the survival of the fittest, and the fear of atomic annihilation in a fit of fitness; with its ideological dogmas piled on top of the ecclesiastic and sectarian ones and its resistant skepticism that throws them all equally on the garbage heap of opinion; with its great systems built upon untenable premises and its shrewd suspicions that the premises are indeed untenable and therefore must never be rationally discussed; with the result, in our time, of having unified mankind into a global madhouse bursting with stupendous vitality.
— ERIC VOEGELIN, Published Essays, 1966–1985
It has no doubt been worth the metaphysical barbarism of a few centuries to possess modern science.
— E. A. BURTT, The Metaphysical Foundations of Modern Science
The Western Revolutions
All the defining institutions of modernity emerged out of western Europe beginning in the sixteenth century as a thousand years of feudalism collapsed and three revolutionary movements converged. Traditionally, historians deal separately with the Protestant Reformation, the scientific revolution, and the commercial revolution. But taking them together we can see how each changed the common cultural and intellectual context in ways that reinforced the most revolutionary ideas of the other two to produce a civilizational shift. During the seventeenth and eighteenth centuries, these ideas solidified into the “metaphysics of modernity,” the philosophical underpinnings of global industrial capitalism. This metaphysics provided the framework for the search for the best way to live and eventually produced the political philosophy of classical Liberalism, expressed most clearly in the writings of the defining Liberal thinkers — Thomas Hobbes, John Locke, and Adam Smith — who together offered a compelling vision of the good life and the just society that remains the default ideology of modernity. Liberalism remains the primary philosophical justification for our dominant institutions and values: free-market capitalism, minimal representative government, the rights and freedoms of the individual, industrial mass production, and a culture of unlimited material consumption.*
The Liberal vision inspired a wave of democratic revolutions, including, most notably, the American Revolution, which culminated in the drafting of the Constitution of the United States. The founding fathers of the American republic represented the creative elite of the new revolutionary philosophy. The delegates from the colonies who met in Philadelphia in 1787 to draft the Constitution were exemplars of a triumphant middle class: predominantly successful businessmen, lawyers, and farmers; Protestant, property owning, and imbued with the promise of the new mechanistic science. The country was vast and blessed with great natural wealth. Victory in the War of Independence eliminated the British colonial presence and, with it, the need for the victorious revolutionaries to accommodate an old guard. In this sense, the American Revolution was less a revolution than a war of national liberation against a foreign occupation. Since American Liberals, unlike those in Europe, never had to compromise with the residues of feudal opposition, their vision remained unalloyed. This produced a country so politically homogeneous that its revolutionary ideology became almost invisible.1 The result was that America emerged into the twentieth century as a unified economic and military giant, showing in stark relief all the greatness and the flaws of this quintessentially modern, Western paradigm of the good life.
In Civilization, his lively and penetrating comparative history of the rise of Western civilization, the conservative historian Niall Ferguson identifies what he calls six “killer apps” (using unintentionally sinister cyber-slang) for the institutions that most distinguished the West from “the Rest” and that are most responsible for its dramatic rise to global dominance.2 All the apps were products of the three revolutions of modernity. They are: competition based on a degree of decentralization of political and economic life; science; property rights; medicine, as an application of science; the consumer society; and the work ethic. They were applied aggressively and inventively in the four hundred years between 1500 and 1900 to transform the position of the West from relative insignificance to comprehensive domination of the global population and economy.3 Western dominance is now in question, says Ferguson, because this quintessentially Western package has become global. “The Chinese have got capitalism. The Iranians have got science. The Russians have got democracy. The Africans are (slowly) getting modern medicine. And the Turks have got the consumer society.”4
Ferguson keeps the faith that the Western formula still offers human societies the best available set of institutions: “the ones most likely to unleash the individual human creativity capable of solving the problems the twenty-first century faces.”5 His concluding recommendations for the West to retain its edge are surprisingly timid: educational reform, perhaps reinstituting “formal knowledge” and “rote-learning” and reading the classics. He lists his “great books” — the King James Bible, Newton’s Principia, Locke’s Two Treatises of Government, Adam Smith’s Theory of Moral Sentiments and Wealth of Nations, the complete works of William Shakespeare, and so on.
But Ferguson’s Civilization operates, like most conventional historical analyses, within the time frame of the past 5,500 years of written history. To grasp the larger significance of the crisis of Liberalism, we need to invoke the perspective of big history, within which civilization itself is a very recent event on an evolving earth. From this perspective, the world of industrial capitalism, ushered in by Liberalism, can be seen as dramatically intensifying some of the most conspicuous, defining aspects of civilization: division of labor, hierarchies of wealth and power, specialization in knowledge, and application of instrumental rationality to the mastery of nature and its conversion into wealth. This civilizational trajectory has reached a dead end. The Liberal narrative is exhausted, and its institutional forms in their present state are undermining the very conditions necessary for civilization to flourish. The time for a radically more life-affirming vision has arrived.
The “big history” perspective helps us see how some of the killer apps are becoming truly deadly. The metaphysics of modernity had a contradictory effect on the truth quest. Liberalism originally supported the truth quest in a number of crucial respects: it liberated the individual from ossified feudal structures and clarified and systematized the scientific method. It also made possible an explosive increase in the human population and our immense achievements in science, art, and the material quality of life. On the other hand, Liberalism’s emphasis on an instrumental, mathematical rationality, minimal government, and the invisible hand of the free market effectively eliminated the need for the individual to consider the good of the whole. So here we have a stark irony: Liberalism emerged from the pursuit of the truth quest but its consequences undermined the very quest responsible for its truth. The result is an increasingly corrupt political culture based on self-interest and avarice, while our policies and institutions are leading us to civilizational collapse.
The challenge for our age, in essence, is to advance, deepen, and in a sense complete the Liberal revolution by bringing to bear the larger perspective. This will involve recovering some of the oldest traditional wisdom that Liberalism rejected and then integrating it with some of the newest.
The Medieval Roots of Our Modern Crisis
One could sum up by saying that the missions of science, the Reformation, and the capitalist revolution converged in a single imperative: to exclude religious, spiritual, and philosophical concerns from political and economic affairs in the interest of transforming nature into ever-larger quantities of wealth. This formula succeeded beyond any of its founders’ wildest dreams. But its greatest achievements are now becoming some of its most destructive flaws. We can understand this more clearly when we see Liberalism as an inversion of the feudal worldview, its one-sidedness an extreme reaction to the traumatic collapse of the medieval order and to three centuries of repeated crop failures, famines, plagues, and almost incessant warfare.
The fourteenth century opened with price inflation; ruined harvests in northern Europe then caused a serious famine between 1315 and 1317. This was followed by a massive typhoid epidemic. In 1318 cattle and sheep were decimated by disease. This was followed by another bad harvest and famine in 1321. In Languedoc, poor harvests occurred twenty times between 1302 and 1348, by which time the weakened population was only too vulnerable to the first of a series of bubonic plague epidemics. The Black Death first struck England in 1348–49 and broke out again in 1361, 1368, 1369, 1371, and 1375. Altogether it killed approximately one-third to one-half of the population. Collective suffering was intensified by “the Hundred Years’ War” — actually a series of wars between England and France from 1337 to 1453 — which shattered faith in divine benevolence and the harmonious Great Chain of Being. The result was a profound sense of pessimism and failure.6
In the sixteenth and seventeenth centuries, Europe was still scourged by disease, famines, riots, popular uprisings, and war. It should not be surprising to find that the ideology that emerged out of this collective sustained trauma was radically oversimplified and overstated. The result was the replacement of a corrupt religious order and an aristocracy of birth with an increasingly corrupt aristocracy of wealth.
Our focus sharpens as we go further back. Feudalism emerged after the collapse of the Roman Empire as a decentralized, agrarian society based on relatively self-sufficient manorial estates, called fiefs or feudums, which were loosely held together by local custom, remnants of Roman law, and Christian-Aristotelian cosmology. Peasants swore an oath of loyalty to a warrior aristocracy of knights and lords, who in turn provided defense and administered justice. The whole static order was seen as part of the divine Chain of Being, under the custodianship of the Catholic Church. Trade and craft production were limited and carefully regulated through guilds, which were as concerned with spiritual life as they were with material production. The guilds would set the prices an artisan could charge for a product, with a “just price” being the amount calculated to cover the costs of production and maintain the artisan at his customary place in society. Moneylending for interest — the lifeblood of a capitalist economy — was despised as usury and declared a mortal sin, since it took advantage of the needy to enrich the wealthy. The simple act of buying wholesale and selling retail, known as regrating, was seen as intrinsically exploitative and became a punishable offense. The ethos of feudal economics was precisely the opposite of a modern market society and could be summed up by the medieval aphorism homo mercator vix aut nunquam deo placere potest — “the merchant can scarcely, if ever, be pleasing to God.” Making a profit was inherently sinful.
A story from the tenth century of one pious lord, St. Gerald of Aurillac, illustrates the stark opposition between medieval and modern attitudes regarding profit. Upon returning from a pilgrimage, St. Gerald showed some Italian merchants a magnificent pallium (a religious garment) he had bought in Rome. When they heard what he paid for it, they congratulated him on his bargain. But instead of being delighted, St. Gerald was deeply disturbed and quickly sent the merchant additional money lest he be found guilty in the eyes of God of the sin of avarice.7 In principle the entire orientation of the social and economic order focused on inner, spiritual life and questions of meaning and value.
By the end of the fifteenth century, however, the corruption of the Catholic Church was undermining its spiritual authority, while a rapidly increasing population and growing need for new sources of raw materials made an expanding mercantile economy essential. The pressure to expand ocean trade intensified after 1453 with the fall of Constantinople to the Ottoman Empire and the closing of Europe’s land route to the East. Eastern spices were essential for preserving meat during Europe’s long winter, when fields froze and farm animals were slaughtered. At the same time innovations in naval and military technology opened the world’s oceans to European shipping. The compass, the brass cannon, and the three-masted, full-rigged ship, which could tack against the wind, opened the stormy Atlantic for transoceanic navigation. This made it possible for Columbus to reach the Americas in 1492, for Vasco da Gama to round the Cape of Good Hope a few years later, and for Cortez to arrive in Mexico in 1519. The expansion of long-distance trade and the opening of foreign markets stimulated the shift from guild production to a new class of merchant-manufacturer-entrepreneurs that was able to invest in such risky but lucrative ventures. Overseas trade then provided the capital needed for further technological and scientific innovation, which in turn helped inform, arm, equip, and motivate further expansion. In the fifteenth and sixteenth centuries, a growing cash economy put pressure on the landed aristocracy to enclose and farm previously uncultivated common land. Peasants whose survival had depended for centuries on grazing their animals on the commons were forced off the land. They joined the ranks of the destitute in the growing towns and cities, helping to provide a workforce for an emerging class of property owners. These were neither serfs nor lords, neither peasants nor aristocrats, but a new “middle” class that would provide the base of support for the Liberal revolutions.
In the context of an expanding market society, Protestant notions of “doing God’s work” became connected over time to worldly success through thrift and hard work — what Max Weber famously identified as the “Protestant work ethic.”8 When this was combined with notions of religious freedom and the separation of church and state, it reinforced a political culture of growing individualism and materialism. Protestant nations soon led the commercial revolution: first the Netherlands, then England, followed by the United States. The scientific revolution reinforced both tendencies by appealing to the authority of the senses guided by the laws of logic and reason. At the heart of science was a single revolutionary insight — mathematics was the secret language of the world. Whatever could be quantified could be dealt with by mathematics; it could be known with precision and manipulated for our own ends. In practice this meant science focused exclusively on the outer measurable aspects of material reality. Since the method could be replicated by anyone with the right equipment anywhere in the world, results could be verified independently. The objectivity and universality of science undermined arbitrary ecclesiastical and political authority and helped remove traditional fetters on commerce. It also resulted in the rapid development of near-miraculous machinery, which was applied with accelerating effectiveness to the scientific investigation of nature, to the market production of goods, to exploration and navigation, and of course to warfare.
In 1534, King Henry VIII of England broke away from the authority of the Pope and the Catholic Church and established the Church of England with himself as its head. The religious and political conflict that followed ultimately ended in the turmoil of the English Civil War (1642–49) and a tenuously renegotiated relationship between the king and Parliament. The Liberal philosophers Thomas Hobbes and John Locke wrote against the memory of the dark ages of feudalism and its chaotic breakdown into civil war. Generalizing and extrapolating from their experience, they asserted that human nature was fundamentally aggressive, competitive, and selfish. Hobbes put this pungently in the most often quoted lines in modern political philosophy, from his masterwork Leviathan, published in 1651. He saw human beings in “a state of nature” without strong central government as naturally free but also selfish and aggressive, caught up in an endless struggle for advantage, in “a war of all against all.” In a condition devoid of justice and order, they were condemned to live a life that was “nasty, brutish and short.” He concluded that individuals needed to give up some freedom to establish a strong central authority — a “Leviathan” — in order to ensure the common good.
A few decades later, John Locke in his Second Treatise of Government softened this bleak understanding by recognizing that human beings also had a natural urge to be productive: to work rationally with hands and tools, crafting wilderness — which he considered simply wasteland — into useful, and thus valuable, products. Value accrued through labor. “Thus the Grass my Horse has bit; the Turfs my Servant has cut; and the Ore I have digge’d in any place where I have right to them in common with others, becomes my Property, without the assignation or consent of anybody.”9 Inspired by the physics of Isaac Newton and Enlightenment ideas of rationality, Locke provided the philosophical foundation for government based primarily on protecting individual rights and freedoms, most especially the right to hold and dispose of property and to enjoy the fruits of one’s labor in security. Individual rights increasingly meant property rights, and as we shall see, property rights became the organizing value for the writers of the Constitution of the United States.
From this baseline the Liberal philosophers constructed a theory of society, economics, and government providing for maximum individual liberty. Rebelling against the oppressiveness of aristocratic privilege and the divine right of kings, they sought a political order in which the individual would be neither beholden to nor responsible for others. Government was simplified into a social contract among such rationally calculating, independent, self-interested individuals, who came together to create society by giving up some of their freedom. In so doing, they gained the security necessary to hold and enjoy their wealth. This system made no appeal to altruism or generosity; it had little faith that self-interested individuals would take responsibility for the good of the whole. An impersonal mechanism — the invisible hand of the free market — was assumed to operate according to a semiscientific law of supply and demand, converting individual selfishness into growth in collective wealth. Since corporations barely existed, the threats to individual liberty were seen to come from social chaos on the one hand and big government on the other. It was a minimal vision of government giving maximum rein to self-interest.
During the seventeenth and eighteenth centuries, the North American continent offered Liberal revolutionaries a clean slate — a “state of nature” that from the European perspective was also a political vacuum. Settlers arrived in what appeared to be a vast game-filled wilderness, blessed with an incredible wealth of natural resources and peopled by “savages” who could be easily defeated or “civilized.”10 Under these idealized conditions, without a counterrevolutionary feudal aristocracy, the American Revolution produced the paradigmatic Liberal polity.
A potential counterrevolutionary force existed in the form of spiritually developed, but technologically undeveloped, Native American societies. Within a few centuries this living contradiction to the founding assumptions of Liberalism was crushed by mass immigration and industrial technology. When European settlers first arrived, North America contained about five hundred different indigenous tribes with a total population of perhaps five million. Vast herds of buffalo — some thirty million — covered the continent from east to west, from the current Canadian border to Mexico. As the United States transformed and expanded into an industrialized society, it saw both the native populations and the buffalo as obstacles to progress. By the 1860s, as the final Indian wars approached, the buffalo had been eliminated from most of their former range, and the surviving herds were confined to the Great Plains, where they provided subsistence for about three hundred thousand free Native Americans who still resisted the Europeans surrounding them.
Resistance ended with the final slaughter of the herds. Between 1872 and 1874, 3.5 million buffalo were killed. Of these only 150,000 were taken by Indians for subsistence.11 The rest were shot by Europeans for meat, hide, tongues, and sport and as a matter of military tactics. General Sheridan exhorted the US Congress to pass a bill to exterminate the herds, saying that “every buffalo killed is an Indian less.” By the 1880s, the buffalo were virtually extinct. Only a handful of pure specimens remained in the United States. Native Americans saw this blindness to the sacredness of the natural world as a kind of psycho-spiritual disease. The words of Lakota visionary Black Elk capture the gulf between a primal and industrial ethic:
That fall [1883] … the last of the bison herds was slaughtered by the Wasichus [Europeans]. I can remember when the bison were so many that they could not be counted, but more and more Wasichus came to kill them until there were only heaps of bones scattered where they used to be. The Wasichus did not kill them to eat; they killed them for the metal that makes them crazy, and they took only the hides to sell. Sometimes they did not even take the hides, only the tongues; and I have heard that fire-boats came down the Missouri River loaded with dried bison tongues. You can see that the men who did this were crazy. Sometimes they did not even take the tongues; they just killed and killed because they liked to do that. When we hunted bison, we killed only what we needed. And when there was nothing left but heaps of bones, the Wasichus came and gathered up even the bones and sold them.12
From the primal point of view, wilderness is the closest face of the mystery of creation. From the Liberal point of view, the primary value of wilderness is in its potential for the private production of wealth — ultimately, marketable commodities. Part of the immediate universal appeal of Liberalism comes from its simplicity and the directness of its appeal to the most basic of human impulses: freedom, power, comfort, and wealth. Another part of its appeal is the fact that it presented itself as rational — the application of an empirical, scientific approach to government. Shortly after Isaac Newton published his monumental Principia Mathematica, the grand synthesis of mechanistic science, Locke modestly presented his own work as the contribution of a mere “under-labourer” to the “incomparable Mr. Newton.”13
But science offered a profoundly materialistic definition of truth, based on measurement and control of the natural world in the service of the production of material wealth. Such a science could not, by definition, elucidate the good life. This separation of value and fact, paralleling the separation of church and state, or spirituality and politics, is at the heart of Liberalism’s abandonment of the quest. To understand how the use of science is implicated in our crisis and how it might serve in its transcendence, we need to look more closely at the conditions under which it emerged.
Deformation of the Soul: The Scientific Revolution
The revolutionary founders of the modern Liberal state understood science as the fruit of God-given rationality and the key to human liberation. They were also for the most part devout Christians who still believed in the literal truth of the Bible as God’s word. But as the practical importance of science as a method of “certain knowledge” grew, and as science increasingly contradicted literal biblical descriptions of the natural world, the political relevance of the entire religious, ethical, and philosophical sphere declined. Religion and philosophy became private matters, as did all questions of value and meaning, other than the self-evident value of Liberal institutions as impersonal, legal mechanisms for checking and balancing individual self-interest. Science became equated with publicly reliable knowledge. The Liberal principles of minimal government and the invisible hand of the market further undermined the role of religion and philosophy.
In 1543 Copernicus initiated what was to become the scientific revolution by publishing an obscure mathematical text, De Revolutionibus Orbium Coelestium, in which he argued that the movement of the heavenly bodies could be more elegantly explained, using a simpler geometry, if one assumed that the earth and the planets rotated about the sun rather than the other way around. Copernicus had been inspired to consider this radical hypothesis by the rediscovery of classical Greek texts and the mystical Pythagorean notion that the world was constructed according to mathematical laws. Since medieval Christian cosmology regarded the heavens as the realm of divine perfection, and since mathematical laws were perfectly true, it seemed obvious to Copernicus that God would construct the heavens mathematically. At bottom Copernicus was as much persuaded by the mathematical elegance of this new model as he was by its practical capacity to explain and predict the movement of the heavenly bodies. This equation of mathematical elegance, precision, and truth became a cornerstone of the scientific method.
Medieval cosmology, however, was based on the notion of a Great Chain of Being, with the stable immovable earth below and the perfect heavenly bodies spinning above. The fact that the mathematics of Copernican astronomy literally turned medieval cosmology upside down had profound repercussions for epistemology, philosophy, and politics, which are still playing out in our present moment. The elegant certainties of mathematical proofs, and the capacity of mathematical formulations to explain, predict, and then control so much of the natural world, displaced religion and philosophy in intellectual life. Even more unsettling was the fact that science could contradict direct experience. Few things are more obviously true to human senses than the fact that the earth is solid and unmoving and that the blazing sun moves across the sky. Mathematics persuaded us of the opposite. In the ominous words of Galileo, the greatness of Copernicus’s intelligence was in allowing mathematics to “rape” his senses.14 Much of modern philosophy has still not recovered its center after being displaced by science.
The Italian mathematician, physicist, and philosopher Galileo Galilei took Copernicus’s insight about the explanatory power of mathematics and helped turn it into a fundamental epistemological and ontological principle. He argued that the deeper significance of Copernicus’s achievement was to show us that the universe is constructed mathematically, and so mathematics needs to be understood as the language of a true philosophy and a useful science: “Philosophy is written in that great book…of the universe.” And it is written in the language of mathematics, whose “symbols are triangles, circles and other geometrical figures without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.”15 By working closely with practical men — the gunners and artisans in the arsenals of Venice — on the eminently practical matters of ballistics, he developed the experimental method in which the mathematics that explained the heavens so brilliantly was applied to moving objects on earth. In so doing, he directly connected the act of cognition to manipulation, and technology became the embodiment of mathematical reason.
The French philosopher and mathematician René Descartes, inspired by the power of mathematics, then developed a coherent metaphysical system that enthroned this mechanical, mathematical method as the exclusive path to that certain knowledge that would make us “masters and possessors of nature.” Since this powerful method of mathematics could only deal with numbers, Descartes’s bold philosophical move was to assert an absolute distinction between those experiences that could be quantified (and dealt with by mathematics) and those that could not. Certain knowledge was henceforth confined to the world that could be measured, the world of tangible external things, which Descartes called res extensa. These were the unambiguous qualities of shape, weight, and movement — or in the language of physics, mass, extension, and motion — what Galileo called primary qualities. Almost everything else was unknowable, existing only in the realm of res cogitans, “things of the mind,” what Galileo called secondary qualities. It is hard to overestimate the world-changing significance of this intellectual gambit. Suddenly, in one move, Descartes rendered all the qualities of taste, smell, and color, the full range of human emotions — including all the grand passions of love and hate, grief and joy, despair and hope — unknowable and, by implication, irrelevant to what really mattered!
When they [res cogitans] are judged to be certain things subsisting beyond our minds, we are wholly unable to form any conception of them. Indeed, when anyone tells us that he sees colour in a body or feels pain in one of his limbs, this is exactly the same as if he said that he there saw or felt something of the nature of which he was entirely ignorant, or that he did not know what he saw or felt.16
Common sense tells us the opposite is true. The world of emotions and feelings is not only real but where we spend much of our time. When we feel nothing, we value nothing and life loses its meaning. To consign the fullness of our emotional life to the irrational and thus unknowable was an extraordinary act of metaphysical mutilation. By accepting Descartes’s proposal, and excluding most of what constitutes our experience from systematic disciplined exploration, society made possible our unfolding modern catastrophe.
Why was this patent absurdity so persuasive to Descartes and, following him, the intellectual elite of early modern Europe? First, Descartes, like many of his contemporaries, was enormously impressed with the sudden profusion in the seventeenth century of clockwork robots, “automata,” and machines. Since their workings were a predictable outcome of precisely measurable pieces of matter in motion, machines represented the most fully realized expression of “useful knowledge” based on res extensa. The explanatory power of mechanical knowledge was amplified by the fact that a number of the new inventions were instruments for measuring. By the time Descartes published his Discourse on the Method in 1637, the microscope, the thermometer, and the telescope had all been invented (the pendulum clock followed soon after), and they were radically extending the realm of what could be measured and mathematized.17
The application of machinery in the service of commerce added the incentive of profit — also calculated numerically — to manufacturing methods based on mass production of goods in factories. Finally, the superior truth of Cartesian science was demonstrated most emphatically by violence, as the machinery of killing annihilated the traditional societies that stood in the way of European expansion.
In applying the mechanical model to the earth and animals, Descartes was particularly impressed with vivisection — in which living cats and dogs were nailed by their paws onto boards in order to have their chests cut open to expose their still-beating hearts and breathing lungs. In the thrall of his mechanical revelation, Descartes fixated on the obvious similarity between a mechanical pump and the heart, and then he made a bold leap of logic, which was to become a monstrous leap in moral thinking. He postulated that a cat, for example, is nothing more than a kind of clockwork whose parts are so arranged that, when you nail it to a board, screams come out of its mouth.
[Animals]…are not rational, and that nature makes them behave as they do according to the disposition of their organs; just as a clock, composed only of wheels and weights and springs, can count the hours and measure the time more accurately than we can with all our intelligence.18
Cruelty to animals became an unofficial test of being a Cartesian. If you felt compassion for an animal, you simply failed to grasp the absolute separation between things of the mind and things of matter, between sentient humans and mechanical animals. Only humans had reason; therefore, only humans had feelings and could be moral agents. Descartes specifically attacked the notion that animals might have an inner life — a soul — as the most common source of error in the pursuit of reliable knowledge. Things of the mind might be ultimately unknowable, but the mind could recognize itself as absolutely separated from the body and nature. This doctrine, known as Cartesian dualism, eliminated in principle the moral constraints on doing with animals and nature what we would. Despite its obvious absurdity, this notion is almost universally embraced by industrial cultures. We see it on a mass scale in the cruelty of routine testing of pharmaceutical products on laboratory animals and in the heartlessness of our factory farms, in which animals are reared for slaughter as so much protein-per-unit-space-occupied, per-unit-input-of-feed.
Ultimately, we can say that Descartes’s ideas took their form to serve short-term human self-interest — the need for certainty under conditions of extreme existential anxiety. When we take a closer, un-Cartesian look at Descartes’s psychology, we can see the stages involved in how one man gradually turned away from the truth quest to develop a method that extinguished, in principle, the importance of the quest in public life. By carrying out this exercise, we reverse Descartes’s process and find ourselves beginning to recover the quest.
Descartes’s Dream
According to Descartes, res cogitans, things of the mind, were unknowable except for a few important exceptions — clear, evident intuitions. One was the reality of the doubting, thinking mind itself, hence his much-celebrated revelation cogito ergo sum, “I think, therefore I exist.” Also real and knowable were a few propositions of logic, the laws of mathematics, and a few supposedly self-evident propositions of theology. If we reflect for a moment in a wholistic fashion on the contents of consciousness at any particular moment, the fullness of experience becomes apparent. On the face of it the reality of the rational, thinking mind is no more or less self-evident than any other particular content of consciousness revealed through reflection, such as: “I sit contentedly on a warm rock watching a sunset with baboons — therefore I exist.” Why, then, was Descartes so convinced that the act of thinking, questioning, and doubting was more real than that of feeling and empathy?
From a psychological perspective, Descartes was like anyone else, an emotionally driven human being on a particular quest conditioned by time and place. Surprisingly, Descartes himself provides us with exactly such an un-Cartesian account of himself in the Discourse and more extensively in his journal Olympica.19 What is immediately striking is that he tells a story of an emotional revelation, a drama unfolding over time, taking place as unique events in the supposedly unknowable realm of things of the mind. These writings offer the strange spectacle of Descartes exemplifying elements of the truth quest, then coming to the conclusion that he should prohibit it for others.
He starts by describing how, despite having attended some of the best schools in Europe, he emerged disgusted with the state of philosophy, which seemed “built on mud and sand.” He notes that it had been studied by the most outstanding minds for centuries, yet it had failed to produce “anything that was not in dispute and consequently doubtful and uncertain.” In conclusion he describes his mental state as wracked by anxiety over the turmoil of the times, and he talks of his despair that he might never find the certain knowledge needed to improve the human condition.20 Descartes was desperate for certainty. Libido dominandi — the lust for power — comes to the fore.
Descartes had spent the summer directly involved in one of the military campaigns of the Thirty Years’ War, and this no doubt intensified his general sense of crisis. In late fall, he retired to winter quarters near Ulm. His search climaxed in a waking vision, reinforced by a series of dreams the following night of November 10, 1619. The dreams were highly symbolic but their meaning was clear to Descartes. He saw them as pointing to a new science based on mathematics as the key to that certain and useful knowledge he was seeking. The dreams were so profound, and so directly relevant to his quest, that he was convinced he had been graced with divine revelation. In gratitude he vowed to make an offering of a pilgrimage, on foot, from Venice to the shrine of Our Lady of Loreto, a vow he fulfilled five years later.
Descartes’s own account makes it clear that this entire, extraordinary process of discovery takes place within the realm of secondary qualities, the supposedly unknowable things of the mind. He experiences distress at the chaos and confusion of his time, anxiety over the failure of traditional wisdom, and fierce determination in searching for answers. He obviously values the spontaneous dreams and visions and then reflectively interprets them in the context of his life. He reacts, feels, considers, and creates meaning all within the subjective realm of his own mind. In addition, he communicates the meaning and value of his great discovery not through numbers but in the form of a narrative — the Discourse — a story describing a series of unique events. Oddly, his own certainty and relief are such that he neglects the decisive flaw in his conclusions: He is so enraptured by the results that he discards the process that led to them.
This is an extraordinary moment in the history of the West, where a drama unfolding in the psychology of one man catalyzes, then becomes emblematic of, a related process of transformation in the entire culture. This moment captures the contradiction that currently splits the modern mind, in which self-reflection attempts to annihilate itself in the spectacle of the genius telling his followers: “Do what I tell you, not what I do!”
By the same token, this account contains the kernel of a process of recovery and transformation for us today. Don’t only do what Descartes tells you to do. Do also what he does: question persistently; be open to the fullness of experience, including ecstatic revelation (like that of Descartes himself); reflect on the full amplitude of human experience in both its measurable and unmeasurable aspects; and then keep making connections between the part and the whole, between the drama taking place in the life of the individual and the story and state of the larger community. Of course, in formulating his ideas, Descartes was not alone. He was heavily indebted to the achievements of Copernicus, Kepler, and Galileo, and his framework was in turn developed, qualified, and articulated by others who followed, culminating in the grand synthesis of Sir Isaac Newton.
The scientific method emerged together with a rapidly growing, global, industrial capitalist economy to produce a remarkably uniform, characteristically Western way of experiencing and thinking about the world. These habits go so deep that even highly intelligent critiques of modernity, like those of many self-proclaimed postmodernists, as we shall see, reveal quintessentially Cartesian, and thoroughly modern, reflexes.
Deus Ex Machina
Today, we can see one particularly vivid reductio ad absurdum of mechanistic Cartesian science in the hypertechnological fantasies of cryogenicists, roboticists, and nanotechnologists who fantasize about transcending the messy biology of the human condition through robots and androids. Futurists Gregory Paul and Earl Cox point out that humans evolved as better hunter-gatherers, but we are only “marginally adapted for high level physics and novel writing, like the archaeopteryx for flight.” Marvin Minsky, MIT professor and researcher on artificial intelligence, laments that we have not become conspicuously smarter since Shakespeare or Euripides. He notes that humans can only learn and remember about two bits of information per second. Even if we did nothing but learn twelve hours a day for a hundred years, the total sum of information would only be about three billion bits — less than we could store on a memory disk from 1998.21
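Minsky's three-billion-bit figure follows from simple arithmetic; as a rough back-of-the-envelope check, assuming his stated rate of two bits per second:

$$
2\ \tfrac{\text{bits}}{\text{s}} \times 3600\ \tfrac{\text{s}}{\text{hr}} \times 12\ \tfrac{\text{hr}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{yr}} \times 100\ \text{yr} \approx 3.2 \times 10^{9}\ \text{bits},
$$

or roughly 400 megabytes, an amount a late-1990s storage device could indeed hold with room to spare.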
If we start from our self-evident, fully embodied human consciousness, the fallacy is obvious: the mechanists unconsciously assume what they are trying to prove. They implicitly define intelligence mechanically, then triumphantly declare that machines do it better. The roboticist Rodney Brooks’s cheerful posthuman futurist fantasy celebrates genetic engineering as indicating “the very deep extent to which we are biological machines…molecules interacting with each other according to well defined laws, combining in predictable ways, and producing, in our case, a body that acts according to a set of specifiable rules. We are machines, as are our spouses, our children, and our dogs. And we are now building machines that will match and surpass us. Resistance is futile.”22 Dr. Robert Haynes, who was the president of the sixteenth International Congress of Genetics, concluded that “it is no longer possible to live by the idea that there is something special, unique or even sacred about living organisms.”23
These facile pronouncements about the human condition are generated by a method of inference — mechanistic science — that, as we have seen, begins from an already mutilated understanding of human consciousness. The method reproduces the Cartesian error of forgetting that we have chosen to focus exclusively on the primary qualities of res extensa that can be measured. We forget the freedom of the in-between. We forget we made a choice to treat the world as a machine, not because it is a machine, but because, if we treated it as if it were a machine, then we could get a certain type of knowledge useful for practical ends.
Descartes made the archetypal Faustian bargain. His deal with the devil required that we give up the science of soul for the science of wealth and power. Having turned away from considering the soul, we compounded our sin by repressing reflection, and with it the memory of having made the deal in the first place.
Technocrats entirely miss the point when they argue that machines can perform mechanistic functions more efficiently — compute data faster, shoot straighter, dig deeper, lift heavier, travel faster. Efficiency is hardly the point of being human. Cultural historian William I. Thompson says it best: “For the mechanists, the flesh is slow, sloppy, and wet, and, therefore, primitive.…[But] slow and wet is the ontology of birth and the act of making love.…Fast is fine for the programmed crystalline world of no surprises and no discoveries, but slow is better for the creative world of erotic and intellectual play.”24 It is exactly our inefficiency that gets to the heart of the human condition. Being embedded in our biological messiness explains our mortality; the sting of death, loss, and grief; and the joy and struggle of loving, of bringing up a child, writing a song, dancing, and philosophizing. Embracing this keeps us close to the irreducible mystery of the way nature works itself into the body, the body into consciousness, and consciousness back into nature.
The philosopher of science E. A. Burtt, in his 1925 masterpiece The Metaphysical Foundations of Modern Science, talked of “metaphysical barbarism” as the price we paid for the power of modern science, which he, along with many others since, thought worthwhile. Today, in an age marked by genocide, weapons of mass destruction, and ecocide, it seems long past due to renegotiate the terms of our culture’s Faustian bargain.
The Contradictory Logic of the Heart — Dialectics
One of the great ironies of modernity is that we have now exposed the philosophical absurdity of the Cartesian world-machine at exactly the same time that machinery has become nearly ubiquitous, defining almost every aspect of our direct experience of reality. Virtually every object in the room where I write — books, chairs, tables, computer, phone, and lamp — has been machine made. The components of the house itself — the sheetrock walls, the windows and blinds, the doors and floors — were all fashioned by machines. We live our lives moving from one manufactured interior to another. The primary reality of our wilderness origins has been eclipsed from direct experience, literally buried under our ever expanding cities. Faith in the Cartesian abstraction has led us to create a secondary reality that embodies that abstraction. We experience the world-as-machine because we live in a machine-made world.
The logic that got us here goes back to an original act of splitting: between agriculture and wilderness, or the fence that separated the civilized farm from undomesticated nature. From this has flowed a whole series of related, supporting splits: the outer, objective world of matter from inner, subjective life; the rational from the irrational; idea from emotion; the human from the animal. One can trace a philosophical thread from Descartes back to humankind’s original alienation from wilderness. In each case of the above pairs, the first term is privileged over the second in a hierarchical dualism that is tearing our world apart. This brings up a fundamental issue about types of logic that we need to be clear about in recovering the quest.
Splitting reality into pairs of absolute opposites requires the deductive logic of noncontradiction, also called syllogistic logic. This stipulates that one thing, or A, cannot be something else, such as B, and still be A (A cannot be not-A). This seems self-evident and very useful, since it allows us to define pieces of reality unambiguously, as if they existed like parts of a machine. Each thing is understood as wholly distinct in itself and distinguishable from everything else. Defining each individual thing as isolated requires a related process of abstraction from the field of experience, taking things apart and then analyzing how they fit together. Our primary methodological habits have become abstraction, analysis, and critique. But the process of dismantling and separating cannot, in principle, make meaning. Meaning requires putting things together, making connections; it requires narrative description, integration, reconstruction, and synthesis, all of which run on a different logic.
For example, what of Descartes’s poor cat? In strict Cartesian fashion we can describe the beating heart of a living animal in abstract, measurable terms, even accounting for its changing dimensions over time — the volume of its chambers, the force of contraction, and the speed and pressure of the blood flow. But the truth is, when disconnected from the body, the heart no longer works, and neither does its owner. A living heart can only be understood in context. The separated pieces need to be reconnected and related through a process of creative integration and synthesis. This recognizes that the beating heart requires breathing lungs and healthy kidneys, oxygenating and detoxifying the blood flowing in the coronary arteries, feeding the heart muscle. We deepen our understanding of the healthy heart by recognizing the role of diet, exercise, environment, and general lifestyle. Further, as we consider these issues, in order to evaluate any particular heart, we ultimately must account for the mental and emotional state of the whole living creature in relationship with its total living environment. The bigger the picture, the more relations established, the deeper and more meaningful the understanding. We refer to this simply as wholistic thinking.
The search for meaning and value — the quest — requires integration and synthesis, which rests on the nonsyllogistic, contradictory logic of seeing how opposites give each other meaning. There is no up without connection to down, no hot without cold, no black without white, no A without not-A. Not surprisingly, we see this as a foundational principle in non-Western philosophies like Zen Buddhism or Taoism, where the meaning of yin is in relationship to its opposite, yang: female requires male, outer requires inner, matter requires mind, and so on. Such logic is common in primal and shamanic cosmologies, where knowledge based on direct experience and the closeness of wilderness immediately confronts one with the paradoxical structure of consciousness. We can call this dialectical as opposed to dualistic logic. This is the logic that is expressed in the dialektike of Socratic discussion, which recognizes that meaning begins and ends with unique, fully embodied human beings who inevitably experience the world differently. Understanding comes through face-to-face discussion, where the partial truth of thesis is challenged by the partial truth of antithesis, so that both can be integrated and transcended in a more-inclusive synthesis. This then becomes a new thesis, and so on. Western thinkers typically stumble over dialectical thinking and get stuck on one side or the other of paradoxical dualisms — mind-body, civilization-wilderness, human-animal.
In practice, we engage in dialectical synthesis all the time; it is necessary and unavoidable. Yet we have also failed to honor it and to cultivate it as a habit. This is striking in higher education, where one hears endless calls for teaching analytical and critical thinking skills but almost nothing about synthesis and constructive and creative skills. Without these complementary opposites we remain stuck. Wholistic, big-picture thinking is either neglected or produces frozen structures of meaning, blocking an understanding of integrated, organic, growing wholes — the whole person, the whole society, the entire species, the planet.
“There Is No Such Thing as Society”
Descartes helped eliminate the method for making meaning from the inner, emotional, qualitative data of the wisdom quest. Classical Liberalism eliminated the motive for even pursuing it. The clearest argument for this comes through the lineage of the classical Liberal philosophers, starting with Thomas Hobbes and proceeding through John Locke, Adam Smith, and America’s founding fathers, who embodied these ideas in the Constitution of the United States. American democracy offers the clearest example of a society created de novo according to the principles of Lockean Liberalism. This model is now in the final stages of globalizing under the guidance of the United States as the world’s preeminent superpower. In this sense the United States is the paradigmatic modern polity, demonstrating with great clarity both its most life-affirming and destructive aspects. While the following discussion focuses on America, in principle it is increasingly applicable globally.
As I’ve said, the creation of the political vision of Liberalism emerged from the truth quest — from a reflective, passionate concern with the good of the whole. But the Liberal values of personal freedom, private property, and competitive individualism were presented in Cartesian fashion as absolutes, abstracted from the whole without a living connection to their opposites: altruism, generosity, service to and responsibility for others, and love of community. As a result, the less-tangible, hard-to-measure, supreme values — love, beauty, and truth — were increasingly ignored. As Plato and Socrates made clear, any value pursued in isolation as a supreme good inevitably becomes a supreme evil.25 Any political order that forgets this and assumes certainty becomes deformed and ultimately deadly.
In 1787, after the success of the American Revolution, representatives from the various states gathered in Philadelphia to draft a constitution based on a revolutionary theory of government. They were realists and pragmatists, familiar with the struggles among self-interested individuals in the marketplace. They were also aware of the latest advances in science, steeped in the writings of Hobbes and Locke, and imbued with a sense of their historical mission. Following Locke, the writers of the Constitution recognized that in a condition of freedom, where people are born into different social situations, with different abilities and dispositions, some will inevitably acquire more property and others less. Inequality was an inevitable consequence of competition. Thus, government was needed to stop the have-nots from robbing the haves. As Locke put it: “The great and chief end therefore of Mens uniting into Commonwealths, and putting themselves under Government, is the Preservation of their Property” (emphasis in the original).26 He assumed reasonable men, recognizing this, would come together to form a social contract and agree to give up some of their natural freedom for the security of a common authority. Such an authority needed to be strong enough to provide protection and order, but not so strong as to quash individual freedom and initiative. Adam Smith put it even more bluntly: “Civil government, so far as it is instituted for the security of property, is in reality instituted for the defense of the rich against the poor, or of those who have some property against those who have none at all.”27
The members of the Constitutional Convention saw themselves as exactly such reasonable men, coming together to draft a social contract for a government that would not attempt to improve human nature but would simply work as a kind of institutional clockwork. A complicated set of checks and balances would ensure a government strong enough to protect private property and to foster trade and commerce, but not so strong as to unnecessarily cramp freedom, enterprise, and initiative.28 The case for ratification of the draft constitution was laid out in a series of some eighty-five anonymous essays published in New York newspapers, collectively referred to as The Federalist Papers. The essays, whose authors are now known to be John Jay, Alexander Hamilton, and James Madison, document a remarkable example of applied political philosophy — how institutions of government can be crafted from a philosophical paradigm of the good life.
Much of the focus of the Constitution is on the right to acquire and hold material property as the primary expression of individual freedom. In Federalist No. 10, James Madison argued that the most important function of the Constitution would be to prevent social chaos by guarding us against the violence of the competing interest groups he called factions. Following Locke and Smith, he noted that “the most common and durable source of factions has been the various and unequal distribution of property.” Those who own property and those without property — the rich and the poor — are, in Madison’s view, the most significant of the potentially violent factions.29 We could say in this sense that class conflict was a founding assumption of the Constitution. Madison rejected the idea of direct democracy, criticizing the small, self-governing “pure” democracies of the ancient Greek polis for their instability and failure to protect inequalities in wealth: “Such societies … have ever been found incompatible with personal security or the rights of property and have in general been as short in their lives as they have been violent in their deaths.”30 Instead, the Constitution established a republic that offered the advantages of “a scheme of representation,” where the popular will would be refined and enlarged through a process of selection — “a chosen body of citizens, whose wisdom may best discern the true interests of their country, and whose patriotism and love of justice will be least likely to sacrifice it to temporary or partial considerations.”31 The Federalists assumed that the electoral process itself would somehow select the wisest. But would it? And what was wisdom, anyway?
Madison quickly passed over the challenging question of wisdom and moved on to the practical problem of mechanics. He identified the structural advantage of a republic over a (direct) democracy in protecting private property. Because republics were large, they included a greater diversity of interests, particularly the diversity of those factions based on different kinds of property: landowning, slaveholding, mercantile, banking, and the simply landless. With greater diversity, there was less chance that any particular interest would form a majority and violently impose its will on a minority. Like many of those attending the convention, Madison was deeply disturbed by the nation’s postrevolutionary financial crisis and no doubt had in mind Shays’s Rebellion, an uprising of impoverished war veterans and poor debtor farmers who took up arms and called for the abolition of debts.
The solution was, as Madison put it, that “ambition must be made to counteract ambition,” so that it would be virtually impossible for a single passion to overtake all branches of government. He commented ruefully, “It may be a reflection on human nature, that such devices should be necessary to control the abuses of government.” Then he repeated Hobbes’s argument for the necessity of strong government: “If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary.”32 In the absence of the wisdom of angels, the Constitution set up intricate rules for establishing the main branches of government: the executive, the legislature, and the judiciary. It then clearly defined and limited the powers of each, so that the net effect would be a pleasingly mechanical, apparently scientific system of checks and balances, which would automatically convert competing selfish interests into the best possible outcome. The machinery of government would make wisdom redundant.
In the eighteenth century, in the aftermath of an order based on a landed aristocracy and a landless peasantry, it must have seemed easier to accept as truth the Lockean assumption that private property was fundamental to human freedom and that wealth would constitute a minimal measure of wisdom and worthiness to govern. In fact, in 1787 most states had property qualifications for voting and holding office, and this probably excluded at least a third of the white male population. Women, Native Americans, and slaves were of course also excluded.33 Those who attended the convention — white, Protestant, property-owning men — could reasonably assume a larger moral consensus than we can today. In fairness, many of the founding fathers were also imbued with a high-minded sense of service, and none would have been fully satisfied with wealth as a measure of wisdom. But today, when the top 10 percent of the American population owns over 70 percent of the nation’s wealth, and where elections are determined by increasingly unregulated and undisclosed financial contributions to media campaigns, we can see the Madisonian formula for what it always was — wishful thinking. It succeeded in replacing an aristocracy of birth with an aristocracy of avarice — the rule of the wealthy.
Instead of seeking a politics that nurtures love of wisdom, the Federalists produced a government that would mitigate the effects of selfishness. The Constitution offered a mechanical arrangement of competing powers and interests intended to produce the best possible collective outcome for all. It provided a moral justification for shifting attention away from the community to the self-interested individual. The reductio ad absurdum of this approach is the notion — attributed to the conservative former British prime minister Margaret Thatcher — that “there is no such thing as society; there are [only] individual men and women and… families.” The political culture focused on rights and freedoms and said almost nothing about the individual’s responsibilities and duties toward the community — to the good of the social whole. The pursuit of wisdom was made irrelevant to government.
The Free-Market Machine: Short-Sighted, Narrow-Minded, and Selfish
If modern government was to have only the most minimal role in restricting the freedoms of individuals and providing for the common good, what would regulate the vast arena of economic activity, ensuring the optimum distribution of goods and services? The answer came from Adam Smith, the brilliant Scottish economist and philosopher.
Smith’s central work, The Wealth of Nations, published in 1776, has been celebrated as having more of an impact in transforming the world than the Bible. Although Madison never mentions Smith in The Federalist, his formula for government assumes the truth of Smith’s simple reasoning. Smith claimed to identify a natural law of the market that would automatically convert self-interest into public benefit. He reasoned that in a situation of complete freedom, where all relevant knowledge was available, competition between self-interested individuals, seeking only their own profit, would produce an outcome benefiting all. If every transaction between buyer and seller was voluntary, then, ipso facto, no exchange would take place unless both parties agreed and presumably benefited. It was as if there was an invisible hand, the hand of God, that operated in a free market guiding individuals who intended only their own gain to promote an end — the public good — that was no part of their intention. No governmental coercion, no violation of freedom, and no concern for the good of the whole would be required to produce the cooperation necessary for all to benefit. As Smith put it: “It is not through the benevolence of the baker that we get our bread.” He added that he had “never known much good to come from those who affected to trade for the public good.”
The irony of Smith’s genius is that he provided a philosophical and moral justification for the free market that inadvertently resulted in the elimination of philosophical and moral reasoning from economic life. Its impact on economics was equivalent to the effect of Cartesian dualism on metaphysics: “Don’t do what I do. Do what I tell you to!”
Smith pointed out that the market mechanism ensured that not only would both parties to a transaction benefit, but society as a whole would be served by an overall increase in the “wealth of the nation.” He gave the famous example of the pin factory to show how the profit motive would impel a division of labor producing dramatic efficiencies in boosting production. Where one man working alone, without machinery, could barely make one pin a day, division of labor enabled ten men with machinery, each doing one piece of the job, to make a total of around forty-eight thousand pins a day, or forty-eight hundred per man. “One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on is a peculiar business; to whiten the pins is another; it is even a trade by itself to put them into the paper.”34
The logic was compelling and the practice conclusive. The quickest way to boost profits was to grow the size of the business so as to take advantage of economies of scale: invest in machinery, institute a division of labor, buy raw material more cheaply in bulk, and save on bulk distribution. Labor would be kept cheap so long as the population kept growing, which it would as a consequence of the steady improvement in material existence. Increased profits would feed back into a spiral of continually growing productivity.
Smith’s insight brought together the mechanics of Cartesian-Newtonian science, the profit motive, and minimal government in the explosive growth of the mechanized production line. The factory soon became one of the defining institutions of modernity, bringing together raw materials, machinery, and the masses of workers displaced from the countryside. The results were rapid urbanization together with near-miraculous quantities of cheap commodities.35 City life and shopping reinforced in the most conspicuous way the power, truth, and goodness of the mechanical, materialistic metaphysics of modernity.
Certainly, the rise of America’s cities, especially by the late nineteenth century, provided opportunities for cultural creativity, but it also brought overcrowding, exploitation, pollution, and a life chained to the clockwork routine of the production line. The heaviest price was paid in the invisible currency of res cogitans — psychology. The production line subjected the working populations to a life-world defined by the division of labor, fragmentation, and hyperspecialization implicit in Descartes’s method. Under the feudal system, the individual artisan — fashioning, say, a bow, barrel, or sword — would gather raw materials and control much of the pace and nature of the work. Generally, the work involved learning a craft and some creative expression, and to this degree it reflected the individual who made it and who could take pride in the product. The sale or exchange of the product was also made more meaningful by face-to-face relationships within a localized community.
Now urbanized masses work according to the time clock and assist machinery; they focus endlessly on a few simple, repetitive tasks and produce identical products for distant markets. Depersonalization only increases as state, corporate, and educational bureaucracies apply the mechanical logic of the factory to society as a whole. The resulting irony has been that even though industrial capitalism has globalized, the experience of the urban individual has only become more fractured and narrowly focused. Today, economies are global and interdependent, but people continue to experience themselves as isolated, atomized individuals, no different from any other cog in the machinery of corporate and national bureaucracies. Meaning comes from doing your job and taking care of yourself.
This loss of autonomy, together with a fragmented experience of both the world and oneself, can cause a person’s sense of being an integrated whole, a moral agent with some responsibility for others, to collapse entirely. At its most extreme, the modern paradigm of the morally vacuous individual is the Nazi war criminal Adolf Eichmann. Eichmann was the obedient bureaucrat directly responsible for the transport of millions of Jews to the production lines of the Nazi concentration camps. Hannah Arendt’s account of his trial, Eichmann in Jerusalem: A Report on the Banality of Evil, presents Eichmann as the personification of the modern form of evil: the soulless product of a materialistic, mass society; an individual who has lost the capacity to be the moral authority of his own life.36
According to Arendt’s account, Eichmann was no anti-Semite. In fact, during the trial, Eichmann took pains to point out that he had good reasons for being sympathetic to Jews. He was indebted to a distant uncle who offered him a job during the Depression and who was married to a Jewish woman. Eichmann had returned the favor by allowing the uncle and his wife to escape while the “final solution” was still under way. At one point during the war, Eichmann had committed the crime of Rassenschande, or “racial defilement,” by taking a Jewish mistress. Further, his psychological examination revealed him to be “thoroughly normal” with some very “positive attitudes” to family and society.37
Eichmann pleaded “not guilty in the sense of the indictment,” meaning he never directly killed a Jew with his own hands, and he certainly did not feel he was an innerer Schweinehund, a foul and corrupt person. He made it clear, as many of the Nuremberg war criminals did, that he “was doing his job and obeying orders” and that he would only have had a bad conscience had he not done what he was told. Though a nonreligious person, he said he felt “guilty before God” but added more revealingly that he thought “repentance is for little children.”38 For Eichmann, morality had shrunk to obeying orders. Repentance, and by implication conscience (that inner tension we feel when satisfying external authority conflicts with personal feelings of empathy for others), were childish. Eliminating conscience helps eliminate the tension of emotional conflict. Inner life is simplified. Just obey orders.
The psychologist Stanley Milgram was so disturbed by the implications of Arendt’s analysis that he devised an ingenious, and now infamous, experiment to test empirically the degree to which average Americans would follow the orders of an authority figure to inflict physical pain on someone else.39 In the name of a bogus memory experiment, a scientist urged the subject to give “dangerous” electric shocks to a second subject for failing to memorize word pairs. The second subject was restrained and hidden from view in an adjacent room but could still be heard. Though no shock was actually administered, the second subject would scream as if in intense agony. The results revealed that over half the subjects continued to give 350-volt electric shocks to a restrained, screaming victim pleading to be released and then falling ominously silent. Such obedience was induced by nothing more coercive than a scientist in a white coat intoning, “The experiment must go on.…You have no other choice than to proceed.” Milgram’s experiment records in painfully precise detail how quickly the anxiety of moral conflict in the increasingly isolated individual gives way before the simple habit of obedience to socially sanctioned authority.40
The psychological damage of Liberalism’s institutions is real and ubiquitous, and in Milgram’s experiment, this becomes measurable in terms of choices made and voltage delivered. Yet we are only shaping ourselves in the ways society asks of us, whether our job is to please the boss or to please the client. For example, one of the largest marketing agencies, Initiative Media, conducted market research to help develop an advertising strategy targeting children; the idea was to give children cues to help them nag their parents more effectively to purchase the product. The research revealed that ads could manipulate children into a type of reasoned nagging that would be more effective than repetitious whining. When the director of strategy was asked whether she thought it unethical to make money by manipulating children, she reflected for a moment: “Yeah. Is it ethical? I don’t know.” Then she quickly recovered, smiling enthusiastically, “Our role at Initiative is to move products. If we move products …we’ve done our job.”41 In a culture where the overriding ethical imperative has been narrowed to doing one’s job, earning a living, and maximizing profits, there is no sense of responsibility for one’s larger impact on society and nature. Ethics dwindles to an afterthought.
Calculating the Cost of Ethics
Adam Smith was a moral philosopher who was deeply concerned with society as a whole and well aware that there was more to the good life than material wealth. In making the case for the invisible hand of the free market, he also made a number of critical assumptions and qualifications that we would do well to consider. For example, in Smith’s day the nation was still small and relatively homogeneous, so he could assume something of a shared moral universe. Since markets were smaller, even when national, the parties to any economic transaction would likely live closer to the consequences of their actions, and they would naturally feel a greater responsibility for the impact on their shared community. Smith was opposed to monopolies, and he assumed that individual businesses would not become large enough to distort the competitiveness of the free market. Most revealingly for the present, Smith wanted to outlaw the corporation as fundamentally flawed, since it separated ownership from decision making. The ubiquitous idea of limited liability meant shareholders were only liable for the money they invested, reaping the benefits but taking no moral or legal responsibility for the behavior of the corporation.
In the United States, the problem of “corporate mischief” was compounded after the Civil War as an unintended consequence of the Fourteenth Amendment. Passed in 1868, this amendment to the Constitution was designed to protect the civil liberties of freed slaves, its “due process” clause stipulating that no state should deprive “any person of life, liberty or property without due process of law.” This was immediately seized on by clever corporate lawyers to argue that a corporation was in fact a “legal person,” and so corporations should enjoy the freedoms and rights of a person. Between 1890 and 1910, corporate lawyers invoked the Fourteenth Amendment 288 times to serve business interests, while African Americans claimed its protection on 19 occasions.42
Perhaps most importantly, we tend to forget Smith’s model also assumed wide distribution of knowledge of the whole, since voluntary transactions could only be mutually beneficial if all parties had full access to “relevant information.” Here, right at the inception of free-market capitalism, is an implicit requirement that something akin to wisdom needs to balance short-term self-interest. I will return to this remarkable caveat later on.
Today few of Smith’s assumptions hold. We face a global and growing economic machine that penetrates almost all aspects of modern life, with an overwhelming variety of moving parts and players. We have moved from Smith’s world of nations to one increasingly dominated by competing multinational corporations, with revenues that can dwarf the GDPs of national economies. Of the 150 largest economic entities on the planet, fewer than half are nation-states. The rest are unelected, undemocratic, multinational corporations, which are free to come and go, manage themselves, shape markets and mass culture, manipulate global finance and government policy — all to serve the bottom line of private profit.43 Not only is relevant information often hidden, but in markets of global scale any honest attempt to grasp the whole seems almost beyond reach. Obviously, total knowledge is an impossible standard, but as we will see later, dedication to the truth quest provides criteria and methods for developing a “truth of degree” that balances hubris with humility, greed with generosity, and love of wealth and comfort with love of beauty and learning.
Over the past century, as global economic reality has become overwhelmingly complex, centralized, and susceptible to self-interested manipulation by banks and financial institutions, the popular understanding of the economy has become even more simple-minded and ideological. One of the most influential expressions of this comes from Nobel laureate Milton Friedman, whose free-market fundamentalism helped inspire the progressive deregulation of much of the current American economy. Beginning in the 1960s, Friedman advocated a stripped-down, simplified version of Adam Smith’s invisible hand, asserting that the only social responsibility of business is to maximize its profits. A business will do more good for society, Friedman insisted, by pursuing its own profits than by consciously attempting to work for the collective good.44 This imperative of “profits first and last” obliges corporations wherever possible to pass on to the public the costs of risk taking.
One result of this is the common corporate practice of balancing the cost of compliance with regulations against the cost of deliberately breaking the law and paying the fine. This is a direct outgrowth of the limited-liability corporate status that Adam Smith argued against: that corporations would have the rights and freedoms of an individual, but no shareholder would be personally liable for its malfeasance. The corporation might be an individual in law, but as Baron Thurlow, an eighteenth-century politician, reportedly said, “It has no soul to save and no body to incarcerate.” For example, between 1990 and 2001, General Motors, one of the world’s largest manufacturers, was caught breaking the law, prosecuted, and convicted on over forty occasions. One of the most revealing cases involved the 1979 Chevrolet Malibu, whose gas tank had been dangerously repositioned to cut costs, making it susceptible to fuel-fed fires. Court documents from a lawsuit filed in 1993 revealed that even after GM realized the car was vulnerable to gas-fueled fires, it chose not to fix the car’s design. Initially, upon realizing there was a gas-tank issue, GM management asked the Advance Design Department to provide a comparative cost analysis. They calculated that the cost of paying out anticipated damages came to $2.40 per automobile, whereas the cost of repositioning the fuel tank more safely came to $8.59 per automobile. GM decided its primary responsibility was to shareholder profits and declined to reposition the fuel tanks.45 Would GM management have decided differently if the executives had known that they themselves would be held personally liable for any deaths or injuries that resulted?
As Joel Bakan, a professor of corporate law, points out, if the corporation were an actual person (as its legal status contends), then the standard practices of many of the largest corporations fit the Personality Diagnostic Checklist of a psychopath: “callous unconcern for the feelings of others; incapacity to maintain enduring relationships; reckless disregard for the safety of others; deceitfulness: repeated lying and conning others for profit; an incapacity to experience guilt and failure to conform to social norms with respect to lawful behavior.”46 No doubt GM’s executives didn’t regard themselves as psychopaths, yet they justified abdicating responsibility for the harm they knew their choices would cause because Liberal culture and their corporate charter required them to put profits first.
One of the most damaging large-scale expressions of this “ethic of no ethics” is in the work of full-time corporate lawyers and lobbyists who attempt to shape government regulations in the interests of their clients’ profit. This is reinforced by corporate-financed media campaigns that, in the name of freedom, whip up an ideological rage against the tyranny of “big government.” Self-regulation in the name of private profit regularly leads to environmental damage and public costs that can occasionally be catastrophic. One of the most dramatic examples in recent years was the 2010 blowout of British Petroleum’s Macondo well, drilled from the Deepwater Horizon rig, which dumped five million barrels of crude oil into the Gulf of Mexico. Over the course of several years leading up to this, BP had cut costs by releasing thousands of older, more experienced, and thus more expensive specialists, while at the same time it lobbied to weaken regulation and enforcement.47 These were all considered reasonable ways to cut costs and boost the bottom line. The resulting disaster was less an accident than the likely outcome of a system that allowed short-term private gains to be reaped by transferring long-term risks and costs to some third party — the public.
Friedman’s simple-minded elimination of ethical thinking in business has helped create a political climate where many leaders in finance and politics no longer seem capable of distinguishing between greed, reasonable self-interest, and the common good. While examples of this are many, some of the most shocking recent events involve the US finance industry and the global financial collapse of 2008, which destroyed the assets of millions of American families and sparked the Great Recession. This occurred after years of corporate-sponsored deregulation of the financial markets made it possible for banks and financial institutions to make enormous profits through a variety of predatory credit and investment schemes. When the whole unsustainable system collapsed, most of the participating firms could reasonably claim they hadn’t done anything wrong.
This wasn’t, strictly speaking, true, but the pervasive culture of valuing private profit over public good made it difficult for executives to feel responsible. In an interview after the collapse, Lloyd Blankfein, then the chief executive officer of Goldman Sachs — one of the firms implicated in the mortgage crisis, and one of the world’s largest, most profitable, and most politically influential investment banking firms — defended the $54 million bonus he received in 2007 by explaining that he was simply “doing God’s work.”48
Such hubris can be seen as a predictable psychological inflation resulting from the politics of self-interest. In 1886, Black Elk, the great Sioux visionary, left his reservation to join Buffalo Bill’s Wild West Show in the desperate hope of traveling the country and finding a way of helping his defeated, starving people. After a few enervating months in New York City, he saw the noble pretensions of American democracy degenerating into a morally impoverished plutocracy:
After a while I got used to being there, but I was like a man who had never had a vision. I felt dead and my people seemed lost and I thought I might never find them again. I did not see anything to help my people. I could see that the Wasichus [whites] did not care for each other the way our people did before the nation’s hoop was broken. They would take everything from each other if they could, and so there were some who had more of everything than they could use, while crowds of people had nothing at all and maybe were starving. They had forgotten that the earth was their mother. This could not be better than the old ways of my people.49
From such a perspective, the notion that the invisible hand of the market entitles us to whatever we can get away with is a kind of mental and spiritual illness. In practice it means that the rich inevitably grow richer and feel self-righteous doing so. In the United States, in 1965 the average CEO earned 24 times what the average worker earned; by 2007, this had risen to 275 times the average worker’s wages,50 creating what the economists Paul Krugman and Robin Wells dubbed the “creed of greedism.”51 Small-government conservatives, blinded by the dogma of free-market ideology, have great difficulty recognizing that in the absence of wise regulation (imposed from both within and without), such a culture inevitably leads to the “tyranny of the biggest,” as the larger, more powerful entities use the competitive advantage of size to further undermine free competition in the service of their own wealth. Unlimited freedom without the guidance of wisdom leads to new forms of tyranny.
The Eclipse of Wisdom
Classical Liberalism as it has come to be embodied in the United States relies on three impersonal mechanistic understandings that converge in at least one major way: eliminating the need for the individual to consider the good of others and of the whole. The first is the mechanical materialism of Cartesian-based science, which values only the measurable certain knowledge of the tangible world and dismisses as unknowable and unimportant most of the things of the mind. The second is a minimal form of collective decision making and conflict resolution based on a mechanical system of elected representatives, separation of powers, and checks and balances. The third is the law of supply and demand embodied in the invisible hand of the market that supposedly converts self-interest into the collective good. Combined, these ideas and mechanisms release the citizen in principle from the struggle of soul searching and considering the big picture. They make selfishness and a lack of introspection into virtues. The casualty is not only the good but the truth of the whole. Greed eclipses a humble opening to the mystery of the human condition. Without openness and humility, deep learning and moral growth are impossible. Truth becomes whatever makes you rich and powerful. Ipso facto, the rich have the truth and should rule.
In the wake of the corrupt and decrepit feudal institutions of the medieval era, the American Constitution was a revolutionary and liberating advance. It was as close as one could imagine to a mechanical form of government congruent with the clockwork universe of Descartes and Newton, in which the various branches of government would act like the cogs and levers of a machine, converting the chaos of selfish humanity into the harmonious order necessary for commerce and agriculture to flourish. Today we rightfully celebrate its achievements: the rights and freedoms of the individual; the efficiencies of industrial production; the cornucopia of wealth; the endless succession of technological miracles; and the massively expanded perspective of science and the reliability of its inferences.
However, America’s founders could have had no inkling of how their ideas might translate in a twenty-first-century world. They wrote almost a century before Darwin and Marx and without the revolutionary insights of Freud and Einstein. We now know that neither human beings nor the universe operates like clockwork, and we are also painfully aware of the failings of eighteenth-century clockwork thinking as a basis for politics and society: most particularly, we’ve witnessed the depersonalization of mass bureaucratic societies and the failures of electoral and market mechanisms to ensure the good of the whole. We see that when individuals are encouraged to give up the search for wisdom, the blind outcome of collective selfishness damages and degrades the natural world while exploiting the weakest and most vulnerable. Without a truth-loving culture, no electoral mechanism can protect us from demagogues who manipulate fear and ignorance in their pursuit of power. The miracle of an ever more productive consumer economy — Adam Smith’s promised “wealth of nations” — now confronts us with a double bind: we face an immediate political crisis whenever the economy fails to keep growing, and we face the ultimate environmental catastrophe if the economy continues to keep growing. All the while, wealth is inexorably concentrated in the hands of the few, who then use part of it to perpetuate the status quo by propagating an ideology of self-interest.
Here we have to face squarely the most damaging and least understood consequence of the Liberal paradigm: the definitive elimination of a culture based on the love of wisdom — the truth quest. In the absence of the quest, which is both an individual and a collective effort, the culture fragments and society lurches between a cynical, pragmatic materialism and a closed-minded fundamentalism. People lose faith in each other and cling tightly to their own beliefs. Today, the majority of Americans regard politicians as morally equivalent to prostitutes, while many hold rigid ideological and religious beliefs in which bizarre individual interpretations are taken as divine certainties.52
It is demoralizing and ironic that one result of the Cartesian-based revolution of Liberalism in the United States is the persistence of scientific illiteracy. Today, a significant portion of the electorate defiantly embraces a simple-minded medievalism: one in five Americans believes the sun revolves around the earth, and almost one in two disputes biological evolution, believing instead that God created human beings in their present form within the past ten thousand years.53 In March 2009, an elected representative, Congressman John Shimkus from Illinois, a Lutheran who believes in the literal truth of the Bible, testified before the House Subcommittee on Energy and Environment by quoting the book of Genesis (chapter 8, verse 22) to reassure the committee that humanity need not worry about global warming because of God’s promise to Noah after the flood. He added, “I believe that [the Bible] is the infallible word of God, and that’s the way it is going to be for his creation. … The earth will end only when God declares it’s time to be over.”54 Soon after this pronouncement, Shimkus made a credible bid to become chair of the House Energy Subcommittee dealing with global warming.
Even at the highest levels of government, dogma eclipses wisdom. Reality becomes what it is convenient to believe. Our political culture is losing its grip on the most elementary criteria for knowledge of what is real. Since industrial capitalism is more immense than ever — global, interlocked, and increasingly impersonal — the idea of radical change seems hopelessly quixotic. What to do?
We Are Each Responsible for the Good of the Whole
This brings us to another wonderful irony. At the inception of free-market capitalism, Adam Smith assumed that all participants in a transaction would need “relevant knowledge,” which suggests the more radical path not taken: invert the understanding that currently rules. Take Smith’s condition seriously. Recognize that ultimately markets will only serve justice and the common good to the degree that those involved pursue justice and the common good in their thinking and in their daily decisions. Instead of wishfully assuming, against all the evidence, that selfishness will be automatically converted by the market into the good of the whole, we need to address the heart of the matter — the values, awareness, and motivation of the human individual. Each of us needs to balance selfishness with an openness to others and a concern for the good of the whole, and we need to do this in our various roles and through the various institutions we participate in.
This pursuit of wisdom cannot be simply legislated and bureaucratically enforced. Government — like every other human-created, human-led institution — has a role and an influence in proportion to its power, but ultimately no external regulation will provide the good society we seek. Instead, at the center of such a revolution in political culture and consciousness must be the moral, intellectual, and spiritual regeneration of the individual — what Plato called a periagoge — a turning around of the soul toward a love of truth, beauty, and the good.
The rest of this book will examine what this “turning around” looks like and how it could be accomplished, in theory and practice, but it’s worth noting here that we can already see it taking place in a variety of arenas in civil society, the economy, and government. The formula is simple: start with where you are and with what you have. For example, one brief, paradigmatic story illuminates in general terms what is at the heart of our problem and the way forward. In 1996, Ray Anderson, the CEO of Interface, one of the largest office carpet manufacturing corporations in the world, became Forbes magazine’s Entrepreneur of the Year. Anderson’s recognition came after a revolutionary two-year transformation in his own soul and in the ethos of his company, one that produced a model of ethical behavior and business more aligned with the truth quest.
The story starts with Interface setting up an environmental task force in response to consumer pressure. The task force approached Anderson as the CEO to give an inspirational kick-off speech presenting an “environmental vision” for the company. With a shock, Anderson realized he had no environmental vision. He had never given a thought to what the company was taking from or doing to the earth in the making of its products. Desperate for inspiration, he read a book that had propitiously landed on his desk, Paul Hawken’s The Ecology of Commerce. In an interview, he described the moment when he came to a particular phrase that suddenly confronted him with the enormity of the industrial devastation of the earth: “the death of birth,” E. O. Wilson’s term for species extinction. Anderson said, “It was a point of a spear into my chest…and as I read on the spear went deeper, and it became an epiphanal experience, a total change of mind-set for myself and a change of paradigm… for the company.”
As he investigated his company’s processes, Anderson was shocked to discover that, for every ton of finished product, his company was responsible for thirty tons of waste. It was, he realized, the “way of the plunderer… plundering something that is not mine, something that belongs to every creature on earth.” He then faced squarely the political dimension of the issue and realized that “the day must come when this is illegal, when plundering is not allowed…[and] people like me will end up in jail.” Anderson translated his periagoge into action and institutional change. He decided that if Interface couldn’t produce carpets more sustainably, then maybe it shouldn’t be producing them at all. He instituted “Mission Zero” for the company — a model program for recycling and eliminating negative impacts on the environment. In 2007, he was named as one of Time magazine’s Heroes of the Environment. By 2009, Interface was about halfway to its goal.55 Implicit in what Anderson called his “mid-course correction” was the moral revolution of giving priority to the truth quest, inverting the market principle and making a concern for the good of the whole a condition of pursuing profit.
This sort of turning around of the soul can be seen in a variety of citizen groups now pushing for greater corporate responsibility. There is a related consumer movement advocating “socially responsible investing” and “triple bottom line” accounting, where the bottom line of profit is balanced with concern for people and planet. Since 2005, the United Nations has developed and supported Principles for Responsible Investing (PRI) dealing with sustainable environmental, social, and corporate governance. What is most encouraging is that as consumers become more aware of the bigger picture and push for moral reform, such measures increasingly make good business sense.56 There is even a new grassroots campaign in the United States to amend the Fourteenth Amendment to the US Constitution, which, as noted, has been the basis for corporations claiming the rights and freedoms of “personhood.” Called Move to Amend, the movement has passed resolutions in over thirty counties and cities across the United States to reaffirm that “constitutional rights and freedoms apply only to persons and not corporations, partnerships, and organizational entities.”57 Such initiatives taken together signal the beginning of a shift in business and corporate culture — one that is driven by the growing number of morally reawakened individuals working, shopping, and living differently.58
This turning around of the soul toward the quest inverts the current Liberal assumption that self-interest — and private profit — should be the main driver of every economic calculation and replaces it with a consideration of the whole. When individuals try to balance self-interest with a consideration of the bigger picture, they discover, as Socrates did, that deep self-interest actually includes concern for the good of the whole.
As we slowly come to terms with the larger narrative of our runaway global economy, we are realizing that our two-hundred-year-old moral holiday in the interests of economic growth is an indulgence we can no longer afford. It was difficult to create a form of consciousness that would set about destroying the living biosphere that created us. We strain against our deepest nature to maintain this destructive one-sidedness, while we secretly crave wholeness and good health. Yet we can be encouraged by the fact that some of the more thoughtful of our founding fathers recognized that love of truth would inevitably require us periodically to rethink the foundations of government. Here are the words of Thomas Jefferson on this subject:
But I know also, that laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinion change with change of circumstance, institutions must advance also, and keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy, as civilized society to remain ever under the regime of their barbarous ancestors.59
We need to remember that Jefferson owned slaves and that the Constitution implicitly endorsed slavery as part of its political vision. Today we have a radically expanded understanding of the human condition, and Jefferson himself recognized that as our understanding changed, so our institutions needed to follow. This is our political challenge today: to clarify a reliable method for understanding the human condition and its possibilities for improvement, so that we can rethink government and economics.
This recovery of the truth quest proceeds as it has always done, on two levels, following the simple ancient wisdom of the alchemists: “as above so below; as within so without.” To know the world, one must know oneself; to know oneself, one must know the world. We are reminded that the search for the grail of the good life begins and ends with self-exploration — reflecting on anamnesis and discovering and telling one’s story.
The next chapter tells something of my individual story of opening to the wisdom quest. Then chapter 4 gives an overview of the “big” story of self-reflective humanity’s emergence from nature and of how the core structure of the quest emerges from the paradoxical nature of consciousness. In the process of weaving together our personal and our collective stories, guided by a concern with the common good, we make a surprising discovery: we find ourselves already on a path with a heart, engaged in the practice of a new ethics and a new politics.
* In this sense both modern-day conservatives and liberals with a small l are classical Liberals with a capital L. Both are committed to the institutional package of individual rights, minimal government, and free-market capitalism. Modern conservatives tend to be ideological purists who resist reforming the established institutions of classical Liberalism (in particular ideas of minimal government in conjunction with free-market forces) and tend to favor cultural norms associated with the eighteenth-century founders. Modern “liberals” are more open to reforming ruling institutions in the light of new knowledge, but in the service of the core mission of the Liberal revolution: the liberation of the individual from internal and external oppression and the promotion of a fuller flowering of what it means to be human. Typically, neither seriously questions the founding assumptions of Liberalism by expanding the metanarrative to include “big history.”