ONE | What Is Life?
WITH THE ARRIVAL of the new century, in 1800, the world, like a Humpty Dumpty egg, cracked wide open. Every belief and construct that held reality together simply loosened its grip. Even that most basic of entities, the human being, gave up its old, enduring definitions and fell in line as just another construct in need of serious rethinking. The century owes that radical shift, in part, not to a scholar, but to a little known pediatrician, Charles White, who published a treatise in 1799 that took on monumental importance, titled An Account of the Regular Gradation in Man, and in Different Animals and Vegetables; and from the Former to the Latter. In his Account, White predicted that the longest lasting and most static of philosophical ideas, the Great Chain of Being, would soon give way to something much more dynamic. Though he did not yet call it by name, we know it as the theory of evolution by natural selection.
The Chain had provided a schematic on which the Church hung the entirety of God’s created universe. Everything had its assigned place on that imagined Chain, starting with God and followed by nine levels of angels, then human beings, birds, animals, and finally rocks and stones. Under the new schema, human beings would no longer occupy their elevated position. To the contrary, God did not make man in His image, as we read in Genesis, but rather man evolved through accidental and competitive forces. We shared that same kind of birth with all the other animals. In this scheme, humans might thus wind up having no higher claim in the kingdom of created things than the apes and the chimps. “The Devil under form of Baboon is our grandfather!” Darwin wrote in his notebooks in 1838. The cosmic egg had decidedly shattered, and human essence slowly disappeared.
But that was just the beginning—or, in some significant ways, the end. In hindsight, one can sense the enormity of other basic changes in the air—the railroad would move people farther and faster than they had ever gone before and, along with the cinema, would utterly destroy the way the average person experienced time and space. The human voice, disembodied from the person, would soon move long distances over a system of telephone lines. The incandescent bulb would push daylight deep into the night. The list of advances and innovations that would occur in the nineteenth century seemed endless: the telegraph (1837), the steam locomotive (1840), inexpensive photographic equipment (1840s), the transoceanic cablegram (1844), anesthesia (1846), the phonograph (1878), radio (1896), and new and cheaper methods of industrial production. The list seemed to go on and on.
If all that were not bewildering enough for ordinary people, they would confront a further revolution in technology that would change forever the way they performed the most ordinary tasks. Typewriters arrived on the market in the 1870s. Bell invented the telephone in 1876; a year later Edison invented the phonograph. Cheap eyeglasses, made of steel, first appeared in 1843, but took another three or four decades to be affordable to the masses. The 1870s and 1880s saw the discovery of radio waves, the electric motor, the National Baseball League, and dynamite. By the end of the 1880s, everyday people could buy their first Kodak cameras. Guglielmo Marconi, the Italian inventor, received his first patent in 1896, for sending radio messages. With the invention of the internal combustion engine, scores of people would soon become auto-mobile. At the same time, the time clock would gradually fix every person to a tighter schedule. Modernity, like a mighty wind, swept across England, much of Europe, and the United States, blowing aside virtually every received idea. How to cope with the overwhelming enormity of so much change at such a fundamental level?
And then the absolutely unthinkable: God would die, or at least the idea of God would cease to define common, agreed-upon experience. The Chain and its maker both disappeared. When Nietzsche wrote his obituary for God in Also Sprach Zarathustra (1883–85), he meant a lot of things. But one thing in particular stands out. For almost two thousand years, the Church provided the definition of the human being. All of creation lay in God’s hands, including human beings, and every stitch of nature stayed radiantly alive through God’s constant, creative support, remaking all of existence second by second. Before the nineteenth century opened, God was not just alive in nature. He filled the world with His immanent presence. By the 1880s, such thinking, at least for intellectuals, no longer had vibrancy. God had opened his hands. And men and women and children—all his creatures, really—fell out of the comfort of his grip to fend for themselves. Without God’s constant attention, the contingent, immanent life, which had burned like a small flame, went out. Science took over the task of defining human existence. It could not hold a candle—or a Bunsen burner—to the Church.
Everything—all the usual, settled parameters of people’s day-to-day lives—would present themselves, one by one, as ripe for redefinition. It was a freeing time, an exhilarating time of prolonged experimentation and occasional moments of delight. The age promised power and pleasure, growth and the transformation of the self. But it also threatened to destroy everything people knew—more fundamentally, everything they were. It prompted Karl Marx, in his 1856 speech on the anniversary of the People’s Paper, to call for a new kind of human being: “[T]he newfangled forces of society . . . only want to be mastered by newfangled men.” Nietzsche, too, responded to the changes of the period by demanding a wholly new person, what he called “the man of tomorrow and the day after tomorrow.”
As Charles White had insisted, a static and fairly secure past would have to make way for a dynamic present—to say nothing of a wildly unpredictable future. The two towering intellects of the period, Marx and Nietzsche, held a fairly dim view of the fate of humankind. For, just as with our own technological revolution today, they both recognized the negative fallout from their own machine age. In fact, some historians argue that the discipline of sociology came into being in the nineteenth century through the efforts of Émile Durkheim, Max Weber, and Georg Simmel, because all three shared a general pessimism about the downward spiral of the social order, or as Simmel more succinctly described the times, “the tragic nature of social experience.”1 In his novel Fathers and Sons (1862), the Russian Ivan Turgenev described the philosophy of his radical hero Bazarov with a new word quickly co-opted by the philosophers: nihilism. Nietzsche expanded on the idea in his Will to Power (1885) and, a few years later, in 1888, in Twilight of the Idols, offered his ironically upbeat strategy for surviving the new modernism: “Nothing avails, one must go forward—step by step further into decadence (that is my definition of modern ‘progress’).”
When thinking about the mounting gloom in the nineteenth century, we should keep in mind that the term historians use to describe the century’s last years, fin de siècle, came from the title of a French play by F. de Jouvenot and H. Micard. First performed in Paris on April 17, 1888, Fin de Siècle chronicles the moral degradation that had been building in the century and which culminated around 1880. The playwrights intended the term to refer to the end: the end of human beings, as we know them, and the end of moral and spiritual progress. The play is a statement of anti-modernism, of despair and decadence, in which characters long for the good times of the past and deathly fear the horrors of the future.
Everything was up for grabs—which made the nineteenth century a period of tremendous uncertainty, or, more accurately, of indeterminacy. Lewis Carroll got it right—nobody, neither horses nor men, seemed able to put the world back together again. Even with all the prompting from Nietzsche and Marx, the future, perhaps for the first time in any significant way, began to collapse onto the present. The chasm separating now from then narrowed. No one knew what unsettling events lay ahead, what explosions might occur in traditionally familiar and stable areas, like travel, work, and recreation, but changes started coming at a faster and faster pace. The newly invented sweep-second hand kept unwinding the present until the future just seemed to disappear. The machine, without announcing itself or without, in many cases, being invited, bullied its way into the home and the office, and took charge. Nothing was off limits.
Near the end of the century, Louis and Auguste Lumière, pioneering brothers of the art form called cinema, produced a very short film titled La Charcuterie Mécanique. Louis described it this way: “We showed a sausage machine; a pig was put in at one end and the sausages came out at the other, and vice versa. My brother and I had great fun making the fictitious machine.”2 The film offers a potent allegory on mechanical production and technological innovation, the sausage machine perhaps even standing for motion picture cameras and projectors. But the brothers also joke about the way people get ground up and spit out at the whim of every new innovation and contraption. Surely, they also intended their viewers to think about the connection of swinishness with gross consumption—self-indulgence, gluttony, and materialism—and the numbing uniformity of the sausage links. Whatever the case, the Lumières do not present a very pretty or charming picture of the living and working conditions of the average person. Even with that most exuberant, most magical of all the new machines, the motion picture camera, the Lumières created what may be the first black comedy: a world of gloom and despair, in which individuals get reduced to their essence—to meat.
Alongside the invasion of privacy by that impolitic intruder, the Industrial Revolution, Sigmund Freud was busy taking people’s guarded secrets out of their innermost sanctum, the bedroom, and examining each one with the calculating eye of the scientist. In fact, Freud helped his patients confess all their secrets, no matter the content, in the bright light of day. Rooting about in the darkest recesses of life, Freud illuminated human behavior by making case studies, not out of the average, but out of the oddest and most bizarre individuals he could find. Just what are the boundaries of the human endeavor? What is the range of emotions that supposedly elevate us from animals to human beings? In his zeal to define human essence, Freud wrote about strange characters like the Rat Man and the Wolf Man. Like Oliver Sacks today, Freud took great delight and found great wisdom in all sorts of anomalies. He wrote about a man who dreamed of wolves hiding in trees and of another who spent his days and nights in sewers. We must look at the edges, he seemed to be saying, in order to find the rock-solid center.
The general public, forever curious about its own kind, heartily agreed. The noun freak, to describe characters, say, who look like they came out of a Diane Arbus photograph, comes into the language in the nineteenth century in the phrase “a freak of nature.” More recently, we know freaks, perhaps, as longhaired, unkempt hippies (who even turned the word into a verb by “freaking out”), or those from that same period whom we remember as Jesus freaks. At P. T. Barnum’s American Museum, an extremely popular fixture of the middle to late decades of the century in downtown Manhattan, at the corner of Broadway and Ann Street, more visitors filed into the sideshow, or the freak show, than into the big top for the main attraction. Barnum offered for public viewing bizarre creatures with names similar to Freud’s own cases: the Bearded Lady, the Lobster Boy, the Human Giant, and a host of other strange wonderments.
All this attention on the fringe and the freak and the underground resulted in a shockingly new aesthetics. Standards of decorum in behavior and dress and actions turned bizarre, outlandish—at times even revolting. Perhaps one of the biggest secrets a society holds is its criminals, its deviants and aberrant castoffs, which it prefers to keep safely tucked away in the shadows—in cells, in darkened chambers—anywhere, away from public view. Those lowlifes, too, became not just the darlings of the emerging avant-garde of the nineteenth century, but at times even models of the most powerful, unrestrained behavior. Imagine, thieves and prostitutes helping to define the idea of what it means to be human. Darwin’s theory of natural selection presented problems for the idea of moral development; in fact, the two seemed at odds. While we may find such ideas outlandish, we need only think of Dostoevsky’s nineteenth-century tale of existential angst, Notes from Underground, whose antihero prefers to take aim at modern life from deep under the earth, in his dank dungeon. Thus someone like Rudyard Kipling, in a poem titled “The White Man’s Burden,” harped about making good and morally upright citizens out of “your new-caught, sullen peoples,/Half devil and half child”—which should have been a snap for an imperial nation like Great Britain in the nineteenth century.
America and England suddenly were introduced to a new player from the demimonde, the con artist. The con artist could pull off sleight-of-hand tricks with the deftness of the best magician; rather than performing onstage, he or she preferred working the streets. Melville paid homage to the type in his novel The Confidence-Man. When the French symbolist poet Paul Verlaine shot his lover Arthur Rimbaud, in 1873, the press glorified Verlaine, particularly after he moved to England, calling him “the convict poet.” One century more and we get to Chris Burden, the American artist, shooting himself, not with a camera but with a rifle, in an art gallery, or, even more ghastly, having some random spectator shoot him. When we ask, “What is art?” we are also asking, “What are the limits of creativity, of human impulse?”
In the 1880s, leading lights in the demimonde enthroned Oscar Wilde as the reigning figure in an outlier aesthetic movement of the marginal that held a view of culture every bit as pessimistic as the sociologists’. To reveal their own character with total clarity, they named themselves, with a directness that startles, the decadents. These new aesthetes cultivated un esprit décadent, reflected in their profound distaste for bourgeois society and much of mass culture, and in their sympathetic embrace of the disreputable. Preferring to spend their leisure time in working-class hangouts, like music halls, theaters, and pubs—un mauvais lieu, as the poet Baudelaire put it—the decadents painted and wrote about the nefarious life that started up well after the sun went down. The decadents cozied up to thieves, ladies of the streets, and addicts of all kinds, and perhaps explored the most forbidden territory of all—homosexual love. Liberal psychologists in the period termed such lovers “inverts,” since their love tended to invert the normal and accepted order of society.
One can trace a fairly straight line from the decadents and underworld deviants of the nineteenth century to the thug and gangster aesthetic of the twenty-first century. If society declares certain people undesirable, not really fit for humanity, then outcasts can defuse the category by appropriating the term and using it for their own advantage. As a case in point, some black artists, kept out of white society for several centuries, turned the categories “outcast” and “criminal” and “loser” back on themselves, exploiting all the power and fear that the rest of the population found in those names. You call us thugs? Well, then, we will dress and act and talk like thugs, and with intensity. We will capitalize on the fear that you find in us; that way we can materialize, feel that we have some substance and meaning. Even our musical groups, with names like Niggaz With Attitude, they announced, will shock and disgust. We’ll even call our recording company Death Row Records.
The decadents’ outlandish, offensive behavior, some of it perhaps not even at a fully conscious level, formed part of an important nineteenth-century strategy of psychic survival. For example, Nietzsche declared that people could best develop their own potential by tapping into “the powers of the underground.” Not in the garden of earthly delights, but deep in the root cellar of humanity do people find the strength to move beyond all the accepted boundaries, categories, and definitions. No one wants to pass through life as one of the Lumières’ sausage people. By diving deep underground, ordinary people could transform, like mythological figures, into unpredictable human beings. Such wild strategies led Nietzsche to broadcast his own declaration of independence, in the name of the Übermensch, or the superman.
The new aesthetes did for art what Darwin and the demise of the Chain of Being did for the human psyche. They cracked wide open the old definitions of what was normal and abnormal, moral and immoral. They pushed the boundaries of gender, confounded the notion of correct and acceptable subjects for art, and refused all standard definitions and categories. No one would dictate to them where art stopped and music began, or where music stopped and dance began. They paved the way, in our own times, for the mixed-up objects that Robert Rauschenberg called his Combines, constructions that blasted apart finicky fifties definitions by combining sculpture and art, or art and dance, and so on. They granted Rauschenberg the liberty of gluing bits of newspaper cartoons, old advertising clips, and sections of menus onto large canvases. The new aesthetic made possible Cornell’s boxes, Picasso’s cubes, and contemporary mashups of all sorts.
The decadents wore funny clothes and smoked weird drugs. The fanciest dressers among them went by the name dandies. Only the beats or the hippies or the yippies could match them in their disruptive exuberance. Ginsberg owes his poetic life to that nineteenth-century soul force—Walt Whitman, yes, but with a nod, certainly, to Oscar Wilde as well. And like the free spirits of the late fifties and sixties, many nineteenth-century aesthetes, Wilde chief among them, spent time in jail or prison for their offbeat, deep-seated beliefs. Modern politics of conviction starts in earnest with Wilde’s The Ballad of Reading Gaol and ranges to Martin Luther King’s “Letter from Birmingham Jail” and Eldridge Cleaver’s Soul on Ice. You shall know us by our acts of disobedience. We shall find ourselves through such acts. Can it work? Can placing one’s body in front of the great inexorable machine accomplish anything serious and important and long lasting? Can it make a single person step out of line and confront what it means to be a thinking, fully alive human being? Well, just ask that other outspoken and feisty nineteenth-century jailbird, Henry David Thoreau.
Both the artist and the criminal stood outside a society—today we would use the phrase alienated from society—that hung on for dear life to an old and tired set of values. And so they both, criminal and artist, aimed at blowing apart the status quo. Professionals in the nineteenth century, in their passion for categorizing everything in creation, saddled both the avant-garde artist and the petty thief with a new name, the deviant. In England, an emerging capitalist society led to the passage of new laws, covering a broad range of punishable offenses, most of which had to do with new bourgeois concepts of property. Authorities came to define a wider range of criminality within the growing class struggle, requiring a new penal system that carried with it longer and harsher sentences. This new social arrangement prompted British authorities to build many more prisons and asylums.
Prison, as many reformers argue, produces a criminal class. Recidivism rates in the United States today hover around seventy-eight percent. This idea of the repeat offender developed in Paris around 1870; by the nineteenth century’s end almost fifty percent of all trials in France involved repeat offenders. And so the demand to identify every convict, behind bars and free, got folded into the century’s own drive for finding humanity’s basic identity. The phrase “Round up the usual suspects” comes from this period, which means, in effect, “Bring in the poor people, the people of color, the out of work, the feeble and insane. Bring me those who have had to resort to petty theft in order to survive.” Taking the idea of identification several steps further, authorities wanted to finger the criminal type before he or she ever conceived of committing a crime. That’s the idea, expressed more than a century later, at the heart of the 2002 film Minority Report.
HAVELOCK ELLIS, the respected and well-known psychologist, wrote a nontechnical study in 1890, titled The Criminal, in which he derides thugs and artists as nothing more than petulant adolescents. While Ellis articulates the general attitude in the nineteenth century toward anyone or anything that appeared to defy its natural category, his prose reveals, at the same time, a bit of envy or jealousy for the criminal’s wholesale freedom to engage with life. One can sense a struggle within Ellis, even in this brief passage, with what it means to be fully alive. Abnormal may not be so off-putting as Ellis makes it out to be. Notice the way Ellis compares criminals with artists:
The vanity of criminals is at once an intellectual and an emotional fact. It witnesses at once to their false estimate of life and of themselves, and to their egotistic delight in admiration. They share this character with a large proportion of artists and literary men, though, as Lombroso remarks, they decidedly excel them in this respect. The vanity of the artist and literary man marks the abnormal elements, the tendency in them to degeneration. It reveals in them the weak point of mental organization, which at other points is highly developed. Vanity may exist in the well-developed man, but it is unobtrusive; in its extreme forms it marks the abnormal man, the man of unbalanced mental organisation, artist or criminal.
Later in this chapter I will come back to the criminal and to Cesare Lombroso, an important figure in confining the deviant to a very narrow and stifling category.
Not only the criminal and the artist, but the deranged, as well, rose to the level of seer in the nineteenth century. The madman and the fool took their solid place, in art and literature, as the embodiment of wisdom and insight. On the other side, in a desire to lock up so-called crazies and get them out of the way, the British Parliament passed its first act establishing public lunatic asylums in 1808. At the beginning of the century, England could count no more than a few thousand lunatics confined in its various institutions; by century’s end, that number had exploded to about 100,000. Authorities defined that troubling category, insane, more and more broadly, slipping it like a noose around the necks of more and more unsuspecting British citizens.
But the increase in numbers only added to the lure of the lurid. Whatever festered away, sometimes hidden from the direct line of sight—the criminal, the crazy, and the beggar—artists and writers dragged into full view. Two archetypal narratives got played out in the period: Orpheus descending into the underworld and returning to the world of the heroic and the unexpected; and Alice dropping down the rabbit hole of possibility, only to return to the land of the normal and the expected.
Both Orpheus and Alice descend beneath the earth, and return home radically altered. Both their stories prompted the same questions: Who are we? What does it mean to be a human being? Who is alive, or more alive? Who is really dead? Does the deviant live life more fully, know death more intimately, than so-called normal persons? Do things have meaning only when the Red Queen says they do? Many intellectuals confronted the century as such a bafflement. Henry David Thoreau gives us this account of an unsettling dream, mid-century, in Walden: “After a still winter night I awoke with the impression that some question had been put to me, which I had been endeavoring in vain to answer in my sleep, as what—how—when—where?”
In the context of the nineteenth-century search for deep meaning, Thoreau’s refusal to pay six years of delinquent poll tax in protest of the Mexican-American War and slavery takes him one more step in his journey to answer those nagging questions—what? when? how? where? In July 1846, a judge sentenced Thoreau to a night in jail as punishment for his actions. To refuse the tax, to “Say no in thunder!” in the words of Herman Melville, is to say yes to oneself. In the face of the machine bearing down, a person must resist at whatever level he or she finds.
The question remains, “How can I maintain the autonomy of my own self?” For Thoreau, it meant refusing to pay a tax. It also meant publishing his ideas in an essay, in 1849, titled “Civil Disobedience.” By civil, he did not mean polite; he meant the private as opposed to the government. Theories of civil disobedience—the stirrings of one body against the machine of government—start here with Thoreau. Such resistance provides one way of maintaining the autonomy of the self. Thoreau offers a kind of self-controlled deviance.
Scientists, philosophers, artists and even educators believed that they could only arrive at answers to Thoreau’s dreamy questions by finding the bedrock of human essence, but through means other than civil disobedience. For some, that meant finding deep meaning through experiments with drugs. For others, the secret lay in fingerprints, or genes, or in the words we use, or in the images we dream. No matter—whatever the defining characteristic a person might choose, the quest betrays a great irony. Human beings, undergoing vast changes in their own nature, were trying, at the same time, to define what was essential about themselves—something akin to a person attempting to draw a self-portrait while staring into a distorted mirror. No one could see things clearly. There was no foundation, no starting place, and certainly no solid ground on which people could stand to define that elusive creature, the human being.
As a result, experts often conflated the process and the product. They mistook the search for the answer. So, for example, in their experiments with drugs, chemists confused feelings of euphoria with essential change. Rather than concluding merely that heroin, say, had altered perception, they took the alteration itself as proof that they had finally found the secret of life. Scientists never discussed how heroin actually interacted with the brain. Chemical reactions did not matter. Alteration and change were all. Such experiments had the look of someone trying to throw a ball off the wrong foot: The whole enterprise lacked grace and accuracy, the trajectory off target by a wide margin. Worse yet, no one could get a clear bead on the target.
The move to define human essence, in a great many instances, took on a coarseness and grossness. The century did not interest itself in the meaning of life, but more in what it meant to be alive. It did not ask, What is life? Instead, it went after What is aliveness? Early in the nineteenth century, only drugs could deliver the philosopher’s stone—opium and morphine and heroin serving as the alchemical key to universal understanding. Scientists confined their questions to matters of control—could they induce feelings of euphoria, say, and then reduce them, suspend them? They framed their questions the only way they knew how, for theirs was fast becoming a material and mechanical age, or rather, an age falling more and more under the sway of one machine or another. If machines produced, people would have to consume. And consume they did. And while people did not want to think or act like machines, the process of mechanization moved at a rapid clip, and exerted too much power for anyone to stop its advance. People did not, in many cases, know it, but the machine was clearly getting under their skin. It altered their perception. It framed their thinking.
As commerce raced rapidly forward and religion receded in people’s lives, science eroded faith, as well, by describing nature clicking away like some well-oiled engine. Consider Darwin’s theory of random, materialistic forces propelling life on its course. He even reduced human emotions and behavior to a series of chemical reactions in the brain. If chemists or psychologists could push a button or throw a switch, and with such an act activate the secret of life, that would have satisfied their urge. T. H. Huxley, the biologist, known in his day as “Darwin’s bulldog,” and who coined the word agnostic, described the new evolutionary world in this thoroughly mechanistic way: Man is “mere dust in the cosmic machinery, a bubble on the surface of the ocean of things both in magnitude and duration, a byproduct of cosmic chemistry. He fits more or less well into this machinery, or it would crush him, but the machinery has no more special reference to him than to other living beings.”
One of the Grimm brothers, Jacob, had retold the sixteenth-century story of the golem in 1808. Grimm was prescient. By 1808, he could already see that the golem perfectly captured the philosophical spirit of the age. In the original Jewish legend, a famed rabbi named Judah Low of Prague breathes life into a lump of clay, just as God had done with Adam, but, instead of creating human life, produces a creature called the golem. While the rabbi intends the golem to protect the Jews against attacks from the gentiles, in Jacob Grimm’s retelling, the golem assumes its own life, then grows monstrously large and out of control, so that its creator must eventually destroy it. Grimm had Goethe in mind, specifically his “Sorcerer’s Apprentice,” a 1797 ballad based loosely on the legend of the golem. In the “Apprentice,” a scientist’s supernatural powers, which he uses to animate inert life, once again lurch wildly out of control. Both Goethe and Grimm offered warnings to whoever would listen.
But neither Goethe nor Grimm seized the popular imagination. Mary Wollstonecraft Shelley did; and I want to spend some time with her story here, for she frames the period so well. In some ways, the search for the secret of life, as it must inevitably be, was naïve and immature. And that’s one reason, at least, that a young person, Shelley, could see with such astonishing clarity into the heart of the period. Shelley published her popular novel, Frankenstein; or, The Modern Prometheus, in 1818, at barely nineteen years of age, and settled on Prometheus for her subtitle with good reason, for she had written an allegory about an act of cosmic disobedience—stealing fire from the gods, or, in the context of the novel, discovering how to create life from inert matter. We can thus read Frankenstein as the ultimate pursuit for that one grand prize, life itself. But, like Grimm and Goethe before her, she also wanted her novel to serve as a warning, alerting scientists that they were chasing after the wrong thing, and that, in the end, the pursuit would ultimately destroy them. Shelley, who spent two years writing her book, invested Frankenstein with a great deal of humor and ironic detachment. She begs us to read her narrative in anything but a straightforward way. She makes us see the silliness in the search.
The search for the secret of life, because of its forbidden nature, found little favor with the general public and sank to the level of underground activity, pure and simple. Shelley understood that. Doctor Victor Frankenstein leaves home and follows his obsession and, in so doing, resembles many of the period’s own scientists: “From this day natural philosophy, and particularly chemistry, in the most comprehensive sense of the term, became nearly my sole occupation. . . . My application was at first fluctuating and uncertain; it gained strength as I proceeded, and soon became so ardent and eager, that the stars often disappeared in the light of morning whilst I was yet engaged in my laboratory.”
As Victor plunges deeper into his studies in natural philosophy, he moves more and more to the edges—becoming a fringe character—and, at the same time, deeper and deeper underground. The light goes out in the novel: Most of the action takes place at night, for Victor sees most clearly, he believes, by the dull reflected illumination of the moon. Like Dostoevsky’s underground man, Victor remains out of sight, working behind closed doors, in muted light, in hiding. He might as well work away at the center of the earth. Isolated and alone, he gives himself over to one scientific pursuit only, to the exclusion of everything else—family, friends, nature, even love—a pursuit that usurps God’s role: the creation of life. And so he asks himself the question that consumed the scientific community in the nineteenth century: “Whence . . . did the principle of life proceed? It was a bold question, and one which has ever been considered as a mystery; yet with how many things are we upon the brink of becoming acquainted, if cowardice or carelessness did not restrain our inquiries.” Nothing can turn aside the obsessive genius, for the person blinded by such overweening pride, by definition, has to resist as flat-out silliness any warning to steer clear of forbidden knowledge. The ego can become the world’s insult.
And so, like Doctor Faustus, some three hundred years earlier, Doctor Victor Frankenstein defies every accepted boundary of knowledge. His misguided ego demands that he possess all knowledge, starting with the creation of life and ending with the elimination of death:
“Life and death appeared to me ideal bounds, which I should first break through, and pour a torrent of light into our dark world. A new species would bless me as its creator and source; many happy and excellent natures would owe their being to me. . . . if I could bestow animation upon lifeless matter, I might in process of time (although I now found it impossible) renew life where death had apparently devoted the body to corruption.”
He succeeds in achieving the first half of his dream, the secret of life: “After days and nights of incredible labour and fatigue, I succeeded in discovering the cause of generation and life; nay, more, I became myself capable of bestowing animation upon lifeless matter.” He has not realized the second and more crucial half of his dream: the creation of life. And, for a brief moment, he’s not certain he wants to open that door. He hesitates. The story teeters; it can go either way. But that momentary pause, freighted with centuries of metaphysical meaning, passes like a heartbeat.
What follows may not be the very first passage in the new genre of science fiction, but it surely counts as one of the earliest, and Shelley gives it to us with a recognizable amount of tongue in cheek. In fact, this singular event has all the trappings of a parody of the gothic romance—a dark and rainy night, a candle nearly burned out, the dreary fall of the year, a woods both deep and creepy. At precisely one in the morning—with every solid citizen fast asleep—as the rain begins pattering against the windowpanes, Frankenstein “collected the instruments of life around me, that I might infuse a spark of being into the lifeless thing that lay at my feet.” And then Doctor Victor Frankenstein does what only up to this point God has done. He creates life: “By the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs.”
But Victor’s elation is shockingly short-lived. One brief paragraph later, Victor yearns to undo his miracle work: “Now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart. Unable to endure the aspect of the being I had created, I rushed out of the room . . . I beheld the wretch—the miserable monster whom I had created.”
Frankenstein’s creature has a life—of sorts. But he lacks a soul. He faces the world as a lonely and baffled outsider—the ultimate deviant—desperately in need of a female partner. The monster (who remains unnamed throughout the novel) begs his creator for a soul mate, so that he, too, can create life—normal life. But the doctor recoils at the prospect of creating yet another aberration, another freak of nature. Suddenly, for the first time, Frankenstein looks his creation in the eyes, and offers us his only full-blown account of that alien, animated being. The description, once again, rivals any parody of the gothic:
Oh! no mortal could support the horror of that countenance. A mummy again endued with animation could not be so hideous as that wretch. I had gazed on him while unfinished; he was ugly then; but when those muscles and joints were rendered capable of motion, it became a thing such as even Dante could not have conceived.
Throughout the rest of the novel, Victor flees from what he has done, only to have the monster confront him in the last several pages. Each thinks the other heartless, soulless, cruel, and wicked. Creator and creation merge, the reader recognizing them—doctor and monster—perhaps for the first time, as twin aspects of each other. Such merging must always take place, for whatever the endeavor, one cannot help but replicate oneself. What else is there? That’s why, when readers confuse the doctor with the monster, and refer to the latter as Frankenstein, they reveal a basic truth: The doctor and the monster live in effect, really, as one—opposing characteristics of a single, whole person.
Mary Shelley hints at this idea in her subtitle, The Modern Prometheus. For one cannot invoke Prometheus without raising the specter of his twin brother, Epimetheus. Where Prometheus stands for foresight, for a certain degree of prophesying, Epimetheus represents hindsight. That’s the only way he can see clearly, for he is befuddled by reality, and continually misinterprets the present. He promises to marry Pandora, but in keeping his word manages to let loose on the earth every evil known to humankind. He bumbles.
Victor, as I have said, exists both as himself and monster, creator and creation, light side and dark side, victor and victim—a seeming prophet, but in reality a man unwittingly bent on ruining everything he holds dear. According to Carl Gustav Jung, the Swiss psychiatrist who founded analytical psychology at the end of the nineteenth century, we are all twins, all of us in need of integrating the two halves of our divided soul. When Rimbaud writes, “Je est un autre,” he reminds us that, like Victor, we all live as part solid citizen and part monster, sometimes buoyed by brightness, and sometimes dragged down deep into our shadow selves.
The nineteenth century gives birth to a great number of twins because, as with Jung, many writers and philosophers find our basic human nature in that twinning: We pass our days as schizophrenic creatures. I offer only a few examples: Robert Louis Stevenson’s Strange Case of Dr. Jekyll and Mr. Hyde, Oscar Wilde’s The Picture of Dorian Gray, Fyodor Dostoevsky’s Double, Poe’s “William Wilson” and “Black Cat,” Kipling’s “Dream of Duncan Parrenness,” Guy de Maupassant’s “Horla,” and a good deal of the work of E. T. A. Hoffmann. The age courted so many doubles it needed a new word to describe the phenomenon, so the British Folklore Society fashioned one out of German and introduced it in 1895: doppelgänger, the “double-walker.”
We know the nineteenth century itself, as I said at the outset of this chapter, as a formidable and mighty twin—a century characterized by an energetic, lighter, and upbeat side, and a darker, more tragic one. That dichotomy helped fuel the search for meaning throughout the entirety of the nineteenth century, for in a large sense, meaning usually results in a degree of resolution. In the end, in a world more perfect, the nineteenth century, perhaps, might have reached some sublime integration.
But the age wanted little if any of that. We bump up against exceptions, of course—Thoreau, Emerson, Whitman—but for the most part professionals, from businessperson to biologist, craved Frankenstein’s power. That legacy has helped to shape the twentieth century, and it has gathered momentum in the first few years of the twenty-first century, in our avaricious political appetite for power and control and all-out victory at any cost. Mary Shelley’s warning about creating monsters and then having to live with them has gone unheeded. That’s the problem with nineteen-year-olds. No one listens to them. While generations of readers have embraced her novel as pure horror, Hollywood, of all industries, seems to have gotten it right. It read her story not just as horror, but as horror leavened with a good deal of dark, sometimes very dark, humor. Frankenstein has provided solid material both for the prince of gore, Boris Karloff, and for those sillies Abbott and Costello and Mel Brooks. The formula has proved highly successful: Since 1931, Hollywood has released forty-six separate movies based on Mary Shelley’s teenage novel.
And even if Hollywood never releases another film about Frankenstein, the monster will never really die—on the screen or off—for, as we shall see, he keeps reappearing, in many different forms, all through the course of the nineteenth and twentieth centuries. He certainly stalks our own times. He is, many would argue, us. Which is to say that we have all had a hand not just in creating him. We have also done our part in keeping him alive.
Very early in the nineteenth century, physicians got infected with Victor Frankenstein’s vision, and radically changed the underlying philosophy of medicine. From the ancient world on, doctors had aimed at restoring patients to health. Then, in the last decades of the nineteenth century, the profession assumed a radically different goal: the prolongation of life—no, the extension of life—sometimes beyond a time when it made good medical sense. (This desire, too, has been passed down to us in the twenty-first century.) Well-known and respected surgeons in London even believed that they could reanimate the recently dead, as if the longer one remained dead the harder it would be to wake up. This is no less than what Victor Frankenstein and hundreds of real scientists and writers also pursued in the period, expending their time and their souls in the pursuit of some special elixir, a magic potion, that would unlock that key secret of the universe: mastery over death or, turned another way, a hold on eternal life. But first there was that deep-seated, elemental spark to discover—as Victor Frankenstein put it—the “cause of generation and life.”
How different, Frankenstein from Pinocchio, but also how similar. “In 1849, before he became Carlo Collodi, Carlo Lorenzini described the Florentine street kid as the incarnation of the revolutionary spirit. Whenever there is a demonstration, said Lorenzini, the street kid ‘will squeeze himself through the crowd, shove, push and kick until he makes it to the front.’ Only then will he ask what slogan he must shout, and ‘whether it is “long live” or “down with”’ is a matter of indifference.”3 This passage is from a review of a book about the creation at the center of Collodi’s immensely popular and perdurable book, Pinocchio. Collodi imparted that revolutionary spirit to the heart of the children’s book when Geppetto, the lowly artisan, starts carving his puppet out of a block of wood only to pull off the ultimate miracle of the period—bringing Pinocchio to life as a young boy.
Collodi published his Pinocchio in 1881. It has been ever since one of Italy’s most treasured books: Italo Calvino, the great fantastical writer, confessed that Pinocchio had influenced his writing career his entire life. Toward the end of the story, Pinocchio begins to read and write. Through those two activities, he transforms into a “ragazzo per bene,” literally “a respectable boy,” or idiomatically, “a real live boy.” He looks at “his new self” in a mirror—a traditional way of representing self-reflection—and feels a “grandissima compiacenza,” which has been translated as simply “pleased.” Pinocchio is of course far from Frankenstein’s monster, but he arises out of the inertness of matter—in this case, out of wood. He begins as a puppet—controlled and manipulated and directed—but his master, Geppetto, imbues him with that revolutionary fervor of the street kid, a perfect blending of the political, the scientific, and the spiritual, in keeping with the interests of the late nineteenth century. Pinocchio “comes alive” in the broadest sense. The prize for his good deeds in the book is consciousness, the seat and secret of all life.
In the opening days of the new century—January 6, 1800, to be exact—Britain’s old and very staid Royal Academy got the search for the secret of life started. The Academy signaled both its approval and support for the quest in a peculiar way by announcing a prize of fifty guineas and a gold medal for the first person who could produce twenty pounds of raw opium from five acres of land. Settling on opium for its own experiments, the Academy had chosen a most ancient drug. In fact, the nineteenth century could have easily installed Paracelsus, the sixteenth-century physician, as patron saint of opium. According to legend, Paracelsus carried a sword with a hollow pommel in which he kept the elixir of life. Historians conjecture that the potion may indeed have been opium, which he affectionately called “the stone of immortality.” A contemporary description of his healing method links him closely with opium and connects him with the nineteenth-century belief in the essentialist qualities of that drug:
In curing [intestinal] ulcers he did miracles where others had given up. He never forbade his patients food or drink. On the contrary, he frequently stayed all night in their company, drinking and eating with them. He said he cured them when their stomachs were full. He had pills which he called laudanum which looked like pieces of mouse shit but used them only in cases of extreme illness. He boasted he could, with these pills, wake up the dead and certainly he proved this to be true, for patients who appeared dead suddenly arose.4
This is the sort of resurrective magic that drove scientific interest in the magical properties of opium.
An enterprising laboratory assistant named Thomas Jones claimed the prize money just after the century opened, by producing twenty-one pounds of opium from five acres that he had planted near Enfield, north of London. And the race was on. Opium tapped into some center of sensation, but no one knew for certain how, or where; and no one seemed able to control its effect. Nonetheless, from this moment on, chemists and physicians took the cessation of pain as the principal piece of evidence that they had homed in on that hidden center of power. The nineteenth century was shaping up as the century of the anodyne to such a degree that many scientists considered their experiments a success if they could eliminate pain in their subjects and maximize their pleasure. Even before Freud popularized the pleasure principle, the nineteenth century was hard at work putting that idea into practice.
Like every drug in this period, opium quickly moved out of the laboratory and into the streets. Doctors recommended it for virtually every illness, from a simple headache to tuberculosis to menstrual cramps. Up to the time of the first opium wars in the late 1830s, a British subject could freely buy opium plasters, candy, drops, lozenges, pills, and so on, at the local greengrocer’s. The English took to smoking opium, or ingesting a tincture called laudanum (from the Latin laudare, “to praise”), available, quite readily, at corner apothecary shops. Since laudanum sold for less than a bottle of gin or wine, many working-class people ingested it for sheer pleasure.
Some people, like the essayist Thomas De Quincey, perhaps indulged a bit too much. De Quincey started taking opium in 1804, and four years later found himself addicted. Even so, he could not stop himself from praising its exquisite pleasures. In 1821, he published his Confessions of an English Opium-Eater, in which he makes clear exactly what the scientists had hoped to accomplish. But his experience went well beyond science, and sounds, in its images of rebirth, vaguely like Paracelsus: “What a revulsion! what an upheaving, from its lowest depths, of inner spirit! what an apocalypse of the world within me! That my pains had vanished was now a trifle in my eyes: this negative effect was swallowed up . . . in the abyss of the divine enjoyment thus suddenly revealed. Here was a panacea . . . for all human woes; here was the secret of happiness . . . ”
America had no De Quincey. No one championed opium as the gateway to insight. Americans went after opium purely for enjoyment and recreation. And so, helped along in great part by the Chinese, who opened smoking dens in great numbers in the Far West during the 1849 rush for gold, opium attracted more users in America than in England. In 1870, in the United States, opium was more widely available than tobacco was in 1970. The Union Army issued over ten million opium pills and two million ounces of powdered opium to its soldiers. Both Kit Carson and Wild Bill Hickok wrote in their diaries how they passed many pleasant hours in those opium dens, much preferring smoking opium to drinking whiskey. Smoking left them with no hangover and, as a bonus, sharpened their shooting and roping skills to such a point, Hickok claimed, that he could perform his sometimes dangerous cowboy shows with not a trace of fear. In fact, he claimed invisibility on the stage. At the end of the nineteenth century, even the distinguished Canadian physician Sir William Osler, professor of medicine at Oxford, declared opium “God’s own medicine,” for, he said, it could perform miracles, not the least of which was curing all of the world’s ills.
Shortly after the Academy awarded its prize, a German chemist named Friedrich Wilhelm Sertürner, eager to find in that magic poppy the active ingredient—termed its “essence” or “basic principle”—isolated an alkaloid from raw opium in 1805. He named it “morphium,” after Morpheus, the god of dreams; later it became known as morphine. For many years, Sertürner experimented on himself, trumpeting the drug’s ability to eliminate all worry and pain. At one moment, Sertürner wrote, morphine could induce feelings of a state so foreign and so elusive that the philosophers had to coin a new word to describe it, euphoria. But, he added, sounding a bit like De Quincey, the very next moment it could make him feel “outright terrible”5—pitching him into a depression so dark and heavy and deep that it resembled a near-death experience. And then there it was again, something powerful, almost magnetic, pulling him back up to the heights of ecstasy. Oh, to be born and die and be reborn over and over again. It was the experience of the century—and, for a moment or two, a victory for the chemist in his laboratory that he could not help broadcasting to the rest of the world.
So much did Sertürner crave the morphine experience, and so much was he a product of his own century, that he demanded a faster way to get the drug into his bloodstream. After all, speed was one of the great side benefits of the Industrial Revolution—faster travel, faster communication, and faster consumption. Sertürner read the zeitgeist; he wanted the high, and he wanted it right now. (High comes out of nineteenth-century street slang, first used around 1880.) But he never got his desire. Injecting drugs was not really possible until fifty years after Sertürner discovered morphine, when, in 1853, a Scottish doctor named Alexander Wood invented a crude hypodermic syringe, making it fairly simple for anyone to inject morphine directly into his or her own bloodstream.
By the time of the Civil War, American surgeons on both sides regularly dispensed morphine to soldiers on the battlefield. Farmers cultivated poppies in both Union and Confederate territories, with Virginia, Tennessee, South Carolina, and Georgia as the major producers. One particularly well-liked Union surgeon, Major Nathan Mayer, who found giving injections much too cumbersome, poured generous amounts of morphine onto his gloved hands. Riding past the troops, he invited them, like so many puppies, to take a lick. It mattered not at all that morphine left the battlefield littered with addicts: Immediate pleasure trumped long-range pain. Could such a potent drug count as the secret of life? It could, indeed, but could not, it seems, hold on to its top position.
For chemistry did not pause at the pure morphine stage for long. So intense was the search for more and cheaper ways to get hold of the spark of life that, in 1874, a pharmacist in London, searching for a nonaddictive alternative to morphine, boiled a batch of it with acetic anhydride, producing a substance with immensely powerful narcotic properties. (Though he did not know it, he had produced something highly addictive once more.) By 1898, a worker at the Bayer company in Germany had noted the amazingly powerful properties of that solution as a painkiller and, again quite accidentally, a cough suppressant. The head of Bayer’s pharmacological laboratory, Heinrich Dreser, tried the drug and called the experience absolutely heroisch, “heroic.” Like De Quincey, he had been yanked up to heaven and dropped into hell. His laboratory assistant exclaimed, after injecting a dose of heroin, “I have kissed God!” Such a tremendous high, experience tells us, brings with it an equally horrific downer.
Beginning in November of 1898, Dreser marketed the new drug under the brand name Heroin. An anodyne of such heroic proportions, Dreser proclaimed, should go by no other name. By 1899 Bayer was producing over a ton of Heroin a year and exporting it to twenty-three countries. Nine years later, after learning of the terribly addictive qualities of his new product, Dreser tried to bolster Bayer’s reputation by marketing a much safer albeit less potent anodyne. Dreser succeeded in developing a pain reliever from natural ingredients, extracting from the willow plant the chemical salicin, which he marketed as a powder under the trade name Aspirin.
In Germany, meanwhile, another chemist, Friedrich Gaedcke, was working to isolate the alkaloid of the coca leaf. His experiments proved successful around 1855, at which point he coined the name cocaine. The word appears for the first time in English in 1874. Here was another very powerful and very popular anodyne that physicians came to use as an anesthetic, in this case around 1860. The Coca-Cola Company used the coca leaves in its drink, which it advertised—in an understated way—as having great medicinal properties.
In 1884, Freud wrote a famous research paper on how the drug affected levels of awareness, entitled, simply, “Über Coca” (“About Cocaine”). In the essay, Freud talked of “the most gorgeous excitement of ingesting the drug,” and went on to offer a testimonial for the drug’s “exhilaration and lasting euphoria, which in no way differs from the normal euphoria of the healthy person . . . You receive an increase of self-control and possess more vitality and capacity for work . . . In other words, you are simply normal, and it is soon hard to believe you are under the influence of any drug.” Freud recommended cocaine to cure morphine addiction and used it himself as an anodyne for about two years. Both of the drug’s manufacturers, Merck and Parke-Davis, paid Freud to endorse their rival brands. Freud was a believer, and his use of the drug, some people believe, led directly to his work on dreams.
In that annus mirabilis, 1800, Sir Humphry Davy, a chemist, poet, and pioneer in electricity, came to the laboratory of a friend, Thomas Beddoes, in Bristol, England, called the Pneumatic Institute, to assume the role of supervisor. At his laboratory, Beddoes dedicated himself to exploring the healing effects on sick patients of inhaling various gases. The Pneumatic Institute was a popular place; the poets Robert Southey and Samuel Taylor Coleridge frequented it to take what they called “the airs,” not for any particular illness, but for feelings of general well-being. In 1772, the English chemist Joseph Priestley had discovered an odorless, colorless gas that produced an insensitivity to pain. While working at the Institute, Davy purified that chemical compound and named it nitrous oxide. After his first inhalation, he quickly lauded its astonishing property of inducing feelings of euphoria.
Completely taken with his discovery, Davy immediately announced to his colleagues that he had stumbled upon the true philosopher’s stone. He dazzled various assemblies of friends by inhaling his odorless gas and then sticking pins and needles into his body without any noticeable pain, giggling like a child through the entire experience. That proved, he declared, that he could control life, for he had located the seat of all feeling and sensation. Even the esteemed psychologist William James devoted an entire essay, which he published in the journal Mind in 1882, to the “fleeting brilliance of nitrous oxide inhalation.” Titled “The Subjective Effects of Nitrous Oxide,” James’s article extols the virtues of finding insight by inhaling the gas: “With me, as with every other person of whom I have heard, the keynote of the experience is the tremendously exciting sense of an intense metaphysical illumination.”
Robert Southey, one of the wilder poets of the period, on one of his trips to Beddoes’s laboratory, tried the odorless gas and praised it as the highest order of religious experience: “I am sure the air in heaven must be this wonder working gas of delight.”6 That was enough for Davy to keep alive his research on what came to be called, in the period, the famous “factitious air.” With James Watt, another wizard of the invisible—the inventor of the steam engine—Davy built a nitrous oxide chamber, in which people could absorb the wondrous gas through all the pores of their body. After one such session in his own chamber, Davy wrote the following paean to the power of the invisible, in his diary: “Nothing exists but thoughts. The universe is composed of impressions, ideas, pleasures and pain.” Another session in the chamber prompted the following poem from Davy: “Yet are my eyes with sparkling luster fill’d/Yet is my mouth replete with murmuring sound/Yet are my limbs with inward transports fill’d/And clad with new-born mightiness around.”7
In England, nitrous oxide became a popular plaything. One of the features of traveling carnivals through the countryside of England was something called “nitrous oxide capers,” where for a few pence people could enter a tent, inhale the wonder gas, and giggle their hearts out. They staggered so dizzily that they, themselves, became part of the amusement. Inhaling intoxicants became a staple of British life. Traveling mountebanks, calling themselves professors, would extol the virtues of, say, nitrous oxide or ether, and then invite audience members to step forward and breathe deeply.
A group of amateur scientists, who called themselves the Askesian Society, fueled their curiosity with group inhalations of nitrous oxide. After serving some time with the Society, a young member named Luke Howard gave a lecture to the group, in 1802, on the most evanescent thing imaginable, clouds—one vapor seeming to be as good as any other. He had done something, he said, that no one else had ever done—a taxonomy of clouds. The names he chose—cirrus, stratus, cumulus, and nimbus—meteorologists still use today. Believing that nitrous oxide had provided him with his revelation about the new science he had launched, meteorology, Howard came back to the gas so many times he became addicted.
No intellectual groups formed around nitrous oxide in America, but it nonetheless had a following in this country, too. By the 1820s, students at Yale used laughing gas regularly at their weekend parties to break from their studies. Unlike in England, though, someone in America quickly found out how to make money off the giggling gas. A Hartford, Connecticut, dentist, Horace Wells, witnessed a volunteer inhale nitrous oxide in 1844, while someone else cut a long gash into the man’s leg; the man looked down at his wound, wondered aloud about the long red line running down his thigh, and began laughing uncontrollably. The next day, Wells had a fellow dentist administer the “laughing gas” to him and extract one of his teeth. Wells later commented: “It is the greatest discovery ever made. I didn’t feel as much as the prick of a pin.”8 And so nitrous oxide gave birth to a new profession in America: painless dentistry.
Davy, meanwhile, turned his discovery to more serious medical uses, most notably to surgery: “As nitrous oxide, in its extensive operation, seems capable of destroying physical pain, it may probably be used with advantage in surgical operations in which no great effusion of blood takes place.”9 While he made that statement in 1800, it took almost half a century more, all the way to 1844, for hospital officials to allow his laughing gas into the operating room in the form of anesthesia. The word first enters the English language (from Greek by way of Latin) in 1721, to describe those who, because of some major paralysis or other disabling injury, experience “a defect of sensation.” In November 1846, Oliver Wendell Holmes suggested that the state created by nitrous oxide be named anesthesia, from Greek an, “without,” and aesthesia, “sensibility.” Just two years later, in 1848, anesthesia had already entered common parlance as a loss of all sensation induced by some chemical agent, which, at this early stage in its development, lasted for no more than two or three minutes.
Davy believed that anesthetized patients lay in a state of hibernation, lingering at the border separating life and death. The age had thus to confront this new creature, a breathing human frame that had been intentionally stripped of all feeling. It would do so over and over, in many permutations. Would anyone still call that seemingly inert lump of flesh on the operating table a human being? A truly sensate human being? Did that thing actually “have” a body? Or, more to the point, did it “have” a life? Though the patient felt no pain, he or she or it certainly seemed to lack consciousness. Or, on the other hand, was the patient pure consciousness, the state that everyone so vigorously pursued but that no one could ever name?
Michael Faraday, a chemist and physicist, towered over the Industrial Revolution as one of its chief inventors, producing an electric motor, dry-cell batteries, and a machine that powered a good deal of the revolution: a device he called a dynamo, a word from the Greek meaning “power and force.” He began conducting laboratory experiments at the Royal Institution in London in 1808, transforming liquids and then reversing the action, a process he called phase changes. In 1818, Faraday, a decade into his intense experiments, concocted an odorless gas that, he claimed, could produce much longer-lasting anesthetic effects than nitrous oxide.
Because of the potency of the new gas, Faraday claimed that he, and he alone, had tapped deeply into the secret compartments of life. So assured was he that he even named his discovery after the element that only the gods had the privilege of breathing—the quintessential or fifth element: the ethereal, or simply ether. With his new success in the laboratory, Faraday bragged, all human beings could now breathe deeply of the very same stuff as the gods; but the question remained, could they feel godlike? Crawford Long, a young doctor in Jefferson, Georgia, threw a series of wild parties, which he called his “ether frolics,” where he dispensed ether to his guests. A protégé of the dentist Horace Wells, named William Thomas Green Morton, administered ether to a patient at Massachusetts General Hospital on October 16, 1846. While the Boston Medical Association marked the success of Morton’s operation by naming October 16 Ether Day, the first successful operation using ether as an anesthetic did not take place in England until a year later, in 1847.
BUT SOME OTHER chemist or physicist always lurked in the wings, just waiting to announce the next potion that would throw that all-important switch, releasing the subject from pain and moving him or her into a fully altered state. Thus, in November of 1847, Sir James Simpson, a noted Scottish physician, announced the anesthetic properties of yet another new substance, trichloromethane, which Simpson insisted would place patients in a much less troubled slumber than all the other chemicals combined. He shortened the technical name to chloroform. (A Frenchman named Jean Dumas had compounded and named the liquid—the chloro- from the Greek for green—in 1834.)
Linda Stratmann, in her book on the history of chloroform, aptly titled Chloroform: The Quest for Oblivion, points out that never had anyone devoted so much attention to the quest of deposing consciousness. These experiments, for lots of people, carried real dangers. She says:
Sleep was believed to be a halfway house between consciousness and death, during which the brain, apart from maintaining basic functions, was inactive. The higher functions of the brain, the essence of what made an individual human, were therefore locked in the state of consciousness, and to remove, or suspend, these functions was to reduce man to little more than an animal. The creation of artificial unconsciousness therefore raised the specters of madness and idiocy. It was not only life, reason and intellect that were at risk, for the search was still in progress for the physical seat of the human soul, which might be in some part of the nervous system as yet not fully understood.
And then we have to think about the testimony of an expert, a doctor named James Parke, of Liverpool, who wrote his colleagues these words of warning in 1847: “I contend that we violate the boundaries of a most noble profession when, in our capacity as medical men, we urge or seduce our fellow creatures for the sake of avoiding pain alone—pain unconnected with danger—to pass into a state of existence the secrets of which we know so little at present.” Parke believed the very profession itself was at stake, and he saw the field of medicine headed in the wrong direction. He may have been right, for medicine did begin to focus on two things after mid-century: the cessation of pain, and the prolongation of life.
Such warnings did not, however, go totally unheeded. Parke and others like him prompted a cadre of professionals who believed that they could explore the foundations of life much better and more efficiently, without the use of any chemicals or gases. The list included mind readers, a great number of spiritualists, ghost talkers, experts on a new phenomenon called paramnesia or déjà vu, showmen, and even some scientists. James Braid, a highly respected Manchester surgeon, embodied all of those categories, along with a fairly large helping of daring and self-promotion. In 1841, reacting against the idea of anything so artificial as a chemical agent, Braid lectured about a much more powerful and organic fluid, uncovered earlier by the psychologist G. H. Schubert, that coursed through the limbs of every man, woman, and child. Braid called this fluid “animal magnetism,” and it did not just hold the key to the secret of life, he told audience after audience; it was life itself. A trained practitioner, Braid advised, could reinvigorate the élan vital in a sickly person, even in someone near death, and, like Paracelsus himself, miraculously return the person to a healthy and robust life. Or, that same practitioner could move a person to a state where feeling totally disappeared, allowing a physician to perform the most complicated operations on the subject using as an anesthetic only Braid’s method.
Braid called his regimen neuro-hypnotism, which he shortened, in 1843, to hypnotism. The popular press called it “nervous sleep” or “magnetic sleep,” a quasi-coma in which people found themselves unusually susceptible to suggestions. No matter the name, Braid, with his new field of hypnotism, or Braidism, as the press came to call it, declared that he had found the philosopher’s stone—the pluck of life—the seat of liveliness itself. And, at the drop of a three-shilling ticket, Braid would most happily prove, through his onstage manipulations, that he could turn a fully animated, waking life off and on in even the most defensive, disbelieving person. Capitalizing on the period’s emphasis on the gaze—in cinema, in the new museums, in photo galleries—Braid would begin each of his performances with the line, “Look into my eyes.”
Braid’s method migrated to America, in the late 1840s, through a flamboyant character named Andrew Jackson Davis, known during his lifetime as the John the Baptist of modern spiritualism. Davis had the ability, he claimed, to enter at will into a state of higher consciousness, what he called a “superior state,” lifting him far beyond ordinary clairvoyance. From that rarefied position, he said, he could see the secrets that so many scientists were looking for and failing to find. For instance, the human body became transparent to him, allowing him to view each organ and, most dramatically, those organs with direct access to the source of life. That’s why, without being told the illness beforehand, he could bring patients not just back to health, he wrote, but to extended life. Davis bore witness to the entire process of death and the soul’s voyage to the hereafter, a place that, with the help of a bit of Braidism, he claimed, he had entered many times. And while he could not bring other people to his own superior state, he could, through the use of Braid’s techniques, place them in a trance, allowing them to enjoy, even momentarily, a somewhat higher level of spiritual awareness, and a taste, brief though it was, of eternal life.
But some social philosophers in the period believed that if wakefulness could be disembedded from daily experience, enhanced through various opiate derivatives, muffled through anesthesia, or suspended through hypnotic power, then it did not serve well as the fundament of the human condition. It behaved more like an accessory to something else, something much more thoroughly basic and unshakable. With wakefulness as the defining element, people could easily go through life as transparencies—as ghosts. They might just as well feel that they did not exist, except in those moments of peak emotions—profound euphoria or deep depression. In short, drugs worked at just too ethereal a level for some professionals, the experience just too evanescent in the turbulence of the nineteenth century, to serve such a vital role. Scientists demanded something more perdurable, perhaps even tangible, by which to define human existence. They went off in different directions to find the bedrock, the fundamental.
As with many advances and inventions, this next one also came about quite by accident, but it fit perfectly into the period’s desire to define human essence. The physiologist J. E. Purkyne thought he had found that foundational correlative when he picked up a glass slide one day in 1823 and noticed that he had left behind an indelible, precise impression of the patterns on the ends of his fingers. After some rather simple experimenting, he had to conclude, much to his own amazement, that everybody possessed a unique system of identification within easy reach—at the tips of their fingers. While a person might alter his or her behavior, or even personality traits, or fake being hypnotized, prints persisted absolutely unaltered over a person’s entire lifetime.
Even before the publication of Darwin’s On the Origin of Species, Herbert Spencer, the English philosopher responsible for the phrase “survival of the fittest,” posited a theory that the growing human sciences later tagged “social Darwinism.” His idea fit tightly into the period’s belief in the inferiority of people with darker skin. Spencer argued that those with a decided lack of social and moral development—criminals—were nothing more than “savages,” an inborn state, a category that included the poor, the laboring class, the Irish, lower-class women, and of course blacks. One could spot their internal deficiencies by distinctive outward signs. To ensure the smooth functioning of upper-crust white society, authorities needed to describe, type, classify, and, most important of all, keep these dangerous people under close supervision and observation. To keep its population safe, the state would have to produce a taxonomy of deviants.
Key social changes were underway that placed great emphasis on the criminal. The revolution in production created a new bourgeois appreciation of property, bringing with it a wide range of new punishable offenses along with punishments of greater severity. Carlo Ginzburg, in a brilliant essay about crime and punishment in the nineteenth century, “Morelli, Freud and Sherlock Holmes: Clues and Scientific Method,” makes the following observation:
Class struggle was increasingly brought within the range of criminality, and at the same time a new prison system was built up, based on longer sentences of imprisonment. But prison produces criminals. In France the number of recidivists was rising steadily after 1870, and toward the end of the century was about half of all cases brought to trial. The problem of identifying old offenders, which developed in these years, was the bridgehead of a more or less conscious project to keep a complete and general check on the whole of society.
A novelist like Dickens could thus identify all kinds of brutes in his books strictly by their appearances. “Low brow” and “high brow” referred to people’s foreheads—those who looked like Neanderthals and those who looked like intellectuals, as if such a thing were even possible—and assigned their owners to either a lower or a higher class. People came into the nineteenth-century world, then, born as criminals, an innate and irreversible fault of character and personality. Anthropologists, psychologists, sociologists, ethnographers—all the emerging sciences devoted their attention to identifying and categorizing the antisocial element in society.
We have not shaken ourselves free of such base and racist thinking; it operates for some unconsciously, for others more overtly. But it was, nonetheless, part of the essentialist thinking of the period, the thinking that tried to find the bedrock of, well, everything. Taxonomies ruled the day: Order and definition and category made the world come alive, and made it possible at the same time for those in authority to control it with ease. One of the most basic and wrongheaded of those essential nineteenth-century categories was race.
That narrowing of thinking continues through the twentieth century, and on through the twenty-first. The majority of men in prisons today in America are African Americans, the overwhelming majority of those for nonviolent drug offenses—inhaling or ingesting or imbibing some controlled substance. America incarcerates its black adult males at a higher rate per capita than did the South African government during the worst years of apartheid. Sentences turn out to be much harsher for young men of color than for whites who commit the very same crimes.
Francis Galton, the cousin of Charles Darwin, knew the problem only too well, and decided to do something about it. Galton had already expressed his deep-seated fear of the end of the “highly evolved” white race in his first major book, entitled Hereditary Genius: An Inquiry into Its Laws and Consequences, which he published in 1869. In that book, he made his case that the highest ideal for the white race could be found in the ancient Athenians. The closest to them in his day, he maintained, were the aristocratic British. And he named the enemy that threatened the established order and its heredity: the darker, lower, and less moral races.
He relied on earlier work on fingerprints carried out by the founder of histology, J. E. Purkyne, who, in 1823, distinguished nine types of lines in the hand, with no two individuals possessed of the same exact combinations. Galton turned Purkyne’s discovery into a practical project and began it by sorting fingerprints into eight discrete categories to use as a tool for mass identification. To make his taxonomy practical, Galton proposed that hospitals take the hand- and footprints of every newborn, thus creating an indispensable record of every citizen’s identity. If someone committed a crime, the authorities could more easily track down the identity of the suspect.
Near the end of his life, in 1892, Galton finished his project of sorting fingerprints and published his results in a long and dull tome titled, very simply and directly, Finger Prints, in which he laid out, with graphic detail, the eight major patterns—swirls, whorls, curlicues, spirals, and so on—shared by every last person in the entire world. England quickly adopted his method, and other countries, including the United States, soon followed. Carlo Ginzburg, in “Morelli, Freud and Sherlock Holmes,” says about Galton’s project, “Thus every human being—as Galton boastfully observed . . .—acquired an identity, was once and for all and beyond all doubt constituted an individual.”
We reach an incredible crossroads here: a semiotics of the individual based on patterns—or numbers—and not on anything so indeterminate, or even as informative and telling, as personality. Ginzburg points out that in the last decades of the nineteenth century, and specifically from 1870 to 1880, “this ‘semiotic’ approach, a paradigm or model based on the interpretation of clues, had become increasingly influential in the field of human sciences.”
Fingerprints as a unique identifying tool had to compete with an already existing system of identification called bertillonage, named for a clerk at the prefecture of Paris, Alphonse Bertillon. His intense scrutiny of police files had convinced him that no two human beings—not even identical twins—carried the exact same physical features. He worked out a fairly crude classifying system—the base identity of the human being, or as he called it, anthropometry—and thus developed, around 1879, the world’s first codified system for identifying human beings. Bertillon had such faith in his system that, on the basis of an oral description of a criminal, he had an artist sketch what the prefecture called “mug shots,” and used them as wanted posters. Later, he had two photographs taken of the accused, one frontal and the other in profile—the template for booking photographs to this day. Bertillon called these shots portraits parlés (“speaking likenesses”) and kept them filed by measurements of facial features.
Bertillon became well known in this country when his methods went on display at the World’s Columbian Exposition in Chicago in 1893. This was the fair from which so many young women disappeared, in some numbers, without a trace. Erik Larson opens his book The Devil in the White City: Murder, Magic, and Madness at the Fair That Changed America very calmly with an unsettling one-sentence paragraph: “How easy it was to disappear.” And then the next paragraph: “A thousand trains a day entered and left Chicago. Many of these trains brought single young women who had never even seen a city but now hoped to make one of the biggest and toughest their home.” “Vanishment,” Larson goes on to explain, “seemed a huge pastime. There were too many disappearances, in all parts of the city, to investigate properly, and too many forces impeding the detection of patterns.” The Chicago police grew more and more anxious; they adopted Bertillon’s system wholesale the following year, and in 1898 Chicago established the National Bureau of Criminal Identification based on his methodology.
Two other schemes took a radically different approach to identification and worked at defining human beings at a much more fundamental, more essentialist level. Like Galton’s and Bertillon’s, they also aimed at identifying the most terrifying of the new and growing problems in the nineteenth century, the ultimate destroyer of existing categories, the criminal. In every country police hoped to identify criminals before they committed their antisocial acts. Given the theories of social Darwinism and racial inferiority, scientists had no doubt that they could satisfy the police. Here, social scientists plumbed the most essentialist level imaginable, trying to define what it meant to be not only a human being, but an aberrant human being, at that.
In the first, a German physician named Franz Joseph Gall, working in the first couple of decades of the nineteenth century, believed he could determine character, personality traits, and, most important, criminality, by reading the bumps on a person’s head. As a medical man, Gall engaged in a kind of medical semiotics, making serious declarations about personality, for instance, based on certain telltale signs. Gall held to a fairly complicated theory about human essence. In principle, he argued, the brain functioned as an organ of the mind, and the mind possessed a variety of different mental faculties, each of which represented a different part, or organ, of the brain. The brain consisted of exactly twenty-six separate organs, including the dreaded “murder organ”—more precisely, the Organ of the Penchant for Murder and Carnivorousness. These organs, or areas, raised bumps on the skull in proportion to the strength of a person’s particular mental faculty. Fortunately for society, he allowed, he knew how to find the murder bump, and could do so by the time the poor subject reached puberty. He named his new system phrenology.
Like Gall, an Italian physician named Cesare Lombroso, the person mentioned by Havelock Ellis earlier in the chapter, resorted to this same sort of medical semiotics. He stands as the first person, really, to articulate the biological foundations of crime. Lombroso believed, perhaps even more strongly and certainly more ardently than someone like Galton, in social Darwinism and genetics, and he declared that criminals were born, not created out of conditions of poverty and class and color and so on. In 1876, Lombroso published a book titled The Criminal Man, in which he listed a range of physiognomic details that indicated a propensity toward both brutishness and criminality in men. These included large jaws, high cheekbones, handle-shaped ears, fleshy lips, shifty eyes, and, most telling of all, insensitivity to pain. He writes in a style that borders on the pathological:
The problem of the nature of the criminal—an atavistic being who reproduces in his person the ferocious instinct of primitive humanity and the inferior animals. Thus were explained anatomically the enormous jaws, high cheek-bones, prominent superciliary arches, solitary lines in the palms, extreme size of the orbits, handle-shaped or sessile ears found in criminals, savages, and apes, insensibility to pain, extremely acute sight, tattooing, excessive idleness, love of orgies, and the irresistible craving for evil for its own sake, the desire not only to extinguish life in the victim, but to mutilate the corpse, tear its flesh, and drink its blood.
Fingerprinting and bertillonage, phrenology and physiognomy, all those systems of classification, helped move medical forensics out of the hospital ward and into the offices of new nineteenth-century professionals, detectives. The man credited with that move, Sir Bernard Spilsbury, an Oxford graduate, went to work in October of 1899 for St. Mary’s Hospital Medical School in Paddington, London. As his biographer bluntly says: “Single-handedly he transported forensic medicine from the mortuary to the front page with a series of stunning, real-world successes.”10 In the process, he also developed the role of the expert witness. The staff of St. Mary’s quickly came under Spilsbury’s spell, and saw him as a person possessed of supernatural powers of deduction; they talked about him as if he were Sherlock Holmes come to life. Spilsbury prided himself on solving crimes with the slightest of clues, and preferred working in the field, alone, sifting through the muck for the slightest shred of evidence: “While others preferred the comfort of the predictable laboratory, he clambered across muddy fields, stood knee-deep in icy water, bent his back into howling blizzards, wrinkled his nose over foul-smelling corpses, prepared to travel to any destination and endure any hardship in order to study the fractured detritus of death.”
What made him perfect for the age is that he needed no body, no corpse, to solve, say, a murder case. At one point he concluded, for instance, that a pool of grey pulpy substance spread over a basement floor had once been a human being. In his ability to construct the most complex of stories from the simplest of clues, Spilsbury stands as the forefather of the most celebrated of contemporary pathologists, like Michael Baden, Herbert MacDonell, and Doctor Henry Lee, notably of the O. J. Simpson trial.
More than anything, Spilsbury loved to work on the long forgotten and unsolvable—what we today know as cold cases. Ordinary citizens in the nineteenth century followed Spilsbury’s magic in the newspapers the way contemporary audiences watch the wonders of CSI and Cold Case on evening television. As in the nineteenth century, we live in fear—there are crazies out there—and we must have our crimes solved, or at least hear a good story about one, true or not, that ends in the resolution of the case. Then, we can breathe a bit easier, walk a little freer.
As comforting as Spilsbury may have been, it’s hard to imagine besting the well-ordered and logical mind of that quintessential sleuth, Sherlock Holmes, of 221B Baker Street, London. In story after story, Holmes focuses his infallible reasoning abilities on a jumble of evidence in order, as he repeatedly says in solid nineteenth-century fashion, “to get to the bottom of the matter.” Holmes continually amazes his friend, the naïve Doctor Watson, with his ability to solve crimes—CSI redux—and, like Spilsbury, he needs no corpse. Why, Holmes, how did you ever come to that conclusion? the doctor asks over and over. To which Holmes answers, using the one word popular with nearly every scientist of the nineteenth century, “Elementary, my dear Watson, elementary.” An essentialist to the core of his very being, Holmes processes all experience, including fingerprints, facial characteristics, and other bits of evidence, as elementary stuff. He cares not a whit about punishment or justice, desiring only to finger the culprit—to identify the perpetrator—and announce to an expectant audience the suddenly obvious truth.
Through his superhuman powers of deduction, Holmes plays the pure scientist, discarding everything superfluous to arrive at the rock-bottom, basic truth. At certain moments, when Holmes finds himself stumped by a crime, he reaches a heightened awareness—actually, just the sort of state that the chemists were after—by using his favorite seven-percent solution, cocaine. Under the influence, or high, Holmes, following the model of Andrew Jackson Davis, claims for himself the clarity of insight of the seer or the clairvoyant. (Recall, Freud recommended cocaine use for increasing levels of awareness.) In the middle of a welter of information and facts, Holmes sees an immediate pattern; or, rather, the cocaine, like an intellectual magnet, pulls all the particles into a pattern.
In a story titled “The Adventure of the Cardboard Box” (1892), some unidentified culprit has sent an old lady a box containing two severed ears. Who would do such a vile thing to a nice old lady? Such is Holmes’s usual problem, and the reader’s usual delight. Holmes solves the case using a version of Gall’s “medical semiotics” (recall, Conan Doyle had been a doctor before he took up writing):
‘As a medical man, you are aware, Watson, that there is no part of the human body which varies so much as the human ear. Each ear is as a rule quite distinctive, and differs from all other ones. . . . Imagine my surprise when, on looking at Miss Cushing, I perceived that her ear corresponded exactly with the female ear which I had just inspected. The matter was entirely beyond coincidence. There was the same shortening of the pinna, the same broad curve of the upper lobe, the same convolution of the inner cartilage. In all essentials it was the same ear.’
‘Of course, I at once saw the enormous importance of the observation. It was evident that the victim was a blood relation, and probably a very close one.’
Usually, when Holmes comes down from one of his highs to confront Watson, he assumes the intellectual strategy of Socrates, a seeming know-nothing—who of course knows everything and more—allowing Watson to come to the solution as if through his own powers of induction. As the characters trade roles, Conan Doyle makes apparent to his readers that Holmes and Watson are really twin aspects of each other. In “The Adventure of the Cardboard Box,” however, Holmes instructs: The details are too important, the lessons too crucial. He lectures the doctor on his own profession, about that most crucial of topics, a way of seeing individuality.
In this example of semiotics at work, Holmes attempts to deduce the solution to a crime based on one small set of details, in this case the shape of the pinna and lobe and cartilage of one ear. Carlo Ginzburg, in “Morelli, Freud and Sherlock Holmes,” connects the case to a nineteenth-century art historian named Giovanni Morelli who authenticated paintings in museums across Europe based not on stylistic features like color and strokes, but on such “inadvertent little gestures” as the shape of the ears of the artist’s figures. Ginzburg points out that Freud had read and commented on Morelli’s strategy, causing Ginzburg to realize “the considerable influence that Morelli had exercised on [Freud] long before his discovery of psychoanalysis.” Detective, psychoanalyst, and art historian all share (or exhibit) the age’s obsession for ordering and classifying: for the semiotics of seeing.
The task for all the practitioners of the so-called human sciences, then, was precisely Holmes’s task—to find the pinna, the lobe, that one essential truth. Like Holmes, they were trying to solve a major crime, the theft of human essence, to find ultimate meaning in the marginal, the irrelevant—the detail, to quote Freud, that lay “beneath notice.”11 No one particularly cared who had pulled off this particular slick robbery. The idea was just to return the booty, to redefine the human being at the core level. The academic branch we now know as the social sciences, devoted to the study of human behavior, came into existence in the latter half of the nineteenth century. One of its first fields of study was something called eugenics, a word coined in 1883 by Francis Galton, the same man who eventually concluded his career, out of exasperation, with fingerprints.
Just as Holmes arranged seemingly random details into recognizable patterns, Galton applied statistical methods to myriad physical characteristics and arranged them into basic definitions and types. By so doing, he caused the century to face new, bizarre questions: What does it mean to be classified as a Negro (or Ethiopian, to use the nineteenth-century term), or a Mongoloid, or—of supreme importance for Galton—a person who occupied the most elevated of those categories, a Caucasian? In his 1869 study Hereditary Genius, Galton expressed his fear about the demise of superior breeding if the upper classes did not maintain their dominance. After all, they had the good fortune of occupying that premier category, the “highly evolved” white race, and that meant they had certain responsibilities; for instance, they were charged with keeping the race pure.
While Galton turns out to be a somewhat obscure but pivotal figure in the debate over heritable traits, his new science of eugenics supplied the necessary foundation for all the later discussions of the key essentialist idea of the late nineteenth century—race. Galton argued that people inherited all their behavior, and insisted that those traits could and should be measured. Moreover, he wanted to rank them so as to demonstrate the relative worth of one group of people over another: Bad behavior meant bad genes. So, for example, in a shocking display of racial hubris, Galton proposed to show the superiority of whites over blacks by examining the history of encounters, and the manners and deportment on display in them, between white travelers in Africa and unruly, hostile black tribal chiefs.
In the most public display of quantification—in which he used statistical methods to reduce living human beings to numerical arrays—Galton set up a makeshift laboratory at the International Health Exhibition of 1884. For threepence, a man in a white smock would test and measure participants for all sorts of indices, including head size and its alleged concomitants, intelligence and beauty. People came in one end of the tent as personalities, and left the other end as averages, norms, and deviations. Each of these measurements, Galton believed, would take the world of science that much closer to the ultimate definition of the human being.
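A minimal sketch, in modern Python, of the kind of reduction Galton’s tent performed; the visitors and their head-circumference figures below are invented for illustration, not drawn from his records:

from statistics import mean, stdev

# Hypothetical head-circumference measurements, in centimeters,
# standing in for the indices recorded at the 1884 exhibition.
measurements = {
    "visitor_01": 56.2,
    "visitor_02": 57.8,
    "visitor_03": 54.9,
    "visitor_04": 58.4,
    "visitor_05": 55.6,
}

values = list(measurements.values())
average = mean(values)    # the "average"
spread = stdev(values)    # sample standard deviation: the spread around that average

for name, value in measurements.items():
    deviation = (value - average) / spread   # distance from the average, in standard deviations
    print(f"{name}: {value:.1f} cm, deviation {deviation:+.2f}")

Run as written, the script prints each visitor’s measurement alongside how far it sits from the group average: exactly the move from person to number that so unsettled, and so thrilled, the period.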
So convinced was Galton that he had found the way to define human essence, he wanted to use his theory to effect a social cleansing. In Galton’s scheme, heredity governed not only physical characteristics but talent and character as well, so, he said, “it would be quite practical to produce a highly gifted race of men by judicious marriages during several consecutive generations.”12 In Macmillan’s Magazine, a popular monthly, he proposed an idea that mimics Victor Frankenstein’s—the creation of life. His plan involved state-sponsored competitions based on heredity, celebrating the winners in a public ceremony, culminating with their weddings at no less a location than Westminster Abbey. Then, through the use of postnatal grants, the government could encourage the birth of eugenically superior offspring. (Later, he would argue that the state should rank people by ability and authorize more births for the higher- than the lower-ranked unions.) Finally, he would have the state segregate and ship off to monasteries and convents the categorically unfit and unworthy, where society could be assured that they would not propagate their enfeebled kind.
Charles Darwin went beyond mere individual races. He believed that he had found the secret of the collective human species. Like the death of God, evolution was an eighteenth-century idea that took hold in the nineteenth century. No one in early Christian Europe took seriously the idea that the present emerged out of the past. Of course, people observed change, but they did not necessarily hold to the idea of continuity, for Christianity postulated a world of living things completed in six days, a creation that forever after remained unchanged. The Christian worldview could accommodate sudden change, even catastrophe, but not slow, small changes over long stretches of time. For Christians, today’s natural world has existed this same way since its creation, except, as with Noah’s flood, when God chose to alter it.
In titling his 1859 book On the Origin of Species by Means of Natural Selection, Darwin deliberately overreached, for the idea that the origin of anything could exist outside God, he knew, would smack of heretical thinking. Against every accepted theory of the scientific community, Darwin set out to prove, by himself, the mechanism by which plants and animals—most notably humans—had achieved their present form. How could anyone get any more fundamental than Darwin, who was after all determined to plumb the very origin of every species on Earth, to trace our evolutionary ancestors back to the primordial muck from which they arose? With just one overarching mechanism, natural selection, Darwin could account for the endless variety of nature using only three related principles: one, all organisms reproduce; two, within a given species each organism differs slightly; and three, all organisms compete for survival. Not a divine plan, but changes in climate, weather, geology, food supply, and numbers and kinds of predators, created nature’s incredible biodiversity. Darwin had no patience with the idea of creation through any supernatural force.
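Those three principles can be rendered, very loosely, as a toy simulation. Here is a minimal sketch in Python, with an invented numerical trait and an invented environmental optimum; nothing in it comes from Darwin’s own pages:

import random

random.seed(1)
OPTIMUM = 10.0   # an arbitrary environmental target, chosen only for illustration
population = [random.uniform(0.0, 20.0) for _ in range(100)]   # initial trait values

for generation in range(50):
    # one: all organisms reproduce; two: each offspring differs slightly from its parent
    offspring = [parent + random.gauss(0.0, 0.5) for parent in population for _ in range(2)]
    # three: all organisms compete; only those best suited to the environment survive
    offspring.sort(key=lambda trait: abs(trait - OPTIMUM))
    population = offspring[:100]

# after fifty generations the surviving traits cluster near the optimum
print(sum(population) / len(population))

No plan directs the outcome; the bare repetition of the three principles pulls the population toward whatever the environment happens to reward.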
Ironically, despite the tremendous controversy surrounding On the Origin of Species in the nineteenth century, Darwin does not mention human beings until the next to the last page, and then only in a single sentence: “Much light will be thrown on the origin of man and his history.” Twelve years later, in 1871, in The Descent of Man, and Selection in Relation to Sex, he quite explicitly places human beings at the forefront of evolutionary theory, depicting them, along with other animals, as totally shaped by natural selection, a conclusion he had reached much earlier, in his notebook, in 1838, with the following absolute certainty: “Origin of man now proved—Metaphysics must flourish—He who understand[s] baboon would do more toward metaphysics than [John] Locke.” In the simplest terms, Darwin sought to wind back through time to uncover humankind’s ancestral traces, using every prehistoric cache of bones as evidence of the origin of species—ancient fingerprints of the ur-human being. Science would never reach a definition of the human being, Darwin reasoned, until it could fully explain its origins. In fact, its definition lay in its origins. The popular press interpreted his ideas in a simple, incorrect, and, for the great majority of people, frightening way: Human beings were descended from apes. It gave Edgar Rice Burroughs great delight to parody such ideas in his Tarzan books.
Darwin subtitled On the Origin of Species by Means of Natural Selection, “or The Preservation of the Favoured Races in the Struggle for Life.” Race was a convenient vessel into which scientists began to pour one of their major definitions of human essence. Here, as Darwin suggests, not all races compete as equals. Some, like Caucasians, are inherently more intelligent, stronger, craftier, and so on. Africans belonged to a much inferior race, a designation they could never shake. Caucasians could take heart that they enjoyed a superior existence. At the coaxing of many scientists, whites could at least define themselves by what they were not. And they definitely were not Africans. Testing would demonstrate the point—for instance, in cranial size. Centimeters mattered greatly.
Darwin and Galton—along with every other scientist in the nineteenth century—shared an almost religious fervor, as Stephen Jay Gould has observed, for the comfort of numbers: “rigorous measurement could guarantee irrefutable precision, and might mark the transition between subjective speculation and a true science as worthy as Newtonian physics.”13 Both Darwin and Galton constructed what amounted to precise x-ray photographs of the roots of humanity, several decades before Wilhelm Roentgen discovered what many called the Roentgen Ray in his laboratory in 1895. Not knowing exactly what he had found, and refusing to name it after himself, Roentgen settled on the name x-rays. The x-ray camera functioned as the ultimate tool for revealing the blueprint—the basic skeletal structure—of a single human being. Just as the cosmos contained under its visible crust a compelling, invisible structure that held it together, so people carried bones under their flesh that functioned in the very same way. Roentgen’s discomfiting magic camera—people complained of its invasive and insidious nature—allowed everyone to miraculously see the human substructure without killing the patient. Think, today, about the outrage against proposed airport screening machines that can x-ray the entire body at various depths. In Paris, near the turn of the century, x-ray “technicians” purported to show photographs of ghosts taken with the new invention.
Self-styled social philosophers, roughly around the same time as Roentgen—that is, in the last decades of the nineteenth century—held that the skeletal structure of human interaction lay in language and the stories that percolated out of language. The Grimm brothers, before they began their project of collecting and sorting fairy tales, had helped to construct, in an early philological undertaking, the Proto-Indo-European family of languages. In their drive to find that same elusive origin of the human species, the Grimms pushed the beginning of language back beyond historical record to a construct called Proto-Indo-European, or Hypothetical Indo-European. Some philologists argued that in studying Greek one could discover humanity’s basic tongue. Others countered, No, one must tunnel farther back, to Hebrew, or even the older Aramaic, to hear pure utterance prior to the babble of the Tower of Babel. There, one could come into contact with speech uncorrupted by time and thus tune in to what provides human beings with their essential humanness. Whatever the language one settled on, the brothers Grimm launched the study called philology, arguing mightily for the philosopher’s stone in the first primal grunt.
What makes the pedigree of languages visible is something called cognates—words common to several languages but with variant spellings. Similar sound patterns and slight sound changes across languages suggest family members—from distant cousins to brothers and sisters. Using the analogy of cognates, a concept promulgated around 1827, Carl Jung constructed his theory of the collective unconscious, whereby our stories, myths, and even our dreams find expression in similar symbolic patterns from one culture to the next. Why else would themes repeat themselves in stories and dreams, from disparate countries over vast spans of time? Surely, such cultural echoes must reveal yearnings deep within the DNA of human experience.
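A crude illustration of the intuition, sketched in Python: a handful of genuinely cognate words for “mother,” compared by raw surface similarity. Comparative philology rests on regular sound correspondences rather than on spelling overlap, so the sketch shows only why the resemblances caught the nineteenth-century eye:

from difflib import SequenceMatcher
from itertools import combinations

# Cognate forms of "mother" across several Indo-European languages,
# given here in rough transliteration.
mother_words = {
    "English": "mother",
    "German": "mutter",
    "Latin": "mater",
    "Sanskrit": "matar",
    "Russian": "mat",
}

for (lang_a, word_a), (lang_b, word_b) in combinations(mother_words.items(), 2):
    ratio = SequenceMatcher(None, word_a, word_b).ratio()   # 0.0 (no overlap) to 1.0 (identical)
    print(f"{lang_a} / {lang_b}: {ratio:.2f}")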
An English physician named Peter Mark Roget embodied the period’s obsession for classification and ordering, coupled with a great love of the language. He was one of those remarkable people who knew a little about a vast range of things. He came to understand the way the retina made a series of stills into moving images, an observation that led to an early forerunner of motion pictures, the zoetrope. Roget also helped Humphry Davy with his experiments with nitrous oxide.
As a young man he made lists—of death dates, of remarkable events—but most of all he loved to collect words that had similar definitions. In 1852, he published one of his most extensive lists, of words with overlapping definitions, and gave it the title Roget’s Thesaurus of English Words and Phrases Classified and Arranged So as to Facilitate the Expression of Ideas and Assist in Literary Composition.
The Grimms had dug through the culture at such a basic level that their work seemed in perfect harmony with the birth of both the idea of the “folk” and the idea of the “folk soul.” We owe to the nineteenth century the fact that we can talk so freely about such a thing as the German people as distinct from the French or the Irish. The cultural historian Peter Burke sets this most radical discovery into its historical context: “It was in the late eighteenth and early nineteenth centuries, when traditional popular culture was just beginning to disappear, that the ‘people’ or the ‘folk’ became a subject of interest to European intellectuals.”14 German philologists, like the Grimms, first posited the idea of “the people,” and introduced a cluster of new terms to help give shape to their discovery: folk song, folktale, and folklore. This idea had far-ranging implications, of course, for politics—just think about nascent nationalisms—but the idea also changed the face of education around the world.
A German educator, Friedrich Froebel, believed that the folk soul developed very early in children. And so, in 1840, to nurture that most basic quality, Froebel invented the idea of kindergarten. In those “children’s gardens,” where teachers planted their seeds of learning, Froebel hoped to bring out that very same thing that scientists and philosophers were also pursuing, “the divine essence of man.” To that end, Froebel designed a series of blocks in various forms—the world reduced into its constituent shapes—and asked children to make out of them stars, fish, trees, and people. (Frank Lloyd Wright’s mother bought her son a set of Froebel blocks with great results. Maybe the blocks helped shape his sense of space and form.) As an educator, Froebel was asking children to see everything in its basic, elemental parts. No wonder, for Froebel had a background in crystallography, and just as crystals grow from a molecular seed, he believed, children could create the world out of similar seeds—in this case, building blocks. His exercises further reduced the world to forms of nature (or life), forms of beauty (art), and forms of knowledge (science, mathematics, and especially geometry).
The British immediately delved deeply into their own country’s folk soul and found a rather distinctive and powerful one. Francis James Child, the first major folklorist of the British ballad tradition—the British and American Folklore Societies have their beginnings in the nineteenth century—did for ballads what the Grimms, in their later careers, managed to do for fairy tales: He collected, described, and arranged them in the 1890s. And because he dated them as much earlier than most fairy tales, Child claimed ultimate cultural authority for his ballads, arguing, in fact, that he had caught more than English ballads in his net. He had far surpassed the Germans, he claimed, for he had found the folk soul of all the Anglo-Saxon peoples. Child lobbied all European countries to establish a national repository of their own earliest songs and tales.
In all these narrative expressions—language, myth, song, fairy tale, ballad, dream—social scientists tried to reduce the great and wild variety of creative production into its elemental parts. This kind of early anthropological study acquired the name, quite appropriately, of structuralism, since it attempted to disclose the scaffolding of society, the armature on which every artistic pursuit rested. As its name implies, structuralism purported to uncover essentials or elementals—the defining units of human interaction, across various cultures. The idea spread. Emerging scientists of human behavior, soon to be called social scientists, sifted every human activity and enterprise for its constituent parts, in the desire to reveal the blueprint of the human psyche.
Technological advances, for instance, took structuralism as a model and made it possible to shatter the illusion of activity itself by breaking movement into a series of stills. And one remarkable piece of nineteenth-century technology, cinematography, perfected by Auguste and Louis Lumière in 1895, exemplified what virtually every scientific invention and innovation attempted to accomplish during the period: to arrest the rush of history and analyze it in a single, understandable unit. Likewise, as experience tried to run away, the camera continually froze it in place. The fin de siècle knew this technological marvel initially in a machine called a stroboscope, part of the burgeoning science of chronophotography. (The Lumières had borrowed from the technology developed by Louis Daguerre in the 1830s for photo reproduction.)
What is myth, after all, but a series of events retold in a fabulous way? What is speaking but the uttering of discrete sounds, which the linguists of this period called phones and later phonemes—sounds that, in the right combination, the mind perceives as words? What is motion but a series of still frames? The century had prepared for such ideas, and the motion picture camera caught on quickly. Less than a year after its introduction, a number of dealers in various European cities began selling the Lumières’ new invention. In Vienna, one of those dealers staged that city’s first public performance of moving pictures on March 22, 1896. Writers began to refer to something called “the age of the cinema” and “the new cult of the cinema.” The world now had something startlingly different—mass entertainment. The novelty not only refused to die out or disappear as nothing more than a fad; it increased in popularity and continues to this day, of course, to grab the imaginations of audiences.
At the end of the century, as philosophers and scientists exhausted their attempts to seize on essential definitions for a world that seemed to be fast slipping away, technology came to the rescue. The camera stanched the hemorrhaging of humanity by making at least one instant of experience permanent. But that technology had another, opposite side, for the photograph left its trace in nothing more substantial than ghostly images—the very same state, ironically, as the disappearing fleshy existence it hoped to record. With motion picture technology, perceiver and perceived came eerily together.
This new technique of reproducing amazingly exact, moving images of objects and people, as one historian of nineteenth-century Vienna puts it, “went hand in hand with a loss of the material, haptic and vital existence.”15 The haptic life—that is, a touching and feeling, fully alive existence—presented itself to people, for the very first time in history, as a choice. Marx worried about this draining away of feeling. In 1856, he asked, “the atmosphere in which we live weighs upon everyone with a 20,000 pound force, but do you feel it?”16 Are we really expected to? Is it possible for us to feel it? With enough wakefulness and awareness, can we really feel it? We are supposed to answer yes, I believe, for one of Marx’s major concerns was to wake people up to their feelings. According to the scholar Marshall Berman, Marx even tailored his writing style to this goal, expressing his ideas “in such intense and extravagant images—abysses, earthquakes, volcanic eruptions, crushing gravitational force,”17 forcing people to read from their nerve endings out.
Over time, as their experience included less and less of the fleshy original, people would do more than just accommodate themselves to ghostly emanations. They pushed aside the real thing and went for the image. After all, the simulacrum was neater and less messy than the real thing. The U.S. Army made life-sized cardboard cutouts of soldiers serving in Iraq—made from photographs of the real person—to pass out to families, to keep them company while their sons or husbands did battle ten thousand miles away. The Marines could not fashion these so-called Flat Daddies fast enough to keep up with the demand. People in the nineteenth century, just like these contemporary families, were being asked, more and more, to situate themselves within the new world of flattened images, to place their faith in a technology that robbed them of their senses.
Indeed, faith—in its base, religious sense—became an issue, in some ways one of the grandest issues, in the nineteenth century. It started early in the century; we continue to debate its influence today. In the late summer of 1801, some twenty thousand people—young and old, men and women, overwhelmingly white but with a few blacks as well—gathered for what was billed as the largest revival meeting in all history, in Bourbon County, Kentucky. The event became known as the Cane Ridge Revival. Interest in it grew over the course of the century, and the movement emerged as a politically powerful evangelical wing of Protestantism. The publication of Darwin’s godless theories mid-century gave the movement just the boost it needed to ensure its success, causing it to spread throughout the South and West.
One can trace an almost uninterrupted history of religious fundamentalism from August 6, 1801, at Cane Ridge, Kentucky, to the present day. A reemergence of Darwin as a scapegoat for the alleged moral lassitude of the majority of Americans has helped recharge fundamentalism today. We still live, in large part, in a context shaped by the nineteenth century. School boards and legislators and clergy argue the case for evolution or creationism with as much conviction and rancor as in the nineteenth century, if not more. And, repeating conditions in the nineteenth century, technology directs a larger and larger share of our lives, serving to intensify the debate. Intelligent design, creationism soft-pedaled, vies now with evolutionary theory for space in school curricula—a continuation, in other words and terms, of the old nineteenth-century struggle to understand exactly how the world works.
The Indo-European family of languages, the structural components of myths, the phonetic patterns of speech; other innovations of the period such as the Braille reading system, the gestures of sign language, and the symbols of the International Phonetic Alphabet; and, perhaps most important, the fundament of God’s creation—all these undertakings and endeavors left their trace, a distinct and basic pattern that, like footprints in the sand, one could read. The family tree of languages also described a distinct outline, a particular shape of human communication. Architecture provided perhaps the clearest outline, the most salient blueprint of order and arrangement. It also provided something more, a bonus for this period.
In both England and America, architecture meant out-and-out solidity. The Victorian critic and man of letters John Ruskin reclaimed the Gothic as the arche techne—the highest artistic pursuit—of the nineteenth century. Architecture in general, Ruskin argued, was not about design but about something much more fundamental, pure, and basic. Buildings rose in the air through a strict adherence to mathematical relationships. Those relationships revealed God’s imprint, His divine plan for the order of things, from the golden mean to the magic square.
If one wants to study the subject of education in the nineteenth century, or even in the Middle Ages, for that matter—when architecture predominated—one must look it up in any encyclopedia under the heading “edification.” Germans called a nineteenth-century novel of a young man or woman moving toward maturity a bildungsroman—a foundation book, one in which a young person begins to construct his or her own life, ideally on solid ground. Within this context, to raise a child is to build a building. To edify is, literally, to build; but the word also referred, in the nineteenth century, to moral education, to the building of character.
In Europe and in America, in the nineteenth century, schools for teaching teachers were called normal schools, named after the seventeenth-century template, the norma, used for drafting perfect right angles. Being normal is about assuming a right angle to the ground, a perfect ninety degrees, so that one stands true and tall. Posture reflects attitude, a particular kind of leaning or inclination. In the nineteenth century, the idea of the normal expanded to refer, for the first time, to human behavior. Such language conflates architecture, building, posture, and growing up, as if there were no inherent stability and solidity to the idea of education itself, and it needed to borrow the vocabulary of the most seriously engineered activity—architecture.
In the nineteenth century, as buildings rose higher and higher, architecture appeared more and more to defy gravity. Just at the end of the century, America erected its first skyscrapers in concrete, made possible by a British invention of around 1824, Portland cement, which gave concrete the requisite strength so that buildings could reach far into the sky. People could mimic those buildings—standing tall and powerfully straight by gaining bulk and mass and, above all, strength. Many people achieved such stature through weight training, what devotees today call “pumping iron.”
An American naturopath named Bernarr MacFadden developed a weight-training program during this period, employing a regimen of both nutrition and exercise for a population that he saw as desperately in need of strength and solidity. Aiming his program at both body and mind—at physical as well as moral well-being—MacFadden redefined for the common person the idea of the normal. He took the metaphor of the body as edifice quite literally and, as if everyone served as his or her own biological engineer, gave his new movement the odd name of bodybuilding. His magazine, Physical Culture, carried the slogan “Weakness Is a Sin.” MacFadden made the normal person synonymous with the strong person. In times of moral and psychic uncertainty like the nineteenth century, and like ours, it appears, people need to keep up their strength. To do otherwise, for MacFadden, was to deviate from a path of absolute righteousness. As everyone else began to disappear, he gave his disciples a way to stand their ground, to attain substance and strength. He fortified them—with words and with nutritional supplements.
Nineteenth-century architecture made the idea of solidity starkly visible. It is hard to argue with a building’s presence. America, in particular, reached its architectural apogee in one singularly amazing structure. John A. Roebling, a structural engineer, designed the Brooklyn Bridge; completed in 1883, its 1,595-foot main span was longer than that of any suspension bridge yet built. The so-called stiffened suspension bridge—hovering high above the earth, levitating in midair—seemed to be fashioned out of solid metal and yet poised in a powerful hypnotic trance. Seen from the side, it could even pass for a bony x-ray of a bridge. The suspension bridge presents an image of architecture turned inside out—suspended, dependent—held aloft by . . . what? A few steel cables? Cement piers? Perhaps only faith.
If Gothic architecture characterizes the High Middle Ages, the suspension bridge characterizes the late nineteenth century. The master mason of medieval Britain, the architect, gave way to a new magus of the Industrial Revolution, the structural engineer. The suspension bridge shows itself off in underlying, basic elements—in tensile strength, coefficients of expansion, braided steel cables, the calculus. But what makes Roebling’s bridge defy gravity is his refinement of a seventeenth-century invention for the building of cathedrals—the cantilever—or, as a nineteenth-century book on bridge building more accurately refers to it, “the flying lever.”18 The architectural exuberance of the Brooklyn Bridge hangs in space as a monument to the underlying philosophy of the period—the drive to uncover the essential in virtually everything, in the natural world and in the created world as well.
But there was another world, too. The bridge also served as a potent symbol for the idea of crossing—in particular, the idea of crossing over to the other side. To find the secret of life, some explorers ventured to that other world where the dead were thought to congregate, and brought back news of eternal life. Séances provided them a bridge. As we shall see, the nineteenth century found other such bridges—out-of-body travel, trances, hypnotic states, and so on. In this sense, we can count all bridges as suspension bridges—suspended between the land of the living and the land of . . . well, no one knew for sure.