CHAPTER 1
PIECES OF LIFE
Cells Past and Present
“Ex ovo omnia.”
So declared the frontispiece of Exercitationes de generatione animalium (1651) by the seventeenth-century Englishman William Harvey, sometime physician to James I. It expressed a conviction (and no more) that all things living come from an egg.
William Harvey’s motto “All things from an egg” in the frontispiece of his 1651 treatise on the “generation of animals”.
It’s not really true: plenty of living organisms, such as bacteria and fungi, do not begin this way. But we do. (At least, we have done so far. I no longer take it for granted that this will always be the case.)
“Egg” is an odd term for our generative particle, and indeed Harvey was a little vague about what he meant by it. Strictly speaking, an egg is just the vessel that contains the fertilized cell, the zygote in which the male genes from sperm are combined with the female genes from the “egg cell” or ovum. Yet it’s easy to overlook how bold a proposal Harvey was making, in a time when no one (himself included) had ever seen a human ovum and the notion that people might begin in a process akin to that of birds and amphibians could have sounded bizarre.
The truth of Harvey’s insight could only be discerned once biology acquired the idea of the cell, the fundamental “atom of biology”. That insight is often attributed to Harvey’s compatriot and near-contemporary Robert Hooke, who made the most productive use of the newly invented microscope in the 1660s and ’70s. Hooke discerned that a thin slice of cork was composed of tiny compartments that he called “cells”. This is often said to be an allusion to the cloistered chambers (Latin cella: small room) of monks, but Hooke in fact drew a parallel with the chambers of the bee’s honeycomb – a term that itself probably derives from the monastic analogy.
Robert Hooke’s sketch of cells in cork, as seen in the microscope.
The popular notion that Hooke established the cellular basis of all living things is wrong, however. Hooke saw cells, for sure – but he had no reason to suppose that the fabric of cork had anything in common with human flesh. In fact, Hooke imagined that cork cells are mere passages for transporting fluids around the cork tree. The cell here is just a passive void in the plant’s fabric, a very different notion from the modern concept of an entity filled with the molecular machinery of life.
More provocative were the observations of the Dutch cloth merchant Antonie van Leeuwenhoek in 1673 that living organisms can be of microscopic size. Leeuwenhoek saw such “animalcules” teeming in rainwater – mostly single-celled organisms now called protists, which are bigger than most bacteria. You can imagine how unsettling it was to realize that the water we drink is full of these beings – although not nearly so much so as the dawning realization that bacteria and other invisibly small organisms are everywhere: on our food, in the air, on our skin, in our guts.
Leeuwenhoek contributed to that latter perception when he discovered animalcules in sperm. That was one of the substances that the secretary of London’s Royal Society, Henry Oldenburg, suggested the Dutchman study after receiving his communication on rainwater. Leeuwenhoek looked at the semen of dogs, rabbits and men – including his own – and observed tadpole-like entities that “moved forward with a snake like motion of the tail, as eels do when swimming in water.” Were these parasitic worms? Or might they be the very generative seeds themselves? They were, after all, absent in the sperm of males lacking the ability to procreate: young boys and very old men.
Here we can see the recurrent tendency to impose familiar human characteristics on what is evidently “of us” but not “like us”. The French physician Nicolas Andry de Boisregard, an expert on tiny parasites and a microscope enthusiast, claimed in 1701 that “spermatic worms” could be considered to possess the formative shape of the fetus: a head with a tail. In 1694, the Dutch microscopist Nicolaas Hartsoeker drew a now iconic image of a tiny fetal humanoid with a huge head tucked up inside the head of a spermatozoon: not, as sometimes claimed, a reproduction of what he thought he could see, but a figurative representation of what he imagined to be there.
The “homunculus” in sperm, as imagined by Nicolaas Hartsoeker.
This was one of the most explicit expressions of the so-called preformationist theory of human development, according to which the human body was fully formed from the very beginning of conception and merely expands in size: an extrapolation down to the smallest scale of the infant’s growth to adulthood. According to this picture, the female egg postulated by Harvey continued to be regarded in the prejudiced way in which Aristotle had perceived the woman’s essentially passive role in procreation: it was just a receptacle for the homunculus supplied by the man.
That was, though, a different view to the one Harvey envisaged, in which the body developed from the initially unstructured egg. Harvey, like Aristotle, thought that semen triggered this process of emergence, which Aristotle had imagined as a kind of curdling of a fluid within the female. The idea that the embryo unfolds in this way, rather than simply expands from a preformed homunculus, was known as epigenesis. These rival views of embryo formation contended until microscope studies during the eighteenth and early nineteenth centuries – particularly investigations of the relatively accessible development of chicks inside the egg – put paid to the preformation hypothesis. Embryos gradually develop their features, and the question for embryologists was (and still is) how and why this structuring of the tissues comes about.
* * *
The observations of early microscopists did not, then, engender a belief that life is fundamentally cellular. That cells are a general component of living matter was not proposed until the early nineteenth century, when the German zoologist Theodor Schwann put the idea forward. “There is one universal principle of development for the elementary parts of organisms,” he wrote in 1839, “and this principle is in the formation of cells.”
Schwann developed these ideas while working in Berlin under the guidance of physiologist Johannes Müller. One of Schwann’s colleagues in Müller’s laboratory was Matthias Jakob Schleiden, with whom he collaborated on the development of the cell theory. Schleiden’s prime interest was plants, which were more easily seen under the microscope to have tissues made from a patchwork of cells, their walls constituting emphatic physical boundaries. This structure wasn’t always evident for animal tissues (especially hair and teeth), but Schwann and Schleiden were convinced that cell theory could offer a unified view of all living things.
Schleiden believed that cells were generated spontaneously within the organisms – an echo of the notion of “spontaneous generation” of living matter that many scientists still accepted in the early nineteenth century. But he was shown to be wrong by another of Müller’s students, Robert Remak, who showed that cells proliferate by dividing. Remak’s discovery was popularized – without attribution – by yet another Müller protégé, Rudolf Virchow, who tends now to be given the credit for it. All cells, Virchow concluded, arise from other cells: as he put it in Harveian manner, omnis cellula e cellula. New cells are created from the division of existing ones, and they grow between successive divisions so that this series of splittings doesn’t result in ever tinier compartments. Virchow proposed that all disease is manifested as an alteration of cells themselves.
Virchow was the kind of person only the nineteenth century could have produced – and perhaps indeed only then in Germany, with its notion of Bildung, a cultivation of the intellect that encouraged the emergence of polymaths like Goethe and Alexander von Humboldt. Virchow studied theology before taking up medicine in Berlin. While establishing himself as a leading pathologist and physician, he also became a political activist and writer and was involved in the uprisings of 1848. As if to demonstrate that nothing was simple in those times, this eminent biologist and religious agnostic was also profoundly opposed to Charles Darwin’s theory of evolution, of which his student Ernst Haeckel was Germany’s foremost champion.
Virchow thus had fingers in many pies; but in his view they were different slices of the same pie. The influence of politics, ideology and philosophy on science is always clearer in retrospect, and that’s nowhere more apparent than in the physiology of the nineteenth century. For Schwann, “each cell is, within certain limits, an Individual, an independent Whole”: an idea indebted to the Enlightenment celebration of the individual. The cell was a living thing – as the physiologist Ernst von Brücke put it in 1861, an “elementary organism” – which meant that higher organisms were a kind of community, a collaboration of so many autonomous, microscopic lives, in a manner that paralleled the popular notion of the nation state as the collective action of its citizens. Meanwhile, Schwann’s conviction of the cellular nature of all life, implying a shared structural basis for both plants and animals, was motivated by his sympathy for the German Romantic philosophical tradition that sought for universal explanations.
For Virchow, this belief in tissues and organisms as cellular collectives was more than metaphor. It was the expression writ small of a principle that applied to politics and society. He was convinced that a healthy society was one in which each individual life depended on the others and which had no need of centralized control. “A cell … yes, that is really a person, and in truth a busy, an active person,” he wrote in 1885. “What the individual is on a grand scale, the cell is that and perhaps even more on a small one.”
Life itself, then, showed for Virchow how otiose and mistaken was the centralizing doctrine of the Prussian statesman Otto von Bismarck, who at that time was working towards the unification of the German states. Virchow attacked Bismarck’s policies at every opportunity and denounced his militaristic tendencies, so enraging the German nobleman that Bismarck challenged Virchow to a duel. The physiologist shrugged it off, making Bismarck’s belligerent bluster seem like the aristocratic posturing of a bygone era.
The idea of the “germ” as a rogue microbial invader was the counterpart to the idea of a body as a community of collaborating cells. It was entirely in keeping with the political connotations of cell theory that germ theory blossomed in parallel. Once Louis Pasteur and Robert Koch showed in the nineteenth century that micro-organisms like bacteria can be agents of disease (“germs”), generations of children were taught to fear them. Germs are everywhere, our implacable enemies: it is “man against germs”, as the title of a 1959 popular book on microbiology put it. After all, hadn’t a bacillus been identified as the cause of cholera in 1854? Hadn’t Pasteur and Koch established microbes as the culprits behind anthrax, tuberculosis, typhoid, rabies? These germs were nasty agents of death, to be eradicated with a thorough scrub of carbolic. And to be sure, the antiseptic routines introduced by the unjustly ridiculed Hungarian-German physician Ignaz Semmelweis in the 1840s (as if washing your hands before surgery could make any difference!), and later by Joseph Lister in England, saved countless lives.
This new view of disease had profound sociopolitical implications. The old notion of a disease-generating “miasma” – a kind of cloud of bad air – situated illness in a particular locality. But once disease became linked to contagion passed on between people, a different concept of responsibility and blame was established. The politicized and racialized moral framework for germ theory is very clear in the description of a French writer from 1885 who talked of disease as “coming from outside, penetrating the organism like a horde of Sudanese, ravaging it for the right of invasion and conquest.” This was the language of imperialism and colonialism, and disease was often portrayed as something dangerously exotic, coming from beyond the borders to threaten civilization. Those who supported contagion theory tended to be politically conservative; liberals were more sceptical.
From the outset, then, our cellular nature was perceived to entail a particular moral, political and philosophical view of the world and of our place within it.
* * *
We come from a union of special cells: the so-called gametes of our biological parents, a sperm and an egg cell. Perhaps because this fact is taught in primary schools – a reassuringly abstract vision of human procreation, typically presented with no hint of the confusing richness and peculiarity of stratagems and diversions it engenders – we forget to be astonished by it. That we begin as a microscopic next-to-nothing, a pinprick somehow programmed with potential, is remarkable and counter-intuitive. It’s slightly absurd to imagine that infants will casually accommodate this “fact of life”, which seems on reflection barely conceivable. Only their endless capacity to accept the magical with equanimity lets us get away with it.
And so it continues through the educational grades: cells exist, and that is all we need to know. School children are taught to label their parts: the oddly named bits like mitochondria, vacuoles, endoplasmic reticulum, the Golgi apparatus. What has all this to do with arms and legs, hearts, brains? One thing is sure: the cell is anything but a homunculus. In which case, where does the body come from?
What the cell and the body do have in common is organization. They aren’t random structures; there’s a plan.
But that word, so often bandied about in biology, is dangerous. It doesn’t seem likely that we can ever leach from the notion of a plan its connotations of foresight and purpose. To speak, as biologists do, of an organism’s “body plan” is to lay the ground for the idea that there exists somewhere – in the cell, for where else could it be? – a blueprint for it. But it is not merely by analogy, but rather as a loose parallel, that I invite you to wonder where the plan, the blueprint, for a snowflake resides. There’s a crucial distinction between living organisms and snowflakes, and it is this distinction that explains why snowflakes do not evolve one from the other via inheritance but are created de novo from the cold and humid winter sky.1 But there are good motivations for the comparison, for the body’s organization, like the snowflake’s, is an instantiation of a particular set of rules that govern growth.
Virchow and his contemporaries were already coming to appreciate that cells are more than fluid-filled sacs. In 1831, the Scottish botanist Robert Brown reported that plant cells have a dense internal compartment that he called a nucleus.2 By Virchow’s time, cells were considered to have at least three components: an enclosing membrane, a nucleus, and a viscous internal fluid that the Swiss physiologist Albert von Kölliker named the cytoplasm. (Cyto- or cyte means cell, and we’ll see this prefix or suffix a lot.)
Kölliker was one of the first to study cells microscopically using the technique of staining: treating them with dyes that are absorbed by cell components to make their fine structure more visible. He was a pioneer in the field of histology, the study of the anatomy of tissues and their cells. Kölliker was particularly interested in muscle cells, and he showed that these are of more than one type. One variety has a stripy (“striated”) appearance when stained, and Kölliker noticed that striated muscle cells also contain many tiny granules, later identified as another component of animal cells that were named mitochondria in 1898. Around the same time, other substructures of the cell’s interior were recognized: the disorderly, spongelike folded membrane called the endoplasmic reticulum, and the Golgi apparatus, named after the Italian biologist Camillo Golgi. A vigorous debate began about whether the cell’s jelly-like internal medium – called the protoplasm – is basically granular, reticular (composed of membrane networks) or filamentary (full of fibrous structures). All of these options were observed, and the truth is that it depends on when and where you look at a cell.
Images of onion cells drawn in 1900 by the biologist Edmund Beecher Wilson. These show some of the internal structures seen at different times in different cells. The single dense blob in several of these is the nucleus. Sometimes this seemed to break up into threads or blobs, perhaps in the process of cell division. What was going on in there?
All this internal structure led the zoologist Edmund Beecher Wilson to express regret that the term “cell” had ever been coined. It was a misnomer, he said – “for whatever the living cell is, it is not, as the word implies, a hollow chamber surrounded by solid walls.” Some others wondered if the cell was really the fundamental entity it had been supposed to be, not least because a cell membrane was not always visible under the microscope. Perhaps it was actually these organized contents in the protoplasm that were the real stuff of life? “We must be careful,” warned zoologist James Gray in 1931, “to avoid any tacit assumption that the cell is the natural, or even the legitimate, unit of life and function.”3
At any rate, the cell contained so much stuff. What was it all for?
It was becoming clear, too, what some of the key chemical components of cells are. Chemists interested in the processes of life – by the end of the century that topic was known as biochemistry – had figured out that cells contain chemical agents called enzymes that carry out life’s characteristic panoply of metabolic reactions. Some enzymes, for example, allow yeast to ferment sugar into alcohol. In 1897, German chemist Eduard Buchner showed that intact cells weren’t necessary for that to happen: the “juice” that could be extracted from yeast cells could produce fermentation on its own, presumably because it still carried the crucial enzymes, undamaged, within it.
These molecular ingredients, like workers in a city, had to be organized, segregated, orchestrated in the time and place of their actions and motions. Chemical reactions in the cell have to happen in the proper order and in the right location; things can’t be the same everywhere in the cell. And so the “social” view of bodies as communities of cells was repeated for the single cell itself: it was a sort of factory populated by cooperating enzymes and other molecules. This hidden machinery enables a cell to persist and maintain itself, to take in substances and energy from the environment and use them to carry out the metabolic reactions without which there is only death.4
At the turn of the century, the substructure and organization on which the animation of cells depends was largely beyond the resolving power of the microscopes. But it was clear enough that not all cells are alike in their composition and structure. Bacteria have rather little in the way of visible internal organization. They belong to a class of micro-organisms called prokaryotes, and they are typically round or elongated and sausage-shaped. The language of biological classification is always a little presumptuous, but it takes nothing away from bacteria to say that their cells are structurally relatively “simple”. They lack a nucleus – hence the label “prokaryotes”, meaning “pre-nucleus”. (More presumption – as though bacteria just haven’t yet discovered the wisdom of having a nucleus but will wake up to it one day. In fact, bacteria have existed for longer than eukaryotes; they and other prokaryotes dominate much of the planet’s ecology, and evidently have no need of “greater sophistication” in order to thrive.)
Human cells, along with those of other animals, plants, fungi and yeast, are said to be eukaryotes: a term that simply connotes that their cells have a nucleus. Eukaryotic organisms may be multicelled, like us, or single-celled, like yeast. The latter is an example of a “lower” eukaryote: more presumption, of course, but meaning that the degree of organization in the cell is less than that evident in the higher eukaryotes like peas, fruit flies and whales.
For now we can set prokaryotes aside. There is, mercifully, no need either to look in detail at what all the complex structure of the human cell is about, other than to say that it can be usefully regarded as a compartmentalization of the processes of existence. Membrane-wrapped substructures of the cell are called organelles, and each can be somewhat crudely considered to carry out a specific task. Mitochondria are the regions where a eukaryotic cell produces its energy, in the form of small molecules that release stored chemical energy when transformed by enzymes. The Golgi apparatus functions as a kind of cellular post office, processing proteins and dispatching them to where they are needed. The nucleus is where the chromosomes are kept: the material encoding the genes that are passed on when a cell divides or an organism reproduces. What we do need to hear more about, very shortly, are those chromosomes, because they are an important part of what defines you as an individual, and absolutely vital for orchestrating the life processes that enabled you to grow and which sustain you daily.
The human cell.
By the early twentieth century, it was clear that what sets living matter apart from the inanimate is not merely a question of composition: of what life is made of. Neither is it just a question of structure. Organisms and cells clearly did have a hierarchy of significant, specific yet hard-to-interpret structures reaching down to the microscopic and beyond. And that mattered. But the real reason living matter is not equivalent to other states of matter such as liquids and gases is that it is dynamic: always changing, always in the process of doing something, never reaching a steady equilibrium. Staying alive is not a matter of luxuriating in the state of aliveness but is a relentless task of keeping balls in the air.
Researchers today might rightly point out that this dynamic, out-of-equilibrium character is not unique to life. Our planet’s climate system is like that too: a constant channelling of the energies of the sun and of the hot planetary interior into orchestrated cycling movements of the oceans, atmosphere and sluggish rocky mantle, accompanied by flows of chemical elements and heat between the different components of the planetary system. The system is responsive and adaptive. But this is precisely the point: there are parallels between a living organism and the planet itself, which is why the independent scientist James Lovelock pushed the point from analogy to the verge of genuine equivalence in his Gaia hypothesis. Arguments about whether the planet can be truly considered “alive” are moot, because the living systems – rainforests, ocean microfauna, every creature that takes in chemicals and turns them into something else plus heat – are in any case a crucial, active part of the planet’s “physiology”.
This activity of the planetary biosphere commenced close to four billion years ago and has not ceased since. Virchow’s omnis cellula e cellula has a significance scarcely less than that of Darwinian evolution, which ultimately depends on it (ironically, given Virchow’s views on Darwin). It establishes a basis for what Aristotle imagined as a Great Chain of Being, in which the fundamental unit is no longer the reproducing organism but the dividing cell. All cells are, in evolutionary terms, related to one another, and the question of origin reduces to that of how the first cell came into being. Since that obscure primeval event, to the best of our knowledge no new cell has appeared de novo.
At the same time, Virchow’s slogan is a description, not an explanation. Why is a cell not content to remain as it is, happily metabolizing until its time runs out? One answer just begs the question: if that is all cells did, they would not exist, because their de novo formation from a chemical chaos is far too improbable. Then we risk falling back again on anthropomorphism – cells intrinsically want to reproduce by division – or on tautology, saying that the basic biological function of a cell is to make more cells (“the dream of every cell is to become two cells”, said the Nobel laureate biologist François Jacob). Biological discourse seldom does much better than this. Cell and molecular biologists and geneticists have a phenomenal understanding of how cells propagate themselves. But explaining why they do so is a very subtle affair, and it’s fair to say that most biologists don’t even think about it. Yet that “impulse” is the engine of Darwinian evolution and consequently at the root of all that matters in biology.
There is not a goal to this process of life, towards which all the machinery of the cell somehow strives. We can’t help thinking of it that way, of course, because we are natural storytellers (and because we do have goals, and can meaningfully ascribe them to other animals too). So we persuade ourselves that life aims to make babies, to build organisms, to evolve towards perfection (or at least self-improvement), to perpetuate genes. These are all stories, and they can be lovely as well as cognitively useful. But they do not sum up what life is about. It is a thing that, once begun, is astonishingly hard to stop; actually we do not know how that could be accomplished short of destroying the planet itself.
* * *
Life’s unit is the cell. Nothing less than the complete cell has a claim to be called genuinely alive.5 It’s common to see our body’s cells referred to as “building blocks” of tissues, much like assemblies of bricks that constitute a house. To look at the cells in a slice of plant tissue, such as Wilson’s drawing of the onion earlier, you can understand why. But that image fails to convey the dynamic aspect of cells. They move, they respond to their environment, and they have life cycles: a birth and a death. They receive and process information. As Virchow suggested, cells are to some degree autonomous agents: little living entities, making their way in the world.
Anything less than a cell, then, has at best a questionable claim to be alive; from cells, you can make every organism on Earth. We have known about the fundamental status of the cell for about two centuries but have not always acknowledged it. For much of the late twentieth century, the cell was eclipsed by the supremacy of the gene: the biological “unit of information” inherited between generations. Now the tide has turned again. “The cell is making a particular kind of reappearance as a central actor in today’s biomedical, biological, and biotechnological settings,” writes sociologist of biology Hannah Landecker. “At the beginning of the 21st century, the cell has emerged as a central unit of biological thought and practice … the cell has deposed the gene as the candidate for the role of life itself.”
Cells do more than persist. Crucially, they can replicate: produce copies of themselves. Ultimately, cell replication and proliferation drives evolution. Life is not what makes this propagation of cells possible; rather, that is what life is.
Biologists towards the end of the nineteenth century recognized that reproduction of cells happens not by the spontaneous formation of new cells, as Schleiden and Schwann believed, but by cell division as Virchow asserted: one cell dividing in two. Single-celled organisms such as bacteria simply replicate their chromosomes and then split in two, a process called binary fission. But in eukaryotic cells the process is considerably more complex. Cell “fission” was first seen in the 1830s and was called mitosis in 1882 by the German anatomist Walther Flemming, who studied the process in detail in amphibian cells.
Flemming was a champion of the filamentary model of cells – the idea that their contents are organized mainly as long fibrous structures. In the 1870s, he showed that as animal cells divide, the dense blob of the nucleus dissolves into a tangle of thread-like structures (mitosis stems from the Greek word for thread). The threads then condense into X-shaped structures that are arranged on a set of star-like protein filaments dubbed an aster. (The word means “star”, but actually the appearance is more reminiscent of an aster flower.) Flemming saw that the aster gets elongated and then rearranged into two asters, on which the chromosomes split in half. As the cell body itself splits in two, these chromosome halves are separated into the two “daughter” cells and enclosed once again within nuclei.6
Various stages of cell division or mitosis as recorded by Walther Flemming in his 1882 book Zellsubstanz, Kern und Zelltheilung (Cell Substance, Nucleus and Cell Division).
So cell division is preceded by a reorganization of its contents: apparently, they are apportioned rather carefully into two. The thread-like material seen by Flemming unravelling from the nucleus readily takes up a staining dye (so that it is more easily seen under the microscope), leading it to be called, after the Greek word for colour, chromatin. The individual threads themselves were christened chromosomes – “coloured bodies” – in 1888.
In that same year, the German biologist Theodor Boveri discovered that the movement of chromosomes during cell division is controlled by a structure he called the centrosome, from which the strands of asters radiate. The two asters that appear just before a cell splits in two, each with a centrosome at its core, could in fact be seen to be connected by a bulging bridge of fine filaments, called the mitotic spindle. Flemming became convinced that these spindle fibres act as a kind of scaffold to direct the segregation of the chromosome threads into two groups. He was right, but he lacked a microscopy technique of sufficient resolution to prove it.
So the division of animal cells isn’t just like the splitting of a water droplet into two. It has to be accompanied by a great deal of internal reorganization. Flemming and others identified a series of distinct stages along the way. While cells are going about their business with no sign of dividing, they are said to be in the interphase state. The unpacking of the nucleus into filamentary chromosomes is called prophase, and the formation and elongation of the aster is called metaphase. As the aster-like cluster splits in two, the cell enters the anaphase, from where it is downhill all the way to fission and the re-compaction of the nucleus.
This procedure is called the cell cycle, which is an interesting phrase when you think about it. Its implication is that, rather than thinking of biology as being composed of cells that do their thing until they eventually divide, we might regard it as a process of continual replication and proliferation that involves cells. With all due warning about the artificiality of narratives in biology, we might thus reframe the Great Chain of Being as instead a Great Chain of Becoming.
* * *
It was a fundamental – perhaps the fundamental – turning point for modern biology when, around the turn of the century, scientists came to appreciate that much of the complicated reorganization that goes on when cells divide is in order to pass on the genes, the basic units of inheritance, that are written into the strands called chromosomes. What they were seeing in their microscopes was the underlying principle that enables inheritance and evolution.
The notion of the gene as a physical entity that confers inheritance of traits appeared in parallel with the development of cell theory in the mid-nineteenth century. The story of how “particulate factors” governing inheritance were posited by the Moravian monk Gregor Mendel from his studies on the cultivation of pea plants has been so often told that we needn’t dwell on it. In the 1850s and ’60s Mendel observed that inheritance seemed to be an all-or-nothing affair: peas made by interbreeding plants that make smooth or wrinkly versions are either one type or the other, not a blend (“a bit wrinkly”) of the two. Of course, real inheritance in humans is more complicated: some traits (like hair or eye colour) may be inherited discretely, like Mendel’s peas, others (like height or skin pigmentation) may be intermediate between those of the biological parents. The puzzle Mendel’s observations raised was why inheritance is not always such a mix, given that it comes from a merging of the parental gametes.
Charles Darwin didn’t know of Mendel’s work, but he invoked a similar idea of particulate inheritance in his theory of evolution by natural selection. Darwin believed that the body’s cells produced particles that he called gemmules, which influence an organism’s development and are passed on to offspring. In this view, all the cells and tissues of the body play a role in inheritance, whence the term “pangenesis” that Darwin coined for his speculative mechanism of evolution. These gemmules may be modified at random by influences from the environment, and the variations are acquired by progeny. In the 1890s, the Dutch botanist Hugo de Vries and German biologist August Weismann independently modified Darwin’s theory by proposing that transmission of gemmules could not occur between body (somatic) cells and the so-called “germ cells” that produce gametes. Only the latter could contribute to inheritance. De Vries used the term “pangene” instead of gemmule to distinguish his theory from Darwin’s.
At the start of the twentieth century, the Danish botanist Wilhelm Johannsen shortened the word for these particulate units of inheritance to “gene”. He also drew the central distinction between an organism’s genotype – the genes it inherits from the biological parents – and its phenotype, the expression of those genes in appearance and behaviour.
In 1902 Theodor Boveri, working on sea urchins in Germany, and independently the American zoologist Walter Sutton, who was studying grasshoppers, noticed that the faithful passing on of chromosomes across generations of cells mirrored the way that genes were inherited. Perhaps, they concluded, chromosomes are in fact the carriers of the genes. Around 1915, the American biologist Thomas Hunt Morgan established, from painstaking studies of the inheritance of characteristics in fruit flies, that this is so. Moreover, Morgan showed how one could deduce the approximate positions of two different genes relative to one another on the chromosomes by observing how often the two genes – or rather, the manifestation of the corresponding phenotypes – appear together in the offspring of crosses between flies carrying the respective gene variants. As the chromosomes were divvied up to form egg and sperm cells, genes that sat close together were more likely to remain together in the offspring. Morgan’s work established the idea of a genetic map: literally a picture of where genes sit on the various chromosomes.
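To see the logic of Morgan’s mapping in action, here is a minimal computational sketch (the offspring counts are invented for illustration, not Morgan’s data): the further apart two genes lie on a chromosome, the more often they are separated when the gametes form, so the fraction of “recombinant” offspring serves as a rough proxy for their distance.

```python
# Illustrative sketch of Morgan-style linkage mapping (hypothetical counts).
# Recombination frequency = recombinant offspring / total offspring;
# by convention, 1 per cent recombination is one "map unit" (centimorgan).

def recombination_frequency(parental: int, recombinant: int) -> float:
    """Fraction of offspring in which the two gene variants were separated."""
    return recombinant / (parental + recombinant)

# Invented counts from a hypothetical fruit-fly cross:
parental_offspring = 830     # same combination of traits as the parents
recombinant_offspring = 170  # the two traits have been reshuffled

rf = recombination_frequency(parental_offspring, recombinant_offspring)
print(f"Recombination frequency: {rf:.2f}")                  # 0.17
print(f"Approximate map distance: {rf * 100:.0f} map units")  # ~17
```

The smaller that number, the closer together the two genes sit; tabulating it for many pairs of genes is what lets a genetic map be pieced together.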
The sum total of an organism’s genetic material is called its genome, a word introduced in 1920. For many years after Morgan’s work, it was suspected that genes are composed of the molecules called proteins, in which the much smaller molecules called amino acids are linked together in chains. Proteins, after all, seemed to be responsible for most of what goes on in cells – they are the stuff of enzymes. And chromosomes were indeed found to consist partly of protein. But those threads of heredity were also known to contain a molecule called DNA, belonging to the class known as nucleic acids (that’s what the “NA” stands for).
No one knew what this stuff did until the mid-1940s, when the Canadian-American physician Oswald Avery and his co-workers at the Rockefeller Institute Hospital in New York reported rather conclusive evidence that genes in fact reside on DNA. That idea was not universally accepted, however, until James Watson, Francis Crick, Maurice Wilkins, Rosalind Franklin and their co-workers revealed the molecular structure of DNA – how its atoms are arranged along the chain-like molecule. This structure, first reported in 1953 by Watson and Crick, who relied partly on Franklin’s studies of DNA crystals, showed how genetic information could be encoded in the DNA molecule. It is a deeply elegant structure, composed of two chain-strands entwined in a double helix.
The double helix of DNA. This iconic image creates a somewhat misleading picture, since for most of the time DNA in a cell’s chromosomes is packaged up quite densely in chromatin, in which it is wrapped around proteins called histones like thread on a bobbin. The “rungs” of the double-helical ladder consist of pairs of so-called nucleotide bases (denoted A, T, C and G) with shapes that complement each other and fit together well.
So beautiful, indeed, was this molecular architecture and the story it seemed to disclose that modern biology was largely seduced by it. It was immediately obvious to Watson and Crick how heredity could be enacted on the molecular scale. The information in genes could be replicated by unzipping the double helix so that each strand could act as the template on which replicas could be assembled.7 Here, then, was how genetic information could be copied into new chromosomes when cells divide: a molecular-scale mechanism for the inheritance described by Mendel and Darwin, which Morgan and others had situated on the chromosomes. DNA married genetics with inheritance at the molecular level, bringing coherence to biology.
And Darwinian evolution? If genes govern an organism’s traits, then random copying errors in DNA replication could alter a trait, mostly to the detriment of an organism but occasionally to its advantage. This is the variation on which natural selection acts to make organisms adapted to their environment.
It all seemed to fall into place. All the important questions – about evolution, genetic disease, development – might now be answered by referring to the information in the genome. Cells didn’t seem to be a very important part of the story except as vehicles for genes and as machines for enacting their commands.
To speak of information being “encoded” in DNA is to speak literally. Genes deploy a code: the genetic code. But what exactly do genes encode? For the most part, it is the chemical structure of a protein molecule, typically an enzyme. Because of the ways in which different amino acids “feel” one another and interact with the watery solvent all around them in the cell, a particular sequence of amino acids determines the way most protein chains fold up into a compact three-dimensional shape. This shape enables enzymes to carry out particular chemical transformations in the cell: they are catalysts that facilitate the cell’s chemistry. So the protein’s sequence, encoded in the respective gene, dictates its function.
A protein’s amino-acid sequence is represented in its gene by the sequence of chemical constituents that make up DNA. There are four of these, called nucleotide bases and denoted by the labels A, T, G and C. Different triplets of bases represent particular amino acids in the resultant protein: AAA, for example, corresponds to the amino acid called lysine.
Turning a gene into its corresponding protein is a two-step process. First, the gene on a piece of DNA in a chromosome is used as a template for building a molecule of another kind of nucleic acid, called RNA. This is called transcription. The piece of RNA made from a gene is then used as a template for putting the protein together, one amino acid at a time. This is called translation, and it is performed by a complex piece of molecular machinery called the ribosome, made of proteins and other pieces of RNA.
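As a deliberately simplified sketch of those two steps (a made-up gene sequence, only a handful of codons from the standard genetic code, and none of the real machinery of RNA processing), the flow from DNA to RNA to protein can be mimicked like this:

```python
# Simplified sketch of transcription and translation (toy gene, tiny codon table).
# Real cells transcribe the template strand and use the full 64-codon code;
# here we just swap T for U on the coding strand and look up a few codons.

CODON_TABLE = {       # small excerpt of the standard genetic code (mRNA codons)
    "AUG": "Met",     # methionine, the usual "start" signal
    "AAA": "Lys",     # lysine, the example given in the text
    "UUU": "Phe",     # phenylalanine
    "GGC": "Gly",     # glycine
    "UAA": "STOP",    # one of the three stop codons
}

def transcribe(dna_coding_strand: str) -> str:
    """Transcription: produce the mRNA sequence (T becomes U)."""
    return dna_coding_strand.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Translation: read the mRNA three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

gene = "ATGAAATTTGGCTAA"              # invented coding-strand sequence
print(translate(transcribe(gene)))    # ['Met', 'Lys', 'Phe', 'Gly']
```

The cell’s version of this lookup is, of course, done not by a dictionary but by the ribosome and its transfer RNAs; the point of the sketch is only that the mapping from triplets of bases to amino acids really is a code in the literal sense.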
Chromosomes consist of lengths of DNA double-helix wound around disk-like protein molecules called histones, like the string on a yoyo. This combination of DNA and its protein packaging is what we call chromatin. The genomes of eukaryotes are divided up into a number of chromosomes that is always the same for every cell of a particular species (if they are not abnormal) but can differ between species. Human cells have 46 chromosomes, arranged as 23 pairs.
* * *
It’s common to see genes called the instructions to make an organism. In this view, the entire genome is then the “instruction booklet”, or even the “blueprint”. This is an understandable metaphor, but misleading. Genes are fundamental to the way an organism turns out: the genome of a frog egg guides it to become a frog, not an elephant, and vice versa. But the way genes influence and to some degree dictate that proliferation of cells is subtle, complex, and resistant to any convenient metaphors from the technological world of design and construction. By leaping from genome to finished organism without taking into account the process of development from cells, we risk simplifying biology in ways that can create some deep misconceptions about how life proceeds and evolves.
To the extent that a gene is an “instruction”, it is an instruction to build a protein molecule. It is far from obvious what, in general, this has to do with the growth and form of an organism: with the generation of our flesh. We know of no way to map an organism’s complement of proteins onto its shape, traits and behaviour: its phenotype. The two are worlds apart: it’s rather like trying to understand the meaning of a Dickens novel from a close consideration of the shapes of its letters and the correlations in their order of appearance.
Besides, this conventional “blueprint” description of what genomes do is too simplistic even if we consider only how they dictate that roster of proteins. Here are some reasons why:
Only about 1.5 per cent of the human genome encodes proteins, and a further 8 to 15 per cent or so is thought to “regulate” the activity of other genes by encoding RNA that turns their transcription up or down. We don’t know what the rest does, and scientists aren’t agreed on whether it is just useless “junk” accumulated, like rubbish in the attic, over the course of evolution, or whether it has some unknown but important biological function. In all probability, it is a bit of both. But at any rate, a lot of this DNA with no known protein-coding or regulatory function is nonetheless transcribed by cells to RNA, and no one is sure why.
Most protein-coding human genes each encode more than one protein. Genes are not generally simply a linear encoding of protein sequences that start at one end of the protein chain and finish at the other; they are, for example, interspersed with sequences called introns that are carefully snipped out of the transcribed RNA before it is translated. Sometimes the transcribed RNA then gets reshuffled before translation, providing templates for several different proteins.
Proteins are not just folded chains of amino acids. Sometimes those folded chains are “stapled” in place by chemical bonds, or clipped together by other chemical entities such as electrically charged ions. Most proteins have other chemical groups added to them (by other enzymes) – for example, a group containing an iron atom is needed by the protein haemoglobin to bind oxygen and carry it around the body in the blood. None of these details, essential to the protein’s structure and function, is encoded in DNA. You would not be able to deduce them from a gene sequence.
We know what only around 50 per cent of gene-encoded proteins do, or even what they look like. The rest are sometimes called “dark” proteins: we assume they have a role but we don’t know what it is.
Plenty of proteins do not seem to have well-defined folded states but appear loose and floppy. Understanding how such ill-defined “intrinsically disordered proteins” can have specific biological roles is a very active area of current research. Some researchers think that the floppiness may not reflect the state of these proteins in cells themselves – but we don’t really know if that is so or not.
Ah, details, details! How much should we care? Do they really alter the picture of genes dictating the organism?
That depends, to some degree, on what questions you are asking. A genome sequence – the ordered list of nucleotides A, T, C and G along the DNA strands of chromosomes – does specify the nature of the organism in question. From this sequence you can tell in principle if the cell that contains it is from a human, a dog or a mouse (something that may not be obvious from a cursory look at the cell as a whole). These distinctions are found only in some key genes: the human genome differs from that of chimpanzees in just 1 per cent of the sequence, and a third of it is essentially the same as the genome of a mushroom.8 The differences between the genomes of individual people are even tinier.
But whereas you can look at a real blueprint, and probably an instruction manual, and figure out what kind of object will emerge from the plan, you can’t do that for a genome. Indeed, you can only deduce that a genome will “produce” a dog at all if you have already decoded the generic dog genome for comparison, laying the two side by side. It’s simply a case of seeing if the two genomes superimpose; there’s nothing intrinsic in the sequence that hints at its “dogness”.
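In computational terms, that side-by-side comparison boils down to aligning sequences and scoring how much of them matches – something like the toy sketch below, which uses invented fragments and assumes the hard work of alignment has already been done:

```python
# Toy illustration of comparing two sequences "side by side" (invented fragments).
# Real genome comparison first has to align sequences of different lengths,
# allowing for insertions and deletions; here we assume they are already aligned.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions at which two equal-length sequences agree."""
    assert len(seq_a) == len(seq_b), "sketch assumes pre-aligned, equal-length sequences"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

fragment_1 = "ATGCCGTAAGCTTACG"   # made-up fragment standing in for one genome
fragment_2 = "ATGCCGTAAGCTAACG"   # made-up fragment standing in for another

print(f"{percent_identity(fragment_1, fragment_2):.1f}% identical")  # 93.8%
```

A high score tells you the two sequences resemble each other; it tells you nothing about what kind of creature either one would build.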
This isn’t because we don’t yet know enough about the “instructions” in a genome (although that is the case too). It’s because there is no direct relationship between the informational content of a gene – which, as I say, typically dictates the structure of a class of protein molecules, or at least of the basic amino-acid fabric of those proteins – and a trait or structure apparent in the organism. Most proteins do jobs that can’t easily be related to any particular trait. Some can be: for example, there’s a protein that helps chloride ions get through the membranes of our cells, and if this protein is faulty – because of a mutation in the corresponding gene – then the lack of chloride transport across cell membranes causes the disease cystic fibrosis. But in general, proteins carry out “low-level” biochemical functions that might be involved in a whole host of traits, and which might have very different outcomes if the protein is produced (“expressed”) at different stages in the development or life cycle of the organism. As microbiologist Franklin Harold has said, “the higher levels of order, form and function are not spelled out in the genome.”
Might we, then, call a genome not a blueprint but a recipe? The metaphor has rather more appeal, not least because many recipes assume implicit knowledge (especially in older cookbooks). But a recipe is still a list of ingredients plus instructions to assemble them. Genomes do not come with users’ instructions, more’s the pity. Harold offers a different image, allusive and poetic and all the more appealing for that:
I prefer to think of the genome as akin to Hermann Hesse’s Magister Ludi [aka The Glass Bead Game]: master of an intricate game of cues and responses, in which he is fully enmeshed and absorbed; a game that is shaped as much by its own internal rules as by the will of that masterful player.
If there were better public communication of the complex, contingent and often opaque relationship of genotype to phenotype, there might be rather less anxiety about the idea that genes affect behaviour. Small variations in each individual’s genetic make-up can have an influence – sometimes a rather strong one – not just on what you look like but on what your behaviour and personality are like. This much is absolutely clear: there is not a single known aspect of human behaviour so far investigated that does not turn out to show some correlation with what gene variants we have. Even habits or experiences as apparently contingent and environmental as the amount we watch television9 or our chance of getting divorced are partly heritable, meaning that the differences between individuals can be partly traced to differences in their genes.
Far from alarming us, this shouldn’t surprise us. We have always been content to believe that, for example, some people seem blessed with talents that can’t obviously be explained by their environment and upbringing alone. By the same token, some seem hardwired to find particular tasks challenging, such as reading or spatial coordination.
Yet perhaps because we have a strong sense of personal agency, autonomy and free will, many people are disturbed by the idea that there are molecules in our cells that are pulling our strings. They needn’t worry. It is precisely because genetic propensities are filtered, interpreted and modified by the process of growing a human cell by cell that they don’t fully determine how our bodies turn out, let alone how our brains get wired … let alone how we actually behave.
Genes supply the raw material for developing our basic cognitive capabilities – to put it crudely, they are a key part of what allows most human embryos to grow into bodies that can see, hear, taste, that have minds and inclinations. But how they exert their effects is very, very complicated. In particular, very few genes affect one trait alone. Most genes have influences on many traits. Some traits, both behavioural and medical (such as susceptibility to heart disease), seem to be influenced – in ways that are imperceptible gene by gene, but detectable when their effects are added up – by most of the genome. That’s why the popular notion of a “gene for” some behavioural trait is misguided. In fact, it means that there may be no meaningful “causal” narrative that can take us from particular genes to behaviours.
* * *
This is precisely why we need to resist seductively simple metaphors in genetics: blueprints, selfish genes, “genes for”. Of course, science always needs to reduce complex ideas and processes to simpler narratives if it is going to communicate to a broader audience. But I’ve yet to see a metaphor in genomics that does not risk distorting or misrepresenting the truth, so far as we currently know it. Fortunately, I do not think this matters for talking about the roles of genes in making a human. We will deal with those roles as they arise, without resorting to any overarching story about what genes “do”.
I haven’t even told you yet the worst of it, though. It’s not simply difficult to articulate clearly what, in the scheme of growing humans the natural way, genes do. For we don’t exactly know how to define a gene at all.
This isn’t a failure of biology, but a strength. It’s tempting to imagine that science can’t be fully coherent if it can’t define its key terms. But the most fundamental concepts are in fact almost invariably a little hazy. Physicists can’t say too precisely or completely what time, space, mass and energy are. Biologists can’t say what a gene or a species is. For that matter, chemists aren’t fully agreed on what an element or a chemical bond is. In all cases, these terms arose because it seemed as though they had a very specific meaning, but when we looked more closely we found fuzzy edges. Yet the reason we coined the terms in the first place was because they were good for thinking with.
That remains true. A gene is a useful idea, perhaps in much the same way as words like “family” and “love” and “democracy” are useful: they are vessels for ideas that enable us to have useful conversations. They are usually precise enough.
Here, then, is a definition good enough to let us talk about genes in the role of growing a human from cells. Think of a gene as a piece of DNA from which a cell is able to make a particular molecule, or group of molecules, that it needs in order to function. By passing on copies of genes, cells can pass on that information so that the progeny doesn’t have to rediscover it from scratch.
If you raised your eyebrows at “so that”, good for you. In such phrases, biology is given a false purpose, an illusory sense that it pursues goals. It is nigh on impossible to talk about biology – about growth, development, evolution – without some mention of aims. Try to remember that this is always mere metaphor. The way that the laws of the physical world have played out on our planet is such that entities called cells have appeared that have a propensity to pass on genes to copies of themselves. This is remarkable and marvellous. No one really understands why it happens – why reproduction, inheritance and evolution is possible – and that’s why we find it necessary to tell stories about it. All we can say is that there is absolutely nothing that forces us to invoke any supernatural explanation for it. The gaps that remain would make an extremely peculiar shape into which such an account might be squeezed.
Here’s another thing worth knowing about genes: a gene on its own is useless. It can’t replicate,10 it can’t even do the job that evolution “appears” to have given it. Frankly, there is no real point in calling a gene on its own a “gene” at all: the name connotes an ability to (re)produce, but a lone “gene” is sterile, just a molecule that happens to resemble a part of the DNA in a chromosome. It’s common to say that a gene is a piece of DNA with a particular sequence, but the truth is that such a physical entity only becomes a gene in the context of a living system: a cell, at minimum. Genes are central ingredients of life, but by the time you reach the level of the gene there is nothing left that is meaningfully alive.
No, life starts with the cell. And that’s why a gene only has meaning by virtue of its situation in a cell. Does this, then, mean that the cell is more fundamental to biology than the gene? You might as well ask if words are more fundamental to literature than stories. It is “stories” that supply the context through which words acquire meaning, making them more than random sounds or marks on paper.
And by “context” here I don’t just mean that a gene has to be in a cell in order to represent any biologically meaningful information. I mean also that, for example, the history of the cell, and of the entire organism, might matter to the function a gene has. A gene that is “active” at one point in the organism’s growth might represent a quite different message – have a different implication – than at a later or earlier point. Yet the molecular machine (the protein) encoded by the gene may be identical in the two cases. The gene doesn’t change, but the “instruction” it represents does.
You might compare it to the exclamation “Stop it!” Is that an instruction? Well, of course you don’t know from that enunciation alone what it is you are supposed to stop, but perhaps you might regard it as a generic instruction to desist from the activity you’re engaged in. But what if you hear someone shout “Stop it!” as you see a football rolling towards a cliff edge? Is that an instruction to desist in anything, or on the contrary an injunction to action? You need to know the context.
* * *
The gene-centred narrative of life is just one example of our urge to somehow capture the essence of this complicated, astounding process – to be able to say “life starts here!” Science’s reductionist impulse gets a bad press, but breaking complicated things down into simpler ones is a tremendously powerful way of making sense of them. I think that what many people who complain about reductionism are reacting against is not so much this process of analysis – of taking apart – in itself, but the tendency then to assert “this is what really matters”. Science has sometimes been a little slow to recognize the problem with such assertions. When one group of physicists started insisting it was going to find a Theory of Everything – a set of fundamental laws from which the entire physical universe emerged – others pointed out that it would be nothing of the kind, because it would be useless in itself for predicting or explaining most of what we see in the world.
It’s not just that we should resist the temptation to see reductive analysis as a quest to identify what is more important/fundamental/real in the world. Sometimes the phenomenon you’re interested in only exists at a particular level in the hierarchy of scale, and is invisible above and below it. Go to quarks and you have lost chemistry. With genes and life it is not quite that extreme – but at the level of genes you are left with only a rather narrow view of some of the entities and processes that underpin this notion we call “life”. Life remains a meaningful idea from the macro level of the entire biosphere of our planet right down to the micro level of the cell. Within those bounds it encompasses a whole slew of factors: flows of energy and materials, the appearance of order and self-organization, heredity and reproduction. But below the level of the cell, you’ll always be overlooking something vitally important in life. As Franklin Harold has put it:
Something is not accounted for very clearly in the single-minded dissection to the molecular level. Even as the tide of information surges relentlessly beyond anyone’s comprehension, the organism as a whole has been shattered into bits and bytes. Between the thriving catalog of molecules and genes, and the growing cells under my microscope, there yawns a gulf that will not be automatically bridged when the missing facts have all been supplied. No, whole-genome sequencing won’t do it, for the living cells quite fail to declare themselves from those genomes that are already in our databases … The time has come to put the cell together again, form and function and history and all.
It is precisely the multivalent, multiscale implication of the word “life”, too, that creates the tensions, ambiguities and ambivalences about what it means to “grow a human”. We are thereby “making a life”, but not “making life”. That same truth is spoken in jest in a cartoon by Gary Markstein in which two white-coated scientists contemplate IVF embryos. “Life begins at the Petri dish!” exclaims one embryo; “Cloning for research!” demands another. “Even the human embryos are divided”, sighs a scientist.
This is the struggle we face in reconciling our notion of life as human experience with the concept of life as a property of our material substance. We are alive, and so is our flesh. While those two visions of life were synonymous, we could ignore the problem. Having a mini-brain grown in a dish from a piece of one’s arm tends to make that evasion no longer tenable.
It’s no wonder that different cultures at different times have had such diverse attitudes to the connection between the human body in utero, forming in hidden and mysterious fashion from something not remotely human-like, and the human body in the world. The insistence by some people and in some belief systems that “life begins at conception” is a modern utterance, often claiming firm support from the very science that in fact shows how ill-defined the idea is.
But the tension is an old one, as demonstrated by preformation theories of the human fetus. This was an anthropomorphization of the cell as explicit as that in cartoons that attribute voices and opinions to human embryos in petri dishes. Intuition compels us to look for the self in the cell. An insistence on locating it instead in our genes – as cell biologist Scott Gilbert puts it, to see “DNA as our soul” – comes from the same impulse. Perhaps we must be gentle in dispensing with these superstitions. Aren’t old habits always hard to shake off?