
The Brain


Meanwhile, the human brain too was about to reveal its secrets. Its physical appearance is quite as familiar as the Double Helix. But the specialisation of those separate parts for seeing, hearing, movement and so on is in a sense deceptive, concealing the crucial question of how their electrical firing translates the sights and sounds of the external world, or summons those evocative childhood memories from the distant past. How does this mere three pounds of soft grey matter within the skull contain the experience of a lifetime?

Here again, a series of technical innovations, paralleling those of the New Genetics, would permit scientists for the first time to scrutinise the brain ‘in action’. In 1971 the British engineer Godfrey Hounsfield invented the Computed Tomography (CT) scanner, revealing the brain’s internal structure with an almost haunting clarity and revolutionising the diagnosis of strokes, tumours and other forms of mischief. Soon after, the further technical development of Positron Emission Tomography (PET) scanning would transform the CT scanner’s static images or ‘snapshots’ of the brain into ‘moving pictures’.

Put simply, this is how it works. All of life requires oxygen to drive the chemical reactions in its cells. This oxygen is extracted from the air, inspired in the lungs and transported by blood cells to the tissues. When, for example, we start talking, the firing of the neurons in the language centre of the brain massively increases their demand for oxygen, which can only be met by increasing the bloodflow to that area. The PET scanner detects that increase in bloodflow, converting it into multi-coloured images that pick out the ‘hotspots’ of activity. Now, for the first time, the internal workings of the brain induced by smelling a rose or listening to a violin sonata could be observed as they happened. Or (as here) picking out rhyming words:

A woman sits quietly waiting for the experiment to begin – her head ensconced in a donut-shaped device, a PET scanning camera. Thirty-one rings of radiation detectors make up the donut, which will scan thirty-one images simultaneously in parallel horizontal lines. She is next injected with a radioactive isotope [of oxygen] and begins to perform the task … The words are presented one above the other on a television monitor. If they rhyme, she taps a response key. Radiation counters estimate how hard the brain region is working … and are transformed into images where higher counts are represented by brighter colours [thus] this colour map of her brain reveals all the regions acting while she is judging the paired words.
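The logic of that final step – converting raw radiation counts into a map in which harder-working regions appear as brighter colours – can be sketched in a few lines of code. What follows is a minimal illustration only, not the scanner’s actual software: the brain regions named and the counts assigned to them are invented for the purpose.

    # A minimal sketch of the colour-mapping step described above - not
    # the actual PET software. The regions and counts are invented.
    counts = {
        "visual cortex": 480,
        "auditory cortex": 150,
        "language area": 620,
        "motor cortex": 310,
    }

    lo, hi = min(counts.values()), max(counts.values())

    def brightness(count):
        """Normalise a raw radiation count onto a 0.0-1.0 brightness scale."""
        return (count - lo) / (hi - lo) if hi > lo else 0.0

    # Render each region's activity as a bar: the higher the count, the
    # 'brighter' (longer) the bar - an ASCII analogue of the colour map.
    for region, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(brightness(count) * 20)
        print(f"{region:16s} {count:4d}  {bar}")

However crude, this normalise-and-colour principle is all that separates the raw counts from the vivid multi-coloured images that made the technique famous.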

The details will come later, but the PET scanner would create the discipline of modern neuroscience, attracting thousands of young scientists keen to investigate this previously unexplored territory. Recognising the possibilities of the new techniques, the United States Congress in 1989 designated the next ten years as ‘the Decade of the Brain’ in anticipation of the many important new discoveries that would deliver ‘precise and effective means of predicting, modifying and controlling individual behaviour’. ‘The question is not whether the neural machinery [of the brain] will be understood,’ observed Professor of Neurology Antonio Damasio, writing in the journal Scientific American, ‘but when.’

Throughout the 1990s, both the Human Genome Project and the Decade of the Brain would generate an enormous sense of optimism, rounding off the already prodigious scientific achievements of the previous fifty years. And sure enough, the completion of both projects on the cusp of the new millennium would prove momentous events.

The completion of the first draft of the Human Genome Project in June 2000 was considered sufficiently important to warrant a press conference in the presidential office of the White House. ‘Nearly two centuries ago in this room, on this floor, Thomas Jefferson spread out a magnificent map … the product of a courageous expedition across the American frontier all the way to the Pacific,’ President Bill Clinton declared. ‘But today the world is joining us here to behold a map of even greater significance. We are here to celebrate the completion of the first survey of the entire human genome. Without a doubt this is the most important, most wondrous map ever produced by mankind.’

The following year, in February 2001, the two most prestigious science journals, Nature and Science, each published a complete version of that ‘most wondrous map ever produced by mankind’ as a large, multi-coloured poster displaying the full complement of (as it would turn out) twenty-five thousand human genes. It was, as Science observed, ‘an awe-inspiring sight’. Indeed, it was awesome twice over. Back in the 1950s, when Francis Crick and James Watson were working out the structure of the Double Helix, they had no detailed knowledge of a single gene, what it is or what it does. Now, thanks to the techniques of the New Genetics, those involved in the Genome Project had, in less than a decade, successfully culled from those three billion ‘coloured discs’ strung out along its intertwining strands the hard currency of each of the twenty-five thousand genes that determine who we are.

The Human Genome map, like Thomas Jefferson’s map of the United States, portrays the major features of that genetic landscape with astonishing precision. While it had taken the best part of seven years to find the defective gene responsible for the lung disorder cystic fibrosis, now anyone could locate it from that multi-coloured poster in as many seconds. Here too at a glance you can pick out the gene for the hormone insulin, which controls the level of sugar in the blood, or the haemoglobin molecule that transports oxygen to the tissues. To be sure, the functions of many thousands of those genes remained obscure, but now, knowing their precise location and the sequence of which they are composed, it would be only a matter of time before they too would be known. It was a defining moment. ‘Today will be recorded as one of the most significant dates in history,’ insisted one of the major architects of the Genome Project, Dr Michael Dexter of the Wellcome Trust in Britain. ‘Just as Copernicus changed our understanding of the solar system and man’s place within it, so knowledge of the human genome will change how we see ourselves and our relationship to others.’

The goals of the Decade of the Brain were necessarily more open-ended, but still the PET scanner, and the yet more sophisticated brain imaging techniques that followed in its wake, had more than fulfilled their promise, allowing scientists to draw another exquisitely detailed map assigning the full range of mental abilities to specific parts of the brain. There were many surprises along the way, not least how the brain fragmented the simplest of tasks into a myriad of different components. It had long been supposed, for instance, that the visual cortex at the back of the brain acted as a sort of photographic plate, capturing an image of the external world as seen through the eye. But now it turned out that the brain ‘created’ that image from the interaction of thirty or more separate maps within the visual cortex, each dedicated to one or other aspect of the visual image: the shapes, colour and movement of the world ‘out there’. ‘As surely as the old system was rooted in the concept of an image of the visual world received and analysed by the cortex,’ observes Semir Zeki, Professor of Neurobiology at the University of London, ‘the present one is rooted in the belief that an image of the visual world is actively constructed by the cerebral cortex.’

Steven Pinker, Professor of Brain and Cognitive Science at the Massachusetts Institute of Technology, could explain to the readers of Time magazine in April 2000 (the close of the Decade of the Brain) how neuroscientists armed with their new techniques had investigated ‘every facet of mind from mental images to moral sense, from mundane memories to acts of genius’, concluding, ‘I have little reason to doubt that we will crack the mystery of how brain events correlate with experience.’

Both the Human Genome Project and the Decade of the Brain have indeed transformed, beyond measure, our understanding of ourselves – but in a way quite contrary to that anticipated.

Nearly ten years have elapsed since those heady days when the ‘Holy Grail’ of the scientific enterprise, the secrets of life and the human mind, seemed almost within reach. Every month the pages of the science journals are still filled with the latest discoveries generated by the techniques of the New Genetics, and yet more colourful scans of the workings of the brain – but there is no longer the expectation that the accumulation of yet more facts will ever provide an adequate scientific explanation of the human experience. Why?

We return first to the Human Genome Project, which, together with those of the worm and fly, mouse and chimpanzee and others that would follow in its wake, was predicated on the assumption that knowledge of the full complement of genes must explain, to a greater or lesser extent, why and how the millions of species with which we share this planet are so readily distinguishable in form and attributes from each other. The genomes must, in short, reflect the complexity and variety of ‘life’ itself. But that is not how it has turned out.

First, there is the ‘numbers problem’. That final tally of twenty-five thousand human genes is, by definition, sufficient for its task, but it seems a trifling number to ‘instruct’, for example, how a single fertilised egg is transformed in a few short months into a fully formed being, or to determine how the billions of neurons in the brain are wired together so as to encompass the experiences of a lifetime. Those twenty-five thousand genes must, in short, ‘multi-task’, each performing numerous different functions, combining together in a staggeringly large number of different permutations.
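A back-of-the-envelope calculation – an illustration of scale only, not a figure drawn from the genome literature – shows how quickly such combinations multiply. Were genes to act merely in pairs, twenty-five thousand of them could combine in

    $$\binom{25\,000}{2} = \frac{25\,000 \times 24\,999}{2} \approx 3.1 \times 10^{8}$$

different ways; allow them to act in triples and the figure rises to roughly $2.6 \times 10^{12}$, growing explosively with every further gene admitted to the combination.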

That paucity of genes is more puzzling still when the comparison is made with the genomes of other creatures vastly simpler than ourselves – several thousand for a single-cell bacterium, seventeen thousand for a millimetre-sized worm, and a similar number for a fly. This rough equivalence in the number of genes across so vast a range of ‘organismic complexity’ is totally inexplicable. But no more so than the discovery that the human genome is virtually interchangeable with that of our fellow vertebrates such as the mouse and chimpanzee – to the tune of 98 per cent or more. There is, in short, nothing to account for those very special attributes that so readily distinguish us from our primate cousins – our upright stance, our powers of reason and imagination, and the faculty of language.

The director of the Chimpanzee Genome Project, Svante Pääbo, had originally anticipated that its comparison with the human genome would reveal the ‘profoundly interesting genetic prerequisites’ that set us apart:

The realisation that a few genetic accidents made human history possible will provide us with a whole new set of philosophical challenges to think about … both a source of humility and a blow to the idea of human uniqueness.

But publication of the completed version of the chimpanzee genome in 2005 prompted a more muted interpretation of its significance: ‘We cannot see in this why we are so different from chimpanzees,’ Pääbo commented. ‘Part of the secret is hidden in there, but we don’t understand it yet.’ So ‘The obvious differences between humans and chimps cannot be explained by genetics alone’ – which would seem fair comment, until one reflects that if those differences ‘cannot be explained’ by genes, then what is the explanation?

These findings were not just unexpected, they undermined the central premise of biology: that the near-infinite diversity of form and attributes that so definitively distinguish living things one from the other must ‘lie in the genes’. The genome projects were predicated on the assumption that the ‘genes for’ the delicate, stooping head and pure white petals of the snowdrop would be different from the ‘genes for’ the colourful, upstanding petals of the tulip, which would be different again from the ‘genes for’ flies and frogs, birds and humans. But the genome projects reveal a very different story, where the genes ‘code for’ the nuts and bolts of the cells from which all living things are made – the hormones, enzymes and proteins of the ‘chemistry of life’ – but the diverse subtlety of form, shape and colour that distinguishes snowdrops from tulips, flies from frogs and humans, is nowhere to be found. Put another way, there is not the slightest hint in the composition of the genes of fly or man to account for why the fly should have six legs, a pair of wings and a brain the size of a full stop, and we should have two arms, two legs and that prodigious brain. The ‘instructions’ must be there, of course, for otherwise flies would not produce flies and humans, humans – but we have moved, in the wake of the Genome Project, from assuming that we knew the principle, if not the details, of that greatest of marvels, the genetic basis of the infinite variety of life, to recognising that we not only don’t understand the principles, we have no conception of what they might be.

We have here, as the historian of science Evelyn Fox Keller puts it:

One of those rare and wonderful moments when success teaches us humility … We lulled ourselves into believing that in discovering the basis for genetic information we had found ‘the secret of life’; we were confident that if we could only decode the message in the sequence of chemicals, we would understand the ‘programme’ that makes an organism what it is. But now there is at least a tacit acknowledgement of how large that gap between genetic ‘information’ and biological meaning really is.

And so, too, the Decade of the Brain. The PET scanner, as anticipated, generated many novel insights into the patterns of electrical activity of the brain as it looks out on the world ‘out there’, interprets the grammar and syntax of language, recalls past events, and much else besides. But at every turn the neuroscientists found themselves completely frustrated in their attempts to get at how the brain actually works.

Right from the beginning it was clear that there was simply ‘too much going on’. There could be no simpler experiment than to scan the brain of a subject when first reading, then speaking, then listening to a single word such as ‘chair’. This, it was anticipated, should show the relevant part of the brain ‘lighting up’ – the visual cortex when reading, the speech centre when speaking, and the hearing cortex when listening. But no: the brain scan showed that each separate task not only ‘lit up’ the relevant part of the brain, but generated a blizzard of electrical activity across vast networks of millions of neurons – while thinking about the meaning of a word and speaking appeared to activate the brain virtually in its entirety. The brain, it seemed, must work in a way never previously appreciated – not as an aggregate of distinct specialised parts, but as an integrated whole, with the same neuronal circuits performing many different functions.

The initial surprise at discovering how the brain fragmented the sights and sounds of the world ‘out there’ into a myriad of separate components grew greater still as it became clear that there was no compensating mechanism that might reintegrate all those fragments back together again into that personal experience of being at the centre, moment by moment, of a coherent, ever-changing world. Reflecting on this problem of how to ‘bind’ all the fragments back together again, Nobel Prize-winner David Hubel of Harvard University observed:

This abiding tendency for attributes such as form, colour and movement to be handled by separate structures in the brain immediately raises the question how all the information is finally assembled, say for perceiving a bouncing red ball. It obviously must be assembled – but where and how, we have no idea.

But the greatest perplexity of all was the failure to account for how the monotonous electrical activity of those billions of neurons in the brain translates into the limitless range and quality of subjective experiences of our everyday lives – where every fleeting moment has its own distinct, intangible feel: where the cadences of a Bach cantata are so utterly different from the flash of lightning, the taste of Bourbon from the lingering memory of that first kiss.

The implications are clear enough. While theoretically it might be possible for neuroscientists to know everything there is to know about the physical structure and activity of the brain, its ‘product’, the mind, with its thoughts and ideas, impressions and emotions, would still remain unaccounted for. As the philosopher Colin McGinn expresses it:

Suppose I know everything about your brain: I know its anatomy, its chemical ingredients, the pattern of electrical activity in its various segments, I even know the position of every atom and its subatomic structure. Do I therefore know everything about your mind? It certainly seems not. On the contrary, I know nothing about your mind. So knowledge of your brain does not give me knowledge of your mind.

This distinction between the electrical activity of the material brain and the non-material mind (of thoughts and ideas) as two quite different things might seem so self-evident as to be scarcely worth commenting on. But for neuroscientists the question of how the brain’s electrical activity translates into thoughts and sensations was precisely what needed explaining – and their failure to do so has come to haunt them. So, for everything that the Decade of the Brain undoubtedly achieved, nonetheless, as John Maddox, the former editor of Nature, would acknowledge at its close: ‘We seem as far from understanding [the brain] as we were a century ago. Nobody understands how decisions are made or how imagination is set free.’

This verdict on the disappointing outcomes of the Genome Project and the Decade of the Brain might seem a trifle premature. These are, after all, still very early days, and it is far too soon to predict what might emerge over the next twenty to thirty years. The only certainty about advances in human knowledge is that they open the door to further seemingly unanswerable questions, which in time will be resolved, and so on. The implication that here science may finally have ‘reached its limits’ would seem highly contentious, having been expressed many times in the past, only to be repeatedly disproved. Famously, the physicist Lord Kelvin, at the close of the nineteenth century, insisted that the future of his discipline was to be looked for in ‘the sixth place of decimals’ (that is, mere refinements of the then-current state of knowledge). Within a few years Albert Einstein had put forward his Special Theory of Relativity, and the certainties of Lord Kelvin’s classical physics were eclipsed.

The situation here, however, is rather different, for while the New Genetics and those novel brain scanning techniques offer almost inexhaustible opportunities for further research, it is possible to anticipate in broad outline what their findings will add up to. Scientists could, if they so wished, spell out the genomes of each of the millions of species with which we share this planet – snails, bats, whales, elephants and so on – but that would only confirm that they are composed of several thousand similar genes that ‘code’ for the nuts and bolts of the cells of which they are made, while the really interesting question, of how those genes determine the unique form and attributes of the snail, bat, elephant, whale or whatever, would remain unresolved. And so too for the scanning techniques of the neurosciences, where a million scans of subjects watching a video of bouncing red balls would not take us an iota further in understanding what needs explaining – how the neuronal circuits experience the ball as being red and round and bouncing.

At any other time these twin setbacks to the scientific enterprise might simply have been relegated to the category of problems for which science does not as yet have the answer. But when cosmologists can reliably infer what happened in the first few minutes of the birth of the universe, and geologists can measure the movements of vast continents to the nearest centimetre, then the inscrutability of those genetic instructions that should distinguish a human from a fly, or the failure to account for something as elementary as how we recall a telephone number, throws into sharp relief the unfathomability of ourselves. It is as if we, and indeed all living things, are in some way different, profounder and more complex than the physical world to which we belong.

Nonetheless there must be a reason why those genome projects proved so uninformative about the form and attributes of living things, or why the Decade of the Brain should have fallen so far short of explaining the mind. There is a powerful impression that science has been looking in the wrong place, seeking to resolve questions whose answers lie somehow outside its domain. This is not just a matter of science not yet knowing all the facts; rather there is the sense that something of immense importance is ‘missing’ that might transform the bare bones of genes into the wondrous diversity of the living world, and the monotonous electrical firing of the neurons of the brain into the vast spectrum of sensations and ideas of the human mind. What might that ‘missing’ element be?

Much of the prestige of science lies in its ability to link together disparate observations to reveal the processes that underpin them. But this does not mean that science ‘captures’ the phenomena it describes – far from it. There is, after all, nothing in the chemistry of water (two atoms of hydrogen to one of oxygen) that captures its diverse properties as we know them to be from personal experience: the warmth and wetness of summer rain, the purity and coldness of snow in winter, the babbling brook and the placid lake, water refreshing the dry earth, causing the flowers to bloom and cleansing everything it touches. It is customary to portray this distinction as ‘two orders of reality’. The ‘first’ or ‘primary reality’ of water is that personal knowledge of its diverse states and properties that includes not just how we perceive it through our senses, but also the memories, emotions and feelings with which we respond to it. By contrast, the ‘second order reality’ is water’s materiality, its chemical composition as revealed by the experimental methods of the founder of modern chemistry, the French genius Antoine Lavoisier, who in 1783 sparked the two gases of hydrogen and oxygen together in a test tube, to find a residue of dew-like drops that ‘seemed like water’.
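In the modern notation that his experiment helped make possible (Lavoisier himself never wrote it this way), the reaction is simply

    $$2\,\mathrm{H}_2 + \mathrm{O}_2 \longrightarrow 2\,\mathrm{H}_2\mathrm{O}$$

two volumes of hydrogen combining with one of oxygen to yield nothing but water.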

These two radically different, yet complementary, ‘orders of reality’ of water are mutually exclusive. There is nothing in our personal experience that hints at water’s chemical composition, nor conversely is there anything in its chemical formula that hints at its many diverse states of rain, snow, babbling brook, as we know them from personal experience. This seemingly unbridgeable gap between these two orders of reality corresponds, if not precisely, to the notion of the ‘dual nature of reality’, composed of a non-material realm, epitomised by the thoughts and perceptions of the mind, and an objective material realm of, for example, chairs and tables. They correspond, again if not precisely, to two categories of knowledge that one might describe respectively as the philosophic and the scientific view. The ‘first order’ philosophic view is the aggregate of human knowledge of the world as known through the senses, interpreted and comprehended by the powers of reason and imagination. The ‘second order’ scientific view is limited to the material world and the laws that underpin it as revealed by science and its methods. They are both equally real – the fact of a snowflake melting in the palm of the hand is every bit as important as the fact of the scientific explanation that its melting involves a loosening of the lattice of hydrogen bonds holding its water molecules together. The ‘philosophic’ view, however, could be said to encompass the scientific, for it not only ‘knows’ the snowflake melting in the hand as a snowflake, but also the atomic theory of matter and hence its chemical composition.

It would thus seem a mistake to prioritise scientific knowledge as being the more ‘real’, or to suppose its findings to be the more reliable. But, to put it simply, that is indeed what happened. Before the rise of science, the philosophic view necessarily prevailed, including the religious intimation from contemplating the wonders of the natural world and the richness of the human mind that there was ‘something more than can be known’.

From the late eighteenth century onwards the burgeoning success of science would progressively challenge that inference through its ability to ‘reduce’ the seemingly inscrutable complexities of the natural world to their more readily explicable parts and mechanisms: the earth’s secrets surrendered to the geologist’s hammer, the intricacies of the fabric of plants and animals to the microscopist’s scrutiny, the mysteries of nutrition and metabolism to the analytical techniques of the chemist. Meanwhile, the discovery of the table of chemical elements, the kinetic theory of heat, magnetism and electricity all vastly extended the explanatory powers of science. And, most significant of all, the theory of biological evolution offered a persuasive scientific explanation for that greatest of wonders – the origins and infinite diversity of form and attributes of living things.

The confidence generated by this remorseless expansion in scientific knowledge fostered the belief in its intrinsic superiority over the philosophic view, with the expectation that the universe and everything within it would ultimately be explicable in terms of its material properties alone. Science would become the ‘only begetter of truth’, its forms of knowledge not only more reliable but more valuable than those of the humanities. This assertion of the priority of the scientific view, known as scientific materialism (or just ‘materialism’), marked a watershed in Western civilisation, signalling the way to a future of scientific progress and technical advance while relegating to the past that now superseded philosophical inference of the preceding two thousand years of there being ‘more than we can know’. That future, the scientific programme of the twentieth century, would be marked by a progressively ever deeper scientific penetration into the properties of matter, encompassing the two extremes of scale from the vastness of the cosmos to the microscopic cell from which all living things are made. It began to seem as if there might be no limits to its explanatory power.

The genome projects and the Decade of the Brain represent the logical conclusion of that supposition. First, the genome projects were predicated on the assumption that unravelling the Double Helix would reveal ‘the secret of life’, as if a string of chemicals could possibly account for the vast sweep of qualities of the wonders of the living world; and second, the Decade of the Brain on the assumption that those brain scanning techniques would explain the mind, as if there could be any equivalence between the electrical firing of neurons and the limitless richness of the internal landscape of human memory, thought and action. In retrospect, both were no more likely to have fulfilled the promise held out for them than to suppose the ‘second order’ chemical composition of water might account for its diverse ‘first order’ states of rain, snow, oceans, lakes, rivers and streams as we know them to be.

This necessarily focuses our attention on what that potent ‘missing force’ must be that might bridge the gap between those two ‘orders of reality’, with the capacity to conjure the richness of human experience from the bare bones of our genes and brains. This is an even more formidable question than it might appear to be, for along the way those genome projects have also, inadvertently, undermined the credibility of the fundamental premise of what we do know about ourselves – that the living world and our uniquely human characteristics are the consequence of a known, scientifically proven, process of biological evolution. Certainly, the defining feature of the history of the universe, as outlined earlier, is of the progressive, creative, evolutionary transformation from the simplest elements of matter to ever higher levels of complexity and organisation. Over aeons of time the clouds of gas in intergalactic space evolved into solar systems such as our own. Subsequently the inhospitable landscape of our earth evolved again into its current life-sustaining biosphere, and so on. Thus the whole history of the cosmos is an evolutionary history. That is indisputable, but the biological theory of evolution goes further, with the claim to know the mechanisms by which the near-infinite diversity of forms of life (including ourselves) might have evolved by a process of random genetic changes from a single common ancestor.

It is, of course, possible that the living world and ourselves did so evolve, and indeed it is difficult to conceive of them not having done so. But the most significant consequence of the findings of the genome projects and neuroscience is the transformation of that foundational evolutionary doctrine into a riddle. The dramatic discovery of Lucy’s near-complete skeleton, already described, provides compelling evidence for man’s progressive evolutionary ascent over the past five million years. Why then, one might reasonably ask, is there not the slightest hint in the Human Genome of those unique attributes of the upright stance and massively expanded brain that so distinguish us from our primate cousins?

The ramifications of the seemingly disappointing outcomes of the New Genetics and the Decade of the Brain are clearly prodigious, suggesting that we are on the brink of some tectonic shift in our understanding of ourselves. These issues are nowhere more sharply delineated than in an examination of the achievements of the first human civilisation, which marked the arrival of our species, Homo sapiens, thirty-five thousand years ago.

