The illusion of reality

From the moment you awaken in the morning, you're surrounded by a rush of light and sounds and smells. Your senses are flooded. All you have to do is show up every day, and without thought or effort, you are immersed in the irrefutable reality of the world.

But how much of this reality is a construction of your brain, taking place only inside your head?

Consider the rotating snakes, below. Although nothing is actually moving on the page, the snakes appear to be slithering. How can your brain perceive motion when you know that the figure is fixed in place?

Nothing moves on the page, but you perceive motion. Rotating Snakes illusion by Akiyoshi Kitaoka.


Compare the color of the squares marked A and B. Checkerboard illusion by Edward Adelson.


Or consider the checkerboard above.

Although it doesn’t look like it, the square marked A is exactly the same color as the square marked B. Prove this to yourself by covering up the rest of the picture. How can the squares look so different, even though they’re physically identical?

Illusions like these give us the first hints that our picture of the external world isn’t necessarily an accurate representation. Our perception of reality has less to do with what’s happening out there, and more to do with what’s happening inside our brain.

Your experience of reality

It feels as though you have direct access to the world through your senses. You can reach out and touch the material of the physical world – like this book or the chair you’re sitting on. But this sense of touch is not a direct experience. Although it feels like the touch is happening in your fingers, in fact it’s all happening in the mission control center of the brain. It’s the same across all your sensory experiences. Seeing isn’t happening in your eyes; hearing isn’t taking place in your ears; smell isn’t happening in your nose. All of your sensory experiences are taking place in storms of activity within the computational material of your brain.

Here’s the key: the brain has no access to the world outside. Sealed within the dark, silent chamber of your skull, your brain has never directly experienced the external world, and it never will.

Instead, there’s only one way that information from out there gets into the brain. Your sensory organs – your eyes, ears, nose, mouth, and skin – act as interpreters. They detect a motley crew of information sources (including photons, air compression waves, molecular concentrations, pressure, texture, temperature) and translate them into the common currency of the brain: electrochemical signals.

These electrochemical signals dash through dense networks of neurons, the main signaling cells of the brain. There are a hundred billion neurons in the human brain, and each neuron sends tens or hundreds of electrical pulses to thousands of other neurons every second of your life.

Neurons communicate with one another via chemical signals called neurotransmitters. Their membranes carry electrical signals rapidly along their length. Although artistic renditions like this one show empty space, in fact there is no room between cells in the brain – they are packed tightly against one another.


Everything you experience – every sight, sound, smell – rather than being a direct experience, is an electrochemical rendition in a dark theater.

How does the brain turn its immense electrochemical patterns into a useful understanding of the world? It does so by comparing the signals it receives from the different sensory inputs, detecting patterns that allow it to make its best guesses about what’s “out there”. Its operation is so powerful that its work seems effortless. But let’s take a closer look.

Let's begin with our most dominant sense: vision. The act of seeing feels so natural that it's hard to appreciate the immense machinery that makes it happen. About a third of the human brain is dedicated to the mission of vision – to turning raw photons of light into our mother's face, or our beloved pet, or the couch we're about to nap on. To unmask what's happening under the hood, let's turn to the case of a man who lost his vision, and then was given the chance to get it back.

I was blind but now I see

Mike May lost his sight at the age of three and a half. A chemical explosion scarred his corneas, leaving his eyes with no access to photons. As a blind man he became successful in business, and also a champion Paralympic skier, navigating the slopes by sound markers.

Then, after more than forty years of blindness, Mike learned about a pioneering stem cell treatment that could repair the physical damage to his eyes. He decided to undergo the surgery; after all, the blindness was only the result of his scarred corneas, and the solution seemed straightforward.

But something unexpected happened. Television cameras were on hand to document the moment the bandages came off. Mike describes the experience when the physician peeled back the gauze: “There’s this whoosh of light and bombarding of images on to my eye. All of a sudden you turn on this flood of visual information. It’s overwhelming.”

SENSORY TRANSDUCTION


Biology has discovered many ways to convert information from the world into electrochemical signals. Just a few of the translation machines that you own: hair cells in the inner ear, several types of touch receptors in the skin, taste buds in the tongue, molecular receptors in the olfactory bulb, and photoreceptors at the back of the eye.

Signals from the environment are translated into electrochemical signals carried by brain cells. It is the first step by which the brain taps into information from the world outside the body. The eyes convert (or transduce) photons into electrical signals. The mechanisms of the inner ear convert vibrations in the density of the air into electrical signals. Receptors on the skin (and also inside the body) convert pressure, stretch, temperature, and noxious chemicals into electrical signals. The nose converts drifting odor molecules, and the tongue converts taste molecules to electrical signals. In a city with visitors from all over the world, foreign money must be translated into a common currency before meaningful transactions can take place. And so it is with the brain. It’s fundamentally cosmopolitan, welcoming travelers from many different origins.
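If it helps to make the "common currency" idea concrete, here is a minimal sketch in Python. It is purely illustrative – the functions and numbers are invented for this example, not measured properties of real receptors – but it shows the key point: very different physical quantities all end up expressed in the same unit.

```python
# Illustrative sketch only: real receptors are vastly more complex.
# Each "transducer" converts a different physical signal into the
# brain's common currency: a firing rate in spikes per second.

def photoreceptor(photon_flux):
    # More light -> higher firing rate, saturating at a ceiling.
    return min(200.0, 0.05 * photon_flux)

def hair_cell(pressure_wave_amplitude):
    # Stronger vibration of the air -> higher firing rate.
    return min(200.0, 300.0 * pressure_wave_amplitude)

def touch_receptor(skin_pressure):
    # More pressure on the skin -> higher firing rate.
    return min(200.0, 20.0 * skin_pressure)

# Three very different physical quantities...
signals = {
    "vision":  photoreceptor(photon_flux=2500),
    "hearing": hair_cell(pressure_wave_amplitude=0.4),
    "touch":   touch_receptor(skin_pressure=3.2),
}

# ...all arrive in the same unit, ready to be compared and combined.
for sense, rate in signals.items():
    print(f"{sense}: {rate:.0f} spikes/sec")
```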

One of neuroscience’s unsolved puzzles is known as the “binding problem”: how is the brain able to produce a single, unified picture of the world, given that vision is processed in one region, hearing in another, touch in another, and so on? While the problem is still unsolved, the common currency among neurons – as well as their massive interconnectivity – promises to be at the heart of the solution.

Mike's new corneas were receiving and focusing light just as they were supposed to. But his brain could not make sense of the information it was receiving. With the news cameras rolling, Mike looked at his children and smiled at them. But inside he was petrified, because he couldn't tell what they looked like, or which was which. "I had no face recognition whatsoever," he recalls.

In surgical terms, the transplant had been a total success. But from Mike’s point of view, what he was experiencing couldn’t be called vision. As he summarized it: “my brain was going ‘oh my gosh’”.

With the help of his doctors and family, he walked out of the exam room and down the hallway, casting his gaze toward the carpet, the pictures on the wall, the doorways. None of it made sense to him. When he was placed in the car to go home, Mike fixed his eyes on the cars, buildings, and people whizzing by, trying unsuccessfully to understand what he was seeing. On the freeway, he recoiled when it looked like they were going to smash into a large rectangle in front of them. It turned out to be a highway sign, which they passed under. He had no sense of what objects were, nor of their depth. In fact, post-surgery, Mike found skiing more difficult than he had as a blind man. Because of his depth perception difficulties, he had a hard time telling the difference between people, trees, shadows, and holes. To him they all appeared simply as dark shapes against the white snow.

The lesson that surfaces from Mike’s experience is that the visual system is not like a camera. It’s not as though seeing is simply about removing the lens cap. For vision, you need more than functioning eyes.

In Mike’s case, forty years of blindness meant that the territory of his visual system (what we would normally call the visual cortex) had been largely taken over by his remaining senses, such as hearing and touch. That impacted his brain’s ability to weave together all the signals it needed to have sight. As we will see, vision emerges from the coordination of billions of neurons working together in a particular, complex symphony.

Today, fifteen years after his surgery, Mike still has a difficult time reading words on paper and the expressions on people's faces. When he needs to make better sense of his imperfect visual perception, he uses his other senses to cross-check the information: he touches, he lifts, he listens. This comparison across the senses is something we all did at a much younger age, when our brains were first making sense of the world.

Seeing requires more than the eyes

When babies reach out to touch what’s in front of them, it’s not only to learn about texture and shape. These actions are also necessary for learning how to see. While it sounds strange to imagine that the movement of our bodies is required for vision, this concept was elegantly demonstrated with two kittens in 1963.

Richard Held and Alan Hein, two researchers at MIT, placed two kittens into a cylinder ringed in vertical stripes. Both kittens got visual input from moving around inside the cylinder. But there was a critical difference in their experiences: the first kitten was walking of its own accord, while the second kitten was riding in a gondola attached to a central axis. Because of this setup, both kittens saw exactly the same thing: the stripes moved at the same time and at the same speed for both. If vision were just about the photons hitting the eyes, their visual systems should develop identically. But here was the surprising result: only the kitten that was using its body to do the moving developed normal vision. The kitten riding in the gondola never learned to see properly; its visual system never reached normal development.

Inside a cylinder with vertical stripes, one kitten walked while the other was carried. Both received exactly the same visual input, but only the kitten that walked on its own – the one able to match its own movements to changes in visual input – learned to see properly.


Vision isn't just about photons arriving to be readily interpreted by the visual cortex. It's a whole-body experience. The signals coming into the brain can be made sense of only through training, which requires cross-referencing them with information from our actions and their sensory consequences. It's the only way our brains can come to interpret what the visual data actually means.

If from birth you were unable to interact with the world in any way, unable to work out through feedback what the sensory information meant, in theory you would never be able to see. When babies hit the bars of their cribs and chew their toes and play with their blocks, they’re not simply exploring – they’re training up their visual systems. Entombed in darkness, their brains are learning how the actions sent out into the world (turn the head, push this, let go of that) change the sensory input that returns. As a result of extensive experimentation, vision becomes trained up.

Vision feels effortless but it’s not

Seeing feels so effortless that it’s hard to appreciate the effort the brain exerts to construct it. To lift the lid a little on the process, I flew to Irvine, California, to see what happens when my visual system doesn’t receive the signals it expects.

Dr. Alyssa Brewer at the University of California is interested in understanding how adaptable the brain is. To that end, she outfits participants with prism goggles that flip the left and right sides of the world – and she studies how the visual system copes with it.

On a beautiful spring day, I strapped on the prism goggles. The world flipped – objects on the right now appeared on my left, and vice versa. When trying to figure out where Alyssa was standing, my visual system told me one thing, while my hearing told me another. My senses weren’t matching up. When I reached out to grab an object, the sight of my own hand didn’t match the position claimed by my muscles. Two minutes into wearing the goggles, I was sweating and nauseated.

Prism goggles flip the visual world, making it inordinately difficult to perform simple tasks, such as pouring a drink, grabbing an object, or getting through a doorway without bumping into the frame.


Although my eyes were functioning and taking in the world, the visual data stream wasn’t consistent with my other data streams. This spelled hard work for my brain. It was like I was learning to see again for the first time.

I knew that life with the goggles wouldn't stay that difficult forever. Another participant, Brian Barton, was also wearing prism goggles – and he had been wearing them for a full week. Brian didn't seem to be on the brink of vomiting, as I was. To compare our levels of adaptation, I challenged him to a baking competition. The contest would require us to break eggs into a bowl, stir in cupcake mix, pour the batter into cupcake trays, and put the trays in the oven.

It was no contest: Brian’s cupcakes came out of the oven looking normal, while most of my batter ended up dried onto the counter or baked in smears across the baking tray. Brian could navigate his world without much trouble, while I had been rendered inept. I had to struggle consciously through every move.

Wearing the goggles allowed me to experience the normally hidden effort behind visual processing. Earlier that morning, just before putting on the goggles, my brain could exploit its years of experience with the world. But after a simple reversal of one sensory input, it couldn’t any longer.

To progress to Brian’s level of proficiency, I knew I would need to continue interacting with the world for many days: reaching out to grab objects, following the direction of sounds, attending to the positions of my limbs. With enough practice, my brain would get trained up by a continual cross-referencing between the senses, just the way that Brian’s brain had been doing for seven days. With training, my neural networks would figure out how various data streams entering into the brain matched up with other data streams.

Brewer reports that after a few days of wearing the goggles, people develop an internal sense of a new left and an old left, and a new right and an old right. After a week, they can move around normally, the way Brian could, and they lose the concept of which right and left were the old ones and new ones. Their spatial map of the world alters. By two weeks into the task, they can write and read well, and they walk and reach with the proficiency of someone without goggles. In that short time span, they master the flipped input.
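For the computationally minded, that week-long recalibration can be caricatured as error-driven learning: each mistaken reach tells the brain how far its mapping is off, and the mapping is nudged accordingly. The sketch below is an illustrative assumption of mine, not Brewer's model or data; it only shows how repeated cross-referencing makes the errors shrink.

```python
# Toy model of prism-goggle adaptation: the brain learns a correction
# that maps where things LOOK to where they ARE, driven by reach errors.
# (Illustrative assumption; not an actual model of the experiment.)

def adapt_to_flip(days, rate=0.6):
    correction = 0.0   # 0 = trust the eyes fully; 1 = fully undo the flip
    for day in range(1, days + 1):
        seen, actual = -1.0, 1.0            # goggles flip left (-1) and right (+1)
        estimate = seen + correction * (actual - seen)
        error = actual - estimate           # how far today's reaches miss
        correction += rate * error / (actual - seen)
        print(f"day {day}: reach error {error:+.2f}")

adapt_to_flip(days=7)   # the errors shrink day by day, as they did for Brian
```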

The brain doesn’t really care about the details of the input; it simply cares about figuring out how to most efficiently move around in the world and get what it needs. All the hard work of dealing with the low-level signals is taken care of for you. If you ever get a chance to wear prism goggles, you should. It exposes how much effort the brain goes through to make vision seem effortless.

Synchronizing the senses

So we've seen that our perception requires the brain to compare different streams of sensory data against one another. But there's something that makes this sort of comparison a real challenge: the issue of timing. All of the streams of sensory data – vision, hearing, touch, and so on – are processed by the brain at different speeds.

Consider sprinters at a racetrack. They appear to get off the blocks at the instant the gun fires, but it isn't actually instantaneous: watch them in slow motion and you'll see a sizeable gap between the bang and the start of their movement – almost two tenths of a second. (In fact, a runner who leaves the blocks less than one tenth of a second after the gun is disqualified – they've "jumped the gun".) Athletes train to make this gap as small as possible, but their biology imposes fundamental limits: the brain has to register the sound, send signals to the motor cortex, and then down the spinal cord to the muscles of the body. In a sport where thousandths of a second can be the difference between winning and losing, that response seems surprisingly slow.

Could the delay be shortened if we used, say, a flash instead of a pistol to start the racers? After all, light travels faster than sound – so wouldn't that let them get off the blocks faster?

I gathered up some fellow sprinters to put this to the test. In the top photograph, we are triggered by a flash of light; in the bottom photo we’re triggered by the gun.

Sprinters can get off the blocks more quickly in response to a bang (bottom panel) than to a flash (top panel).


We responded more slowly to the light. At first this may seem counterintuitive, given the speed of light in the outside world. But to understand what's happening we need to look at the speed of information processing on the inside. Visual data undergoes more complex processing than auditory data: it takes longer for signals carrying flash information to work their way through the visual system than for bang signals to work through the auditory system. We responded to the flash in 190 milliseconds, but to the bang in only 160 milliseconds. That's why a pistol is used to start sprinters.

But here's where it gets strange. We've just seen that the brain processes sounds more quickly than sights. And yet take a careful look at what happens when you clap your hands in front of you. Try it. Everything seems synchronized. How can that be, given that sound is processed more quickly? It means that your perception of reality is the end result of fancy editing tricks: the brain hides the difference in arrival times. How? What it serves up as reality is actually a delayed version. Your brain collects up all the information from the senses before it decides upon a story of what happened.

These timing difficulties aren't restricted to hearing and seeing: each type of sensory information takes a different amount of time to process. To complicate things even more, there are time differences even within a single sense. For example, it takes longer for signals to reach your brain from your big toe than from your nose. But none of this is obvious to your perception: your brain collects up all the signals first, so that everything seems synchronized. The strange consequence is that you live in the past: by the time you think the moment occurs, it's already long gone. The cost of synchronizing the incoming sensory information is that our conscious awareness lags behind the physical world. That's the unbridgeable gap between an event occurring and your conscious experience of it.
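One way to picture this editing trick is as a buffering scheme: hold each signal until the slowest relevant sense has been processed, then present everything at once. The sketch below is a deliberately crude caricature – the brain's actual synchronization is not a fixed wait-for-the-slowest rule – using the rough latency figures from the sprinter example.

```python
# Toy model of perceptual synchronization: serve up a delayed version
# of reality, waiting for the slower sense before committing to a story.
# (Caricature only; real neural timing is far more nuanced.)

PROCESSING_DELAY_MS = {"hearing": 160, "vision": 190}   # rough figures

def conscious_timeline(events):
    """events: list of (sense, physical_time_ms) pairs."""
    slowest = max(PROCESSING_DELAY_MS.values())
    for sense, t in events:
        processed = t + PROCESSING_DELAY_MS[sense]   # signal fully processed
        perceived = t + slowest                      # when it enters awareness
        print(f"{sense}: happened at {t} ms, processed by {processed} ms, "
              f"perceived at {perceived} ms")

# A hand clap: the sight and the sound happen at the same instant (t = 0)...
conscious_timeline([("vision", 0), ("hearing", 0)])
# ...and both are perceived at 190 ms: synchronized, but lagging the world.
```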

When the senses are cut off, does the show stop?

Our experience of reality is the brain’s ultimate construction. Although it’s based on all the streams of data from our senses, it’s not dependent on them. How do we know? Because when you take it all away, your reality doesn’t stop. It just gets stranger.

On a sunny San Francisco day, I took a boat across the chilly waters to Alcatraz, the famous island prison. I was going to see a particular cell called the Hole. If you broke the rules in the outside world, you were sent to Alcatraz. If you broke the rules in Alcatraz, you were sent to the Hole.

I entered the Hole and closed the door behind me. The cell is about ten feet by ten. It was pitch black: not a photon of light leaked in from anywhere, and sounds were cut off completely. In here, you are left utterly alone with yourself.

THE BRAIN IS LIKE A CITY


Just like a city, the brain’s overall operation emerges from the networked interaction of its innumerable parts. There is often a temptation to assign a function to each region of the brain, in the form of “this part does that”. But despite a long history of attempts, brain function cannot be understood as the sum of activity in a collection of well-defined modules.

Instead, think of the brain as a city. If you were to look out over a city and ask “where is the economy located?” you’d see there’s no good answer to the question. Instead, the economy emerges from the interaction of all the elements – from the stores and the banks to the merchants and the customers.

And so it is with the brain's operation: it doesn't happen in one spot. Just as in a city, no neighborhood of the brain operates in isolation. In brains and in cities, everything emerges from the interaction between residents, at all scales, locally and distantly. Just as trains bring materials and textiles into a city, where they are processed into the economy, raw electrochemical signals from the sensory organs are transported along superhighways of neurons, where they undergo processing and transformation into our conscious reality.

What would it be like to be locked in here for hours, or for days? To find out, I spoke to a surviving inmate who had been here. Armed robber Robert Luke – known as Cold Blue Luke – was sent to the Hole for twenty-nine days for smashing up his cell. Luke described his experience: “The dark Hole was a bad place. Some guys couldn’t take that. I mean, they were in there and in a couple of days they were banging their head on the wall. You didn’t know how you would act when you got in there. You didn’t want to find out.”

Completely isolated from the outside world, with no sound and no light, Luke’s eyes and ears were completely starved of input. But his mind didn’t abandon the notion of an outside world. It just continued to make one up. Luke describes the experience: “I remember going on these trips. One I used to remember was flying a kite. It got pretty real. But they were all in my head.” Luke’s brain continued to see.

Such experiences are common among prisoners in solitary confinement. Another resident of the Hole described seeing a spot of light in his mind’s eye; he would expand that spot into a television screen and watch TV. Deprived of new sensory information, prisoners said they went beyond daydreaming: instead, they spoke of experiences that seemed completely real. They didn’t just imagine pictures, they saw.

This testimony illuminates the relationship between the outside world and what we take to be reality. How can we understand what was going on with Luke? In the traditional model of vision, perception results from a procession of data that begins at the eyes and ends at some mysterious end point in the brain. But despite the simplicity of that assembly-line model, it's incorrect.

In fact, the brain generates its own reality, even before it receives information coming in from the eyes and the other senses. This is known as the internal model.

The basis of the internal model can be seen in the brain's anatomy. The thalamus sits between the eyes at the front of the head and the visual cortex at the back of the head. Most sensory information passes through it on its way to the appropriate region of the cortex. Visual information goes to the visual cortex, so a huge number of connections run from the thalamus into the visual cortex. But here's the surprise: ten times as many run in the opposite direction.

Visual information travels from the eyes to the lateral geniculate nucleus to the primary visual cortex (gold). Strangely, ten times as many connections feed information back in the other direction (purple).


Detailed expectations about the world – in other words, what the brain "guesses" will be out there – are transmitted by the visual cortex to the thalamus. The thalamus then compares what's coming in from the eyes against those expectations. If the input matches them ("when I turn my head I should see a chair there"), very little activity is sent back up. The thalamus simply reports on the differences between what the eyes are reporting and what the brain's internal model has predicted. In other words, what gets sent back to the visual cortex is whatever fell short of the expectation (also known as the "error"): the part that wasn't predicted away.
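This arrangement resembles what computational neuroscientists call predictive coding. Here is a minimal sketch of the loop; the update rule and the numbers are illustrative assumptions, not the brain's actual algorithm.

```python
# Minimal predictive-coding loop: the cortex sends a prediction down;
# only the mismatch (the "error") is fed back to update the model.
# (Illustrative sketch; not a claim about actual thalamocortical code.)

def predictive_loop(sensory_inputs, prediction=0.0, learning_rate=0.5):
    for observed in sensory_inputs:
        error = observed - prediction        # what wasn't predicted away
        print(f"predicted {prediction:5.2f}  saw {observed:5.2f}  "
              f"error fed back {error:+6.2f}")
        prediction += learning_rate * error  # refine the internal model
    return prediction

# A stable scene ("a chair is there"): the error shrinks toward zero,
# so almost nothing needs to be fed back.
model = predictive_loop([10.0, 10.0, 10.0, 10.0])

# A surprise (the chair moves): a large error briefly flows back
# until the model catches up with the new input.
predictive_loop([4.0, 4.0, 4.0], prediction=model)
```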

So at any moment, what we experience as seeing relies less on the light streaming into our eyes, and more on what’s already inside our heads.

And that’s why Cold Blue Luke sat in a pitch-black cell having rich visual experiences. Locked in the Hole, his senses were providing his brain with no new input, so his internal model was able to run free, and he experienced vivid sights and sounds. Even when brains are unanchored from external data, they continue to generate their own imagery. Remove the world and the show still goes on.

You don’t have to be locked up in the Hole to experience the internal model. Many people find great pleasure in sensory deprivation chambers – dark pods in which they float in salty water. By removing the anchor of the external world, they let the internal world fly free.

And of course you don’t have to go far to find your own sensory deprivation chamber. Every night when you go to sleep you have full, rich, visual experiences. Your eyes are closed, but you enjoy the lavish and colorful world of your dreams, believing the reality of every bit of it.

Seeing our expectations

When you walk down a city street, you seem to automatically know what things are without having to work out the details. Your brain makes assumptions about what you’re seeing based on your internal model, built up from years of experience of walking other city streets. Every experience you’ve had contributes to the internal model in your brain.

Instead of using your senses to constantly rebuild your reality from scratch every moment, you’re comparing sensory information with a model that the brain has already constructed: updating it, refining it, correcting it. Your brain is so expert at this task that you’re normally unaware of it. But sometimes, under certain conditions, you can see the process at work.

Try taking a plastic mask of a face, the type you wear on Halloween. Now turn it around so you're looking at the hollow back side. You know it's hollow, but despite this knowledge you often can't help seeing the face as though it's coming out at you. What you experience is not the raw data hitting your eyes, but your internal model – a model trained on a lifetime of faces that stick out. The hollow mask illusion reveals the strength of your expectations in what you see. (Here's an easy way to demonstrate the illusion to yourself: press your face into fresh snow and take a photo of the impression. Although the impression is hollow, the resulting picture will look to your brain like a 3D face sticking out of the snow.)

When you’re confronted with the hollow side of a mask (right), it still looks like it’s coming towards you. What we see is strongly influenced by our expectations.


It's also your internal model that allows the world out there to remain stable – even when you're moving. Imagine you see a cityscape that you really want to remember, so you take out your cell phone to capture a video. But instead of smoothly panning the camera across the scene, you decide to move it around exactly as your eyes move. Although you're not generally aware of it, your eyes jump around about four times a second, in jerky movements called saccades. Film this way and you'd quickly discover that this is no way to take a video: on playback, the rapidly lurching footage is nauseating to watch.

So why does the world appear stable to you when you're looking at it? Why doesn't it appear as jerky and nauseating as the poorly filmed video? Here's why: your internal model operates under the assumption that the world outside is stable. Your eyes are not like camera lenses that you're seeing through; they simply venture out to find more details to feed into the internal model, gathering bits of data for the world inside your skull.
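To make the contrast concrete: a video camera's output is its input stream, so jumpy sampling makes a jumpy movie, while the eyes' jumpy samples are merely folded into a model that stays put. A toy sketch (the scene and the fixation sequence are invented for illustration):

```python
# Toy contrast between a camera and an eye. The eye's jerky samples
# don't become the percept; they update a stable internal model.
# (Invented example; not a model of actual visual processing.)

scene = {"left": "tree", "center": "fountain", "right": "cafe"}

def saccade_samples(scene, fixations):
    # Each saccade lands on one spot and returns only that patch.
    return [(spot, scene[spot]) for spot in fixations]

camera_stream = []   # a camera's output IS its lurching input...
internal_model = {}  # ...but the eye's samples feed a stable model.

for spot, patch in saccade_samples(scene, ["center", "left", "center", "right"]):
    camera_stream.append(patch)      # jumpy: fountain, tree, fountain, cafe
    internal_model[spot] = patch     # stable: each patch filed in its place

print(camera_stream)    # ['fountain', 'tree', 'fountain', 'cafe']
print(internal_model)   # {'center': 'fountain', 'left': 'tree', 'right': 'cafe'}
```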

Our internal model is low resolution but upgradeable

Our internal model of the outside world allows us to get a quick sense of our environment. And that is its primary function – to navigate the world. What’s not always obvious is how much of the finer detail the brain leaves out. We have the illusion that we’re taking in the world around us in great detail. But as an experiment from the 1960s shows, we aren’t.
