
4

WRAPPING AROUND THE INPUTS

Every man can, if he so desires, become the sculptor of his own brain.

—SANTIAGO RAMÓN Y CAJAL (1852–1934), neuroscientist and Nobel laureate

Michael Chorost was born with poor hearing, and he got by during his young adult life with the help of a hearing aid. But one afternoon, while waiting to pick up a rental car, the battery to his hearing aid died. Or so he thought. He replaced the battery but found that all sound was still missing from his world. He drove himself to the nearest emergency room and discovered that the remainder of his hearing—his thin auditory lifeline to the rest of the world—was gone for good.1

Hearing aids wouldn’t be of any use for him now; after all, they work by capturing sound from the world and blasting it at higher volume into the ailing auditory system. This strategy is effective for some types of hearing loss, but it only works if everything downstream of the eardrum is functioning. If the inner ear is defunct, no amount of amplification solves the problem. And this was Michael’s situation. It seemed as though his perception of the world’s soundscapes had come to an end.

But then he found out about a single remaining possibility, and in 2001 he underwent surgery for a cochlear implant. This tiny device circumvents the broken hardware of the inner ear to speak directly to the functioning nerve (think of it like a data cable) just beyond it. The implant is a minicomputer lodged directly into his inner ear; it receives sounds from the outside world and passes the information to the auditory nerve by means of tiny electrodes.

So the damaged part of the inner ear is bypassed, but that doesn’t mean the experience of hearing comes for free. Michael had to learn to interpret the foreign language of the electrical signals being fed to his auditory system:

When the device was turned on a month after surgery, the first sentence I heard sounded like “Zzzzzz szz szvizzz ur brfzzzzzz?” My brain gradually learned how to interpret the alien signal. Before long, “Zzzzzz szz szvizzz ur brfzzzzzz?” became “What did you have for breakfast?” After months of practice, I could use the telephone again, even converse in loud bars and cafeterias.

Although being implanted with a minicomputer sounds something like science fiction, cochlear implants have been on the market since 1982, and more than half a million people are walking around with these bionics in their heads, enjoying voices and door knocks and laughter and piccolos. The software on the cochlear implant is hackable and updateable, so Michael has spent years getting more efficient information through the implant without further surgeries. Almost a year after the implant was activated, he upgraded to a program that gave him twice the resolution. As Michael puts it, “While my friends’ ears will inevitably decline with age, mine will only get better.”


Terry Byland lives near Los Angeles. He was diagnosed with retinitis pigmentosa, a degenerative disorder of his retina, the sheet of photoreceptors at the back of the eye. He reports, “Aged 37, the last thing you want to hear is that you are going blind—that there’s nothing they can do.”2

But then he discovered that there was something he could do, if he was brave enough to try it. In 2004, he became one of the first patients to undergo an experimental procedure: getting implanted with a bionic retinal chip. A tiny device with a grid of electrodes, it plugs into the retina at the back of the eye. A camera on glasses wirelessly beams its signals to the chip. The electrodes give little zaps of electricity to Terry’s surviving retinal cells, generating signals along the previously silent highway of the optic nerve. After all, Terry’s optic nerve functioned just fine: even while the photoreceptors had died, the nerve remained hungry for signals it could carry to the brain.

A research team at the University of Southern California implanted the miniature chip in Terry’s eye. The surgery was completed without a hitch, and then the real testing began. With hushed anticipation, the research team turned on the electrodes individually to test them. Terry reported, “It was amazing to see something. It was like little specks of light—not even the size of a dime—when they were testing the electrodes one by one.”

Over the course of days, Terry experienced only small constellations of lights: not a rousing success. But his visual cortex gradually figured out how to extract better information out of the signals. After some time, he detected the presence of his eighteen-year-old son: “I was with my son, walking . . . it was the first time I had seen him since he was five years old. I don’t mind saying, there were a few tears wept that day.”

Terry wasn’t experiencing a clear visual picture—it was more like a simple pixelated grid—but the door of darkness had swung open a crack. Over time, his brain has been able to make better sense of the signals. While he can’t ascertain the details of individual faces, he can make them out dimly. And although the resolution of his retinal chip is low, he can touch objects presented at random locations and is able to cross a city street by discerning the white lines of the crosswalk.3 He proudly reports, “When I’m in my home, or another person’s house, I can go into any room and switch the light on, or see the light coming in through the window. When I am walking along the street I can avoid low hanging branches—I can see the edges of the branches, so I can avoid them.”


These digital devices push information that doesn’t quite match the language of the natural biology. Nevertheless, the brain figures out how to make use of the data.

The idea of prostheses for the ear and eye had been seriously considered in the scientific community for decades. But no one was positive that these technologies would work. After all, the inner ear and the retina perform astoundingly sophisticated processing on the sensory input they receive. So would a small electronic chip, speaking the dialect of Silicon Valley instead of the language of our natural biological sense organs, be understood by the rest of the brain? Or instead, would its patterns of miniature electrical sparks come off as gibberish to downstream neural networks? These devices would be like an uncouth traveler to a foreign land who expects that everyone will figure out his language if he just keeps shouting it.

Amazingly, in the case of the brain, such an unrefined strategy works: the rest of the country learns to understand the foreigner.

But how?

The key to understanding this requires diving one level deeper: your three pounds of brain tissue are not directly hearing or seeing any of the world around you. Instead, your brain is locked in a crypt of silence and darkness inside your skull. All it ever sees are electrochemical signals that stream in along different data cables. That’s all it has to work with.

In ways we are still working to understand, the brain is stunningly gifted at taking in these signals and extracting patterns. To those patterns it assigns meaning. From that meaning arises your subjective experience. The brain is an organ that converts sparks in the dark into the euphonious picture show of your world. All of the hues and aromas and emotions and sensations in your life are encoded in trillions of signals zipping in blackness, just as a beautiful screen saver on your computer screen is fundamentally built of zeros and ones.

THE PLANET-WINNING TECHNOLOGY OF THE POTATO HEAD

Imagine you went to an island of people born blind. They all read by Braille, feeling tiny patterns of inputs on their fingertips. You watch them break into laughter or melt into sobs as they brush over the small bumps. How can you fit all that emotion into the tip of your finger? You explain to them that when you enjoy a novel, you aim the spheres on your face toward particular lines and curves. Each sphere has a lawn of cells that record collisions with photons, and in this way you can register the shapes of the symbols. You’ve memorized a set of rules by which different shapes represent different sounds. Thus, for each squiggle you recite a small sound in your head, imagining what you would hear if someone were speaking aloud. The resulting pattern of neurochemical signaling makes you explode with hilarity or burst into tears. You couldn’t blame the islanders for finding your claim difficult to understand.

You and they would finally have to allow a simple truth: the fingertip or the eyeball is just the peripheral device that converts information from the outside world into spikes in the brain. The brain then does all the hard work of interpretation. You and the islanders would break bread over the fact that in the end it’s all about the trillions of spikes racing around in the brain—and that the method of entry simply isn’t the part that matters.

Whatever information the brain is fed, it will learn to adjust to it and extract what it can. As long as the data have a structure that reflects something important about the outside world (along with some other requirements we will see in the next chapters), the brain will figure out how to decode it.


Sensory organs feed many different information sources to the brain.

There’s an interesting consequence to this: your brain doesn’t know, and it doesn’t care, where the data come from. Whatever information comes in, it just works out how to leverage it.

This makes the brain a very efficient kind of machine. It is a general-purpose computing device. It just sucks up the available signals and determines—nearly optimally—what it can do with them. And that strategy, I propose, frees up Mother Nature to tinker around with different sorts of input channels.

I call this the Potato Head model of evolution. I use this name to emphasize that all the sensors that we know and love—like our eyes and our ears and our fingertips—are merely peripheral plug-and-play devices. You stick them in, and you’re good to go. The brain figures out what to do with the data that come in.

As a result, Mother Nature can build new senses simply by building new peripherals. In other words, once she has figured out the operating principles of the brain, she can tinker around with different sorts of input channels to pick up on different energy sources from the world. Information carried by the reflection of electromagnetic radiation is captured by the photon detectors in the eyes. Air compression waves are captured by the sound detectors of the ears. Heat and texture information is gathered by the large sheets of sensory material we call skin. Chemical signatures are sniffed or licked up by the nose or tongue. And it all gets translated into spikes running around in the dark vault of the skull.


The Potato Head hypothesis: plug in sensory organs, and the brain figures out how to use them.

This remarkable ability of the brain to accept any sensory input shifts the burden of research and development of new senses to the exterior sensors. In the same way that you can plug in an arbitrary nose or eyes or mouth for Potato Head, likewise does nature plug a wide variety of instruments into the brain for the purpose of detecting energy sources in the outside world.

Consider plug-and-play peripheral devices for your computer. The importance of the designation “plug-and-play” is that your computer does not have to know about the existence of the XJ-3000 SuperWebCam that will be invented several years from now; instead, it needs only to be open to interfacing with an unknown, arbitrary device and receiving streams of data when the new device gets plugged in. As a result, you do not need to buy a new computer each time a new peripheral hits the market. You simply have a single, central device that opens its portholes for peripherals to be added in a standardized manner.4

Viewing our peripheral detectors like individual, stand-alone devices might seem crazy; after all, aren’t thousands of genes involved in building these devices, and don’t those genes overlap with other pieces and parts of the body? Can we really look at the nose, eye, ear, or tongue as a device that stands alone? I dove deep into researching this problem. After all, if the Potato Head model was correct, wouldn’t that suggest that we might find simple switches in the genetics that lead to the presence or absence of these peripherals?

As it turns out, all genes aren’t equal. Genes unpack in an exquisitely precise order, with the expression of one triggering the expression of the next ones in a sophisticated algorithm of feedback and feedforward. As a result, there are critical nodes in the genetic program for building, say, a nose. That program can be turned on or off.

How do we know this? Look at mutations that happen with a genetic hiccup. Take the condition called arhinia, in which a child is born without a nose. It is simply missing from the face. Baby Eli, born in Alabama in 2015, is completely missing a nose, and he also lacks a nasal cavity or system for smelling.5 Such a mutation seems startling and difficult to fathom, but in our plug-and-play framework arhinia is predictable: with a slight tweak of the genes, the peripheral device simply doesn’t get built.


Baby Eli was born with no nose.


Baby Jordy was born with no eyes; beneath his lids one finds skin.

If our sensory organs can be viewed as plug-and-play devices, we might expect to find medical cases in which a child is born with, say, no eyes. And indeed, that is exactly what the condition of anophthalmia is. Consider baby Jordy, born in Chicago in 2014.6 Beneath his eyelids, one simply finds smooth, glossy flesh. While Jordy’s behavior and brain imaging indicate that the rest of his brain is functioning just fine, he has no peripheral devices for capturing photons. Jordy’s grandmother points out, “He will know us by feeling us.” His mother, Brania Jackson, got a special “I love Jordy” tattoo—in Braille—on her right shoulder blade so that Jordy can grow up feeling it.

Some babies are born without ears. In the rare condition of anotia, children are born with a complete absence of the external portion of the ear.


A child with no ears.

Relatedly, a mutation affecting a single protein leaves the structures of the inner ear absent.7 Needless to say, children with these mutations are completely deaf, because they lack the peripheral devices that convert air pressure waves into spikes.

Can you be born without a tongue, but otherwise healthy? Sure. That’s what happened to a Brazilian baby named Auristela. She spent years struggling to eat, speak, and breathe. Now an adult, she underwent an operation to put in a tongue, and at present she gives eloquent interviews on growing up tongueless.8

The extraordinary list of the ways we can be disassembled goes on. Some children are born without any pain receptors in their skin and inner organs, so they are totally insensitive to the sting and agony of life’s lesser moments.9 (At first blush, it might seem as though freedom from pain would be an advantage. But it’s not: children unable to experience pain are covered with scars and often die young because they don’t know what to avoid.) Beyond pain, there are many other types of receptors in the skin, including stretch, itch, and temperature, and a child can end up missing some, but not others. This collectively falls under the term “anaphia,” the inability to feel touch.

When we look at this constellation of disorders, it becomes clear that our peripheral detectors unpack by dint of specific genetic programs. A minor malfunction in the genes can halt the program, and then the brain doesn’t receive that particular data stream.


The all-purpose cortex idea suggests how new sensory skills can be added during evolution: with a mutated peripheral device, a new data stream makes its way into some swath of the brain, and the neural processing machinery gets to work. Thus, new skills require only the development of new sensory devices.

And that’s why we can look across the animal kingdom and find all kinds of strange peripheral devices, each of which is crafted by millions of years of evolution. If you were a snake, your sequence of DNA would fabricate heat pits that pick up infrared information. If you were a black ghost knifefish, your genetic letters would unpack electrosensors that pick up on perturbations in the electrical field. If you were a bloodhound dog, your code would write instructions for an enormous snout crammed with smell receptors. If you were a mantis shrimp, your instructions would manufacture eyes with sixteen types of photoreceptors. The star-nosed mole has what seem like twenty-two fingers on its nose, and with these it feels around and constructs a 3-D model of its tunnel systems. Many birds, cows, and insects have magnetoreception, with which they orient to the magnetic field of the planet.

To accommodate such varied peripherals, does the brain have to be redesigned each time? I suggest not. In evolutionary time, random mutations introduce strange new sensors, and the recipient brains simply figure out how to exploit them. Once the principles of brain operation have been established, nature can simply worry about designing new sensors.

This perspective allows a lesson to come into focus: the devices we come to the table with—eyes, noses, ears, tongues, fingertips—are not the only collection of instruments we could have had. These are simply what we’ve inherited from a lengthy and complex road of evolution.

But that particular collection of sensors may not be what we have to stick with.

After all, the brain’s ability to wrap itself around different kinds of incoming information implies the bizarre prediction that one might be able to get one sensory channel to carry another’s information. For example, what if you took a data stream from a video camera and converted it into touch on your skin? Would the brain eventually be able to interpret the visual world simply by feeling it?

Welcome to the stranger-than-fiction world of sensory substitution.

SENSORY SUBSTITUTION

The idea that one can feed data into the brain via the wrong channels may sound hypothetical and bizarre. But the first paper demonstrating this was published in the journal Nature more than half a century ago.

The story begins in 1958, when a physician named Paul Bach-y-Rita received terrible news: his father, a sixty-five-year-old professor, had just suffered a major stroke. He was wheelchair bound and could barely speak or move. Paul and his brother George, a medical student at the University of Mexico, searched for ways to help their father. And together they pioneered an idiosyncratic, one-on-one rehabilitation program.


Sensory substitution: feed information into the brain via unusual pathways.

As Paul described it, “It was tough love. [George would] throw something on the floor and say ‘Dad, go get it.’ ”10 Or they would have their father try to sweep the porch, even as the neighbors looked on in dismay. But for their father, the struggle was rewarding. As Paul phrased his father’s view, “This useless man was doing something.”

Stroke victims frequently recover only partially—and often not at all—so the brothers tried not to buy into false hope. They knew that when brain tissue is killed in a stroke, it never comes back.

But their father’s recovery proceeded unexpectedly well. So well, in fact, that their father returned to his professorship and died much later in life (the victim of a heart attack while hiking in Colombia at nine thousand feet).

Paul was deeply impressed at the extent of his father’s recovery, and the experience marked a major turning point in his life. Paul realized that the brain could retrain itself and that even when parts of the brain were forever gone, other parts could take over their function. Paul departed a professorship at Smith-Kettlewell in San Francisco to begin a residency in rehabilitation medicine at Santa Clara Valley Medical Center. He wanted to study people like his father. He wanted to figure out what it took for the brain to retrain.

By the late 1960s, Paul Bach-y-Rita had pursued a scheme that most of his colleagues assumed to be foolish. He sat a blind volunteer in a reconfigured dental chair in his laboratory. Inset into the back of the chair was a grid of four hundred Teflon tips, arranged twenty by twenty. The tips could be extended and retracted by mechanical solenoids. Over the blind man’s head a camera was mounted on a tripod. The video stream of the camera was converted into a poking of the tips against the volunteer’s back.

Objects were passed in front of the camera while the blind participant in the chair paid careful attention to the feelings in his back. Over days of training, he got better at identifying the objects by their feel—in the same way a person might play the game of drawing with a finger on another person’s back and then asking the person to identify the shape or letter. The experience wasn’t exactly like vision, but it was a start.
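For readers who want the conversion made concrete, here is a minimal sketch, in Python, of the kind of mapping the chair performed: a camera frame is averaged down to a twenty-by-twenty grid, and bright cells press their Teflon tips into the back. The frame size, threshold, and function names are illustrative assumptions; the actual device did this with analog electronics and solenoid drivers, not code.

```python
import numpy as np

GRID = 20  # the chair used a 20 x 20 array of Teflon tips

def frame_to_solenoids(frame: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Downsample a grayscale camera frame (values 0..1) to a 20x20 grid and
    decide, for each Teflon tip, whether to extend it against the back."""
    h, w = frame.shape
    bh, bw = h // GRID, w // GRID
    # Average the pixels falling into each grid cell.
    blocks = frame[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw).mean(axis=(1, 3))
    # Bright regions press the corresponding tips into the skin.
    return blocks > threshold

# Hypothetical usage: one video frame in, four hundred on/off commands out.
frame = np.random.rand(240, 320)      # stand-in for a camera frame
tips = frame_to_solenoids(frame)      # boolean 20x20 array
print(tips.sum(), "of 400 tips extended")
```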

What Bach-y-Rita found astonished the field: the blind subjects could learn to distinguish horizontal from vertical from diagonal lines. More advanced users could learn to distinguish simple objects and even faces—simply by the poking sensations on their back. He published his findings in the journal Nature, under the surprising title “Vision Substitution by Tactile Image Projection.” It was the beginning of a new era—that of sensory substitution.11 Bach-y-Rita summarized his findings simply: “The brain is able to use information coming from the skin as if it were coming from the eyes.”


A video feed is translated into touch on the back.

The technique improved drastically when Bach-y-Rita and his collaborators made a single, simple change: instead of mounting the camera to the chair, they allowed the blind user to point it himself, using his own volition to control where the “eye” looked.12 Why? Because sensory input is best learned when one can interact with the world. Letting users control the camera closed the loop between muscle output and sensory input.13 Perception can be understood not as passive but instead as a way to actively explore the environment, matching a particular action to a specific change in what returns to the brain. It doesn’t matter to the brain how that loop gets established—whether by moving the extraocular muscles that move the eye or using arm muscles to tilt a camera. However it happens, the brain works to figure out how the output maps to the input.

The subjective experience for the users was that visual objects were located “out there” instead of on the skin of the back.14 In other words, it was something like vision. Even though the sight of your friend’s face at the coffee shop impinges on your photoreceptors, you don’t perceive that the signal is at your eyes. You perceive that he’s out there, waving at you from a distance. And so it went for the users of the modified dental chair.

While Bach-y-Rita’s device was the first to hit the public eye, it was not actually the first attempt at sensory substitution. On the other side of the world at the end of the 1890s, a Polish ophthalmologist named Kazimierz Noiszewski developed the Elektroftalm (from the Greek for “electricity” + “eye”) for blind people. A photocell was placed on the forehead of a blind person, and the more light that hit it, the louder the sound played in the person’s ear. Based on the sound’s intensity, the blind person could tell where light or dark areas were.


The Elektroftalm translated a camera’s image into vibrations on the head (1969).

Unfortunately, it was large and heavy, and it offered only one pixel of resolution, so it gained no traction. But by 1960, his Polish colleagues picked up the ball and ran with it.15 Recognizing that hearing is critical for the blind, they turned instead to passing the information in via touch. They built a system of vibratory motors, mounted on a helmet, that “drew” the images on the head. Blind participants could move around in specially prepared rooms, painted to enhance the contrast of door frames and furniture edges. It worked. Alas, like the earlier inventions, the device was heavy and would get hot during use, so the world had to wait. But the proof of principle was there.

Why did these strange approaches work? Because inputs to the brain—photons at the eyes, air compression waves at the ears, pressure on the skin—are all converted into the common currency of electrical signals. As long as the incoming spikes carry information that represents something important about the outside world, the brain will learn how to interpret it. The vast neural forests in the brain don’t care about the route by which the spikes entered. Bach-y-Rita described it this way in a 2003 interview on PBS:

If I’m looking at you, the image of you doesn’t get beyond my retina. . . . From there to the brain to the rest of the brain, it’s pulses. Pulses along nerves. Those pulses aren’t any different from the pulses along the big toe. It’s [the] information that [they carry], and the frequency and the pattern of pulses. If you could train the brain to extract that kind of information, then you don’t need the eye to see.

In other words, the skin is a path to feeding data into a brain that no longer possesses functioning eyes. But how could that work?

THE ONE-TRICK PONY

When you look at the cortex, it looks approximately the same everywhere as you traverse its hills and valleys. But when we image the brain or dip tiny electrodes into its jellylike mass, we find that different types of information are lurking in different regions. These differences have allowed neuroscientists to assign areas with labels: this region is for vision, this one for hearing, this one for touch from your left big toe, and so on. But what if areas come to be what they are only because of their inputs? What if the “visual” cortex is only visual because of the data it receives? What if specialization develops from the details of the incoming data cables rather than by genetic pre-specification of modules?

In this framework, the cortex is an all-purpose data-processing engine. Feed data in and it will crunch through and extract statistical regularities.16 In other words, it’s willing to accept whatever input is plugged into it and performs the same basic algorithms on it. In this view, no part of the cortex is prespecified to be visual, auditory, and so on. So whether an organism wants to detect air compression waves or photons, all it has to do is plug the fiber bundle of incoming signals into the cortex, and the six-layered machinery will run a very general algorithm to extract the right kind of information. The data make the area.

And this is why the neocortex looks about the same everywhere: because it is the same. Any patch of cortex is pluripotent—meaning that it has the possibility to develop into a variety of fates, depending on what’s plugged into it.

So if there’s an area of the brain devoted to hearing, it’s only because peripheral devices (in this case, the ears) send information along cables that plug into the cortex at that spot. It’s not the auditory cortex by necessity; it’s the auditory cortex only because signals passed along by the ears have shaped its destiny. In an alternate universe, imagine that nerve fibers carrying visual information plugged into that area; then we would label it in our textbooks as the visual cortex. In other words, the cortex performs standard operations on whatever input it happens to get. This gives a first impression that the brain has prespecified sensory areas, but it really only looks that way because of the inputs.17

Consider where the fish markets are in the middle of the United States: the towns in which pescatarianism thrives, in which sushi restaurants are overrepresented, in which new seafood recipes are developed—let’s call these towns the primary fishual areas.

Why does the map have a particular configuration, and not something different? It looks that way because that’s where the rivers flow, and therefore where the fish are. Think of the fish like bits of data, flowing along the data cables of the rivers, and the restaurant distribution crafts itself accordingly. No legislative body prescribed that the fish markets should move there. They clustered there naturally.

All this leads to the hypothesis that there’s nothing special about a chunk of tissue in, say, the auditory cortex. So could you cut out a bit of auditory cortex in an embryo and transplant it into the visual cortex, and would it function just fine? Indeed, this is precisely what was demonstrated in animal experiments beginning in the early 1990s: in short order, the chunk of transplanted tissue looks and behaves just like the rest of the visual cortex.18

And then the demonstration was taken a step further. In 2000, scientists at MIT redirected inputs from a ferret’s eye to the auditory cortex so that now the auditory cortex received visual data. What happened? The auditory cortex adjusted its circuitry to resemble the connections of the primary visual cortex.19 The rewired animals interpreted inputs to the auditory cortex like normal vision. This tells us that the pattern of inputs determines the fate of the cortex. The brain dynamically wires itself to best represent (and eventually act upon) whatever data come swimming in.20


Visual fibers in the ferret brain were rerouted to the auditory cortex—which then began to process visual information.

Hundreds of studies on transplanting tissue or rewiring inputs support the model that the brain is a general-purpose computing device—a machine that performs standard operations on the data streaming in—whether those data carry a glimpse of a hopping rabbit, the sound of a phone ring, the taste of peanut butter, the smell of salami, or the touch of silk on the cheek. The brain analyzes the input and puts it into context (what can I do with this?), regardless of where it comes from. And that’s why data can become useful to a blind person even when they’re fed into the back, or ear, or forehead.


In the 1990s, Bach-y-Rita and his colleagues sought ways to go smaller than the dental chair. They developed a small device called the BrainPort.21 A camera is attached to the forehead of a blind person, and a small grid of electrodes is placed on the tongue. The “Tongue Display Unit” uses a grid of stimulators over three square centimeters. The electrodes deliver small shocks that correlate with the position of pixels, feeling something like the children’s candy Pop Rocks in the mouth. Bright pixels are encoded by strong stimulation at the corresponding points on the tongue, gray by medium stimulation, and darkness by no stimulation. The BrainPort lets users distinguish visual items with an acuity of about 20/800.22 While users at first perceive the tongue stimulation as unidentifiable edges and shapes, they eventually learn to recognize the stimulation at a deeper level, allowing them to discern qualities such as distance, shape, direction of movement, and size.23
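A rough sketch of that three-level encoding, assuming a grayscale image already downsampled to the size of the electrode grid (the thresholds and stimulation codes below are illustrative, not the BrainPort’s actual firmware):

```python
import numpy as np

def encode_for_tongue(pixels: np.ndarray) -> np.ndarray:
    """Map pixel brightness (0..1) to electrotactile intensity on the tongue grid:
    dark -> no stimulation (0), gray -> medium (1), bright -> strong (2)."""
    levels = np.zeros_like(pixels, dtype=int)
    levels[pixels > 0.33] = 1
    levels[pixels > 0.66] = 2
    return levels

patch = np.array([[0.1, 0.5], [0.9, 0.7]])
print(encode_for_tongue(patch))   # [[0 1] [2 2]]
```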


Seeing with the tongue.

We normally think of the tongue as a taste organ, but it is loaded with touch receptors (that’s how you feel the texture of food), making it an excellent brain-machine interface.24 As with the other visual-tactile devices, the tongue grid reminds us that vision arises not in the eyes but in the brain. When brain imaging is performed on trained subjects (blind or sighted), the motion of electrotactile shocks across the tongue activates an area of the brain normally involved in visual motion.25

As with the grid of solenoids on the back, blind people who use the BrainPort begin to feel that scenes have “openness” and “depth” and that objects are out there. In other words, it’s more than a cognitive translation of what’s happening on the tongue: it grows into a direct perceptual experience. Their experience is not “I feel a pattern on my tongue that codes for my spouse passing by,” but instead a direct sense that their spouse is moving across the living room. If you’re a reader with normal vision, keep in mind this is precisely how your eyes work: electrochemical signals in your retinas are perceived as a friend beckoning you, a Ferrari zooming past on the road, a scarlet kite against an azure sky. Even though all the activity is at the surface of your sensory detectors, you perceive everything as out there. It simply doesn’t matter whether the detector is the eye or the tongue. As the blind participant Roger Behm describes his experience of the BrainPort:

Last year, when I was up here for the first time, we were doing stuff on the table, in the kitchen. And I got kind of a little emotional, because it’s thirty-three years since I’ve seen before. And I could reach out and I see the different-sized balls. I mean I visually see them. I could reach out and grab them—not grope or feel for them—pick them up, and see the cup, and raise my hand and drop it right in the cup.26

As you can presumably guess by now, the tactile input can be almost anywhere on the body. Researchers in Japan have developed a variant of the tactile grid—the Forehead Retina System—in which a video stream is converted to small points of touch on the forehead.27 Why the forehead? Why not? It’s not being used for much else.


The Forehead Retina System.

Another version hosts a grid of vibrotactile actuators on the abdomen, which use intensity to represent distance to the nearest surfaces.28

What these all have in common is that the brain can figure out what to make of visual input coming in through channels normally thought of as touch. But it turns out that touch isn’t the only strategy that works.

EYE TUNES

In my laboratory some years ago, Don Vaughn walked with his iPhone held out in front of him. His eyes were closed, and yet he was not crashing into things. The sounds streaming through his earbuds were busily converting the visual world into a soundscape. He was learning to see the room with his ears. He gently moved the phone around in front of him like a third eye, like a miniature walking cane, turning it this way and that to pull in the information he needed. We were testing whether a blind person could pick up visual information through the ears. Although you might not have heard of this approach to blindness before, the idea isn’t new: it began more than half a century earlier.

In 1966, a professor named Leslie Kay became obsessed with the beauty of bat echolocation. He knew that some humans could learn to echolocate, but it wasn’t easy. So Kay designed a bulky pair of glasses to help the blind community take advantage of the idea.29

The glasses emitted an ultrasonic sound into the environment. With its short wavelengths, ultrasound can reveal information about small objects when it bounces back. Electronics on the glasses captured the returning reflections and converted them into sounds humans could hear. The note in your ear indicated the distance of the object: high pitches coded for something far away, low pitches for something nearby. The volume of a signal told you about the size of the object: loud meant the object was large; soft told you it was small. The clarity of the signal was used to represent texture: a smooth object became a pure tone; a rough texture sounded like a note corrupted with noise. Users learned to perform object avoidance pretty well; however, because of the low resolution, Kay and his colleagues concluded that the invention served more as a supplement to a guide dog or cane than as a replacement.
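Kay’s mapping can be summarized in a few lines. The sketch below, with made-up numeric ranges and parameter names, turns an echo’s estimated distance, size, and roughness into the pitch, loudness, and noisiness of a tone; the real glasses implemented this in analog hardware.

```python
def echo_to_tone(distance_m: float, size: float, roughness: float):
    """Map an echo's properties to sound parameters (ranges are illustrative).
    distance_m: meters to the object; size and roughness: 0..1."""
    pitch_hz = 200 + 300 * distance_m      # farther objects -> higher pitch
    loudness = min(1.0, size)              # larger objects -> louder
    noise_mix = min(1.0, roughness)        # rougher textures -> noisier tone
    return pitch_hz, loudness, noise_mix

print(echo_to_tone(distance_m=3.0, size=0.8, roughness=0.1))
# (1100.0, 0.8, 0.1): a distant, large, smooth object sounds high, loud, and pure.
```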


Professor Kay’s sonic glasses shown on the right. (The other glasses are merely thick, not sonic.)

Although it was only moderately useful for adults, there remained the question of how well a baby’s brain might learn to interpret the signals, given that young brains are especially plastic. In 1974, in California, the psychologist T. G. R. Bower used a modified version of Kay’s glasses to test whether the idea could work. His participant was a sixteen-week-old baby, blind from birth.30 On the first day, Bower took an object and moved it slowly back and forth from the infant’s nose. By the fourth time he moved the object, he reports, the baby’s eyes converged (both pointed toward the nose), as happens when something approaches the face. When Bower moved the object away, the baby’s eyes diverged. After a few more cycles of this, the baby put up its hands as the object drew near. When objects were moved left and right in front of the baby, Bower reports that the baby tracked them with its head and tried to swipe at them. In his write-up of the results, Bower relates several other behaviors:

The baby was facing [his talking mother] and wearing the device. He slowly turned his head to remove her from the sound field, then slowly turned back to bring her in again. This behavior was repeated several times to the accompaniment of immense smiles from the baby. All three observers had the impression that he was playing a kind of peek-a-boo with his mother, and deriving immense pleasure from it.

He goes on to report remarkable results over the next several months:

The baby’s development after these initial adventures remained more or less on a par with that of a sighted baby. Using the sonic guide the baby seemed able to identify a favorite toy without touching it. He began two-handed reaches around 6 months of age. By 8 months the baby would search for an object that had been hidden behind another object. . . . None of these behavior patterns is normally seen in congenitally blind babies.

You may wonder why you haven’t heard of these being used before. Just as we saw earlier, the technology was bulky and heavy—not the kind of thing you could reasonably grow up using—while the resolution was fairly low. Further, the ultrasonic glasses generally met with less success in adults than in children31—an issue we’ll return to in chapter 9. So while the concept of sensory substitution took root, it had to wait for the right combination of factors to thrive.


In the early 1980s, a Dutch physicist named Peter Meijer picked up the baton of thinking about the ears as a means to transmit visual information. Instead of using echolocation, he wondered if he could take a video feed and convert it into sound.

He had seen Bach-y-Rita’s conversion of a video feed into touch, but he suspected that the ears might have a greater capacity to soak in information. The downside of going for the ears was that the conversion from video to sound was going to be less intuitive. In Bach-y-Rita’s dental chair, the shape of a circle, face, or person could be pressed directly against the skin. But how does one convert hundreds of pixels of video into sound?

By 1991, Meijer had developed a version on a desktop computer, and by 1999 it was portable, worn as camera-mounted glasses with a computer clipped to the belt. He called his system the vOICe (where “OIC” stands for “Oh, I See”).32 The algorithm manipulates sound along three dimensions: the height of an object is represented by the frequency of the sound, the horizontal position is represented by time via a panning of the stereo input (imagine sound moving in the ears from left to right, the way you scan a scene with your eyes), and the brightness of an object is represented by volume. Visual information could be captured for a grayscale image of about sixty by sixty pixels.33
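To get a feel for how a sixty-by-sixty image can be squeezed into a sweep of sound, here is a minimal mono sketch of that kind of column-by-column scan. The sample rate, frequency range, and sweep duration are assumptions, and stereo panning is omitted for brevity; Meijer’s published algorithm differs in detail.

```python
import numpy as np

def image_to_soundscape(img: np.ndarray, sr: int = 22050, sweep_s: float = 1.0) -> np.ndarray:
    """Convert a grayscale image (rows x cols, values 0..1) into a mono soundscape.
    Columns are played left to right in time; each row maps to a sine frequency
    (top of the image = highest pitch); pixel brightness controls amplitude."""
    rows, cols = img.shape
    samples_per_col = int(sr * sweep_s / cols)
    freqs = np.linspace(3000, 300, rows)          # row 0 (top) -> highest pitch
    t = np.arange(samples_per_col) / sr
    out = []
    for c in range(cols):
        column = sum(img[r, c] * np.sin(2 * np.pi * freqs[r] * t) for r in range(rows))
        out.append(column / max(rows, 1))         # keep amplitude in a sane range
    return np.concatenate(out)

audio = image_to_soundscape(np.random.rand(60, 60))   # roughly one second of sound
print(audio.shape)
```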

Try to imagine the experience of using these glasses. At first, everything sounds like a cacophony. As one moves around the environment, pitches are buzzing and whining in an alien and useless manner. After a while, one gets a sense of how to use the sounds to navigate around. At this stage it is a cognitive exercise: one is laboriously translating the pitches into something that can be acted upon.

The important part comes a little later. After some weeks or months, blind users begin performing well.34 But not just because they have memorized the translation. Instead, they are, in some sense, seeing. In a strange, low-resolution way, they are experiencing vision.35 One of the vOICe users, who went blind after twenty years of vision, had this to say about her experience of using the device:

You can develop a sense of the soundscapes within two to three weeks. Within three months or so you should start seeing the flashes of your environment where you’d be able to identify things just by looking at them. . . . It is sight. I know what sight looks like. I remember it.36

Rigorous training is key. Just as with cochlear implants, it can take many months of using these technologies before the brain starts to make sense of the signals. By that point, the changes are measurable in brain imaging. A particular region of the brain (the lateral occipital cortex) normally responds to shape information, whether the shape is determined by sight or by touch. After users wear the glasses for several days, this brain region becomes activated by the soundscape.37 The improvements in a user’s performance are paralleled by the amount of cerebral reorganization.38

In other words, the brain figures out how to extract shape information from incoming signals, regardless of the path by which those signals get into the inner sanctum of the skull—whether by sight, touch, or sound. The details of the detectors don’t matter. All that matters is the information they carry.

By the early years of the twenty-first century, several laboratories began to take advantage of cell phones, developing apps to convert camera input into audio output. Blind people listen through their earbuds as they view the scene in front of them with the cell phone camera. For example, the vOICe can now be downloaded for free on phones around the world.

The vOICe is not the only visual-to-auditory substitution approach; recent years have seen a proliferation of these technologies. For example, the EyeMusic app uses musical pitches to represent the up-down location of pixels: the higher a pixel, the higher the note. Timing is exploited to represent the left-right pixel location: earlier notes are used for something on the left; later notes represent something on the right. Color is conveyed by different musical instruments: white (vocals), blue (trumpet), red (organ), green (reed), yellow (violin).39 Other groups are experimenting with alternative versions: for example, using magnification at the center of the scene, just like the human eye, or using simulated echolocation or distance-dependent hum volume modulation, or many other ideas.40
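A compressed gist of the EyeMusic-style mapping, with placeholder values for pitch steps and note timing (not the app’s actual parameters):

```python
# Illustrative sketch of the EyeMusic mapping (values are placeholders, not the app's).
INSTRUMENT_FOR_COLOR = {
    "white": "vocals", "blue": "trumpet", "red": "organ",
    "green": "reed", "yellow": "violin",
}

def note_for_pixel(row: int, col: int, color: str, n_rows: int = 24, col_ms: int = 40):
    """Higher pixels -> higher pitch; pixels farther right -> played later;
    color picks the instrument."""
    pitch_index = n_rows - 1 - row        # row 0 is the top of the image
    onset_ms = col * col_ms               # left-to-right timing
    return pitch_index, onset_ms, INSTRUMENT_FOR_COLOR[color]

print(note_for_pixel(row=2, col=5, color="blue"))   # (21, 200, 'trumpet')
```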

The ubiquity of smartphones has moved the world from bulky computers to colossal power in the back pocket. And this allows not only efficiency and speed but also a chance for sensory-substitution devices to gain global leverage, especially as 87 percent of visually impaired people live in developing countries.41 Inexpensive sensory-substitution apps can have a worldwide reach, because they involve no ongoing cost of production, physical dissemination, stock replenishment, or adverse medical reactions. In this way, a neurally inspired approach can be inexpensive and rapidly deployable and tackle global health challenges.


If it seems surprising that a blind person can come to “see” with her tongue or through cell phone earbuds, just remember how the blind come to read Braille. At first the experience involves mysterious bumps on the fingertips. But soon it becomes more than that: the brain moves beyond the details of the medium (the bumps) for a direct experience of the meaning. The Braille reader’s experience parallels yours as your eyes flow over this text: although these letters are arbitrary shapes, you surpass the details of the medium (the letters) for a direct experience of the meaning.

To the first-time wearer of the tongue grid or the sonic headphones, the data streaming in require translation: the signals generated by a visual scene (say, a dog entering the living room with a bone in her mouth) give little indication of what’s out there. It’s as though the nerves were passing messages in a foreign language. But with enough practice, the brain can learn to translate. And once it does, the understanding of the visual world becomes directly apparent.

GOOD VIBRATIONS

Given that 5 percent of the world has disabling hearing loss, researchers some years ago set out to ferret out its genetics.42 Unfortunately, the community has so far discovered more than 220 genes associated with deafness. For those hoping for a simple solution, this is a disappointment, but it’s no surprise. After all, the auditory system works as a symphony of many delicate pieces operating in concert. As with any complex system, there are hundreds of ways it can be disrupted. As soon as any part of the system goes wrong, the whole system suffers, and the result is lumped into the term “hearing loss.”

Many researchers are working to figure out how to repair those individual pieces and parts. But let’s ask the question from the livewiring point of view: How could the principles of sensory substitution help us solve the problem?

With this question in mind, my former graduate student Scott Novich and I set out to build sensory substitution for the deaf. We wanted something totally unobtrusive—so unobtrusive that no one would even know you had it. To that end, we assimilated several advances in high-performance computing into a sound-to-touch sensory-substitution device worn under the shirt. Our Neosensory Vest captures the sound around you and maps it onto vibratory motors on the skin. People can feel the sonic world around them.

If it sounds strange that this could work, just note that this is all your inner ear does: it breaks up sound into different frequencies (from low to high), and then those data get shipped off to the brain for interpretation. In essence, we’re just transferring the inner ear to the skin.


The Neosensory Vest. Sound is translated into patterns of vibration on the skin.

The skin is a mind-bogglingly sophisticated computational material, but we do not use it for much in modern life. It’s the kind of material you’d pay great sums for if it were synthesized in a Silicon Valley factory, but currently this material hides beneath your clothing, almost entirely unemployed.

However, you may wonder whether the skin has enough bandwidth to transmit all the information of sound. After all, the cochlea is an exquisitely specialized structure, a masterpiece of capturing and encoding sound. The skin, in contrast, is focused on other measures, and it has poor spatial resolution. Conveying an inner ear’s worth of information to the skin would require several hundred vibrotactile motors—too many to fit on a person. But by compressing the speech information, we can use fewer than thirty motors. How? Compression is all about extracting the important information into the smallest description. Think about chatting on your cell phone: you speak and the other person hears your voice. But the signal representing your voice is not what is getting directly transmitted. Instead, the phone digitally samples your speech (takes a momentary record of it) eight thousand times each second. Algorithms then summarize the important elements from those thousands of measures, and the compressed signal is what gets sent to the cell phone tower. Leveraging these kinds of compression techniques, the Vest captures sounds and “plays” a compressed representation with multiple motors on the skin.43
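A minimal sketch of that chain of steps, assuming a plain FFT band split and an illustrative two dozen motors (the Vest’s actual compression algorithm is more sophisticated and is not reproduced here):

```python
import numpy as np

N_MOTORS = 24        # illustrative; the text notes that fewer than thirty suffice
SAMPLE_RATE = 8000   # samples per second, as with a phone call
FRAME = 256          # audio samples per vibration update

def audio_frame_to_motors(frame: np.ndarray) -> np.ndarray:
    """Split one frame of microphone audio into N_MOTORS frequency bands
    and return a vibration intensity (0..1) for each motor."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, N_MOTORS)       # low to high frequency
    energy = np.array([b.mean() for b in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy      # normalize to 0..1

# Hypothetical usage: a 440 Hz tone mostly drives the low-frequency motors.
t = np.arange(FRAME) / SAMPLE_RATE
print(audio_frame_to_motors(np.sin(2 * np.pi * 440 * t)).round(2))
```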

Our first participant was a thirty-seven-year-old named Jonathan who was born profoundly deaf. We had Jonathan train with the Neosensory Vest for four days, two hours a day, learning from a set of thirty words. On the fifth day, Scott shields his mouth (so his lips can’t be read) and says the word “touch.” Jonathan feels the complicated pattern of vibrations on his torso. And then he writes the word “touch” on the dry-erase board. Scott now says a different word (“where”), and Jonathan writes it on the board. Jonathan is able to translate the complicated pattern of vibrations into an understanding of the word that’s spoken. He’s not doing the decoding consciously, because the patterns are too complicated; instead, his brain is unlocking the patterns. When we switched to a new set of words, his performance stayed high, indicating that he was not simply memorizing but learning how to hear. In other words, if you have normal hearing, I can say a new word to you (“schmegegge”) and you hear it perfectly well—not because you’ve memorized it, but because you’ve learned how to listen.

We’ve developed our technology into many different form factors, such as a chest strap for children. We have been testing it with a group of deaf children ranging from two to eight years old. Their parents send me video updates most days. At first it wasn’t clear whether anything was happening. But then we noticed that the children would stop and attend when someone pinged a key on the piano.

The children also began to vocalize more, because for the first time they are closing a loop: they make a noise and immediately register it as a sensory input. Although you don’t remember, this is how you trained to use your ears when you were a baby. You babbled, cooed, clapped your hands, banged the bars of your crib . . . and you got feedback into these strange sensors on the side of your head. That’s how you deciphered the signals coming in: by correlating your own actions with their consequences. So imagine wearing the chest strap yourself. You speak aloud “the quick brown fox,” and you feel it at the same time. Your brain learns to put the two together, understanding the strange vibratory language.44 As we’ll see a little later, the best way to predict the future is to create it.


Two children using the vibratory chest strap.

We have also created a wristband (called Buzz) that has only four motors. It’s lower resolution, but more practical for many people’s lives. One of our users, Philip, told us about his experience wearing Buzz at his work, where he accidentally left an air compressor running:

I tend to leave it running and walk around the room, and then my co-workers say, “Hey, you forgot: you left the air on.” But now . . . using Buzz, I feel that something is running and I see that it is the air compressor. And now I can remind them when they leave it running. They are always like, “Wait, how did you know?”

Philip reports he can tell when his dogs are barking, or the faucet is running, or the doorbell rings, or his wife calls his name (something she never used to do, but does routinely now). When I interviewed Philip after he’d worn the wristband for six months, I quizzed him carefully on his internal experience: Did it feel like buzzing on his wrist that he had to translate, or did it feel like direct perception? In other words, when a siren passed on the street, did he feel that there was a buzzing on his skin, which meant siren . . . or did it feel that there was an ambulance out there? He was very clear that it was the latter: “I perceive the sound in my head.” In the same way that you have an immediate experience when seeing an acrobat (rather than tallying the photons hitting your eyes), or smelling cinnamon (rather than consciously translating molecular combinations on your mucosal membranes), Philip is hearing the world.


The idea of converting touch into sound is not new. In 1923, Robert Gault, a psychologist at Northwestern University, heard about a deaf and blind ten-year-old girl who claimed to be able to feel sound through her fingertips, as Helen Keller had done. Skeptical, he ran experiments. He stopped up her ears and wrapped her head in a woolen blanket (and verified on his graduate student that this prevented the ability to hear). She put her finger against the diaphragm of a “portophone” (a device for carrying a voice), and Gault sat in a closet and spoke through it. Her only ability to understand what he was saying was from vibrations on her fingertip. He reports,

After each sentence or question was completed her blanket was raised and she repeated to the assistant what had been said with but a few unimportant variations. . . . I believe we have here a satisfactory demonstration that she interprets the human voice through vibrations against her fingers.

Gault mentions that a colleague had succeeded at communicating words through a thirteen-foot-long glass tube. A trained participant, with stopped-up ears, could put his palm against the end of the tube and identify words that were spoken into the other end. With these sorts of observations, researchers have attempted to make sound-to-touch devices, but in previous decades the machinery was too large and computationally weak to yield a practical device.
