CHAPTER ONE

The Symphony


For an eight-year-old named Jake, the rest of the world has disappeared as he sits quietly in a darkened room and stares intently at a computer screen where a yellow Pac-Man gobbles dots as it moves across a bright blue background. A soft, steady beeping is the only sound. Jake is not using a joystick or keyboard to control the cartoon character; instead, a single thin wire with a dime-sized, gold-plated cup is fastened to his scalp with conducting paste. The sensor picks up the boy's brain waves—his electroencephalogram (literally, an electrical record of the brain), or EEG—and as he changes his brain waves by relaxing or breathing deeply or paying closer attention, he also controls the speed of the Pac-Man.

This is more than a game for the boy. Jake was born in crisis: he arrived more than three months before his due date, in July of 1990, and weighed just over a pound. He required open-heart surgery when he was three days old and spent the first two months of his life in an intensive care unit for infants. He survived, but with serious damage to his brain. The most severe symptoms showed up at the age of four, when he entered his parents’ room one evening drooling and unable to speak. He went into a grand mal seizure and fell unconscious on the floor. After that, the seizures came frequently, usually at night as he was falling asleep. Antiseizure medications blunted the severity of the seizures but could not prevent their onset. His parents, Ray and Lisa, kept an overnight bag packed for frequent trips to the emergency room, where the slight boy received injections of Valium to arrest the seizures. The sight of the needle going into their son filled them with apprehension. He also had small absence, or petit mal, seizures throughout the day, when his mind would go elsewhere and he could neither hear nor speak for five or ten seconds. He was diagnosed with a speech problem and cerebral palsy, which diminished his fine- and gross-motor skills. Even at age seven, when I met him, he had not learned to tie his shoes, zip his zipper, or button his shirt. His learning disabilities were numerous and included attention deficit disorder and hyperactivity. He ground his teeth together constantly, a condition called bruxism. His sleep was troubled, and he often woke up ten or eleven times in the night. Despite this list of problems, there is a bright little boy inside Jake, with a wonderful and sometimes peculiar sense of humor.

At the age of five, Jake started taking two heavy-duty antiseizure medications: Depakote and Tegretol. Both are depressants, both control seizures, and both have serious and worrisome side effects. The boy seemed logy and often tired. “We felt Jake was losing his personality,” Lisa told me. “He was zoned out all the time.”

I have known Jake's family since he was born; the incredible story of his birth made him something of a celebrity in our town of Helena, Montana. A local insurance company put his smiling baby picture up on billboards with the line “Baby Jake will always be special to Managed Care Montana,” and talked about how its coverage had paid almost all of the approximately $350,000 in medical bills. On assignment in Santa Fe for a story about different technologies designed to enhance brain performance, I had heard about neurofeedback and the fact that its first and most effective use was with epilepsy. (Neurofeedback works on the same principle as other kinds of biofeedback except that it provides information about the brain, hence the prefix neuro.) At a Christmas party, I mentioned it to Jake's parents, who were eager to investigate an alternative to drugs. They researched the therapy on the Internet, made a series of appointments over a week, and drove three hundred miles to the nearest neurofeedback site in Jackson, Wyoming. They turned the week into a vacation, swimming in the motel pool, hiking in the Grand Tetons, watching elk at a wildlife refuge, and taking Jake to the local hospital for two one-hour “brain training” sessions per day on the computerized EEG biofeedback program.

Jake's brain has places where the electrical activity is not as stable as it should be. Research shows that the brain's electrical signals are subject to change and that people can be taught how to change them. All neurofeedback does is guide the client to a specific frequency range and help him or her stay there. The brain does the rest. A technician has set up the computer running Jake's Pac-Man game so that when Jake spends time in those hard-to-reach frequencies, the Pac-Man gobbles dots and beeps like crazy. When he is not in those frequencies, the Pac-Man stops gobbling and turns black. Jake knows nothing about brain waves or his EEG; he simply knows that when the Pac-Man is gobbling and beeping, he is winning, and so he has learned how to adjust his brain waves to make the Pac-Man gobble dots all the time. It was easy: he caught on in just one session. As he spends more time in the frequencies his brain has trouble generating, his brain learns to function there on its own. This exercise makes the brain more stable.
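
The reward rule at the heart of a session like Jake's can be sketched in a few lines of code. The fragment below is a minimal, hypothetical illustration, not the actual Neurocybernetics software: it assumes a short window of digitized EEG samples, estimates how much of the signal's power falls in an arbitrary 12-to-15-hertz training band, and "rewards" the player whenever that share crosses an assumed threshold.

```python
import numpy as np

def band_power(eeg_window, fs, low_hz, high_hz):
    """Estimate signal power in a frequency band with a simple FFT periodogram."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].sum()

def feedback_step(eeg_window, fs, reward_threshold=0.4):
    """One feedback cycle: 'reward' (Pac-Man gobbles, the tone beeps) when the
    training band holds a large enough share of the total EEG power."""
    target = band_power(eeg_window, fs, 12.0, 15.0)  # hypothetical training band
    total = band_power(eeg_window, fs, 1.0, 40.0)    # overall EEG range
    ratio = target / total if total > 0 else 0.0
    return ratio >= reward_threshold                 # True -> gobble and beep
```

In a working system, some such decision would be made continuously, many times a minute, so the game responds almost as quickly as the brain waves themselves shift.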

It didn't take long for changes to begin to appear in Jake. “It took care of the teeth grinding within two sessions,” Lisa told me when they returned from Jackson. “It took care of the sleep problems immediately.” As the sessions continued, Jake became more settled, more centered. “We could carry on a conversation in the car on the way home for quite a while, the first time ever that we could carry on a two-way conversation for any length of time. His fine-motor skills improved, and he wanted to cut and draw and zip and button. He could never do any of that,” Lisa continued. Unprompted, friends and relatives remarked that Jake seemed calmer and more centered. Later, Jake's parents repeated the protocol for another week. Again they noticed dramatic improvement. Jake went to see his pediatric neurologist, who had been skeptical at the outset, though he had signed off on the treatment. He examined the boy alone for twenty minutes. When he was done, he told Lisa and Ray that the treatment had indeed been effective. “Jake seemed more focused,” Dr. Don Wight, the neurologist, told me later. “He could do things cognitively he couldn't do before the training. There was a qualitative and quantitative improvement in the way he was functioning. It was very real.”

Jake's parents bought one of the $10,000 neurofeedback units from Neurocybernetics, a California biofeedback manufacturer, and have made it available to the community. Dr. Wight has been trained in the technique and has incorporated it into his practice. Jake has regular sessions with the local neurofeedback technician, Bernadette Pedersen, and continues to improve. In 1999, he received a three-year evaluation for his individualized education program in the public schools. “He had some phenomenal gains,” said his mother. “He was an emergent reader going into second grade and after a year of steady training, he was reading at a fourth-grade level. One of the teachers called Jake's rate of improvement explosive, and I think it was.”

Had Jake been born twenty years earlier, he would have had to live with his problems. But in the last decade this new treatment—called, variously, neurofeedback, neurotherapy, or EEG biofeedback—has dramatically changed the prognosis for Jake and thousands of other people. It is being used to treat not only epilepsy and learning disabilities, but also a long list of other problems that defy conventional treatment: cocaine, alcohol, and other addictions; vegetative states; serious and mild head injuries; autism; fetal alcohol syndrome; discomfort from menopause and premenstrual syndrome; chronic pain; the symptoms of multiple sclerosis and Parkinson's disease; stroke; post-traumatic stress disorder; wild hyperactivity; Tourette's syndrome; depression; cerebral palsy; and much more.

All of this raises huge questions. What is neurofeedback? Where did it come from? What are brain waves? How can one tool treat so many disparate problems? How can something that works so well, and seems to perform miracles, not be in widespread use? Answers to those questions begin with an understanding of the three-pound organ known as the brain.

The history of efforts to unravel the source of human consciousness goes back thousands of years. Hundreds of ancient skulls with carefully drilled holes have been found in a variety of places around the world. Anthropologists have documented a belief by some native peoples that trepanation, or drilling a hole in the skull, combined with prayer and ritual, could relieve certain physical problems, perhaps epilepsy. At one archaeological site in France, one hundred and twenty skulls were found, forty of them with human-made apertures. Some people apparently survived the “operations,” for new bone grew at the edges of some of the holes—which ranged from the size of a dime to nearly half the skull. In Peru, anthropologists examined well-preserved, three-thousand-year-old mummies found near Cuzco and found that 40 percent of them had trepanned skulls. Stanley Finger, a neurologist who has looked at the finds, has estimated that there was a 65 percent survival rate. Whether the holes were made in a ritual or a de facto “medical” operation is unknown, but the mummies provide the earliest known record of making a connection between a person's head and his or her behavior.

In Egypt, a painted papyrus illustrates that three thousand years ago Egyptians recognized that a blow to the head could impair one's vision or coordination. A blow to the left side of the head, according to the papyrus, affected the right side of the body, while a blow to the right side of the head affected the body's left side, an observation that later proved accurate. It was the heart, however, that the Egyptians revered as the dwelling place of the human soul. (For most of human history, in fact, a “cardiocentric” view has dominated.) After death, the Egyptians, practitioners of an elaborate funerary ritual, removed all of the organs from the deceased and stored them in specially made ritual jars, except for one: the brain was simply pulled through the nose and discarded. The Aztecs also believed the heart was the superior organ and that it governed feeling and emotion, though they believed the brain was important for remembering and for knowing.

Hippocrates, writing between 460 and 379 B.C., may have been the first persuasive proponent of the idea that the brain is the source of human intelligence. Building on the work of two of his teachers, Alcmaeon and Anaxagoras, he had the prescient idea that epilepsy was the result of a disturbance in the brain. He believed that the gray matter was the source of many other things as well:

Men ought to know that from nothing else but the brain come joys, delights, laughter and sports and sorrows and griefs, despondency, and lamentations. And by this, in an especial manner, we acquire wisdom and knowledge, and see and hear and know what are foul and what are fair, what are bad and what are good, what are sweet, and what are unsavory.... And by the same organ we become mad and delirious, and fears and terrors assail us.... All these things we endure from the brain, when it is not healthy.... In these ways I am of the opinion that the brain exercises the greatest power in the man. This is the interpreter to us of those things which emanate from the air, when the brain happens to be in a sound state.

Hippocrates’ view, however, was an anomaly, too far ahead of its time to be taken seriously. Aristotle, who came along several decades later, was a leading proponent of the heart-centered human, primarily because he had seen chickens running around after being decapitated. He had also touched both a human heart and a human brain shortly after the death of their owner. The heart was warm to the touch, while the brain was cool and moist, and so he reasoned that the brain was a kind of regulator that “cooled the passions and the spirit” and “the heat and seething” that originated in the heart. Aristotle was so well respected and influential that this view reigned unchallenged for centuries.

Galen, a physician to Roman gladiators and emperors in the second century, played a major role in the evolution of early thought about the brain. He believed there were four substances or “corporal humors”: yellow bile, black bile, phlegm, and blood, which combined in a person's heart with “pneuma,” a spiritlike substance. This solution traveled to the brain through a mesh of very thin tubes—which he called the rete mirabile, or miraculous network—and was then distributed to nerves throughout the body to produce behavior. Illness came from an imbalance in the fluids. Too much black bile, for example, led to depression and melancholy, while too much blood created a hot temper. The vital part of the brain, Galen claimed, was its ventricles: three hollow structures in the center of the organ that he believed contained this mystical animating substance. The fluid that created intelligence was found in the front ventricle, knowledge or mind in the middle ventricle, and memory in the rear chamber. (Ventricles do in fact exist; they are reservoirs for cerebrospinal fluid.) The rest of the brain, including the gray matter, was thought not to be critical. Adopted by the all-powerful Roman Catholic Church as the truth, Galen's “cell doctrine” reigned for fifteen hundred years, largely because, from the fourth through the fourteenth century, the church banned study of the human body. The dissection of human cadavers was penalized by torture or death, and the evolution of neuroscience virtually ground to a halt.

Then, in 1347, the Black Death seized Europe and killed a third of the population. The church's theories of medicine were proven woefully inadequate, and as a result the monopoly the church held on ideas about humans and their place in the world was broken. The Renaissance blossomed soon after, spurring a new burst of thinking about the human condition. By the sixteenth century, researchers were dissecting cadavers.

An anatomist named Vesalius may have been one of the first to question the cell doctrine. The ventricles, he reasoned, were much the same in animals as in humans, and since animals were not capable of thought, how could the ventricles be the source of thought? The difference between humans and animals, he believed, was a larger, more developed brain, and the true source of thought probably lay outside the ventricles. In the seventeenth century, Thomas Willis, an English physician, published a thorough text on the anatomy of the brain, in which he claimed that the brain itself, not the ventricles, controlled memory and volition. His work sparked a new way of thinking and would later convince researchers to abandon the cell doctrine.

Yet the cell doctrine survived for years after Willis's findings. René Descartes, the influential seventeenth-century French philosopher, is one of the most dominant early figures in the study of human behavior, and his influence still deeply shapes beliefs about the brain and the body, and even about reality as we know it. Descartes promoted the concept of dualism, the idea that mind and body are separate. He claimed that the ability to think was a gift from the Creator and the supreme aspect of human existence, while the body was separate and subservient to the mind, little more than a biological machine. His ideas were embraced by the church, and with them Descartes laid the foundation for the next three hundred years of reductionism and the modern scientific method, which still dominates Western thinking. In this view, nature is no more than the sum of its parts. Devoid of a soul, the human body and brain could, on death, be freely dissected and reduced to their component parts.

But the philosopher's work was not finished. If mind and body are separate, how do the two interact in humans? First, he said, involuntary movements were reflexive, an automatic response. Voluntary movements were a different matter. The spiritual belief of the day held that the body was animalistic, an unfit vessel for something so divinely elegant as the human spirit; so how did a godly spirit live in a body and run the show without becoming contaminated? And where? Descartes solved the conundrum of contamination neatly by claiming that the spirit entered the body, and commanded its network of tubes and fluid, from a single point: the tiny pineal gland, an organ near the center of the brain (named for its resemblance to a pine cone). Located there, the divine mind was almost completely untainted by the body; on death it simply floated out of the human “machine” and left it behind. Descartes chose the pineal because it occupied a central place in the brain, because it was near the senses, and because it was surrounded by cerebrospinal fluid, then still believed to be the liquid version of the animal spirits that allowed the body to move. Descartes's interpretation was the first attempt to assign a specific task to a specific part of the brain.

One of the first tools to come along to aid in reducing the universe to its component parts was the microscope. Chemical dyes, created for the textile industry, were used to stain slices of brain tissue for study under the newly invented instrument. The approach apparently didn't work well at first. Anton van Leeuwenhoek, a pioneer of the microscope, looked at the sperm cells of dogs and cats and claimed that he saw microscopic dogs and cats, which he named “animalcules.” It was a shared hallucination, apparently, for it was confirmed by other researchers. Improvements in the technology later dispelled that notion.

The microscope lent itself to the next evolutionary step in thinking about the brain, the school of localization. Researchers looking at cross sections of brain tissue noticed that different parts of the brain had different types and numbers of cells and asked whether the differences in structure, the “cytoarchitecture,” pointed to a difference in function. Explorers of localization of function thought they did. Among the pioneers were Franz Josef Gall and Johann Spurzheim, who, in the late eighteenth and early nineteenth centuries, hypothesized that every kind of behavior was represented in a specific region of the brain and that the organ was the source of the mind. They were right about that much, and far ahead of their time, but their work led them off into other, more fanciful realms. They hypothesized that a person's personality and mental traits depended on whether a particular part of the cortex was over- or underdeveloped. If one was lazy, the portion of the brain that governed “industriousness and responsibility” was weak, while the portion of the brain that governed mathematics was highly developed in people who were good with numbers. They went even further along this line and developed a “science” of phrenology—and as a result lost scientific credibility. Phrenologists claimed that differences in development of various regions of the brain caused bumps in the skull and that a personality assessment could be done by means of something called cranioscopy: feeling the topography of a person's head and comparing it to an interpretive chart of what each bump meant. It was the rage in the elite social circles of the time to have one's bumps read and one's character assessed.

Though Gall and Spurzheim were wrong about phrenology, they were right about functions being localized in the cortex. Their work ushered in the beginning of thought on how the physical attributes of the brain affect who we are.

Localization gained substantial scientific support in 1861, as a result of the research of a respected French physician named Paul Broca. Dr. Broca worked with a stroke patient who seemed to hear clearly but could answer any question asked with only a single word: “tan.” After the patient died, Broca removed his brain and found a large lesion on the part of the organ called the posterior frontal cortex, on the left side of the head near the temple. Broca was fascinated, and a search turned up eight other patients who had similar language difficulties in the wake of a stroke, a handicap called aphasia. Seven were found to have similar lesions. Broca hypothesized that this small region of the left brain—now called Broca's area—enables humans to speak. His research rocked the medical world and kicked off a search for functions across the gray, convoluted landscape of the brain.

Not long after, a German neurologist named Carl Wernicke discovered another area of the brain involved in speech, farther to the rear of the brain than Broca's area. Wernicke also came up with a model of how speech is assembled by networks in the brain, a model that still holds up and provides some insight into the complex nature of brain function. A sense of what a person wants to say arises in the form of an electrical pattern in Wernicke's area and then travels to Broca's area, where a vocalization program is formed. That program is then communicated to the motor cortex, which activates the mouth, lip, tongue, and larynx muscles to create speech.

In 1848, decades before the discoveries of Broca and Wernicke, a metal rod in the hands of a twenty-five-year-old Vermont railroad construction foreman named Phineas Gage set off a blasting charge and added a new dimension to the concept of localization. Gage was tamping blasting powder into a hole in some rock when a spark from metal striking stone ignited the explosive and turned the three-and-a-half-foot rod into a missile that tore through Gage's left front cheek and his frontal lobe. It kept moving and landed a hundred feet away. Despite the trauma, and profuse bleeding from the wound, Gage was sitting up within minutes of the accident and was fully conscious, though dazed. After the wound healed, he was physically fine; it was his personality that suffered. Instead of the polite, shrewd, and level-headed guy he'd been before the accident—described by his bosses as “the most efficient and capable man”—he'd become a foul-mouthed lout who swore all the time and couldn't hold a job. He eventually became a freak show attraction, showing off his wound and the tamping rod that created it. His experience led to the unheard-of notion that very tangible brain cells in the frontal cortex somehow govern something as intangible as the human personality. It afforded a glimpse at an answer to a seminal question: How much of the human mind is dependent on the tissue and blood in the brain? Apparently a great deal.

Around the same time there was a development that would play into a different kind of understanding of the human brain, one concerned with the electrical nature of human tissue. The first notion that human nerve impulses are somehow electrical in nature goes all the way back to 1791, when Luigi Galvani, an Italian researcher, published a paper on the subject. Using a hand-cranked generator to send a mild current through frogs’ legs, he found that the current made the leg muscles contract, and he proposed an inherent electricity, an animating principle, that must exist in all living organisms. His work was not conclusive, but it began an important line of inquiry. The first conclusive evidence of electrical nerve impulses resulted from the measurement of a nerve impulse—also in a frog's leg—by a German physiologist named Emil du Bois-Reymond in the 1850s.

An English physician named Richard Caton was the first to discover that the brain generated electricity, a finding he made with a device called a reflecting galvanometer in the 1870s and 1880s. The galvanometer consisted of a wire and coil that vibrated when small amounts of electricity were detected. The instrument Caton used had a small mirror attached to the coils, and a bright oxy-hydrogen lamp cast a narrow beam onto the mirror, which reflected the beam onto an eight-foot scale painted on a wall in a darkened theater. When a signal was stronger, the light shone higher on the wall. Caton touched the electrodes from his instrument to the exposed brains of rabbits and monkeys. When an animal moved or chewed food or had a light shone in its eye, there was a corresponding electrical spike. Thoughts, Caton noticed, also generated a charge. He hooked up a monkey and recorded the current associated with chewing. “If I showed the monkey a raisin but did not give it, a slight negative variation of the current occurred,” he wrote. Almost offhandedly, Caton also detected the weak flow of current across an unopened skull: it was the first account of what would become known as the brain's electrical signature, the electroencephalogram.

Before researchers learned how to measure the signals the brain emits, they learned how to put electricity into it, with great effect, and stimulation became an indispensable technique for mapping brain function. Two German physicians, Gustav Fritsch and Eduard Hitzig, working at a military hospital, discovered they could electrically stimulate the brains of patients who had parts of their skull blown away in battle. Using a galvanic battery that delivered a tiny dose of current, Hitzig wired patches of exposed brain and found that stimulating the back of the brain, the occipital lobe, caused a patient's eyes to move involuntarily. The two men later experimented on live dogs to see which part of the brain corresponded to which voluntary motor controls. Stimulation was a vital tool that expanded understanding of the brain and quickly became more sophisticated. Two Englishmen, Charles Beevor and Victor Horsley, worked with an orangutan. They divided the brain into a grid of two-millimeter sections and numbered each square. Then they methodically stimulated parts of the animal's brain and created a detailed map of function—squares 95, 96, 121, and 127, for example, caused elevation of the animal's lip on the upper right side. Electrical stimulation created an intricate picture of the brain that remains an indispensable part of neuroscience to this day.

In the 1880s an Italian anatomist named Camillo Golgi developed a new stain that made nerve cells much easier to study under the microscope—a seminal development, for it had been impossible until then to highlight an individual microscopic cell without killing it. Using the new stain, a Spanish anatomist named Santiago Ramon y Cajal turned the nascent world of neuroscience on its ear: he discovered the brain cell, the neuron. Until then the human brain had been thought of as just a blob. He also described the basics of how the cells pass on impulses, by reaching out to the body or dendrite of an adjacent cell with a kind of cable called an axon. He went on to make several other major discoveries about brain cells, including the fact that nerve cells morph, or change. As a person studies a subject or learns to play an instrument, nerve cells along a pathway involved in the skill make more connections with other cells. In 1906 he shared the Nobel Prize with Golgi.

Hans Berger was nineteen, and serving in the German Army, when the horse he was riding slipped down a muddy embankment and he was suddenly thrust into the path of an oncoming horse-drawn artillery unit. For a moment he thought surely his end had come. While the young soldier was shaken, he escaped injury. When he returned to his barracks, he found a telegram from his father, inquiring about his well-being, for his sister had had a disturbing premonition that he had been gravely injured. “This is a case of spontaneous telepathy in which at a time of mortal danger, and as I contemplated certain death, I transmitted my thoughts, while my sister, who was particularly close to me, acted as the receiver,” the square-jawed, mustachioed Berger wrote near the end of his life in 1940. The experience was a defining moment for him. When he returned to college, he changed his major from astronomy to medicine. He eventually became a psychiatrist and continued to explore the possible physiological explanations for that premonition.

Berger's research led him to Caton's work, which he expanded on. At the end of the day, after his psychiatric work had been completed, the shy, fastidious man, who kept a schedule of his activities down to the minute, would retire to a laboratory and work secretly for a few hours with a primitive string galvanometer. He also experimented on patients who had had a piece of their skull removed for medical reasons, which made it much easier to pick up clear signals. Even though the brain is a constant electrical storm, its electrical potential is only about fifty millionths of a volt, a tenth of the voltage measured from the heart, and it is difficult to detect. Like Caton, Berger made a beam of light vibrate with the electrical signal he detected, though instead of casting the beam on the wall, he directed it onto a moving piece of photographic paper, which created the wavy graph of the brain wave. In 1924 Berger recorded signals from an intact skull—the head of his fifteen-year-old son, Klaus. Using electrodes of lead, zinc, platinum, and many other materials, Berger made seventy-three EEG recordings from Klaus's head, the first published human electroencephalograms. Unsure of how precise his measurements were, however, he waited five years before he reported his results in a 1929 paper called “On the Electroencephalogram in Man.” The first frequency he encountered was in the 10-hertz range, which at first was called the Berger rhythm.

Berger's discovery caused a minor stir in Germany, and the Carl Zeiss Foundation gave him a technical assistant and state-of-the-art equipment to replace the clunker he was using. Berger went to town, hooking up all kinds of people to see what electrical activity he might find going on inside their skulls. He hooked up his fourteen-year-old daughter and asked her to divide 196 by 7. The EEG clearly showed when the mental activity began and ended. Berger hooked up infants and found that there was no EEG until they were at least two months old. This was evidence, he said, that the brain at birth is incomplete. Someone suggested he hook up a dying man, but he rejected that as immoral. He did wire up a dying dog, however, and watched until the EEG flat-lined. He did EEGs on schizophrenics and psychotics and was disappointed to find that their EEGs were normal—their craziness did not show up in unusual electrical activity that he could measure.

Despite all of the research that has since been done on the brain, no one is sure exactly what functions the electrical activity represents. One scientist compared measuring brain waves to listening to the sound of a factory through a wall. What is known is that the human brain normally operates within a range of about 1 to 40 hertz. (Hertz are cycles per second; the higher the frequency, the faster the brain wave.) The frequencies are clumped into categories that denote their characteristics. The 1-to-4-hertz range is called delta and occurs during sleep and some comas. Theta is the 4-to-8-hertz range; called the hypnagogic state, it is a kind of consciousness twilight that occurs between being deeply relaxed and sleeping. Berger's 10-hertz discovery falls within what he called the alpha range, a relaxed but awake state, from 8 to 12 hertz. He also named the beta range, from 13 to around 30 hertz, which is the range of normal waking consciousness. The Greek letters are a holdover from the early days of EEG; many researchers say the Greek alphabet categories are too broad to be meaningful and that there is no reason to keep the names, and they argue for simply referring to states by their numbers. The designations persist, however.
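
Using the round-number boundaries just listed (which vary somewhat from lab to lab), the naming convention amounts to little more than a lookup table; here is a minimal sketch:

```python
# Conventional EEG band labels, using the approximate boundaries given in the text.
BANDS = [
    ("delta", 1, 4),    # deep sleep, some comas
    ("theta", 4, 8),    # drowsy, hypnagogic twilight
    ("alpha", 8, 12),   # relaxed but awake (Berger's 10-hertz rhythm)
    ("beta", 13, 30),   # normal waking consciousness
]

def band_name(freq_hz):
    """Return the Greek-letter label for a frequency, or just the number itself."""
    for name, low, high in BANDS:
        if low <= freq_hz < high:
            return name
    return f"{freq_hz} hertz"  # outside the named ranges; some argue for numbers only

print(band_name(10))  # -> "alpha"
```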

Berger's ideas were largely ignored until 1934, when two well-known British physiologists, Edgar Adrian and B.H.C. Matthews, announced that they had replicated his measurements of electrical waves. Only six years later, Berger's life ended in tragedy. In 1938, as he made his psychiatric rounds, he received a phone call from a Nazi official ordering him to fire his Jewish staff members. He refused and was ordered to “retire.” In 1940, deeply depressed, Berger committed suicide.

The 1930s were the halcyon days for another area of brain research, one that's been largely forgotten—electrical stimulation of the brain, or ESB. Passing small amounts of current directly into the brain shows the role that frequency plays in the operation of the body's “master control panel.” Few modern books about the brain say much about the role of frequency, because modern neuroscience is concerned almost entirely with the cellular level, with an emphasis on drugs that alter the brain's chemical flows.

In 1934, two Yale biologists, E. Leon Chaffee and Richard U. Light, published a paper on an experiment in which they implanted electrodes into different areas of the brains of several experimental monkeys that were kept in a cage surrounded by three large electrical coils. When the power was turned on and a current was sent through the coils, it activated the electrodes implanted in the brains of the monkeys and stimulated very specific parts of their brains. One of the monkeys had the wire implanted in the part of the motor cortex that governs the arm, and when the switch was thrown, the stimulation made the animal swing its arm wildly. In another monkey a different placement in the motor cortex caused “a series of chewing, tongue wagging motions.”

Walter Rudolf Hess, a Swiss ophthalmologist, was an important figure in the use of ESB to map the brain. He anesthetized cats and placed implants deep into their diencephalon, an area of the brain that regulates both the involuntary (autonomic) and voluntary nervous systems; then he studied their response to small doses of current there. At one particular site, “even a formerly good natured cat turns bad tempered,” he wrote. “It starts to spit and, when approached, launches a well-aimed attack. As the pupils simultaneously dilate widely and the hair bristles, a picture develops such as is shown by the cat if a dog attacks it while it cannot escape.” With a well-aimed dose of current at another site, he caused them to evacuate their bladders and put them to sleep. He also destroyed parts of their brains with a pinpoint of electricity to see which functions were affected. Then the cats were euthanized, the brains were thinly sliced, and, under a microscope, Hess studied precisely what parts of the brain had been destroyed. His work yielded valuable insight into the brain and built substantially on Wernicke's ideas—that many functions are not the product of a single part of the brain, but instead are governed by a network of sites within the brain, communicating in split-second electrical impulses. Hess also discovered that a tiny region deep in the center of the brain called the hypothalamus (part of the diencephalon) is an especially critical unit, governing essential regulatory functions such as body temperature and hunger and, in association with the pituitary gland, the endocrine system, which governs the production and circulation of the body's vital cocktail of chemicals.

Another pioneer in brain mapping, of particular interest to the evolution of neurofeedback, is Wilder Penfield, a neurosurgeon who was educated at Princeton, Oxford, and Johns Hopkins Medical School before going to Canada to be director of the Montreal Neurological Institute in 1928. Penfield's surgical specialty was the removal of brain lesions, or damaged tissue, that caused severe cases of epilepsy. To carry out such a procedure without destroying important parts of the brain, he needed a detailed map, and the only way to get that was to work with conscious patients. Penfield constructed a small tent that covered a patient's head as the patient lay on his or her side. Part of the head was anesthetized with Novocaine, and a tiny trapdoor was sawed into the bone of the skull, which Penfield swung open to expose the surface of the cortex, the top layer of the brain.

Among other things, Penfield mapped a two-inch-wide strip across the top of the skull, running roughly from the tip of one ear to the tip of the other, which he named the sensory and motor homunculus. Homunculus is Latin for “little man,” and the strip was so named because all the parts of the body whose senses and movement it governs are represented there—the human being in miniature. As a patient lay on the table, Penfield prodded the homunculus with tiny bits of current and carefully noted the response. When he inserted an electrode in the area that governs speech in one patient, for example, the man emitted a vowel cry as long as the electrode remained in place; when it was removed, the cry stopped. The electrode was placed in the brain again, in the same spot, and the vowel cry began again; the man was asked to stop making the sound, and he could not. Thousands of sites were mapped in the cortex in hundreds of patients. Penfield had an artist draw the homunculus as a little man, making the size of each body part proportional to the size of the area in the cortex that governs its use. Because humans are verbal, the little guy's lips, tongue, and pharynx are very large; his hands are also disproportionately large.

Enthralled by what he was discovering, Penfield continued to probe with the electrodes. Stimulating the part of the cortex that governed hearing, for example, elicited a variety of reports of sounds by patients: “a ringing sound like a doorbell,” “a rushing sound like a bird flying,” and “I hear a boom or something.” Current in the part of the cortex where memories are stored caused one woman to hear familiar music, while another thought she saw people entering the operating room with snow on their clothes. Yet another very clearly heard her dead son's voice speaking loudly. All of these experiences were much more real than a mere memory, something akin to a very vivid dream.

In the 1950s, an ESB experimenter named James J. Olds stumbled onto the pleasure center in the brain of a laboratory rat while testing the animal with an electrode implant. Intrigued, he placed the rat in a Skinner box, a specially designed container for animal experiments with a lever in it. Once a rat learns that pressing the lever delivers a food pellet, the rat is operantly conditioned. But Olds didn't reward the animal with food; he wanted to see if the animal would press the lever for a shot of electricity to the brain. First, Olds placed the electrode in the sensory motor region, and the rat responded by pressing the lever ten to twenty-five times per hour, which is not much more than the rodent might do randomly. As the wires were moved closer to the areas in the midbrain that govern pleasure, including sex, and areas concerned with digestion and excretion, the frequency of the lever pressing skyrocketed. By the time the wires were deep in the “pleasure zone,” the animal was pressing the lever a whopping five thousand times an hour. Even food-deprived rats completely disregarded a bowl of their favorite chow in favor of a shot to the zone. Norwegian researchers Carl W. Sem-Jacobsen and Arne Torkildsen at the Gaustad Mental Hospital in Oslo replicated Olds's rat work in humans. They implanted electrodes in the pleasure areas of patients' brains and handed the button to the patients. Some subjects actually stimulated themselves into convulsions.

The most flamboyant of the ESB researchers, and a visionary of sorts, was a Spaniard by the name of José Delgado, a Ph.D. psychologist who conducted research on animals at Yale in the 1950s and 1960s. In one experiment Delgado chose a cat for its friendliness and placed an electrode in its amygdala, a small mass in the brain that governs fear. When the juice was on, the cat withdrew from humans and hissed and spat and threatened; when the current was turned off, the cat became friendly again. Monkeys stimulated in certain ways became much more aggressive and territorial. In one monkey, famous for biting its keepers, Delgado inserted a probe into the caudate nucleus. When the switch was on, the monkey was calm, and Delgado could insert his finger in the animal's mouth. When the switch was off, the animal returned to its nasty self and wouldn't let anyone near. The Spaniard's most famous experiment involved the use of what he called a “stimoceiver,” a radio-controlled unit that delivered current to extremely small solid-state receivers implanted in an animal's brain. At one point, Delgado climbed into a bullring in Spain with his stimoceiver and a red cape. As the bull charged, Delgado flipped the switch, and the fierce animal, its implanted electrode activated, stopped dead in its tracks and quietly trotted away.

Delgado believed deeply in the power of well-placed electrodes, and in his book Physical Control of the Mind: Toward a Psychocivilized Society, he laid out a plan for a utopian civilization in which people would wear electrical devices to end depression, pain, anxiety, and aggression and to stimulate pleasure, the will, and the intellect. The times, he said, demanded such electrical evolution. “The contrast between the fast pace of technological evolution and our limited advances in the understanding and control of human behavior is creating a growing danger,” he wrote. Delgado called his Utopia an electroligarchy.

Delgado also foresaw a pacemaker that could be used to treat brain dysfunction—an invention that arrived not too far in the future. In the 1950s and 1960s, Robert G. Heath and Walter A. Mickle, working at Tulane University in New Orleans and treating schizophrenics, were desperately looking for ways to alleviate their patients' illness. They implanted fifty-two patients with electrodes—not only in the cortex, as Penfield did, but also in the subcortical levels, the deep part of the brain where emotional regulation is centered. When that part of the brain was stimulated, patients reported a general feeling of well-being, and the symptoms of their illness decreased. Intractable pain disappeared, and people talked faster. Stimulation of the amygdala caused a feeling of rage or fear; stimulation of the hypothalamus produced feelings of anxiety and discomfort, and patients complained of a wildly pounding heart.

Heath found that a tiny electrode, inserted in the cerebellum in the back of the brain, and powered by a matchbook-sized battery implanted in the abdomen, could control serious mental disorders. The first patient to receive the implant was a young man who flew into uncontrollable, violent rages and had to be tied to his bed. A small hole was drilled in his skull, and the tiny electrode was slipped into his brain and fired to activate the pleasure center—located in the septal area and in part of the amygdala—and to inhibit the place where rage is centered—the other part of the amygdala, the hippocampus, the thalamus, and the tegmentum. The unit delivered five minutes of gently pulsed current every ten minutes and worked quite well. The man was untied, let out of his bed, and eventually allowed to go home. Things went peaceably for a while. Then one day the man went on a rampage in which he wounded a neighbor, attempted to murder his parents, and barely escaped being shot by the police. He was carted back to the institution and x-rayed. The wires from the battery to the pacemaker had frayed and broken. Heath reattached the wires, and the young man calmed down and went home again.

Pulse generators, very similar to Heath's, are now state of the art for the treatment of Parkinson's disease. A company called Medtronic, in Minneapolis, manufactures a neural stimulator that delivers current to the brain and greatly alleviates tremors caused by the degenerative neurological disease. A similar pacemaker built by a company called Cyberonics is used to treat epileptic seizures and severe depression. In one study, fifteen of thirty severely depressed patients who did not respond to any medication reported they felt much better after wearing a pacemaker the size of a pocket watch that stimulated the vagus nerve in their neck, which in turn stimulated the emotional region of their brain. The stimulator is also being used to fight diabetes and hypertension, alleviate the symptoms of stroke and closed head injuries, and enhance memory.

There is one other field of inquiry in neuroscience that figures directly in neurofeedback. Unlike ESB, the concept of “neuroplasticity”—the idea that the brain is not static but capable, if given the right stimulation, of dramatic and long-lasting change—has only recently been widely accepted as scientific fact. It has fomented a revolution in thinking about the brain. For many years, the assumption that the structure of the brain was fixed after childhood went unquestioned. Injury or disease could diminish the brain's capabilities, but there was no way to enhance those capabilities. “Once development is completed,” wrote Nobel laureate Santiago Ramon y Cajal, “the sources of growth and regeneration of axons are irrevocably lost. In the adult brain, nervous pathways are fixed and immutable; everything may die, nothing may be regenerated.”

Ramon y Cajal, it turns out, was wrong. A growing number of studies show that the brain is capable of great change. Some of the more interesting research took place at a convent in Mankato, Minnesota, run by the School Sisters of Notre Dame. Why, scientists wondered, were so many of the nuns free of such problems as senility and Alzheimer's disease well into their eighties and nineties, far beyond the proportion in the general population? Dr. David Snowdon, a researcher at the Sanders-Brown Center on Aging in Kentucky, asked the nuns to donate their remains to science. It seemed to him that those nuns in the order who used their heads—taught or went to college or performed other intellectual work, including puzzles—suffered less senility or other degenerative brain disease than their counterparts who performed manual labor. After they died, he removed their brains, placed tissue samples from each under an electron microscope, and noted dramatic differences between the two types of brains. Connections between the cells of the intellectually active nuns were more robust. One possible explanation, he says, is that exerting the brain in new ways through the course of life creates new neuronal pathways, more synaptic connections, and significantly more cortex—a bigger and better brain.

In the 1970s, a researcher in Canada who worked with white lab rats found that when he split a litter—brought half home and left half in their cage in the lab—those that stayed at his home learned much more adeptly than those left in the lab. Mark Rosenzweig and a graduate student, Bruno Will, designed an experiment at the University of California at Berkeley to test the notion that somehow spending time at the researcher's home stimulated the growth of brain cells. They divided a population of similar rats among three different environments. In a standard social environment, three or four rats were housed together in an average-sized and plainly appointed wire cage with food and water available. A lone rat was housed in an impoverished environment, a small, barren wire cage with dim light. A third population, twelve rats, was housed in the equivalent of a rat resort, a multilevel habitat with food and water and a number of objects to climb on, move, and hide under. Objects were continually added and removed. After three months in their respective habitats, the rats were tested on learning skills. Researchers found that those raised in the enriched environment were much better learners than those raised in the impoverished environment and could learn to navigate a maze much more quickly. When they removed the rats’ brains, they discovered physiological differences as well, similar to those found in the nuns. The cortices of the enriched rats were thicker; the number of glial cells, which play a role in supporting neuronal activity, was greater; and individual neurons had more elaborate branches and an increased number of connections to other cells than those of the impoverished population.

Bruno Will followed up the experiments later at Louis Pasteur University in Strasbourg, France, by placing rats with brain damage he had created—he suctioned lesions in their visual cortex—into the three different environments. The results are testimony to the power of “environmental therapy”: brain-damaged rats placed in the enriched environment recovered their eyesight much better than rats in the standard or impoverished environments. Will then took the experiments a step further. In a population of normal rats, he damaged the hippocampus, a portion of the brain that is critical to the formation and storage of memories. Despite massive damage to that region, exposure to swings, toys, and other enrichments for a month enhanced and sustained the recovery of most of the rats’ memory. Rats with similar lesions placed in the other environments recovered their ability to remember only poorly or not at all.

A recent study, with a very solid design, seems to end for good the notion of a static brain. In November of 1998 the journal Nature Medicine published the results of a study conducted by a team of Swedes and Americans. Five terminally ill cancer patients at a Swedish hospital were injected with a fluorescent green chemical dye called bromodeoxyuridine. (BrdU, as it is known, is a chemical building block of human DNA. It is taken up only by cells that are new and beginning to divide, not by existing cells, and it is given to some cancer patients to monitor tumor growth.) Each time one of the five cancer patients died, one of the researchers, Peter Eriksson, rushed to the hospital. After a pathologist removed the brain and pulled out the hippocampus, which is deep within the temporal lobe, Eriksson injected it with a red dye marker that attaches only to neurons. When he looked at a slice of the hippocampus under a microscope, brain cells in the tissue lit up in fluorescent red and green. The red dye that had just been injected meant the cells were indeed neurons; the green meant the cells had been created after the injection of BrdU, toward the end of the patient's life. Undifferentiated cells were continuing to divide, and to produce new, fully functional neurons as well, right up until the death of each of the five subjects.

The systems that govern the human brain are the most complex and compact on earth, and, even though more has been learned about the brain in the last thirty years than in all of human history, science has not come close to understanding how all the pieces fit together to create human consciousness. It's a riddle of our existence: “If the brain were so simple we could understand it, we would be too simple to do so,” according to one observer. But a great deal is known. The brain weighs about three pounds, is about 90 percent salt water, and has the consistency of a ripe avocado. There are four distinct regions to the brain. The top layer is the cortex, and there, especially in the front of the cortex, is where reason, planning, writing and reading, and a host of other cognitive functions take place. The cortex is what makes us human, what distinguishes us from the rest of the animal kingdom, by mitigating our baser instincts. Unfolded, it would be about the size of a handkerchief; it ranges from one thirty-second to one quarter of an inch thick. It looks like the covering of a tree trunk, and the name cortex is Latin for bark. Beneath the cortex is the mammalian brain, or the limbic system, the portion of the brain that governs pain and pleasure, including sex, eating, fighting, and taking flight. Below that is the diencephalon, which regulates our sleep and appetite. The bottom layer is the primitive regulatory machinery, the reptilian brain. This region is concerned with basic functions—breathing, blood pressure, movement, and body temperature.

The brain is a wonder of information processing. It would take a computer the size of some states to come close to matching the number of “bits” of information in the brain, yet the brain fits in one hand. Information comes into the brain from the external world through the senses and is converted to an extremely complex mix of electrical and chemical energy. The four parts of the brain, as well as myriad areas across the surface of the cortex, must “talk” to one another constantly, and the brain accomplishes that by means of its vast assembly of tiny electrical devices, the neurons or brain cells. Neurons are like microscopic batteries. Each neuron's membrane builds up a charge electrochemically and then releases it, over and over again, in the form of what is called an “action potential,” a surge of voltage that propagates down the axon to where it terminates on other neurons. Cells fire in unison to create thought and movement, and information travels around the brain in networks. (An EEG electrode reads the activity of about one hundred thousand neurons.) How information—thought—is encoded in this electrochemical soup is a deep mystery. There are as many as one hundred billion of these long and spidery neurons, or cells, in the human brain, and each one may make from hundreds to hundreds of thousands of connections with other cells, which means a total of perhaps ten to a hundred trillion connections. These connections are at the heart of how well the brain functions. The development of the brain, from infancy to adulthood, is akin to ecological succession. The cells in an infant brain are like an open grassland. As the child learns, is stimulated and exposed to the world, the grassland begins to sprout thicker grasses, then small shrubs and trees that are separated from one another. Then the trees grow closer together and larger, and branches multiply. Finally, there is a rich, dense canopy of connected neurons, teeming with life—in the case of the brain, the life is electricity and chemicals that contain information. The richer the canopy, the more connections there are among the trees, the more fertile the habitat for the flow of information.
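
A quick back-of-envelope check, using only round-number assumptions drawn from the figures above, shows that the ten-to-a-hundred-trillion total corresponds to each neuron averaging on the order of a hundred to a thousand connections, even though some individual cells make far more:

```python
# Rough arithmetic behind the connection estimate; round-number assumptions only.
neurons = 100_000_000_000            # about one hundred billion neurons
avg_connections = (100, 1_000)       # assumed average connections per neuron

low = neurons * avg_connections[0]   # 1e13 -> ten trillion
high = neurons * avg_connections[1]  # 1e14 -> one hundred trillion
print(f"{low:,} to {high:,} total connections")
```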

In very general terms, the brain works as follows. A person calls to you from across the street and says hello. The auditory areas of your brain fire to hear what the person says and to make sense of it. The visual cortex at the rear of the brain lights up so you can tell if you recognize the person. Memories start churning, so you can match what you see and hear against what you know. In order to form a response, thoughts are generated and speech areas begin to fire, which talk to the motor cortex and tell it to activate your lips and your pharynx and to wave your hand. The emotions of the lower level of the brain may activate if the person is a threat, owes you money, or is someone you love. Each of these functions is governed by an assembly of neurons, which generate charges that travel through clusters of cells at between one hundred and two hundred miles per hour, switching on and off with precision. The brain does all of this simultaneously, or in parallel, and in serial progression, as well. Picture the brain as filled with tiny lights: everything a person does fires certain collections of these lights at different speeds and brightnesses. “We know that within the brain, a great many electric processes can be identified, each with its own limited domain, some apparently independent, others interacting with each other,” said W. Grey Walter, an English scientist who was a pioneer in the electrical realm of the brain. “We are dealing essentially with a symphonic orchestral composition, but one in which the performers may move about a little, and may follow the conductor or indulge in improvisation—more like a jazz combination than a solemn philharmonic assembly.”

This intricate symphony of consciousness is at play constantly when we are awake and engaged with the world around us.

In Jake's case, the symphony was largely intact but confused. The conductor was not doing his job, and the orchestra was playing too slow. Neurofeedback, the model holds, rouses the conductor and resets him to his appropriate speed. Once the conductor is back in form, the rest of the players fall into line. Whether the problem is autism, epilepsy, post-traumatic stress disorder, or any of a host of other maladies, the answer lies in resetting the conductor and appropriately engaging the orchestra with neurofeedback. The notion has yet to be accepted by the medical establishment, but its time may have arrived.
