CHAPTER 4
BREAKING
In breaking, something whole – such as a human body – is taken apart, and something new assembled out of the fragments.
Sophie Cave’s Floating Heads
Auguste Rodin’s Shadow Torso
Magdalena Abakanowicz’s Unrecognized
To create his Broken Obelisk, Barnett Newman snapped the obelisk in half and flipped it upside down.
Similarly, artists Georges Braque and Pablo Picasso broke apart the visual plane into a jigsaw puzzle of angles and perspectives in Cubism. In his massive painting Guernica, Picasso used breaking to illustrate the horrors of war. Bits and pieces of civilians, animals and soldiers – a torso, a leg, a head, all disjointed with no figure complete – create a stark representation of brutality and suffering.
Georges Braque’s Still Life with Violin and Pitcher
Pablo Picasso’s Guernica
The cognitive strategy of breaking that enabled Newman, Braque, and Picasso to make their art also made airports safer. On July 30, 1971, a Pan Am 747 was redirected to a shorter runway as it prepared to depart from San Francisco airport. The new runway required a steeper angle of ascent but, unfortunately, the pilots failed to make the necessary adjustments: as the plane took off, its climb was too shallow and it struck a lighting tower. Airport towers and fences at the time were heavy and unyielding so they could withstand high-force winds; as a result, the lighting tower acted like a giant sword, slicing into the aircraft. A wing was dented, part of the landing gear was torn off, and a piece of the tower penetrated the main cabin. The smoking plane continued out over the Pacific Ocean, where it flew for nearly two hours to use up fuel before heading back for an emergency landing. As the plane touched down, its tires burst and the plane veered off the runway. Twenty-seven passengers were injured.
An Ercon frangible mast
Following this event, the Federal Aviation Administration mandated new safeguards. Engineers were tasked with preventing this from happening again, and their neural networks spawned different strategies. Nowadays, as you taxi for takeoff, the landing lights and radio towers outside the plane may look like solid metal – but they aren’t. They’re frangible, ready to break apart into smaller pieces that won’t harm the plane. The engineer’s brain saw a solid tower, and generated a what-if in which the tower disbanded into pieces.
Breaking up a continuous area revolutionized mobile communication. The first mobile phone systems worked just like television and radio broadcasting: in a given area, there was a single tower transmitting widely in all directions. Reception was great. But while it didn’t matter how many people were watching TV at the same time, it did matter how many people were making calls: only a few dozen could do so simultaneously. Any more than that and the system was overloaded. Dialing at a busy time of day, you were apt to get a busy signal. Engineers at Bell Labs recognized that treating mobile calls like TV wasn’t working. They took an innovative tack: they divided a single coverage area into small “cells,” each of which had its own tower.1 The modern cellphone was born.
Colors represent different broadcast frequencies
The great advantage of this system was that it enabled the same broadcast frequency to be reused in different neighborhoods, so more people could be on their phones at the same time. In a Cubist painting, the partitioning of a continuous area is on view. With cellphones, the idea runs in the background. All we know is that the call didn’t drop.
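For readers who want to see the arithmetic, here is a minimal Python sketch of the frequency-reuse idea; the channel count and the seven-cell reuse cluster are illustrative assumptions, not Bell Labs’ actual figures.

```python
# Toy sketch of cellular frequency reuse (illustrative figures, not Bell Labs' design).
# One big tower with N channels supports only N simultaneous calls; splitting the
# coverage area into cells lets every channel be reused in each cluster of cells.

CHANNELS = 24   # hypothetical number of radio channels in the licensed band
CLUSTER = 7     # classic seven-cell reuse cluster: neighboring cells never share channels

def capacity(num_cells, channels=CHANNELS, cluster=CLUSTER):
    """Simultaneous calls supported when the channels are split across a reuse cluster."""
    channels_per_cell = channels // cluster
    return num_cells * channels_per_cell

print("One big tower:  ", CHANNELS, "simultaneous calls")
print("100 small cells:", capacity(100), "simultaneous calls")  # same spectrum, far more calls
```

The same slice of spectrum that once carried a few dozen conversations now scales with the number of cells.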
Poet e. e. cummings broke apart words and syntax to create his free-verse poetry. In his poem “dim,” nearly every word is split between lines.
dim
i
nu
tiv
e this park is e
mpty(everyb
ody’s elsewher
e except me 6 e
nglish sparrow
s) a
utumn & t
he rai
n
th
e
raintherain2
An analogous type of breaking was used by biochemist Frederick Sanger in the lab during the 1950s. Scientists were eager to figure out the sequence of amino acids that made up the insulin molecule, but the molecule was so large that the task was unwieldy. Sanger’s solution was to chop insulin molecules into more manageable pieces – and then sequence the shorter segments. Thanks to Sanger’s “jigsaw” method, the building blocks of insulin were finally sequenced. For this work, he won the Nobel Prize in 1958. His technique is still used today to figure out the structure of proteins.
But that was just the beginning. Sanger devised a method of breaking up DNA that enabled him to precisely control how and when strands were broken. The driving force was the same: break the long strands into workable chunks. The simplicity of this method greatly accelerated the gene-sequencing process. It made possible the human genome project, as well as the analysis of hundreds of other organisms. In 1980, Sanger won another Nobel Prize for this work.
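The jigsaw logic can be caricatured in a few lines of toy Python: break a long sequence into overlapping fragments, then stitch it back together by matching the overlaps. (Real Sanger sequencing is chemistry rather than string handling; the fragment sizes and the greedy reassembly below are illustrative assumptions.)

```python
# Toy illustration of the "jigsaw" idea: break a long sequence into overlapping
# pieces, then rebuild the whole by matching where the pieces overlap.

def fragment(seq, size=8, step=5):
    """Break a sequence into overlapping fragments."""
    return [seq[i:i + size] for i in range(0, len(seq) - step + 1, step)]

def reassemble(fragments, overlap=3):
    """Greedily stitch fragments back together using their overlaps."""
    result = fragments[0]
    for frag in fragments[1:]:
        assert result.endswith(frag[:overlap])   # fragments agree where they overlap
        result += frag[overlap:]
    return result

genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"
pieces = fragment(genome)
print(pieces)
print(reassemble(pieces) == genome)   # True: the whole rebuilt from its parts
```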
By busting up strands of text in creative ways, e. e. cummings created a new way to use language; by breaking up strands of DNA, Sanger created a way to read Nature’s genetic code.
The neural process of breaking also underlies the way we now experience movies. In the earliest days of film, scenes unfolded in real time, exactly as they do in real life. Each scene’s action was shown in one continuous shot. The only edits were the cuts from one scene to another. The man would say urgently into the telephone, “I’ll be right there.” Then he would hang up, find his keys, and exit the door. He would walk down the hallway. He would descend the stairs. He would exit the building, walk down the sidewalk, come to the café, enter the café, and sit down for his encounter.
Pioneers such as Edwin Porter began to link scenes more tightly by shaving off their beginnings and endings. The man would say, “I’ll be right there,” and suddenly the scene would cut to him sitting at the café. Time had been broken, and the audience didn’t think twice about it. As cinema evolved, filmmakers began to reach further in the direction of narrative compression. In the breakfast scene of Citizen Kane, time leaps years ahead every few shots. We see Kane and his wife aging and their marriage evolving from loving words to silent stares. Directors created montages in which a lengthy train ride or an ingénue’s rise to stardom could be summarized by a few seconds of film; Hollywood studios hired montage specialists whose only job was to edit these sequences. In Rocky IV, training montages of boxer Rocky Balboa and his opponent Ivan Drago consume a full third of the film. No longer did time pass in a movie as it does in life. Breaking time’s flow had become part of the language of cinema.
Breaking continuous action also led to a great innovation in television. In 1963, the Army–Navy football game was broadcast live. Videotape equipment of the time was difficult to control, which made rewinding the tape imprecise. The director of that game’s broadcast, Tony Verna, figured out a way to put audio markers onto the tape – markers that could be heard within the studio, but not on air. This allowed him to covertly cue the start of each play. It took him several dozen tries to get the equipment working properly. Finally, in the fourth quarter, after a key score by Army, Verna rewound the tape to the correct spot and replayed the touchdown on live television. Verna had broken temporal flow and invented instant replay. Because this had never happened before, the television announcer had to provide extra explanation. “This is not live! Ladies and gentlemen, Army did not score again!”
The early days of cinema, characterized by single long takes, were similar to the early days of computing, in which a mainframe could only process one problem at a time. A computer user would create punch cards, get in the queue and, when his turn came, hand the cards to a technician. Then he had to wait a few hours while the numbers were crunched before collecting the results.
An MIT computer scientist named John McCarthy came up with the idea of time sharing: what if, instead of running one algorithm at a time, a computer could switch between multiple ones, like cutting between different shots in a movie? Instead of users waiting their turn, several of them could work on the mainframe simultaneously. Each user would have the impression of owning the computer’s undivided attention when, in fact, it was rapidly toggling between them all. There would be no more waiting in line for a turn; users could sit at a terminal and believe they were having a one-on-one relationship with the computer.
The shift from vacuum tubes to transistors gave McCarthy’s concept a boost, as did the development of user-friendly coding languages. But dividing up the computer’s computations into short micro-segments was still a challenging mechanical feat. McCarthy’s first demonstration didn’t go well: in front of an audience of potential customers, McCarthy’s mainframe ran out of memory and started spewing out error messages.3 Fortunately, the technical hurdles were soon overcome and, within a few years, computer operators were sitting at individual terminals in real-time “conversation” with their mainframes. By covertly breaking up digital processing, McCarthy initiated a revolution in the human-machine interface. Nowadays, as we follow driving directions on our phone, our handheld device draws on the processing power of numerous servers, each toggling rapidly between millions of users – McCarthy’s concept writ large in the cloud.
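A minimal round-robin sketch in Python captures the flavor of time sharing; the user names, job sizes and slice length are hypothetical, and McCarthy’s actual systems were far more involved.

```python
# A minimal sketch of time sharing: instead of running one job to completion,
# the machine cycles through every user's job in small slices, so each user
# sees steady progress. (Illustrative round-robin only.)
from collections import deque

def time_share(jobs, slice_size=2):
    """jobs: dict of user -> units of work remaining. Run them in round-robin slices."""
    queue = deque(jobs.items())
    while queue:
        user, remaining = queue.popleft()
        work = min(slice_size, remaining)
        print(f"{user}: ran {work} unit(s), {remaining - work} left")
        if remaining - work > 0:
            queue.append((user, remaining - work))   # back of the line for another turn

time_share({"alice": 5, "bob": 3, "carol": 4})
```

Because the slices are short, each user experiences the illusion of a dedicated machine.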
As with time, the brain can break up the visual world into fragments. David Hockney created his photo-collage The Crossword Puzzle using large tiles that overlap and collide.
In pointillism, scenes are built from dots that are smaller and more numerous.
Georges Seurat’s Un dimanche après-midi à l’île de la Grande Jatte
In digital pixelation, the dots are so small you normally don’t see them. This covert fracturing is the innovation that gives rise to our whole digital universe.
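A toy Python sketch makes the breaking visible: a smooth grayscale image, stored as a NumPy array, is carved into blocks and each block is replaced by its average. (The image and block size are illustrative; real displays and cameras do this fracturing at a scale too fine to notice.)

```python
# Minimal sketch of pixelation: break an image into blocks and let each block's
# average value stand in for all of its pixels.
import numpy as np

def pixelate(image, block=4):
    h, w = image.shape
    out = image.copy().astype(float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = image[i:i + block, j:j + block].mean()
    return out

gradient = np.outer(np.linspace(0, 255, 16), np.ones(16))   # smooth toy image
blocky = pixelate(gradient)
print(np.unique(blocky).size, "distinct values instead of", np.unique(gradient).size)
```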
The idea of pixelation – breaking a whole into tiny parts – has a long history. When we “cc” an email, we are employing a skeuomorph from the analog age: carbon copy. In the nineteenth and early twentieth centuries, an author would clone a document by first placing a sheet of black or blue carbonic paper between two sheets of plain paper; then, as the author wrote or typed on the top sheet, dry ink or pigment was transferred to the lower one, creating a duplicate. But the carbon sheets were messy; it was hard to handle them without getting everything dirty. In the 1950s, inventors Barrett Green and Lowell Schleicher came up with a way to solve the problem. They broke the concept of the sheet into hundreds of smaller areas, inventing the technique of micro-encapsulation. This way, as a person wrote on the sheet, individual ink capsules would burst, turning the sheet below blue.4 Although it would still be called a “carbon copy,” Green and Schleicher had created a user-friendly alternative to carbon paper: no matter where the pencil or typewriter key made its impression, ink would flow. Decades later, photocopying spelled the end of carbonless paper, but Green and Schleicher’s micro-encapsulation technique lived on in time-release medications and liquid crystal displays. For instance, instead of a solid pill, the 1960s decongestant Contac consisted of a gelatin capsule packed with more than six hundred “tiny time pills” that were digested at different rates. Likewise, instead of a solid sheet of glass, today’s LCD televisions segment the screen into millions of densely arranged microscopic crystals. Things that were once thought to be whole and indivisible turned out to be breakable into smaller parts.
Breaking comes so naturally to us that we hardly notice the many ways it is reflected in how we write and speak. We whittle away at words to speed up communication, shortening, for example, “gymnasium” (from the Greek gymnazein, meaning to train in the nude) into “gym” (and a less liberal dress code).5 We remove letters and phrases to create acronyms such as FBI, CIA, WHO, EU and UN. We tweet F2F for face-to-face, OH for overheard, and BFN for bye for now.
Our ease with these kinds of acronyms demonstrates how much brains like compression: we’re good at breaking things down, keeping the best bits, and still understanding the point. This is why our language is full of synecdoche, in which a part stands for a larger whole. When we talk about “the face that launched a thousand ships,” we obviously mean all of Helen, not just her visage – but we can break her down to a fragment without losing the meaning. This is why we describe your vehicle as your “wheels,” tally the number of people with a “head count,” or ask for someone’s “hand” in marriage. We talk about “suits” to represent businessmen, and “gray beards” to represent older executives.
This same sort of compression is characteristic of human thinking in general. Consider these sculptures in the port city of Marseilles, France: the visual analogs of synecdoche.
Bruno Catalano’s Les Voyageurs
Once the brain has the revelation that a whole can be broken into parts, new properties can emerge. David Fisher’s “Dynamic Architecture” breaks apart the usually solid frame of a building and, using motors similar to those in revolving restaurants, allows every floor to move independently. The result is a building that morphs its appearance. Floors can be choreographed individually or as an ensemble, adding an ever-changing facade to the city skyline. Thanks to our neural talent for breaking things apart, pieces that were once unified can become unglued.
As with dynamic architecture, one of classical music’s great innovations was to break musical phrases into smaller bits. Take as an example Johann Sebastian Bach’s Fugue in D-Major from The Well-Tempered Clavier. Here is the main theme:
Don’t worry if you can’t read music. The point is that later in the movement Bach snaps his theme in two: he discards the first half and concentrates only on the final four notes highlighted in red. In the passage below, overlapping versions of this tail appear thirteen times to produce a rapid, beautiful mosaic of fragments.
This kind of breakage gave composers like Bach a flexibility not found in folk songs such as lullabies and ballads. Rather than repeating the entire theme over and over, this shattering allowed him to write a packed multiplicity of theme-fragments in short order, creating something like the movie montages in Citizen Kane or Rocky IV. Given the power of this innovation, much of Bach’s work involved introducing themes and then breaking them apart.
Often the revelation that a whole can be broken up allows some parts to be scooped out and discarded. For his installation piece Super Mario Clouds, the artist Cory Arcangel hacked into the computer game Super Mario Brothers and removed everything but the clouds. He then projected what remained onto large screens. Visitors circulated through the exhibit, watching magnified cartoon clouds floating peacefully on the screen.
And the brain’s technique of omitting some pieces and keeping others often leads to technological innovations.
Late in the nineteenth century, farmers got the idea of replacing horses with a steam engine. Their first tractors didn’t work so well, however: they were essentially street locomotives, and the machinery was so heavy that it compressed the soil and ruined the crops. Switching from steam to gas power helped, but the tractors were still cumbersome and hard to steer.
A nineteenth-century steam tractor
It seemed likely that mechanical plowing might never work. And then Harry Ferguson came up with an idea: take away the undercarriage and the shell, and attach a seat right onto the engine. His “Black Tractor” was lightweight, making it much more effective. By keeping part of the structure and throwing away the rest, Ferguson planted the seeds of the modern tractor.6
Almost one hundred years later, breaking things down to omit parts changed the way music was shared. In 1982, a German professor tried to patent an on-demand music system where people could order music over phone lines. Given that audio file sizes were so large, the German patent office refused to approve something it deemed impossible. The professor asked a young graduate student named Karlheinz Brandenburg to work on compressing the files.7 Early compression schemes were available for speech but they were “one-size-fits-all” solutions, treating all files alike. Brandenburg developed an adaptive model that could respond flexibly to the sound source. That enabled him to craft his compression schemes to fit the particular nature of human hearing. Brandenburg knew that our brains hear selectively: for instance, loud sounds mask fainter ones, and low frequency sounds mask high ones. Using this knowledge, he could delete or reduce the unheard frequencies without a loss in quality. Brandenburg’s biggest challenge was a solo recording by Suzanne Vega of the song “Tom’s Diner”: a female voice singing alone and humming required hundreds of attempts to get the fidelity just right. After years of fine tuning, Brandenburg and his colleagues finally succeeded in finding the optimal balance between minimized file size and high fidelity. By giving the ear just what it needed to hear, audio storage space was reduced by as much as 90 percent.
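The perceptual trick can be caricatured in a few lines of Python: transform a snippet of audio into its frequencies and zero out any component far quieter than the loudest one. (This crude masking rule and the −30 dB cutoff are illustrative assumptions; the real MP3 encoder uses detailed psychoacoustic models and clever bit allocation.)

```python
# Toy sketch of the perceptual idea behind MP3-style compression: transform the
# audio into frequencies, then discard components too quiet to be heard next to
# louder ones.
import numpy as np

def compress(signal, keep_db=-30.0):
    spectrum = np.fft.rfft(signal)
    magnitude = np.abs(spectrum)
    # Anything more than `keep_db` below the loudest component is treated as masked.
    threshold = magnitude.max() * 10 ** (keep_db / 20)
    spectrum[magnitude < threshold] = 0          # "throw out" the inaudible frequencies
    print(f"kept {np.count_nonzero(spectrum)} of {len(spectrum)} frequency components")
    return np.fft.irfft(spectrum, n=len(signal))

t = np.linspace(0, 1, 44100, endpoint=False)
loud = np.sin(2 * np.pi * 440 * t)               # loud 440 Hz tone
faint = 0.001 * np.sin(2 * np.pi * 450 * t)      # faint neighbor, treated as masked
approx = compress(loud + faint)
```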
At first, Brandenburg worried whether his formula had any practical value. But within a few years digital music was born, and squeezing as much music as you could onto your iPod became a must. By breaking up acoustic data and flexibly throwing out unmissed frequencies, Brandenburg and his colleagues had invented the MP3 compression scheme, which underpins most of the music on the net. A few years after it was coined, “MP3” passed “sex” as the most searched-for term on the internet.8
We often discover that the information we need to retain is less than expected. This is what happened when Manuela Veloso and her team at Carnegie Mellon developed the CoBot, a robot helpmate that roams the hallways of a building to run errands. The team equipped the CoBot with sensors to produce a rich 3D rendering of the space in front of it. But trying to process that much data in real time was overloading the robot’s on-board processors, leaving the CoBot often stuck in neutral. Dr Veloso and her team realized that the CoBot didn’t need to analyze an entire area in order to spot a wall – all it needed were three points from the same flat surface. So although the sensor records a great deal of data, its algorithm only samples a tiny fraction, using less than 10 percent of the computer’s processing power. When the algorithm identifies three points lying in the same plane, the CoBot knows it’s looking at a barrier. Just as the MP3 took advantage of the fact that the human brain doesn’t pay attention to everything it hears, the CoBot doesn’t need to “see” everything its sensors record. Its vision is barely a sketch, but it has enough of a picture to avoid bumping into obstacles. In an open field, the CoBot would be helpless, but its limited vision is perfectly adapted to a building. The intrepid machine has escorted hundreds of visitors to Dr Veloso’s office, all thanks to breaking down a whole scene into its constituent parts – like Helen’s face becoming the piece of anatomy launching the ships.
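The three-points-make-a-wall idea can be sketched in Python: fit a plane through three sampled depth points and check how many other samples lie on it. (The coordinates, tolerance and voting rule below are hypothetical stand-ins for the CoBot’s actual depth-processing code.)

```python
# A minimal sketch of plane detection from sparse samples: fit a plane through
# three depth points and see whether most of the other samples lie on it.
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return (unit normal, offset d) of the plane through three points."""
    normal = np.cross(p2 - p1, p3 - p1)
    normal = normal / np.linalg.norm(normal)
    return normal, np.dot(normal, p1)

def looks_like_wall(samples, tolerance=0.02, min_fraction=0.8):
    """True if most sampled points lie on the plane defined by the first three."""
    normal, d = plane_from_points(*samples[:3])
    distances = np.abs(samples @ normal - d)
    return np.mean(distances < tolerance) >= min_fraction

# Hypothetical depth samples (meters): points on a flat wall at x = 1.5, plus one outlier.
wall = np.array([[1.5, y, z] for y in (0.0, 0.3, 0.6) for z in (0.0, 0.5)])
samples = np.vstack([wall, [[1.1, 0.2, 0.4]]])
print(looks_like_wall(samples))   # True: enough points share one plane
```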
This technique of breaking down and discarding parts has created new ways to study the brain. Neuroscientists looking at brain tissue have long been stymied by the fact that the brain contains detailed circuits – but those are buried deep within the brain and are impossible to see. Scientists typically solve that problem by cutting the brain into very thin slices – one form of breaking – and then creating an image of each slice before painstakingly reassembling the entire brain in a digital simulation. However, because so many neural connections are damaged in the slicing process, the computer model is at best an approximation.
Neuroscientists Karl Deisseroth and Kwanghun Chung and their team found an alternate solution. Fatty molecules called lipids help hold the brain together, but they also diffuse light. The researchers devised a way to flush the lipids out of a dead mouse’s brain while keeping the brain’s structure intact. With the lipids gone, the mouse’s grey matter becomes transparent. Like Arcangel’s installation of the Mario Brothers clouds, the CLARITY method removes part of the original but does not fill in the gaps – in this case, gaps that enable neuroscientists to study large populations of neurons in a way never before possible.9
A mouse hippocampus viewed with the CLARITY method
Breaking enables us to take something solid or continuous and fracture it into manageable pieces. Our brains parse the world into units that can then be rebuilt and reshaped.
Like bending, breaking can operate on a single source: you can pixelate an image or spin the floors of a building. But what happens when you draw on more than one source? Many creative leaps are the result of surprising combinations – whether it’s sushi pizza, houseboats, laundromat bars, or poet Marianne Moore describing a lion’s “ferocious chrysanthemum head.” For that, we turn to the brain’s third main technique for creativity.