
2

LEARNING

Why We Are Bad at Rote Learning, but Better at Understanding the World

KNOWLEDGE IS POWER, everyone says, and so it follows that the most powerful people must also have the most knowledge. But it turns out they usually have the least. Knowledge doesn’t simply rain down from the sky. Our brain has to work to attain it; it has to learn. And this is no easy task. Try it out right now by memorizing the following list:

Ginger

Raisin

Bicycle

Strawberry

Night

Hedgehog

Salad

Grapes

Noodles

Clock

Rest

Dream

Zebra

Lollipop

Labyrinth

Chameleon

Raspberry

Allow yourself to read the list multiple times so that you can really get it. Feel free to use tricks, imagery, mnemonic devices, storytelling. Then continue reading. But don’t forget: don’t forget! Even if the previous chapter showed us just how hard it is to remember things, and how much the brain loves to toss things out of its memory.

Learning isn’t everything

THE IDEA OF learning doesn’t conjure the rosiest image for us. This is apparent in the words that we use around the idea of learning: cram, bone up on, wade through, bury oneself in, burn the midnight oil, or even put our noses to the grindstone. A lot of people associate learning with an unpleasant period spent at school or attending a training course accompanied by exertion, frustration, battles over grades, and annoying exams. Life is divided into the time in which one is required to learn, finish homework or seminar materials, and spare time, in which we can finally do something fun. Learning is tedious, exhausting, and undesirable. Spare time, free from learning, is, by contrast, fun, relaxing, and enjoyable. It almost seems that we need to create a special environment for learning if it is ever to happen at all. Anyone who wants to continue their education has to take a course or a workshop, and when it’s finally over, they have “learned enough.” The exam is passed; the certificate is in hand—it’s curtains on learning.

Unfortunately, learning doesn’t simply let go quite so easily. We are continuously required to educate ourselves and there is never an end to it. I recently read in my autograph book a reflection written by a then seven-year-old pal who had already figured out, over twenty years ago, that he was never going to stop learning: “Learning is like paddling against the stream. As soon as you stop, you float backwards.” The buzzword nowadays is “lifelong learning.” And of course, we do have to learn everywhere and all the time—at school, at university, in our careers. We are thus fortunate to have a brain that learns with us.

Or, does it? At the end of the day, it is not very easy to acquire and save information. In fact, it turns out the brain has three weaknesses when it comes to learning. The first is that it doesn’t learn very well under pressure. Anyone who has ever studied for an important exam knows how complicated that can be. The second is that we are extremely bad at learning data, facts, and information. The brain tires quickly of this kind of stuff. Or are you perhaps able to recall the names of the first five Presidents of the United States, the second binomial formula, or the difference between a predicative and adverbial clause? No? You have probably learned all of these things at some point but have since forgotten them. Which leads us to the third of the brain’s weaknesses: anyone who is able to learn something is also able to unlearn it. Learning is not a one-way street of knowledge in the brain.

Although at first glance learning appears to be a tedious business, disparaged in our very vocabulary, and an altogether arduous undertaking, the brain happens to be a grand master in this particular discipline. After all, learning is our evolutionary specialty, our ecological niche—the thing that we are able to perform with exceptional agility and which sets us apart from other species. Birds fly. Fish swim. Humans learn. Albeit differently than we might suppose. There’s no doubt we have certain weaknesses when it comes to learning (e.g., the stress of learning causes us to cramp up, we are bad at memorizing facts), but on closer inspection, it becomes apparent that these deficiencies are merely the price we pay for being the best learners in the world. Or, even more than that: not only do we learn, we also understand the world. This is the great strength of human thought and why it is worth swallowing the few weaknesses that go along with it. Anyone who is able to appreciate this should also be able to understand the best methods for taking in new information (how best to “learn”) and why we, as a species, will always remain superior to computers.

The neuron orchestra

BEFORE WE START talking about the weaknesses (and strengths) of how we learn, we’d better take a peek behind the scenes of a learning brain. What is it that happens, in fact, when we learn something new? Or, we can ask an even more basic question: What is a piece of information—is it a thought inside of our head that needs to “get learned”?

When it comes to computers, the answer is relatively clear. If I want to save something on a computer, I first need something to save. We call this data: electronically processed characters. The computer has to put these bits of data somewhere so that it can retrieve them later. It organizes the data into a packet and assigns it a location where it can selectively access the material. Once it has both (data and location), the computer is ready to process this combination as information. This is not unlike what goes on in a library. The books contain (written) characters that are placed on the shelves in a system that helps you to locate them again. If you want to get ahold of a piece of information in a library, you will need both components here as well. You will need to know where the book is located, and you will need to be able to process the characters in the book.

It’s different in the brain, however, because there are neither characters (data) nor a fixed location where the data is held. If I were to say: “Think about your grandmother!”—you wouldn’t get some kind of “grandmother neuron” suddenly popping up in your brain (as brain researchers used to believe)—instead your neuronal network would assume a very particular state. And it is precisely in this state, in the way in which the nerve cells activate each other, where the information is located. This may sound somewhat abstract, but let us simplify it by comparing it to a very, very large orchestra. Individual members of an orchestra can also individually change their activity level (playing louder or quieter, or at higher or lower pitches). If you are watching a silent orchestra with inactive musicians, it’s impossible to know what compositions they have in their repertoire. In the same way, it’s impossible to know what a brain is able to think by simply observing its neural network from the outside. In an orchestra, the music is produced when the musicians play together and in sync. The music is not located somewhere within the orchestra but is rather in the activity of the individual musicians. If you only listen to a single viola, you can gain some insight into one musician, but you won’t have any idea of what the complete musical piece sounds like. In order to know this, you also need to find out the way in which the other musicians are active at the same time. But even this would not be adequate because in this case you would only know what one particular tone sounds like at any one given moment in time, whereas the music only emerges when you consider it over the course of time. The information (in this case, the melody of the musical work) is located between the various musicians.

Like orchestra musicians, neurons also tune themselves to one another. Just as an orchestra produces a piece of music when the musicians interact, neurons produce the informational content of a thought. A thought isn’t stuck somewhere in the network of a brain. Instead, it is located in the manner in which the network interacts or plays together. In order for this to go off without a hitch, the neurons are connected to each other over common points of contact (synapses), which is the only way that the individual nerve cells can figure out what all the others are up to. In an orchestra, every musician listens to what the others are playing to ensure that they can keep in sync and in tune with each other. In the cerebrum, neurons are connected with several thousand other nerve cells, which means that they are able to produce much more complex states of activity than an orchestra. But it is precisely in these states of activity that the content of the brain’s information is located. In an orchestra, this is the music; in the brain, it is a thought.

This method for processing information has a couple of crucial benefits. Just as the same orchestra is able to play completely different pieces of music by synchronizing the playing of the individual musicians in a new way, the exact same neural network is able to produce totally unique thoughts merely by a shift in activation. In addition, a piece of information (whether a melody played in an orchestra or an image in one’s head) is not necessarily coded in a concrete state of activity, but also in the shift of the state. The mood of a piece of music may be influenced by whether the musicians play softer or louder—in the same way, the information in the neural state may also be influenced by the way in which the neurons shift their activity, and not only by their current state.

This brings us to the realization that the number of possible patterns of activity is vast. The question of how many thoughts it’s possible to think is thus as useful as the question of how many songs it’s possible for an orchestra to play.

There is something else to notice here. In a computer, the information is stored in a location. When you switch the machine off, the information is still there (saved in the form of electrical charges), and all you have to do is to turn the computer back on to retrieve it. But if you switch off a brain, the party is over. End of story. Because the information stored in a brain is not located in any particular physical location but is rather an ever-changing state of the network. During a person’s lifetime, a thought or a piece of informational content always proceeds from an earlier one—as though every state of thought becomes the start signal for the next thought. A thought is never derived from nothing.

The learning in between

AS USEFUL AS the orchestra metaphor is, I don’t want to conceal the fact that there is one enormous difference when it comes to the brain. And the difference is this: unlike an orchestra, the brain does not employ a conductor (and the neurons also don’t have predefined sheet music to play). There’s no one standing on a podium in front of the neurons to direct them on how they should interact with their neighbors. And yet they still manage, with utmost precision, to synchronize themselves in their activities and to create new patterns.

This has consequences for the manner in which a neural network learns. While an orchestra conductor provides the tempo to sync up the musicians, the neurons have to find another method. And as it turns out, information is produced much like the melody of an orchestra: through the ability of the individual neurons to play together.

When an orchestra learns a new melody, the musicians must accomplish two things. Firstly, they have to improve their own playing skills (i.e., learn a new combination with their fingers). Secondly, and more importantly, they have to know exactly when and what to play. They can only be really certain of this, however, by watching the actions of the conductor and waiting to hear how the others around them are going to play. When an orchestra practices a new piece, the musicians are in effect practicing their ability to play together. At the end of the day, the piece of music has also been “saved” in the orchestra’s newly acquired skill of playing it together. In order to retrieve it, the musicians’ concrete dynamic must first be reactivated, out of which the piece of music emerges. Likewise, a piece of information in the brain is encoded in the interaction between the neurons, and when the neurons “practice,” they also adjust their harmonization with each other, making it easier to trigger their interaction next time. In order for a neural network to learn, the neurons must also adjust their points of contact and thereby redesign the entire architecture of the network.

Because the brain does not have a conductor, the nerve cells must rely on tuning themselves to their neighboring cells. What happens next on a cellular biological level is well known. Simply put, the adjustments among the neural contact points that happen during learning follow a basic principle: contact points that are frequently used grow stronger while those seldom put to use dwindle away. Thus, when an important bit of information pops up in the brain (that is, when the neurons interact in a very characteristic way), the neurons somehow have to “make a note” of it. They do this by adjusting their contact points with one another so that the information (the state of activity) will be easier to retrieve in the future. If, in a specific case, some of the synapses are strongly activated, the cells are restructured to ensure that it will be easier to activate those synapses later on. Conversely, synapses that go unused lose their structural support and are dismantled over time. This saves energy, allowing a thinking brain to function on twenty watts of power. (As a comparison, an oven requires a hundred times as much energy to produce nothing but a couple of bread rolls. Ovens are apparently not all that clever.)
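
This use-it-or-lose-it principle is easy to sketch in code. The following is a minimal, illustrative Python toy (my own sketch, not the author’s model, and vastly simpler than real synaptic biology): connections between co-active units are strengthened, while unused connections slowly decay.

```python
import numpy as np

# Toy "use it or lose it" plasticity, loosely Hebbian: synapses between
# co-active neurons grow stronger; unused synapses dwindle away.
# Purely illustrative - real synaptic dynamics are far more complex.

rng = np.random.default_rng(0)
n_neurons = 8
weights = rng.uniform(0.0, 0.1, size=(n_neurons, n_neurons))  # contact strengths

LEARNING_RATE = 0.05  # how strongly co-activity reinforces a contact point
DECAY_RATE = 0.01     # how quickly unused contact points are dismantled

def plasticity_step(weights, activity):
    """One adjustment step: strengthen used contacts, let the rest decay."""
    co_active = np.outer(activity, activity)  # 1 where both neurons fire together
    weights = weights + LEARNING_RATE * co_active               # reinforce used synapses
    weights = weights - DECAY_RATE * (1 - co_active) * weights  # decay unused ones
    return np.clip(weights, 0.0, 1.0)

# Repeatedly evoke the same activity pattern (the "thought" being learned).
pattern = np.array([1, 1, 0, 0, 1, 0, 0, 0])
for _ in range(50):
    weights = plasticity_step(weights, pattern)

# Synapses within the pattern are now strong and easy to re-trigger;
# the seldom-used ones have dwindled toward zero.
print(weights.round(2))
```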

This is how the system learns. By altering its structure so that its state of interaction can more readily be triggered. In this way, the piece of information is actually saved in the neural network—namely, “between” the nerve cells, within their architecture and connection points. But this is only half of the story. In order for the piece of information to be retrieved, the nerve cells must first be reactivated. The more interconnected the points of contact are, the easier it is to do this, even though information cannot be derived from these contacts alone. If you cut open a brain, you will see how the cells are connected but not how they work. You won’t have any idea what has been “saved” in the brain, nor what kind of dynamic interaction it could potentially produce.

Under stress, learning is best—and worst

THIS NEURAL SYSTEM of information processing is extremely efficient. It is much more flexible than a static computer system, requires no supervision (such as a conductor) and, in addition, is able to adapt to a vast range of environmental conditions. However, this learning method also has its weaknesses. Because the cellular restructuring that underlies learning is subject to natural biological fluctuations, we don’t always learn equally well. When we are under stress, for example, we tend to tighten up more readily. Anyone who has felt the pressure of studying for a test knows how hard it is to prepare under this kind of learning stress. It feels like an arduous task to try jamming the most important bits of information into your head. Or, if you do manage to squeeze them in, you then can’t seem to get them back out at the crucial moment (during the test). Why does stress affect our learning process so negatively?

First of all, let me give you the good news: stress is not something that blocks our learning. On the contrary, stress is actually a learning accelerator. Under acute stress conditions (for example, if we are scared or even positively surprised), the brain’s neurotransmitter noradrenaline first makes sure to activate precisely those regions of the brain that heighten our attention.1 About twenty minutes later, this action is further supported by the hormone cortisol, which silences the distracting background flurry of nerve cells.2 We are then able to become more focused and concentrate. The conclusion? Under acute stress, we are extremely capable of learning. For example, if we cross the street and almost get hit by a car, we take note of this for future street crossings. This is even the case when we are positively stressed. For instance, most of us will never forget our first kiss—even if we only experienced it once.

When our brain is under stress, our neural network is animated, enabling us to learn more rapidly. However, if the content that is being learned does not have anything to do with our stress, it’s a very different story. The main goal of a brain under stress is to concentrate only on the information relevant to the stress. Everything else becomes unimportant. And this is what makes learning under stress a double-edged sword. When test participants are placed under stress conditions by having their hands submerged in ice water for three minutes while simultaneously tasked with memorizing a list of words, a few days later they are readily able to recall all of the words having to do with ice water (such as “water” and “cold”), but they cannot remember any of the other arbitrary words (“square,” “party”).3

If you are almost hit by a car, you are able to draw an immediate correlation between looking both ways before crossing the street and possible death. And you will never forget it. But if you are studying Latin vocabulary, you have to stretch your imagination three times as much to establish a connection between the phrase “alea iacta est” and the consequences of bad test scores.

A brief interim conclusion at this point: the brain learns quite well under stress if the main point of the learning has to do with the cause of the stress itself. After touching a hot stove only once, we are quick to learn that it was not such a good idea. Stress hormones actively regulate the dynamic of the neurons in order to better retain emotional content (the pain from a hot stove is much more important than what brand of stove it is). This is all about emotions, by the way, not facts.4 Facts, facts, facts are boring. Which leads us to the next learning weakness of the brain.

The memorization weakness

DO YOU REMEMBER the list at the beginning of the chapter? Can you recall even half of it? If yes: I owe you congratulations and respect. How did you go about memorizing this list? If you used mnemonic devices, storytelling, or images to help you to relate the various words, did you realize that you actually increased the amount of information that had to be learned? You made yourself “learn” more than was necessary in order to retain the information. This is a paradox. You might ask an additional question: Why does any of it matter at all? The words on the list are mostly arbitrary and have no relevance or context to you. Why should you be bothered to learn them? Merely because an author demands it of you?

This is precisely the point. Our brain is good at adapting to many different situations, actively adjusting itself, and learning new things, but this doesn’t include raw information such as a few random words, bits of data, or facts. Research shows that the upper limit of objects that can be memorized (without using memory tricks such as mnemonic devices or storytelling) is around twenty. Which is not very much. The list at the beginning of this chapter only takes up 146 bytes on a computer hard drive, while a picture of a zebra could easily take up a million times more space. And yet we prefer to imagine, as in a dream, a zebra with a lollipop wandering through a labyrinth (words from the list) instead of learning each of these four words separately. But why is the brain so bad at saving a few simple pieces of information, such as a couple of words?
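
Incidentally, the byte arithmetic is easy to check. Here is a quick, illustrative Python sketch (my own; the exact figure depends on the encoding and separators chosen, which is presumably how the book arrives at 146 bytes):

```python
# Byte size of the chapter's word list when stored as plain UTF-8 text.
# The precise count depends on separators and encoding; either way it is
# roughly a millionth of what a photograph of a zebra would occupy.
words = ["Ginger", "Raisin", "Bicycle", "Strawberry", "Night", "Hedgehog",
         "Salad", "Grapes", "Noodles", "Clock", "Rest", "Dream", "Zebra",
         "Lollipop", "Labyrinth", "Chameleon", "Raspberry"]
size_in_bytes = len("\n".join(words).encode("utf-8"))
print(size_in_bytes, "bytes")  # ~130 bytes with newline separators
```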

The reason once again has to do with the way the brain works. The brain doesn’t learn information by rote and then save it somewhere. Instead, it organizes knowledge. There’s a difference. Let me give you a simple example to illustrate. I could list off to you the exact sequence of goals (and who scored them) from the historic 2014 World Cup semifinal between Germany and Brazil that ended in a score of 7–1: 11th minute: 1–0, Müller; 23rd minute: 2–0, Klose; 24th minute: 3–0, Kroos . . . Okay, I’ll spare any Brazilian readers the rest of the list and get to the point. Once you have assembled all of the data from this game, what do you know about the game itself? Not much, since you are not witnessing the shocked expressions of the Brazilian team or the joy of Philipp Lahm, the German team captain. The significance of the game cannot be derived merely from the combination of data. Only after you have watched the game can you understand why the Brazilians still grumble about it—in spite of their later “revenge” on Germany in the 2016 Summer Olympics.

Massed learning

UNFORTUNATELY, MANY METHODS for learning (whether in high school, at the university level, in vocational training, or continuing education at the workplace) continue to rely on the basic concept that memorizing facts and data is a good idea. On the contrary, this method leads to a completely false strategy for learning, known scientifically as “massed learning,” in which you must pump yourself full of information in a short period of time in the hopes of retaining as much as possible in the future. This obviously doesn’t work, since our brain thinks data packets are totally uninteresting.

An orchestra doesn’t learn a new piece of music merely by playing a single note for one second and then waiting before processing the next informational packet (the next note) and so on for thousands of notes (this would be akin to “massed learning”). No, it learns best by quickly recognizing the relationship between the notes and the way in which, at a certain time and place, they develop into a whole melody.

Context is what allows us to learn effectively—so effectively, in fact, that we do not even have to consciously concentrate on what we are learning. This became evident in a study conducted by the research group of my colleague, Melissa Vo, who researched memory capacity among adults. Specifically, the study’s adult test subjects were asked to find objects that were pictured in an apartment setting (e.g., the soap in the bathroom). Although participants were not asked to remember these objects, they were much better able to recall them later than if they had been asked to memorize isolated pictures of the objects.5 When the same objects were isolated and presented to participants in front of a neutral background, the information was much less interesting to participants and thus not saved. A bar of soap makes much more sense in the context of a bathroom than surrounded by a green background. The object by itself is not interesting. It is only its situation in a particular context that gives the object a meaningful correlation, which we do not forget. Though this may seem illogical because it implies that we are required to note down additional information (namely, the object’s surroundings), this is, in fact, an ability that comes easily to us.

The lasagna-learning rule

IN ORDER TO understand this correlation of context and the meaning of a word, the brain must learn differently than it might be used to—namely, with interruptions. In the last chapter, you already read that the brain sacrifices some pieces of memory to nonmemory (or even actively forgets them) in order to be able to actively combine them. Something similar takes place with learning. Learning is successful when breaks and distance are built into the process, a practice referred to as “spaced learning.” This would seem to go against our intuition, as we assume that we will only be able to grasp correlations and concepts by processing as much information as possible at one time. If we deliberately incorporate breaks into our learning process, we fear we might forget things that could be important. But our brain is not interested in the sheer mass of information so much as it is interested in our ability to connect the information.

To research this, one study asked participants to identify the painting styles of various artists. The subjects were divided into two groups. The first group was shown a series of six images, all of them works by one artist, followed by another series of six images by the next artist, and so on for the next four artists. The second group was shown all of the images mixed up in no particular sequence, so that the various artistic styles alternated from image to image. The results were clear. The group that viewed the alternating images was better able to attribute a new image to the particular style of an individual artist. Those in the first group, who viewed the images in sequential blocks, were less able to recognize the underlying concept (artistic style). Despite the results, most of the test subjects indicated that they preferred learning in blocks (“massed learning”), as they believed it to be the more successful strategy.6

This result has been reaffirmed over and over again in studies. Taking breaks is what makes learning successful. Not only for learning about various artistic styles, but also vocabulary at school, movement patterns, biological correlations, or lists of words. The reason for this has to do with the way in which our nerve cells interact. An initial information impulse triggers a stimulus for structural change in the cells. These changes must first be processed to prepare the cells for the next informational push. Only after they have taken a short break are they optimally prepared to react to the recurrent stimulus. If it comes too early, it will not be able to fully realize its effect.7 It is only by alternating information that the brain is able to embed it in a context of related bits of knowledge. It’s not too different from making lasagna. You could of course choose to pour the sauce into the pan all at once and then pile the lasagna noodles and the cheese on top. That would be something like “massed cooking,” but it wouldn’t result in authentic lasagna. Only when you alternate the components do you get the desired, delicious dish—or, when it comes to the brain, a meaningful thought concept. This kind of conceptual thought is the brain’s great strength because it enables us to get away from pure rote learning. Only then is it possible for us to organize the world into categories and meaningful correlations and, thereby, to begin to understand it.

Don’t learn—understand!

ANYONE WHO CAN learn something can also unlearn it. But once you have understood something, you cannot de-understand it. Learning is not particularly unique. Most animals and even computers can learn. But developing an understanding of the things in the world is the great art of the brain, which it is able to master precisely because it does not consume data and draw correlations from it the way a robot would. A brain creates knowledge out of data, not mere correlations. These are two vastly different concepts, though they are often equated with each other in the modern, digitalized world. The amount of data in :-) and R%@ is the same, but the information conveyed is completely different. Not to mention the concept behind the former—a smiling face. To a computer, the character strings :-) and :-( are only 33 percent different. But to us, they are 100 percent different.
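
A one-line check makes the arithmetic concrete (an illustrative character-level comparison of my own, not anything from the book):

```python
# To a computer, ":-)" and ":-(" differ in one character out of three (~33%);
# to a human, a smile and a frown could not be more different.
a, b = ":-)", ":-("
differing = sum(ch_a != ch_b for ch_a, ch_b in zip(a, b))
print(f"{differing} of {len(a)} characters differ ({differing / len(a):.0%})")
# -> 1 of 3 characters differ (33%)
```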

How do we learn such knowledge, such thought concepts? How do we understand the world? We can see how we don’t do it by marveling at computer algorithms. Specifically, at the most modern algorithms in existence: the “deep neural networks.” These are computer systems that are no longer programmed to follow the classic “if A, then B” logic. Rather, they “borrow” from the brain and copy its network structure. The software simulates digital neurons that are able to adapt their points of contact to one another depending on which pieces of data they need to process. Because the cells and their contacts are able to adjust themselves, the system is able to learn over time. For example, if the software needs to be able to identify a penguin, it is presented with hundreds of thousands of random images with a few hundred penguin images included among them. The program independently identifies the characteristics specific to penguins until it is able to recognize what a penguin might look like.
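
To make the “practice makes perfect” loop tangible, here is a deliberately tiny Python sketch of the same idea (my own illustration with made-up feature vectors, not the systems described above): a single artificial neuron nudges its connection weights over many passes through labeled examples until it separates “penguin” inputs from the rest.

```python
import numpy as np

# A drastically simplified "penguin recognizer": one artificial neuron
# adjusting its connection weights from labeled examples. Real deep
# networks stack many such layers and digest vastly more data.

rng = np.random.default_rng(1)

# Made-up stand-ins for image features: penguin "images" cluster around
# one region of feature space, non-penguin images around another.
penguins = rng.normal(loc=1.0, scale=0.3, size=(200, 4))
others = rng.normal(loc=-1.0, scale=0.3, size=(200, 4))
features = np.vstack([penguins, others])
labels = np.array([1] * 200 + [0] * 200)  # 1 = penguin, 0 = not a penguin

weights = np.zeros(4)  # the neuron's adjustable "points of contact"
bias = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Practice makes perfect": many passes over the data, each nudging the
# weights a little - unlike a child, who may need to see a smoke detector
# only two or three times.
for _ in range(100):
    predictions = sigmoid(features @ weights + bias)
    error = predictions - labels
    weights -= lr * features.T @ error / len(labels)
    bias -= lr * error.mean()

accuracy = ((sigmoid(features @ weights + bias) > 0.5) == labels).mean()
print(f"training accuracy: {accuracy:.0%}")
```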

The advances that have been made in artificial neural networks are huge. Merely by regularly viewing images, such a system is independently capable of identifying animals, objects, or humans in arbitrary pictures. Facial recognition capabilities have even surpassed human ability (Google pixelates not only human faces in its Street View maps but also the faces of cows).8 But to put it all into perspective: a computer system like this is to the brain what a local amateur athlete is to an Olympic decathlon champion. In fact, the comparison doesn’t even hold, because computers do something very different from neurons, in spite of the pithy appropriation of neuroscience terms by IT companies claiming to build “artificial neural networks.” In reality, such software replicates neither real neural networks nor a brain. It is nothing but a marketing trick by computer companies. For a deep learning network to learn to identify a penguin, it must first process thousands of images of one, following the maxim “practice makes perfect.” But this is not necessarily how the brain works.

Deep understanding

I WAS RECENTLY standing in the hallway with my two-and-a-half-year-old neighbor. He pointed to the ceiling and said, “Smoke detector.” I was amazed and had to ask myself what kind of parents this little boy had. Did they perhaps subject him for weeks and weeks to thousands of pictures of smoke detectors, always repeating the series of images until he was finally able to identify the similarities and characteristics of smoke detectors and to correlate the object? His father is, admittedly, a fireman, and so my neighbor already has a certain predisposition toward fire safety tools. But still, had this little human really been bombarded with thousands of pictures of smoke detectors, fire extinguishers, and fire axes that then enabled him to quickly identify the required implement for the next possible crisis? And did they then send him down the hall in my direction once he had finally passed the test with flying colors? No way! That’s not how it works. But the question still remains: How was my little neighbor able to identify a smoke detector in a completely new context after only seeing a smoke detector maybe two or three times in his short life?

The answer is that my neighbor did not learn about smoke detectors in the same way that a computer does; rather, he understood the idea of smoke detectors. This is something which humans are very good at and which science calls “fast mapping.” If, for example, you give a three-year-old child never-before-seen artifacts and explain that one very special artifact is named “Koba” or comes from the land of “Koba,” the child will still remember the Koba object one month later.9 After only seeing it one time! It gets even better if the child is learning to understand new actions and not only new words. Children who are only two and a half years old require only fifteen minutes of playing with an object before they can transfer its properties to other objects. For example, a child who realizes that they can balance a plastic clip named “Koba” on their arm later realizes that a similar clip, but with a slightly different shape, is also called a “Koba” and can likewise be balanced on their arm.10 The whole exchange only takes a few minutes. How would two-year-olds possibly be able to learn an average of ten new words a day if they had to practice each word hundreds of times? No brain has that much time on its hands.

Of course, the brain cannot simply learn something from nothing. From what we currently know, we assume that learning by “fast mapping” allows new information to be rapidly incorporated into existing categories (presumably without even bothering the hippocampus, the memory trainer that you learned about in the previous chapter).11 But we are even able to create these categories very rapidly—whenever we give ourselves time for some mental digestion. If you present a three-year-old with three variations of a new toy (e.g., a rattle with different colors and surfaces) one right after the other and give each of these the artificial designation of “wug,” the child will not easily be able to identify a fourth rattle as a “wug.” If, however, the child is allowed half a minute between the presentation of each new rattle to play with the item, he or she will then grasp the concept of the wug and be able to identify a new, differently shaped and differently colored rattle as a wug. This seemingly inefficient break, this apparent waste of time that we would love nothing more than to rationalize away in our productivity-optimized world—this is our strength, if, in fact, we hope to be able to accomplish more than a mindlessly learning computer.

We are very quick to understand categories and are able to grasp the relationship between words, objects, and actions almost immediately. You don’t believe me? Do you still think it’s only possible to effectively learn something by repetition and practice? Then allow me to give you a counterexample: How long did it take you to understand a newly coined word like “selfie”? A single experience of seeing four posing teenagers snapping a photo of themselves on a smartphone should have been enough. How quickly were you able to understand the invented word “Brexit”? You probably figured it out fairly quickly. We often understand the world at first glance, but there’s more. Once you’ve understood something, not only can you reproduce it, you can also make something new from it. If Brexit describes the exit of Great Britain from the European Union (EU), what would “Swexit,” “Spaxit,” or “Itaxit” indicate? Or from the opposite direction, what would “Bremain” or a “Breturn” mean? It’s a piece of cake for you to grasp all of the new words because you already understand the fundamental categories of thought. You are able to take these and immediately generate a new piece of knowledge, even if you’ve never heard of “Spaxit” before in your life!

So much for the topic of frequent repetition and “deep learning.” Merely memorizing a bunch of facts is no great art. Understanding them, on the other hand, is. In the future, computers might be able to “learn” about objects and pictures more quickly, but they will never be able to understand them. In order to learn, computers use very basic algorithms to analyze an enormous amount of data. Humans do the opposite. We save much less data but are able to do exceedingly more with it. Knowing something does not mean having a lot of information. It rather means being able to grasp something with the information in hand. Deep learning is all well and good, but “deep understanding” is better. Computers do not understand what it is that they are recognizing. One interesting indication of this followed from an experiment conducted in 2015. Researchers studied artificial neural networks that had trained themselves to recognize objects (such as screwdrivers, school buses, or guitars). The networks were analyzed to find out what, in fact, they had recognized. For example, what would a picture of a robin have to look like in order for the computer program to be able to respond with 100 percent certainty that it was indeed a “robin”? If anyone had expected that a perfect prototype image of a robin would pop out, a sort of “best of” from all the robin images in the world, they would have been disappointed. The resulting image was a total chaos of pixels.12 No human would be able to identify even a very rudimentary robin in such a pixelated mess. But the computer could, because it recognized the robin only as a graphic representation of pixels and did not understand that it was a living creature. If one taught a computer that Brexit refers to the exit of Great Britain from the EU, the computer would never be able to independently draw the conclusion that Swexit means the Swedes waving goodbye.

Our ability to learn extremely quickly, or rather, to understand things, is only possible because we do not “learn” facts and information separately, in a way that is sterile and detached, but rather by creating a category correlation that embeds things and, thereby, leads us to understand them. Computers do exactly the opposite. They are very good at saving data quickly, but they are just as dumb as they were thirty years ago. Only now, they are dumb a little faster. This is because they never take time to reflect on all of the data they have gathered. They don’t treat themselves to a break. Computers always work at full blast until they give out (or have their power switched off). But if you never take a break, you cannot ever put the information that you possess to any use, and thus you cannot acquire any knowledge. In order to generate concepts, it is essential to have a stimulus-free space (for example, during sleep). We are able to recognize something at first glance because we don’t allow ourselves to be flooded with facts and data but, instead, make ourselves take a break. This may initially seem inefficient and perhaps smack of weakness, but it is actually highly effective. In fact, this is the only way that we are able to comprehend the world, instead of merely memorizing it.

Learning power reloaded

WE SHOULD THUS not treat the brain as though it were an information machine since the most valuable learning processes of the future don’t call for us to have flawless memories (that this isn’t even possible is touched on in the next chapter), but rather for us to adjust rapidly to new situations. If we start competing with computers, trying to use learning tricks to memorize more facts, telephone numbers, and shopping lists, we are certainly going to lose. Maybe we should let algorithms take over these kinds of basic tasks for us.

Trying to develop the latest learning techniques in order to remember more information isn’t what’s important. It’s much more valuable to improve our ability to think conceptually and to understand. The brain is not a data storage device. It’s a knowledge organizer whose major talent can only be actualized once we stop treating it like an imbecile—the way that I did with you at the beginning of the chapter. Sorry about that.

You’ve now learned the most important ingredients for improving conceptual thinking. Stress is only helpful to learning when it is positive, short term, and surprising. Long-term stress should be minimized by reinterpreting it. Students who are aware of what stress is, for example, have been shown to exhibit better coping techniques in response to stress and are thus less prone to tense up while learning.13

When are we best able to learn? When we are excited about it, of course. Facts are not that important. It’s the feelings that stick with us. It’s best if the feeling is positive. Positive feelings should thus be conveyed at school, university, or in work environments by the teachers, lecturers, or team leaders in order to promote the best learning. This is much more crucial than the factual content that is taught. My best teacher (the chemistry teacher whom I mentioned in the introduction) didn’t keep a stockpile of modern PowerPoint presentations on hand, but he was very enthusiastic about his discipline. And when someone is so passionate about the citric acid cycle, then there has to be something to it. That’s why I decided to study biochemistry. Not because I found the factual content to be so captivating (that came later), but because I was entranced by his excitement for the topic. It is only when something impacts us emotionally that we never forget it—even if it comes as a form of positive stress.

Learning is all well and good—but understanding is better. And in order to understand, we need a context. Even small children are able to comprehend the world at an incredible pace if given examples and concrete applications to figure out the “why” of things. This happens not by dumping data and facts on their heads, but by allowing them to construct meaningful correlations for themselves. If you want students to learn new vocabulary, you could give them a list of words. Or you could encourage the children to come up with a personal story that incorporates the new words. Children will quickly gain the individual context that they need to remember the words. I have long forgotten every word I’ve ever been given on a vocabulary list. But there are other words that I have only heard once, such as when I lived in California, that I immediately used and adopted into my vocabulary.

At the same time, we should avoid the temptation of trying to compete with computer software and artificial intelligence. When it comes to speed, accuracy, and efficiency, we are definitely going to lose every time. It is much more valuable to remember our human weakness, er, I mean strength: namely, that we are able, sometimes even at first glance, provided we take regular breaks, to happily absorb even seemingly useless knowledge. Yes, of course it is important that we make good use of computer science and modern technological media in school, as we are going to need these skills to function in the future world. But we shouldn’t attempt to think like an algorithm. Subjects such as history, natural science, languages, or philosophy, and a good, well-rounded general education are what empower us to establish ideas and conceptual correlations. If you are digging through Shakespeare’s masterpieces and chance across the line, “To be, or not to be? That is the question,” you could choose to learn it by heart, copy and paste it as a cool meme on Facebook, or save it onto your flash drive. The latter takes up 42 bytes but means nothing. Alternatively, you could go to the theatre and enjoy watching Hamlet—and the phrase will suddenly take on meaning.

In this chapter, we have been able to unpack how to build up much more effective categories of thought. By taking regular small breaks, for instance, one is able to produce a thought concept. Once the concept has been understood, it can then be applied to new situations. Only humans are able to do this. When a “deep learning” computer analyzes millions of images, it will, doubtless, be able to recognize that a chair is most likely an object with four legs, a flat surface, and a backrest. But for us, a chair is not so much an object with a particular shape as it is something to sit on. Once we have understood this, we suddenly see chairs everywhere and can even apply our knowledge to invent, develop, and design new chairs. For example, I recently had one of those bouncy yoga chairs at home. My little neighbor remarked, quite correctly: “A ball!” But when I went over and sat on it, he said: “Oh, chair!” Try teaching that to a computer.
