We, the Memorious
What prompts a baby, sitting on the kitchen floor at eleven months old, to suddenly blurt out the word “milk” for the first time? Had the parents said the word more frequently than normal? How many times had the baby heard the word pronounced—three thousand times? Or four thousand times or ten thousand? Precisely how long does it take before a word sinks in anyway? Over the years, linguists have tried asking parents to keep diaries of what they say to their kids, but it’s ridiculously hard to monitor household conversation. The parents will skip a day or forget the details or simply get tired of the process. We aren’t good at recording our lives in precise detail, because, of course, we’re busy living them.
In 2005, MIT speech scientist Deb Roy and his wife, Rupal Patel (also a speech scientist), were expecting their first child—a golden opportunity, they realized, to observe the boy developing language. But they wanted to do it scientifically. They wanted to collect an actual record of every single thing they, or anyone, said to the child—and they knew it would work only if the recording was done automatically. So Roy and his MIT students designed “TotalRecall,” an audacious setup that involved wiring his house with cameras and microphones. “We wanted to create,” he tells me, “the ultimate memory machine.”
In the months before his son arrived, Roy’s team installed wide-angle video cameras and ultrasensitive microphones in every room in his house. The array of sensors would catch every interaction “down to the whisper” and save it on a huge rack of hard drives stored in the basement. When Roy and his wife brought their newborn home from the hospital, they turned the system on. It began producing a firehose of audio and video: About 300 gigabytes per day, or enough to fill a normal laptop every twenty-four hours. They kept it up for two years, assembling a team of grad students and scientists to analyze the flow, transcribe the chatter, and figure out how, precisely, their son learned to speak.
They made remarkable discoveries. For example, they found that the boy had a burst of vocabulary acquisition—“word births”—that began around his first birthday and then slowed drastically seven months later. When one of Roy’s grad students analyzed this slowdown,1 an interesting picture emerged: At the precise moment that those word births were decreasing, the boy suddenly began using far more two-word sentences. “It’s as if he shifted his cognitive effort2 from learning new words to generating novel sentences,” as Roy later wrote about it. Another grad student discovered that the boy’s caregivers3 tended to use certain words in specific locations in the house—the word “don’t,” for example, was used frequently in the hallway, possibly because caregivers often said “don’t play on the stairs.” And location turned out to be important: The boy tended to learn words more quickly when they were linked to a particular space. It’s a tantalizing finding, Roy points out,4 because it suggests we could help children learn language more effectively by changing where we use words around them. The data is still being analyzed, but his remarkable experiment has the potential to transform how early-language acquisition is understood.
It has also, in an unexpected way, transformed Roy’s personal life. It turns out that by creating an insanely nuanced scientific record of his son’s first two years, Roy has created the most detailed memoir in history.
For example, he’s got a record of the first day his son walked. On-screen, you can see Roy step out of the bathroom and notice the boy standing, with a pre-toddler’s wobbly balance, about six feet away. Roy holds out his arms and encourages him to walk over: “Come on, come on, you can do it,” he urges. His son lurches forward one step, then another, and another—his first time successfully doing this. On the audio, you can actually hear the boy squeak to himself in surprise: Wow! Roy hollers to his mother, who’s visiting and is in the kitchen: “He’s walking! He’s walking!”
It’s rare to catch this moment on video for any parent. But there’s something even more unusual about catching it unintentionally. Unlike most first-step videos caught by a camera-phone-equipped parent, Roy wasn’t actively trying to freeze this moment; he didn’t get caught up in the strange, quintessentially modern dilemma that comes from trying to simultaneously experience something delightful while also working to get it on tape. (When we brought my son a candle-bedecked cupcake on his first birthday, I spent so much time futzing with snapshots—it turns out cheap cameras don’t focus well when the lights are turned off—that I later realized I hadn’t actually watched the moment with my own eyes.) You can see Roy genuinely lost in the moment, enthralled. Indeed, he only realized weeks after his son walked that he could hunt down the digital copy; when he pulled it out, he was surprised to find he’d completely misremembered the event. “I originally remembered it being a sunny morning, my wife in the kitchen,” he says. “And when we finally got the video it was not a sunny morning, it was evening; and it was not my wife in the kitchen, it was my mother.”
Roy can perform even crazier feats of recall. His system is able to stitch together the various video streams into a 3-D view. This allows you to effectively “fly” around a recording, as if you were inside a video game. You can freeze a moment, watch it backward, all while flying through; it’s like a TiVo for reality. He zooms into the scene of himself watching his son, freezes it, then flies down the hallway into the kitchen, where his mother is looking up, startled, reacting to his yells of delight. It seems wildly futuristic, but Roy claims that eventually it will be possible to do in your own home: cameras and hard drives are getting cheaper and cheaper, and the software isn’t far off either.
Still, as Roy acknowledges, the whole project is unsettling to some observers. “A lot of people have asked me, ‘Are you insane?’” He chuckles. They regard the cameras as Orwellian, though this isn’t really accurate; it’s Roy who’s recording himself, not a government or evil corporation, after all. But still, wouldn’t living with incessant recording corrode daily life, making you afraid that your weakest moments—bickering mean-spiritedly with your spouse about the dishes, losing your temper over something stupid, or, frankly, even having sex—would be recorded forever? Roy and his wife say this didn’t happen, because they were in control of the system. In each room there was a control panel that let you turn off the camera or audio; in general, they turned things off at 10 p.m. (after the baby was in bed) and back on at 8 a.m. They also had an “oops” button in every room: hit it, and you could erase as much as you wanted from recent recordings—a few minutes, an hour, even a day. It was a neat compromise, because of course one often doesn’t know when something embarrassing is going to happen until it’s already happening.
“This came up from, you know, my wife breast-feeding,” Roy says. “Or I’d stumble out of the shower, dripping and naked, wander out in the hallway—then realize what I was doing and hit the ‘oops’ button. I didn’t think my grad students needed to see that.” He also experienced the effect that documentarians and reality TV producers have long noticed: after a while, the cameras vanish.
The upsides, in other words, were worth the downsides—both scientific and personal. In 2007, Roy’s father came over to see his grandson when Roy was away at work. A few months later, his father had a stroke and died suddenly. Roy was devastated; he’d known his father’s health was in bad shape but hadn’t expected the end to come so soon.
Months later, Roy realized that he’d missed the chance to see his father play with his grandson for the last time. But the house had autorecorded it. Roy went to the TotalRecall system and found the video stream. He pulled it up: his father stood in the living room, lifting his grandson, tickling him, cooing over how much he’d grown.
Roy froze the moment and slowly panned out, looking at the scene, rewinding it and watching again, drifting around to relive it from several angles.
“I was floating around like a ghost watching him,” he says.
What would it be like to never forget anything? To start off your life with that sort of record, then keep it going until you die?
Memory is one of the most crucial and mysterious parts of our identities; take it away, and identity goes away, too, as families wrestling with Alzheimer’s quickly discover. Marcel Proust regarded the recollection of your life as a defining task of humanity; meditating on what you’ve done is an act of recovering, literally hunting around for “lost time.” Vladimir Nabokov saw it a bit differently: in Speak, Memory, he sees his past actions as being so deeply intertwined with his present ones that he declares, “I confess I do not believe in time.”5 (As Faulkner put it, “The past is never dead. It’s not even past.”)6
In recent years, I’ve noticed modern culture—in the United States, anyway—becoming increasingly, almost frenetically obsessed with lapses of memory. This may be because the aging baby-boomer population is skidding into its sixties, when forgetting the location of your keys becomes a daily embarrassment. Newspaper health sections deliver panicked articles about memory loss and proffer remedies, ranging from advice that is scientifically solid (get more sleep and exercise) to sketchy (take herbal supplements like ginkgo) to corporate snake oil (play pleasant but probably useless “brain fitness” video games). We’re pretty hard on ourselves. Frailties in memory are seen as frailties in intelligence itself. In the run-up to the American presidential election of 2012, the candidacy of a prominent hopeful, Rick Perry, began unraveling with a single, searing memory lapse: in a televised debate, when he was asked about the three government bureaus he’d repeatedly vowed to eliminate, Perry named the first two—but was suddenly unable to recall the third. He stood there onstage, hemming and hawing for fifty-three agonizing seconds before the astonished audience, while his horrified political advisers watched his candidacy implode. (“It’s over, isn’t it?” one of Perry’s donors asked.)7
Yet the truth is, the politician’s mishap wasn’t all that unusual. On the contrary, it was extremely normal. Our brains are remarkably bad at remembering details. They’re great at getting the gist of something, but they consistently muff the specifics. Whenever we read a book or watch a TV show or wander down the street, we extract the meaning of what we see—the parts of it that make sense to us and fit into our overall picture of the world—but we lose everything else, in particular discarding the details that don’t fit our predetermined biases. This sounds like a recipe for disaster, but scientists point out that there’s an upside to this faulty recall. If we remembered every single detail of everything, we wouldn’t be able to make sense of anything. Forgetting is a gift and a curse: by chipping away at what we experience in everyday life, we leave behind a sculpture that’s meaningful to us, even if sometimes it happens to be wrong.
Our first glimpse into the way we forget came in the 1880s, when German psychologist Hermann Ebbinghaus ran a long, fascinating experiment on himself.8 He created twenty-three hundred “nonsense” three-letter combinations and memorized them. Then he’d test himself at regular periods to see how many he could remember. He discovered that memory decays quickly after you’ve learned something: Within twenty minutes, he could remember only about 60 percent of what he’d tried to memorize, and within an hour he could recall just under half. A day later it had dwindled to about one third. But then the pace of forgetting slowed down. Six days later the total had slipped just a bit more—to 25.4 percent of the material—and a month later it was only a little worse, at 21.1 percent. Essentially, he had lost the great majority of the three-letter combinations, but the few that remained had passed into long-term memory. This is now known as the Ebbinghaus curve of forgetting, and it’s a good-news-bad-news story: Not much gets into long-term memory, but what gets there sticks around.
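To make the shape of that curve concrete, here is a minimal sketch in Python that tabulates the retention figures quoted above and works out how quickly the forgetting slows. The numbers are approximations from the passage ("just under half" is taken as 45 percent), not Ebbinghaus's raw data.

```python
# A minimal sketch of the Ebbinghaus forgetting curve, using the rough
# retention figures quoted above (the exact values are approximations).
retention = {          # hours since memorization -> % of syllables retained
    1 / 3:   60.0,     # ~20 minutes: "about 60 percent"
    1:       45.0,     # one hour: "just under half" (assumed ~45%)
    24:      33.0,     # one day: "about one third"
    6 * 24:  25.4,     # six days
    31 * 24: 21.1,     # roughly a month
}

previous_time, previous_kept = 0.0, 100.0
for hours, kept in sorted(retention.items()):
    lost_per_hour = (previous_kept - kept) / (hours - previous_time)
    print(f"after {hours:7.1f} h: {kept:4.1f}% retained "
          f"(losing {lost_per_hour:.2f} points/hour in this interval)")
    previous_time, previous_kept = hours, kept
```

Run it and the loss rate collapses from dozens of percentage points per hour in the first hour to a tiny fraction of a point per hour by the end of the month, which is the good-news-bad-news story in miniature.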
Ebbinghaus had set himself an incredibly hard memory task. Meaningless gibberish is by nature hard to remember. In the 1970s and ’80s, psychologist Willem Wagenaar tried something a bit more true to life.9 Once a day for six years, he recorded a few of the things that happened to him on notecards, including details like where it happened and who he was with. (On September 10, 1983, for example, he went to see Leonardo da Vinci’s Last Supper in Milan with his friend Elizabeth Loftus, the noted psychologist.) This is what psychologists call “episodic” or “autobiographical” memory—things that happen to us personally. Toward the end of the experiment, Wagenaar tested himself by pulling out a card to see if he remembered the event. He discovered that these episodic memories don’t degrade anywhere near as quickly as random information: In fact, he was able to recall about 70 percent of the events that had happened half a year earlier, and his memory gradually dropped to 29 percent for events five years old. Why did he do better than Ebbinghaus? Because the cards contained “cues” that helped jog his memory—like knowing that his friend Liz Loftus was with him—and because some of the events were inherently more memorable. Your ability to recall something is highly dependent on the context in which you’re trying to do so; if you have the right cues around, it gets easier. More important, Wagenaar also showed that committing something to memory in the first place is much simpler if you’re paying close attention. If you’re engrossed in an emotionally vivid visit to a da Vinci painting, you’re far more likely to recall it; your everyday humdrum Monday meeting, not so much. (And if you’re frantically multitasking on a computer, paying only partial attention to a dozen tasks, you might only dimly remember any of what you’re doing, a problem that I’ll talk about many times in this book.) But even so, as Wagenaar found, there are surprising limits. For fully 20 percent of the events he recorded, he couldn’t remember anything at all.
Even when we’re able to remember an event, it’s not clear we’re remembering it correctly. Memory isn’t passive; it’s active.10 It’s not like pulling a sheet from a filing cabinet and retrieving a precise copy of the event. You’re also regenerating the memory on the fly. You pull up the accurate gist, but you’re missing a lot of details. So you imaginatively fill in the missing details with stuff that seems plausible, whether or not it’s actually what happened. There’s a reason why we call it “re-membering”; we reassemble the past like Frankenstein assembling a body out of parts. That’s why Deb Roy was so stunned to look into his TotalRecall system and realize that he’d mentally mangled the details of his son’s first steps. In reality, Roy’s mother was in the kitchen and the sun was down—but Roy remembered it as his wife being in the kitchen on a sunny morning. As a piece of narrative, it’s perfectly understandable. The memory feels much more magical that way: The sun shining! The boy’s mother nearby! Our minds are drawn to what feels true, not what’s necessarily so. And worse, these filled-in errors may actually compound over time. Some memory scientists suspect that when we misrecall something, we can store the false details in our memory in what’s known as reconsolidation.11 So the next time we remember it, we’re pulling up false details; maybe we’re even adding new errors with each act of recall. Episodic memory becomes a game of telephone played with oneself.
The malleability of memory helps explain why, over decades, we can adopt a surprisingly rewritten account of our lives. In 1962, the psychologist Daniel Offer asked a group12 of fourteen-year-old boys questions about significant aspects of their lives. When he hunted them down thirty-four years later and asked them to think back on their teenage years and answer precisely the same questions, their answers were remarkably different. As teenagers, 70 percent said religion was helpful to them; in their forties, only 26 percent recalled that. Fully 82 percent of the teenagers said their parents used corporal punishment, but three decades later, only one third recalled their parents hitting them. Over time, the men had slowly revised their memories, changing them to suit the ongoing shifts in their personalities, or what’s called hindsight bias. If you become less religious as an adult, you might start thinking that’s how you were as a child, too.
For eons, people have fought back against the fabrications of memory by using external aids. We’ve used chronological diaries for at least two millennia, and every new technological medium increases the number of things we capture: George Eastman’s inexpensive Brownie camera gave birth to everyday photography, and VHS tape did the same thing for personal videos in the 1980s. In the last decade, though, the sheer welter of artificial memory devices has exploded, so there are more tools capturing shards of our lives than ever before—e-mail, text messages, camera phone photos and videos, note-taking apps and word processing, GPS traces, comments, and innumerable status updates. (And those are just the voluntary recordings you participate in. There are now countless government and corporate surveillance cameras recording you, too.)
The biggest shift is that most of this doesn’t require much work. Saving artificial memories used to require foresight and effort, which is why only a small fraction of very committed people kept good diaries. But digital memory is frequently passive. You don’t intend to keep all your text messages, but if you’ve got a smartphone, odds are they’re all there, backed up every time you dock your phone. Dashboard cams on Russian cars are supposed to help drivers prove their innocence in car accidents, but because they’re always on, they’ve also wound up recording a massive meteorite entering the atmosphere. Meanwhile, today’s free e-mail services like Gmail are biased toward permanent storage; they offer such capacious memory that it’s easier for the user to keep everything than to engage in the mental effort of deciding whether to delete each individual message. (This is an intentional design decision on Google’s part, of course; the more they can convince us to retain e-mail, the more data about our behavior they have in order to target ads at us more effectively.) And when people buy new computers, they rarely delete old files—in fact, research shows that most of us just copy our old hard drives13 onto our new computers, and do so again three years later with our next computers, and on and on, our digital external memories nested inside one another like wooden dolls. The cost of storage has plummeted so dramatically that it’s almost comical to consider: In 1981, a gigabyte of memory cost roughly three hundred thousand dollars, but now it can be had for pennies.
We face an intriguing inversion point in human memory. We’re moving from a period in which most of the details of our lives were forgotten to one in which many, perhaps most of them, will be captured. How will that change the way we live—and the way we understand the shape of our lives?
There’s a small community of people who’ve been trying to figure this out by recording as many bits of their lives as they can as often as possible. They don’t want to lose a detail; they’re trying to create perfect recall, to find out what it’s like. They’re the lifeloggers.
When I interview someone, I take pretty obsessive notes: not only everything they say, but also what they look like, how they talk. Within a few minutes of meeting Gordon Bell, I realized I’d met my match: His digital records of me were thousands of times more complete than my notes about him.
Bell is probably the world’s most ambitious and committed lifelogger.14 A tall and genial white-haired seventy-eight-year-old, he walks around outfitted with a small fish-eye camera hanging around his neck, snapping pictures every sixty seconds, and a tiny audio recorder that captures most conversations. Software on his computer saves a copy of every Web page he looks at and every e-mail he sends or receives, even a recording of every phone call.
“Which is probably illegal, but what the hell,” he says with a guffaw. “I never know what I’m going to need later on, so I keep everything.” When I visited him at his cramped office in San Francisco, it wasn’t the first time we’d met; we’d been hanging out and talking for a few days. He types “Clive Thompson” into his desktop computer to give me a taste of what his “surrogate brain,” as he calls it, had captured of me. (He keeps a copy of his lifelog on his desktop and his laptop.) The screen fills with a flood of Clive-related material: twenty-odd e-mails Bell and I had traded, copies of my articles he’d perused online, and pictures beginning with our very first meeting, a candid shot of me with my hand outstretched. He clicks on an audio file from a conversation we’d had the day before, and the office fills with the sound of the two of us talking about a jazz concert he’d seen in Australia with his wife. It’s eerie hearing your own voice preserved in somebody else’s memory base. Then I realize in shock that when he’d first told me that story, I’d taken down incorrect notes about it. I’d written that he was with his daughter, not his wife. Bell’s artificial memory was correcting my memory.
Bell did not intend to be a pioneer in recording his life. Indeed, he stumbled into it. It started with a simple desire: He wanted to get rid of stacks of paper. Bell has a storied history; in his twenties, he designed computers, back when they were the size of refrigerators, with spinning hard disks the size of tires. He quickly became wealthy, quit his job to become a serial investor, and then in the 1990s was hired by Microsoft as an éminence grise, tasked with doing something vaguely futuristic—whatever he wanted, really. By that time, Bell was old enough to have amassed four filing cabinets crammed with personal archives, ranging from programming memos to handwritten letters from his kid and weird paraphernalia like a “robot driver’s license.” He was sick of lugging it around, so in 1997 he bought a scanner to see if he could go paperless. Pretty soon he’d turned a lifetime of paper into searchable PDFs and was finding it incredibly useful. So he started thinking: Why not have a copy of everything he did? Microsoft engineers helped outfit his computer with autorecording software. A British engineer showed him the SenseCam she’d invented. He began wearing that, too. (Except for the days where he’s worried it’ll stop his heart. “I’ve been a little leery of wearing it for the last week or so because the pacemaker company sent a little note around,” he tells me. He had a massive heart attack a few years back and had a pacemaker implanted. “Pacemakers don’t like magnets, and the SenseCam has one.” One part of his cyborg body isn’t compatible with the other.)
The truth is, Bell looks a little nuts walking around with his recording gear strapped on. He knows this; he doesn’t mind. Indeed, Bell possesses the dry air of a wealthy older man who long ago ceased to care what anyone thinks about him, which is probably why he was willing to make his life into a radical experiment. He also, frankly, seems like someone who needs an artificial memory, because I’ve rarely met anyone who seems so scatterbrained in everyday life. He’ll start talking about one subject, veer off to another in midsentence, only to interrupt that sentence with another digression. If he were a teenager, he’d probably be medicated for ADD.
Yet his lifelog does indeed let him perform remarkable memory feats. When a friend has a birthday, he’ll root around in old handwritten letters to find anecdotes for a toast. For a commencement address, he dimly recalled a terrific aphorism that he’d pinned to a card above his desk three decades before, and found it: “Start many fires.” Given that he’s old, his health records have become quite useful: He’s used SenseCam pictures of his post-heart-attack chest rashes to figure out whether he was healing or not, by quickly riffling through them like a flip-book. “Doctors are always asking you stuff like ‘When did this pain begin?’ or ‘What were you eating on such and such a day?’—and that’s precisely the stuff we’re terrible at remembering,” he notes. While working on a Department of Energy task force a few years ago, he settled an argument by checking the audio record of a conference call. When he tried to describe another jazz performance, he found himself tongue-tied, so he just punched up the audio and played it.
Being around Bell is like hanging out with some sort of mnemonic performing seal. I wound up barking weird trivia questions just to see if he could answer them. When was the first-ever e-mail you sent your son? 1996. Where did you go to church when you were a kid? Here’s a First Methodist Sunday School certificate. Did you leave a tip when you bought a coffee this morning on the way to work? Yep—here’s the pictures from Peet’s Coffee.
But Bell believes the deepest effects of his experiment aren’t just about being able to recall details of his life. I’d expected him to be tied to his computer umbilically, pinging it to call up bits of info all the time. In reality, he tends to consult it sparingly—mostly when I prompt him for details he can’t readily bring to mind.
The long-term effect has been more profound than any individual act of recall. The lifelog, he argues, has given him greater mental peace. Knowing there’s a permanent backup of almost everything he reads, sees, or hears allows him to live more in the moment, paying closer attention to what he’s doing. The anxiety of committing something to memory is gone.
“It’s a freeing feeling,” he says. “The fact that I can offload my memory, knowing that it’s there—that whatever I’ve seen can be found again. I feel cleaner, lighter.”
The problem is that while Bell’s offboard memory may be immaculate and detailed, it can be curiously hard to search. Your organic brain may contain mistaken memories, but generally it finds things instantaneously and fluidly, and it’s superb at flitting from association to association. If we had met at a party last month and you’re now struggling to remember my name, you’ll often sift sideways through various cues—who else was there? what were we talking about? what music was playing?—until one of them clicks, and ping: The name comes to you. (Clive Thompson!) In contrast, digital tools don’t have our brain’s problem with inaccuracy; if you give them “Clive,” they’ll quickly pull up everything with a “Clive” associated, in perfect fidelity. But machine searching is brittle. If you don’t have the right cue to start with—say, the name “Clive”—or if the data didn’t get saved in the right way, you might never find your way back to my name.
Bell struggles with these machine limits all the time. While eating lunch in San Francisco, he tells me about a Paul Krugman column he liked, so I ask him to show it to me. But he can’t find it on the desktop copy of his lifelog: His search for “Paul Krugman” produces scores of columns, and Bell can’t quite filter out the right one. When I ask him to locate a colleague’s phone number, he runs into another wall: he can locate all sorts of things—even audio of their last conversation—but no number. “Where the hell is this friggin’ phone call?” he mutters, pecking at the keyboard. “I either get nothing or I get too much!” It’s like a scene from a Philip K. Dick novel: A man has external memory, but it’s locked up tight and he can’t access it—a cyborg estranged from his own mind.
As I talked to other lifeloggers, they bemoaned the same problem. Saving is easy; finding can be hard. Google and other search engines have spent decades figuring out how to help people find things on the Web, of course. But a Web search is actually easier than searching through someone’s private digital memories. That’s because the Web is filled with social markers that help Google try to guess what’s going to be useful. Google’s famous PageRank system looks at social rankings:15 If a Web page has been linked to by hundreds of other sites, Google guesses that that page is important in some way. But lifelogs don’t have that sort of social data; unlike blogs or online social networks, they’re a private record used only by you.
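To see what kind of signal a lifelog is missing, here is a toy sketch of the link-voting idea behind PageRank. The four-page graph and the damping factor are illustrative assumptions, not Google's actual data or code; the point is only that every inbound link acts as a vote, and votes from well-linked pages count for more.

```python
# A toy sketch of the PageRank idea: pages that collect links from other
# well-linked pages accumulate a higher score. The tiny graph and the
# damping factor of 0.85 are illustrative assumptions.
links = {                      # page -> pages it links to
    "blog":    ["wiki"],
    "wiki":    ["news", "blog"],
    "news":    ["wiki"],
    "obscure": ["wiki"],       # nothing links back to "obscure"
}

damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):            # power iteration until the scores settle
    new_rank = {}
    for page in links:
        incoming = sum(rank[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new_rank[page] = (1 - damping) / len(links) + damping * incoming
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:8s} {score:.3f}")   # "wiki", with the most inbound links, ranks highest
```

A private lifelog has no equivalent of those inbound links, which is why the trick doesn't transfer.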
Without a way to find or make sense of the material, a lifelog’s greatest strength—its byzantine, brain-busting level of detail—becomes, paradoxically, its greatest flaw. Sure, go ahead and archive your every waking moment, but how do you parse it? Review it? Inspect it? Nobody has another life in which to relive their previous one. The lifelogs remind me of Jorge Luis Borges’s story “On Exactitude in Science,”16 in which a group of cartographers decide to draw a map of their empire with a 1:1 ratio: it is the exact size of the actual empire, with the exact same detail. The next generation realizes that a map like that is useless, so they let it decay. Even if we are moving toward a world where less is forgotten, that isn’t the same as more being remembered.
Cathal Gurrin probably has the most heavily photographed life in history, even more than Bell. Gurrin, a researcher at Dublin City University, began wearing a SenseCam five years ago and has ten million pictures. The SenseCam has preserved candid moments he’d never otherwise have bothered to shoot: the time he lounged with friends in his empty house the day before he moved; his first visit to China, where the SenseCam inadvertently captured the last-ever pictures of historic buildings before they were demolished in China’s relentless urban construction upheaval. He’s dipped into his log to try to squirm out of a speeding ticket (only to have his SenseCam prove the police officer was right: another self-serving memory distortion on the part of his organic memory).
But Gurrin, too, has found that it can be surprisingly hard to locate a specific image. In a study at his lab, he listed fifty of his “most memorable” moments from the last two and a half years, like his first encounters with new friends, last encounters with loved ones, and meeting TV celebrities. Then, over the next year and a half, his labmates tested him to see how quickly he could find a picture of one of those moments. The experiment was gruesome: The first searches took over thirteen minutes. As the lab slowly improved the image-search tools, his time dropped to about two minutes, “which is still pretty slow,” as one of his labmates noted. This isn’t a problem just for lifeloggers; even middle-of-the-road camera phone users quickly amass so many photos that they often give up on organizing them. Steve Whittaker, a psychologist who designs interfaces and studies how we interact with computers, asked a group of subjects to find a personally significant picture on their own hard drive. Many couldn’t. “And they’d get pretty upset when they realized that stuff was there, but essentially gone,” Whittaker tells me. “We’d have to reassure them that ‘no, no, everyone has this problem!’” Even Gurrin admits to me that he rarely searches for anything at all in his massive archive. He’s waiting for better search tools to emerge.
Mind you, he’s confident they will. As he points out, fifteen years ago you couldn’t find much on the Web because the search engines were dreadful. “And the first MP3 players were horrendous for finding songs,” he adds. The most promising trends in search algorithms include everything from “sentiment analysis” (you could hunt for a memory based on how happy or sad it is) to sophisticated ways of analyzing pictures, many of which are already emerging in everyday life: detecting faces and locations or snippets of text in pictures, allowing you to hunt down hard-to-track images by starting with a vague piece of half recall, the way we interrogate our own minds. The app Evernote has already become popular because of its ability to search for text, even bent or sideways, within photos and documents.
Yet the weird truth is that searching a lifelog may not, in the end, be the way we take advantage of our rapidly expanding artificial memory. That’s because, ironically, searching for something leaves our imperfect, gray-matter brain in control. Bell and Gurrin and other lifeloggers have superb records, but they don’t search them unless, while using their own brains, they realize there’s something to look for. And of course, our organic brains are riddled with memory flaws. Bell’s lifelog could well contain the details of a great business idea he had in 1992; but if he’s forgotten he ever had that idea, he’s unlikely to search for it. It remains as remote and unused as if he’d never recorded it at all.
The real promise of artificial memory isn’t its use as a passive storage device, like a pen-and-paper diary. Instead, future lifelogs are liable to be active—trying to remember things for us. Lifelogs will be far more useful when they harness what computers are uniquely good at: brute-force pattern finding. They can help us make sense of our archives by finding connections and reminding us of what we’ve forgotten. As with the hybrid chess-playing centaurs, the solution is to let the computers do what they do best while letting humans do what they do best.
Bradley Rhodes has had a taste of what that feels like. While a student at MIT, he developed the Remembrance Agent, a piece of software that performed one simple task. The agent would observe what he was typing—e-mails, notes, an essay, whatever. It would take the words he wrote and quietly scour through years of archived e-mails and documents to see if anything he’d written in the past was similar in content to what he was writing about now. Then it would offer up snippets in the corner of the screen—close enough for Rhodes to glance at.
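The underlying trick is just similarity search over your own archive. The Remembrance Agent itself was a custom piece of software, but a minimal sketch of the idea might look like the following Python: compare the words being typed against each archived document using cosine similarity over word counts, and surface the closest matches. The archive entries and file names here are invented for illustration.

```python
import math
import re
from collections import Counter

# A minimal sketch of the Remembrance Agent idea: compare what is being typed
# right now against an archive of old notes and e-mails, and surface the
# closest matches. The archive entries and file names are invented examples.
archive = {
    "printer-howto.txt": "To use the campus printer, install the lpr driver and pick the third-floor laser.",
    "dance-club.txt": "The next ballroom dance club event is Friday at 8pm; please reply to members' questions.",
    "thesis-notes.txt": "Notes on wearable computing and augmented memory for the thesis draft.",
}

def bag_of_words(text: str) -> Counter:
    """Lowercase, strip punctuation, and count words."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest(current_text: str, top_n: int = 2):
    """Return the archived documents most similar to what is being typed now."""
    query = bag_of_words(current_text)
    scored = sorted(((cosine(query, bag_of_words(doc)), name) for name, doc in archive.items()),
                    reverse=True)
    return [(name, round(score, 2)) for score, name in scored[:top_n] if score > 0]

# e.g., while drafting an e-mail asking how to work the campus printer:
print(suggest("Does anyone know how to work the campus printer?"))
```

Typing the printer question at the bottom pulls up printer-howto.txt first, much as Rhodes's agent did with his forgotten documents.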
Sometimes the suggestions were off topic and irrelevant, and Rhodes would ignore them. But frequently the agent would find something useful—a document Rhodes had written but forgotten about. For example, he’d find himself typing an e-mail to a friend, asking how to work the campus printer, when the agent would show him that he already had a document that contained the answer. Another time, Rhodes—an organizer for MIT’s ballroom dance club—got an e-mail from a club member asking when the next event was taking place. Rhodes was busy with schoolwork and tempted to blow him off, but the agent pointed out that the club member had asked the same question a month earlier, and Rhodes hadn’t answered then either.
“I realized I had to switch gears and apologize and go, ‘Sorry for not getting back to you,’” he tells me. The agent wound up saving him from precisely the same spaced-out forgetfulness that causes us so many problems, interpersonal and intellectual, in everyday life. “It keeps you from looking stupid,” he adds. “You discover things even you didn’t know you knew.” Fellow students started pestering him for trivia. “They’d say, ‘Hey Brad, I know you’ve got this augmented brain, can you answer this?’”
In essence, Rhodes’s agent took advantage of computers’ sheer tirelessness. Rhodes, like most of us, isn’t going to bother running a search on everything he has ever typed on the off chance that it might bring up something useful. While machines have no problem doing this sort of dumb task, they won’t know if they’ve found anything useful; it’s up to us, with our uniquely human ability to recognize useful information, to make that decision. Rhodes neatly hybridized the human skill at creating meaning with the computer’s skill at making connections.
Granted, this sort of system can easily become too complicated for its own good. Microsoft is still living down its disastrous introduction of Clippy, a ghastly piece of artificial intelligence—I’m using that term very loosely—that would observe people’s behavior as they worked on a document and try to bust in, offering “advice” that tended to be spectacularly useless.
The way machines will become integrated into our remembering is likely to be in smaller, less intrusive bursts. In fact, when it comes to finding meaning in our digital memories, less may be more. Jonathan Wegener, a young computer designer who lives in Brooklyn,17 recently became interested in the extensive data trails that he and his friends were leaving in everyday life: everything from Facebook status updates to text messages to blog posts and check-ins at local bars using services like Foursquare. The check-ins struck him as particularly interesting. They were geographic; if you picked a day and mapped your check-ins, you’d see a version of yourself moving around the city. It reminded him of a trope from the video games he’d played as a kid: “racing your ghost.” In games like Mario Kart, if you had no one to play with, you could record yourself going as fast as you could around a track, then compete against the “ghost” of your former self.
Wegener thought it would be fun to do the same thing with check-ins—show people what they’d been doing on a day in their past. In one hectic weekend of programming, he created a service playfully called FoursquareAnd7YearsAgo. Each day, the service logged into your Foursquare account, found your check-ins from one year back (as well as any “shout” status statements you made), and e-mailed a summary to you. Users quickly found the daily e-mail would stimulate powerful, unexpected bouts of reminiscence. I spent an afternoon talking to Daniel Giovanni, a young social-media specialist in Jakarta who’d become a mesmerized user of FoursquareAnd7YearsAgo. The day we spoke was the one-year anniversary of his thesis defense, and as he looked at the list of check-ins, the memories flooded back: at 7:42 a.m. he showed up on campus to set up (with music from Transformers 2 pounding in his head, as he’d noted in a shout); at 12:42 p.m., after getting an A, he exuberantly left the building and hit a movie theater to celebrate with friends. Giovanni hadn’t thought about that day in a long while, but now that the tool had cued him, he recalled it vividly. A year is, of course, a natural memorial moment; and if you’re given an accurate cue to help reflect on a day, you’re more likely to accurately re-remember it again in the future. “It’s like this helps you reshape the memories of your life,” he told me.
What charmed me is how such a crude signal—the mere mention of a location—could prompt so many memories: geolocation as a Proustian cookie. Again, left to our own devices, we’re unlikely to bother to check year-old digital detritus. But computer code has no problem following routines. It’s good at cueing memories, tickling us into recalling more often and more deeply than we’d normally bother. Wegener found that people using his tool quickly formed new, creative habits around the service: They began posting more shouts—pithy, one-sentence descriptions of what they were doing—to their check-ins, since they knew that in a year, these would provide an extra bit of detail to help them remember that day. In essence, they were shouting out to their future selves, writing notes into a diary that would slyly present itself, one year hence, to be read. Wegener renamed his tool Timehop and gradually added more and more forms of memories: Now it shows you pictures and status updates from a year ago, too.
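Mechanically, the service is simple: once a day, pull the check-ins whose date falls exactly one year before today and send them back to yourself. Here is a hedged sketch of that daily job in Python, with invented check-ins and venue names standing in for the real Foursquare API and e-mail delivery.

```python
from datetime import date, datetime

# A rough sketch of the FoursquareAnd7YearsAgo / Timehop idea: each day, find
# the check-ins from exactly one year ago and format them as a reminder.
# The check-ins and venues below are invented; the real service pulled them
# from the Foursquare API and delivered the summary by e-mail.
checkins = [
    {"when": datetime(2011, 7, 14, 8, 5),   "venue": "Corner Cafe",    "shout": "big presentation today"},
    {"when": datetime(2011, 7, 14, 18, 30), "venue": "Riverside Park", "shout": "it went great, celebrating"},
    {"when": datetime(2011, 9, 2, 19, 15),  "venue": "Old Town Bar",   "shout": None},
]

def one_year_ago_summary(today: date) -> str:
    target = date(today.year - 1, today.month, today.day)  # ignores Feb. 29 for simplicity
    hits = sorted((c for c in checkins if c["when"].date() == target),
                  key=lambda c: c["when"])
    if not hits:
        return f"No check-ins on {target}."
    lines = [f"One year ago today ({target}) you were at:"]
    for c in hits:
        note = f' ("{c["shout"]}")' if c["shout"] else ""
        lines.append(f"  {c['when']:%H:%M}  {c['venue']}{note}")
    return "\n".join(lines)

print(one_year_ago_summary(date(2012, 7, 14)))
```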
Given the pattern-finding nature of computers, one can imagine increasingly sophisticated ways that our tools could automatically reconfigure and re-present our lives to us. Eric Horvitz, a Microsoft artificial intelligence researcher, has experimented with a prototype named Lifebrowser, which scours through his massive digital files to try to spot significant life events. First, you tell it which e-mails, pictures, or events in your calendar were particularly vivid; as it learns those patterns, it tries to predict what memories you’d consider to be important landmarks. Horvitz has found that “atypia”—unusual events that don’t repeat—tend to be more significant, which makes sense: “No one ever needs to remember what happened at the Monday staff meeting,” he jokes when I drop by his office in Seattle to see the system at work. Lifebrowser might also detect that when you’ve taken a lot of photos of the same thing, you were trying particularly hard to capture something important, so it’ll pull out one representative image. At his desk, he shows me Lifebrowser in action. He zooms in to a single month from the previous year, and it offers up a small handful of curated events for each day: a meeting at the government’s elite DARPA research agency, a family visit to Whidbey Island, an e-mail from a friend announcing a surprise visit. “I would never have thought about this stuff myself, but as soon as I see it, I go, ‘Oh, right—this was important,’” Horvitz says. The real power of digital memories will be to trigger our human ones.
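Lifebrowser itself learns a statistical model from the user's own labels, but the "atypia" intuition on its own can be sketched crudely: calendar entries that recur are probably forgettable, and one-off entries are better landmark candidates. A toy Python illustration, with invented events, and deliberately simple-minded compared with Horvitz's actual system:

```python
from collections import Counter

# A crude sketch of the "atypia" intuition behind Lifebrowser: events that
# repeat (the Monday staff meeting) are probably forgettable; one-off events
# are more likely to be landmarks. The calendar entries are invented, and the
# real system learns a far richer model from the user's own labels.
calendar = [
    "Monday staff meeting", "Monday staff meeting", "Monday staff meeting",
    "Gym", "Gym", "Gym", "Gym",
    "Family trip to Whidbey Island",
    "Meeting at DARPA",
    "Dinner with visiting friend",
]

counts = Counter(calendar)
landmarks = [event for event, n in counts.items() if n == 1]   # non-repeating = atypical

print("Likely landmarks:")
for event in landmarks:
    print(" -", event)
```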
In 1942, Borges published another story, about a man with perfect memory. In “Funes, the Memorious,” the narrator encounters a nineteen-year-old boy who, after a horse-riding accident, discovers that he has been endowed with perfect recall. He performs astonishing acts of memory, such as reciting huge swathes of the ancient Roman text Historia Naturalis and describing the precise shape of a set of clouds he had seen several months earlier. But his immaculate memory, Funes confesses, has made him miserable. Since he’s unable to forget anything, he is tortured by constantly recalling too much detail, too many minutiae, about everything. For him, forgetting would be a gift. “My memory, sir,” he said, “is like a garbage heap.”
Technically, the condition of being unable to forget is called hyperthymesia, and it has occasionally been found in real-life people. In the 1920s, Russian psychologist Aleksandr Luria examined Solomon Shereshevskii,18 a young journalist who was able to perform incredible feats of memory. Luria would present Shereshevskii with lists of numbers or words up to seventy figures long. Shereshevskii could recite the list back perfectly—not just right away, but also weeks or months later. Fifteen years after first meeting Shereshevskii, Luria met with him again. Shereshevskii sat down, closed his eyes, and accurately recalled not only the string of numbers but photographic details of the original day from years before. “You were sitting at the table and I in the rocking chair … You were wearing a gray suit,” Shereshevskii told him. But Shereshevskii’s gifts did not make him happy. Like Funes, he found the weight of so much memory oppressive. His memory didn’t even make him smarter; on the contrary, reading was difficult because individual words would constantly trigger vivid memories that disrupted his attention. He “struggled to grasp” abstract concepts like infinity or eternity. Desperate to forget things, Shereshevskii would write down memories on paper and burn them, in hopes that he could destroy his past with “the magical act of burning.” It didn’t work.
As we begin to record more and more of our lives—intentionally and unintentionally—one can imagine a pretty bleak future. There are terrible parts of my life I’d rather not have documented (a divorce, the sudden death of my best friend at age forty); or at least, when I recall them, I might prefer my inaccurate but self-serving human memories. I can imagine daily social reality evolving into a set of weird gotchas, of the sort you normally see only on a political campaign trail. My wife and I, like many couples, bicker about who should clean the kitchen; what will life be like when there’s a permanent record on tap and we can prove whose turn it is? Sure, it’d be more accurate and fair; it’d also be more picayune and crazy. These aren’t idle questions, either, or even very far off. The sorts of omnipresent recording technologies that used to be experimental or figments of sci-fi are now showing up for sale on Amazon. A company named Looxcie sells a tiny camera to wear over your ear, like a Bluetooth phone mike; it buffers ten hours of video, giving the wearer an ability to rewind life like a TiVo. You can buy commercial variants of Bell’s SenseCam, too.
Yet the experience of the early lifeloggers suggests that we’re likely to steer a middle path with artificial memory. It turns out that even those who are rabidly trying to record everything quickly realize their psychic limits, as well as the limits of the practice’s usefulness.
This is particularly true when it comes to the socially awkward aspect of lifelogging—which is that recording one’s own life inevitably means recording other people’s, too. Audio in particular seems to be unsettling. When Bell began his lifelogging project, his romantic partner quickly began insisting she mostly be left out of it. “We’d be talking, and she’d suddenly go, ‘You didn’t record that, did you?’ And I’d admit, ‘Yeah, I did.’ ‘Delete it! Delete it!’” Cathal Gurrin discovered early in his experiment that people didn’t mind being on camera. “Girlfriends have been remarkably accepting of it. Some think it’s really great to have their picture taken,” he notes. But he gave up on trying to record audio. “One colleague of mine did it for a week, and nobody would talk to him.” He laughs. Pictures, he suspects, offer a level of plausible deniability that audio doesn’t. I’ve noticed this, too, as a reporter. When I turn my audio recorder off during an interview, people become more open and candid, even if they’re still on the record. People want their memories to be cued, not fully replaced; we reserve the existential pleasures of gently rewriting our history.
Gurrin argues that society will have to evolve social codes that govern artificial memory. “It’s like there’s now an unspoken etiquette around when you can and can’t take mobile phone pictures,” he suggests. Granted, these codes aren’t yet very firm, and will probably never be; six years into Facebook’s being a daily tool, intimate friends still disagree about whether it’s fair to post drunken pictures of each other. Interestingly (or disturbingly), in our social lives we seem to be adopting concepts that used to obtain solely in institutional and legal environments. The idea of a meeting going “in camera” or “off the record” is familiar to members of city councils or corporate boards. But that language is seeping into everyday life: the popular Google Chat program added a button so users could go “off the record,” for example. Viktor Mayer-Schönberger, the author of Delete: The Virtue of Forgetting in the Digital Age, says we’ll need to engineer more artificial forgetting into our lives. He suggests that digital tools should be designed19 so that, when we first record something—a picture, a blog post, an instant messaging log—we’re asked how long it ought to stick around: a day, a week, forever? When the time is up, it’s automatically zapped into the dustbin. This way, he argues, our life traces would consist only of the stuff we’ve actively decided ought to stick around. It’s an intriguing idea, which I will take up later when I discuss social networking and privacy.
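Mayer-Schönberger's proposal maps naturally onto a simple design: attach an expiration date to each record at the moment of capture, and purge anything whose time has run out. Here is a minimal sketch of that idea, assuming invented records and a periodic cleanup pass rather than any particular product:

```python
from datetime import datetime, timedelta
from typing import Optional

# A minimal sketch of Mayer-Schönberger's idea: every record is saved with a
# lifespan the user chooses up front, and anything past its expiry is purged.
# The records and lifespans here are invented for illustration.
store = []

def save(content: str, keep_for: Optional[timedelta]) -> None:
    """Save a record; keep_for=None means 'keep forever'."""
    expires = datetime.now() + keep_for if keep_for else None
    store.append({"content": content, "expires": expires})

def purge(now: Optional[datetime] = None) -> None:
    """Drop every record whose chosen lifespan has run out."""
    now = now or datetime.now()
    store[:] = [r for r in store if r["expires"] is None or r["expires"] > now]

save("snapshot from the party", keep_for=timedelta(days=7))
save("instant-message log", keep_for=timedelta(days=1))
save("wedding photos", keep_for=None)

purge(now=datetime.now() + timedelta(days=3))   # pretend three days have passed
print([r["content"] for r in store])            # the instant-message log has been forgotten
```

The default is what matters: a tool that asks for a lifespan up front hands the keep-or-forget decision back to the user at the moment of capture, which is exactly the shift Mayer-Schönberger is arguing for.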
But the truth is, research has found that people are emotional pack rats. Even when it comes to digital memories that are depressing or disturbing, they opt to preserve them. While researching his PhD dissertation, Jason Zalinger—now a digital culture professor at the University of South Florida—got interested in Gmail, since it was the first e-mail program to actively encourage people to never delete any messages. In a sense, Gmail is the de facto lifelog for many of its users: e-mail can be quite personal, and it’s a lot easier to search than photos or videos. So what, Zalinger wondered, did people do with e-mails that were emotionally fraught?
The majority kept everything.20 Indeed, the more disastrous a relationship, the more likely they were to keep a record—and to go back and periodically read it. One woman, Sara, had kept everything from racy e-mails traded with a married boss (“I’m talking bondage references”) to e-mails from former boyfriends; she would occasionally hunt them down and reread them, as a sort of self-scrutiny. “I think I might have saved some of the painful e-mails because I wanted to show myself later, ‘Wow was this guy a dick.’” The saved e-mails also, she notes, “gave me texts to analyze … I just read and reread until I guess I hit the point that it either stopped hurting, or I stopped looking.” Another woman, Monica, explained how she’d saved all the e-mails from a partner who’d dumped her by abruptly showing up at a Starbucks with a pillowcase filled with her belongings. “I do read over those e-mails a lot,” she said, “just to kind of look back, and I guess still try to figure what exactly went wrong. I won’t ever get an answer, but it’s nice to have tangible proof that something did happen and made an impact on my life, you know? In the beginning it was painful to read, but now it’s kind of like a memory, you know?”
One man that Zalinger interviewed, Winston, had gone through a divorce. Afterward, he was torn about what to do with the e-mails from his ex-wife. He didn’t necessarily want to look at them again; most divorced people, after all, want their organic memory to fade and soften the story. But he also figured, who knows? He might want to look at them someday, if he’s trying to remember a detail or make sense of his life. In fact, when Winston thought about it, he realized there were a lot of other e-mails from his life that fit into this odd category—stuff you don’t want to look at but don’t want to lose, either. So he took all these emotionally difficult messages and archived them in Gmail using an evocative label: “Forget.” Out of sight, out of mind, but retrievable.
It’s a beautiful metaphor for the odd paradoxes and trade-offs we’ll live with in a world of infinite memory. Our ancestors learned how to remember; we’ll learn how to forget.