Zucked: How Users Got Used and What We Can Do About It, by Roger McNamee


Prologue

Technology is a useful servant but a dangerous master. —CHRISTIAN LOUS LANGE

November 9, 2016

“The Russians used Facebook to tip the election!”

So began my side of a conversation the day after the presidential election. I was speaking with Dan Rose, the head of media partnerships at Facebook. If Rose was taken aback by how furious I was, he hid it well.

Let me back up. I am a longtime tech investor and evangelist. Tech had been my career and my passion, but by 2016, I was backing away from full-time professional investing and contemplating retirement. I had been an early advisor to Facebook founder Mark Zuckerberg—Zuck, to many colleagues and friends—and an early investor in Facebook. I had been a true believer for a decade. Even at this writing, I still own shares in Facebook. In terms of my own narrow self-interest, I had no reason to bite Facebook’s hand. It would never have occurred to me to be an anti-Facebook activist. I was more like Jimmy Stewart in Hitchcock’s Rear Window. He is minding his own business, checking out the view from his living room, when he sees what looks like a crime in progress, and then he has to ask himself what he should do. In my case, I had spent a career trying to draw smart conclusions from incomplete information, and one day early in 2016 I started to see things happening on Facebook that did not look right. I started pulling on that thread and uncovered a catastrophe. In the beginning, I assumed that Facebook was a victim and I just wanted to warn my friends. What I learned in the months that followed shocked and disappointed me. I learned that my trust in Facebook had been misplaced.

This book is the story of why I became convinced, in spite of myself, that even though Facebook provided a compelling experience for most of its users, it was terrible for America and needed to change or be changed, and what I have tried to do about it. My hope is that the narrative of my own conversion experience will help others understand the threat. Along the way, I will share what I know about the technology that enables internet platforms like Facebook to manipulate attention. I will explain how bad actors exploit the design of Facebook and other platforms to harm and even kill innocent people. How democracy has been undermined because of design choices and business decisions by internet platforms that deny responsibility for the consequences of their actions. How the culture of these companies causes employees to be indifferent to the negative side effects of their success. At this writing, there is nothing to prevent more of the same.

This is a story about trust. Technology platforms, including Facebook and Google, are the beneficiaries of trust and goodwill accumulated over fifty years by earlier generations of technology companies. They have taken advantage of our trust, using sophisticated techniques to prey on the weakest aspects of human psychology, to gather and exploit private data, and to craft business models that do not protect users from harm. Users must now learn to be skeptical about products they love, to change their online behavior, insist that platforms accept responsibility for the impact of their choices, and push policy makers to regulate the platforms to protect the public interest.

This is a story about privilege. It reveals how hypersuccessful people can be so focused on their own goals that they forget that others also have rights and privileges. How it is possible for otherwise brilliant people to lose sight of the fact that their users are entitled to self-determination. How success can breed overconfidence to the point of resistance to constructive feedback from friends, much less criticism. How some of the hardest working, most productive people on earth can be so blind to the consequences of their actions that they are willing to put democracy at risk to protect their privilege.

This is also a story about power. It describes how even the best of ideas, in the hands of people with good intentions, can still go terribly wrong. Imagine a stew of unregulated capitalism, addictive technology, and authoritarian values, combined with Silicon Valley’s relentlessness and hubris, unleashed on billions of unsuspecting users. I think the day will come, sooner than I could have imagined just two years ago, when the world will recognize that the value users receive from the Facebook-dominated social media/attention economy revolution masked an unmitigated disaster for our democracy, for public health, for personal privacy, and for the economy. It did not have to be that way. It will take a concerted effort to fix it.

When historians finish with this corner of history, I suspect that they will cut Facebook some slack about the poor choices that Zuck, Sheryl Sandberg, and their team made as the company grew. I do. Making mistakes is part of life, and growing a startup to global scale is immensely challenging. Where I fault Facebook—and where I believe history will, as well—is for the company’s response to criticism and evidence. They had an opportunity to be the hero in their own story by taking responsibility for their choices and the catastrophic outcomes those choices produced. Instead, Zuck and Sheryl chose another path.

This story is still unfolding. I have written this book now to serve as a warning. My goals are to make readers aware of a crisis, help them understand how and why it happened, and suggest a path forward. If I achieve only one thing, I hope it will be to make the reader appreciate that he or she has a role to play in the solution. I hope every reader will embrace the opportunity.

It is possible that the worst damage from Facebook and the other internet platforms is behind us, but that is not where the smart money will place its bet. The most likely case is that the technology and business model of Facebook and others will continue to undermine democracy, public health, privacy, and innovation until a countervailing power, in the form of government intervention or user protest, forces change.

Ten days before the November 2016 election, I had reached out formally to Mark Zuckerberg and Facebook chief operating officer Sheryl Sandberg, two people I considered friends, to share my fear that bad actors were exploiting Facebook’s architecture and business model to inflict harm on innocent people, and that the company was not living up to its potential as a force for good in society. In a two-page memo, I had cited a number of instances of harm, none actually committed by Facebook employees but all enabled by the company’s algorithms, advertising model, automation, culture, and value system. I also cited examples of harm to employees and users that resulted from the company’s culture and priorities. I have included the memo in the appendix.

Zuck created Facebook to bring the world together. What I did not know when I met him but would eventually discover was that his idealism was unbuffered by realism or empathy. He seems to have assumed that everyone would view and use Facebook the way he did, not imagining how easily the platform could be exploited to cause harm. He did not believe in data privacy and did everything he could to maximize disclosure and sharing. He operated the company as if every problem could be solved with more or better code. He embraced invasive surveillance, careless sharing of private data, and behavior modification in pursuit of unprecedented scale and influence. Surveillance, the sharing of user data, and behavioral modification are the foundation of Facebook’s success. Users are fuel for Facebook’s growth and, in some cases, the victims of it.

When I reached out to Zuck and Sheryl, all I had was a hypothesis that bad actors were using Facebook to cause harm. I suspected that the examples I saw reflected systemic flaws in the platform’s design and the company’s culture. I did not emphasize the threat to the presidential election, because at that time I could not imagine that the exploitation of Facebook would affect the outcome, and I did not want the company to dismiss my concerns if Hillary Clinton won, as was widely anticipated. I warned that Facebook needed to fix the flaws or risk its brand and the trust of users. While it had not inflicted harm directly, Facebook was being used as a weapon, and users had a right to expect the company to protect them.

The memo was a draft of an op-ed that I had written at the invitation of the technology blog Recode. My concerns had been building throughout 2016 and reached a peak with the news that the Russians were attempting to interfere in the presidential election. I was increasingly freaked out by what I had seen, and the tone of the op-ed reflected that. My wife, Ann, wisely encouraged me to send the op-ed to Zuck and Sheryl first, before publication. I had been one of Zuck’s many advisors in Facebook’s early days, and I played a role in Sheryl’s joining the company as chief operating officer. I had not been involved with the company since 2009, but I remained a huge fan. My small contribution to the success of one of the greatest companies ever to come out of Silicon Valley was one of the true highlights of my thirty-four-year career. Ann pointed out that communicating through an op-ed might cause the wrong kind of press reaction, making it harder for Facebook to accept my concerns. My goal was to fix the problems at Facebook, not embarrass anyone. I did not imagine that Zuck and Sheryl had done anything wrong intentionally. It seemed more like a case of unintended consequences of well-intended strategies. Other than a handful of email exchanges, I had not spoken to Zuck in seven years, but I had interacted with Sheryl from time to time. At one point, I had provided them with significant value, so it was not crazy to imagine that they would take my concerns seriously. My goal was to persuade Zuck and Sheryl to investigate and take appropriate action. The publication of the op-ed could wait a few days.

Zuck and Sheryl each responded to my email within a matter of hours. Their replies were polite but not encouraging. They suggested that the problems I cited were anomalies that the company had already addressed, but they offered to connect me with a senior executive to hear me out. The man they chose was Dan Rose, a member of their inner circle with whom I was friendly. I spoke with Dan at least twice before the election. Each time, he listened patiently and repeated what Zuck and Sheryl had said, with one important addition: he asserted that Facebook was technically a platform, not a media company, which meant it was not responsible for the actions of third parties. He said it like that should have been enough to settle the matter.

Dan Rose is a very smart man, but he does not make policy at Facebook. That is Zuck’s role. Dan’s role is to carry out Zuck’s orders. It would have been better to speak with Zuck, but that was not an option, so I took what I could get. Quite understandably, Facebook did not want me to go public with my concerns, and I thought that by keeping the conversation private, I was far more likely to persuade them to investigate the issues that concerned me. When I spoke to Dan the day after the election, it was obvious to me that he was not truly open to my perspective; he seemed to be treating the issue as a public relations problem. His job was to calm me down and make my concerns go away. He did not succeed at that, but he could claim one victory: I never published the op-ed. Ever the optimist, I hoped that if I persisted with private conversations, Facebook would eventually take the issue seriously.

I continued to call and email Dan, hoping to persuade Facebook to launch an internal investigation. At the time, Facebook had 1.7 billion active users. Facebook’s success depended on user trust. If users decided that the company was responsible for the damage caused by third parties, no legal safe harbor would protect it from brand damage. The company was risking everything. I suggested that Facebook had a window of opportunity. It could follow the example of Johnson & Johnson when someone put poison in a few bottles of Tylenol on retail shelves in Chicago in 1982. J&J immediately withdrew every bottle of Tylenol from every retail location and did not reintroduce the product until it had perfected tamperproof packaging. The company absorbed a short-term hit to earnings but was rewarded with a huge increase in consumer trust. J&J had not put the poison in those bottles. It might have chosen to dismiss the problem as the work of a madman. Instead, it accepted responsibility for protecting its customers and took the safest possible course of action. I thought Facebook could convert a potential disaster into a victory by doing the same thing.

One problem I faced was that at this point I did not have data for making my case. What I had was a spidey sense, honed during a long career as a professional investor in technology.

I had first become seriously concerned about Facebook in February 2016, in the run-up to the first US presidential primary. As a political junkie, I was spending a few hours a day reading the news and also spending a fair amount of time on Facebook. I noticed a surge on Facebook of disturbing images, shared by friends, that originated on Facebook Groups ostensibly associated with the Bernie Sanders campaign. The images were deeply misogynistic depictions of Hillary Clinton. It was impossible for me to imagine that Bernie’s campaign would allow them. More disturbing, the images were spreading virally. Lots of my friends were sharing them. And there were new images every day.

I knew a great deal about how messages spread on Facebook. For one thing, I have a second career as a musician in a band called Moonalice, and I had long been managing the band’s Facebook page, which enjoyed high engagement with fans. The rapid spread of images from these Sanders-associated pages did not appear to be organic. How did the pages find my friends? How did my friends find the pages? Groups on Facebook do not emerge full grown overnight. I hypothesized that somebody had to be spending money on advertising to get the people I knew to join the Facebook Groups that were spreading the images. Who would do that? I had no answer. The flood of inappropriate images continued, and it gnawed at me.

More troubling phenomena caught my attention. In March 2016, for example, I saw a news report about a group that exploited a programming tool on Facebook to gather data on users expressing an interest in Black Lives Matter, data that they then sold to police departments, which struck me as evil. Facebook banned the group, but not until after irreparable harm had been done. Here again, a bad actor had used Facebook tools to harm innocent victims.

In June 2016, the United Kingdom voted to exit the European Union. The outcome of the Brexit vote came as a total shock. Polling had suggested that “Remain” would triumph over “Leave” by about four points, but precisely the opposite happened. No one could explain the huge swing. A possible explanation occurred to me. What if Leave had benefited from Facebook’s architecture? The Remain campaign was expected to win because the UK had a sweet deal with the European Union: it enjoyed all the benefits of membership, while retaining its own currency. London was Europe’s undisputed financial hub, and UK citizens could trade and travel freely across the open borders of the continent. Remain’s “stay the course” message was based on smart economics but lacked emotion. Leave based its campaign on two intensely emotional appeals. It appealed to ethnic nationalism by blaming immigrants for the country’s problems, both real and imaginary. It also promised that Brexit would generate huge savings that would be used to improve the National Health Service, an idea that allowed voters to put an altruistic shine on an otherwise xenophobic proposal.

The stunning outcome of Brexit triggered a hypothesis: in an election context, Facebook may confer advantages to campaign messages based on fear or anger over those based on neutral or positive emotions. It does this because Facebook’s advertising business model depends on engagement, which can best be triggered through appeals to our most basic emotions. What I did not know at the time is that while joy also works, which is why puppy and cat videos and photos of babies are so popular, not everyone reacts the same way to happy content. Some people get jealous, for example. “Lizard brain” emotions such as fear and anger produce a more uniform reaction and are more viral in a mass audience. When users are riled up, they consume and share more content. Dispassionate users have relatively little value to Facebook, which does everything in its power to activate the lizard brain. Facebook has used surveillance to build giant profiles on every user and provides each user with a customized Truman Show, similar to the Jim Carrey film about a person who lives his entire life as the star of his own television show. It starts out giving users “what they want,” but the algorithms are trained to nudge user attention in directions that Facebook wants. The algorithms choose posts calculated to press emotional buttons because scaring users or pissing them off increases time on site. When users pay attention, Facebook calls it engagement, but the goal is behavior modification that makes advertising more valuable. I wish I had understood this in 2016. At this writing, Facebook is the fourth most valuable company in America, despite being only fifteen years old, and its value stems from its mastery of surveillance and behavioral modification.

When new technology first comes into our lives, it surprises and astonishes us, like a magic trick. We give it a special place, treating it like the product equivalent of a new baby. The most successful tech products gradually integrate themselves into our lives. Before long, we forget what life was like before them. Most of us have that relationship today with smartphones and internet platforms like Facebook and Google. Their benefits are so obvious we can’t imagine foregoing them. Not so obvious are the ways that technology products change us. The process has repeated itself in every generation since the telephone, including radio, television, and personal computers. On the plus side, technology has opened up the world, providing access to knowledge that was inaccessible in prior generations. It has enabled us to create and do remarkable things. But all that value has a cost. Beginning with television, technology has changed the way we engage with society, substituting passive consumption of content and ideas for civic engagement, digital communication for conversation. Subtly and persistently, it has contributed to our conversion from citizens to consumers. Being a citizen is an active state; being a consumer is passive. A transformation that crept along for fifty years accelerated dramatically with the introduction of internet platforms. We were prepared to enjoy the benefits but unprepared for the dark side. Unfortunately, the same can be said for the Silicon Valley leaders whose innovations made the transformation possible.

If you are a fan of democracy, as I am, this should scare you. Facebook has become a powerful source of news in most democratic countries. To a remarkable degree it has made itself the public square in which countries share ideas, form opinions, and debate issues outside the voting booth. But Facebook is more than just a forum. It is a profit-maximizing business controlled by one person. It is a massive artificial intelligence that influences every aspect of user activity, whether political or otherwise. Even the smallest decisions at Facebook reverberate through the public square the company has created with implications for every person it touches. The fact that users are not conscious of Facebook’s influence magnifies the effect. If Facebook favors inflammatory campaigns, democracy suffers.

August 2016 brought a new wave of stunning revelations. Press reports confirmed that Russians had been behind the hacks of servers at the Democratic National Committee (DNC) and Democratic Congressional Campaign Committee (DCCC). Emails stolen in the DNC hack were distributed by WikiLeaks, causing significant damage to the Clinton campaign. The chairman of the DCCC pleaded with Republicans not to use the stolen data in congressional campaigns. I wondered if it were possible that Russians had played a role in the Facebook issues that had been troubling me earlier.

Just before I wrote the op-ed, ProPublica revealed that Facebook’s advertising tools enabled property owners to discriminate based on race, in violation of the Fair Housing Act. The Department of Housing and Urban Development opened an investigation that was later closed, but reopened in April 2018. Here again, Facebook’s architecture and business model enabled bad actors to harm innocent people.

Like Jimmy Stewart in the movie, I did not have enough data or insight to understand everything I had seen, so I sought to learn more. As I did so, in the days and weeks after the election, Dan Rose exhibited incredible patience with me. He encouraged me to send more examples of harm, which I did. Nothing changed. Dan never budged. In February 2017, more than three months after the election, I finally concluded that I would not succeed in convincing Dan and his colleagues; I needed a different strategy. Facebook remained a clear and present danger to democracy. The very same tools that made Facebook a compelling platform for advertisers could also be exploited to inflict harm. Facebook was getting more powerful by the day. Its artificial intelligence engine learned more about every user. Its algorithms got better at pressing users’ emotional buttons. Its tools for advertisers improved constantly. In the wrong hands, Facebook was an ever-more-powerful weapon. And the next US election—the 2018 midterms—was fast approaching.

Yet no one in power seemed to recognize the threat. The early months of 2017 revealed extensive relationships between officials of the Trump campaign and people associated with the Russian government. Details emerged about a June 2016 meeting in Trump Tower between inner-circle members of the campaign and Russians suspected of intelligence affiliations. Congress spun up Intelligence Committee investigations that focused on that meeting.

But still there was no official concern about the role that social media platforms, especially Facebook, had played in the 2016 election. Every day that passed without an investigation increased the likelihood that the interference would continue. If someone did not act quickly, our democratic processes could be overwhelmed by outside forces; the 2018 midterm election would likely be subject to interference, possibly greater than we had seen in 2016. Our Constitution anticipated many problems, but not the possibility that a foreign country could interfere in our elections without consequences. I could not sit back and watch. I needed some help, and I needed a plan, not necessarily in that order.

