Uncle Shoe Store

At a family Christmas party, Ori found himself in a conversation with an uncle who’s a professor of philosophy, specializing in language and epistemology. The two were talking about fake news and how in the near future the trend might affect our ability to discern the truth. Halfway through the conversation, they were joined by another uncle, a physical therapist who runs a specialty shoe store for athletes. This uncle is one of the top experts in the country on running shoes and even holds a patent on a machine that tests a shoe’s stability to gauge its appropriateness for a given runner.

The conversation—as tends to happen at family events—turned to global affairs. Uncle Shoe Store mentioned that he’d read about a Harvard professor who demonstrated that climate change science is wrong. “I mean, look around,” he continued. “It’s not hot this winter in San Francisco.”

Uncle Philosopher, of course, sits at the opposite end of the political spectrum, so Ori bit his tongue and sat back to watch the fireworks. Instead of engaging in an argument, Uncle Philosopher asked Uncle Shoe Store how he had reached his conclusions.

Uncle Shoe Store said that he had read the information online, and that a number of his friends—all successful business owners—had read and agreed with the same materials. Uncle Philosopher tried to bring up the multitude of peer-reviewed journal articles supporting climate science, but Uncle Shoe Store put little stock in them.

Just as Ori kept his mouth shut at Christmas, we’re not going to weigh in on climate change. For a moment, though, let’s view Uncle Shoe Store from the perspective of someone who believes in climate change.

We need to recognize that Uncle Shoe Store isn’t simply spouting unfounded beliefs. He is actually being rational—reading up on climate change in his favorite publications, seeing what the people he trusts on social media say, and coming to a rational (albeit debatable) conclusion. He’s in no way irrational. He’s reached a conclusion based on both the data in front of him and the so-called wisdom of the crowd. In other words, not only does he find the data compelling, but he’s verifying it via a statistically established methodology. He’s just not necessarily aware that the crowd whose wisdom he’s tapping may be decidedly biased.

As much as we might feel superior to someone who holds an alternative view of scientific data, we are all soon going to suffer the same fate. What Uncle Shoe Store didn’t account for as he gathered information and formed judgments was the digital echo. He wasn’t alone.

There soon will come a time when, despite using all the resources available to us, we will simply not be able to tell what is actually true. This, as we’ll soon see, is what happened at the Berkeley protest. Let’s look at two other examples.

First, consider a recent hoax in which, with the aid of bots, the Twitterverse was convinced that a Louisiana chemical plant had gone up in flames—local news outlets even reported on the fire. They eventually got the facts right once they sent a reporter to the scene, but what happens when local news gets replaced by distributed networks?

In other words, what will happen when anyone can produce a news story? In a case like this false fire, social media might have two versions of the same story. One would say there was no fire—showing a video of the unburned site—and then there would be another narrative, with photos purporting to show the explosion and its victims.

Now, what does that mean for a future allegation of, say, the use of chemical weapons in Syria? Or of some kind of warfare engaged in by the U.S. government? Will the public be able to discern what is actually real?

In the second example, malicious intent wasn’t even a factor. On December 27, 2016 (two days after the family Christmas party), a protester threw some firecrackers at a government building in Bangkok. This triggered a Facebook alert for an “explosion” (based on an unnamed “trusted third party”), and users proceeded to mark themselves “safe.” The Facebook alert linked to a news story that referenced BBC “breaking news” footage of an explosion in Bangkok . . . that had happened a year earlier. News outlets saw the BBC logo and, in their rush to cover what appeared to be a major breaking story, overlooked the date on the video and hastily posted their own stories about the explosion.2

Of course, the error was quickly discovered and the Facebook alert was taken down. In the old days, newspapers wouldn’t even have had time to take the story to print, television news outlets that covered it would have run a correction, and that would have been that. But with news traveling at the speed of links and clicks, news of the “explosion” spread around the globe within minutes—and continued to spread even after Facebook corrected its error. And so, if you Googled “December 27, 2016, bombing in Thailand,” there was a good chance that your top search result would be a story based on inaccurate data.

It’s not always accurate to call instances like these “fake news.” They can occur without any intentional deception. An inaccurate news story—even an accidentally inaccurate one— creates a “digital echo,” and though the original source may be corrected, the echo—reverberating across distributed networks—endures forever.
