Wrong conclusions: Algorithms misinterpret data

Today, social media are an important source of information for many people, and the messages that appear on users' screens are meant to be the ones that interest them most. Facebook, for example, tries to ensure that people spend as much time as possible on the social network, viewing as many texts, videos and photos as possible and commenting on them. The messages that the site's algorithms automatically present to each user should therefore be as relevant as possible to her or him. But how do you define and measure relevance?

For Facebook, the key indicator is individual user behavior. If someone lingers even a moment longer on a message, clicks a button or calls up a video, the platform takes this as a sign of heightened interest. The more intensively and the longer people interact with content, the more relevant that content must be – that, at least, is the assumption. Based on this sort of analysis, the algorithm calculates whom it will supply with which news from which source in the future. The problem is that the more disturbing a post, the more likely it is that someone will spend time with it. The software, in turn, interprets this as interest and sends the user additional messages of the same kind. Measuring relevance in a way that benefits society would have to be done differently. Basic values that are important to the common good, such as truth, diversity and social integration, play a subordinate role here at best. What counts instead is getting attention and screen time (see Chapter 13).
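
The underlying logic can be illustrated with a minimal sketch in Python. The signals, weights and function names below are invented for illustration; they are not Facebook's actual ranking formula, only an example of what scoring content purely by engagement looks like:

from dataclasses import dataclass

@dataclass
class Interaction:
    # One user's observed or predicted behavior toward a single post (hypothetical fields).
    dwell_seconds: float   # how long the post stayed on screen
    clicked: bool          # clicked a link or button in the post
    played_video: bool     # started the attached video
    commented: bool        # wrote a comment

def relevance_score(i: Interaction) -> float:
    # The longer and more intensively a user interacts with a post,
    # the more "relevant" it is assumed to be. Weights are illustrative assumptions.
    score = 0.1 * i.dwell_seconds
    score += 2.0 if i.clicked else 0.0
    score += 3.0 if i.played_video else 0.0
    score += 5.0 if i.commented else 0.0
    return score

def rank_feed(candidates: dict) -> list:
    # Order candidate posts (post id -> Interaction) by engagement, highest first.
    # A disturbing post that keeps a user on screen scores just as high as a
    # genuinely useful one; that is the flaw described in the text above.
    return sorted(candidates, key=lambda post: relevance_score(candidates[post]), reverse=True)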

Facebook not only tries to find out what users like best but also what they do not like at all. This led to a long-standing misinterpretation because the wrong signal was being measured: If a user clicked on the “hide post” option, the algorithm interpreted this as a clear sign of dissatisfaction and accordingly did not show the person any further messages of a similar kind. This was true until 2015, when someone took a closer look and discovered that 5 percent of Facebook users were responsible for 85 percent of the hidden messages.
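
The kind of analysis behind that discovery is straightforward to sketch: count hide actions per user and check what share of all hides the heaviest hiders account for. The data in this Python example are invented for illustration; only the pattern of a few users dominating the hide actions comes from the text:

from collections import Counter

def hide_concentration(hide_events: list, top_share: float = 0.05) -> float:
    # Share of all hide actions contributed by the most active
    # top_share fraction of users (for example, the top 5 percent).
    per_user = Counter(hide_events)                    # hide count per user id
    counts = sorted(per_user.values(), reverse=True)   # heaviest hiders first
    top_n = max(1, round(len(counts) * top_share))
    return sum(counts[:top_n]) / sum(counts)

# Invented data: one "super hider" and nineteen occasional hiders.
events = ["u1"] * 85 + [f"u{i}" for i in range(2, 21)]
print(f"Top 5% of hiders account for {hide_concentration(events):.0%} of all hides")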

These so-called super hiders were a mystery. They hid almost everything that appeared in their news stream, even posts they had commented on shortly before. Surveys then revealed that the super hiders were by no means dissatisfied. They simply wanted to clear away messages they had already read, much as some people keep their inbox tidy by continually deleting e-mails. Having discovered what was going on, Facebook changed its approach. Since then, it no longer necessarily interprets hiding a post as a strong signal of displeasure.4

In this case, the algorithm did what it was told, but with an unwanted result. Wrong criteria led to wrong conclusions. The algorithm was unable to detect the super-hider phenomenon on its own. An investigation initiated and evaluated by humans was required to uncover what was truly happening. Anyone who uses algorithmic systems is well advised to regularly question the systems' logic and check whether their results are meaningful.
