System error: Algorithms fail to do the job they are assigned to

Louise Kennedy was actually a bit nervous.2 Emigrating from Ireland to Australia was a big step for the veterinarian. She suspected that not everything would fall into her lap right away. What she had not expected, however, was that the highest hurdle for a native speaker with two university degrees would, of all things, be an English-language test. She scored 74 points on the oral part; 79 were required. She was refused permanent residence. Who would not think of “Computer says no”?

The Irishwoman had indeed failed – because of voice recognition technology. The computer-based test is used by the Australian immigration authorities to assess oral speaking ability. Foreigners who want to live in Australia have to repeat sentences and retell a story. An algorithm then analyzes their ability to speak.

Alice Xu, originally from China, attempted to pass the test as well. She studied in Australia and speaks English fluently, but the algorithm refused to recognize her abilities, too. She scored a paltry 41 points on her oral examination. Xu did not want to give up so easily and hired a coach, who helped her pass on her second attempt with the maximum number of points. How do you improve your oral language skills so markedly in such a short time?

Her coach Clive Liebmann explains the leap in performance, revealing the absurdity of how the software works: “So I encourage students to exaggerate tonation in an over-the-top way and that means less making sense with grammar and vocabulary and instead focusing more on what computers are good at, which is measuring musical elements like pitch, volume and speed.”3 If pitch, volume and speed are correct, the test takers could, in extreme cases, talk utter nonsense as long as some of the vocabulary matches the topic.
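To make the coach’s point concrete: the surface features he describes can be computed without any grasp of meaning. The sketch below is purely illustrative, using the open-source librosa library and a placeholder audio file; it is not the software used in the Australian test, only one plausible way to measure pitch, volume and speaking speed.

```python
# Illustrative sketch only: extracting the "musical" features the coach
# mentions (pitch, volume, speed) from a recording, with no grasp of meaning.
# Assumes the open-source librosa library; "answer.wav" is a placeholder file.
import numpy as np
import librosa

y, sr = librosa.load("answer.wav", sr=16000)

# Pitch: fundamental-frequency contour estimated with pYIN
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
mean_pitch_hz = float(np.nanmean(f0))  # average pitch over voiced frames

# Volume: root-mean-square energy per frame, averaged
mean_volume = float(librosa.feature.rms(y=y).mean())

# Speed: rough speaking rate from syllable-like onsets per second
onsets = librosa.onset.onset_detect(y=y, sr=sr)
speaking_rate = len(onsets) / (len(y) / sr)

print(f"pitch ~{mean_pitch_hz:.0f} Hz, volume ~{mean_volume:.3f}, "
      f"rate ~{speaking_rate:.1f} onsets/s")
```

A score built from numbers like these rewards confident delivery rather than correct English, which is exactly the weakness Liebmann’s students learned to exploit.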

Louise Kennedy did not go to a coach, but to the public. Media around the world then mocked the Australian immigration authorities, who did not react at all. On the contrary, the company providing the automatic language tests merely pointed out that the requirements for potential immigrants were very high. Of course, it was not the performance standards that prevented a young and highly qualified native speaker from getting permanent residency; it was the algorithmic system, which was simply not able to process her Irish accent correctly. The voice recognition software used in Australia is not yet capable of testing sentence structure, vocabulary or the ability to render complex information logically. That is the heart of the problem. The refusal to admit the obvious makes the incident look like a parody.

Yet the story has a very serious side. Ultimately, the software neither safeguards the Australian state’s legitimate interests when it comes to immigration nor provides justice for those individuals who have worked diligently in the hope of gaining a residence permit. Alice Xu and Louise Kennedy found ways to circumvent the deficient algorithmic system. One exploited the software’s weaknesses and told it exactly what it wanted to hear. The other married an Australian, allowing her to stay in the country permanently. But people should not have to adapt to meet the needs of a faulty algorithm; dysfunctional software should be adapted to meet people’s needs instead.
