We Humans and the Intelligent Machines – Jörg Dräger
One-sided learning: Algorithms as self-fulfilling prophecies
Unlike in Florida, in the state of Wisconsin the COMPAS software is not used only to assess pretrial risk. The judges there also use the recidivism probabilities calculated by its algorithms to determine whether a defendant should go to jail or be put on probation8 – a decision of enormous significance for the person in question, and for the public and its sense of security. It is therefore all the more important that COMPAS be able to learn from the results of its forecasts: When was the algorithm right, and when was it wrong?
The problem is the one-sidedness of this learning process. Parolees can confirm or disprove the system’s prognosis, depending on how they behave during the parole period. If, on the other hand, they go to jail because of the COMPAS recommendation, they have no chance of proving that the software was wrong. This is not an isolated example: People who do not receive a loan can never prove that they would have repaid it. And anyone who is rejected as an applicant by a computer program cannot prove that he or she would have done an excellent job if hired.
In such situations, the algorithmic system has what is in effect a learning disability, since verifying whether its prognosis was correct is only possible for one of the two resulting groups. For the other, the question remains hypothetical. Freedom or prison, creditworthy or not, job offer or rejection: Algorithms used in the areas of law enforcement, banking and human resources are fed with one-sided feedback and do not improve as much as they should.
Users must be aware of this and create comparison groups from which the algorithm can nevertheless learn. A financial institution could, for example, also grant a loan to some of the applicants who were initially rejected and use the experience gained with those individuals to further develop its software. HR departments and courts could also form comparison groups by making some of their decisions without machine support, generating feedback data to assess their algorithmic predictions: Was an applicant successful at the company even though he or she would have been rejected by the software? Did someone not reoffend even though the system, as in the case of Brisha, had made a different prediction?
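The comparison-group idea can be sketched in a few lines of Python. This is a minimal illustration, not anything from the book or from COMPAS itself: the scores, the threshold and the `explore_rate` are all invented for the example. Without a comparison group, outcomes are observed only for approved applicants; with one, a small random share of would-be rejections is approved anyway, so the lender also sees how those cases turn out.

```python
import random

random.seed(0)

def true_repayment(score):
    """Hypothetical ground truth: higher-scored applicants repay more often."""
    return random.random() < score

def simulate(n=10_000, threshold=0.5, explore_rate=0.1):
    """One-sided feedback demo.

    Outcomes are observed only for approved applicants, plus a small
    randomly approved 'comparison group' drawn from those the model
    would have rejected.
    """
    observed = []           # (score, repaid) pairs the lender gets to see
    unobserved_rejects = 0  # rejected applicants with no outcome at all
    for _ in range(n):
        score = random.random()  # stand-in for a model's risk estimate
        if score >= threshold:
            observed.append((score, true_repayment(score)))
        elif random.random() < explore_rate:
            # comparison group: approve anyway to generate feedback
            observed.append((score, true_repayment(score)))
        else:
            unobserved_rejects += 1
    return observed, unobserved_rejects
```

With `explore_rate=0.0` every observed outcome comes from above the threshold, so the software can never learn whether its rejections were justified; any positive `explore_rate` produces feedback on both sides of the cut-off, at the cost of a few risky approvals.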
To reduce the problem of one-sided feedback, users must be willing to examine the issue and be adept at addressing it. Both qualities are urgently needed. After all, one-sided feedback not only creates a learning problem for algorithmic decision-making, it can also reinforce and even exacerbate discrimination and social disadvantages. A longer stay in prison increases the risk of a new crime being committed afterwards; a loan taken out in desperation at exorbitant interest rates increases the risk of default. This threatens to turn the algorithmic system into a generator of self-fulfilling prophecies.