3.2.2. The use of algorithms in public administration’s regulatory and coercive activity: law enforcement

3.2.2.1. Police departments and the criminal justice system

The use of machine learning algorithms has become especially widespread amongst police departments and the criminal justice system, given the amount of data available to them.150 Because this form of automated decision-making has an especially significant impact on the fundamental rights of individuals, it must be employed with special caution.

Oswald and Grace151 establish three categories of law enforcement algorithms. The first category comprises systems used to detect hotspots in which more criminal activity is likely to be carried out.152 Detecting areas in which criminal offences are likely to occur allows police departments to allocate their resources more efficiently and gives them a wider timeframe in which to plan and organise police activity.153
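
To make this first category concrete, the following is a minimal, purely illustrative sketch of a grid-based hotspot detector; the function, cell size and data format are assumptions made for the example and do not describe any deployed system.

    from collections import Counter

    def detect_hotspots(incidents, cell_size=0.01, top_n=5):
        # Bin past incident coordinates (latitude, longitude pairs) into
        # square grid cells and rank the cells by incident count; the
        # busiest cells are flagged as candidate hotspots for patrol
        # planning.
        counts = Counter(
            (round(lat / cell_size), round(lon / cell_size))
            for lat, lon in incidents
        )
        return counts.most_common(top_n)

Deployed systems rely on far richer models, but the underlying resource-allocation logic follows this pattern: rank areas by predicted incident intensity and direct patrols accordingly.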

Secondly, algorithms are also used in law enforcement to predict specific threats and to analyse the data available in ongoing criminal investigations.154 The use of automated systems for these purposes can provide insight into specific situations that law enforcement officials might otherwise overlook, such as connections between victims or criminals.155

The third and final category of algorithmic decision-making in law enforcement includes systems that evaluate individual risk and behaviour.156 These tools are used, for instance, to detect which personal characteristics make a person more likely to break the law and which legal activities may suggest that someone will commit a crime in the near future. For example, the fact that someone purchases a large quantity of small plastic bags may be taken to indicate a higher probability that he or she is dealing drugs.157

Algorithms encompassed within this third category are also used in recidivism models in order to determine sentence length, parole rights and other elements of convicted criminals’ sentences.158 To do so, convicted offenders are given a questionnaire containing a series of questions regarding their past attitudes, general behaviour and other facts about their lives. An algorithm analyses the answers to the questionnaire, together with other information on the individual, thereby supposedly determining their likelihood of re-offending.159
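
The mechanics of such a questionnaire-based tool can be pictured as a simple additive score over the answers. The sketch below is an invented, minimal illustration of that idea; the items, weights and scale are hypothetical and do not reproduce any real instrument.

    RISK_WEIGHTS = {
        # Hypothetical weights per questionnaire item; real tools use many
        # more items and statistically fitted weights.
        "parent_incarcerated": 1.5,        # yes = 1, no = 0
        "acquaintances_using_drugs": 1.0,  # number of acquaintances
        "school_fights": 0.8,              # frequency of fights at school
        "prior_convictions": 2.0,          # number of prior convictions
    }

    def risk_score(answers):
        # Weighted sum of the answers; a higher score is read as a higher
        # supposed likelihood of re-offending.
        return sum(RISK_WEIGHTS[item] * answers.get(item, 0)
                   for item in RISK_WEIGHTS)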

These systems have been very controversial as they seem to perpetuate negative stereotypes regarding racial minorities and poor individuals. For instance, while the questions in the tests never ask about a person’s race,160 it is possible to find questions such as: “‘Was one of your parents ever sent to jail or prison?’ ‘How many of your friends/acquaintances are taking drugs illegally?’ and ‘How often did you get in fights while at school?’”161 or “How many prior convictions have you had?”162

The framework used to develop these questions is clearly skewed. While no specific questions on race or economic status are asked, it is clear that the answers provided by a convicted offender brought up by a well-off family in an affluent neighbourhood will tend to fall on the lower-risk side of the scale compared with those given by someone from a poorer background.163 As it turns out, the latter are generally people from racial minorities who, amongst other things and as multiple studies show, are more likely to be stopped by the police.164 Furthermore, given that the risk score is drawn from such a large pool of questions, even when a person with a privileged background has prior convictions, the answers to the rest of the questions will, in many cases, lower their risk score.165
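
This dilution effect can be made concrete with the hypothetical scoring sketch above: two invented profiles with identical prior convictions receive very different scores once the background-related items are counted.

    # Same number of prior convictions, different backgrounds (invented data).
    privileged = {"prior_convictions": 2}
    disadvantaged = {"prior_convictions": 2, "parent_incarcerated": 1,
                     "acquaintances_using_drugs": 3, "school_fights": 2}

    print(risk_score(privileged))     # 4.0
    print(risk_score(disadvantaged))  # 10.1

The prior convictions count identically for both, but the remaining items, none of which mentions race or income, pull the two scores far apart.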

In addition, it is important to highlight that one of the recidivism risk tools used by several US prison systems directly asks questions regarding offenders’ economic status, such as “How often do you have barely enough money to get by?”166 The example of recidivism algorithms therefore also paints a deeply worrying picture of the apparent general acceptance of using individuals’ economic status as an element directly linked to their risk of reoffending.

In Spain, individual predictive algorithms are used for two very specific purposes in law enforcement. The Spanish National Police use VeriPol, a programme that uses machine learning algorithms to detect when an individual has filed a false robbery report.167 Another algorithm, VioGén, estimates the risk that victims of gender-based violence will suffer further attacks.168
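
VeriPol is reported to analyse the wording of the reports themselves. The sketch below shows only the general technique, supervised text classification, on a toy invented dataset; it bears no relation to VeriPol’s actual features, training data or model.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy, invented examples; label 1 marks reports later found to be false.
    reports = [
        "Two men threatened me with a knife and took my phone",
        "My wallet was stolen on the bus, I did not see anyone",
        "A stranger grabbed my bag and ran, a witness saw him",
        "Someone stole my brand new phone, I remember no details",
    ]
    labels = [0, 0, 0, 1]

    vectoriser = TfidfVectorizer()
    model = LogisticRegression().fit(vectoriser.fit_transform(reports), labels)

    # Estimated probability that a new report is false, under the toy model.
    new_report = vectoriser.transform(["My phone was stolen, I saw nothing"])
    print(model.predict_proba(new_report)[0, 1])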
