1.2 Statistics and the Law

The book does not focus on the use of statistics and probabilistic thinking for legal decision making, other than by occasional reference. Also, neither the role of statistical experts as expert witnesses presenting statistical assessments of data nor their role as consultants preparing analyses for counsel is discussed. There is a distinction between these two roles (Fienberg 1989, Tribe 1971). The main focus of this book is on the assessment of evidence for forensic scientists, in particular for identification purposes. The process of addressing the issue of whether or not a particular item came from a particular source is most properly termed individualisation. Kirk (1963) defined criminalistics as ‘the science of individualization’ (at p. 236), but established forensic and judicial practice has led to the process being termed identification. The latter terminology will be used throughout this book. An identification, however, is more correctly defined as ‘the determination of some set to which an object belongs or the determination as to whether an object belongs to a given set’ (Kingston 1965a). Further discussion is given by Kwan (1977), Evett et al. (1998), and Champod et al. (2016b). For a critical discussion of individualisation as a decision, see Cole (2014), Biedermann et al. (2008a), and Saks and Koehler (2008). More details are given in Section 2.5.9.

For example, in a case involving a broken window, similarities may be found between the refractive indices of fragments of glass found on the clothing of a PoI and the refractive indices of fragments of glass from the broken window. The assessment of this evidence, in consideration of the association or otherwise of the PoI with the scene of the crime, is part of the focus of this book.
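
As a foretaste of the kind of assessment developed later in the book, the following is a minimal sketch of a likelihood ratio calculation for a single refractive index measurement, assuming normal models for measurement error and for the variation of refractive index in glass at large. All numerical values, and the simple two-proposition setup, are invented for illustration; they are not taken from the book, whose fuller treatment of glass evidence is more elaborate.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd):
    """Density of a Normal(mean, sd) distribution at x."""
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

# All numbers below are invented for illustration only.
x = 1.51907            # refractive index of a fragment from the PoI's clothing
window_mean = 1.51910  # mean refractive index of control fragments from the window
sigma = 0.00004        # assumed measurement standard deviation
pop_mean = 1.51820     # assumed mean refractive index in a survey of glass at large
pop_sd = 0.00300       # assumed spread of refractive indices in that survey

# Likelihood ratio: the density of the measurement if the fragment came from
# the broken window, divided by its density if it came from some other,
# unrelated source of glass.
lr = normal_pdf(x, window_mean, sigma) / normal_pdf(x, pop_mean, pop_sd)
print(f"likelihood ratio = {lr:.0f}")  # values much greater than 1 support the association
```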

For those interested in the issues of statistics and the law beyond those of forensic science, in the sense used in this book, there are several books available, and some of these are discussed briefly below.

‘The evolving role of statistical assessments as evidence in the courts’ is the title of a report, edited by Fienberg (1989), by the Panel on Statistical Assessments as Evidence in the Courts formed by the Committee on National Statistics and the Committee on Research on Law Enforcement and the Administration of Justice of the United States, and funded by the National Science Foundation. Through the use of case studies, the report reviews the use of statistics in selected areas of litigation, such as employment discrimination, antitrust litigation, and environmental law. One case study is concerned with identification in a criminal case. Such a matter is the concern of this book, and the ideas relevant to this case study, which involves the evidential worth of similarities amongst human head hair samples, will be discussed in greater detail later (Section 3.5.5). The report makes various recommendations, relating to the role of the expert witness, pretrial discovery, the provision of statistical resources, the role of court‐appointed experts, the enhancement of the capability of the fact‐finder, and statistical education for lawyers.

Two books that take the form of textbooks on statistics for lawyers are Vito and Latessa (1989) and Finkelstein and Levin (2015). The former focusses on the presentation of statistical concepts commonly used in criminal justice research. It provides criminological examples to demonstrate the calculation of basic statistics. The latter introduces rather more advanced statistical techniques and again uses case studies to illustrate the techniques.

Historically, the particular area of discrimination litigation is covered by a set of papers edited by Kaye and Aickin (1986). This starts by outlining the legal doctrines that underlie discrimination litigation. In particular, there is a fundamental issue relating to discrimination in hiring. The definition of the relevant market from which an employer hires has to be made very clear. For example, consider the case of a man who applies, but is rejected, for a secretarial position. Is the relevant population the general population, the representation of men amongst secretaries in the local labour force, or the percentage of male applicants? The choice of a suitable reference population is also one with which the forensic scientist has to be concerned. This is discussed at several points in this book, see, for example, Sections 5.5.3.4 and 6.1.

Another textbook, which comes in two volumes, is Gastwirth (1998a,b). The book is concerned with civil cases and ‘is designed to introduce statistical concepts and their proper use to lawyers and interested policy makers’ (volume 1, p. xvii). Two areas are stressed, which are usually given less emphasis in most statistical textbooks. The first area is concerned with measures of relative or comparative inequality. These are important because many legal cases are concerned with issues of fairness or equal treatment. The second area is concerned with the combination of results of several related statistical studies. This is important because existing administrative records or currently available studies often have to be used to make legal decisions and public policy; it is not possible to undertake further research. Gastwirth (2000) has also edited a collection of essays on statistical science in the courtroom, some of which are directly relevant for this current book and will be referred to as appropriate.

A collection of papers on Statistics and Public Policy has been edited by Fairley and Mosteller (1977). One issue in the book, which relates to a particularly infamous case, the Collins case, is discussed in detail later (Section 3.4). Other articles concern policy issues and decision making.

Of further interest is a book (Kadane 2008) explicitly entitled ‘Statistics and the Law’, which considers the question ‘how can lawyers and statisticians best collaborate in a court of law to present statistics in the most clear and persuasive manner?’. Through case studies on employment, jury behaviour, fraud, taxes, and aspects of patents, it clarifies what a statistician and what a lawyer should each know for a fruitful collaboration.

Other recent publications on the interaction between law and statistics are, for example, Dawid et al. (2014), Kadane (2018a,b), Kaye (2017a,b), and Gastwirth (2017).

The remit of this book is one that is not covered by these others in great detail. The use of statistics in forensic science in general is discussed in a collection of essays edited by Aitken and Stoney (1991). This book describes statistical procedures for the evaluation of evidence for forensic scientists. This will be done primarily through a Bayesian approach, the principle of which was described in Peirce (1878). It was developed further in the work of I.J. Good and A.M. Turing as code‐breakers at Bletchley Park during World War Two. A brief review of the history was given in Good (1991). A history of the Bayesian approach for a lay audience was given in Bertsch McGrayne (2011). An essay on the topic of probability and the weighing of evidence was written by Good (1950). This also referred to entropy (Shannon 1948), the expected amount of information from an experiment, and Good remarked that the expected weight of evidence in favour of a hypothesis H and against its complement H̄ is equal to the difference of the entropies assuming H̄ and H, respectively. A brief discussion of a frequentist approach and the problems associated with it is given in Section 3.6 (see also Taroni et al. 2016). A general review of the Bayesian approach was given by Fienberg (2006).
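
To make the quantities in Good's remark concrete, the following display writes out the standard definitions: the weight of evidence is the logarithm of the likelihood ratio, and its expectation when H is true decomposes into a difference of entropy-like terms. The notation W and the outcome variable e are introduced here for illustration; this is the usual cross-entropy decomposition, sketched from the standard definitions rather than quoted from Good.

```latex
% Weight of evidence (Good 1950): the log likelihood ratio for H against
% its complement \bar{H}, provided by evidence E.
\[
  W(H:E) \;=\; \log \frac{\Pr(E \mid H)}{\Pr(E \mid \bar{H})}.
\]
% Its expectation when H is true, taken over the possible outcomes e of the
% experiment, splits into a difference of entropy-like terms:
\[
  \mathbb{E}\,[\,W \mid H\,]
  \;=\; \sum_{e} \Pr(e \mid H)\,\log\frac{\Pr(e \mid H)}{\Pr(e \mid \bar{H})}
  \;=\; \Bigl(-\sum_{e} \Pr(e \mid H)\,\log \Pr(e \mid \bar{H})\Bigr)
      \;-\; \Bigl(-\sum_{e} \Pr(e \mid H)\,\log \Pr(e \mid H)\Bigr),
\]
% i.e. the cross-entropy computed with probabilities under \bar{H} minus the
% Shannon entropy (Shannon 1948) under H.
```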

It is of interest to note that a high proportion of situations involving the so‐called objective presentation of statistical evidence uses the frequentist approach with tests of significance (Fienberg and Schervish 1986). However, Fienberg and Schervish go on to say that the majority of examples cited for the use of the Bayesian approach are in the area of identification evidence. It is this area that is the main focus of this book, and it is Bayesian analyses that will form the basis for the evaluation of evidence as discussed here. Examples of the applications of such analyses to legal matters include Cullison (1969), Finkelstein and Fairley (1970, 1971), Fairley (1973), Lempert (1977), Lindley (1975, 1977b,c), Fienberg and Kadane (1983), Lempert (1986), Redmayne (1995, 1997), Friedman (1996), Redmayne (2002), Anderson et al. (2005), Robertson et al. (2016), and Adam (2016).

Another approach that will not be discussed here is that of Shafer (1976, 1982). This concerns so‐called belief functions; see Section 3.1. The theory of belief functions is a very sophisticated theory for assessing uncertainty that endeavours to answer criticisms of both the frequentist and Bayesian approaches to inference. Belief functions are non‐additive in the sense that belief in an event A [denoted Bel(A)] and belief in the opposite of A [denoted Bel(Ā)] do not sum to 1. See also Shafer (1978) for a historical discussion of non‐additivity. Further discussion is beyond the scope of this book. Practical applications are few. One such, however, is to the evaluation of evidence concerning the refractive index of glass (Shafer 1982). More recent developments of the role of belief functions in law for burdens of proof and in forensic science, with a discussion of the island problem (Section 6.1.6.3) and parental identification, are given in Nance (2019) and Kerkvliet and Meester (2016).
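
As a minimal numerical illustration of non-additivity, using the standard definitions from the theory of belief functions, consider the sketch below; the two-element frame and the mass assignments are invented for the example.

```python
def belief(event, masses):
    """Bel(event): the total mass committed to subsets of the event."""
    return sum(m for focal, m in masses.items() if set(focal) <= set(event))

# A two-element frame {a, b}; the masses below are invented for illustration.
# Mass 0.3 is committed to {a}; the remaining 0.7 is committed to the whole
# frame {a, b}, expressing ignorance rather than support for either element.
masses = {('a',): 0.3, ('a', 'b'): 0.7}

bel_a = belief(('a',), masses)      # 0.3
bel_not_a = belief(('b',), masses)  # 0.0 -- no mass is committed to {b} alone
print(bel_a + bel_not_a)            # 0.3, strictly less than 1: non-additivity
```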

It is very tempting when assessing evidence to try to determine a value for the probability of the so‐called probandum of interest (or the ultimate issue) such as the true guilt of a PoI (as distinct from a verdict of guilty, which may or may not be correct), or a value for the odds in favour of guilt, and perhaps even to reach a decision regarding the PoI's guilt. However, this is the role of the jury and/or judge. It is not the role of the forensic scientist or statistical expert witness to give an opinion on this (Evett 1983). It is permissible for the scientist to say that the evidence is 1000 times more likely, say, if the PoI is the offender than if he is not the offender. It is not permissible to interpret this to say that, because of the evidence, it is 1000 times more likely that the PoI is the offender than that he is not; the sketch below illustrates the difference. Some of the difficulties associated with assessments of probabilities are discussed by Tversky and Kahneman (1974) and are further described in Section 2.5. An appropriate representation of probabilities is useful because it fits the analytic device most used by lawyers, namely, the creation of a story. This is a narration of events ‘abstracted from the evidence and arranged in a sequence to persuade the fact‐finder that the story told is the most plausible account of “what really happened” that can be constructed from the evidence that has been or will be presented’ (Anderson and Twining 1998, p. 166). Also of relevance is Kadane and Schum (1996), which provides a Bayesian analysis of evidence in the Sacco–Vanzetti case (Sacco 1969) based on subjectively determined probabilities and assumed relationships amongst evidential events. A similar approach is presented in Section 2.9.
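
The distinction just drawn can be made explicit with the odds form of Bayes' theorem: the scientist's likelihood ratio multiplies the prior odds, which are for the court and not the scientist to assess, to give the posterior odds. The following is a minimal sketch; the prior odds used are invented purely for illustration.

```python
# Odds form of Bayes' theorem: posterior odds = likelihood ratio * prior odds.
likelihood_ratio = 1000.0  # evidence is 1000 times more likely if the PoI is the offender

# The likelihood ratio alone says nothing about the odds that the PoI is the
# offender; that also requires prior odds, which are for the court, not the
# scientist. Two illustrative (invented) priors:
for prior_odds in (1 / 10_000, 1 / 10):
    posterior_odds = likelihood_ratio * prior_odds
    prob = posterior_odds / (1 + posterior_odds)
    print(f"prior odds {prior_odds:g} -> posterior odds {posterior_odds:g} "
          f"(Pr = {prob:.3f})")

# With prior odds of 1/10000 the posterior odds are only 0.1 (Pr = 0.091):
# far from 'it is 1000 times more likely that the PoI is the offender',
# which is the transposed conditional the paragraph above warns against.
```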
