Validity and Reliability

The concepts of validity – is the way in which you are collecting your data appropriate for answering the questions you wish to answer? – and reliability – would another researcher collecting the same data in the same way produce much the same results? – are clearly related to that of generalisability. Each addresses aspects of how other researchers, viewing your research results, would judge their quality and usefulness.

Kazdin (1981), working in the context of clinical psychology, notes that ‘The case study has been discounted as a potential source of scientifically validated inferences, because threats to internal validity cannot be ruled out in the manner achieved in experimentation’ (p. 183). However, he then identifies a set of procedures which can, at least partly, overcome these threats:

Specific procedures that can be controlled by the clinical investigator can influence the strength of the case demonstration. First, the investigator can collect objective data in place of anecdotal report information. Clear measures are needed to attest to the fact that change has actually occurred. Second, client performance can be assessed on several occasions, perhaps before, during, and after treatment. The continuous assessment helps rule out important rival hypotheses related to testing, which a simple pre- and posttreatment assessment strategy does not accomplish. Third, the clinical investigator can accumulate cases that are treated and assessed in a similar fashion. Large groups are not necessarily needed but only the systematic accumulation of a number of clients. As the number and heterogeneity of clients increase and receive treatment at different points in time, history and maturation become less plausible as alternative rival hypotheses. (p. 190)

That Kazdin is working within a scientific framework is clear from his use of words like ‘objective’ and ‘fact’, and from his reliance on careful measurement (he is also clearly discussing quantitative case studies). His two other suggested strategies are similar to those advocated to enhance generalisation, and would also be helpful for qualitative and social researchers: the assessment of the case over time (the use of time series research designs in combination with case studies is discussed further in Chapter 6), and the accumulation of multiple case studies.

Riege (2003) considers which validity and reliability tests can most appropriately be used at each stage of case study research. He argues that:

The validity and reliability of case study research is a key issue… A high degree of validity and reliability provides not only confidence in the data collected but, most significantly, trust in the successful application and use of the results… The four design tests of construct validity, internal validity, external validity and reliability are commonly applied to the theoretical paradigm of positivism. Similarly, however, they can be used for the realism paradigm, which includes case study research… In addition to using the four ‘traditional’ design tests, the application of four ‘corresponding’ design tests is recommended to enhance validity and reliability, that is credibility, trustworthiness (transferability), confirmability and dependability. (p. 84)

Riege here brings in the notion of paradigms, which can be expressed more simply as our ways of thinking about the world, contrasting the positivist paradigm (the foundation of conventional science, which argues that there is a real world which we can measure and understand) with what he calls realism (which others would call post-positivist: the belief that, while there is a real world out there which we may try to comprehend, we accept that we cannot fully do so). The earlier discussion of case study in the context of qualitative and quantitative forms of research would suggest, however, that case studies could be carried out within both the positivist and realist paradigms, and, indeed, in others as well (alternative paradigms, notably the positivist and interpretivist, are further discussed in the section on Alternative Methodological Approaches in Chapter 9).

Riege also introduces the notion of different forms or measures of validity, identifying three types (other authors identify more or different types, and/or give them different names):

• construct (whether the constructs which are being used to measure concepts of interest are appropriate)

• internal (the quality of the explanation of the phenomena examined)

• external (whether the findings can be extrapolated beyond the case studied; the equivalent of generalisation).

Most interestingly, however, he introduces four alternative, or parallel, ways of judging the quality of a piece of case study research: credibility, trustworthiness (transferability), confirmability and dependability (see also Lee, Mishna and Brennenstuhl 2010). These also have the benefit of being phrased in more common-sense language.

Such alternative criteria for judging the quality or worth of research have been taken up quite widely by qualitative researchers. Box 3.3 gives four different recent formulations, showing the alternative terms used by – or which may be applied to – positivist/post-positivist, interpretivist and/or constructivist, or quantitative and qualitative, forms of research. There are clearly many overlaps between these formulations.
