Understanding Case Study Research - Malcolm Tight
Box 3.3 Alternative Criteria for Judging the Quality of Research
Denzin and Lincoln (2005b, p. 24)
positivist/post-positivist paradigms – internal and external validity
constructivist paradigm – trustworthiness, credibility, transferability, confirmability
Guba and Lincoln (2005, p. 196)
positivism/post-positivism – conventional benchmarks of ‘rigor’: internal and external validity, reliability and objectivity
constructivism – trustworthiness and authenticity, including catalyst for action
Farquhar (2012, pp. 100–110)
classical approaches – construct validity, internal validity, reliability, generalizability
interpretivist views – credibility, transferability, dependability, confirmability
an ethnographic contribution – authenticity, plausibility, criticality
Denscombe (2014, pp. 297–300)
quantitative research – validity, reliability, generalizability, objectivity
qualitative research – credibility, dependability, transferability, confirmability
Taking Denscombe’s formulation as an example, credibility is explained as ‘the extent to which qualitative researchers can demonstrate that their data are accurate and appropriate’ (p. 297), perhaps through the use of techniques like respondent validation (asking your respondents to comment on and confirm your findings), grounded data (provided through extensive fieldwork) and triangulation. Dependability involves the researcher demonstrating that ‘their research reflects procedures and decisions that other researchers can “see” and evaluate in terms of how far they constitute reputable procedures and reasonable decisions’ (p. 298, emphasis in original).
Transferability has to do with the researcher supplying ‘information enabling others to infer the relevance and applicability of the findings (to other people, settings, case studies, organizations, etc.)’ (p. 299, emphasis in original). And confirmability involves recognising the role of the self in qualitative research and keeping an open mind, by, for example, not neglecting data that do not fit the preferred analysis and checking rival explanations.
Interestingly, Farquhar also brings in an ethnographic contribution, which she derives from Golden-Biddle and Locke (1993). Their concern was with what makes ethnographic writing convincing (or not), and they identified three elements of convincingness: authenticity, plausibility and criticality. These elements could, of course, be seen as analogues for credibility, dependability and transferability.
There are, then, other languages available to case study researchers – particularly, perhaps, those approaching their case studies from a qualitative perspective – with which to evaluate and justify the quality of their research and findings. Most researchers, though, have sought to remain true to the older, more conventional ideas, derived from quantitative/positivist research, of validity and reliability when assessing the results of case study (and other forms of) research.
Thus, Gibbert, Ruigrok and Wicki (2008) offer a meta-analysis of 159 articles based on case studies, published during the period 1995–2000 in ten management journals, focusing on their methodological sophistication. They conclude that researchers have placed too much emphasis on external validity and need to pay more attention to internal and construct validity.
Diefenbach (2009), in an article provocatively titled ‘Are Case Studies More Than Sophisticated Storytelling?’, identifies 16 criticisms of case study research, particularly when it is based on interviews. These criticisms relate to all aspects of research design, data collection and analysis, but focus in particular on issues of validity and reliability. He concludes that ‘many qualitative case studies either do not go far beyond a mere description of particular aspects or the generalisations provided are not based on a very sound methodological basis’ (p. 892).
One of the strongest contemporary advocates of case study, Yin (2013), offers rather more hope in this respect. He discusses a range of approaches that have been taken towards addressing validity and generalisation in case study evaluations: for validity, alternative explanations, triangulation and logic models (which represent ‘the key steps or events within an intervention and then between the intervention and its outcomes’, p. 324); for generalisation, analytic generalisation and theory. In the particular context of case study evaluations, he recommends paying more attention to the questions posed for the case study, being clearer about what it is that makes the case study complex, and focusing carefully on the methods used.
As with generalisation, then, case study researchers need to be aware of, and to address, the issues of validity and reliability posed by their research. You may choose to do this in a conventional positivist/post-positivist fashion, using the language of construct, external and internal validity and of reliability. You may choose to locate your case study in a constructivist/interpretivist paradigm, and use the language of trustworthiness, credibility, transferability and confirmability. Or you may adopt the procedures suggested by other case study researchers, such as Yin.