KEY CHAPTER CONCEPTS
A prominent misconception is that EIP implies an overly restrictive hierarchy of evidence – one that only values evidence produced by tightly controlled quantitative studies employing experimental designs.
EIP does not imply a black-and-white evidentiary standard in which evidence has no value unless it is based on experiments.
Not all EIP questions imply the need to make causal inferences about intervention effects.
Different research hierarchies are needed for different types of EIP questions.
Qualitative studies tend to employ flexible designs and subjective methods – often with small samples of research participants – in seeking to generate tentative new insights, deep understandings, and theoretically rich observations.
Quantitative studies place more emphasis on producing precise, objective statistical findings that can be generalized to populations, or on designs whose logical arrangements are geared to testing hypotheses about whether predicted causes really produce predicted effects.
Although some scholars who favor qualitative inquiry misperceive EIP as devaluing qualitative research, many specific kinds of EIP questions call for a research hierarchy in which qualitative studies reside at the top.
Correlational and qualitative studies can be useful in identifying factors that predict desirable or undesirable outcomes.
Qualitative studies would reside at the top of a research hierarchy for EIP questions that ask: “What can I learn about clients, service delivery, and targets of intervention from the experiences of others?”
Various kinds of studies can be used to answer the question: “What assessment tool should be used?”
When seeking evidence about whether a particular intervention – and not some alternative explanation – is the real cause of a particular outcome, experiments are near the top of the hierarchy of research designs, followed by quasi-experiments with relatively low vulnerabilities to selectivity biases.
Because of the importance of replication, systematic reviews and meta-analyses – which attempt to synthesize and develop conclusions from the diverse studies and their disparate findings – reside above experiments on the evidentiary hierarchy for EIP questions about effectiveness.
Some postmodern philosophies and political voices dismiss the value of using experimental design logic and unbiased, validated measures to assess the effects of interventions, arguing that social reality is unknowable and that objectivity is impossible and not worth pursuing. Critics have portrayed such philosophies as logically incoherent examples of all-or-nothing thinking.