Practitioner's Guide to Using Research for Evidence-Informed Practice

Genre: Psychotherapy and Counseling | Publisher: John Wiley & Sons Limited | ISBN: 9781119858584


Book Description

The latest edition of an essential text to help students and practitioners distinguish between research studies that should and should not influence practice decisions.

Now in its third edition, Practitioner's Guide to Using Research for Evidence-Informed Practice delivers an essential and practical guide to integrating research appraisal into evidence-informed practice. The book walks you through the skills, knowledge, and strategies you can use to identify significant strengths and limitations in research. The ability to appraise the veracity and validity of research will improve your service provision and practice decisions. By teaching you to be a critical consumer of modern research, this book helps you avoid treatments based on fatally flawed research and methodologies.

Practitioner's Guide to Using Research for Evidence-Informed Practice, Third Edition offers:

- An extensive introduction to evidence-informed practice, including explorations of unethical research and discussions of social justice in the context of evidence-informed practice.
- Explanations of how to appraise studies on intervention efficacy, including the criteria for inferring effectiveness and critically examining experiments.
- Discussions of how to critically appraise studies for alternative evidence-informed practice questions, including nonexperimental quantitative studies and qualitative studies.

A comprehensive and authoritative blueprint for critically assessing research studies, interventions, programs, policies, and assessment tools, Practitioner's Guide to Using Research for Evidence-Informed Practice belongs on the bookshelves of students and practitioners of the social sciences.

Contents

Allen Rubin. Practitioner's Guide to Using Research for Evidence-Informed Practice

Table of Contents

List of Tables

List of Illustrations

Guide

Pages

PRACTITIONER'S GUIDE TO USING RESEARCH FOR EVIDENCE-INFORMED PRACTICE

PREFACE

Organization and Special Features

Significant Additions to This Edition

ACKNOWLEDGEMENTS

ABOUT THE AUTHORS

ABOUT THE COMPANION WEBSITE

1 Introduction to Evidence-Informed Practice (EIP)

1.1 Emergence of EIP

1.2 Defining EIP

1.3 Types of EIP Questions

1.3.1 What Factors Best Predict Desirable or Undesirable Outcomes?

1.3.2 What Can I Learn about Clients, Service Delivery, and Targets of Intervention from the Experiences of Others?

1.3.3 What Assessment Tool Should Be Used?

1.3.4 Which Intervention, Program, or Policy Has the Best Effects?

1.3.5 What Are the Costs of Interventions, Policies, and Tools?

1.3.6 What about Potential Harmful Effects?

1.4 EIP Practice Regarding Policy and Social Justice

1.5 EIP and Black Lives Matter

1.6 Developing an EIP Practice Process Outlook

1.6.1 Critical Thinking

1.7 EIP as a Client-Centered, Compassionate Means, Not an End unto Itself

1.8 EIP and Professional Ethics

1.8.1 What about Unethical Research?

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

ADDITIONAL READINGS

2 Steps in the EIP Process

2.1 Step 1: Question Formulation

2.2 Step 2: Evidence Search

2.2.1 Some Useful Websites

2.2.2 Search Terms

2.2.3 An Internet Search Using Google Scholar and PsycINFO

2.2.4 A Time-Saving Tip

2.3 Step 3: Critically Appraising Studies and Reviews

2.4 Step 4: Selecting and Implementing the Intervention

2.4.1 Importance of Practice Context

BOX 2.1 The Importance of Practice Context: A Policy Example

2.4.2 How Many Studies Are Needed?

2.4.3 Client-Informed Consent

2.5 Step 5: Monitor Client Progress

2.6 Feasibility Constraints

2.6.1 Strategies for Overcoming Feasibility Obstacles

BOX 2.2 Strategies for Overcoming Feasibility Obstacles

2.7 But What about the Dodo Bird Verdict?

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

ADDITIONAL READINGS

3 Research Hierarchies: Which Types of Research Are Best for Which Questions?

3.1 More than One Type of Hierarchy for More than One Type of EIP Question

3.2 Qualitative and Quantitative Studies

3.3 Which Types of Research Designs Apply to Which Types of EIP Questions?

3.3.1 What Factors Best Predict Desirable and Undesirable Outcomes?

3.3.2 What Can I Learn about Clients, Service Delivery, and Targets of Intervention from the Experiences of Others?

3.3.3 What Assessment Tool Should Be Used?

3.3.4 What Intervention, Program, or Policy Has the Best Effects?

3.3.5 Matrix of Research Designs by Research Questions

3.3.6 Philosophical Objections to the Foregoing Hierarchy: Fashionable Nonsense

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

ADDITIONAL READINGS

4 Criteria for Inferring Effectiveness: How Do We Know What Works?

4.1 Internal Validity

4.1.1 Threats to Internal Validity

4.1.2 Selectivity Bias

4.1.3 Random Assignment

4.1.4 Inferring the Plausibility of Causality

4.1.5 Degree of Certainty Needed in Making EIP Decisions

4.2 Measurement Issues

4.3 Statistical Chance

4.4 External Validity

4.5 Synopses of Fictitious Research Studies

4.5.1 Study 1 Synopsis: Evaluation of a Faith-Based In-Prison Program to Prevent Rearrests

4.5.2 Study 2 Synopsis: Preventing Re-arrests among Mexican American Inmates

4.5.3 Study 3 Synopsis: Preventing Police Brutality with People of Color

4.5.4 Study 4 Synopsis: Study 3 Replicated in Another City

4.5.5 Critical Appraisal of Synopsis 1

4.5.6 Critical Appraisal of Synopsis 2

4.5.7 Critical Appraisal of Synopsis 3

4.5.8 Critical Appraisal of Synopsis 4

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

EXERCISE FOR CRITICALLY APPRAISING PUBLISHED ARTICLES

ADDITIONAL READINGS

5 Critically Appraising Experiments

5.1 Classic Pretest-Posttest Control Group Design

5.2 Posttest-Only Control Group Design

5.3 Solomon Four-Group Design

5.4 Alternative Treatment Designs

5.5 Dismantling Designs

5.6 Placebo Control Group Designs

5.7 Experimental Demand and Experimenter Expectancies

5.8 Obtrusive Versus Unobtrusive Observation

5.9 Compensatory Equalization and Compensatory Rivalry

5.10 Resentful Demoralization

5.11 Treatment Diffusion

5.12 Treatment Fidelity

5.13 Practitioner Equivalence

5.14 Differential Attrition

5.15 Synopses of Research Studies

5.15.1 Study 1 Synopsis

5.15.2 Study 2 Synopsis

5.15.3 Critical Appraisal of Synopsis 1

5.15.4 Critical Appraisal of Synopsis 2

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

EXERCISE FOR CRITICALLY APPRAISING PUBLISHED ARTICLES

ADDITIONAL READINGS

6 Critically Appraising Quasi-Experiments: Nonequivalent Comparison Groups Designs

6.1 Nonequivalent Comparison Groups Designs

6.1.1 Are the Groups Comparable?

6.1.2 Grounds for Assuming Comparability

6.2 Additional Logical Arrangements to Control for Potential Selectivity Biases

6.2.1 Multiple Pretests

6.2.2 Switching Replication

6.2.3 Nonequivalent Dependent Variables

6.3 Statistical Controls for Potential Selectivity Biases

6.3.1 When the Outcome Variable Is Categorical

6.3.2 When the Outcome Variable Is Quantitative

6.4 Creating Matched Comparison Groups Using Propensity Score Matching

6.4.1 Propensity Score Matching Using a Policy Example

6.5 Pilot Studies

6.6 Synopses of Research Studies

6.6.1 Study 1 Synopsis

6.6.2 Study 2 Synopsis

6.6.3 Critical Appraisal of Synopsis 1

6.6.4 Critical Appraisal of Synopsis 2

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

EXERCISE FOR CRITICALLY APPRAISING PUBLISHED ARTICLES

ADDITIONAL READINGS

7 Critically Appraising Quasi-Experiments: Time-Series Designs and Single-Case Designs

7.1 Simple Time-Series Designs

7.2 Multiple Time-Series Designs

7.3 Single-Case Designs

7.3.1 AB Designs

7.3.2 ABAB Designs

7.3.3 Multiple Baseline Designs

7.3.4 Multiple Component Designs

7.3.5 External Validity

7.4 Synopses of Research Studies

7.4.1 Study 1 Synopsis

7.4.2 Study 2 Synopsis

7.4.3 Critical Appraisal of Synopsis 1

7.4.4 Critical Appraisal of Synopsis 2

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

EXERCISE FOR CRITICALLY APPRAISING PUBLISHED ARTICLES

ADDITIONAL READING

8 Critically Appraising Systematic Reviews and Meta-Analyses

8.1 Advantages of Systematic Reviews and Meta-Analyses

8.2 Risks in Relying Exclusively on Systematic Reviews and Meta-Analyses

8.3 Where to Start

8.4 What to Look for When Critically Appraising Systematic Reviews

8.4.1 The PRISMA Statement

8.4.2 Bias

8.4.3 The Cochrane and Campbell Collaborations

8.4.4 Inclusion and Exclusion Criteria

8.4.5 Does the Review Critically Appraise the Quality of Included Studies?

8.4.6 Comprehensiveness

8.4.7 Transparency

8.5 What Distinguishes a Systematic Review from Other Types of Reviews?

8.6 What to Look for When Critically Appraising Meta-Analyses

8.6.1 Effect Size

8.6.2 Correlations

8.6.3 The d-Index

8.6.4 Odds Ratios, Risk Ratios, and Number Needed to Treat Estimates

8.6.5 Correlating Effect Size with Other Variables

8.6.6 Some Caveats

8.6.7 Clinical Significance

8.7 Synopses of Research Studies

8.7.1 Study 1 Synopsis

8.7.2 Study 2 Synopsis

8.7.3 Critical Appraisal of Synopsis 1

8.7.4 Critical Appraisal of Synopsis 2

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

EXERCISE FOR CRITICALLY APPRAISING PUBLISHED ARTICLES

ADDITIONAL READINGS

9 Critically Appraising Nonexperimental Quantitative Studies

9.1 Surveys

9.1.1 Measurement Issues: Was the Information Collected in a Valid, Unbiased Manner?

9.1.2 Sampling Issues: Were the Survey Respondents Representative of the Target Population?

9.1.2.1 Nonresponse bias

9.1.2.2 Sampling methods

9.1.2.3 Probability sampling

9.1.2.4 Potential bias in probability samples

9.1.2.5 Nonprobability sampling

9.1.3 Can Results from Nonprobability Samples Have Value?

9.1.4 Appropriateness of Data Analysis and Conclusions

9.2 Cross-Sectional and Longitudinal Studies

9.2.1 Cohort Studies and Panel Studies

9.3 Case-Control Studies

9.4 Synopses of Research Studies

9.4.1 Study 1 Synopsis

9.4.2 Study 2 Synopsis

9.4.3 Critical Appraisal of Synopsis 1

9.4.4 Critical Appraisal of Synopsis 2

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

EXERCISE FOR CRITICALLY APPRAISING PUBLISHED ARTICLES

ADDITIONAL READINGS

10 Critically Appraising Qualitative Studies

10.1 Qualitative Observation

10.2 Qualitative Interviewing

10.2.1 Life History

10.2.2 Focus Groups

10.3 Other Qualitative Methodologies

10.4 Qualitative Sampling

10.5 Grounded Theory

10.6 Alternatives to Grounded Theory

10.7 Frameworks for Appraising Qualitative Studies

10.7.1 Empowerment Standards

10.7.2 Social Constructivist Standards

10.7.3 Contemporary Positivist Standards

10.8 Mixed Model and Mixed Methods Studies

10.9 Synopses of Research Studies

10.9.1 Study 1 Synopsis

10.9.2 Study 2 Synopsis

10.9.3 Critical Appraisal of Synopsis 1

10.9.4 Critical Appraisal of Synopsis 2

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

EXERCISE FOR CRITICALLY APPRAISING PUBLISHED ARTICLES

ADDITIONAL READINGS

11 Critically Appraising, Selecting, and Constructing Assessment Instruments

11.1 Reliability

11.1.1 Internal Consistency Reliability

11.1.2 Test–Retest Reliability

11.1.3 Interrater Reliability

11.2 Validity

11.2.1 Face Validity

11.2.2 Content Validity

11.2.3 Criterion Validity

11.2.4 Construct Validity

11.2.5 Sensitivity

11.2.6 Cultural Responsivity

11.3 Feasibility

11.4 Sample Characteristics

11.5 Locating Assessment Instruments

11.6 Constructing Assessment Instruments

BOX 11.1 Guidelines for Constructing a Measurement Instrument

11.7 Synopses of Research Studies

11.7.1 Study 1 Synopsis

11.7.2 Study 2 Synopsis

11.7.3 Critical Appraisal of Synopsis 1

11.7.4 Critical Appraisal of Synopsis 2

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

EXERCISE FOR CRITICALLY APPRAISING PUBLISHED ARTICLES

ADDITIONAL READINGS

12 Monitoring Client Progress

12.1 A Practitioner-Friendly Single-Case Design

12.1.1 The B+ Design

12.1.2 Feasible Assessment Techniques

12.1.2.1 What to measure?

12.1.2.2 Who should measure?

12.1.2.3 With what measurement instrument?

12.1.2.3.1 Behavioral recording forms

12.1.2.3.2 Individualized rating scales

12.1.2.3.3 Standardized scales

12.1.2.4 When and where to measure?

12.2 Using Within-Group Effect-Size Benchmarks

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

ADDITIONAL READINGS

13 Appraising and Conducting Data Analyses in EIP

13.1 Introduction

13.2 Ruling Out Statistical Chance

13.3 What Else Do You Need to Know?

13.4 The .05 Cutoff Point Is Not Sacred!

13.5 What Else Do You Need to Know?

13.6 Calculating Within-Group Effect Sizes and Using Benchmarks

13.7 Conclusion

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

ADDITIONAL READING

14 Critically Appraising Social Justice Research Studies

14.1 Introduction

14.2 Evidence-Informed Social Action

14.3 What Type of Evidence?

BOX 14.1 A Twitter Disagreement between Two Deans

14.4 Participatory Action Research (PAR)

14.5 Illustrations of Other Types of Social Justice Research

14.6 Conclusion

BOX 14.2 A PAR Study with Indigenous Communities in Canada

BOX 14.3 PAR to Evaluate a Teen Dating Violence Prevention Program

BOX 14.4 Using PAR to Influence Welfare Policy

BOX 14.5 Making Ends Meet: How Single Mothers Survive Welfare and Low-Wage Work

BOX 14.6 Evicted: Poverty and Profit in the American City

BOX 14.7 Building a Better Cop

KEY CHAPTER CONCEPTS

REVIEW EXERCISES

ADDITIONAL READINGS

Note

GLOSSARY

REFERENCES

INDEX

WILEY END USER LICENSE AGREEMENT

Excerpt from the Book

THIRD EDITION

Allen Rubin

.....

When asking which approach has the best effects, we implicitly acknowledge that for some target problems there is more than one effective approach. For example, the book Programs and Interventions for Maltreated Children and Families (Rubin, 2012) contains 20 chapters on 20 different approaches whose effectiveness with maltreated children and their families has been empirically supported. Some of these programs and interventions are more costly than others. Varying costs are connected to factors such as the minimum degree level and amount of experience required in staffing, the extent and costs of practitioner training, caseload maximums, the number of treatment sessions required, materials and equipment, and so on. The child welfare field is not the only one in which more than one empirically supported approach can be found. And it is not the only one in which agency administrators or direct service practitioners are apt to deem some of these approaches to be unaffordable. An important part of practitioner expertise includes knowledge about the resources available to you in your practice context. Consequently, when searching for and finding programs or interventions that have the best effects, you should also ask about their costs. You may not be able to afford the approach with the best effects, and instead may have to settle for one with less extensive or less conclusive empirical support.

But affordability is not the only issue when asking about costs. Another pertains to the ratio of costs to benefits. For example, imagine that you were to find two empirically supported programs for reducing dropout rates in schools with high dropout rates. Suppose that providing the program with the best empirical support – let's call it Program A – costs $200,000 per school and that it is likely to reduce the number of dropouts per school by 100. That comes to $2,000 per reduced dropout. In contrast, suppose that providing the program with the second best empirical support – let's call it Program B – costs $50,000 per school and that it is likely to reduce the number of dropouts per school by 50. That comes to $1,000 per reduced dropout – half the cost per dropout of Program A.
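To make the arithmetic of this cost-to-benefit comparison concrete, here is a minimal illustrative sketch in Python. It simply reproduces the hypothetical figures from the passage above (Program A and Program B are not real programs, and the costs and dropout reductions are assumed purely for illustration):

```python
# Illustrative sketch of the cost-per-outcome comparison described above.
# The figures are the hypothetical values from the passage, not real study data.

def cost_per_reduced_dropout(total_cost, dropouts_prevented):
    """Return the cost of each dropout prevented by a program."""
    return total_cost / dropouts_prevented

programs = {
    "Program A": {"cost_per_school": 200_000, "dropouts_prevented": 100},
    "Program B": {"cost_per_school": 50_000, "dropouts_prevented": 50},
}

for name, p in programs.items():
    ratio = cost_per_reduced_dropout(p["cost_per_school"], p["dropouts_prevented"])
    print(f"{name}: ${ratio:,.0f} per reduced dropout")

# Output:
# Program A: $2,000 per reduced dropout
# Program B: $1,000 per reduced dropout
```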

.....
