Experimental Evaluation Design for Program Improvement

Table of Contents
Laura R. Peck. Experimental Evaluation Design for Program Improvement
Brief Contents
Detailed Contents
List of Boxes, Figures, and Tables
Volume Editors’ Introduction
Reference
About the Author
Acknowledgments
Chapter 1 Introduction
Box 1.1 Definition and Origins of the Term “Black Box” in Program Evaluation
The State of the Field
Large-Scale Experiments
Nudge or Opportunistic Experiments
Rapid-Cycle Evaluation
Meta-Analysis and Systematic Reviews
Getting Inside the Black Box
The Ethics of Experimentation
What This Book Covers
Questions and Exercises
Resources for Additional Learning
Chapter 2 Conceptual Framework: From Program Logic Model to Evaluation Logic Model
Program Logic Model
Evaluation Logic Model
Conclusion
Questions and Exercises
Resources for Additional Learning
Descriptions of Images and Figures
Chapter 3 The Basic Experimental Design Defined
Box 3.1 Implications of Threats to Validity for a Tutoring Program Evaluation
Random Assignment Explained
Box 3.2 Random Assignment Is Not Random Sampling
Box 3.3 Two Additional Features of Randomized Experiments in Practice
Box 3.4 A Sidebar on Semantics
The Basic (Two-Armed) Experimental Design
Treatment vs. Control (No Services)
Treatment vs. Control (Status Quo, “business as usual” or Existing Service Environment)
Treatment vs. Alternative Treatment
To Have a Control Group or Not to Have a Control Group?
Questions and Exercises
Resources for Additional Learning
Descriptions of Images and Figures
Chapter 4 Variants of the Experimental Design
Multi-Armed Designs
Factorial Designs
Multistage Designs
Staggered Introduction Designs
Blended Designs
Aligning Evaluation Design Options With Program Characteristics and Research Questions
Box 4.1 Examples of Components in Career Pathways Training Programs
Conclusion
Questions and Exercises
Descriptions of Images and Figures
Chapter 5 Practical Considerations and Conclusion
Some Practical Considerations
Sample Size Considerations
Random Assignment in Practice
What Local Programs Should Know About Generalizability
Box 5.1 How Larimer County Embraces Experimentation
Road Testing
Principles for Conducting High-Quality Evaluation
Box 5.2 Principles for Conducting Experiments in Practice
Questions and Exercises
Resources for Additional Learning
Appendix Doing the Math and Other Technical Considerations
Estimating Treatment Impacts
How to Interpret Results
Interpreting Null Results and the Role of Sample Size
Box A.1 Factors That Determine Minimum Detectable Effect Sizes
Handling Treatment Group No-Shows and Control Group Crossovers
Subgroup Analyses
Conclusion
Questions and Exercises
Resources for Additional Learning
References
Glossary
Index
Excerpt from the Book
Evaluation in Practice Series
Christina A. Christie & Marvin C. Alkin, Series Editors
.....
Rapid-cycle evaluation is another relatively recent development within the broader field of program evaluation. In part because of its nascency, it is not yet fully or definitively defined. Some scholars assert that rapid-cycle evaluation must be experimental in nature, whereas others define it as any quick-turnaround evaluation activity that provides feedback to ongoing program development and improvement. Regardless, rapid-cycle evaluations that use an experimental evaluation design are relevant to this book. To deliver quick turnaround, these evaluations tend to pose questions similar to those asked by nudge or opportunistic experiments and to rely on outcomes that can be measured in the short term and still be meaningful. Furthermore, the data that inform impact analyses for rapid-cycle evaluations tend to come from administrative sources that already exist and are therefore quicker to collect and analyze than survey or other new primary data.
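To make the idea concrete, the sketch below shows the kind of quick-turnaround impact calculation a rapid-cycle evaluation might run against existing administrative records, assuming participants were randomly assigned to a treatment or control group. The file and column names (program_records.csv, assigned_group, outcome) are hypothetical placeholders, not details from the book.

    # Minimal sketch: a quick impact estimate from an existing administrative
    # data extract, under the assumption of random assignment to two arms.
    import pandas as pd
    from scipy import stats

    records = pd.read_csv("program_records.csv")  # hypothetical admin extract

    treatment = records.loc[records["assigned_group"] == "treatment", "outcome"]
    control = records.loc[records["assigned_group"] == "control", "outcome"]

    # With random assignment, the simple difference in mean outcomes is an
    # unbiased estimate of the program's impact on this short-term outcome.
    impact = treatment.mean() - control.mean()
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

    print(f"Estimated impact: {impact:.3f} (p = {p_value:.3f})")

Because the outcome already sits in program records, an analysis like this can be rerun each cycle as the program is tweaked, which is what gives rapid-cycle evaluation its speed.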
The fourth set of evaluation research relevant to experiments involves meta-analysis, including tiered-evidence reviews. Meta-analysis involves quantitatively aggregating other evaluation results in order to ascertain, across studies, the extent and magnitude of program impacts observed in the existing literature. These analyses tend to prioritize larger and more rigorous studies, down-weighting results that are based on small samples or that use designs that do not meet criteria for establishing a causal connection between a program and change in outcomes. Indeed, some meta-analyses use only evidence that comes from experimentally designed evaluations. Likewise, evidence reviews—such as those provided by the What Works Clearinghouse (WWC) of the U.S. Department of Education—give their highest rating to evidence that comes from experiments. Because of this, I classify meta-analyses as a type of research that is relevant to experimentally designed evaluations.
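As an illustration of how meta-analyses favor larger, more precise studies, the sketch below pools several impact estimates using inverse-variance weights, a standard fixed-effect pooling approach; the effect sizes and standard errors are invented for illustration and do not come from any actual review.

    # Minimal sketch of inverse-variance (fixed-effect) pooling, which
    # automatically down-weights small-sample, noisier studies.
    import numpy as np

    # Per-study impact estimates and their standard errors (illustrative values).
    effects = np.array([0.30, 0.10, 0.25])
    std_errors = np.array([0.05, 0.20, 0.08])  # larger SE = smaller, noisier study

    # Precision weights: more precise studies count for more in the pooled estimate.
    weights = 1.0 / std_errors**2
    pooled_effect = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    print(f"Pooled effect: {pooled_effect:.3f}, 95% CI +/- {1.96 * pooled_se:.3f}")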
.....