Volume Editors’ Introduction
Impact evaluation is central to the practice and profession of evaluation. Emerging in the Great Society Era, the field of evaluation has deep roots in the social experiments of large-scale demonstration programs—Campbell’s utopian idea of an Experimenting Society. Since then, the fervent search for “what works”—for establishing the impact of social programs—has taken on many different forms. From the early emphasis on experimental and quasi-experimental designs, through the later emergence of systematic reviews and meta-analysis, and onward to the more recent and sustained push for evidence-based practice, proponents of experimental designs have succeeded in bringing attention to the central role of examining the effectiveness of social programs (however we choose to define it). There is a long and rich history of measuring impact in evaluation.
The landscape of impact evaluation designs and methods has grown and continues to grow. Innovative variants of and alternatives to traditional designs and approaches continue to emerge and gain prominence, addressing not only “what works” but also “what works, for whom, and under what circumstances” (Stern et al., 2012). For the novice (and perhaps even the seasoned) evaluator, the broadening array of designs and methods, not to mention the dizzying array of corresponding terminology, may evoke a mixed sense of methodological promise and peril, opportunity and apprehension. How can randomization be applied across multiple treatments, across multiple treatment components, and across stages of a program process? What exactly is the difference between multistage, staggered, and blended impact evaluation designs? And are there practical and methodological considerations that deserve particular attention when applying these designs in real-world settings?
These are but a few of the questions answered in Laura Peck’s Experimental Evaluation Design for Program Improvement. Grounded in decades of scholarship and practical experience with real-world impact evaluation, Peck begins the book with a concise and accessible introduction to the “State of the Field,” carefully guiding the reader through decades of developments in the experimental design tradition, including large-scale experimental designs, nudge experiments, rapid cycle evaluation, systematic reviews and their associated meta-analyses, and, more recently, design options for understanding impact variation across program components.
After this introduction, Peck describes a “framework for thinking about the aspects of a program that drive its impacts and how to evaluate the relative contributions of those aspects,” rooted in the idea of using a well-developed program logic model to discern the most salient program comparisons to be examined in the evaluation. As Peck states, “From a clear and explicit program logic model, the evaluation logic model can also be framed to inform program operators’ understanding of the essential ingredients of their programs” (p. 27). The remainder of the book is dedicated to a broad variety of experimental design options for measuring program impact, covering both traditional designs and their more recent variants (e.g., multistage and blended designs). To bring these designs closer to practice, Peck provides an illustrative application and a set of practical lessons learned for each. A set of hands-on principles for “good practice” concludes the book.
The present book is an important contribution to the growing landscape of impact evaluation. With her aim of identifying a broader range of designs and methods that directly address causal explanation of “impacts,” Peck opens new frontiers for impact evaluation. She directly challenges, and rightly so, the longstanding perception that experimental designs are unable to get inside the black box of how, why, and for whom social programs work. Countering this idea, Peck describes, and illustrates by way of practical examples, a variety of design options that each in their own way support causal explanation of program impact. In doing so, the book significantly broadens the types of evaluation questions that experimental impact evaluations can pursue. As Peck opines, “using experimental evaluation designs to answer ‘black box’ type questions—what works, for whom, and under what circumstances—holds substantial promise” (p. 10). We agree.
Sebastian T. Lemire, Christina A. Christie, and Marvin C. Alkin
Volume Editors
Reference
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the range of designs and methods for impact evaluations (Working Paper 38). Report of a study commissioned by the Department for International Development. https://www.oecd.org/derec/50399683.pdf