What This Book Covers
This book considers a range of experimental evaluation designs, highlighting their flexibility to accommodate the applied questions that interest program managers. These questions about impact variation (what drives successful programs) have tended to fall outside the purview of experimental evaluations. Historically, they have been addressed through nonexperimental approaches to impact evaluation, including theory-driven evaluation, case-based designs, and other descriptive or correlational analytic strategies. It is my contention that experimental evaluation designs, contrary to the common belief among many evaluators, can in fact be used to address what works, for whom, and under what circumstances.
It is my hope that the designs discussed here will motivate their greater use in program improvement, for the betterment of humankind.
Why a focus on experimental evaluation? I focus on experimental evaluation because of its relative importance to funders, its ability to establish causal evidence, and its increasing flexibility to answer questions beyond the average treatment effect.
Why a focus on experimental evaluation designs? I focus on experimental evaluation designs because (1) alternative, nonexperimental designs are covered in other texts, and (2) many analytic strategies aimed at uncovering insights about “black box” mechanisms necessitate specialized analytic training that is beyond the scope of this book.
Why not a focus on nonexperimental designs and analysis strategies? There is substantial, active research in the "design replication" (or "within-study comparison") literature that considers the conditions under which nonexperimental designs can produce the same results as an experimental evaluation. As with advanced analytic strategies, it is beyond the scope of this book to offer details, let alone a primer, on the many and varied nonexperimental evaluation designs. Suffice it to say that those designs exist and are the subjects of other books.
Using experimental evaluation designs to answer “black box” type questions—what works, for whom, and under what circumstances—holds substantial promise. Making a shift from thinking about a denied control group toward thinking about comparative and enhanced treatments opens opportunities for connecting experimental evaluation designs to the practice of program management and evidence-based improvement efforts.
The book is organized as follows: After this Introduction, Chapter 2 suggests a conceptual framework, building from the well-known program logic model and extending it to an evaluation logic model. Chapter 3 offers an introduction to the two-group experimental evaluation design. As the center of the book, Chapter 4 considers variants on experimental evaluation design that are poised to answer questions about program improvement. Chapter 5 concludes by discussing some practical considerations and identifying some principles for putting experimental evaluation into practice. Finally, an Appendix provides basic instruction in the math needed to generate impact estimates under the various designs. When randomization is used, the math can be quite simple. The Appendix also addresses the relationship between sample size and impact magnitude. Each chapter ends with two common sections: Questions and Exercises, and Resources for Additional Learning.
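To give a flavor of that simplicity, here is a minimal sketch of the standard formulas (my illustration, not text reproduced from the Appendix). Under simple random assignment, the impact estimate is the difference in mean outcomes between the treatment and control groups, and the minimum detectable effect (MDE) shrinks with the square root of the sample size:

$$\hat{\tau} = \bar{Y}_T - \bar{Y}_C$$

$$\mathrm{MDE} = \left(t_{\alpha/2} + t_{1-\beta}\right)\,\sigma\,\sqrt{\frac{1}{n_T} + \frac{1}{n_C}}$$

Here $\bar{Y}_T$ and $\bar{Y}_C$ are the treatment and control group mean outcomes, $\sigma$ is the outcome's standard deviation, and $n_T$ and $n_C$ are the group sample sizes. For 80 percent power with a two-tailed 5 percent significance test and large samples, the multiplier $(t_{\alpha/2} + t_{1-\beta})$ is roughly 2.8, and quadrupling the sample size halves the minimum detectable effect.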