
Acknowledgments


This book has been motivated by many years of scholarly and applied evaluation work. Along the way, I developed and tested my ideas in response to prodding from professors, practitioners, and policymakers; and it is my hope that articulating them in this way gives them more traction in the field. Through my graduate school training at New York University’s Wagner School, I especially valued the perspectives of Howard Bloom, Jan Blustein, and Dennis Smith. While at Arizona State University’s School of Public Affairs, I appreciated opportunities to engage with many community agencies, which served as test beds for my graduate students as they learned about evaluation in practice.

Most recently, at Abt Associates, I am fortunate to have high-quality colleagues and projects through which we have the chance not only to inform public policy but also to advance evaluation methods. I am grateful to the funders of that research (including the U.S. Departments of Labor, Health and Human Services, and Housing and Urban Development), who are incredibly supportive of advancing evaluation science: They encourage rigor and creativity and are deeply interested in opening up that black box, whether through advances in analytic approaches or in design approaches. I am also grateful to have had the opportunity to write about some of these ideas in the context of my project work, for the American Evaluation Association’s AEA365 blog, and for Abt Associates’ Perspectives blog.

Evaluation is a team sport, and so the ideas in this book have evolved through teamwork over time. For example, some of the arguments, particularly regarding the justification for experiments and the ethics of experimentation (in Chapter 1), stem from work with Steve Bell (and appear in our joint JMDE publication in 2016). The discussion of whether a control group is needed (in Chapter 4), as well as some observations on design variants, draws on earlier work (e.g., Peck, 2015, in JMDE; Bell & Peck, 2016, in NDE #152). In addition, some of the discussion (in the Appendix) of the trade-offs between intent-to-treat and treatment-on-the-treated impacts and the factors that determine minimum detectable effect sizes came from joint work with Shawn Moulton, Director of Analysis, and the project team for the HUD First-Time Homebuyer Education and Counseling Demonstration.

At Abt Associates, Rebecca Jackson provided research assistance, Bry Pollack provided editorial assistance, Daniel Litwok provided critical review of a draft manuscript, and the Work in Progress Seminar offered input on final revisions. I am also appreciative of input from five anonymous reviewers for the Evaluation in Practice Series and from the SAGE editors.

My most intellectually and personally enriching partnership—and longest-standing collaboration—is with Brad Snyder, who asks the right and hard questions, including the “and then,” which implies pushing further still. I also thank my parents for raising me to value questioning and my daughter for teaching me patience in answering.

I would also like to acknowledge the following reviewers for their feedback on the book:

 Deven Carlson, University of Oklahoma

 Roger A. Boothroyd, University of South Florida

 Sebastian Galindo, University of Florida

 Katrin Anacker, George Mason University

 Christopher L. Atkinson, University of West Florida

 Sharon Kingston, Dickinson College

 Colleen M. Fisher, University of Minnesota

 Regardt Ferreira, Tulane University
