Experimental Evaluation Design for Program Improvement, by Laura R. Peck

The State of the Field

The field of program evaluation is large and diverse. Considering the membership and organizational structure of the U.S.-based American Evaluation Association (AEA)—the field’s main professional organization—the evaluation field covers a wide variety of topical, population-related, theoretical, contextual, and methodological areas. For example, the kinds of topics that AEA members focus on—as defined by the association’s sections, or Topical Interest Groups (TIGs), as they are called—include education, health, human services, crime and justice, emergency management, the environment, and community psychology. As of this writing, there are 59 TIGs in operation. The kinds of population-related interests cover youth; feminist issues; indigenous peoples; lesbian, gay, bisexual and transgendered people; Latinos/as; and multiethnic issues. The foundational, theoretical, or epistemological perspectives that interest AEA members include theories of evaluation, democracy and governance, translational research, research on evaluation, evaluation use, organizational learning, and data visualization. The contexts within which AEA members consider their work involve nonprofits and foundations, international and cross-cultural entities and systems, teaching evaluation, business and management, arts and cultural organizations, government, internal evaluation settings, and independent consultancies. Finally, the methodologies considered among AEA members include collaborative, participatory, and empowerment; qualitative; mixed methods; quantitative; program-theory based; needs assessment; systems change; cost-benefit and effectiveness; cluster, multisite, and multilevel; network analysis; and experimental design and analytic methods, among others. Given this diversity, it is impossible to classify the entire field of program evaluation neatly into just a few boxes. 
The literature regarding any one of these topics is vast, and the intersections across dimensions of the field imply additional complexity.

What this book aims to do is focus on one particular methodology: experimental evaluation. Within that area, it focuses further on designs that address the more nuanced question of what about a program drives its impacts. The book describes the basic analytic approach to estimating treatment effects, leaving fuller analytic methods to other texts that can provide the needed deeper dive.

Across the field, alternative taxonomies exist for classifying evaluation approaches. For example, Stern et al. (2012) identify five types of impact evaluations: experimental, statistical, theory based, case based, and participatory. The focus of this book is the first. Within the subset of the evaluation field that uses randomized experiments, there are several kinds of evaluation models, which I classify here as (1) large-scale experiments, (2) nudge or opportunistic experiments, (3) rapid-cycle evaluation, and (4) meta-analysis and systematic reviews.
