Collaborative Approaches to Evaluation

When It’s About More Than Impact


The accountability function is essential to the overt demonstration of fiscal responsibility, that is, showing the wise and justifiable use of public and donor funds. It comes as no surprise that in accountability-driven evaluation, the main interests being served are those of senior decision and policy makers acting on behalf of taxpayers and donors. As such, a premium is placed on impact evaluation, particularly on the impartial demonstration of the capacity of interventions to achieve their stated objectives. Such information needs are generally not well served by CAE, although some approaches are sometimes used to these ends (e.g., contribution analysis, empowerment evaluation, most significant change technique). In fact, contribution analysis seems well suited in this regard (Mayne, 2001, 2012). Rather than pursuing claims of program attribution to outcomes through a statistical counterfactual, contribution analysis supports program contribution claims through plausible, evidence-based performance stories. While the accountability agenda is, and is likely to remain, essential, many have observed that reliance on such single-minded evaluation approaches serves to diminish, even marginalize, the interests of a much broader array of stakeholders (e.g., Carden, 2010; Chouinard, 2013; Hay, 2010).

If we take into account, indeed embrace, the legitimate information needs of a very broad array of program and evaluation stakeholders, traditional mainstream evaluation designs are unlikely to be particularly effective in meeting them. What good, for example, is a black box approach to evaluation (e.g., a randomized controlled trial) to program managers whose main concern is improving program performance, thereby making it more effective and cost-efficient? How could such an evaluation possibly help program developers truly appreciate the contextual exigencies and complex circumstances within which the focal program is expected to function, and design interventions to suit them? What about program consumers? It is relatively easy to imagine that their concerns would center on their experience with the program and their sense of the extent to which it is making a difference for them. Evaluations that focus single-mindedly on demonstrating program impact are likely to be of minimal value to such people, if any at all.

Single-minded impact evaluations are likely to be best suited to what Mark (2009) has called fork-in-the-road decisions. When decisions to continue funding or to terminate a program define the information needs driving the evaluation, the evaluation will be exclusively summative in nature and orientation. But such decisions, as a basis for guiding evaluation, are relatively rare. More often, formative, improvement-oriented evaluation interests are commingled with summative questions about the extent to which programs are meeting their objectives and demonstrating effectiveness (Mark, 2009).

To the extent that formative interests are prevalent in guiding the impetus for evaluation, the learning function of evaluation carries weight, and CAE would be a viable evaluation option to consider. In formative evaluations, program community members, particularly primary users who are well-positioned to leverage change on the basis of evaluation findings (Alkin, 1991; Patton, 1978), stand to learn a great deal about the focal program or intervention as well as the context within which it is being implemented. Creating the opportunity for such learning, some would argue, is a hallmark of CAE (e.g., Cousins & Chouinard, 2012; Dahler-Larsen, 2009).

