Description of the CAE Principles
Figure 3 provides an overview of the set of eight CAE principles resulting from our validation process. There are at least four important considerations to bear in mind in thinking about this set. First, the set is to be thought of as a whole, not as a pick-and-choose menu. This aligns with the point made above that each and every principle in the set, if followed, is expected to contribute toward the desired outcome, that is, a successful CAE project. It is therefore possible for evaluation practitioners to follow each of the principles without risk of confusing or confounding purposes. The extent to which each principle is followed or weighted will depend on context and the presenting information needs. A second consideration concerns the individual principles being differentially shaded yet separated by dotted lines in the visual representation. These two features of the diagram imply that while each principle contributes something unique, a degree of overlap among them is expected; that is, they are not to be thought of as mutually exclusive. Third, we make the claim that the principles are in no specific order, although it may be argued that there is a loose temporal ordering beginning with clarify motivation for collaboration and ending with follow through to realize use. Important to note is that we intend application of the CAE principles to be an iterative process, as opposed to a lockstep sequential one. Many of the principles described below require ongoing monitoring and adjustment of the evaluation and the collaboration as time passes. For example, foster meaningful relationships requires continuous attention and may reassert itself as a priority during a clash of values or a change in stakeholder personnel. Finally, it might be noted that some of the principles laid out in Figure 3 may apply equally to mainstream approaches to evaluation as they do to CAE. This may be true, but it is important to recognize that (i) these principles emerged from detailed data from evaluators practicing CAE, and (ii) each is in some way unique in its application to the collaborative context, as we elaborate below.
Figure 3 ■ Evidence-based CAE principles (adapted from Shulha et al., 2016).
We now turn to a brief description of each of the principles. Readers interested in a more detailed description and commentary may wish to consult Shulha et al. (2016). In the text to follow, supportive factors for each principle, which were derived from themes in our data, are identified in parentheses (following the title) and through the use of italics (in the descriptive text).
Clarify Motivation for Collaboration (evaluation purpose; evaluator and stakeholder expectations; information and process needs): Evaluators should be able to describe and justify why a CAE was selected in the first place. Why use CAE as opposed to a conventional or other alternative approach to evaluation? The principle encourages the development of a thorough understanding of the justification for the collaborative approach, based on a systematic examination of the context within which the intervention is operating.
Clarity on these issues will help to ensure that CAE is both called for and appropriate as a response to the evaluation challenge. Program improvement, opportunities for individual and organizational learning, and organizational capacity building were among the evaluation purposes suggested to be most conducive to CAE. On the other hand, accountability-oriented and legitimizing purposes could be counterproductive. Clarifying evaluator and stakeholder expectations for collaboration early on can be quite beneficial and can lead to stakeholders leveraging their networks and resources to help. CAE processes that are in some way mandated are less likely to be successful. Finally, clarification of information needs and priorities is an important supportive factor; evaluators can work with organizational or program stakeholders to help generate such clarity. This activity helps to focus the evaluation and ensure that it will generate information that is valued.
Foster Meaningful Relationships (respect, trust, and transparency; structured and sustained interactivity; cultural competence): The principle inspires the conscious development of quality working relationships between evaluators and program stakeholders, and among stakeholders themselves, including open and frequent communication. Successful CAE projects benefit from a “highly cooperative and collaborative organizational context, with abundant positive peer/professional relations and a wholesome, trusting, organizational climate” (study participant). Trust and respect are not givens and must be developed through ongoing interaction and transparency. While there is certainly a role for evaluators here, efforts on the part of program and organizational stakeholders are implicated as well. Trust and respect can be strengthened through ongoing, sustained, interactive communication in which evaluators learn to avoid “too many unspoken assumptions” (study participant). Close and constant contact can be instrumental to real-time communication, relationship building, and expectation clarification. The constructive exploration of differences, and the search for solutions that go beyond one’s own limited vision, are at the crux of cultural competence. In CAE, building respectful, sustainable relationships is essential.
Develop a Shared Understanding of the Program (program logic; organizational context): Is the program commonly understood among program and organizational community members and evaluators? Is everyone in agreement about intended program processes and outcomes? The principle promotes the explication of the program logic situated within its context. Involving program stakeholders in the program description process is a useful way to deepen understanding about program logic. “The involvement of stakeholders provides a more accurate definition of the terms, problems, and population needs [and] culture” (study participant). Focusing on a mutual understanding of what is being evaluated can reduce the likelihood of stakeholders moving forward in the evaluation with unrealistic expectations. Organizational context is also a significant consideration in this regard. It is important for stakeholders to feel comfortable and confident in the capacity of the organization to embrace the process. Disruptive forces such as a change in administration can diminish this capacity. Evaluators need to monitor the organizational context as the project unfolds.
Promote Appropriate Participatory Processes (diversity of stakeholders; depth of participation; control of decision-making): What does it mean for stakeholders to participate in a CAE? The principle encourages deliberate reflection on the form that the collaborative process will take in practice with regard to specific roles and responsibilities for the range of stakeholders identified for participation. Collaboration in CAE can be operationalized in a contextually responsive way. It is important for evaluators to consider diversity in stakeholder participation, particularly with members or groups who might not otherwise have been involved. A challenge, however, is not just identifying such diversity but negotiating participation. The benefits of involvement for organizational and program stakeholders, coupled with relatively deep levels of participation in the evaluation process, can pay off rather significantly, as this survey respondent suggests:
Participants were close to—and ultimately owned—the data. They helped design the tools, collect the data, analyze the data, interpret the data, and present findings. It wasn’t just buy-in to the processes and outcomes; it was implementing the process themselves (not being led through) and generating (not being given and asked for their thoughts about) and owning the outcomes.
An important consideration is control of decision-making about the evaluation, which may be difficult to manage. An important strategy is for the evaluator to be open to sharing control of the evaluation, in terms of instrument choice, data collection, and the interpretation of findings. On the other hand, complications can easily arise around the control of decision-making, particularly when power issues among stakeholders are present.
Monitor and Respond to Resource Availability (budget; time; personnel): Issues of time and money are challenges for any evaluation, but in CAE important interconnections are associated with personnel. Participating stakeholders are a significant resource for CAE implementation. In addition to fiscal resources, the principle warrants serious attention to the extent to which stakeholder evaluation team members are unencumbered by competing demands from their regular professional roles. If the collaboration is identified as part of the job for those who will be heavily involved, evaluators should ask what aspects of their normal routine will be removed from their list of responsibilities during the evaluation; this is one way to set appropriate expectations. Evaluators need to monitor stakeholder engagement and perhaps develop strategies to motivate staff, since such engagement can be eroded by emerging conditions within the evaluation context. Another aspect of interest is the skill set that stakeholder participants bring to the project and the extent to which evaluators can help to match skills and interests to the tasks at hand. Program and organizational stakeholders are also a key resource for program content and contextual knowledge. “The evaluator was not an expert in the program content area and absolutely needed stakeholders to provide clarity about how the data would be used and what the boundary conditions were for asking questions of intended beneficiaries” (study participant).
Monitor Evaluation Progress and Quality (evaluation design; data collection): Just as program and organizational stakeholders can help evaluators to understand local contextual exigencies that bear upon the program being evaluated, there is a significant role for evaluators in contributing to the partnership. The principle underscores the critical importance of data quality assurance and the maintenance of professional standards of evaluation practice. One aspect of the role concerns evaluation designs and ensuring that any adjustments preserve design integrity and data quality; such adjustments may be necessary in the face of changes in the evaluation context. Acknowledging, and sometimes confronting one another about, a deteriorating fit between the intended evaluation design and the capacity of the collaboration to implement it can be productive and critical to salvaging evaluation efforts. Challenges with data collection are particularly salient and critical to ensuring data quality. It is essential that evaluators not assume stakeholders appreciate the implications of data quality for findings and outcomes, as the following excerpt suggests: “Front-line staff, who are responsible for collecting the data, did not understand the importance of getting it collected accurately.” Given the instructional role this implies for evaluators, it is worth considering building funding for such professional development into the evaluation. Such attention may reduce the amount of monitoring necessary as the project unfolds and can go a long way toward preserving the integrity of the evaluation.
Promote Evaluative Thinking (inquiry orientation; focus on learning): The principle inspires the active and conscious development of an organizational culture of appreciation for evaluation and its power to leverage social change. Evaluative thinking is an attitude of inquisitiveness and belief in the value of evidence, and CAE provides a good opportunity for developing it. When evaluative thinking is enhanced through collaboration, evaluation processes and findings become more meaningful to stakeholders, more useful to different decision makers, and more organizationally effective. The development of an inquiry orientation is an organizational culture issue and will not happen overnight, but evaluators can certainly embrace a promotional stance profitably as the evaluation unfolds. Significant energy may be well spent helping collaborators to become invested in the learning process and to be prepared for the unexpected. In essence, evaluators would do well to be opportunistic in this respect, as the following excerpts suggest: “Because of the stakeholder commitment, results were used as an opportunity to learn and grow”; “stakeholders were willing to accept negative or contrary results without killing the messenger.” Organizational and program stakeholders who embrace the learning function of evaluation will have greater ownership of it and will be less likely to view it as something for someone else to do.
Follow Through to Realize Use (practical outcomes; transformative outcomes): To what extent is the evaluation a valuable learning experience for the stakeholder participants? The principle promotes the conscious consideration of the potential for learning, capacity building, and other practical and transformative consequences of the evaluation. Implicated here are evaluation processes and findings, as well as the evaluator’s role in facilitating these desirable outcomes. Practical outcomes at the organizational level influence program, policy, and structural decision-making; they are seen in a change in disposition toward the program or evaluation and in the development of program skills, including systematic evaluative inquiry. To the extent that stakeholders are directly engaged with knowledge production, the evaluation will have greater success in getting a serious hearing when program decisions are made. Transformative outcomes reflect change in the way organizations and individuals view the construction of knowledge and in the distribution and use of power and control. Enhanced independence and democratic capacities are the sorts of social change that could be labeled transformative. Working collaboratively can deepen the sense of community among stakeholders and enhance their empathy toward intended beneficiaries through the development of their understanding of complex problems. Transformational outcomes are more likely when the facilitating evaluator is skillful in promoting inquiry and has expertise in human and social dynamics. Being prepared to work toward transformational outcomes almost certainly means being prepared to work in contexts where there are differences and even conflict. Given the interplay between practical and transformative outcomes, evaluators working on CAE would be wise to negotiate with stakeholders about (i) the range of possible outcomes given the scope of the evaluation, (ii) the outcomes most worthy of purposeful attention, and (iii) how joint efforts might best facilitate these outcomes.
The foregoing description of the principles provides a good overview to support the development and implementation of CAE. The principles are grounded in the rich experiences of a significant number of practicing evaluators. Their credibility is enhanced by virtue of the comparative design we used to generate the evidence base as well as the validation exercise described above. In his recent book on principle-based evaluation, Patton (2017) explicitly acknowledged their quality: “For excellence in the systematic and rigorous development of a set of principles, I know of no better example than the principles for use in guiding collaborative approaches to evaluation” (p. 299).
But in and of themselves, mere descriptions of the principles remain somewhat abstract. To enhance their practical value in guiding CAE decision-making and reflection, we developed, for each principle, summary statements of evaluator actions and principle indicators in the form of questions that could be posed as an evaluation project is being planned or implemented. This information is summarized in Table 1 and was included in an indicator document to complement descriptions of the principles and their supportive factors.
Table 1 ■ Evaluator actions and indicator questions for the CAE principles
The actions and indicator questions provided in the table (and in the indicator document) have not been subjected to any formal review or validation. They are the result of our own collective reflections on CAE and are therefore indirectly based on knowledge garnered through working with the base data set. Nevertheless, we offered these actions and indicators as a way for potential users of the CAE principles to apply them in practice. Notably, the suggested actions for evaluators to consider in following or applying the principles call for a range of interpersonal and soft skills, including facilitation, negotiation, promotion, and monitoring. Such skills, we would argue, come through considerable practical experience; they are not likely to be easily picked up in courses or workshops.
Having provided a summary overview of the set of eight effectiveness principles for CAE and their associated actions and indicators, we now turn to considerations about how these principles may be applied to the benefit of evaluators, program and organizational stakeholders, and the evaluation community at large.