Evidence-based Principles to Guide CAE Practice

Systematic Approach
It will come as no surprise to those familiar with our work that our approach to developing CAE principles was empirical. We have long supported the concept of research on evaluation (RoE), having identified it as an underdeveloped yet increasingly important domain in our field (e.g., Cousins & Chouinard, 2012). Through systematic inquiry, we sought to tap into this domain of evaluation practice to understand what characterizes effective work and differentiates it from practice that is less so. Other approaches to principle development have been heavily grounded in practice, relying on the experience of renowned experts in the domain (e.g., DE principles, Patton, 2011) or on fairly intensive consultative, deliberative processes (e.g., empowerment evaluation principles, Fetterman & Wandersman, 2005). In both instances, proponents draw heavily on practical wisdom. Our intention was to do the same, but through a rather substantial data collection exercise.
Our methodology was comparative, but we relied on practicing evaluators to generate the comparisons from their own experience (i.e., within-respondent comparisons). Essentially, we wanted to ask evaluators who practice CAE (in whatever form) about their positive and less-than-positive experiences within the genre. Our sample of over 300 evaluators, drawn from three evaluation professional associations, came largely, but not exclusively, from North America; a substantial portion worked in international development contexts. Our approach was to have participants think of a CAE project from their own experience that they believed to be highly successful. They were asked to describe the project in response to a set of questions and, in particular, to identify the top three reasons why they believed the project to be successful. Having completed this first part, participants were then asked to identify from their experience a project they considered far less successful than hoped. They responded to an identical set of questions for this project, this time identifying the top three reasons why it was not successful.8 We had done some preliminary pilot work, and we were quite pleased with the response we received (N = 320). The data from this online survey were predominantly qualitative and provided us with a rich sense of what works in CAE practice.
8 The order of the successful and less-than-successful projects and their corresponding question sets was counterbalanced across respondents to protect against order-related response bias.
Themes (reasons) emerged through analysis of the qualitative responses, and these provided the basis for our development of higher-order themes (contributing factors) and, ultimately, draft principles. Some themes we considered particularly critical because they represented both a reason why a given project was perceived to have been highly successful and, in a separate instance, a reason why another was perceived to have been limited. For example, in a hypothetical CAE with ample resources, this factor may have contributed substantially to success; conversely, in another project, a lack of resources may have been limiting and intrusive. We called these critical factors. Ultimately, we generated a set of eight principles and then asked 280 volunteer members of our sample to look over the 43-page draft as part of a validation exercise. Given the scale of this task (realistically requiring at least half a day), we greatly appreciated the generosity of the 50 participants who responded.
Based on the feedback, we made a range of changes to the wording and characteristics of the draft principles and developed the final version of the preliminary set, subsequently published in the American Journal of Evaluation (Shulha, Whitmore, Cousins, Gilbert, & Al Hudib, 2016).