

Volume Editors’ Introduction

One of the challenges faced by evaluators is that professional evaluation is complex work that looks deceptively simple. The general public has had ample opportunity to engage in informal evaluation practice. These experiences include, for example, using Amazon reviews to decide which new toaster to buy, evaluating classroom teaching by drawing on both public discourse and our own experiences in school, and deciding which policies to support at the polls based on arguments from public intellectuals and analyses of likely outcomes from trusted sources. In short, everyone has had some type of experience with informal evaluation, which is one of the reasons why many in our field claim that evaluation is an ancient practice but a new discipline (Mathison, 2005; Scriven, 1991). An argument for the complexity of professional evaluation therefore requires an understanding of what differentiates the informal evaluation practice the public engages in on a daily basis from the work that occurs in formal evaluation studies.

In an effort to demarcate the boundaries of professional evaluation and describe that work, academics and practitioner-academics have, for the past 60 years or so, focused on the demands of evaluation practice, arguing that professional evaluation requires a great deal of expert knowledge that is specific to the work that evaluators do (Schwandt, 2015). We see this, for example, in the evaluation theories and approaches that have proliferated, which are arguments about what the work of professional evaluators ought to look like. We also see this in studies of evaluation education in formal and informal settings, which are arguments about how evaluators are or ought to be trained to carry out formal evaluation studies. More recently, we see this in the current drafts of evaluator competency or capability documents, all of which have a section specifically devoted to what makes evaluators distinct as practicing professionals (see also American Evaluation Association, 2017; United Nations Evaluation Group, 2016). These efforts have resulted in a large body of work meant to establish and inform the theoretical, technical, and practical aspects of evaluation practice.

At the same time, as noted in the editors’ introductions to other volumes in this series, there is a need for additional resources that fill a gap between academic publications and general textbooks. The Evaluation in Practice series was born out of a recognition that there are core topics in evaluation that are fundamental to the work of evaluators but have not yet received the focus they deserve. Instead, these topics are often treated as a subheading or subtheme situated within a larger conversation. Professional practice is at the heart of this series.

In Collaborative Approaches to Evaluation: Principles in Use, the third volume in this series, J. Bradley Cousins and colleagues have presented a rich, empirically derived description of how the conceptual, technical, and practical tools of evaluation are enacted by experts in the everyday aspects of professional work. We expand on each of these in turn. Conceptual tools are the theories, approaches, and effectiveness and moral principles or guidelines that evaluators use to guide their evaluation practice decisions. The conceptual tool that is the focus of this volume is the set of Collaborative Approaches to Evaluation (CAE) principles. In the first chapter of this book, readers are presented with an overview of the CAE principles. This chapter lays out what the CAE principles are, describes their evolution, and offers examples of potential uses. Importantly, these principles are grounded in research on evaluation (RoE), that is, studies that examine evaluation as the object of inquiry.

Technical tools are the research designs, measurement techniques, and analysis strategies that evaluators use to guide their evaluation practice decisions. Across the chapters, technical tools are used in two ways: the technical tools used to carry out the evaluation and the technical tools used to engage in RoE. For the chapters that are grounded in a specific evaluation, almost all (Chapter 3, Chapter 4, Chapter 5, Chapter 7) used a multimethod design, and one used a qualitative case study design (Chapter 6). When it came to the actual RoE studies, however, several methods were used. These methods included retrospective case study (Chapter 2, Chapter 5, Chapter 6), qualitative thematic analysis (Chapter 3, Chapter 9), Q-methodology (Chapter 7), meta-reflection (Chapter 4), and participatory action research (Chapter 8). Across these RoE studies, the authors have provided examples of how evaluators could engage in a systematic analysis of their work as a way to understand and describe the complex work of professional evaluation. Readers interested in this aspect of practice will find much value in the cross-chapter analysis in Chapter 10.

Practical tools are the strategies, interpersonal skills, practices, and moves evaluators use in their work as they strive to carry out an evaluation. Arguably, these are among the hardest tools to see and to teach across expert knowledge occupations. This is why, for example, doctors spend time developing bedside manner, psychotherapists spend time unpacking video recordings of themselves with patients, and teachers spend time critically reflecting on their in-the-moment instructional decisions. In all of these instances, you have to be in the right place at the right time to catch a glimpse of a professional using a particular practical tool, and at the same time, you have to have a systematic process in place for unpacking the use of these tools. A strength of the chapters included in this volume is that they make visible some of the practical tools evaluation practitioners and educators use: for example, how evaluators go about building relationships and with whom, the importance of trust, and the communication strategies evaluators use that align with CAE principles. Moreover, in Chapters 7 and 9, we see how novice and emergent evaluators learn to identify and use these practical tools, as well as the practical tools evaluation educators use to foster that learning.

While each of these tools is important, in practice they are and must be intertwined. Learning about a theory or learning to justify a technical method is not the same thing as learning to use that theory in practice or to use a particular design in an evaluation, just as learning about the technical aspects of writing and about narratology1 is not learning to write. The latter requires that an author actually engage in writing, understanding, for example, how to use technical rules, when it makes sense to break them, how potential readers understand or interpret what they are reading, how they react to prose and whether that is what the author intended, and so forth. Because evaluation is situated in the social world, learning how to evaluate requires all three tools. It is how one learns to use evaluation theories, to weigh the benefits and limits of particular evaluation approaches, to generate evidence that will be perceived as credible by a wide body of stakeholders, to anticipate dilemmas, to think through how to address unanticipated issues, and so forth.

1 The Oxford English Dictionary defines narratology as “the branch of knowledge or criticism that deals with the structure and function of narrative and its themes, conventions, and symbols” (https://en.oxforddictionaries.com/definition/narratology).

This volume has given us a window into our complex work, which will be useful for novice and seasoned evaluators learning how to use the conceptual, technical, and practical tools of our profession. It will also be useful for evaluation educators who are working to facilitate the process of learning to practice. Evaluation researchers who are interested in describing and understanding practice will also find much use in this volume.

Bianca Montrosse-Moorhead, Marvin C. Alkin, and Christina A. Christie
Volume Editors

References

American Evaluation Association. (2017, August 28). AEA evaluator competencies. Retrieved from https://www.eval.org/p/do/sd/sid=8317&fid=2290&req=direct

Mathison, S. (2005). Preface. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. xxxiii–xxxv). Thousand Oaks, CA: Sage.

Schwandt, T. A. (2015). Evaluation foundations revisited: Cultivating a life of the mind for practice. Stanford, CA: Stanford University Press.

Scriven, M. (1991). Evaluation thesaurus. Newbury Park, CA: Sage.

United Nations Evaluation Group. (2016). UNEG evaluation competency framework. Retrieved from http://www.unevaluation.org/2016-Evaluation-Competency-Framework
