An Introduction to Evaluation
Contents
Chris Fox. An Introduction to Evaluation
An Introduction to Evaluation
Table of Contents
About the Authors
Introduction. Who should read this book?
Why evaluation is important
Our approach to evaluation. A practical discipline
Evidence-based (or informed) policy and practice
Multi-sector, multi-disciplinary
Complexity
What does this book cover?
Part I: Getting Started
Part II: Undertaking an Evaluation
Part III: The Practice of Evaluation
Part IV: Using Evaluation Findings
Part V: Evaluation Paradigms
How to use this book
1 What is Evaluation?
Introduction
Defining evaluation
Defining evaluation according to purpose
A note on terminology: policy, programme or project?
Defining evaluation according to method
Evaluation distinguished from monitoring, performance management, audit and accreditation
Distinguishing evaluation from research
Defining evaluation according to judgements of value
Our preferred definition of evaluation
Different types of evaluation
Formative and summative evaluation
Process (implementation) and impact (outcome) evaluation
A note on terminology: impact versus outcome and process versus implementation
Economic evaluation
Ex ante and ex post evaluations
Theory-led evaluation
Evaluation theory
Social science theory
Theory-based evaluation
Trends in evaluation
Evaluation ‘booms’
The first ‘boom’ in evaluation
The second ‘boom’ in evaluation
Real-time evaluation
Evaluation, social policy and public administration
The growth of government
Neo-liberalism and new public management
Evidence-based policy and practice
Chapter Summary
Further Reading
2 The Ethics of Evaluation
Introduction
Ethics in evaluation and other forms of social research distinguished
Ethical guidelines
Ethics committees
The guiding principles
Principles relating to participants’ and stakeholders’ rights. Informed consent
Voluntary participation
Do no harm
Confidentiality and anonymity
Guiding principles that relate to evaluators’ inherent ethics. Professional integrity
Openness to and respect for diversity
Cultural competency
Limitations of ethical guidelines
The ‘anthropo’-centric approach to ethics in evaluations
Protecting the profession
Responding to the criticisms of ethical guidelines: the evaluator as a self-reflective negotiator
The challenges that evaluators must overcome to ensure ethical evaluations
Methods and ethics
Randomised controlled trials
Participatory action research (PAR)
Exogenous factors and ethics
Literacy level
Power relations
Intercultural communication
Conclusion
Chapter Summary
Further Reading
3 Theories of Change
Introduction
What is a theory of change? Origins and definition of Theory of Change
Programme theory and the theory of change
Main characteristics of theories of change
Difference between TOC and logical framework approach
How to develop a TOC
Prospectively: TOC as an approach to planning
Retrospectively: TOC as part of ex post evaluation
Case Study ‘Project Superwoman’ theory of change
Advantages
Challenges
Multiple assumptions
Under-developed theories of change
Making sense of causality
Theory-based evaluations
Chapter Summary
Further Reading
4 Process Evaluation
Introduction. What is a process evaluation?
What do we cover in this chapter?
Process evaluation theory
Implementation theory
Top-down perspectives
Bottom-up perspectives
Synthesis
Organisational dynamics
Change and innovation
Systems and complexity
Designing a process evaluation
Describing and understanding implementation
Participatory evaluations and the influence of action research methodology
Action research
Participatory Action Research
Case Study Process evaluation of Community Legal Advice Centres and Networks
Using process evaluation findings
Using process evaluation findings to shape the intervention being evaluated
The relationship between process evaluation findings and outcome evaluation
Considerations when planning the relationship between the process evaluation findings and stakeholders
Chapter Summary
Further Reading
5 Impact Evaluation
Introduction. What is an impact evaluation?
What do we cover in this chapter?
Establishing trustworthiness: Validity
Statistical conclusion validity
Internal validity
Construct validity
External validity
The relative importance of different types of validity
Different approaches to impact
Experiments. Introduction
Designing randomised field experiments
Units
Conditions conducive to randomisation
Different experimental designs
Natural experiments
Case Study Financial Incentives for Extended Weight Loss: A Randomised Controlled Trial
Advantages of a randomised trial
Disadvantages of randomised field experiments
Threats to integrity
Ethical concerns with randomised field experiments
Lack of policy utility
High cost of randomised field experiments
Unintended outcomes
Failure to explain why an intervention works or does not work
Quasi-experiments
Non-equivalent control group designs
The basic design
Improving the basic design
Interrupted time series designs
Limitations
Regression discontinuity design
Some quasi-experimental designs to avoid
Case Study Improving Academic Performance of School-Age Children by Physical Activity in the Classroom: A 1-Year Programme Evaluation
Experiments versus quasi-experiments
Alternative impact evaluation designs
Theory-led designs
Theories of change
Scientific realist
Case designs
Combining different designs – the case for and against ‘realist RCTs’
Chapter Summary
Further Reading
6 Economic Evaluation
Introduction. What is economic evaluation?
What do we cover in this chapter?
Ex Ante and Ex Post economic evaluations
Cost Benefit and Cost Effectiveness Analysis. Cost Benefit and Cost Effectiveness distinguished
Choosing between Cost Benefit and Cost Effectiveness Analysis
Theory behind Cost Benefit Analysis
Stages in a Cost Benefit Analysis
Viewpoint
Costs. Different types of cost
Fixed and variable costs
Gathering data on costs
Valuing inputs
Fixed and marginal costs
Estimating programme effects
Monetising outcomes
Non-market goods: willingness to pay or willingness to accept
Revealed preferences
Stated preferences
Other approaches
Deciding whether to undertake a study on valuing costs and benefits
Discounting and Net Present Value
Distributional considerations
Sensitivity analysis
Presenting results
Case Study Long-term economic evaluation of the HighScope Perry Preschool Programme
The pros and cons of Cost Benefit Analysis
Valuing costs and benefits
Externalities and whole systems frameworks
Equity and distributional issues
Benefits realisation
Social Return on Investment. Introducing Social Return on Investment
The principles of SROI
Undertaking a Social Return on Investment evaluation
Establishing the scope and identifying key stakeholders
Mapping outcomes
Evidencing outcomes and giving them a value
Putting a value on the outcome
Establishing impact
Calculating the SROI
Reporting, using and embedding
Social Return on Investment and Cost Benefit Analysis distinguished
Chapter Summary
Further Reading
7 Evaluation Methods
Introduction
Quantitative research. Introducing quantitative research
Description
Causality
The role of theory
Survey research
Standardised questionnaire design
Types of questions
Question format
Categorical questions
Ordinal scales
Interval scales
Ratio scales
Questionnaire layout
Instructions
Order of questions
Questionnaire length
Survey questions. Designing new questions
The advantage of pre-existing questions
Interviewing and data collection modes
Piloting
Validity
Sampling
Probability sampling
Non-probability sampling
Sample size
Response rates
Error and bias
Secondary analysis of monitoring data
The overlap between monitoring and evaluation
Qualitative research. Introduction
Research design
Case study
Comparative study
Qualitative retrospective study
Qualitative longitudinal study
Gaining access
Sampling
Sample size
Verbal data – interviewing
Conducting interviews
Group interviews and focus groups
How to conduct a qualitative interview: some basic rules
Data management/transcribing
Qualitative data analysis
Coding
Computer-aided qualitative data analysis software
Collecting non-verbal data
Documents
Quality assurance
Mixed- or multi-method research
Case Study Anglican Schools Partnership: Effective Feedback Evaluation Report
Integrating quantitative and qualitative analysis
Managing a mixed-methods team
Chapter Summary
Further Reading
8 Planning an Evaluation
Introduction
When to evaluate?
Evaluation during the development and design of a programme
Evaluation during the implementation of a programme
Evaluation of an established programme
Which evaluation model?
Different models of evaluation
Selecting different models
Working with the evaluation funder and stakeholders
Different evaluation stakeholders
Different relationships between evaluator and stakeholders
Avoiding ‘ritualistic’ evaluation
Evaluation questions
What makes a good evaluation question?
Reasonable and appropriate
Evaluation questions must be answerable
Evaluative criteria
Negotiating with evaluation stakeholders
Common evaluation questions
Needs assessment
Theory of change
Process evaluation
Impact evaluation questions
Economic evaluation questions
The evaluator
Skills and experience
Evaluation teams
Understanding of the programme
Internal or external evaluators?
Resourcing evaluations
Strategic decisions on resourcing
Operational decisions on resourcing
Testing the feasibility of an evaluation
Evaluation plan
Chapter Summary
Further Reading
9 Conducting an Evaluation
Introduction
Project management
Clarity on aims and objectives
Identifying phases and tasks
Task dependencies and critical path
Risk management
Piloting
Piloting primary data collection
Piloting secondary data collection
Team management
The challenge
Solutions
Managing relationships
Funders and commissioners
Programme staff and service users
The wider research community
The media
Negotiating access
Responding to programme change
Chapter Summary
Further Reading
10 Systematic Reviews
Introduction
Systematic reviews. What is a systematic review?
Why do we need systematic reviews?
Carrying out a systematic review
Choosing and defining the topic for a systematic review
Objectives
Preparing a protocol
Criteria for including and excluding studies
Searching for eligible studies
Analysing results
Meta-analysis
Organising a systematic review
Case Study A Systematic Review of the Effects of Teachers’ Classroom Management Practices on Disruptive or Aggressive Student Behaviour
Rapid Evidence Assessments
Criticisms of systematic review
Reviewing qualitative studies. What is involved?
Undertaking a qualitative research synthesis
Critique of qualitative synthesis
Chapter Summary
Further Reading
11 Knowledge Mobilisation: Getting Evidence into Policy and Practice
Introduction
Dissemination
Evaluation reports
Structure of an evaluation report
Evaluative dimension
Recommendations
Other evaluation outputs
Interim evaluation reports
Summaries of findings
Press releases
Websites and social media
Knowledge mobilisation
Mapping the terrain of knowledge mobilisation
Types of research utilisation
Types of knowledge
Models of process
Conceptual frameworks
Individual and organisational learning
Evidence-based practice
Practices to improve evaluation impact: planning for knowledge mobilisation
Chapter Summary
Further Reading
12 Evaluation Paradigms and the Limits of Evidence-Based Policy
Introduction
Why does the philosophy of evaluation matter?
Is evaluation a science and if so what kind?
Three evaluation paradigms compared and contrasted
Post-positivism
Key beliefs
Some criticisms of post-positivism
Responsive constructivism (fourth generation evaluation)
Key beliefs
Some criticisms of responsive constructivism
Scientific realism
Key beliefs
Critique of scientific realism
Implications for evaluation
Implications for the role of an evaluator
The implications for evidence-based policy
Chapter Summary
Further Reading
13 Conclusion
Glossary
References
Index
Excerpt from the book
Chris Fox is Professor of Evaluation and Policy Analysis at Manchester Metropolitan University, where he is Director of the Policy Evaluation and Research Unit (www.mmuperu.co.uk). PERU is a multi-disciplinary team of evaluators, researchers, analysts and economists that undertakes evaluation and applied research in the UK and Europe. Chris’s work cuts across a number of policy areas. He has led numerous evaluations in the criminal justice system and is currently leading large, long-term research and evaluation programmes on reducing re-offending and on prisoner education. Chris is also interested in social policy innovation and is Director of a European Commission-funded project, ‘Innovative Social Investment: Strengthening Communities in Europe’, that is studying innovative approaches to welfare reform in ten European countries.

Robert Grimm is Associate Director at Ipsos and leads the Ipsos Public Affairs, Political and Social Research Team in Germany. In this capacity, Robert is responsible for the design and management of major social research projects for academic institutions, NGOs and private sector clients. His work includes developing research instruments, sampling strategies, designing indicators and measurements, setting up baseline measures and continuous performance tracking. Robert also leads the development of Ipsos’ political polling in Germany. Before joining Ipsos, Robert worked as an academic at Manchester Metropolitan University. His research interests include social innovation and social investment, and he has been involved in many European Commission-funded research projects. Robert taught evaluation and social research methods to postgraduate students and continues to be a fellow at Manchester Metropolitan University.

Rute Caldeira is a social scientist with a doctorate in sociology. She has 15 years of work experience, mostly in the NGO sector; the last eight years were dedicated to setting up planning, performance and monitoring systems in both complex not-for-profit and community-based organisations, with a view to enhancing their ability to achieve social impact. Rute currently heads Transparency International’s Monitoring, Evaluation and Learning Unit, which she set up in 2011. She brings extensive knowledge of methods and methodological approaches, of evaluations and assessments, and of organisational capacity to absorb learning from impact studies and evaluations.
The first group is students, in particular postgraduates doing taught courses in research methods, and both undergraduates and postgraduates planning research projects, perhaps for a final-year dissertation or a PhD.
.....
Evaluation can be prospective or retrospective. A prospective or ex ante evaluation takes place before a programme or project has been implemented, whereas a retrospective or ex post evaluation takes place once a programme or project is in place and has had time to demonstrate an impact (Rossi et al., 2004).
Ex ante evaluations are most commonly undertaken by governments or similar bodies as part of the policy and programme development cycle. They normally have a strong economic component. The European Commission (2001) defines ex ante evaluation thus:
.....