Blog

Six Lessons from the 2016 UK Evaluation Society Conference

Itad was well represented at this year's UK Evaluation Society (UKES) conference.

We were involved in five presentations across a range of subjects (though all rooted in this year's theme of 'complexity'), and we had several staff members among the general delegates. Here, some of them reflect on the lessons they took away from the sessions:

Leonora Evans Gutierrez, Consultant in the Fragile and Conflict-Affected States theme:

Whilst at the UK Evaluation Society conference last month, I attended a parallel session on 'Ethics in Evaluation' given by Dr Leslie Groves Williams. In the session, Dr Groves Williams presented the findings of a report commissioned by DFID, the aim of which was to analyse the ethics principles, guidance and practice currently employed by DFID (and by other commissioners of evaluation and research to DFID). The report sought to challenge existing assumptions and champion a change of behaviour vis-à-vis ethics in evaluation. It revealed a number of notable findings, including: the lack of a clear definition of ethics; many assumptions about ethics, but no shared understanding of what ethics are or of their role in international development evaluation; and a general failure to consider ethics throughout the project cycle.

During the session, the need to bring ethics front and centre in international development evaluations became clear. Failing to do so could reinforce marginalisation and discrimination, and increase the risk of cross-cultural recriminations.

Should we move away from ethics as a bureaucratic hurdle, so they become part of everyday practice, including communication and dissemination? Should we be striving to ‘do good’ rather than confine ethics to ‘do no harm’?

Rob Lloyd, Associate Director:

I attended a great session at UKES on communicating evaluation findings. There was lots of discussion around the different ways of communicating evaluation findings to audiences, and some good insights into new and innovative ways of doing this. But what the discussion really got me thinking about is what evaluators should be held accountable for. We all do this job because we want to be agents of change (I think?); we want the work we spend hours, days and months slaving over to be used, to stimulate discussion (sometimes argument) and definitely action. But should we be held to account for the findings, conclusions and recommendations of an evaluation being used? Uptake depends on so many things that are outside of our control as evaluators. Decision making in any organisation is political, and evidence is but one factor in the process. For me, what we should be held accountable for are four process-related issues:

  1. Producing a high-quality report that contains credible evidence presented in a clear and concise way;
  2. Effectively involving key stakeholders in the evaluation and building their buy-in to the process;
  3. Working with clients at the start of the evaluation to clarify exactly what they want from the process;
  4. Communicating the findings in a way that engages, and is tailored to, the audience.

I don’t think it’s fair that we are held to account for the outcome of an evaluation (uptake), but the process is what we have much more control over, and we need to execute it in such a way that uptake is more likely.

Fabiola Lopez-Gomez, Consultant in the Private Sector Development theme:

Evaluations of market systems programmes are challenging. For instance, previous evaluations have used simple linear theories of change which do not adequately reflect systemic change. Limited attention has been paid to possible unintended and negative effects. Data triangulation practices are weak, particularly in the use of qualitative data.

The UKES conference hosted a presentation on "The Keystone Node Approach to conduct theory based evaluations of complex programmes". The presentation was a practical example of an alternative approach to evaluating market systems interventions. The evaluation uses contribution analysis combined with a mixed-methods approach to data collection.

A dilemma commonly faced by evaluators is how to evaluate a complex programme with a limited budget. This evaluation tries to address the issue by focusing on key elements, or 'keystone nodes', of the programme theory rather than on the whole programme theory. The nodes are selected from the results chains for each programme intervention using specific criteria. They are the main focus of the research effort, as they contain the most critical and learning-rich steps within each results chain, and they are tracked through the implementation of the programme.
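To make the idea concrete, here is a minimal, purely illustrative sketch of what that selection step could look like, assuming nodes in a results chain are rated against criteria such as criticality and learning value. The criteria, scores and names below are my own assumptions for illustration, not those used in the actual evaluation.

```python
# Illustrative sketch of selecting 'keystone nodes' from a results chain.
# The criteria (criticality, learning value) and ratings are assumptions
# made for illustration, not the actual criteria used in the evaluation.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    criticality: int     # how pivotal the step is to the causal logic (1-5)
    learning_value: int  # how much the programme can learn from testing it (1-5)


def select_keystone_nodes(results_chain, top_n=2):
    """Pick the nodes that are most critical and learning-rich,
    so that scarce research effort is focused on them."""
    return sorted(results_chain,
                  key=lambda n: n.criticality + n.learning_value,
                  reverse=True)[:top_n]


chain = [
    Node("Farmers adopt improved seed", 5, 4),
    Node("Input suppliers stock improved seed", 4, 5),
    Node("Training sessions delivered", 2, 1),
]
print([n.name for n in select_keystone_nodes(chain)])
```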

A good lesson learned from the presentation is that in order to apply theory-based evaluation approaches, it is vital to have a good theory of change in place.

Ethel Sibanda, Principal Consultant in the Social Protection and Livelihoods theme:

One of the most fascinating sessions for me at the UKES conference was the session on Choosing Appropriate Evaluation Methods by Bond. Bond is developing a tool that provides evaluation commissioners and evaluators with practical guidance on selecting appropriate evaluation methods for specific interventions. The tool assumes that combinations of methods are often most appropriate, and that trade-offs may need to be made by those commissioning in order to ensure that an intervention is evaluable.

In summary, the tool tries to match the characteristics of methods with the characteristics of interventions. Eleven commonly used evaluation methods[1] are scored on their ability to answer five generic evaluation questions, for example:

  1. What difference did the intervention make, for whom and under what circumstances?
  2. How/why did the intervention make a difference?
  3. How much of the outcome can be attributed to the intervention, on average?

A set of rubrics is then used to score each method's ability to answer these questions. Methods are further scored against a set of 19 requirements, including (but not limited to) the ability to control who receives the intervention, the number of units receiving the intervention, the number of units that can be used for comparison (a control group), and the comparability of cases. The results are combined to gauge which methods are most appropriate for the given intervention.
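As a thought experiment, the matching logic might look something like the sketch below. The method names, rubric scores, requirements and ranking rule are illustrative assumptions on my part, not the actual contents of Bond's tool.

```python
# Hypothetical sketch of the kind of method-matching logic described above:
# score each evaluation method against the questions the commissioner wants
# answered, and drop methods whose design requirements the intervention
# cannot meet. All names and numbers are illustrative assumptions.

# Rubric scores (0-3) for how well each method answers each generic question.
METHOD_QUESTION_SCORES = {
    "RCT":                   {"what_difference": 3, "how_why": 1, "attribution": 3},
    "Contribution Analysis": {"what_difference": 2, "how_why": 3, "attribution": 1},
    "Realist Evaluation":    {"what_difference": 2, "how_why": 3, "attribution": 1},
}

# Design requirements each method depends on (a small subset, for illustration).
METHOD_REQUIREMENTS = {
    "RCT":                   {"control_over_assignment", "large_n", "comparison_group"},
    "Contribution Analysis": set(),
    "Realist Evaluation":    set(),
}


def rank_methods(questions_of_interest, intervention_features):
    """Rank methods by fit: sum the rubric scores for the questions asked,
    excluding methods whose requirements the intervention cannot satisfy."""
    ranked = []
    for method, scores in METHOD_QUESTION_SCORES.items():
        unmet = METHOD_REQUIREMENTS[method] - intervention_features
        if unmet:
            continue  # the intervention cannot support this method
        fit = sum(scores[q] for q in questions_of_interest)
        ranked.append((fit, method))
    return sorted(ranked, reverse=True)


if __name__ == "__main__":
    # A small intervention with no comparison group, asking 'how/why' questions.
    features = {"control_over_assignment"}
    print(rank_methods(["how_why", "what_difference"], features))
```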

Development of the tool is still in progress, and Bond has welcomed my offer to trial the tool at Itad before its official launch. I look forward to this, and hope the trial version will include more recent approaches like Contribution Tracing. We expect to receive a web-based/Excel version for trialling by July 2016.

Marc Theuss, Senior Consultant in the Health Theme:

There was a sense of unease among evaluators about the methodological options available in the contemporary evaluation landscape. In particular, there seemed to be frustration about the limited opportunities to generalise evaluation findings in a rigorous fashion through methods that are realistically feasible and that provide scope to contribute to iterative cycles of programme improvement and course-correction. People in the UKES sessions I attended described a rather polarised set of evaluation options. On the one hand, there are RCTs, which provide rigour and generalisability but are expensive, complex to implement and often not appropriate. On the other, whilst increasingly prominent approaches such as Adaptive Programming (PDIA) and participatory methods are exciting and innovative, they have not yet been proven to provide a platform for generalising from a series of evaluations to broader, aggregated findings and outcomes. I came away from the conference with the sense that formulating solutions to this challenge will be essential if evaluators are to embrace rigour whilst operating within practical budgetary and programmatic constraints.

Emmeline Willcocks, Communications Officer:

There were a couple of things that struck me most about this year's UKES conference: that the 'language' of communication seems to be evolving, and that there is an increasing awareness that evaluation practices and findings need to be communicated more frequently beyond (and indeed between) evaluators. As a non-evaluator working in an evaluator's world, I have in the past found myself baffled by acronyms, incomprehensible methodologies and intricate descriptions of findings, and this has been no less true when listening to speakers at conferences. At UKES 2016, though, I noticed a trend across the conference of unpacking and explaining these complexities (perhaps because the theme of the conference was 'complexity'!) in terms that seemed to appeal to evaluators and non-evaluators alike. This general sense of 'demystifying' was accompanied by several discussions about communicating evaluation results beyond hundreds of pages in a report. People at UKES weren't saying that the report is dead (indeed, it is still the most important part of an evaluation), but that it can be amplified by identifying potentially interested audiences (as well as whether the client might get more out of other formats), tailoring products (and language) to them, and publishing them across a range of media. One of the phrases that has stuck with me, and that I have brought back to my day-to-day role, is 'do the work well, but sell it well too'.

Footnotes

  1. Randomised Controlled Trials (RCTs), Difference-in-Differences, Statistical Matching (e.g. PSM), Outcome Mapping, Most Significant Change, Causal Loop Diagrams, Soft Systems Methodology, Realist Evaluation, Qualitative Comparative Analysis (QCA), Contribution Analysis, Process Tracing / Bayesian Updating