What I learned at the CARES realist evaluation conference

18/10/2016

A couple of weeks ago I attended the CARES realist evaluation conference at the Barbican Centre in London. It was a brilliant few days of reflection and debate about all things realist, and a great chance to discuss Itad’s first realist evaluation (of the Building Capacity to Use Evidence programme, or BCURE) with the wider community.  Here’s some of what I learned…

Realist evaluation is here to stay (and has a lot to offer the international development field). Gill Westhorp informed us that ‘Pawson’ and ‘realist evaluation’ were the most frequently referenced terms in the journal Evaluation last year. At this year’s European Evaluation Society conference, there were 18 hours of discussion on realist evaluation across 6 sessions (for comparison, there were 7 sessions on QCA – which my colleague Florian wrote about recently – and just one on contribution analysis). Sara Van Belle argued that realist evaluation comes in the slipstream of a widespread acceptance of theory-based approaches in international development evaluation, welcomed by commissioners who know they are trying to implement projects in complex settings to address ‘wicked problems.’ Our experience from the BCURE project is that realist evaluation provides a systematic way to examine how context affects how people respond to the resources provided by programmes in a complex environment, and how this influences programme outcomes (we wrote about this in a CDI practice paper last year).

Ray Pawson presented a fascinating new argument against RCTs as a ‘gold standard.’ He argued that pharmaceutical drug trial RCTs (the ultimate ‘gold standard’ of RCTs) are only possible because of many years of prior basic science, which has developed, tested and refined theory about the drug’s mechanism of action. By the time the RCT happens, there is already a well-developed theory about how and why the drug works, the appropriate dosage, the eligible patient sub-population most likely to respond… in other words, an idea about ‘what works for whom in what circumstances, in what respects and why.’ In contrast, Pawson argued that RCTs in social research are a ‘79-pound weakling’ – because they aren’t preceded by all this prior theory development and testing work. Ray’s presentation was filmed and should be up on the conference website soon (and he’s also presenting in Oxford next month) – definitely worth a watch.

Realist evaluation means moving away from starting every evaluation from scratch, and towards ‘standing on the shoulders of giants’. Ray Pawson and Gill Westhorp both implored evaluators to ‘stand on the shoulders’ of previous research. There is no such thing as a unique programme. Theories will almost certainly have been tested in previous evaluations (even if in a different field), and doing realist evaluation means building on these theories rather than starting from scratch every time. Pawson talked about building ‘mechanism libraries’ (or ‘programme theory libraries’) – containing generalisable insights from across many different evaluations, about how different classes of programmes might work with different classes of people in different classes of contexts.

The Barbican Centre conservatory – a post-apocalyptic vision of abandoned city tower blocks, aka a great place to have a poster exhibition.

However, some presentations throughout the conference highlighted challenges with this. How should the evaluator choose which of the many thousands of theories across the social sciences to build on? How can the researcher avoid their evaluation being shaped by ‘serendipity’ – stumbling across theories that seem to fit by chance, or explaining the programme with reference to theories that they happen to be familiar with in their field?

Doing realist evaluation means ‘thinking like a realist.’ Realist evaluation is more than an approach: it’s a worldview and a way of thinking, grounded in a realist understanding of reality (as Gill explains concisely in her ODI methods paper). Thinking and collecting data in a ‘realist way’ presents many challenges – most are common across qualitative research, but they show up in specific ways within realist evaluations.

  1. I’ll show you my theory if you show me yours…? The goal of realist interviews is to explicitly present the researchers’ theory to respondents and ask them to comment on it – but several presenters talked about situations where this is challenging or even likely to have a negative effect on the respondent. For example, when interviewing alcoholics who have issues of cognitive functioning and distrust of people perceived to be in positions of authority, or women in cultural contexts where they aren’t empowered to think for themselves, or elderly people who are vulnerable and nervous about the idea of speaking to a researcher. In BCURE we were worried that our interview respondents would just agree with our theories in order to ‘tell us what we want to hear’ – in the hope of gaining further DFID support in future. Mari Berge and Jean Hannah from the University of Stirling got around this by ‘keeping the theory in the back of their heads’ and manoeuvring around it – and found that they could identify Cs, Ms and Os (contexts, mechanisms and outcomes) from transcripts even without explicitly presenting theory (we have also done this in BCURE). Nehla Djellouli explained the tactic of ‘WHY 5 times and HOW 5 times’ – getting researchers to keep digging in interviews with participants who were not used to reflecting on ‘why’ and ‘how’ questions.
  2. Whose theory is it anyway? Several presenters (including Gill and her colleague Andreas Sihotang) talked about the issue of research in international contexts where the lead researchers come from a very different cultural background to the participants in the intervention. There is a risk that the international researchers may override local interpretations and understandings by imposing their own theories about how the world works. Sara Van Belle raised the question of whether theories held by researchers from the global North are valid and ‘culturally transferable’ in international development contexts. Using local researchers can help bridge the divide – but there are all kinds of issues with this too, not least the difficulty of training novice local researchers to ‘think like realists’ within limited budgets.
  3. Lost in translation? Emma Williams talked about the issue of conducting interviews in one language and translating them into another, and the difficulty of ensuring that complex meanings aren’t ‘lost in translation’ during this process. How do you ensure consistency when there are multiple researchers who may be translating in different ways?
  4. Parachuting or partnership? How should international research teams be structured, when the lead evaluators are from the North and the data collectors and project participants are from the South? Ana Manzano discussed the ‘parachuting’ approach – where international researchers fly in for a week and collect data before flying home again to analyse it – and the ‘postal’ approach – where local researchers collect the data and ‘post’ (or email) it to Northern researchers for analysis. She argued that the best model is neither of these, but rather a ‘partnership’ approach involving mutual trust and shared decision-making between all members of the research team. Both Ana and Sara Van Belle argued that this model is necessary in realist evaluation – without it you won’t get good data, because the data collectors won’t be ‘thinking like realists’.

Both Ana and Gill Westhorp talked about how building the capacity of international research teams was an integral part of their evaluations: building face-to-face meetings and capacity workshops into the budget, using ‘methods handbooks’ and similar tools, involving the international team in the theory development process, using webinars to introduce teams to the concepts of realist evaluation and realist interviews, providing lots of on-the-ground support, and ultimately drawing on a variety of capacity building tools to get people to ‘think like realists’.

Overall, there is plenty here to chew over, and to consider how it can be brought into the realist evaluation work we are doing at Itad. It’s great to be part of such an active community of practice around realist evaluation, and very reassuring to know that others are grappling with the same issues as we are! Huge thanks to Justin Jagosh and the organising team for putting on a great event.