How can capacity building improve how policy makers use evidence?

Itad has just finished a three-year realist evaluation of the Building Capacity to Use Research Evidence (BCURE) programme – a £15.7 million, DFID-funded initiative that aimed to build capacity for evidence-informed policy making.

This involved six linked capacity building projects across more than 12 low- and middle-income countries in Africa and Asia. BCURE partners used a range of activities – including training and mentoring, technical support to develop evidence tools and guidelines, and learning exchanges and policy dialogues. The evaluation investigated how, why and for whom capacity building worked and didn't work across the BCURE projects.

What did we find?

The starting assumption behind the programme was:

“Evidence is crucial to successful policy making. However, in many low and middle-income countries, policy makers lack the capacity to effectively access, appraise and apply research when making decisions.”

But did this assumption hold?

The evaluation suggests that it did – to some extent. Across all the BCURE contexts, there was a genuine need to build technical skills in evidence access, appraisal and use. However, while it is a problem when civil servants don't understand statistics or can't weigh up which evidence sources are reliable, fixing these skills gaps won't improve policy if there is no political space to bring evidence into decision making, and no incentive for senior decision makers to care about evidence.

BCURE was set up as a fairly technocratic project with limited political economy analysis. But as the project matured, we found that where the BCURE partners engaged with politics and incentives, and took a ‘systems’ view of capacity (working to change things at the level of the individual, the organisation and the wider environment, rather than seeing capacity as a ‘gap’ to be ‘filled’), they managed to find windows of opportunity to catalyse change. Where BCURE worked, it worked through generating a combination of specific ‘mechanisms’ (in realist evaluation speak) or change processes (in layman’s terms). For example, success followed where partners:

  • Accompanied government ministries through a process of reform in a flexible, collaborative way (rather than delivering pre-set activities through a more traditional 'supplier-consumer' model).
  • Targeted people who worked with evidence on a day-to-day basis and could apply their new skills directly in their jobs, through training that followed best practice in the adult learning literature – tailored to real-life needs, practical and participatory, and focused on soft as well as technical skills.
  • Developed evidence tools that made it easier for policy makers to do their jobs.
  • Co-produced policies in an evidence-informed way, helping to showcase the value that evidence can bring.
  • Reinforced evidence use through rewards (such as professional recognition) or tapping into control mechanisms (making procedures mandatory and enforced by an organisation with clout).
  • Supported the genuine (rather than superficial) adoption of new processes, practices or tools, so that government partners could continue promoting evidence-informed policy making in their own way into the future.

Over the course of the evaluation, we also learned a lot about how to apply a realist evaluation approach, adapt it to an international development context, and communicate the nuanced findings that realist evaluation generates in an intuitive way. We shared our early learning through a CDI practice paper in 2016, but have a lot more to share now that the evaluation is over – so watch this space!

Find out more about our work on BCURE on our findings and publications page.