What works for social accountability? Findings from DFID’s macro evaluation

I wrote my first blog on DFID’s macro evaluation of its empowerment and accountability project portfolio way back in 2015, at the end of the evaluation’s 11-month inception phase. Some 18 months on, I’m over the moon to finally publish findings from the evaluation which focus specifically on DFID’s support to social accountability initiatives.

The macro evaluation analysed 50 DFID social accountability projects to find out what works, for whom, in what contexts and why. It has been a long and arduous road, but the findings are well timed. UK aid is under ever more intense scrutiny to deliver results and value for money, and building the evidence base around what works in different contexts is critical to that. We are therefore pleased to share the findings and look forward to DFID and other agencies making good use of them in their future empowerment and accountability policy and programming.

So what did the evaluation find?

Most importantly, our analysis of 50 different projects operating in a wide range of contexts revealed that social accountability processes almost always lead to better services, with services becoming more accessible and staff attendance improving. They work best in contexts where the state-citizen relationship is strong, but they can also work in contexts where this is not the case.  In the latter, we found that social accountability initiatives are most effective when citizens are supported to understand the services they are entitled to. Taking an adaptive approach and being responsive to changing contexts also improves results.

Interestingly, the evaluation highlighted that social accountability interventions can improve access to services for marginalised groups if the dialogue between citizens and service providers includes all social groups.  Results were better when social accountability interventions were integrated with government support for services targeting marginalised groups.

Whilst these findings highlight the benefits of social accountability interventions, the evaluation also chimes with Jonathan Fox's observation (2014) that the challenges in scaling them up often leave these benefits trapped at a local level. We therefore propose that special efforts are needed to connect local social accountability initiatives to higher-level government structures and processes to achieve success at scale.

The evaluation’s findings are disarming in their simplicity and mask the rigorous evaluation process that underpins them.  To reach these conclusions, we screened 2,379 DFID projects to identify 180 social accountability projects that met our inclusion/exclusion criteria.  We reviewed project documentation held on DFID’s system for all of these and found 50 of them had sufficiently high-quality data to be included in the evaluation.  We compared this set of 50 projects with the other social accountability projects and confirmed that there was a high degree of comparability and therefore low bias in the projects selected for the evaluation.

In parallel, we studied the available evidence base on social accountability initiatives and documentation for the selected projects to identify 17 hypotheses to test on the projects and contribute to addressing gaps in the evidence.

A challenging analytical process

The evaluation team developed an innovative methodology which combined Qualitative Comparative Analysis with narrative analysis, to analyse the set of 50 projects.

We first used QCA to test the 17 selected hypotheses and identify combinations of factors necessary and/or sufficient to achieve certain results in projects. QCA is an innovative method, yet lacks established standards to guide implementation. Whilst this gave us licence to trail-blaze, it also led to hot debate between our evaluation team and DFID Advisers about how to apply the method and what we could strictly conclude from the analysis.
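For readers unfamiliar with the mechanics, the core idea behind crisp-set QCA can be sketched in a few lines of code. This is a toy illustration only: the condition names and project codings below are invented for the example and do not come from the evaluation, and real QCA software adds consistency thresholds and Boolean minimisation on top of the truth table shown here.

```python
from itertools import groupby

# Hypothetical binary codings for illustration:
# (strong state-citizen relationship, citizens know their entitlements,
#  adaptive approach) -> outcome (1 = improved services)
projects = [
    ((1, 1, 1), 1),
    ((1, 1, 0), 1),
    ((0, 1, 1), 1),
    ((0, 1, 1), 1),
    ((0, 0, 1), 0),
    ((1, 0, 0), 0),
]

def truth_table(cases):
    """Group cases by their configuration of conditions and compute each
    configuration's consistency: the share of its cases showing the outcome."""
    table = {}
    for config, group in groupby(sorted(cases), key=lambda c: c[0]):
        outcomes = [outcome for _, outcome in group]
        table[config] = sum(outcomes) / len(outcomes)
    return table

# In this toy data, a configuration with consistency 1.0 is treated as
# sufficient for the outcome; combinations below 1.0 are not.
for config, consistency in sorted(truth_table(projects).items()):
    print(config, consistency)
```

The analytical debates the evaluation describes typically arise at exactly this point: deciding what consistency level counts as "sufficient", and how far a pattern across 50 coded projects can be generalised.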

Our second method, in-depth narrative analysis, helped us understand how these factors work in context. The narrative analysis consisted of identifying themes arising in project documentation which were relevant to the hypotheses identified, systematically coding the relevant narrative passages, and then analysing the resulting evidence to flesh out the QCA findings and draw conclusions against the working hypotheses. Whilst the narrative analysis should have been relatively simple to apply, it proved only marginally less controversial than the QCA. The main issue the evaluation team and DFID Advisers debated was whether findings from the narrative analysis could be generalised more broadly. Despite using a transparent methodology to ensure the 13 projects selected for narrative analysis were typical of the configuration of factors being explored, we concluded that context is paramount. Findings from the narrative analysis are therefore powerful, but illustrative only.

So you see, it has been quite a journey.  If you’d like a fuller summary of the evaluation’s main findings and policy lessons, take a look at our What works for Social Accountability Policy Briefing. And for those of you interested in the detail, including the rigorous evaluation process that has led to these findings, take a look at the full macro evaluation report.

Claire Hughes, July 2017
