
Monitoring and Evaluating Flexible and Adaptive Programming

Julian Barr blogs on a DFID internal learning event on Monitoring & Evaluating Flexible & Adaptive Programming, where he presented lessons learnt from SAVI.

22/04/2015

A couple of weeks ago I was at DFID to present lessons from the State Accountability and Voice Initiative (SAVI), a DFID Nigeria programme we are delivering with GRM. The event was an internal learning session on Monitoring and Evaluating Flexible and Adaptive Programming.

SAVI has been involved in wider debates on adaptive programming (there was an earlier learning event on this) and on locally-led, politically smart development, thinking that has crystallised in the Doing Development Differently (DDD) manifesto. This confirms a long-held Itad belief: soon after a new idea emerges in development, people want to measure and assess how it works. That is now true of DDD and adaptive programming.

The session opened with an excellent summary by Rachel Kleinfeld from the Carnegie Endowment of her report on improving the design and evaluation of political reform initiatives; her report has the apt subtitle ‘Plan for Sailboats, Not Trains’. Leni Wild from ODI discussed Monitoring & Evaluation (M&E) lessons emerging from their work on adaptive/DDD approaches to improving service delivery. DFID staff also reported experiences from governance projects in Myanmar (Burma), Vietnam and Zambia.

Four lessons from SAVI that resonated well with others were:

  • Identifying “bedrock indicators”, particularly at the Intermediate Outcome/Outcome level. These don’t change over the adaptive life of the programme and keep the goalposts in place. They also avoid the trap of having too many process indicators in the results framework. Selecting meaningful and durable bedrock indicators is not easy.
  • Allowing indicators to be flexible – particularly at the Output level. Adaptive programming means that activities and outputs will change over the life of the programme. There is no point sticking fast to indicators that are no longer relevant measures of what the programme is doing. These should be adapted as the programme evolves. This means logframes will go through many versions.
  • Since these programmes pursue change through a number of routes, no single indicator can capture the desired change. Hence, there was broad agreement that programmes should aim to employ complementary baskets of indicators.
  • Particularly in governance programmes, it is often hard to predict what will result from improvements in governance processes. SAVI therefore uses open-ended “concrete change” indicators: the programme commits to delivering a target number of improvements, without pre-specifying what they will be. Partners in the different States in Nigeria work on a range of issues of their choice, and SAVI uses Outcome Harvesting to retrospectively capture real developmental improvements that result from its support to partners. The M&E tools aim to tell the story of each ‘concrete change’ and identify contributory factors, including the role of SAVI’s support. SAVI is conscious of not falling foul of a post hoc logical fallacy in claiming successes.

There were also cautions at the meeting about M&E in flexible and adaptive programming:

  • Several people were concerned that methods for M&E in adaptive programmes are not straightforward or cheap. The methods require a good level of skill, and it may be risky if these M&E approaches become the norm without a sufficient M&E skills base to draw on. These programmes tend not to rely heavily on simple ‘counting things’ M&E, and this makes their M&E more expensive. But, as was noted in the context of adaptive programmes in the M4P field, in this type of programme M&E is more important, and the greater expense can be justified.
  • DFID is supportive of programmes that learn from failure – the ‘fail fast’ approach. The risk of being too adaptive in M&E is that failures are not learnt from as well as they might be, as the M&E focus shifts to developing indicators for the next approach to be tried, rather than reflecting on the discarded one. Continually evolving M&E can also make it hard for funders to know if and when to stop this type of programme.
  • Some programmes are using action research throughout their life to generate a reflective cycle; M&E could do more around ‘developmental evaluation’ approaches to better define learning and improvement loops.

It looks like this is going to be an interesting space to be involved in – how to measure change in changing interventions.

Julian Barr, April 2015