Making Results Systems Work Part 1 – Comparing the Results Measurement Systems of Development Agencies
One of the key areas of interest to the Organisational Effectiveness Theme at Itad is understanding how to make results measurement and evaluation systems work more effectively. So, when we were approached to contribute two articles to a special edition of the Journal of Development Effectiveness on agency-wide performance measurement systems, and a practice paper for the Centre for Development Impact (CDI) at the Institute of Development Studies on the factors that affect the quality of commissioned evaluations, we jumped at the opportunity.
Over the next three weeks we’ll be exploring some of the key ideas and messages coming out of this work. They are all linked to the idea of ‘making results systems work more effectively’.
- Today we’ll look at the findings from our comparison of the results measurement systems of four agencies: Norad, Danida, the Department for International Development (DFID) and the World Bank.
- Next week we’ll focus on the role of leadership in getting results measurement systems to work;
- The following week we’ll focus on the key factors that determine whether an evaluation is good quality.
Part 1 – Comparing the Results Measurement Systems of Development Agencies
In one of the articles we wrote for the Journal of Development Effectiveness we compared the results measurement systems of Norad, Danida, DFID and World Bank and looked at how each agency has embedded results measurement in the planning of interventions, monitoring and reporting, and quality assurance. From the comparison we identified two divergent models for agency wide results measurement systems: a flexible system and a standardised system. Below we outline the core elements of both models and briefly discuss their respective benefits and challenges.
The flexible results measurement system is best characterised by the Norwegian model (we’ve also seen it among a number of INGOs and foundations): it requires very limited planning on how results are to be measured at the design phase of an intervention (other than the normal logframe); common templates and tools for monitoring and reporting exist, but are not mandatory; results frameworks and reports are quality assured at the discretion of the individual managing the grant; and there is no standardisation in how intervention performance is assessed. In the case of Norway, this highly flexible approach is born of its deep commitment to being partner led and not wanting to impose any sort of Norway-centric reporting system or approaches onto partner governments and organisations. Interestingly, many INGOs and foundations that share this model use a similar argument: they are supporting a wide range of local and national organisations and don’t want to place too many burdensome demands on their internal systems.
While partners may like this approach as they are free to report as they want, the absence of any real mandatory requirements on how results should be planned, measured and reported means that whether and how results are measured is very ad hoc. This makes it very difficult for the likes of Norway, and any other agency that takes this approach, to say anything about what is being achieved above the level of the individual grants.
While this model for an agency-wide results measurement system undoubtedly has its challenges, it also has its benefits: the flexibility that is built into it provides scope for a more adaptive and context-specific approach to measuring results, and the fact that it builds on partner systems is central to effective development.
However, from our perspective, to function effectively this approach needs certain pre-conditions to exist. First, staff need to be well skilled in measuring results, as they are going to have to negotiate lots of different results frameworks presented by partners in many different ways. They’ll need the skills to appraise them, offer suggestions for improvement and then often work with the partner to implement the recommendations. Second, staff need the time to be able to do this. Having a meaningful conversation with partners around the strengths and weaknesses of their results framework, then helping them to improve it, is time consuming. If there is pressure to shift funds and get money out the door, there will be conflicting priorities, and our experience suggests that nine times out of ten shifting funds trumps good planning for measuring results. Third, partners’ results measurement systems need to be of a sufficient quality. Linked to the point above, this will again require time for donor staff to appraise partners’ systems and support capacity development where it is needed.
The standardised results measurement system is best characterised by DFID and, to a slightly lesser extent, the World Bank. The systems of both institutions have lots of mandatory processes and templates for results measurement. These include requirements to develop theories of change, to review the evidence underpinning interventions, and to plan for when and how evaluations will be used to plug evidence gaps and support learning. They also include formal quality checks on results frameworks and reporting.
The main benefit of this approach is that the standardisation in the system means that there is greater consistency in how results are being measured and reported. This leads to the generation of lots of data in a format that supports aggregation. For example, the World Bank’s scoring system used in the Independent Completion Reports enables practices from across a diverse portfolio to be aggregated, an overall picture of performance to be developed, and patterns and trends to be identified across regions and sectors.
While this type of system might enable an organisation to generate large amounts of comparable data, it also has its challenges. First, it can rub up against the idea of a more adaptive and agile style of management. This type of approach requires more resources to be invested up front at the design stage. In contexts of rapid change, spending lots of time developing the perfect project design and associated results framework might not be appropriate, as activities may need to change soon into implementation. Second, it puts significant capacity strains on partners and can skew domestic systems towards donor accountability rather than domestic accountability.
There is no right or wrong approach to designing an agency-wide system. Which approach an agency decides to pursue depends on what it wants the system to deliver. It will also be shaped by the agency’s values. What is important, however, is that an organisation recognises the approach it is taking, is aware of the challenges inherent in that approach, and looks at ways of actively managing them.
Rob Lloyd, March 2015
Read Part 2 – A Framework for thinking about Leadership on Results
Read Part 3 – What Drives Evaluation Quality?