

Increasing the use of evaluation results: three tensions to navigate

As pressure to demonstrate Value for Money and results in international development increases in the current political climate, funding and implementing organisations alike are commissioning a growing number of evaluations. But how many expensive and well-intentioned evaluation reports end up gathering dust on a shelf?

18/05/2017

Unfortunately, basing an evaluation on sound methods and presenting reliable evidence to back up its conclusions and recommendations is not enough to actually change the way a programme, or future programmes, are implemented. To lead to change, the findings of an evaluation first need to be ‘used’ by individuals (decision makers, programme managers, donors and so on), something independent evaluators typically have little or no influence over, especially once the evaluation is completed.

Last week I had the chance to attend the annual UKES Evaluation conference in London. This year’s theme was precisely ‘the use and usability of evaluation’. The presentations I attended made me reflect upon a number of tensions that can affect the use(s) of evaluation results.

First, a tension between accountability and learning. Promoting learning can be challenging when neither the programme nor the evaluation is set up for it. If a funder commissions an evaluation to gather information on how its portfolio is performing, in order to decide whether to change its composition for future funding rounds, there may not be enough resources or opportunities for grantees themselves to engage with the evaluation results and learn from them. Moreover, implementing organisations often perceive evaluators as ‘separate’, and their function is too often equated with an audit, which can also hinder learning. If the main purpose an evaluation needs to serve is accountability to donors, the evaluation questions and methods might be designed in a way that does not delve into how and why intended results are or can be achieved, focusing only, or mostly, on whether and to what extent the results materialised. In this case, the whole evaluation can become a tick-box exercise. Finally, if an evaluation is to lead to course correction, preliminary findings need to be shared at an early stage, when the evidence gathered may still be a little wobbly, whereas accountability calls for sounder evidence, gathered and analysed over a longer period (including ex-post).

Second, a tension between short-term/internal learning and longer-term/external learning. For evaluation findings to influence programme implementation in the short term through course correction, getting the evaluation timing right is essential: resources need to be frontloaded and learning opportunities built into the early stages of an evaluation. In practice, because of time pressure and the limited resources devoted to monitoring, evaluation and learning, the bulk of resources tends instead to be concentrated on the final, and sometimes mid-term, report, with little budget left for engagement and learning opportunities throughout the process.

Third, a tension between short-term learning and evaluators’ independence. A recurrent theme in many UKES presentations this year was the role of evaluators and the quality and type of relationship established between evaluators and the evaluand. As mentioned above, evaluators tend to be perceived as separate, and this separation is reinforced by describing evaluators as ‘independent’ from the intervention being evaluated. While it is of the utmost importance that evaluators retain their independence of judgement and remain free of conflicts of interest, if they are to become learning facilitators the traditional perception of evaluators as ‘them versus us’ needs to shift towards something more like a ‘critical friendship’.

How can we navigate these tensions to increase the chances of evaluation findings leading to greater impact on people’s lives (through international development resources being better spent)? This year’s UKES presentations gave me some pointers:

  • The perception that independence means separation needs to be challenged. Evaluators need to negotiate their role so that they can give advice and be heard. In this regard, it could be helpful to integrate a research uptake/learning role into the M&E team. It is also important to build a positive ‘human’ relationship with implementers and funders, and to identify a common language early on. Moreover, evaluators need to maintain their objectivity and avoid coming across as having an agenda too early on, so as not to spoil the relationship.
  • Potential trade-offs between accountability and learning, and between short- and long-term learning, need to be acknowledged and discussed with clients at an early stage. Commissioners need to be aware from the beginning of the potential tensions highlighted above. Evaluators need to listen carefully to the client’s needs and, where the client does not yet have a clear-cut view of what they want out of the evaluation, facilitate a discussion on what is feasible and desirable. All parties should reach a common understanding of the evaluation’s purpose before entering the design stage.
  • Evaluators should gauge, and try to influence, the appetite for learning of the organisations they work with and for. The extent to which evaluation results will be used largely depends on the organisational culture of the primary users. Evaluators can nevertheless do their part to stimulate organisations’ appetite for learning. For instance, from the beginning they could propose collaboratively developing an explicit uptake strategy – that is, a clear plan for dissemination and use of the evaluation’s findings, conclusions and recommendations. This will decrease the chances of reports being left to gather dust.
  • Learning products are less important than learning processes. It is widely acknowledged that adults learn more by ‘doing’ than by reading, and that learning is enhanced by collaborative relationships between learners and their educators. Engaging primary users throughout the entire process, so that they develop a greater sense of ownership of the evaluation and its results (to encourage buy-in and then use), is also a cornerstone of Michael Quinn Patton’s Utilization-Focused Evaluation (U-FE)1. Hence, the budget needs to adequately account for learning and engagement activities – e.g. sessions to discuss evidence, unpack perceptions of what works and assess the quality of the evidence used. To build confidence in the findings, it is also crucial to be truthful about the evaluation’s limitations.
  • Evaluators and donors should put a greater focus on learning and use, rather than accountability alone. Ultimately, if an evaluation is to lead to change through learning, evaluators and donors alike need to work through perceptions of evaluation as a burdensome, time-consuming tick-box exercise. In this regard, it would be worthwhile to invest in monitoring and evaluation capacity building in implementing organisations, as well as to make sure that their learning needs are taken into account in the design phase and that the right incentives to use the analysis are provided throughout the process.

Looking ahead, given that a growing number of donors are encouraging adaptive programme approaches to achieve better results, that there is emerging interest in, and opportunity for, more real-time monitoring data, and that there is an increasing need to demonstrate the impact and value of evaluation activities, it is clear that the role of evaluators is bound to change. The days when evaluators could simply work at a distance and maintain a strict separation from the object of their investigation in the name of their independence are over. If the value of evaluation lies in the use of the evidence it provides and in the benefits that use confers on users, it is time for evaluators to take up new skills and new challenges.

Footnotes

  1. Michael Quinn Patton (2012) Utilization-Focused Evaluation, Saint Paul, MN.