

Pushing the boundaries: considering new ways of utilising evaluations

Itad's Bara Sladkova reflects on this year's UKES Conference, which explored the current ways of utilising evaluations.

7/06/2017

This year’s UKES conference explored the current uses of evaluation. The role of evaluators in delivering learning products and enabling evaluation uptake featured in many conversations, as did new ways of working with clients to improve their business and management strategies. My takeaways focus on the latter: evaluation reports aside, in what other ways can evaluators support the work of clients?

  1. Learning products have no traction without learning processes: Evaluators play an increasing role in the uptake of learning products, but, drawing on the independent evaluation of the Tilitonse Fund, Mel Punton and Julia Hamaus pointed out that more work needs to be done to improve learning processes. They outlined the need for grant making to go hand in hand with lesson learning, capacity building and convening. In Mel’s experience, learning processes matter more than the learning products themselves, and evaluators’ limited ability to accommodate learning is a source of concern.
  2. Learning processes are often compromised by jumping on the ‘accountability train’: The trade-offs between accountability and learning are often rooted in a misalignment between the wants and needs of the various stakeholders engaged in an evaluation – from evaluation commissioners and other audiences to the groups involved in data collection. Evaluators are increasingly aware of the tension between the learning audiences outlined in their terms of reference (ToRs) and the real learning needs that delivery often reveals. In Mel’s words, ‘jumping on the accountability train’ risks compromising evaluators’ ability to tailor learning processes and products to the actual learning needs discovered along the way.

Two weeks later, the same point was made by Michael Bamberger in a webinar titled ‘Evaluation and the SDGs’. In his view, we should think about ways to communicate evaluation results as early as the implementation phase to ensure that findings are useful to evaluation commissioners, as well as stakeholders and groups interviewed as part of the evaluation. Michael argued that this approach requires tailored learning products (condensed, simplified, translated and context sensitised) delivered through various kinds of communication strategies.

Back at UKES, many presentations conveyed the same message: for evaluations to be fully utilised, evaluation products and lessons learnt have to be well tailored to the needs of their audiences and made accessible to them, and audiences have to have the time, buy-in, resources and capacity to appreciate and benefit from the products. The difficulty is that not all learning needs can be formalised up-front. This prompts the question of whether evaluators should challenge the limited scope of their role in learning uptake and apply their skills and insights in a more strategic way.

  3. Strategic objectives give evaluation purpose and improve the traction of evaluation outputs: A session on delivering evaluation for private and philanthropic clients showed how much the successful uptake of evaluation findings depends on the commissioner’s motivation and strategic interest in evaluation services. In the Small Business Research Initiative (SBRI), an example given by George Bramley from the University of Birmingham, evaluators helped to identify the improved health and social outcomes, as well as the resource efficiencies in healthcare provision, generated by the client’s activities and outputs. The commissioner’s motivation was driven by a strategic objective: to use evaluation to demonstrate a good value proposition. Whereas outputs from the Tilitonse evaluation and many others are destined mainly for accountability, private actors and many philanthropic organisations tend to use evaluations more strategically – perceiving evaluators as strategic partners.

Though it’s not always acknowledged, supporting clients with strategy alignment is, in one way or another, an inherent part of many evaluations – portfolio evaluations being an example. Private sector and philanthropic clients are increasingly tapping into this potential, widening the scope of evaluations and enabling evaluators to provide more strategic services. The strategy alignment services that Itad delivered as part of the Evaluation of the Children’s Investment Fund Foundation’s (CIFF) Contribution to the 2015 International Climate Negotiations are one such example.

Through Kay Fisher’s work with Experience Engineers, the Youth Cancer Trust feeds evaluation results into its broader management and investment strategies. Applying a combination of quantitative and qualitative methods, Experience Engineers identified gaps between the needs of young cancer patients and the realities of their experience of the services made available by healthcare providers, including the Trust itself. The Trust used the outcomes of the study to further develop its strategy and service offer to young cancer patients – the interviewees and the primary beneficiaries of the study.

To sum up, to transform Mel’s ‘accountability train’ into an adaptive one, evaluation ToRs should become less prescriptive and more open to adjustment, allowing evaluators to design flexible, targeted learning processes and deliver evaluation outputs fit for a strategic purpose – be it programme design, investment alignment, or strategy development. This potential should be considered as early as the design stage, so that evaluations are aligned with strategic objectives, sufficiently resourced, and able to address ad hoc learning needs. At the same time, it’s crucial that evaluators communicate an evaluation’s aims and objectives to all their stakeholders and ensure that target audiences are familiar with its wider strategic purpose. Evaluation outputs are then more likely to gain traction among stakeholders and target audiences, and can more easily feed into broader strategies – improving the way resources are invested in international development and, in doing so, ensuring that programmes have the greatest possible impact on people’s lives.