How accountability trumps learning: three lessons from evaluating the Tilitonse programme

Mel Punton and Julia Hamaus discuss their presentation at the UK Evaluation Society conference this year

At this year’s UK Evaluation Society Conference, we reflected on Itad’s experience working on the independent evaluation of the Tilitonse programme. Tilitonse, which means ‘we are together’ in Chichewa, is a multi-donor fund to promote civic engagement and support more accountable, responsive and inclusive governance in Malawi.

Along with partners Kadale and Cadeco, we led an independent evaluation of the programme between 2012 and 2016. Our purpose was to evaluate Tilitonse’s impact, but also to play a role in knowledge management and support civil society capacity, sharing best practice with partners and helping to keep Tilitonse focused on delivering results.

Nearly all donor terms of reference say they want both accountability and learning, and one of the big questions we tackled at the conference was whether there is a trade-off between them. Our experiences evaluating Tilitonse suggest that there is.

Did the evaluation promote learning?

On one level, yes. We made several recommendations, including around the importance of strategically clustering support to grantees, the need to support partners to think and act politically, and the importance of focusing on non-financial support such as facilitating, mentoring and convening. In their management response, DFID said they would act on these recommendations in the design of future programmes, including the Tilitonse local foundation, which will replace the programme once it wraps up later this year.

On the other hand, the response made it clear that there were limits to how far recommendations could influence the current programme given the limited time remaining. So why were these lessons – some of them fairly clear from early on – not incorporated into the project earlier? Why was the evaluation unable to influence the course of the programme during implementation? We drew out three lessons from our experience.

Lesson 1: Promoting learning is challenging when programme governance and management structures are all geared towards accountability

Although the programme had a learning objective, ultimately it did not receive the same attention as accountability. Why?

  1. The programme was under considerable time pressure to disburse grant funding, due to donor disbursement requirements as well as the need to avoid a gap in resources provided to Malawian civil society.
  2. Significant time was invested in scrutinising Tilitonse’s financial systems and holding grantees to account for project spending. This contributed to the programme’s focus on grant mechanisms, rather than building the capacity of grantees. This emphasis grew, in part, due to the ‘Cashgate’ scandal in Malawi, which broke in 2013, around the beginning of the programme.
  3. There were different levels of understanding of what ‘independence’ meant for the evaluation. We argued that independence did not mean we had to be completely separated from the Board, Tilitonse programme staff and grantees – however, there was a general feeling that a highly collaborative relationship with the evaluation would imply a conflict of interest.
  4. Our role in helping Tilitonse learn through the evaluation was ambiguous and perceived as overlapping with the programme’s role – a consequence of differing levels of ownership of the evaluation and a lack of clarity about what it was expected to achieve.

These factors combined to set the programme on an ‘accountability course’ rather than a ‘learning course’ from the outset. Over time, the accountability function gained greater emphasis, with a stronger focus within the Board on holding the evaluation to account for delivering results, rather than its role in helping the programme to learn.

Lesson 2: Learning products are less important than learning processes

The evaluation produced a number of products over four years, including 18 in-depth case studies of grantees’ projects, and several concise briefing papers summarising key insights and lessons. Not all of these products were presented in an engaging format, and many were fairly long and technical – not the best way to facilitate learning. However, we also found that even the short and snappy products had limited traction without processes for the programme to engage with them and consider how to put lessons into action.

For example, the evaluation’s main point of contact with the programme was through quarterly reporting to a sub-committee of the Board, but there were few entry points for learning in these engagements, for the reasons discussed above. The programme also did not want the evaluation to engage directly with grantees, due to understandable concerns that introducing more people and purposes could cause confusion and duplication, as well as the issues around independence already mentioned.

Finally, there was a lack of emphasis on learning within the programme’s own M&E systems, which focused on measuring grantee progress against results frameworks and holding them to account against fairly narrow, quantitative indicators. This limited the ability of both the programme and grantees to adapt to emerging evidence, including evidence from the evaluation.

Lesson 3: A strong focus on accountability limited opportunities for grassroots learning

The overarching driver for the evaluation came from donor and Board-level reporting needs. This led to the evaluation using in-depth case studies as a way to feed into higher-level results, rather than using these case studies to support lower-level, context-specific learning by grantees, resulting in a fairly top-down and extractive research approach. The case studies were tailored less towards understanding grantees’ projects on their own terms – providing a holistic picture of their successes and challenges, or answering their own questions – and more towards providing data that was comparable across projects to meet the accountability objective.

If the evaluation had worked with grantees more as their projects progressed, it might have been possible to facilitate learning from baseline case studies to help grantees strengthen programme design and delivery. However, our ability to play this type of role was limited by a number of things, including being deliberately held at arm’s length, and the perception that, if an evaluation is to be rigorous, evaluators shouldn’t ‘interfere’ but should instead avoid influencing the evaluation ‘subjects’.

Three final reflections on balancing accountability and learning…

  1. As with Tilitonse, in many evaluations, there is often a desire to have both accountability and learning. But our experience suggests that once an evaluation has boarded the ‘accountability train’, everything can easily become geared towards this – processes, incentives, time, resources, methodology – and it’s very hard to jump off once it has picked up speed!
  2. Evaluators need to focus on learning processes as well as products, to ensure products are actually used. These processes cannot be conjured out of thin air – they depend on culture, appetite and demand for learning. If evaluators are more aware of what degree of learning culture exists from the outset of an evaluation, they can hopefully find better strategies to support uptake.
  3. ‘Independence’ as a term hides many meanings and perceptions and isn’t always a helpful concept. The pressure for ‘rigour’ can lead to treating grantees as ‘subjects not to be interfered with’ which tends to favour accountability and limit opportunities for learning and meaningful engagement. We believe that, through careful evidence collection and transparency, evaluators can engage in learning without compromising what they bring to the party in terms of rigorous evidence.