
Learning at the speed of trust: building monitoring, evaluation and learning capacity

Learnings from an innovative programme providing flexible, tailored support that met six organisations ‘where they were’ to build MEL capacity

In 2020 we undertook an assessment of the Bill & Melinda Gates Foundation’s (BMGF) Global Fund advocacy portfolio – a group of organisations working to maximise the contribution of government donors and support resource mobilisation for the Fund.

Our goal was to equip six organisations with the practices and systems to evaluate their work. At the beginning of this programme, we published ‘From compliance to love with MEL’, focusing on how to cross the divide from MEL being a chore to MEL being valued as a way to improve and strengthen advocacy work.

Now, the project team reflect on the experience and their five key insights from the journey.

What drew you to get involved with this project in the first place?

Isabel Vogel, Independent Consultant: I have a background in catalysing MEL capacities in organisations of different sizes. The diversity of the advocacy organisations supported by BMGF was appealing – how could we design a capacity strengthening process that had a consistent approach but was sufficiently tailored to the needs of both the smaller and larger organisations in the group?

And how could we make MEL a useful process for busy advocates responding on the fly to events, rather than a burdensome, bureaucratic exercise that took precious time away from their advocacy?

Rhonda Schlangen, Independent Consultant:  As an internal research and evaluation lead at an INGO and then a consultant focusing on advocacy evaluation, I joined this project convinced that conventional MEL approaches often do not serve advocates. For example, advocacy needs to be highly adaptive, so evaluating an advocacy effort based on its fidelity to outdated plans misses the mark. So, I was most interested in supporting these six organisations to develop MEL that really aligned with the nature of advocacy and which they could readily apply to strengthen their advocacy work on an ongoing basis.

What were your expectations heading in?

Laura Hopkins, Principal Consultant: I knew we had a great team, with lots of relevant experience in advocacy, MEL for advocacy, capacity building, and general life! I remember being very encouraged in our early meetings when the framework for thinking about MEL capacity emerged – it felt thoughtful and nuanced and like we were on to a good thing.

There was some nervousness that the organisations would think they were getting ‘remedial classes’ in MEL based on the preceding evaluation findings, so we had our eyes open to that. I also think I probably naively expected everyone would be as excited as I was about the possibilities that MEL can provide – but the reality of an advocate’s day-to-day is intense and pressured and there were understandably some questions thrown back at us to prove ourselves. I think we stepped up to that challenge!

Stefanie Wallach, Associate Partner: In addition to Laura’s point about us coming in to provide ‘remedial MEL classes’, I also expected that there would be some degree of suspicion from the advocacy organisations that we had been brought in to hold them accountable to their MEL commitments, rather than to work with them in a supportive, tailored way. I also expected that there would be some standard tools that could work across organisations regardless of their structure, size and ways of working.

How did that play out in practice?

R: Going into the project we all had different ideas about what MEL capacity looks like and advocacy evaluation is itself an evolving practice. The bespoke approach was essential as each of the organisations differed in size, structure, scope, and advocacy approaches. So, the Adaptive MLE Capacities Framework (yes, we struggled with that name!) we developed was both an internal tool for the team as well as a tool to guide our work with the grantees to identify their current practices and their priorities for strengthening their MEL systems and cultures.

L: Our team were really good at coming together to share learning across the portfolio of organisations we were working with. It helped to continually calibrate expectations and identify opportunities to make positive gains and call them out. These might not have been the gains that we had initially expected, but it was important to recognise where progress was being made and what it looked like.

S: We were able to clarify fairly early on – with the support of BMGF – that our role was not to provide remedial classes or to play an accountability function. What I think we underestimated across the team was the importance of spending the time upfront to deeply understand the organisations’ culture as a foundation to their MEL appetite.

I: It really helped that our shared professional intuition was that training in technical skills was not the right approach, or at least, not right at the start (it did come in later). Some of the organisations we were working with had a staff of only two or three people, so unless they could use MEL approaches in their day-to-day work it just wasn’t going to work.

So, we came up with a framework that placed MEL within an organisational context, as an organisational development process that didn’t necessarily need technical skills. As a team we also knew that the organisations needed to see the direct value of MEL in their work, so we developed a mapping process as our first intervention.

This was a supported process, where the teams used our organisational MEL framework to analyse where they were today and where they wanted to get to. This helped to build trust and buy-in, and the relationships that were key to the success of the project.

As we progressed, both the team and the partners learned so much about how MEL is a mindset and a culture, not just a set of technical skills in data analysis.

This second learning brief builds on the first, delivering five key insights that you identified as underpinning this kind of work. You write that progress in building MEL capacity can be seen when organisations curb their expectations of what MEL can deliver for them.

Can you expand on that a little?

L: I suppose this is what I was talking about earlier: recognising what progress looked like. We had a working hypothesis throughout that we might see organisations lower their self-assessment scores on their MEL capacities as they learnt more about what good MEL looks like. This is the classic ‘MEL for reporting or showcasing’ vs ‘MEL for learning’ question. It was gratifying to see this come to life in situations where organisations would have an ‘a-ha’ moment and re-calibrate their perspective on the kinds of data they were collecting.

I: MEL is nuanced. It’s not going to help you to ‘prove’ your impact but it should help you to reflect more about why you are doing what you are doing: is it as effective as it could be, or are you just doing the activities you’ve always done?

What results did you get last time, and what can you learn from that to adapt and optimise your strategies this time?

Most importantly for advocacy organisations, our approach to MEL helped them start to recognise and capture all the small but vital accomplishments along the way to a larger impact, as well as the setbacks and adaptations. This meant reflecting on the ‘small but mighty’ outcomes on the advocacy journey and each organisation’s unique contribution to the advocacy effort, not just the final – hard to measure – impact.

R: I think one of our learnings was that we also had to support partners in unlearning negative past MEL experiences. Some thought of it as another funder demand, and it was difficult for some to overcome the sense that they’d been tapped for the project due to some deficit. Others had had bad experiences with evaluations that didn’t tell them anything they didn’t already know or, worse, felt like audits.

So, we needed to model something that respected the agency of the advocates and demonstrated usefulness to them. This is another moment where trust was really important.

In the learning brief, you say ‘attention to funder reporting… crowds out space for critical reflection’. What reaction did funders have to this finding?

R:  This was one of the most exciting parts of the project for me because the foundation really responded well when we started to raise these issues with them. I think there are two main issues with the reporting-critical reflection connection.

First, there is a purely pragmatic concern: grantees have limited time and resources to gather information and use it. Funder reporting is tied to resources, so it’s a priority to channel information into reporting.

Second, critical reflection is a learning process. It involves examining what didn’t work, missed opportunities, what could have gone better and why. All the things you want to know to build your advocacy muscle. But these aren’t necessarily the things you want to share with a funder who might decide you’re not worthy of more resources. If a grantee feels there’s high risk and low reward for reflection and learning, it gets deprioritised.

The funder’s reaction was exciting for me because they really seemed to grapple with the challenge that they needed to do more than invite reflection on learning in the grant reports.

For example, they expanded attention to learning in their conversations with grantees and really tried to understand from us and from their grantee partners where there were challenges. I think we all recognised that supporting a learning culture is a long-term prospect and involves effort on the part of funders to address power imbalances that disincentivize grantees from sharing what they’re learning.

I: I think we were lucky to have a funder that was open to self-reflection; the team at BMGF recognised the tension between asking for reporting on achievements and critical reflection. Reports are not the right tool for eliciting reflection, and so you need to think about creating facilitated spaces for colleagues, partners and funders to come together to share experiences and reflections, alongside formal reporting.

Our takeaway is that a lot depends on the culture that funders create. In a field as dynamic as advocacy, where so much is out of the control of advocacy organisations themselves, reflecting on setbacks and challenges should not be avoided as ‘failures’ but embraced as part of learning.

The concept of learning at the speed of trust is something everyone can relate to. In this circumstance you have the role of the funder, the organisation and the external MEL provider – that’s a lot of trust-building!

What do you think helped create that trustful environment?

L: Honestly, a lot of dialogue. We knew it would be important to have face-time with the organisations, and that the early stages would set the scene for the ongoing work. We scheduled six-monthly face-to-face check-ins to reflect on progress and, in between, held an ongoing remote dialogue with our counterparts. We worked from the beginning to create a sense of ownership within the partner organisations, recognising that it was important to accompany the change rather than impose it. Important lessons from Isabel’s previous work!

I: It wasn’t always easy; there were a lot of crossed wires, as we didn’t speak the same professional language all the time and had to translate MEL concepts into non-technical language. There was also an element of building confidence in our counterparts that they could make use of these approaches without having to be a technical person. Overall, it felt like a proper co-creation process; this was learning-by-doing for both parties and definitely a process of accompanying change.

S: We also had regular calls with the BMGF team, and building trust with them was just as important as building trust with the advocacy organisations. I am grateful that we built a relationship with them that allowed open conversations, the space to constructively challenge each other, and honest recognition of the journey that we were on as the MLE partner and that they were on as the funder.

It was important for us to be able to say we didn’t always have the ‘right’ or ‘perfect’ solution but had an openness to trying out new approaches (backed by evidence and experience) and the flexibility to change course if needed.

R: I think giving attention to the power dynamic between the funder, us as consultants, and the partners as grantees was critical to building trust. Evaluation is interpreted as a judgemental activity so even when someone says “we want to support what’s useful and meaningful to you and your funder is paying for it”, it might feel more like some kind of MEL hustle than authentic support. The BMGF program officers were fantastic at reinforcing this more positive message and it meant so much more coming from the funder.

What struck you most when reflecting on progress months after the capacity building intervention had ended?

I: I was amazed and delighted when the organisations told us about some of the things that they were integrating as regular practices and that they felt confident enough to adapt and try some other things out that we had explored together.

What struck me most was this idea of MEL as a mindset and a culture – it felt like we had catalysed some cautious cheerleaders for MEL who felt confident enough to put in place ways of working that will help improve the effectiveness of their organisations and take them onto the next stage of their MEL journey.

R:  Yes, as Isabel described, it was so powerful to see how some of the groups were using the MEL foundation they’d built as a springboard. I can’t emphasize enough how much I learned about what it meant for us to be truly designing MEL in support of advocacy organisations.

The most gratifying development was to see how some of the groups were using this foundation to take MEL in directions we hadn’t envisioned.

Finally, what would be your one piece of advice to anyone thinking of commissioning an external MEL provider to build organisational capacity in this space?

L: Consider the contracting dynamics and the role they play. For example, as a team, we often wondered what the outcome might have been if the advocacy organisations had contracted us directly. On the one hand, they might have been more engaged if it had been ‘their money’ on the table. On the other hand, the direct relationship we had with their funder allowed us to raise questions around the issue of reporting formats and the bearing these had on MEL.

I: Once we had established trust with the organisations, we could act as a bit of a broker in both directions. We could also help take a collective overview: MEL is not something that rests within one unit of one organisation but is a collective endeavour, involving leadership and a supportive culture at different levels across a group of organisations and their funders.

R: I absolutely agree; the most important piece of advice I’d give is to start by thinking about power and agency, and to do as much as possible to acknowledge and neutralize the power imbalance between the funder, consultants, and MEL users. It is also important to sequence priorities over time, as we were able to do in this project, so there is support for both the MEL design and then its use.

Any final thoughts before we leave?

L: This piece of work was all about learning together, and I think it’s a testament to the commitment and openness of all three parties in the scenario that this has proved to be the case. I know I learnt an awful lot, and the reflections from BMGF and advocates indicate they have too. For me, that’s the best we can ask for.

Read the full learning brief ‘Learning at the speed of trust’ here.
