
Lessons for evaluating complex and adaptive programmes

Drawing on our experience of evaluating a six-year, girl-centred sexual health programme, the A360 evaluation team shares key learnings and tips for others in the sector evaluating complex adaptive interventions.

About Adolescents 360

Adolescents 360 (A360) was a four-year, US$31 million initiative to increase adolescent girls’ access to and demand for modern contraception in Nigeria, Ethiopia and Tanzania. The programme drew on six distinct disciplines, including human-centred design (HCD) and meaningful youth engagement, to develop novel, user-centred interventions to reach adolescent girls. It was led by Population Services International (PSI) and co-funded by the Bill & Melinda Gates Foundation and the Children’s Investment Fund Foundation.

From 2016 to 2022, we collaborated with the London School of Hygiene & Tropical Medicine (LSHTM) and Avenir Health to lead a process evaluation, an outcome evaluation, and a cost-effectiveness analysis of the programme. Our findings are complex, presenting a mixed picture of the programme’s overall success. Read our summative report, and our reports on each of these three components.

Image: In Nigeria, three girls laugh together in front of a billboard that says ‘Girls’. © Population Services International

What worked well within the A360 evaluation?

Close engagement with implementers and donors to build a strong and trusting relationship:
This happened through annual process evaluation activities as well as participation in important project moments such as A360’s annual reviews. It was important to listen more, and listen more often, throughout this evaluation. This enabled us to stay on top of programme adaptations, understand learning needs, and keep PSI and donors abreast of our ongoing evaluation findings. While these relationships were crucial to the success of the evaluation, we constantly underestimated the level of time and effort required to stay ‘in the room’ as evaluators.

Providing ‘real-time’ findings to programme implementers to inform adaptations and programme delivery:
These practical insights were hugely valued by PSI. It was crucial to have findings ready at the right moments so that the programme team could implement evidence-informed decisions and adaptations, and to build these timelines into our evaluation planning – not always easy given the fast pace of programme evolution compared to the often slower pace of evaluation research and analysis.

Developing ‘user journeys’ to structure the process evaluation and describe the A360 programme in an intuitive and practical way:
In 2019, the process evaluation team worked in collaboration with PSI to design ‘user journey’ models, depicting how girls were intended to experience A360. This approach built on ‘journey maps’ from health research – a systematic approach to documenting service-user touchpoints with an intervention, capturing both the physical and emotional journey of the user, including behaviour, feelings, motivations and attitudes. The user journeys helped ensure that we kept girls’ experiences at the centre of our work, and provided a visual tool to help explain and articulate the interventions and our evaluation findings.

What were the greatest challenges we faced?

Building the aircraft as we flew it: Our first challenge was designing an outcome evaluation based on interventions which were constantly evolving through the HCD process. This required us to essentially ‘build the aircraft as we flew it.’ We had to be agile and creative, and rely on strong communication and a good dose of patience and fortitude, to develop multiple design models and iterations before we could reach consensus on a design with the two funders and PSI. We had to make a number of assumptions around the likely intervention design, its location and timing, and to monitor its continued integrity throughout the evaluation period.

Following a fast-paced, highly iterative HCD process: It was important for us to be ‘in the room’ for the design process, so the process evaluation could document how and why the programme evolved. However, the design process was fast moving, meaning we missed some key moments, and decisions around the design were not always documented in a way that was easy for us to use. This left gaps in our evidence about the design period and about how and why some decisions were made – gaps that were sometimes impossible to fill retrospectively.

Building the relationships and systems needed for our findings to be useful: Starting up a demanding, complex programme like A360 is hard work – it was fast moving and demanded a lot from the implementation teams. The process evaluation was intended to provide timely insights to support course correction. However, in the early years, it was hard for the evaluation team to engage with implementers, and hard for them to pause and absorb the findings. As time went on, we strengthened the relationship with PSI and understood more about what was most useful to them. We also integrated participatory research activities that provided answers to ‘real-time’ questions, including service provider attitudes in Nigeria and the factors affecting whether girls continued or discontinued their use of contraceptive methods, and we adapted how we engaged with the PSI country teams to co-create learning based on the findings. By the time the evaluation came to a close, we had developed a close and trusting relationship with PSI, which paid dividends when having honest conversations about evaluation findings that challenged the programme theory or the status quo.

Five top tips for others looking to evaluate complex adaptive interventions like A360

  1. Consider a phased evaluation approach to ensure the evaluation design is the best fit. Our evaluation was designed at the same time the HCD process was taking place for A360. This meant that no one knew what the final target geographies, populations or intended outcomes would be when the evaluation was being designed. Our advice for future evaluations is to wait until the final design of the programme is agreed before moving ahead with the evaluation design – especially for impact evaluations – in order to ensure the best fit. However, the advantages of a phased approach such as this would need to be balanced against the disadvantages of lengthening the time between implementation and evaluation – this issue is explored further here.
  2. Build a shared understanding with implementers of the time required for them to engage with the evaluation, and work with the evaluation team to ensure findings are timed to feed into key decisions.
    Everyone wants to learn and recognizes the importance of this, but it can be hard to set aside time for learning. We found that implementers engaged with us more as time went on – perhaps because the time required for programme delivery stabilised, but also because the evaluation findings were adding value, which meant country teams felt engagement was worth their time. In addition, learning and adaptation were embedded in the programme model for A360, for example within annual meetings and through the adaptation process. As a result, learning became part of the ‘programme culture’ for A360 and provided us with vital opportunities to share learning.
  3. Consider expanding the scope of country-based partners.
    Throughout the evaluation, we maintained strong working relationships with our partners in Nigeria, Tanzania and Ethiopia, who led the majority of data collection and contributed to some of the final analysis. There was scope for more direct involvement from country-based partners beyond data collection – for example, in the analysis and interpretation of findings, and in the presentation and ownership of the final evaluation products.
  4. Don’t underestimate the work required to synthesize findings across multiple evaluation workstreams over multiple years.
    We pulled together three evaluation workstreams over more than five years. The time and team effort required to do this was significant. It was also complicated by the final products being staggered due to Covid-19 delays – originally, the process evaluation, cost-effectiveness analysis and outcome evaluation would all have finished at the same time, which would have enabled a more joined-up approach across the team to review and synthesize results.
  5. Ensure adequate budget to support sector-wide knowledge sharing. More resources for external dissemination would have allowed greater learning from A360 for the wider adolescent sexual and reproductive health (ASRH) sector. Throughout the course of the A360 evaluation, we produced a series of blogs, infographics, reports and webinars to share interim findings. We also had many meaningful opportunities to share our findings with A360 implementing partners. However, our resources didn’t allow us to share our final findings widely with the ASRH sector, particularly at the country and regional level. In light of this, one thing we would do differently is to allocate and ring-fence more budget for dissemination.