

Expect the unexpected: unanticipated consequences and development programming

You’ll be familiar with the basic logic of development programmes: there’s a serious problem to address; we have designed a solution, we can implement it, and the situation will improve in the following ways...


Naturally, we tend to be more certain about what we will do, but a good programme will think hard about the immediate benefits and how it can best ensure these are achieved and sustained. The intentions of the project become the indicators of success, which can be measured and evaluated.

What happens when a programme leads to something the implementers didn’t expect?

A mismatch between intentions and reactions

At the 8th African Evaluation Association conference in Kampala, we heard a story about unintended consequences from a team who spoke to communities about how they dealt with local hazards. The team arrived at these conversations assuming that any increase in income would enable people to take measures to avoid, respond to, or cushion the impact of a drought. While this turned out to be a correct understanding of one aspect of household resilience, during their discussions with women the team also found something else: that some men got drunk on the extra money and some men came home after drinking and beat their wives.

In this case, researchers were on a scoping study and so were able to factor the safety implications into their thinking. However, a few years ago I was part of an evaluation of a programme that already knew about male capture of household income and, based on a commissioned assessment of gender livelihoods in Africa, decided to support products that were locally perceived to be under the control of women. The female craft-makers confirmed that they took their wares to market but mentioned that their husbands became interested in the income when they saw how profitable the products had become. Elsewhere in the programme, there were cases of domestic violence associated with the introduction of the idea that women could control the purse strings.

Not all unintended consequences are as alarming or as obvious as this. There is a growing understanding, however, that the effects programmes have are much more nuanced than our ways of explaining them, and that many unanticipated reactions to an intervention quietly influence what changes the programme creates in the situation. This is especially relevant to communications interventions, where it is known that people interpret messages based on their own experience and values. Some people will eat five-a-day when prompted to; others will eat fewer to ensure their children get more; and some may feel they’ve missed the mark for years and visit a doctor. No one is a clean slate waiting to be given a set of instructions.

In each of these cases, context is key: each person’s reaction determined what happened after the message was received and the wider situation influenced whether the outcome was positive, negative, or somewhere in between. Take those who visited the doctor: if the health system is stretched then you may have unintentionally reduced the time medical staff spend on more serious issues, but if health centre attendance is low this could be the thing that connects people to a raft of wider support. You’re unlikely to find out these things if you really want to tell the world how many vegetables people ate yesterday.

The effect isn’t limited to communications. We have similar reactions to technologies, markets, food, art, organisations – almost everything we engage with. A drug may come closest to directly delivering its intended impact, but people still have to choose to take it. And, when put like that, it becomes clear how much messaging surrounds physical interventions, whether they’re delivered through farmer-schools, nurses, or the routine interactions that programme staff have with the people with whom they work.

Thinking differently about interventions

Of course, as with the female craft-makers, programmes and evaluations do draw upon lessons from research and institutional history in their design and decisions. But a lot of this information is a snapshot taken or applied once, may stay at a level of abstraction above the programme’s specific context, or just arrive too late to be applied. Unanticipated reactions are nearly always anecdotal and stumbled upon after the fact.

So what can we do? Three thoughts struck me during the conference:

  1. Gaining a deeper understanding of the programme in its context – not at the level of ‘we’re working in a patriarchal society’ but separating out and digging into the many ways people react to different aspects of the programme. Men and women experience things differently, but within each of these monolithic groups there is far more nuance than is usually applied to the design of a programme.
  2. Being open to hearing different and unexpected forks in the programme logic – A may lead to B, Z, D, and F, and finding this out requires listening to the people who interact most closely with the intervention. With a more detailed map of the programme-context interactions, you can design or adjust programmes down the paths that offer the greatest benefits. This requires a longer view, as some of these forks might not be reached until after the programme departs.
  3. Establishing a closer connection between learning and programme delivery – both spatially and temporally, so a programme can make adjustments in response to the things they see. It will not be possible to plan for every reaction before implementation, but a programme should be able to see, reflect, and respond before the next project cycle, when the context may have changed again.

There is an array of approaches and tools that can help deliver this way of thinking. Adaptive management is an attempt to bring more relevant information into programming decisions more quickly. Developmental evaluation, outcome harvesting, realist evaluation, and a range of other methodologies place greater emphasis on finding out what actually happens, rather than narrowing down to find and calculate the outcomes the programme said it would achieve at the start. From there you can try to understand why it happened and what can be replicated or improved. You may want to understand your impact by comparing it with what would have happened had the programme not been there, but given that the project is there, isn’t it valuable to understand what it does?

This could lead to development programming that is more in tune and in time with the situations of which it is often an integral part. It matters for the ‘Do No Harm’ principle, for the commitment to ‘Leave No One Behind’, and, responding to the major question of the AfrEA conference, it is a significant way that monitoring and evaluation can help ensure the Sustainable Development Goals are more successful than their predecessors. If these approaches can be combined with a commitment from donors and implementers to value and apply learning as much as accounting, then we may move away from hearing that we missed B to knowing that we successfully steered ourselves to Z, D, and F. These may not be what we started out to achieve – they may be even better.