Learning from HIEP on Evaluation Methodology

Team Leader Teresa Hanley reflects on lessons learnt so far from Itad's evaluation of DFID's Humanitarian Innovation and Evidence Programme (HIEP).

1/06/2015

As team leader of the evaluation of DFID’s Humanitarian Innovation and Evidence Programme (HIEP), I am learning a lot about leading a theory-based evaluation of a complex programme over five years!

HIEP is a £48 million programme made up of 16 projects carried out with partners from the academic, humanitarian and policy arenas around the world. It aims to improve humanitarian outcomes by supporting the production of evidence and innovation and their increased use in humanitarian practice. The evaluation runs alongside the programme over five years, both to provide an independent assessment of progress and to feed learning back into the programme design. Itad brought together an evaluation team including humanitarian, evaluation, organisational change and academic experts. The formative phase report has just been released.

Five evaluation lessons to date:

1. Need for flexibility – Given the inherently unstable nature of humanitarian contexts, the DFID programme has to be flexible, and therefore so does the evaluation. For example, project focus countries have changed because of the security situation, and the Ebola crisis has resulted in new project directions.

2. Patterns over aggregation – The variety of projects and contexts makes aggregation of evaluation findings a challenge. Instead, the team has found that a focus on patterns, themes and interesting outliers is a more relevant way to identify key learning.

3. Ownership of the theory of change – The team’s first task was to work with DFID to establish a theory of change for the programme. There’s a risk that the theory of change comes to be seen as belonging to the evaluation team rather than to the programme. Both DFID and the evaluation team have to work hard to ensure it is promoted separately from the evaluation and is owned by DFID.

4. Theory-based or evaluation questions? – A theory-based evaluation which, like this one, also seeks to answer evaluation questions around effectiveness, efficiency, relevance and impact needs a methodology that aligns these two ways of assessing the programme. The evaluation matrix is a good vehicle for this: we adapted the links and assumptions from the theory of change, as well as its outputs and outcomes, to form the criteria against which we judge the effectiveness, efficiency, relevance and impact questions. However, we are also committed to revising the theory if practice shows it can be improved. The theory is not only the basis for testing the programme; it is a two-way process in which the programme’s experience is used to improve the theory of how change occurs – a particularly challenging area in relation to research and humanitarian action. Which brings us back to lesson one: the need for flexibility in the evaluation.

5. Team time – Often underestimated and squeezed by budget constraints, time for the team to meet is really important. It allows us to share and discuss findings from team members who are following different case studies and themes. It’s also important for team building. Over the first two years there have been three babies, one wedding and two team members moving to live in new countries, but the team is still together.