This year, the United Kingdom Evaluation Society Conference looked at influencing impact through innovation and inclusion. The need to experiment with new methods and approaches is particularly pertinent for the evaluation of advocacy and policy change initiatives. For example, if we want to find out whether and how a particular campaign has contributed to a particular policy change, traditional impact evaluation methods such as randomised controlled trials may not provide the most satisfactory answers.
Evaluators have started experimenting with qualitative research methods from the social sciences to overcome this impasse, and one of these emergent methods is ‘process tracing’. Process tracing originated in the field of historical studies, and it can be likened to the work of a detective who traces the mechanisms that led to a specific event or outcome. It involves unpacking the causal mechanism linking cause A to outcome B: the investigator establishes a causal chain from A to B and tests the strength of the evidence at each step in the chain by applying a number of probability tests (‘straw in the wind’, ‘hoop’, ‘smoking gun’ and ‘doubly decisive’ tests) underpinned by Bayesian logic.
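The Bayesian logic behind these tests can be sketched as a simple belief update: each test is characterised by how likely the evidence is if the hypothesis is true versus if it is false, and passing or failing the test shifts our confidence accordingly. The probability values below are purely illustrative assumptions for the sake of the sketch, not figures from the practice paper:

```python
def update(prior, p_e_given_h, p_e_given_not_h, passed):
    """Bayes' rule: revise belief in hypothesis H after a test result.

    p_e_given_h     -- probability of observing the evidence if H is true
    p_e_given_not_h -- probability of observing it if H is false
    passed          -- whether the evidence was actually found
    """
    if passed:
        like_h, like_not_h = p_e_given_h, p_e_given_not_h
    else:  # evidence absent: use the complementary probabilities
        like_h, like_not_h = 1 - p_e_given_h, 1 - p_e_given_not_h
    numerator = prior * like_h
    return numerator / (numerator + (1 - prior) * like_not_h)

# Illustrative profiles (P(E|H), P(E|not H)) for the four classic tests:
TESTS = {
    "straw in the wind": (0.60, 0.40),  # weakly expected under H, common anyway
    "hoop":              (0.95, 0.50),  # near-certain if H holds, so failing it nearly sinks H
    "smoking gun":       (0.30, 0.02),  # rarely found, but almost impossible if H is false
    "doubly decisive":   (0.95, 0.02),  # both necessary and unique
}

prior = 0.5  # start agnostic about the hypothesis
for name, (p_h, p_not_h) in TESTS.items():
    print(f"{name}: pass -> {update(prior, p_h, p_not_h, True):.2f}, "
          f"fail -> {update(prior, p_h, p_not_h, False):.2f}")
```

The asymmetry the sketch makes visible is the point of the typology: passing a hoop test only mildly supports the hypothesis, but failing it is close to fatal, whereas a smoking gun strongly confirms when found yet says little when absent.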
A recently published practice paper on the potentials and pitfalls of process tracing by Mel Punton and Katharina Welle from Itad (published through the Centre for Development Impact) explains the methodological foundations of process tracing and draws on two case studies to show how it can be applied. The accompanying Annex describes the main steps in one version of process tracing, and provides some examples of how they can be applied in practice. Mel presented these findings at the UKES conference, and the presentation can be found below.
Why consider using process tracing?
We think process tracing holds potential as a rigorous ex-post approach to assessing causal change without having to rely on a control group. It may therefore interest evaluators looking for a rigorous method of establishing causal inference but facing the reality of limited time or resources, or an absence of detailed baseline data or counterfactual evidence. Its strength also lies in shedding light on how and why a particular intervention led to change – so it may be particularly useful for evaluating interventions where the pathways to change are uncertain and evaluators want to find out why a programme did or didn’t work.
However, there are also some important pitfalls: process tracing can be time intensive, and collecting sufficient evidence to conduct the probability tests requires considerable knowledge and understanding of the intervention. Another challenge is how to apply process tracing in situations where the outcome is not fully known. It may be possible to overcome some of these issues by combining aspects of process tracing with other theory-based approaches – for example, Barbara Befani and John Mayne discuss a combined application of process tracing and contribution analysis in a recent IDS Bulletin article.
In a nutshell, we think process tracing is a step on the way to improving the range of methods and techniques available to assess impact in international development, particularly in policy and advocacy initiatives. It would be great to hear your thoughts!
Katharina Welle, with Melanie Punton, May 2015