The Rigour Arguments in Impact Evaluations

28/10/2015

Abdulkareem Lawal, Principal Consultant, shares his reflections from attending an Impact Evaluation course at the Institute of Development Studies in April 2015.

As a doctoral student some ten years ago, I struggled with my professors over what constituted rigour as I prepared for my dissertation fieldwork. My research examined the social and economic impacts, and the gender dynamics, of the adoption of improved breeds of chicken among fisherfolk in Nigeria. I had proposed using mainly qualitative research methods. My professors argued that such methods lacked rigour and could not compare with Randomised Control Trials (RCTs) and other quantitative approaches. I countered that the rigour of the methods I wanted to adopt lay in the in-depth, stakeholder-engaging, and participatory nature of qualitative assessment. I eventually adopted a mixed-methods approach, quantifying qualitative variables and incorporating them into econometric models.
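As a rough sketch of what that quantification can look like in practice (my own illustration here, not my original dissertation code; the data and variable names are invented), ordinal and categorical responses from interviews can be coded numerically and entered into a regression:

```python
# Minimal sketch: turning qualitative survey responses into variables
# for an econometric model. All data and names are invented for
# illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "income_change": [120, 80, 150, 60, 200, 90, 170, 110],
    # Ordinal self-reported adoption intensity, coded 1-3.
    "adoption": pd.Categorical(
        ["low", "high", "high", "low", "high", "medium", "medium", "low"],
        categories=["low", "medium", "high"], ordered=True,
    ).codes + 1,
    # Binary indicator derived from interview transcripts.
    "female_headed": [1, 0, 1, 1, 0, 0, 1, 0],
})

# OLS with the quantified qualitative variables as regressors.
model = smf.ols("income_change ~ adoption + female_headed", data=df).fit()
print(model.params)
```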

As an evaluation professional, I have also followed the arguments around conceptual and practical rigour, as well as external validity, in impact evaluations. In April 2015, I attended a short course on impact evaluation at the Institute of Development Studies (IDS) in Brighton. Adopting a hands-on and experiential approach, and using real projects for the practice sessions, the course focussed on the design of impact evaluations, including tools and methods.

There was the usual argument that the impact of an intervention is the comparison of outcomes with and without the intervention, and that there is no impact evaluation without a control group. From the proponents of quantitative methods came the argument that such methods are scientific, precise and rigorous compared with qualitative methods. We had “debates” about rigour, exploring its definition and then applying it to quantitative and qualitative methods. Rigour is a means to demonstrate plausibility, credibility and integrity.
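In the potential-outcomes notation often used for this with/without comparison (my own gloss here, not taken from the course materials), the quantity being sought can be written as:

```latex
% Average treatment effect (ATE): the expected difference between each
% unit's outcome with the intervention, Y_i(1), and without it, Y_i(0).
\[
\text{ATE} = \mathbb{E}\left[\, Y_i(1) - Y_i(0) \,\right]
\]
% Only one of the two potential outcomes is ever observed for any unit,
% which is why a control group is needed to stand in for the missing
% counterfactual term.
```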

From the quantitative camp came the argument that the process of sampling and randomisation takes care of precision, which in turn allows the demonstration of plausibility, credibility and integrity. The qualitative camp opined that qualitative methods can ensure rigour in several ways:

  • Rigour in procedure: being flexible and ensuring triangulation of information, which reduces bias and misinterpretation through the review of contradictory evidence.
  • Rigour in analysis: allowing participants to review findings, that is, respondent validation.
  • Rigour in documentation: maintaining a process trail sufficient for another researcher to follow and reach the same conclusions, and being transparent and explicit.

My reflections…

Conceptual clarity is important – be sure of what you want to do. Designing and implementing an impact evaluation should begin with a plan that clarifies the intended purposes and users, including the key evaluation questions it is intended to answer. The plan should also address the six components of an impact evaluation: clarifying values, developing a theory of change, measuring or describing important variables, explaining what has produced the impacts, synthesising evidence, and reporting and supporting use. The evaluation plan should then consider the implications of using different methods. In my mind, the central contribution that any method will make to establishing the causal link between an activity and the change we want to see should be highlighted. Everyone should care about the question of ‘attribution’ and seek to answer two questions: “Will this method highlight the change being sought?” and “How will this method benefit from other, complementary methods?”

There is no supremacy of methods – the course provided deep insights into the fact that one method may not answer all the research or evaluation questions. As I mentioned earlier, there have been continuing arguments that RCTs and other, more quantitative methods represent the “gold standard” in impact evaluation. Certainly there is a need for experts in various fields, but individuals in each field need to develop their understanding of the others as well. Thus the proponents of gold standards should listen to others. This will, of course, require more interdisciplinary activity, as well as humility, empathy, and the willingness to sometimes have our perspectives changed as a result.

Contextual considerations are critical – there will be situations in which a particular method will not work; we can’t force it! The more quantitative approaches to evaluation are often ‘data-hungry’ and often require large sample sizes, and are therefore expensive to design and implement. In addition, methods like RCTs are difficult to design and implement because of practical problems such as data availability, the non-availability of a control group, and the fact that the data for the project and control groups have to be comparable. Quantitative methods also depend heavily on variables that can be “observed” or “counted”. However, not all variables lend themselves to being easily observed or counted; for example, political affiliation or individual and community motivation. When these types of variables are strong determinants of project outcomes, it may be difficult to use quantitative methods in a strict sense.
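To put a rough number on the ‘data-hungry’ point, here is a minimal sketch (my own illustration, not from the course; the effect size, significance level and power are conventional textbook assumptions) of how many participants a simple two-arm trial needs to detect a small effect:

```python
# Minimal sketch: sample size needed per arm for a two-group comparison.
# All inputs are illustrative assumptions, not figures from the course.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.2,  # a "small" standardised effect (Cohen's d)
    alpha=0.05,       # conventional significance level
    power=0.8,        # conventional statistical power
)
print(f"Participants needed per arm: {n_per_arm:.0f}")  # roughly 394
```

Even under these textbook assumptions, detecting a small effect calls for several hundred participants per arm, before any allowance for attrition or clustering.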

Mixed methods are the way to go – my previous experience of delivering social development programmes has helped me to make better judgements about results. After delivering a replicable and scalable behaviour-change HIV programme at DFID, for example, I have come to realise the efficacy of using mixed methods to generate evidence, and to support others in using it effectively. Indeed, the OECD says, ‘impact evaluation is designed to evaluate the positive and negative, intended and unintended, direct and indirect, primary and secondary effects produced by an intervention’. It is doubtful whether a single evaluation method can fully capture the complexities presented in that statement, which are precisely what a programme operating in the real world encounters. In my view, the use of different methods can only broaden and deepen our understanding of the evidence provided by an evaluation, while also extending the comprehensiveness of the findings.

The big question in impact evaluation is what would have happened without the intervention, that is, the counterfactual. Answering it requires building a control group of people not affected by the intervention. Quantitative methods lend themselves to randomisation and the construction of control groups. However, qualitative methods do offer impact evaluation some very important insights: understanding change in context; change as seen from the perspectives of people living in poverty; a wider lens embracing negative and unintended as well as intended experiences of interventions and change; and multiple realities. Qualitative methods also explore motivations, choices, aspirations, and frustrations.
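As a minimal sketch of the counterfactual logic (my own illustration, with simulated numbers that come from no real evaluation), random assignment lets a simple difference in means between treatment and control groups estimate the impact:

```python
# Minimal sketch: estimating an average treatment effect by comparing
# a randomised treatment group with a control group. All numbers are
# simulated for illustration; nothing here comes from a real evaluation.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 1000

# Random assignment: half the sample receives the intervention.
treated = rng.permutation(np.repeat([0, 1], n // 2))

# Simulated outcome: a baseline level, a true effect of 2.0 for the
# treated, and individual noise.
outcome = 10.0 + 2.0 * treated + rng.normal(0, 3, size=n)

# The difference in means between the two groups estimates the impact.
ate_estimate = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"Estimated impact: {ate_estimate:.2f} (true effect: 2.00)")
```

Because assignment is random, the two groups are comparable on average, so the control group’s mean outcome stands in for the unobservable ‘without intervention’ outcome of the treated.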

Abdulkareem Lawal, October 2015