This was a valuable opportunity to reflect on the state of development evaluation and to discuss key areas for innovation and learning in the research and practice of development evaluation. Participants came from academia, donor evaluation commissioners and practitioners, which meant a rich set of views was expressed. Bob Picciotto opened the event with a keynote proposing that the methodological war in evaluation is over; subsequent discussions contended that this is not the case. Patricia Rogers gave the second day’s keynote, outlining a comprehensive agenda for research on evaluation. Her 25-point agenda of burning research questions to support stronger evaluation was neatly structured around the BetterEvaluation framework. A key question was “Why does so much development impact evaluation fail to be informed by what has been learnt about effective evaluation?” Much of the discussion over the two days, including John Mayne’s presentation of his contribution analysis approach, centred on the related issues of causality, contribution and attribution, including how systems approaches and complexity thinking provide ways to frame causality in non-linear contexts.
Recent debates, together with recent and forthcoming events [here and here respectively], show that there is still much mileage in the methodological debate on standards of proof and attribution. Much of this polarised exchange takes place between camps located towards opposite ends of the methodological spectrum. CDI has much to offer in developing an agenda of innovative approaches that combine the best of the various schools.