Reflections from the 2019 UK Evaluation Society Conference
Our takeaways from the 2019 UK Evaluation Society conference reflect the event’s theme, ‘Evaluation: A Diverse Field’. In this year’s blog, Talar Bogosyan explores how evaluation can keep pace in our digital world, Jessica Rust-Smith reflects on the importance of accessible evaluation outputs, and Elisa Sandri asks why the field is only now beginning to think about ‘complexity’. Read on to find out more.
Digital technology for discursive data in evaluation, Talar Bogosyan
Kelsey Beninger and Mary Suffield from Kantar Public presented on digital technologies for data collection, including the use of WhatsApp and chatbots in evaluation. Some of the oft-repeated challenges in M&E – the long report, measuring behaviour change, adaptability for learning – could perhaps find some resolution in these technologies. Beninger and Suffield suggest that these methods offer instant insight, up-to-date and actionable data, and a structured process for reviewing behaviour change and learning. ‘Conversational commerce’, a term coined to describe interaction with businesses through chat apps, has already become an indispensable component of e-commerce platforms, increasing the quality and duration of engagement and enhancing user experience. Why not share in these benefits for MEL?
These methods seem especially useful for our work measuring the impact of digital financial services on business growth and employment. There is potential to integrate evaluation into chat apps and superplatforms – like Facebook and WhatsApp – that operate across sectors, including financial services, and offer direct access to a relevant audience on a medium they use daily.
Of course, these methods are not without their limits, not least the question of whom they exclude. Even so, as society becomes increasingly digitally focused and literate, evaluation shouldn’t be left out of this shift.
The importance of accessible and timely evaluation, Jessie Rust-Smith
I attended the last talk on the last day of the conference, ‘Evaluation in international development: towards a new narrative’ by Bridget Dillon, DFID’s Head of Profession for Evaluation. She spoke passionately and candidly about the state of evaluation in this field, and at DFID specifically. I found myself nodding vigorously in agreement when she described the current narrative on evaluation: it takes too long to develop evidence, and there is not enough focus on the use of evidence. I almost stood up and cheered when, in calling for a new narrative, she lamented the evaluation ‘tomes’ we currently produce, saying we need a change in format, moving to videos, slide decks and so on. She gave me pause, though, when she called for both a faster turnaround in producing evidence and greater rigour in evaluation methodology. I wasn’t the only one: an audience member challenged her on this point, and she responded that DFID was aware of the tension. The suggested solution was to strengthen evaluation Terms of Reference to make sure the client (e.g. DFID) knows what it wants, an answer that gave me only partial satisfaction.
In all, it was a lively and invigorating discussion, it was enlightening to learn about DFID’s plans for evaluation going forward, and it was an excellent end to the conference.
Systems thinking – a little help from other disciplines? Elisa Sandri
One of the main themes discussed at this year’s UKES conference was the use of systems thinking in the evaluation of complex interventions. The central question behind systems thinking is: how can you accurately measure the impact of an intervention while taking into account the complexity of the context in which it takes place? In his keynote speech, Dr Matt Egan, from the London School of Hygiene and Tropical Medicine, showed that, despite ‘systems thinking’ receiving increasing attention in the evaluation world, there is still no shared, practical understanding of what the term really means. There are also considerable challenges in deciding where to draw the boundaries of a system.
A key takeaway for me was that the idea of ‘complexity’ is relatively new in the evaluation field, while other disciplines, such as philosophy, anthropology and maths, have been grappling with it for some time. Aristotle, in his endeavour to understand complexity, devised a system for structuring the world based on oppositions. Arguably, the foundations of anthropology rest on understanding complexity, whether embedded within individual societies or in the networks between different societies. As I have a background in anthropology, complexity is not a new concept to me, but during the presentation I was left wondering why the idea is only now emerging in the field of evaluation. The question also came up from members of the audience during the Q&A. Dr Egan commented that evaluators have been creating tools believed to capture data without taking into account the complexity around that data, and that it is now time to focus on systems thinking and look at the ‘bigger picture’. Perhaps, to describe complexity and learn from it, evaluators can draw on other disciplines and approaches that have long dealt with the idea. This could help not only in making sense of the complexity around interventions, but also in strengthening impact and guiding decisions through a richer understanding of an intervention’s context.
Read our thoughts from previous UKES conferences: #UKES