The UKES Conference provides an important space for the evaluation community to share knowledge and experience, discuss ideas and connect with peers. The central question behind this year’s conference is: ‘How do we make sure evaluation leads to real-world change?’. Across three themes, participants will explore how evaluation can shape decisions, build cultures of learning, and communicate findings in ways that inspire action.
Say hello and join our sessions
Itad members Kecia Bertermann, Viktoria Beran and Valeria Raggi will be at the conference, presenting on the two panels below alongside Itad partners. Lamiaa Shehata will also be presenting work from our health portfolio. Please come along and say hello in person – they’d be delighted to meet you!
A shared journey: practical approaches to community-centred and inclusive evaluation to help turn evidence into action
Thursday 18 May 2026 (online), 13.55
Presenters: Viktoria Beran (Itad), Tasneem Mowjee (Independent), Valeria Raggi (Itad)
This interactive panel will showcase how diverse, utilisation-focused evaluations embedded community and lived-experience perspectives at multiple stages of the evaluation cycle. Drawing on experiences from different Itad-led evaluations, the session will highlight practical, resource-sensitive approaches for meaningful participation amid shrinking budgets.
Presenters will share the key factors that enabled engagement, the communication methods used, and how the evaluation teams addressed challenges. The innovative and good-practice approaches we will cover include:
- Working with a Lived Experience Advisory Group from inception to co-creating recommendations, thereby strengthening relevance and accountability.
- Designing context-specific stakeholder engagement methodologies to prioritise marginalised voices (including indigenous groups and persons with disabilities) and ensure the participation of diverse stakeholders in iterative analysis and validation processes.
- Consulting actors from global to local levels at key decision points, thereby shaping areas of enquiry and developing context-specific and actionable recommendations.
The session will comprise a brief presentation, followed by a collaborative learning session to exchange experiences and discuss strategies for engaging communities and stakeholders beyond data collection. Together, we will reflect on lessons learned and practical steps to turn evaluative evidence into equity-driven action.
Human-in-the-loop evaluations: myth-busting and setting realistic expectations on how AI can be used in evaluations
Thursday 21 May 2026 (in person), 11.40am
Presenters: Paul Jasper (Oxford Policy Management), Steve Powell (Causal Map Ltd), Kecia Bertermann (Itad), Matthew McConnachie (NIRAS)
This ‘myth-busting’ round-table will cut through the noise surrounding AI in evaluation and offer practical, hands-on insights into what ‘AI-assisted’ evaluation really looks like today. Four experienced practitioners will share concrete examples of where AI is already adding value, how to manage expectations around efficiency gains, how to handle risks, and where they believe the ‘AI in evaluation’ journey is heading.
- Matthew McConnachie will reflect on how recent developments in agentic AI help him and his team to implement evaluations more effectively.
- Paul Jasper will share examples from OPM on how AI can support specific tasks in evaluation workflows – enabling ‘AI-assisted evaluations’ while keeping human evaluators firmly in the loop.
- Kecia Bertermann will discuss pairing AI specialists with subject-matter evaluators so that domain experts shape the analytical questions and frameworks, while AI experts configure and refine the tools.
- Steve Powell will present examples of how a “verifiable AI” approach to causal mapping has helped answer evaluation questions at scale.
Participants will come away with a clear sense of the practical use cases for AI in evaluations right now. The session will highlight specific points in an evaluation workflow where investing in AI can offer good value for money, alongside reflections on how emerging technologies may shift this picture in the near future.
The discussion will help the audience understand what realistic expectations of AI in evaluation look like – what AI can currently do to support human evaluators, and what it cannot. Participants will also gain insights into how AI may shape the evolving role of evaluators and the tasks they perform on a regular basis.