What has the evaluation team discovered so far about the Ideas to Impact prizes?
In this blog post, Ideas to Impact’s evaluators share their findings from the assessment of the first round of prizes and lessons learnt from evaluating them.
While the Evaluation and Learning team’s main focus will be Ideas to Impact’s longer-term prizes (stage 2), we have also been looking at what happened in stage 1 and across the whole prize process.
Are the prizes working?
All stage 1 prizes attracted enough good-quality solutions to make awards. One of them fell short of its target, with three winning solutions instead of five, but this could be seen as added value of this form of funding over grants: only participants judged worthy of a prize receive any financial award. Participants have to achieve to gain financial recognition.
Ideas to Impact’s goal is not simply to make awards but to incentivise desired behaviour and outcomes. Each prize has a set of intended effects mapped onto its Theory of Change, which describes how and why it is expected to lead to certain changes and results. For example, we found that the Wazo prize (Stage 1 of the Climate Information Prize in Kenya) succeeded in encouraging the development of solutions to deliver useable climate information to vulnerable communities. It has also helped to raise awareness of climate information and stimulated networking that may not have occurred had the prize not been run.
In theory, prizes are particularly useful to those wanting to reach new entrants as compared to other forms of funding. We have evidence that Ideas to Impact has created interest among people and organisations, including winners, who are new to development funding, innovation prizes and even the field of interest.
Challenges and surprises
Understanding the barriers experienced by solvers matters because it allows the teams who design and manage the prizes to make changes, if needed, to increase participation and create a more level playing field. Stage 1 solvers reported facing obstacles ranging from practical constraints, such as insufficient resources or poor internet access, to inexperience in producing the required types of documentation. Ideas to Impact is using these insights to identify what additional support can be given to solvers.
We have been investigating what motivates solvers to take part, and across the portfolio we have found that money is only part of the picture. Prizes for development are still a novelty and offer non-financial incentives that make them stand out from other competitions. The opportunity to apply knowledge, make something happen, gain recognition for their work and help others were some of the reasons solvers gave for participating. Some Cylinder Prize participants gave only an altruistic reason for taking part.
These are some of the headlines that have emerged from the data collection and analysis done so far. The final evaluations are due to wrap up by spring 2020, and the full reports will be published online. (The evaluation report for the Cylinder Prize is already available, as that prize completed in 2015.)
What are we learning about evaluating prizes?
Previously, we highlighted the different motivations behind prize design and evaluation, and the challenges these can create for both sides; this has proved true in evaluating the first round. There is a tension between pausing to reflect on and improve prize plans, which is good practice in project planning, and keeping things moving to maintain the necessary momentum and solvers’ interest.
The evaluators and prize teams have needed to work together to manage this tension and to identify how and when learning can be gleaned to support the planning and implementation of subsequent stages. In some cases, stage 2 launched as soon as stage 1 was awarded, partly to use the media buzz from award events to energise participation in the more demanding follow-on stage. In this scenario, evaluation findings can only inform later phases in the process, such as judging.
Where there has been space between one stage and the next, practical lessons from the evaluations have informed more substantial changes to design and implementation, notably within Adaptation at Scale and Dreampipe.
In another post, we discussed the need for evaluators to be responsive to changes in design and to learning gained from implementation. More than a year later, this still holds as a reality of evaluating prizes for development. Most of Ideas to Impact’s prizes are multi-stage, so once the stage 1 prizes had been awarded, the Evaluation team spent time reflecting on what had been learned from evaluating Ideas to Impact so far. We used this, together with any changes made to prize designs, to help refine our approach.
We look forward to sharing our new thinking in a future post.
Cheryl Brown is the Evaluation & Learning Coordinator on Ideas to Impact
This blog was originally published on the Ideas to Impact website.