The topic of the day was blended finance and how we might address the evidence gap. This builds on the OECD Principles for Blended Finance.
Blended finance is often touted as a great opportunity to move from the ‘billions to trillions’ needed to address the Sustainable Development Goals. But using public money in this way has its controversies: not least the potential to (mis-)use public funds to subsidise firms, distort markets, or even redefine Official Development Assistance (see Jesse Griffiths’ recent blog).
Here are a few reflections from the meeting for evaluators to ponder:
The additionality question won’t go away. It’s both the critical question for policy makers to ask and perhaps an impossible one to answer! It often seems like a brain teaser: the logic is sound, but can we ever prove that investments would not have happened without public money? Some think not (read the Center for Global Development’s paper on the topic). But should evaluators stop trying? There didn’t seem to be any consensus in the room. Some thinking emerged that we should use a richer, more eclectic mix of methods; focus more on what it was about the design of the instrument that public money made additional (even if the end impact is hard to attribute); and accept more probabilistic notions of evidence (rather than proof).
This latter point resurfaced at several moments throughout the day: what is ‘good enough’ evidence? It was a hard discussion to follow, as the terms flipped between lean social performance data; all forms of due diligence (e.g. ESG), reporting and monitoring; input-output models; and right through to all forms of evaluation. There seemed to be some recognition that evidence gathering needed to be at least proportional to the often-tight operating costs of different financial instruments – with donors perhaps needing to accept the cost of robust answers to the harder questions they ask on behalf of their public (e.g. on additionality; which instruments work best under what conditions; deep analysis of social and market-level impacts, etc.).
Of those harder questions, a couple stood out in the discussions:
- Firstly, do (and how do) different instruments of blended finance result in different social and environmental impacts, and under what circumstances? What works best, and why? In reality there is a trade-off between risk and return, but for blended finance this is further complicated by the trade-offs within the return itself (i.e., between social and financial returns). If we accept that not every deal is unique, then there must be patterns about where to best place public money to address market failures and maximise development impact. It seems like there is still much to understand.
- What is the market impact of blended finance? If we focus too much on impact measurement at the firm level, do we overlook distortions to the market? For instance, adding up gross job numbers ignores both the net jobs created (across firms) and how the job market has shifted (for better or worse). Understanding these broader systemic effects is important to justify the use of public money – beyond supporting individual deals and claiming that alone as ‘impact’.
And finally, for me at least, filling the evidence gap for blended finance needs to couple the production of data and analysis with serious consideration of how it can best be used to increase transparency and accountability. While upwards accountability is important (to parliaments, asset owners and asset managers), a multi-polar perspective might include the downwards engagement of employers, suppliers and consumers at one end of the investment chain (see this IDS blog on the costs of participation), as well as different stakeholders – from the public, to investment decision-makers, to strategic policy questions around where best to place scarce public resources.
Different groups with a stake in blended finance should probably expect different forms of evidence, for different purposes and with different levels of rigour. It’s hard to think of one framework and method that fits all purposes. Differentiation is key. Now, that’s quite a challenge for evaluators to navigate.