
Itad at EES: Five takeaways from inside the QCA bubble

Florian Schatz reflects on the Qualitative Comparative Analysis (QCA) sessions at this year's European Evaluation Society conference.

At this year’s European Evaluation Society (EES) conference, I found myself almost exclusively attending sessions related to Qualitative Comparative Analysis (QCA).

Two years ago, QCA was still the newbie at EES, but with seven sessions this year the approach has taken a prominent role in European evaluation discourse and practice. Exciting applications and innovations were presented, and I was happy to engage with this emerging community of practice through my panel contribution on the macro evaluation of DFID’s Empowerment and Accountability policy frame.

A quick run through the QCA sessions

The QCA marathon kicked off with Barbara Befani’s presentation on some limitations of the QCA algorithm, followed by Rick Davies’ review of the advantages and limitations of the two algorithms available in the QCA software, as well as the alternative algorithms available in his EvalC3 tool. Triangulating findings from different algorithms using tools such as EvalC3 is something I believe the evaluation profession should embrace going forward. Wolfgang Stuppert concluded the panel with a presentation on different options for visualising QCA findings, which was equally useful given the challenge of communicating QCA findings to policymakers.

The second QCA panel discussed the benefits and challenges of applying QCA to the evaluation of democracy and transparency initiatives. Dr. Valerie Pattyn focussed on the different stages of the evaluation cycle, paying specific attention to stakeholder engagement and the importance of managing expectations about the purposes and possibilities of the method. Gavin Stedman-Bryce then presented the Itad evaluation of the Medicines Transparency Alliance. Finally, Wolfgang Stuppert demonstrated how he is applying QCA to the evaluation of local ‘Partnerships for Democracy’ for the German Federal Ministry for Family Affairs, an interesting example of a QCA evaluation outside international development.

My session: Challenges and benefits of applying QCA to evaluate DFID’s portfolio in the area of social accountability

On Thursday afternoon, it was my turn to discuss the challenges and benefits of applying QCA on a panel with Barbara Befani and Karel Chambille. As an evaluation commissioner, Karel focussed on the difficulties of engaging programme stakeholders and meeting their expectations. Much of what I said can already be read in our recent blog from the UK Evaluation Society (UKES) annual conference and our Centre for Development Impact Practice Paper, but this time I tried to focus more on the QCA aspect of the macro evaluations, and came up with three key challenges and three key benefits:

Challenges of using QCA

  1. Difficulties in defining conditions and thresholds: In the macro evaluations, our initial challenge was striking the right balance between definitions that were too specific and definitions that were too broad. Overly specific definitions narrowed the number of comparable projects too far, while overly broad definitions generated learning that was too high-level to be insightful. Several of the QCA sessions mentioned this challenge, and it appears to be common (see the calibration sketch after this list).
  2. Limitations of using an inductive approach: In the macro evaluations, we did not have sufficient time and resources to iterate extensively between theory and project evidence to define our conditions, so these remained largely theory-driven. We also had too many conditions (compared to the number of cases) to engage in much inductive analysis and had to resort to a hypothesis-testing approach. Judging by other evaluators’ presentations, this challenge also appears to be recurrent in QCA.
  3. To understand change fully, QCA needs to be combined with other methods: Many presentations also highlighted that QCA findings need to be interpreted and explained using other methods. QCA by itself is not sufficient to understand the causal mechanisms at work, which is why we are combining QCA with narrative analysis in the macro evaluations.
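To make the first challenge concrete, here is a minimal sketch in Python of crisp-set calibration, i.e. turning a raw score into a yes/no condition membership. The project names, scores, and thresholds are all hypothetical; the point is simply that moving the threshold changes which cases count as members of a condition, and therefore how many comparable cases remain.

```python
# A minimal, hypothetical sketch of crisp-set calibration: dichotomising a
# raw score into condition membership. Project names, scores, and the
# condition itself are invented for illustration.

projects = {
    "project_a": 0.82,  # e.g. share of planned community meetings actually held
    "project_b": 0.55,
    "project_c": 0.31,
    "project_d": 0.67,
}

def calibrate(raw_scores, threshold):
    """Code each case as 1 (member of the condition) or 0 (non-member)."""
    return {case: int(score >= threshold) for case, score in raw_scores.items()}

# A strict threshold leaves few comparable cases; a loose one dilutes meaning.
print(calibrate(projects, threshold=0.8))  # only project_a qualifies
print(calibrate(projects, threshold=0.5))  # three of the four projects qualify
```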

Benefits of using QCA

  1. Ability to compare a relatively large number of cases: In psychology there is the rule of 7±2, i.e. the idea that most humans can’t hold more than 7±2 objects in their head at the same time. For this reason, it is difficult to compare a large number of cases without a systematic approach such as QCA. In the macro evaluations, QCA is proving useful for comparing 50 cases (see the truth-table sketch after this list).
  2. Systematic and transparent approach: While a systematic and transparent approach is possible with most other methods, QCA is particularly strong in enforcing such an approach. The method cannot be credibly applied without clear and transparent definitions and rubrics as well as a transparent publication of the different steps of the analysis and findings. Several other presenters at EES also mentioned this benefit.
  3. Ability to handle different types of data: The macro evaluations demonstrate how QCA can handle a wide range of data types, including data of varying quality. In this respect, the approach is superior to alternatives, particularly experimental or quasi-experimental designs, which demand standardised, high-quality data. This flexibility is one of QCA’s key practical benefits and came up in several QCA sessions at the conference.
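As a rough illustration of how QCA keeps a systematic, transparent overview of many cases, the sketch below groups cases into a truth table by their configuration of (already calibrated) conditions. Case names, conditions, and codings are invented for illustration.

```python
# A minimal, hypothetical sketch of a QCA truth table: cases sharing the same
# configuration of conditions are collected into one row, so the analysis
# scales to many more cases than a person can hold in their head at once.
from collections import defaultdict

cases = {
    "case_01": {"strong_partner": 1, "govt_buy_in": 1, "outcome": 1},
    "case_02": {"strong_partner": 1, "govt_buy_in": 0, "outcome": 0},
    "case_03": {"strong_partner": 0, "govt_buy_in": 1, "outcome": 0},
    "case_04": {"strong_partner": 1, "govt_buy_in": 1, "outcome": 1},
}
conditions = ["strong_partner", "govt_buy_in"]

truth_table = defaultdict(list)
for name, coding in cases.items():
    row = tuple(coding[c] for c in conditions)  # this case's configuration
    truth_table[row].append((name, coding["outcome"]))

# Each printed row summarises every case with one configuration, however
# many cases there are -- which is what makes 50 cases tractable.
for row, members in sorted(truth_table.items(), reverse=True):
    print(dict(zip(conditions, row)), "->", members)
```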

Other QCA sessions

A number of other QCA sessions took place during the conference, which I won’t be able to summarise completely here. However, I particularly enjoyed Carroll Patterson’s presentation of an evaluation that combined QCA with Outcome Harvesting. The evaluation looked at mechanisms for coordinating USAID programmes at district level in Uganda and used Outcome Harvesting to capture the random and unpredictable outcomes of the initiative. QCA was applied in a second step to better understand how these outcomes were achieved. This represents a useful and innovative way of dealing with unpredictable outcomes and understanding “what works.”

Five takeaways

  1. QCA has taken a prominent role in European evaluation discourse and practice, and a buzzing community of practice is emerging.
  2. Most evaluations face similar challenges when applying QCA, providing fertile ground for innovative solutions.
  3. Many evaluators are currently experimenting with combining QCA with other methods, which is essential to overcome some of QCA’s limitations.
  4. The challenge of communicating highly technical QCA findings to decision makers remains, but innovative ideas for visualising QCA results promise a way forward.
  5. QCA is just one approach that can be applied to a QCA-ready dataset; there are others, such as data-mining approaches using decision-tree or genetic algorithms. Triangulating QCA findings with findings from methods using other algorithms offers great potential and could be the “what next” for the QCA community of practice (a minimal sketch of the decision-tree idea follows below).
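To close, here is a minimal sketch of that triangulation idea using a decision-tree algorithm from scikit-learn. This is not how EvalC3 itself is implemented (it is an Excel-based tool), and the dataset and condition names are invented; the sketch simply shows how a second algorithm can be run over the same kind of binary case data and its rules compared against a QCA solution.

```python
# A minimal, hypothetical sketch of triangulation: inducing decision-tree
# rules from binary case data of the kind QCA uses, then comparing those
# rules with the QCA solution formula. Data and condition names are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

CONDITIONS = ["strong_partner", "govt_buy_in", "donor_pressure"]
X = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 0],
]
y = [1, 0, 0, 1, 0]  # 1 = outcome achieved

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Where the tree's rules and the QCA solution point to the same configuration,
# confidence in that finding grows; where they diverge, the cases involved
# deserve a closer qualitative look.
print(export_text(tree, feature_names=CONDITIONS))
```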