“Qualitative Comparative Analysis (QCA) is a method particularly well suited for systematic and rigorous comparisons and synthesis of information over a limited number of cases. In addition to a presentation of the method, this report provides an innovative step-wise guide on how to apply and quality-assure the application of QCA to real-life development evaluation, indicating the common mistakes and challenges for each step.”

Rick Davies Comment: This is an important publication, worth spending some time with. It is a detailed guide to the use of QCA, written specially for evaluators. Barbara Befani probably has more experience and knowledge of the use of QCA for evaluation purposes than anyone else, and this is where she has distilled all her knowledge to date. There are lots of practical examples of the use of QCA scattered throughout the book, used to support particular points about how QCA works. It is not an easy book to read, but it is well worth the effort because there is so much of value in it. It is the kind of book you will probably return to many times. I have read pre-publication drafts of the book and the final version, and will undoubtedly be returning to different sections in the future. While this book presents QCA as a package, as is normal, many of the ideas and practices in the book are useful in themselves. For some of my thoughts on this view of QCA, see my comments made at the book launch a week ago (in very rough PowerPoint form: A response, The need-to-knows when commissioning evaluations)

Summary

In the search for new, more rigorous and more appropriate methods for development evaluation, one key task is to understand the strengths and weaknesses of a broad range of methods. This report contributes to that task by focusing on the potential and pitfalls of Qualitative Comparative Analysis (QCA). It aims to be a self-contained, eight-step how-to guide to QCA, built on real-world cases. It also discusses issues of relevance for commissioners of evaluations using QCA, in particular how to quality-assure such evaluations.

QCA stands out as capable of filling a series of important gaps. The method can drastically shorten the distance between qualitative and quantitative methods, sometimes referred to as a divide. By translating qualitative data, including potential causal factors, into a numerical format and analysing it systematically, causal patterns in the data can be found, allowing causal claims to be tested without the need for a counterfactual situation. As such, QCA marries the depth of qualitative information with the rigour of quantitative methods, and allows processes and findings to be replicated. Transparency and replicability give the analytical set-up higher credibility (i.e. high internal validity) than is normally achieved in qualitative studies. In addition, the approach can be used to analyse both small sets of cases (as few as three) and larger ones.
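To make the translation step concrete, here is a minimal sketch of what a crisp-set QCA data matrix and truth table look like in code. The cases, condition names (A, B) and scores below are entirely hypothetical, invented for illustration; they are not taken from the report:

```python
from collections import defaultdict

# Hypothetical crisp-set data matrix: each case is scored 0/1 on two
# candidate causal conditions (A, B) and on the outcome (OUT).
# The scores would come from qualitative evidence, calibrated by the analyst.
cases = {
    "Case1": {"A": 1, "B": 1, "OUT": 1},
    "Case2": {"A": 1, "B": 0, "OUT": 1},
    "Case3": {"A": 0, "B": 1, "OUT": 0},
    "Case4": {"A": 0, "B": 0, "OUT": 0},
}

# Group cases into truth-table rows: one row per unique configuration
# of conditions, listing the outcome values observed for it.
truth_table = defaultdict(list)
for name, row in cases.items():
    config = (row["A"], row["B"])
    truth_table[config].append(row["OUT"])

for config, outcomes in sorted(truth_table.items(), reverse=True):
    print(config, "->", outcomes)
```

In this toy data the pattern is easy to spot by eye: the outcome occurs exactly when condition A is present, regardless of B. Real applications have more conditions and cases, and it is this kind of systematic pattern search, rather than case-by-case narrative, that supports the causal claims.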

Quantitative “rigorous” research designs are often expensive, sometimes prohibitively so. While QCA can be used as a first-hand evaluation method, with the evaluation questions guiding primary data collection, it is often used to make the best of existing resources and data, drawing insight from information that is already available and using it to support or refine a number of possible theories of change. In this sense QCA can be relatively cheap, and a useful tool for theory development, allowing the evaluator to quickly recognise which theories are promising and worth taking forward, and which need fundamental changes and reformulation.

QCA makes it possible to synthesise case-based findings and to assess the extent to which they can be generalised. This creates new possibilities for meta-evaluations, syntheses and systematic reviews, where QCA adds both conceptual clarity and rigour. The learning QCA facilitates answers not only evaluation questions like “did we do things right?” but also questions of the form “did we do the right things?”, allowing an understanding of what works best for different groups, under different circumstances, and in different contexts. By preserving case diversity and complexity, QCA does not capture an average picture of a situation but rather a sort of “controlled simplification”: a synthesis of the dataset that identifies a limited number of patterns explaining an outcome. Finally, QCA is ideally suited to capturing causal asymmetry: causal factors that, although possibly strongly and consistently associated with an outcome, are only necessary but not sufficient for it, or only sufficient but not necessary. Some factors are neither, yet can still be important as necessary conditions for a causal package to be sufficient, just as an intervention can make a strong, demonstrable difference in a specific context (and be necessary for the achievement of an outcome there) but not in others; in other words, necessary not in general but only under specific circumstances.

Drawing on several real-life applications of QCA to evaluation, the report illustrates the opportunities the method offers, showing how it can be applied step by step to develop, test and refine theories explaining how and under what circumstances outcomes are achieved. It also draws attention to several pitfalls, challenges and limitations: for example, the need for consistently available data across comparable cases; the need for technical skills in the evaluation team; the relative unpredictability of the number of iterations needed to reach meaningful findings; and the need for sense-making of the synthesis output, which can be done in many ways, including by drawing on other evaluation approaches such as Contribution Analysis, Realist Evaluation and Process Tracing.
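The causal-asymmetry point can be illustrated with the standard QCA consistency measures for necessity and sufficiency. The sketch below uses made-up crisp-set data; the scores are illustrative assumptions, not results from the report:

```python
# Toy crisp-set data: 1 = condition/outcome present in a case, 0 = absent.
condition = [1, 1, 1, 0, 1, 1]
outcome   = [1, 0, 1, 0, 1, 1]

def sufficiency_consistency(cond, out):
    """Share of cases WITH the condition that also show the outcome."""
    with_cond = [o for c, o in zip(cond, out) if c == 1]
    return sum(with_cond) / len(with_cond)

def necessity_consistency(cond, out):
    """Share of cases WITH the outcome that also show the condition."""
    with_out = [c for c, o in zip(cond, out) if o == 1]
    return sum(with_out) / len(with_out)

print(sufficiency_consistency(condition, outcome))  # 4/5 = 0.8
print(necessity_consistency(condition, outcome))    # 4/4 = 1.0
```

In this toy dataset the condition is perfectly necessary (every outcome case has it) but not sufficient (one case has the condition without the outcome), which is exactly the asymmetry a correlational average would blur together.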

Since generalisation of case-based findings is such an important added value of QCA, a specific section is devoted to the different types of generalisation QCA facilitates. As demand for QCA components in evaluations increases, it is important that QCA quality assurance is improved and perhaps standardised: hence the inclusion of a proposed Quality Assurance checklist, aiming to ensure that the opportunities offered by the method are seized and the pitfalls evaluators can run into are avoided.