The Advantages and Limitations of Single Case Study Analysis

As Andrew Bennett and Colin Elman have recently noted, qualitative research methods presently enjoy “an almost unprecedented popularity and vitality… in the international relations sub-field”, such that they are now “indisputably prominent, if not pre-eminent” (2010: 499). This is, they suggest, due in no small part to the considerable advantages that case study methods in particular have to offer in studying the “complex and relatively unstructured and infrequent phenomena that lie at the heart of the subfield” (Bennett and Elman, 2007: 171). Using selected examples from within the International Relations literature[1], this paper aims to provide a brief overview of the main principles and distinctive advantages and limitations of single case study analysis. Divided into three inter-related sections, the paper therefore begins by first identifying the underlying principles that serve to constitute the case study as a particular research strategy, noting the somewhat contested nature of the approach in ontological, epistemological, and methodological terms. The second part then looks to the principal single case study types and their associated advantages, including those from within the recent ‘third generation’ of qualitative International Relations (IR) research. The final section of the paper then discusses the most commonly articulated limitations of single case studies; while accepting their susceptibility to criticism, it is however suggested that such weaknesses are somewhat exaggerated. The paper concludes that single case study analysis has a great deal to offer as a means of both understanding and explaining contemporary international relations.
The term ‘case study’, John Gerring has suggested, is “a definitional morass… Evidently, researchers have many different things in mind when they talk about case study research” (2006a: 17). It is possible, however, to distil some of the more commonly-agreed principles. One of the most prominent advocates of case study research, Robert Yin (2009: 14) defines it as “an empirical enquiry that investigates a contemporary phenomenon in depth and within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident”. What this definition usefully captures is that case studies are intended – unlike more superficial and generalising methods – to provide a level of detail and understanding, similar to the ethnographer Clifford Geertz’s (1973) notion of ‘thick description’, that allows for the thorough analysis of the complex and particularistic nature of distinct phenomena. Another frequently cited proponent of the approach, Robert Stake, notes that as a form of research the case study “is defined by interest in an individual case, not by the methods of inquiry used”, and that “the object of study is a specific, unique, bounded system” (2008: 443, 445). As such, three key points can be derived from this – respectively concerning issues of ontology, epistemology, and methodology – that are central to the principles of single case study research.
First, the vital notion of ‘boundedness’ when it comes to the particular unit of analysis means that defining principles should incorporate both the synchronic (spatial) and diachronic (temporal) elements of any so-called ‘case’. As Gerring puts it, a case study should be “an intensive study of a single unit… a spatially bounded phenomenon – e.g. a nation-state, revolution, political party, election, or person – observed at a single point in time or over some delimited period of time” (2004: 342). It is important to note, however, that – whereas Gerring refers to a single unit of analysis – attention may also need to be given to particular sub-units. This points to the important difference between what Yin refers to as an ‘holistic’ case design, with a single unit of analysis, and an ‘embedded’ case design with multiple units of analysis (Yin, 2009: 50-52). The former, for example, would examine only the overall nature of an international organization, whereas the latter would also look to specific departments, programmes, or policies etc.
Secondly, as Tim May notes of the case study approach, “even the most fervent advocates acknowledge that the term has entered into understandings with little specification or discussion of purpose and process” (2011: 220). One of the principal reasons for this, he argues, is the relationship between the use of case studies in social research and the differing epistemological traditions – positivist, interpretivist, and others – within which it has been utilised. Philosophy of science concerns are obviously a complex issue, and beyond the scope of much of this paper. That said, the issue of how it is that we know what we know – of whether or not a single independent reality exists of which we as researchers can seek to provide explanation – does lead us to an important distinction to be made between so-called idiographic and nomothetic case studies (Gerring, 2006b). The former refers to those which purport to explain only a single case, are concerned with particularisation, and hence are typically (although not exclusively) associated with more interpretivist approaches. The latter are those focused studies that reflect upon a larger population and are more concerned with generalisation, as is often so with more positivist approaches[2]. The importance of this distinction, and its relation to the advantages and limitations of single case study analysis, is returned to below.
Thirdly, in methodological terms, given that the case study has often been seen as more of an interpretivist and idiographic tool, it has also been associated with a distinctly qualitative approach (Bryman, 2009: 67-68). However, as Yin notes, case studies can – like all forms of social science research – be exploratory, descriptive, and/or explanatory in nature. It is “a common misconception”, he notes, “that the various research methods should be arrayed hierarchically… many social scientists still deeply believe that case studies are only appropriate for the exploratory phase of an investigation” (Yin, 2009: 6). If case studies can reliably perform any or all three of these roles – and given that their in-depth approach may also require multiple sources of data and the within-case triangulation of methods – then it becomes readily apparent that they should not be limited to only one research paradigm. Exploratory and descriptive studies usually tend toward the qualitative and inductive, whereas explanatory studies are more often quantitative and deductive (David and Sutton, 2011: 165-166). As such, the association of case study analysis with a qualitative approach is a “methodological affinity, not a definitional requirement” (Gerring, 2006a: 36). It is perhaps better to think of case studies as transparadigmatic; it is mistaken to assume single case study analysis to adhere exclusively to a qualitative methodology (or an interpretivist epistemology) even if it – or rather, practitioners of it – may be so inclined. By extension, this also implies that single case study analysis therefore remains an option for a multitude of IR theories and issue areas; it is how this can be put to researchers’ advantage that is the subject of the next section.
Having elucidated the defining principles of the single case study approach, the paper now turns to an overview of its main benefits. As noted above, a lack of consensus still exists within the wider social science literature on the principles and purposes – and by extension the advantages and limitations – of case study research. Given that this paper is directed towards the particular sub-field of International Relations, it adopts Bennett and Elman’s (2010) more discipline-specific understanding of contemporary case study methods as its analytical framework. It begins, however, by discussing Harry Eckstein’s seminal (1975) contribution to the potential advantages of the case study approach within the wider social sciences.
Eckstein proposed a taxonomy which usefully identified what he considered to be the five most relevant types of case study. Firstly were so-called configurative-idiographic studies, distinctly interpretivist in orientation and predicated on the assumption that “one cannot attain prediction and control in the natural science sense, but only understanding (verstehen)… subjective values and modes of cognition are crucial” (1975: 132). Eckstein’s own sceptical view was that any interpreter ‘simply’ considers a body of observations that are not self-explanatory and “without hard rules of interpretation, may discern in them any number of patterns that are more or less equally plausible” (1975: 134). Those of a more post-modernist bent, of course – sharing an “incredulity towards meta-narratives”, in Lyotard’s (1984: xxiv) evocative phrase – would instead suggest that this more free-form approach may actually be advantageous in delving into the subtleties and particularities of individual cases.
Eckstein’s four other types of case study, meanwhile, promote a more nomothetic (and positivist) usage. As described, disciplined-configurative studies were essentially about the use of pre-existing general theories, with a case acting “passively, in the main, as a receptacle for putting theories to work” (Eckstein, 1975: 136). As opposed to the opportunity this presented primarily for theory application, Eckstein identified heuristic case studies as explicit theoretical stimulants – thus having instead the intended advantage of theory-building. So-called plausibility probes entailed preliminary attempts to determine whether initial hypotheses should be considered sound enough to warrant more rigorous and extensive testing. Finally, and perhaps most notably, Eckstein then outlined the idea of crucial case studies, within which he also included the idea of ‘most-likely’ and ‘least-likely’ cases; the essential characteristic of crucial cases being their specific theory-testing function.
Whilst Eckstein’s was an early contribution to refining the case study approach, Yin’s (2009: 47-52) more recent delineation of possible single case designs similarly assigns them roles in the applying, testing, or building of theory, as well as in the study of unique cases[3]. As a subset of the latter, however, Jack Levy (2008) notes that the advantages of idiographic cases are actually twofold. Firstly, they can operate as inductive/descriptive cases – akin to Eckstein’s configurative-idiographic studies – that are highly descriptive, lack an explicit theoretical framework, and therefore take the form of “total history”. Secondly, they can operate as theory-guided case studies, but ones that seek only to explain or interpret a single historical episode rather than generalise beyond the case. Not only does this therefore incorporate ‘single-outcome’ studies concerned with establishing causal inference (Gerring, 2006b), it also provides room for the more postmodern approaches within IR theory, such as discourse analysis, that may have developed a distinct methodology but do not seek traditional social scientific forms of explanation.
Applying specifically to the state of the field in contemporary IR, Bennett and Elman identify a ‘third generation’ of mainstream qualitative scholars – rooted in a pragmatic scientific realist epistemology and advocating a pluralistic approach to methodology – that have, over the last fifteen years, “revised or added to essentially every aspect of traditional case study research methods” (2010: 502). They identify ‘process tracing’ as having emerged from this as a central method of within-case analysis. As Bennett and Checkel observe, this carries the advantage of offering a methodologically rigorous “analysis of evidence on processes, sequences, and conjunctures of events within a case, for the purposes of either developing or testing hypotheses about causal mechanisms that might causally explain the case” (2012: 10).
Harnessing various methods, process tracing may entail the inductive use of evidence from within a case to develop explanatory hypotheses, and deductive examination of the observable implications of hypothesised causal mechanisms to test their explanatory capability[4]. It involves providing not only a coherent explanation of the key sequential steps in a hypothesised process, but also sensitivity to alternative explanations as well as potential biases in the available evidence (Bennett and Elman 2010: 503-504). John Owen (1994), for example, demonstrates the advantages of process tracing in analysing whether the causal factors underpinning democratic peace theory are – as liberalism suggests – not epiphenomenal, but variously normative, institutional, or some given combination of the two or other unexplained mechanism inherent to liberal states. Within-case process tracing has also been identified as advantageous in addressing the complexity of path-dependent explanations and critical junctures – as for example with the development of political regime types – and their constituent elements of causal possibility, contingency, closure, and constraint (Bennett and Elman, 2006b).
Bennett and Elman (2010: 505-506) also identify the advantages of single case studies that are implicitly comparative: deviant, most-likely, least-likely, and crucial cases. Of these, so-called deviant cases are those whose outcome does not fit with prior theoretical expectations or wider empirical patterns – again, the use of inductive process tracing has the advantage of potentially generating new hypotheses from these, either particular to that individual case or potentially generalisable to a broader population. A classic example here is that of post-independence India as an outlier to the standard modernisation theory of democratisation, which holds that higher levels of socio-economic development are typically required for the transition to, and consolidation of, democratic rule (Lipset, 1959; Diamond, 1992). Absent these factors, MacMillan’s single case study analysis (2008) suggests the particularistic importance of the British colonial heritage, the ideology and leadership of the Indian National Congress, and the size and heterogeneity of the federal state.
Most-likely cases, as per Eckstein above, are those in which a theory is to be considered likely to provide a good explanation if it is to have any application at all, whereas least-likely cases are ‘tough test’ ones in which the posited theory is unlikely to provide good explanation (Bennett and Elman, 2010: 505). Levy (2008) neatly refers to the inferential logic of the least-likely case as the ‘Sinatra inference’ – if a theory can make it here, it can make it anywhere. Conversely, if a theory cannot pass a most-likely case, it is seriously impugned. Single case analysis can therefore be valuable for the testing of theoretical propositions, provided that predictions are relatively precise and measurement error is low (Levy, 2008: 12-13). As Gerring rightly observes of this potential for falsification:
“a positivist orientation toward the work of social science militates toward a greater appreciation of the case study format, not a denigration of that format, as is usually supposed” (Gerring, 2007: 247, emphasis added).
In summary, the various forms of single case study analysis can – through the application of multiple qualitative and/or quantitative research methods – provide a nuanced, empirically-rich, holistic account of specific phenomena. This may be particularly appropriate for those phenomena that are simply less amenable to more superficial measures and tests (or indeed any substantive form of quantification) as well as those for which our reasons for understanding and/or explaining them are irreducibly subjective – as, for example, with many of the normative and ethical issues associated with the practice of international relations. From various epistemological and analytical standpoints, single case study analysis can incorporate both idiographic sui generis cases and, where the potential for generalisation may exist, nomothetic case studies suitable for the testing and building of causal hypotheses. Finally, it should not be ignored that a signal advantage of the case study – with particular relevance to international relations – also exists at a more practical rather than theoretical level. This is, as Eckstein noted, “that it is economical for all resources: money, manpower, time, effort… especially important, of course, if studies are inherently costly, as they are if units are complex collective individuals” (1975: 149-150, emphasis added).
Limitations
Single case study analysis has, however, been subject to a number of criticisms, the most common of which concern the inter-related issues of methodological rigour, researcher subjectivity, and external validity. With regard to the first point, the prototypical view here is that of Zeev Maoz (2002: 164-165), who suggests that “the use of the case study absolves the author from any kind of methodological considerations. Case studies have become in many cases a synonym for freeform research where anything goes”. The relative absence of systematic procedures and methodological guidelines for case study research is something that Yin (2009: 14-15) sees as having traditionally been its greatest concern. As the previous section suggests, this critique seems somewhat unfair; many contemporary case study practitioners – representing various strands of IR theory – have increasingly sought to clarify and develop their methodological techniques and epistemological grounding (Bennett and Elman, 2010: 499-500).
A second issue, which also incorporates questions of construct validity, concerns the reliability and replicability of various forms of single case study analysis. This is usually tied to a broader critique of qualitative research methods as a whole. However, whereas the latter obviously tend toward an explicitly-acknowledged interpretive basis for meanings, reasons, and understandings:
“quantitative measures appear objective, but only so long as we don’t ask questions about where and how the data were produced… pure objectivity is not a meaningful concept if the goal is to measure intangibles [as] these concepts only exist because we can interpret them” (Berg and Lune, 2010: 340).
The question of researcher subjectivity is a valid one, and it may be intended only as a methodological critique of what are obviously less formalised and less researcher-independent methods (Verschuren, 2003). Owen (1994) and Layne’s (1994) contradictory process tracing results of interdemocratic war-avoidance during the Anglo-American crisis of 1861 to 1863 – from liberal and realist standpoints respectively – are a useful example. However, it does also rest on certain assumptions that can raise deeper and potentially irreconcilable ontological and epistemological issues. There are, regardless, plenty of scholars, such as Bent Flyvbjerg (2006: 237), who suggest that the case study contains no greater bias toward verification than other methods of inquiry, and that “on the contrary, experience indicates that the case study contains a greater bias toward falsification of preconceived notions than toward verification”.
The third and arguably most prominent critique of single case study analysis is the issue of external validity or generalisability. How is it that one case can reliably offer anything beyond the particular? “We always do better (or, in the extreme, no worse) with more observation as the basis of our generalization”, as King et al write; “in all social science research and all prediction, it is important that we be as explicit as possible about the degree of uncertainty that accompanies our prediction” (1994: 212). This is an unavoidably valid criticism. It may be that theories which pass a single crucial case study test, for example, require rare antecedent conditions and therefore actually have little explanatory range. These conditions may emerge more clearly, as Van Evera (1997: 51-54) notes, from large-N studies in which cases that lack them present themselves as outliers exhibiting a theory’s cause but without its predicted outcome. As with the case of Indian democratisation above, it would logically be preferable to conduct large-N analysis beforehand to identify that state’s non-representative nature in relation to the broader population.
There are, however, three important qualifiers to the argument about generalisation that deserve particular mention here. The first is that with regard to an idiographic single-outcome case study, as Eckstein notes, the criticism is “mitigated by the fact that its capability to do so [is] never claimed by its exponents; in fact it is often explicitly repudiated” (1975: 134). Criticism of generalisability is of little relevance when the intention is one of particularisation. A second qualifier relates to the difference between statistical and analytical generalisation; single case studies are clearly less appropriate for the former but arguably retain significant utility for the latter – the difference also between explanatory and exploratory, or theory-testing and theory-building, as discussed above. As Gerring puts it, “theory confirmation/disconfirmation is not the case study’s strong suit” (2004: 350). A third qualification relates to the issue of case selection. As Seawright and Gerring (2008) note, the generalisability of case studies can be increased by the strategic selection of cases. Representative or random samples may not be the most appropriate, given that they may not provide the richest insight (or indeed, that a random and unknown deviant case may appear). Instead, and properly used, atypical or extreme cases “often reveal more information because they activate more actors… and more basic mechanisms in the situation studied” (Flyvbjerg, 2006). Of course, this also points to the very serious limitation, as hinted at with the case of India above, that poor case selection may alternatively lead to overgeneralisation and/or grievous misunderstandings of the relationship between variables or processes (Bennett and Elman, 2006a: 460-463).
As Tim May (2011: 226) notes, “the goal for many proponents of case studies […] is to overcome dichotomies between generalizing and particularizing, quantitative and qualitative, deductive and inductive techniques”. Research aims should drive methodological choices, rather than narrow and dogmatic preconceived approaches. As demonstrated above, there are various advantages to both idiographic and nomothetic single case study analyses – notably the empirically-rich, context-specific, holistic accounts that they have to offer, and their contribution to theory-building and, to a lesser extent, that of theory-testing. Furthermore, while they do possess clear limitations, any research method involves necessary trade-offs; the inherent weaknesses of any one method, however, can potentially be offset by situating them within a broader, pluralistic mixed-method research strategy. Whether or not single case studies are used in this fashion, they clearly have a great deal to offer.
References
Bennett, A. and Checkel, J. T. (2012) ‘Process Tracing: From Philosophical Roots to Best Practice’, Simons Papers in Security and Development, No. 21/2012, School for International Studies, Simon Fraser University: Vancouver.
Bennett, A. and Elman, C. (2006a) ‘Qualitative Research: Recent Developments in Case Study Methods’, Annual Review of Political Science, 9, 455-476.
Bennett, A. and Elman, C. (2006b) ‘Complex Causal Relations and Case Study Methods: The Example of Path Dependence’, Political Analysis, 14, 3, 250-267.
Bennett, A. and Elman, C. (2007) ‘Case Study Methods in the International Relations Subfield’, Comparative Political Studies, 40, 2, 170-195.
Bennett, A. and Elman, C. (2010) Case Study Methods. In C. Reus-Smit and D. Snidal (eds) The Oxford Handbook of International Relations. Oxford University Press: Oxford. Ch. 29.
Berg, B. and Lune, H. (2012) Qualitative Research Methods for the Social Sciences. Pearson: London.
Bryman, A. (2012) Social Research Methods. Oxford University Press: Oxford.
David, M. and Sutton, C. D. (2011) Social Research: An Introduction. SAGE Publications Ltd: London.
Diamond, L. (1992) ‘Economic development and democracy reconsidered’, American Behavioral Scientist, 35, 4/5, 450-499.
Eckstein, H. (1975) Case Study and Theory in Political Science. In R. Gomm, M. Hammersley, and P. Foster (eds) Case Study Method. SAGE Publications Ltd: London.
Flyvbjerg, B. (2006) ‘Five Misunderstandings About Case-Study Research’, Qualitative Inquiry, 12, 2, 219-245.
Geertz, C. (1973) The Interpretation of Cultures: Selected Essays by Clifford Geertz. Basic Books Inc: New York.
Gerring, J. (2004) ‘What is a Case Study and What Is It Good for?’, American Political Science Review, 98, 2, 341-354.
Gerring, J. (2006a) Case Study Research: Principles and Practices. Cambridge University Press: Cambridge.
Gerring, J. (2006b) ‘Single-Outcome Studies: A Methodological Primer’, International Sociology, 21, 5, 707-734.
Gerring, J. (2007) ‘Is There a (Viable) Crucial-Case Method?’, Comparative Political Studies, 40, 3, 231-253.
King, G., Keohane, R. O. and Verba, S. (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton University Press: Chichester.
Layne, C. (1994) ‘Kant or Cant: The Myth of the Democratic Peace’, International Security, 19, 2, 5-49.
Levy, J. S. (2008) ‘Case Studies: Types, Designs, and Logics of Inference’, Conflict Management and Peace Science, 25, 1-18.
Lipset, S. M. (1959) ‘Some Social Requisites of Democracy: Economic Development and Political Legitimacy’, The American Political Science Review, 53, 1, 69-105.
Lyotard, J-F. (1984) The Postmodern Condition: A Report on Knowledge. University of Minnesota Press: Minneapolis.
MacMillan, A. (2008) ‘Deviant Democratization in India’, Democratization, 15, 4, 733-749.
Maoz, Z. (2002) Case study methodology in international studies: from storytelling to hypothesis testing. In F. P. Harvey and M. Brecher (eds) Evaluating Methodology in International Studies. University of Michigan Press: Ann Arbor.
May, T. (2011) Social Research: Issues, Methods and Process. Open University Press: Maidenhead.
Owen, J. M. (1994) ‘How Liberalism Produces Democratic Peace’, International Security, 19, 2, 87-125.
Seawright, J. and Gerring, J. (2008) ‘Case Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options’, Political Research Quarterly, 61, 2, 294-308.
Stake, R. E. (2008) Qualitative Case Studies. In N. K. Denzin and Y. S. Lincoln (eds) Strategies of Qualitative Inquiry. Sage Publications: Los Angeles. Ch. 17.
Van Evera, S. (1997) Guide to Methods for Students of Political Science. Cornell University Press: Ithaca.
Verschuren, P. J. M. (2003) ‘Case study as a research strategy: some ambiguities and opportunities’, International Journal of Social Research Methodology, 6, 2, 121-139.
Yin, R. K. (2009) Case Study Research: Design and Methods. SAGE Publications Ltd: London.
[1] The paper follows convention by differentiating between ‘International Relations’ as the academic discipline and ‘international relations’ as the subject of study.
[2] There is some similarity here with Stake’s (2008: 445-447) notion of intrinsic cases, those undertaken for a better understanding of the particular case, and instrumental ones that provide insight for the purposes of a wider external interest.
[3] These may be unique in the idiographic sense, or in nomothetic terms as an exception to the generalising suppositions of either probabilistic or deterministic theories (as per deviant cases, below).
[4] Although there are “philosophical hurdles to mount”, according to Bennett and Checkel, there exists no a priori reason as to why process tracing (as typically grounded in scientific realism) is fundamentally incompatible with various strands of positivism or interpretivism (2012: 18-19). By extension, it can therefore be incorporated by a range of contemporary mainstream IR theories.
Written by: Ben Willis
Written at: University of Plymouth
Written for: David Brockington
Date written: January 2013


A New Case for the Study of Individual Events in Political Science

Joseph Torigian, A New Case for the Study of Individual Events in Political Science, Global Studies Quarterly, Volume 1, Issue 4, December 2021, ksab035, https://doi.org/10.1093/isagsq/ksab035
Despite significant advances in both quantitative and qualitative methods over the last few years, the discipline of political science has yet to explicitly address the special challenges and benefits of studying specific historical events marked by high levels of contingency. The field of security studies, where concrete historical cases have always played a major role in the development of the subfield, should place special focus on the specific challenges and benefits of the study of such events. Taking full advantage of what event-specific research can teach us, however, will require thinking about generalizability, evidence, the role of contingency, and falsifiability in ways that are not yet fully understood in the discipline. More clarity on such questions will benefit our understanding of events like nuclear crises in particular.
In the 2000s, political science underwent a “credibility revolution.” Drawing on innovations first introduced by economists, the field now pays close attention to the exact conditions needed for a causal interpretation of quasi-experiments and natural experiments ( Angrist and Pischke 2010 ). This step forward means we can now much more reliably measure an average treatment effect. Recently, political scientists also have begun to pay more attention to a different set of questions—how can we explain specific cases and what can we learn from them? Yamamoto and Lam have suggested quantitative techniques for determining how many past events can be explained by a particular cause or how to measure individual causal effect ( Yamamoto 2011 ; Lam 2013 ). Goertz and Mahoney argue that Mill's methods, which identify necessary and/or sufficient conditions using cross-case variation, make more epistemological sense for explaining individual cases than an average treatment effect. Political scientists have also made breakthroughs on understanding what can be learned through “process-tracing” within individual cases ( Goertz and Mahoney 2012 , 87).
However, political scientists have not sufficiently moved onto ground that would fully justify looking at specific, concrete historical events or provide complete answers for how they should be studied. Within the subfield of security studies, where qualitative case studies have historically played a foundational role, thinking explicitly about the advantage of event-specific research is a crucial task. Crucially, fully extracting what we can and should learn from individual moments requires analytical priors different from those methods that seek to find an average treatment effect, necessary/sufficient conditions, or links in a chain. In this paper, I both make the case for studying individual events and explain what methodological assumptions are most useful for such research.
Average treatment effects and necessary/sufficient conditions are variables that have a probabilistic or deterministic effect, respectively (“determinative” here refers to the epistemology shaping the method, not the idea that political scientists can or should seek to find perfect determinative relations) ( Goertz and Mahoney 2012 ). These methods by their nature contain a tradeoff—the power we gain by looking at numerous cases together forces our results into an inherently ambiguous relationship with specific events. Causal chains, on the other hand, present challenges for generalizability, easily confuse chronology with causality, make problematic assumptions about how “determined” the links on the chain may be, include causes not interesting from a social-scientific perspective, and risk oversimplifying an iterative, contingent, and rapidly evolving situation.
As this article proposes, finding driving forces that have a gravitational pull on events, such as nuclear crises, is the most scholars can hope for when explaining individual cases. The point of the investigation is not to link cause and effect by coding and operationalizing variables but to conceptualize driving forces that pushed or pulled the outcome in a particular direction and how they worked. While this way of thinking excludes prediction, it does include a form of explanation that helps reduce perplexity in both the case at hand and other similar events as well. This differentiation may sound subtle, but it demands a different way of thinking about a host of methodological issues. Although focusing on individual events in this way does not preclude the use of other methods to gain further insights into a topic under investigation, it does proceed from priors different enough that it cannot be seamlessly included into an integrated multi-method approach.
Section I describes why quantitative methods, Mill's methods, and most forms of process-tracing are only partially useful for understanding specific historical events. Section II explains why we should be interested in individual events but that such a focus demands a special way of thinking about (1) generalizability, (2) research design, (3) the evaluation of evidence, (4) the role of contingency, and (5) falsifiability. Section III applies these ideas to the study of nuclear crises.
Political scientists have made serious breakthroughs in theorizing about the strengths of case study research. Yet the field has still not fully provided a complete case for the inherent benefits of rigorously investigating specific events on their own. As this section demonstrates, despite claims to have moved away from the “quantitative” worldview, the field still usually proceeds from a Hume-ean view that prevents an approach that fully and properly extracts what we can from such research.
Approaches that seek to identify an average treatment effect (described by some as statistics, a term not accepted by most practitioners) adopt a probabilistic, correlational conception of causality and seek to measure the average treatment effect for a theoretical case (or, to be more precise, the average over individual estimates). In 1994, King, Keohane, and Verba famously argued that these same principles of inference applied to both quantitative and qualitative methods ( King, Keohane, and Verba 1994 ).
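To make the distinction concrete, the contrast between an average treatment effect and the explanation of a single case can be written in standard potential-outcomes notation (a formalization added here for illustration only; it is not notation used in the works cited):

$$\tau_i = Y_i(1) - Y_i(0), \qquad \mathrm{ATE} = \mathbb{E}[\tau_i] = \mathbb{E}[Y_i(1) - Y_i(0)]$$

Here $Y_i(1)$ and $Y_i(0)$ are unit $i$’s potential outcomes with and without the treatment, $\tau_i$ is the individual causal effect, and the ATE averages $\tau_i$ across the study population. Because only the average is identified, the ATE can be well estimated even when it describes no individual case particularly well – which is precisely the gap between cross-case inference and case-specific explanation at issue here.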
A few years later, Goertz and Mahoney argued that Mill’s methods, also known as set theory or nominal analysis, were based on fundamentally different principles. Set theory uses variables that are mutually exclusive and collectively exhaustive to explain outcomes. The method of agreement identifies candidate necessary causes by eliminating any condition that is absent when the outcome is present, while the method of difference identifies candidate sufficient causes by eliminating any condition that is present when the outcome is absent (Ragin 1987, 2000; Goertz and Mahoney 2012).
Standard quantitative causal inference methods and set theory both establish general relationships between variables using comparison, not within-case analysis (Brady 2008). The cross-case element in set theory is evident in the fact that “necessary and sufficient” conditions are not intended to explain every single case. For example, although Ertman’s theory does not predict every single one of his cases correctly, its usefulness remains (Ertman 1997). As Mahoney himself recognizes, there can be a “probabilistic” understanding of necessary and sufficient conditions (Mahoney 2003). Therefore, both quantitative causal inference methods and Mill’s methods are rooted in the theories of causality posited by David Hume, in which causation is “understood in terms of regular patterns of X: Y association, and the actual process whereby X produces Y is black-boxed” (Beach and Pedersen 2013, 25). Hume-eans see social forces as regularities or “covering laws”: light switches that lead to automatic outcomes given certain circumstances.
Hume-ean approaches are extremely powerful. When looking at individual cases, they have abundant utility. Moreover, even the most quantitative scholars acknowledge the need to have some knowledge of specific cases. Insights from one methodological approach regularly improve the research design of another approach ( Beck 2010 ). For example, cross sectional analysis can sensitize a qualitative researcher about what types of causes may (or may not) matter. It may also point the researcher in the direction of certain cases ( Laitin 2003 ).
Yet some scholars doubt that the two logics can be combined at all: “Case studies and inferential statistics cannot logically mix if the definition of causality is reductionist and regularist . . . How does one know that the mechanism connecting a cause with an effect in a particular case study is the same mechanism connecting causes to effects in all the other cases? What part of the study does the causal work, the case studies or the statistical analysis? If it is the case study then the statistical analysis should not convince us, and if it is the statistical analysis then the case study should not convince us” (Chatterjee 2009, 11).
Ahmed and Sil raise a similar objection to multi-method designs “where one method advances a nomothetic proposition intended to function as a ‘covering law’ while another proceeds from a phenomenological view of the world and offers a context-specific idiographic narrative. Because these approaches are predicated on fundamentally distinct ontologies and conceptions of causality, the findings they generate are ultimately incommensurable and do not serve to strengthen each other” (Ahmed and Sil 2012, 936).
Due to these different priors, concepts have fundamentally different meanings in a quantitative context as opposed to a qualitative one, as they are operationalized to achieve different functions ( Ahram 2013 ).
Seawright illustrates the difficulty of squaring case evidence with statistical estimates using the example of the Colombian civil war: “Does Colombian history show too much of a role for terrain in light of a statistical coefficient that is significant but not substantively moderate? After all, the current civil war began in part because anti-state actors had created refuges for themselves in the mountains; a conceivable counterfactual is that less rugged terrain would have prevented these key actors from organizing in the first place. On the other hand, perhaps the case suggests that the coefficient is too large; armed factors have at times found the jungles and other regions as hospitable a refuge as the mountains, so various forms of difficult terrain may be substitutes in a way that the statistical results fail to demonstrate. Or perhaps these competing considerations are just what the [estimated coefficient] of 0.219 implies? I think it is in fact impossible to decide whether the case study and the logit coefficient agree” (Seawright 2016, 6–7).
Laitin himself concedes that “as pathways multiply, these techniques get increasingly tenuous. Under such conditions, narrative would need to stand alone, and rules of narrative coherence and completeness would help decide whether the causal structure was as theorized.”
For Laitin, narratives can be used for “residuals” that cannot be explained by variance. Laitin's own characterization of the role of narratives, therefore, points to their problematic role in the multi-method approach ( Laitin 2003 ).
When an average causal effect is identified, scholars who assume that such a finding can be easily integrated into case studies unsurprisingly often engage in qualitative work that necessarily oversimplifies or misrepresents the historic evidence. However, the unproblematic use of average causal effect findings to explain individual events is not only methodologically unsound and a recipe for poor case studies but, in policy-making terms, also occasionally dangerous. As Elster notes, “To apply statistical generalizations to individual cases is a grave error, not only in science but also in everyday life . . . The intellectual fallacy is to assume that a generalization valid for most cases is valid in each case” ( Elster 2007 , 19). Jackson illustrates this point by referring to how policymakers, shaped by democratic peace theory, failed to distinguish general frequencies connecting regime type and violence from case-specific explanations ( Jackson 2017 , 690). Statistical findings are, of course, useful because they point to average causal effects, but they cannot be unproblematically and automatically used to explain specific, individual cases.
As Suganami argues: “More substantively, following the covering-law model does not in fact enable us to give an explanation of the occurrence of an event – for all that following this model does is to show that the occurrence of Y (‘the explanandum’) was to be expected in the circumstances because ‘it always happens like that’ (or, in a diluted version, ‘it often happens like that’)” (Suganami 2008, 331).
Where does process-tracing fit into this discussion? Process-tracing is inherently case-specific, yet the two most common understandings of process-tracing have limited utility for the study of specific events. One form of process-tracing uses “hoop tests” to identify necessary conditions and “smoking gun tests” to identify sufficient conditions with evidence in individual cases. The crucial task is to identify intervening steps that are so proximate to one another that their connection is obvious: “The leverage gained by this kind of test derives from the fact that while X being necessary for Y is in doubt, the status of M [mechanism] being sufficient for Y and of X being necessary for M might be more readily available” ( Mahoney 2012 , 579). In other words, while set theory finds necessary and sufficient conditions by comparing across cases, process-tracing identifies these conditions by linking the original cause to the final outcome by identifying intervening variables close enough to make the causal relationship self-apparent in individual cases ( Waldner 2015 ).
This view of process-tracing has obvious commonalities with a second approach described as a comparative sequential method, comparative narrative analysis, generic narrative, or event-structure analysis. Here, the scholar seeks to formally diagram narratives so that they can be compared across cases to see if they follow the same causal logic. This approach delineates a series of events (conceptually defined) and then shows how their presence in multiple cases leads to the same outcome (Abbott 1990).
Mahoney explicitly states that his conception of process-tracing is different from the Hume-ean worldview: “Scholars who use process tracing . . . reject the view that an event is explained when it can be subsumed under and predicted by a covering law model” ( Mahoney 2012 , 586). However, in crucial ways, Mahoney's understanding of process-tracing remains Hume-ean. Like Sambanis, he understands mechanisms as “variables that operate in sequence” ( Sambanis 2004 , 13; George and Bennett 2005 ). George and Bennett use the term “dominos” ( George and Bennett 2005 , 206). This “billiard-ball” view of explanation, which uses earlier events as causes, originates with David Hume ( Elster 2007 , 3).
The “billiard-ball” understanding of process-tracing suffers from several inherent problems when applied to the study of individual events. As Chatterjee perceptively recognizes, “The difficulty of defending case studies while holding this particular understanding of mechanisms stems from the fact that it implies just another version of the Hume-an definition extended to intervening variables” ( Chatterjee 2009 , 13).
First, a causal chain complicated enough to explain one crisis would almost certainly struggle with explaining another crisis given how contingent, idiographic, and iterative crises are. Second, process-tracing as “bunching” intervening variables is extremely difficult in an environment in which specific moments have an interactive effect. Third, simple sequential accounts, as their supporters admit, still often fail at “abstracting ‘causes’ out of their narrative environments” ( Abbott 1991 , 228). Instead, they accept “temporal flow as the basis of explanation and the narrator's construction of the event as the happening” ( Griffin 1993 , 1105). In other words, narratives can “often miss the distinction between chronology and causality” ( Maxwell 2012 , 45).
Fourth, process-tracing assumes a deterministic relationship. However, in events like a crisis the final outcome is almost beside the point. If counterfactuals are easily imaginable, and uninteresting for social-scientific reasons, then assuming that one outcome was more likely than another would be misleading. We have no reason to assume that if the event was re-started as an experiment, we would get the same result most of the time ( Jervis 1997 ).
Fifth, a causal chain specific enough to explain a given crisis would necessarily include highly contingent events inherent to such dynamic situations. Those events might be crucial for understanding the outcome, but their origins are uninteresting from a social-scientific point of view. Including such elements in a causal chain would complicate the theoretical message of the endeavor. As Jackson argues, “In the open system of the actual world, a causal explanation is not likely to look anything like a linear combination of discrete variables, but will likely feature case-specific sequences and interactions in ways that are difficult to capture generally or formally” ( Jackson 2017 , 691).
Sixth, this approach raises serious questions about how much we gain from the hugely complicated task of explicitly formalizing every part of a narrative and whether such a level of specificity really teaches us anything. While establishing a chronology of the event is of course crucial for the broader research project, and the more detail the better, that chronology should be used as evidence, not theory.
Seventh, process-tracing is predicated on the idea that it should adjudicate among competing hypotheses and that only one hypothesis provides the right answer. However, that viewpoint prescribes an approach to evaluating evidence in individual cases that necessarily includes multiple serious pathologies. As Zaks explains in an important article, the assumption that only one hypothesis has any validity “artificially inflates the importance of one explanation at the expense of another.” If hypotheses are mutually exclusive, a researcher can claim that a hypothesis is credible after presenting even the slightest of evidence. Yet, in reality, “mutual exclusivity, however, is a strong modeling assumption; and empirically, it is more often the exception than the rule. Competing explanations may exhibit a variety of relationships to the main hypotheses, each of which has distinct implications for collecting evidence and drawing inferences” (Zaks 2017, 344–45).
Ultimately, then, for specific cases, the importance of context makes a covering law approach essentially meaningless. As Cartwright puts it, “At the lower level there are a very great number of laws indeed . . . The conditions are too numerous. They give us too many factors to control. Our experiments would be undoable and the laws they entitle would be narrowed in scope beyond all recognition . . . how a factor operates, at this very concrete level, is far too context-dependent” ( Cartwright 1999 , 91).
These methodological challenges raise questions about the benefits of focusing on individual events, whether as standalone projects or as part of a multi-method approach. Yet political scientists should include the study of individual events in their toolbox for several reasons.
First, people often understand historical moments as totems representing a particular type of politics, and political scientists should provide society with rigorous, serious explanations for them ( Inboden 2014 ). Without professional investigations into the past that explicitly debate what past moments should teach us about how the world works, it is more likely that policymakers and the broader public will use poor analogies to understand the present ( Khong 1992 ).
Second, if power is an iceberg, then specific events reveal more ice than usual ( Pierson 2015 , 124). As Gourevitch put it, “Hard times expose strengths and weaknesses to scrutiny, allowing observers to see relationships that are often blurred in prosperous periods, when good times slake the propensity to contest and challenge” ( Gourevitch 1986 , 9). In other words, despite the view of the “transitologists” who discounted how much we could learn from looking at moments of democratization (or its failure), if we change our methodological priors and research goals we can actually learn quite a bit ( O'Donnell and Schmitter 1986 ).
Third, a very close investigation into specific historical events has already proven deeply beneficial to political science theorizing. Capoccia and Ziblatt argue that “history sits again at center stage of the comparative study of democratization.” Episode analysis, for example, “identifies the key political actors fighting over institutional change, highlights the terms of the debate and the full range of options that they perceived, reconstructs the extent of political and social support behind these options, and analyzes, as much as possible with the eyes of the contemporaries, the political interactions that led to the institutional outcome” (Capoccia and Ziblatt 2010, 932). Historical institutionalists, in particular, have emphasized the implications of the political phenomenon as being deeply rooted in particular times and places (Thelen 2002; Pierson 2004; Hall 2010). Hall, noting the pressures this worldview creates for the demanding assumptions necessary for both the comparative method and the standard regression models, argues that “a substantial gap has opened up between the methodologies popular in comparative politics and the ontologies the field embraces” (Hall 2003, 374).
Fourth, historical research has increasingly raised questions about game-theoretic approaches that use formal modeling to create parsimonious and elegant theories that conceptualize actors as utility-maximizers with stable preferences. As a deductive tool, formal modeling can play a powerful role in generating hypotheses and giving traction within individual cases (Schelling 1966; Glaser 2010). Yet when the pursuit of parsimony is divorced from historical grounding, game theory analysis often drifts toward ahistorically homogenizing assumptions that lack empirical verification about actors like class or sector. As recent scholarship shows, evidence that would have to be identified empirically to prove many famous game-theoretic arguments, especially with regard to international crises and democratization, does not exist in the empirical record (Kreuzer 2010; Morrison 2011; Haggard and Kaufman 2012; Trachtenberg 2012; Gallagher and Hanson 2013; Slater, Smith, and Nair 2014). Closer attention to history helps scholars avoid assuming unit homogeneity, failing to give due justice to the temporal element, or missing the right direction of causality (Capoccia and Ziblatt 2010).
Fifth, looking at individual events provides a fruitful new direction within the discipline as it increasingly recognizes the constraints of other approaches. The impact of the “credibility revolution” has arrived alongside a recognition that truly persuasive findings are only possible in unique situations. For an analysis to be persuasive, a researcher has to pass many serious hurdles: ensure that all confounding variables have been identified; balance the dilemma of including too many or too few variables; have an accurate and credible idea of how data is generated to avoid unverifiable assumptions about the distribution of independent and dependent variables, error terms, and linearity; and be sure endogeneity is not a problem (Achen 2005; Clarke 2005; Freedman 2010; Dunning 2012; Rodrik 2012; Narang 2014). Even “big data” cannot make up for problems in research design (Titiunik 2015, 75–76). Sekhon similarly concludes that “[w]ithout an experiment, a natural experiment, a discontinuity, or some other strong design, no amount of econometric or statistical modeling can make the move from correlation to causation persuasive” (Sekhon 2009, 503). Due to precisely these problems, quantitative scholars have moved in the direction of natural experiments—in which some accident of history has created a situation resembling a real experiment. Natural experiments are able to avoid the problem of confounding variables and are thus much more persuasive than multivariate regression. This comes with a catch, however: advanced techniques cannot solve problems inherent to a flawed research design (Shalev 2007; Freedman 2010; Seawright 2010; Dunning 2012), and natural experiments are by their nature confined to cases given to us by accident, so some may feel that we are presented with a very limited field of academic endeavor (Deaton 2009; Rodden 2009).
Sixth, the study of individual events can be seen as one more approach in the discipline's broader toolbox. Political scientists should be cognizant of the analytical priors of different methodological approaches, and individual events cannot be seamlessly integrated into medium- or large-n research. Although these approaches cannot ask exactly the same question or serve as a “crutch” for one another, because they proceed from different ontologies, they can shed light on the same broader subject in different ways.
For the study of individual events to reach its full potential, we need a very different approach from coding variables and identifying correlations, or stringing variables together in a causal chain. It requires significantly different analytical priors with regard to generalizability, research design, evidence evaluation, contingency, and falsifiability.
The reasons for studying specific events listed in the previous section identify benefits that matter only if such an approach is actually feasible. But is generalizability possible with such a narrow aperture of focus? Here, we make the case that, when looking at individual events, we should not try to identify either the precise constellation of variables that led to an outcome or a causal chain. Instead, the goal should be to identify the causal forces that exerted a gravitational pull on the outcome and the mechanisms through which they manifested.
“[Cases] are often more important for their value in clarifying previously obscure theoretical relationships than for providing an additional observation to be added to a sample . . . a good case is not necessarily a ‘typical’ case but a ‘telling case’” (McKeown 1999, 174).
This viewpoint also has strong intellectual affinities with the definition of a “conditioning” cause provided by Slater and Simmons: “Conditions that vary before a critical juncture and predispose (but do not predestine) cases to diverge as they ultimately do” ( Slater and Simmons 2010 , 891).
With regard to this question of generalizability, critical realists have provided useful theorizing, although not all international relations theorists accept their tenets (Chernoff 2009). Critical realists are interested not in regularities but in understanding “what an object is and the things it can do by virtue of its nature” (Danermark et al. 2002, 55). Cases are not “manifestations of one or another theoretically derived instance[s] in a typology” but a combination of different structural elements (Katznelson 1997, 99). For critical realists, the first level is what we observe (the empirical, or in other words the evidence we collect), the second level is what actually happened in the historical record (the actual), and the third level is generative structures (the real) (Collier 1994, 42–44; Danermark et al. 2002, 20). This is fundamentally different from those working in the tradition of Hume, who conflate the three domains (empirical, actual, and real) by assuming that the “real” can be reduced to what happens in the form of a relationship between two observables that can be codified operationally (Danermark et al. 2002, 7).
Weber and the critical realists, however, differ on the question of whether the causes they identify are “real.” Weber was explicit that his “ideal types” did not exist in the real world. Similarly, Elster, who defines mechanisms as “frequently occurring and easily recognizable causal patterns that are triggered under generally unknown conditions or with indeterminate consequences,” compares them to “proverbs” (Elster 2007, 27). Critical realists are not fully satisfied with the “ideal type/proverb” approach, as they argue that a “cause” is real. Yet they do not expect that cause always to work in the same way, because they believe these forces are always shaped by contingencies and cannot be explained with covering laws. Here, we do not pick a side in this debate, but we do emphasize that both positions offer a useful, non-Hume-ean route to generalization.
Can political scientists be happy with using individual events to determine “proverbs” or “transfactual” causes that often manifest differently? One possible charge against this methodology is that it is incapable of predicting outcomes and therefore irrelevant to political science. As Kirshner notes, although political scientists rarely argue that they can accurately predict the future, prediction is still the model upon which standard approaches rely ( Kirshner 2015 , 9–10, fn 19).
Yet social scientists need not make precise prediction the test of useful research. Even physics, which many political scientists see as a model, struggles with prediction. Physics theories “are severely limited in their scope. For, to all appearances, not many of the situations that occur naturally in our world fall under the concepts of these theories” (Cartwright 1999, 9). A series of recent findings demonstrates that prediction in the social world is particularly difficult (York and Clark 2007b; Ward, Greenhill, and Bakke 2010; Ng and Wright 2013; Tikuisis, Carment, and Samy 2013; Ahir and Loungani 2014; Friedman 2014; Hasnain and Kurzman 2014; Bowlsby et al. 2019). For rare phenomena, “even weak laws of large numbers don't hold” and we have no reason to assume that any particular outcome is typical (Bendor and Shapiro 2019, 129). For political scientists who believe the discipline should still strive to be “scientific,” it is crucial that, even in the natural world, scientists are content to “explain processes and outcomes but not predict them” (George and Bennett 2005, 130–31).
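As a hedged illustration of why rare phenomena resist frequency-based prediction, the sketch below (with an assumed, purely illustrative event probability) re-runs a century-long “historical record” many times: a large share of such records contain no events at all, and the implied probability estimates vary widely, so no single realized history can be treated as typical.

```python
# Hypothetical sketch: frequency estimates for a rare event are unstable even over a
# long record, so one observed history says little about what is "typical".
import numpy as np

rng = np.random.default_rng(1)
true_p = 0.02            # assumed per-year probability of the rare event (illustrative)
years = 100              # length of the historical record
n_histories = 10_000     # alternative "runs" of history

event_counts = rng.binomial(years, true_p, size=n_histories)
estimates = event_counts / years
print("share of histories with zero events:", (event_counts == 0).mean())   # ~0.13
print("range of implied probabilities:", estimates.min(), "to", estimates.max())
```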
“Or, reading across multiple events and situations, one might start to develop a conceptual vocabulary of mechanisms and processes useful for organizing different cases and showing how in each case there was a unique configuration of mechanism and processes leading to a specific outcome. Instead of the manipulation of inputs, logical elaboration with a myriad of examples establishes the plausibility of each causal claim” ( Jackson 2017 , 705).
Any political scientist who has passed their general exams can explain how to conduct good case selection for research projects in the “regularist” or “frequentist” tradition. They would know, correctly, never to select on the dependent variable. Yet, with the understanding of generalizability described above, a scholar could justify case selection on a wide variety of other grounds, including: the historical importance of the event, the new availability of crucial sources, policy relevance, the link between that event and a theoretical body of literature, an outcome that is puzzling for some theoretical or empirical reason, the ability to leverage language skills or personal experience and contacts, and/or the possibility of finding evidence that can answer the theoretical question she is asking. Acknowledging these advantages strengthens the case for encouraging students to pick cases based on their ability and inclination to truly master them (Kollner, Sil, and Ahram 2018, 4).
What terminology, then, should be used to describe this approach? A useful term here is “retroduction.” The term (sometimes “abduction”) originates with the philosopher Charles Sanders Peirce, who noticed that scientists were able to derive plausible hypotheses without the use of induction or deduction. Peirce believed in “another definite state of things,” which, although no “unequivocal evidence” could prove its existence, would “shed a light of reason upon the state of facts with which we are confronted” (Pietarinen and Bellucci 2014, 355). Although the ultimate purpose of retroduction/abduction is sometimes viewed differently, the method is essentially about using individual cases as fertile ground for identifying useful concepts, and it is distinct from the Hume-ean understanding of causality (Friedrichs and Kratochwil 2009).
Inspired by Peirce, scholars of many stripes have embraced abduction as a legitimate form of scientific inference that is not limited to deduction or induction. Interpretivists, for example, engage in abductive reasoning that “begins with a puzzle, a surprise, or a tension, and then seeks to explicate it by identifying the conditions that would make that puzzle less perplexing and more of a ‘normal’ or ‘natural’ event” (Schwartz-Shea and Yanow 2012, 27–28). For interpretivists, although abductive reasoning is the “logic,” their ultimate goal is to answer “questions about context and meaning” (Agar 2010, 290). In other words, for interpretivists, “Abductive reasoning on its own does not require that one search for meaning, or that meaning be context-specific, as Agar (2010, 20) notes. But interpretive research does!” (Schwartz-Shea and Yanow 2012, 32). They consciously depart from Peirce, who thought that retroduction should be followed by induction and deduction. Scholars looking for causal explanations of events will, in turn, depart from the interpretivist focus on meaning rather than causality (although, like interpretivists, they will necessarily attend to the intentions and views of the actors under investigation).
In critical realist language, on the other hand, retroduction/abduction is the process by which individual cases are used to understand the domain of the real. It is “about advancing from one thing (empirical observation of events) and arriving at something different (a conceptualization of transfactual conditions)” (Danermark et al. 2002, 96). Although the real never appears in its pure observable form, retroduction helps us understand the generative functions of those antecedent conditions. Unlike Peirce, critical realists do not believe that the “next step” is necessarily induction and deduction; instead, they prioritize the discovery of transfactual conditions.
Should a specific-event-centric research project still include more than one case in its research design? Certainly, such a research project would be stronger if some causes are present in some cases but not in others. In fact, that difference would be a good reason to select another case. Such a state of affairs would allow for the researcher to draw interesting insights into the implications of such a factor being present or absent. Yet, this tactic is different from “controlling” certain variables because the number of cases would be too small and idiosyncratic ( Jackson 2016 , 121). The researcher should not go beyond the number of cases that she can master, as the value of this approach is getting the individual cases right, not the number of cases. If the details are wrong, the whole argument is wrong.
“The basic technique is to take some major theoretical claim, bring it down to earth by thinking about what it would mean in specific historical contexts, and then study those historical episodes with those basic conceptual issues in mind . . . Theoretical claims are hard to deal with on a very general level. But those general claims translate, or should translate, into expectations about what you are likely to find if you study a particular historical episode” ( Trachtenberg 2006 , 32, 45; Darnton 2018 ).
First, the questions themselves are essentially empirical and must be concrete enough to answer with evidence. A question cannot be something like whether income inequality prevented democratization. The researcher can, however, ask questions like: were elites afraid that democratization would lead to redistribution of income? Were elites able to translate economic power into state capacity? Did elites care about issues other than income redistribution and, if so, how much? In other words, the empirical questions are a bit of a “bank shot”: they do not answer the theoretical questions directly, but they have obvious relevance for theory.
Second, the questions should connect directly to a broader theoretical debate. For example, the idea that democratization is primarily about income distribution comes from Acemoglu and Robinson ( Acemoglu and Robinson 2006 ). Levitsky and Way, on the other hand, emphasize the importance of revolutionary legacies ( Levitsky and Way 2013 ). The task of the researcher would be to formulate questions that strengthen or weaken the purchase of those worldviews for explaining a given case. If, during a political crisis, leaders betrayed no worries about income distribution, but they did betray an obsession with defending the regime because they helped create it, then that would be theoretically meaningful for this discussion (although such a finding would not “disprove” the average causal effect determined by Acemoglu and Robinson).
What kind of evidence addresses questions like these? As Bennett and Checkel write, for case studies historical evidence is not “variables” but “diagnostic evidence,” which is further supplemented with “the ways in which actors privately frame or explain their action” (significantly, however, Bennett and Checkel still seek to identify regularities among variables at the macro-level) ( Bennett and Checkel 2015 , 7).
The key insight provided by some process-tracers is the importance of causal process observations (CPOs): “an insight or piece of data that provides information about context or mechanism and contributes . . . leverage in causal inference.” The CPOs can be contrasted with data-set observations, which are the specific pieces of information used for quantitative analysis ( Brady, Collier, and Seawright 2010 , 184).
When it comes to the nuts and bolts of method, event-specific research has more in common with the work of the historian, detective, or journalist. As Maxwell puts it, this “resembles the approach of a detective trying to solve a crime, an inspector trying to determine the cause of an airplane crash, or a physician attempting to diagnose a patient's illness.” Because the causal process is not directly observable, such researchers instead search for “clues” (Scriven 1976, 47; Maxwell 2012). Creating room within the discipline for this kind of research will therefore require a renewed dedication to training students in how to collect and interpret qualitative evidence, skills that have unfortunately atrophied among graduate students (Lebow 2007, 2).
Some scholars reject any role for contingency in political science, and indeed see the historical, inductive approach as “antitheoretical” (Kiser and Hechter 1991). Yet an understanding of the nature of contingency is absolutely critical to researching specific events, especially since such research focuses on those moments when “fortune” is at its most powerful. O'Donnell and Schmitter are correct to identify “the high degree of indeterminacy embedded in situations where unexpected events (fortuna), insufficient information, hurried and audacious choices, confusion about motives and interests, plasticity, and even indefinition of political identities, as well as the talents of specific individuals (virtu) are frequently decisive in determining the outcomes” (O'Donnell and Schmitter 1986, 5). Although Mahoney denies that outcomes are entirely random, he goes so far as to argue that some moments “cannot be explained on the basis of prior historical condition” (Mahoney 2000, 508). This presents a dilemma: contingent events without social-scientifically interesting origins must nevertheless be included when explaining specific outcomes, because otherwise the explanation of an individual case would not make sense (Beach and Pedersen 2013, 51). But how do we manage the tension between driving forces and contingency?
A key insight when managing these challenges is the relative nature of contingency ( Pettit 2007 ). As Slater and Simmons point out, “even the most severe crises rarely produce blank slates” ( Slater and Simmons 2010 , 890). A purely contingent event, in the ideal sense, is one with origins whose explanation has no social-scientific value and is essentially unpredictable. However, in the real world, events only very rarely fit these qualifications. Contingency can be the precise, but not perfectly predictable, manifestation of antecedent conditions ( Slater and Simmons 2010 ). In any case, if an actor intended to achieve something, even if they failed, we can still “preserve the proffered motivational account and elaborate on it,” as “explaining means elaborating, justifying, or possibly excusing the action rather than simply ‘refuting’ the hypothesis” ( Kratochwil 1990 , 25). Moreover, if a trigger is almost completely unpredictable, the effect that such contingent events have when they occur is still shaped by antecedent conditions ( Wood 2007 ).
The relative nature of contingency means that political scientists should problematize the extent to which a certain event was likely. An outcome may be inevitable, merely possible, or unlikely. The extent to which an outcome is determined by structural causes is an empirical question: “documents and other historical evidence can tell whether key actors in a critical juncture acted with a significant degree of freedom or not” (Capoccia and Kelemen 2007). Some outcomes are relatively open, while others are “not just determined but overdetermined” (Rueschemeyer 2003, 315). Bendor and Shapiro even argue that certain types of political phenomena, like military conflicts, are shaped by relatively higher levels of chance and contingency (Bendor and Shapiro 2019).
Terminology like “likelihood” for an outcome might suggest a statistical approach, but the method here is different. Instead of identifying an average causal effect across a population, to address likelihood in an individual case, the researcher can ask: how powerful were countervailing forces that ultimately did not sway the outcome? Could tiny, easily imaginable counterfactuals have fundamentally changed the event ( Lebow 2015 )?
For example, in his essay on World War I, Lebow recognizes that “underlying causes, no matter how numerous or deep-seated, do not make an event inevitable. Their consequences may depend on fortuitous coincidences in timing and on the presence of catalysts that are independent of any of the underlying causes” (Lebow 2000, 591–92). He argues that several independent antecedent conditions working in conjunction, along with a single triggering mechanism, caused the war. The assassination of Archduke Ferdinand was an especially powerful triggering mechanism given the antecedent conditions of the time, but war was not inevitable: had the assassination occurred outside the two-year window in which states would choose war over peace when faced with the decision, the war might have been avoided. Margaret MacMillan contributes to this debate by arguing that some crisis like the assassination was bound to happen and would likely have had a similar effect at some point between 1900 and 1914 (MacMillan 2013).
Can we falsify individual analyses of events? Many political scientists would argue that identifying causes in single cases is fundamentally impossible. From a statistical worldview, this viewpoint makes eminent sense. Outside of that methodological prior, however, such an idea is rather radical, if not almost postmodern. After all, juries reach judgments in single cases without relying on statistics, induction, or deduction; they are persuaded by which lawyer better uses evidence in a specific case to make a particular claim (Toulmin 1972; McKeown 2004, 149; Perelman and Olbrechts-Tyteca 2013). As Kratochwil clearly explains, we do not determine whether someone is “guilty” of a crime with covering laws. Instead, the explanation of an individual case draws upon facts to construct a narrative framework that provides a reason for someone's actions. In other words, “explaining an action means providing a critically vetted, plausible account of the action and its context, which has the structure of a narrative rather than a demonstration” (Kratochwil 2018, 414–17).
The answer to whether individual cases are falsifiable depends on whether the questions are formed in a way that can be answered meaningfully with the available evidence. As discussed above, “Does income inequality lead to democratization?” would not be an appropriate question for this type of method, but “Does speech evidence and behavior demonstrate that political leaders in country A were primarily concerned that democratization would lead to redistribution of wealth?” certainly would be.
Compared with other forms of qualitative analysis, event-specific research is probably the least vulnerable to charges of “cherry-picking” or of relying on only a few biased historical accounts (Lustick 1996). Event-specific research presumes a much deeper and thicker relationship with the material, which is precisely what makes its questions persuasive. Every piece of evidence must be situated, which guards against overemphasizing CPOs that appear to confirm a theory but are taken out of context. Event-specific research also accepts the presence of multiple powerful forces, which reduces the pressure to over-argue the case for one particular cause.
This kind of research is at its best, however, when the substantive questions are posed in a way that allows the scholar to show how much one theory explains relative to another. To ensure the highest level of rigor, the scholar should investigate at least two potential causes at once, ideally the two most likely to provide a strong explanation (as determined by the theoretical literature and previous historiographical accounts of the event in question). Questions can be asked in such a way that “yes” points to one theory while “no” supports another. Alternatively, two sets of questions can be asked: one set addressing how much one generative structure mattered and a second set centered on another theory.
This research shares a core assumption within the field about the importance of “rigor.” Rigor here stems primarily from (1) asking questions that can be answered meaningfully with empirical evidence, (2) integrating those answers into broader theoretical debates, and (3) evaluating conclusions as part of a broader academic community. Event-specific research is not a “soft” approach. If science is “a set of shared practices within a professionally trained community,” then event-specific research is scientific (Lebow 2007, 7).
Since they have different views of generalizability, Weber, critical realists, and interpretivists naturally differ on the question of falsifiability. For Weber, identifying ideal types through a handful of specific events was a meaningful enough exercise. Critical realists, having discovered the potential existence of a finding, still want to determine whether it is “real.” For example, if the mechanism is psychological, they might take the finding to a “laboratory” and conduct a psychological assessment. Or they would investigate a number of cases to see whether the cause or mechanism is present in those cases as well. If it is, then the finding is “transfactual,” meaning that the structure in question is commonly present (although with no a priori assumptions about how it manifests in specific cases) (Collier 1994).
This article is not the place to adjudicate these different views, but bracketing the question for now does not obscure the basic point they all share: a great deal of generalizable and useful information can be learned from individual cases, and social science is not confined to the Hume-ean worldview. Ultimately, specific events matter not because they are outliers, or a crucial case, or a least likely case, or a most likely case, but because of what they reveal about the causal forces and mechanisms at work within them.
Elements of the event-specific research described above have already been apparent in the study of nuclear crises. Most famously, George and Smoke, instead of seeking to identify “a frequency distribution of different outcomes,” attempted to discriminate “among varieties and patterns of deterrence situations” (George and Smoke 1974, 3; 1989, 171). They contributed to the literature by identifying typologies that would make situations more legible to policymakers. Political scientists have thought carefully about what quantitative methods can teach us about nuclear weapons (Sechser and Fuhrmann 2017, 63–71). But what exactly are the particular strengths of an event-focused approach, compared with other methodologies, for understanding specific nuclear crises?
Based on a statistical analysis, Kroenig argues that “nuclear crises are competitions in risk taking, but that nuclear superiority—defined as an advantage in the size of a state's nuclear arsenal relative to that of its opponent—increases the level of risk that a state is willing to run in a crisis. I show that states that enjoy a nuclear advantage over their opponents possess higher levels of effective resolve” ( Kroenig 2013 , 143). In response, Gavin, using the specific case study of the Berlin 1958–1962 nuclear crisis, charged that Kroenig's argument could not “fully explain the outcomes and causal mechanisms in the most important and most representative case” ( Gavin 2014 , 16).
However, Gavin is not evaluating Kroenig by what Kroenig is actually trying to do—if Kroenig's model is accurate, he may in fact have helpfully identified an average causal effect or at least an interesting correlation. With regard to individual cases, the problem is not so much that Kroenig's finding cannot explain a key case—the issue is that his methodology is not designed to explain individual cases at all.
Drawing on Seawright's analysis of Fearon and Laitin discussed above, we can ask questions that show the meaninglessness of applying Kroenig's statistical finding directly to a single case. Was the average causal effect of nuclear arsenal size not powerful enough to sway the outcome in the Berlin nuclear crisis? In other words, is the statistical coefficient too “high” or too “low” for this single case? These are unanswerable questions. Because the core of Kroenig's finding is Hume-ean, his theory of how the variables work cannot be tested in a single case: the statistical relationship may manifest in fundamentally different ways in different cases. Judging whether the statistical finding and the case study “agree” is therefore impossible.
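The following sketch (entirely simulated, not based on Kroenig's data) illustrates the logical point: a clearly positive average effect across cases is compatible with case-level effects that differ in size and even in sign, so the population coefficient neither determines nor can be checked against any single crisis. All variable names and parameter values are illustrative assumptions.

```python
# Hypothetical simulation: a positive average effect coexists with heterogeneous,
# sometimes negative, case-level effects, so the average constrains no single case.
import numpy as np

rng = np.random.default_rng(2)
n_cases = 200
case_effect = rng.normal(loc=0.5, scale=2.0, size=n_cases)   # effect varies by case
advantage = rng.binomial(1, 0.5, size=n_cases)                # stylized "superiority"
outcome = case_effect * advantage + rng.normal(size=n_cases)

avg_effect = outcome[advantage == 1].mean() - outcome[advantage == 0].mean()
print(f"estimated average effect: {avg_effect:+.2f}")          # positive on average
print("share of cases with a negative case-level effect:", (case_effect < 0).mean())
```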
Not all political scientists are convinced by Kroenig's empirical findings. Sechser and Fuhrmann, for example, believe that nuclear superiority is meaningless in a crisis ( Sechser and Fuhrmann 2017 ). However, if a policymaker found herself in a nuclear crisis and wanted to look at past events for guidance, assuming that either of those empirical findings could be unproblematically applied directly to specific cases would put the world in a dangerous place—regardless of which scholarship is closer to the truth. First, the policymaker would not know whether the present crisis was one of the cases that cut against the grain of the identified average causal effect. Second, that empirical finding would not equip the policymaker to “see” the potential causes or mechanisms or have a sense for how those elements actually interacted with one another in the past.
Holloway's account of the Cuban Missile Crisis illustrates the advantages of an approach that (implicitly) uses retroduction (Holloway 2010). Three aspects, in particular, stand out. First, Holloway demonstrates that the world came extraordinarily close to nuclear war in 1962. Although war did not happen, he shows that the forces exerting a gravitational pull toward war, like the obvious benefit of going first, were extremely powerful. The fact that contingent events, such as accidents, did not activate them does not diminish their crucial importance. The interesting finding here is not that a precise group of variables meant peace, but that the structural forces present could just as easily have started a world war. These forces would be meaningless or invisible to Hume-eans who seek regularities across cases.
Second, Holloway's deep dive allows him to challenge Schelling's argument that rational behavior during a crisis would be to demonstrate “madness” and cut off the ability to retreat. During the Berlin and Cuban crises, neither side behaved in that way. Holloway instead provides a more subtle argument: “common knowledge” did not encourage such threats, because the crisis was not only a war of nerves; that shared knowledge also acted as a limiting factor, since cutting off roads meant risking a preventive attack. His ability to theorize this concept, while undermining a previously prominent theory in the discipline, draws upon a close reading of the evidence.
Third, this “common understanding” is best understood as a driving force, not a variable. Because the outcome was not determined, the “common understanding” is not best understood as a “sufficient” condition, even in a probabilistic sense. Because Holloway is doing no more (or less) than identifying one previously underestimated dynamic “in the wild,” which may or may not have a similar effect in other cases, he is also not identifying a “necessary” condition. Moreover, since this “common understanding” is not visible anywhere as a specific link in a chain of events, Holloway is not engaged in process-tracing. Instead, powerful speech and behavioral evidence indicates that this dynamic had an important pull throughout the crisis.
Political scientists have reached important and enduring conclusions using standard methods such as game theory, standard quantitative methods, and Mill's methods. In this article, we presented an argument for more explicit theorizing of what individual events can teach us. Of course, like other approaches, this method has its own built-in limitations. Most immediately, its applicability only to specific moments, the inherent limitations of the relevant qualitative material, and its nonuniversal ambitions are significant drawbacks. This approach cannot provide a number encapsulating the average causal effect of inequality on democracy (although it might show the implications of socioeconomic cleavages in individual cases). Yet, given the importance and complicated nature of the political world, political science can only be strengthened by adding to our tool kit. Many political scientists have concluded that individual cases tell us nothing except to the extent that they provide cross-case variation. By being less ambitious about universal effects and understanding generalizability in a different way, however, scholars using qualitative source materials to investigate even single cases can shed new light on political processes.
Thank you to Stanford's CISAC and the CFR's Stanton Fellowship for providing time to write this article.
Abbott Andrew 1990 . “ Conceptions of Time and Events in Social Science Methods .” Historical Methods 23 ( 4 ): 140 .
Abbott Andrew . 1991 . “ History and Sociology: The Lost Synthesis .” Social Science History 15 ( 2 ): 201 .
Acemoglu Daron , Robinson James A. . 2006 . Economic Origins of Dictatorship and Democracy . Cambridge : Cambridge University Press .
Achen Christopher H. 2005 . “ Let's Put Garbage-Can Regressions and Garbage-Can Probits where They Belong .” Conflict Management and Peace Science 22 ( 4 ): 327 – 39 .
Agar Michael 2010 . “ On the Ethnographic Part of the Mix: A Multi-Genre Tale of the Field .” Organizational Research Methods 13 ( 2 ): 286 – 303 .
Ahir Hites , Loungani Prakash . 2014 . “ ‘There Will Be Growth in the Spring’: How Well Do Economists Predict Turning Points? ” Accessed October 14, 2021. VoxEU.org .
Ahmed Amel , Sil Rudra . 2012 . “ When Multi-Method Research Subverts Methodological Pluralism—or, Why We Still Need Single-Method Research .” Perspectives on Politics 10 ( 4 ): 935 – 53 .
Ahram Ariel I 2013 . “ Concepts and Measurement in Multimethod Research .” Political Research Quarterly 66 ( 2 ): 280 – 91 .
Angrist Joshua D. , Pischke Jörn-Steffen . 2010 . “ The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics .” Journal of Economic Perspectives 24 ( 2 ): 3 – 30 .
Beach Derek , Pedersen Rasmus Brun . 2013 . Process-Tracing Methods: Foundations and Guidelines . Ann Arbor, MI : University of Michigan Press .
Beck Nathaniel . 2010 . “ Causal Process ‘Observation’: Oxymoron or (Fine) Old Wine .” Political Analysis 18 ( 4 ): 499 – 505 .
Bendor Jonathan , Shapiro Jacob N. . 2019 . “ Historical Contingencies in the Evolution of States and Their Militaries .” World Politics 71 ( 1 ): 126 – 61 .
Bennett Andrew , Checkel Jeffrey . 2015 . “ Process Tracing: From Philosophical Roots to Best Practices .” In Process Tracing: From Metaphor to Analytic Tool , edited by Bennett Andrew , Checkel Jeffrey , 3 – 37 . Cambridge : Cambridge University Press .
Bowlsby Drew , Chenoweth Erica , Hendrix Cullen , Moyer Jonathan . 2019 . “ The Future Is a Moving Target: Predicting Political Instability .” British Journal of Political Science 50 ( 4 ): 1 – 13 .
Brady Henry E. 2008 . “ Causation and Explanation in Social Science .” In The Oxford Handbook of Political Methodology , edited by Box-Steffensmeier Janet M. , Brady Henry E. , Collier David . Oxford : Oxford University Press .
Brady Henry E. , Collier David , Seawright Jason . 2010 . “ Sources of Leverage in Causal Inference: Toward an Alternative View of Methodology .” In Rethinking Social Inquiry: Diverse Tools, Shared Standards , edited by Brady Henry E. , Collier David , 229 – 66 . Lanham, MD : Rowman & Littlefield .
Capoccia Giovanni , Kelemen R. Daniel . 2007 . “ The Study of Critical Junctures: Theory, Narrative, and Counterfactuals in Historical Institutionalism .” World Politics 59 ( 3 ): 341 – 69 .
Capoccia Giovanni , Ziblatt Daniel . 2010 . “ The Historical Turn in Democratization Studies: A New Research Agenda for Europe and beyond .” Comparative Political Studies 43 ( 8–9 ): 931 – 68 .
Cartwright Nancy . 1999 . The Dappled World: A Study of the Boundaries of Science . Cambridge : Cambridge University Press .
Chatterjee Abhishek . 2009 . “ Ontology, Epistemology, and Multi-Methods .” Qualitative and Multi-Method Research 7 ( 2 ): 11 – 15 .
Chernoff Fred . 2009 . “ The Ontological Fallacy: A Rejoinder on the Status of Scientific Realism in International Relations .” Review of International Studies 35 ( 2 ): 371 – 95 .
Clarke Kevin . 2005 . “ The Phantom Menace: Omitted Variable Bias in Econometric Research .” Conflict Management and Peace Science 22 ( 4 ): 341 – 52 .
Collier Andrew . 1994 . Critical Realism: An Introduction to Roy Bhaskar's Philosophy . London : Verso .
Danermark Berth , Ekstrom Mats , Jakobsen Liselotte , Karlsson Jan Ch. . 2002 . Explaining Society: Critical Realism in the Social Sciences . London : Routledge .
Darnton Christopher . 2018 . “ Archives and Inference: Documentary Evidence in Case Study Research and the Debate over U.S. Entry into World War II .” International Security 42 ( 3 ): 84 – 126 .
Deaton Angus S. 2009 . Instruments of Development: Randomization in the Tropics, and the Search for the Elusive Keys to Economic Development . National Bureau of Economic Research. Working Paper. Accessed October 29, 2014. http://www.nber.org/papers/w14690 .
Dunning Thad 2012 . Natural Experiments in the Social Sciences: A Design-Based Approach . New York : Cambridge University Press .
Elster Jon . 2007 . Explaining Social Behavior: More Nuts and Bolts for the Social Sciences . Cambridge : Cambridge University Press .
Ertman Thomas . 1997 . Birth of the Leviathan: Building States and Regimes in Medieval and Early Modern Europe . Cambridge : Cambridge University Press .
Fearon James D. , Laitin David D. . 2003 . “ Ethnicity, Insurgency, and Civil War .” The American Political Science Review 97 ( 1 ): 75 – 90 .
Freedman David . 2010 . Statistical Models and Causal Inference: A Dialogue with the Social Sciences , edited by Collier David , Sekhon Jasjeet Singh , Stark Philip . Cambridge : Cambridge University Press .
Friedman Walter 2014 . Fortune Tellers: The Story of America's First Economic Forecasters . Princeton, NJ : Princeton University Press .
Friedrichs Jorg , Kratochwil Friedrich . 2009 . “ On Acting and Knowing: How Pragmatism Can Advance International Relations Research and Methodology .” International Organization 63 ( 4 ): 701 – 31 .
Fu Diana , Simmons Erica S. . 2021 . “ Ethnographic Approaches to Contentious Politics: The What, How, and Why .” Comparative Political Studies 54 ( 10 ): 1695 – 1721 .
Gallagher Mary , Hanson Jonathan K. . 2013 . “ Authoritarian Survival, Resilience, and the Selectorate Theory .” In Why Communism Did Not Collapse: Understanding Authoritarian Regime Resilience in Asia and Europe , edited by Dimitrov Martin K , 185 – 204 . New York : Cambridge University Press .
Gavin Francis J. 2014 . “ What We Talk about when We Talk about Nuclear Weapons: A Review Essay .” H-Diplo/ISSF Forum, No. 2 : 11 – 36 .
George Alexander L. , Bennett Andrew . 2005 . Case Studies and Theory Development in the Social Sciences . Cambridge : MIT Press .
George Alexander L. , Smoke Richard . 1974 . Deterrence in American Foreign Policy: Theory and Practice . New York : Columbia University Press .
George Alexander L. , Smoke Richard . 1989 . “ Deterrence and Foreign Policy .” World Politics 41 ( 2 ): 170 – 82 .
Glaser Charles 2010 . Rational Theory of International Politics: The Logic of Competition and Cooperation . Princeton, NJ : Princeton University Press .
Goertz Gary , Mahoney James . 2012 . A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences . Princeton, NJ : Princeton University Press .
Gourevitch Peter Alexis 1986 . Politics in Hard Times: Comparative Responses to International Economic Crises . Ithaca, NY : Cornell University Press .
Griffin Larry J. 1993 . “ Narrative, Event-Structure Analysis, and Causal Interpretation in Historical Sociology .” American Journal of Sociology 98 ( 5 ): 1094 – 1133 .
Haggard Stephan , Kaufman Robert R. . 2012 . “ Inequality and Regime Change: Democratic Transitions and the Stability of Democratic Rule .” American Political Science Review 106 ( 3 ): 495 – 516 .
Hall Peter 2003 . “ Aligning Ontology and Methodology in Comparative Research .” In Comparative Historical Analysis in the Social Sciences , edited by Mahoney James , Rueschemeyer Dietrich . Cambridge : Cambridge University Press .
Hall Peter . 2010 . “ Politics as a Process Structured in Space and Time .” Presented at the Annual Meeting of the American Political Science Association .
Hasnain Aseem , Kurzman Charles . 2014 . “ When Forecasts Fail: Unpredictability in Israeli-Palestinian Interaction .” Sociological Science 1 ( 16 ): 239 – 59 .
Holloway David . 2010 . “ Nuclear Weapons and the Escalation of the Cold War, 1945-1962 .” In The Cambridge History of the Cold War , edited by Leffler Melvyn P. , Westad Odd Arne . 376 – 97 Cambridge : Cambridge University Press .
Inboden William 2014 . “ Statecraft, Decision-Making, and the Varieties of Historical Experience: A Taxonomy .” Journal of Strategic Studies 37 ( 2 ): 291 – 318 .
Jackson Patrick Thaddeus . 2016 . The Conduct of Inquiry in International Relations: Philosophy of Science and Its Implications for the Study of World Politics . 2nd ed. London : Routledge .
Jackson Patrick Thaddeus . 2017 . “ Causal Claims and Causal Explanation in International Studies .” Journal of International Relations and Development 20 : 689 – 716 .
Jervis Robert . 1997 . System Effects: Complexity in Political and Social Life . Princeton, NJ : Princeton University Press .
Katznelson Ira . 1997 . “ Structure and Configuration in Comparative Politics .” In Comparative Politics: Rationality, Culture, and Structure , edited by Lichbach Mark Irving , Zuckerman Alan , 81 – 112 . Cambridge : Cambridge University Press .
Khong Yuen Foong . 1992 . Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965 . Princeton, NJ : Princeton University Press .
King Gary , Keohane Robert O. , Verba Sidney . 1994 . Designing Social Inquiry: Scientific Inference in Qualitative Research . Princeton, NJ : Princeton University Press .
Kirshner Jonathan 2015 . “ The Economic Sins of Modern IR Theory and the Classical Realist Alternative .” World Politics 67 ( 1 ): 155 – 83 .
Kiser Edgar , Hechter Michael . 1991 . “ The Role of General Theory in Comparative-Historical Sociology .” American Journal of Sociology 97 ( 1 ): 1 – 30 .
Kollner Patrick , Sil Rudra , Ahram Ariel I. , eds. 2018 . “ Comparative Area Studies: What It Is, What It Can Do .” In Comparative Area Studies: Methodological Rationales & Cross-Regional Applications . 3 – 26 . New York : Oxford University Press .
Kratochwil Friedrich 1990 . Rules, Norms, and Decisions: On the Conditions of Practical and Legal Reasoning in International Relations and Domestic Affairs . Cambridge : Cambridge University Press .
Kratochwil Friedrich . 2018 . Praxis: On Acting and Knowing . Cambridge : Cambridge University Press .
Kreuzer Marcus 2010 . “ Historical Knowledge and Quantitative Analysis: The Case of the Origins of Proportional Representation .” American Political Science Review 104 ( 2 ): 369 – 92 .
Kroenig Matthew 2013 . “ Nuclear Superiority and the Balance of Resolve: Explaining Nuclear Crisis Outcomes .” International Organization 67 ( 1 ): 141 – 71 .
Laitin David D. 2003 . “ The Perestroikan Challenge to Social Science .” Politics & Society 31 ( 1 ): 163 – 84 .
Lam Patrick Kenneth . 2013 . “ Estimating Individual Causal Effects .” Doctoral Dissertation, Harvard University .
Lebow Richard Ned . 2000 . “ Contingency, Catalysts, and International System Change .” Political Science Quarterly 115 ( 4 ): 591 – 616 .
Lebow Richard Ned . 2007 . “ What Can We Know? How Do We Know? ” In Theory and Evidence in Comparative Politics and International Relations , edited by Lebow Richard Ned , Lichbach Mark Irving , 1 – 22 . New York : Palgrave Macmillan .
Lebow Richard Ned . 2015 . “ Counterfactuals and Security Studies .” Security Studies 24 ( 3 ): 403 – 12 .
Levitsky Steven , Way Lucan . 2013 . “ The Durability of Revolutionary Regimes .” Journal of Democracy 24 ( 3 ): 5 – 17 .
Lustick Ian S. 1996 . “ History, Historiography, and Political Science: Multiple Historical Records and the Problem of Selection Bias .” The American Political Science Review 90 ( 3 ): 605 – 18 .
MacMillan Margaret 2013 . The War That Ended Peace: The Road to 1914 . New York : Random House .
Mahoney James 2000 . “ Path Dependence in Historical Sociology .” Theory and Society 29 ( 4 ): 507 – 48 .
Mahoney James . 2003 . “ Strategies of Causal Assessment in Comparative Historical Analysis .” In Comparative Historical Analysis in the Social Sciences , edited by Rueschemeyer Dietrich , James Mahoney , 337 – 72 . Cambridge : Cambridge University Press .
Mahoney James . 2012 . “ The Logic of Process Tracing Tests in the Social Sciences .” Sociological Methods & Research 41 ( 4 ): 570 – 97 .
Maxwell Joseph A. 2012 . A Realist Approach to Qualitative Research . Los Angeles, CA : SAGE .
McKeown Timothy J. 1999 . “ Case Studies and the Statistical Worldview: Review of King, Keohane, and Verba's Designing Social Inquiry: Scientific Inference in Qualitative Research .” International Organization 53 ( 1 ): 161 – 90 .
McKeown Timothy J. 2004 . “ Case Studies and the Limits of the Quantitative Worldview .” In Rethinking Social Inquiry: Diverse Tools, Shared Standards , edited by Brady Henry E. , Collier David , 139 – 67 . Lanham : Rowman & Littlefield .
Morrison Bruce 2011 . “ Channeling the ‘Restless Spirit of Innovation’: Elite Concessions and Institutional Change in the British Reform Act of 1832 .” World Politics 63 ( 4 ): 678 – 710 .
Narang Vipin . 2014 . “ The Use and Abuse of Large-n Methods in Nuclear Studies .” H-Diplo/ISSF Forum, No. 2 : 91 – 97 .
Ng Serena , Wright Jonathan H. . 2013 . “ Facts and Challenges from the Great Recession for Forecasting and Macroeconomic Modeling .” Journal of Economic Literature 51 ( 4 ): 1120 – 54 .
O'Donnell Guillermo A. , Schmitter Philippe C . 1986 . Transitions from Authoritarian Rule: Tentative Conclusions about Uncertain Democracies . Baltimore, MD : Johns Hopkins University Press .
Perelman Chaim , Olbrechts-Tyteca Lucie . 2013 . The New Rhetoric: A Treatise on Argumentation . Notre Dame : Notre Dame University Press .
Pettit Philip . 2007 . “ Resilience as the Explanandum of Social Theory .” In Political Contingency: Studying the Unexpected, the Accidental, and the Unforeseen , edited by Shapiro Ian , Bedi Sonu , 79 – 96 . New York : New York University Press .
Pietarinen Ahti-Veikko , Bellucci Francesco . 2014 . “ New Light on Peirce's Conceptions of Retroduction, Deduction, and Scientific Reasoning .” International Studies in the Philosophy of Science 28 ( 4 ): 353 – 73 .
Pierson Paul 2004 . Politics in Time: History, Institutions, and Social Analysis . Princeton, NJ : Princeton University Press .
Pierson Paul . 2015 . “ Power and Path Dependence .” In Advances in Comparative-Historical Analysis, Strategies for Social Inquiry , edited by Thelen Kathleen Ann , Mahoney James , 123 – 46 . New York : Cambridge University Press .
Ragin Charles C. 1987 . The Comparative Method: Moving beyond Qualitative and Quantitative Strategies . Berkeley, CA : University of California Press .
Ragin Charles C. . 2000 . Fuzzy-Set Social Science . Chicago, IL : University of Chicago Press .
Ragin Charles , Zaret David . 1983 . “ Theory and Method in Comparative Research: Two Strategies .” Social Forces 61 ( 3 ): 731 – 54 .
Rodden Jonathan . 2009 . “ Back to the Future: Endogenous Institutions and Comparative Politics .” In Comparative Politics: Rationality, Culture, and Structure , edited by Lichbach Mark Irving , Zuckerman Alan S. , 333 – 57 . Cambridge : Cambridge University Press .
Rodrik Dani 2012 . “ Why We Learn Nothing from Regressing Economic Growth on Policies .” Seoul Journal of Economics 25 ( 2 ): 137 – 51 .
Rueschemeyer Dietrich . 2003 . “ Can One or a Few Cases Yield Theoretical Gains ?” In Comparative Historical Analysis in the Social Sciences , edited by Mahoney James , Rueschemeyer Dietrich , 305 – 36 . New York : Cambridge University Press .
Sambanis Nicholas 2004 . “ Using Case Studies to Expand Economic Models of Civil War .” Perspectives on Politics 2 ( 2 ): 259 – 79 .
Schelling Thomas 1966 . Arms and Influence . New Haven, CT : Yale University Press .
Schwartz-Shea Peregrine , Yanow Dvora . 2012 . Interpretive Research Design: Concepts and Processes . New York : Routledge .
Scriven Michael 1976 . “ Evaluation in Science Teaching .” Journal of Research in Science Teaching 13 ( 4 ): 363 – 68 .
Seawright Jason . 2010 . “ Regression-Based Inference: A Case Study in Failed Causal Assessment .” In Rethinking Social Inquiry: Diverse Tools, Shared Standards , edited by Brady Henry , Collier David , 247 – 71 . Lanham, MD : Rowman & Littlefield .
Seawright Jason . 2016 . Multi-Method Social Science: Combining Qualitative and Quantitative Tools . Cambridge : Cambridge University Press .
Sechser Todd S. , Fuhrmann Matthew . 2017 . Nuclear Weapons and Coercive Diplomacy . Cambridge : Cambridge University Press .
Sekhon Jasjeet S. 2009 . “ Opiates for the Matches: Matching Methods for Causal Inference .” Annual Review of Political Science 12 ( 1 ): 487 – 508 .
Shalev Michael 2007 . “ Limits and Alternatives to Multiple Regression in Comparative Research .” Comparative Social Research 24 : 261 – 308 .
Slater Dan , Simmons Erica . 2010 . “ Informative Regress: Critical Antecedents in Comparative Politics .” Comparative Political Studies 43 ( 7 ): 886 – 917 .
Slater Dan , Smith Benjamin , Nair Gautam . 2014 . “ Economic Origins of Democratic Breakdown? The Redistributive Model and the Postcolonial State .” Perspectives on Politics 12 ( 2 ): 353 – 74 .
Suganami Hidemi 1996 . On the Causes of War . Oxford : Clarendon Press .
Suganami Hidemi . 2008 . “ Narrative Explanation and International Relations: Back to Basics .” Millennium: Journal of International Studies 37 ( 2 ): 327 – 56 .
Tarrow Sidney 2008 . “ Charles Tilly .” PS: Political Science & Politics 41 ( 3 ): 639 – 41 .
Thelen Kathleen 2002 . “ The Explanatory Power of Historical Institutionalism .” In Akteure, Mechanismen, Modelle: Zur Theoriefähigkeit Makro-Sozialer Analysen , edited by Mayntz Renate , 91 – 107 . Frankfurt : Campus .
Tikuisis Peter , Carment David , Samy Yiagadeesen . 2013 . “ Prediction of Intrastate Conflict Using State Structural Factors and Events Data .” Journal of Conflict Resolution 57 ( 3 ): 410 – 44 .
Titiunik Rocío 2015 . “ Can Big Data Solve the Fundamental Problem of Causal Inference? ” PS: Political Science & Politics 48 ( 1 ): 75 – 79 .
Toulmin Stephen E. 1972 . Human Understanding . Princeton, NJ : Princeton University Press .
Trachtenberg Marc 2006 . The Craft of International History: A Guide to Method . Princeton, NJ : Princeton University Press .
Trachtenberg Marc . 2012 . “ Audience Costs: An Historical Analysis .” Security Studies 21 ( 1 ): 3 – 42 .
Waldner David 2015 . “ Process Tracing and Qualitative Causal Inference .” Security Studies 24 ( 2 ): 239 – 50 .
Ward Michael D. , Greenhill Brian D. , Bakke Kristin M. . 2010 . “ The Perils of Policy by P-Value: Predicting Civil Conflicts .” Journal of Peace Research 47 ( 4 ): 363 – 75 .
Wood Elisabeth Jean . 2007 . “ Modeling Contingency .” In Political Contingency: Studying the Unexpected, the Accidental, and the Unforeseen , edited by Shapiro Ian , Bedi Sonu . New York : New York University Press .
Yamamoto Teppei 2011 . “ Understanding the Past: Statistical Analysis of Causal Attribution .” American Journal of Political Science 56 ( 1 ): 237 – 56 .
York Richard , Clark Brett . 2007a . “ The Problem with Prediction: Contingency, Emergence, and the Reification of Projections .” Sociological Quarterly 48 ( 4 ): 713 – 43 .
York Richard , Clark Brett . 2007b . “ The Problem with Prediction: Contingency, Emergence, and the Reification of Projections .” Sociological Quarterly 48 ( 4 ): 713 – 43 .
Zaks Sherry 2017 . “ Relationships Among Rivals (RAR): A Framework for Analyzing Contending Hypotheses in Process Tracing .” Political Analysis 25 ( 3 ): 344 – 62 .
Case Study Methods: Case Selection and Case Analysis
By Chiara Ruffa, in The SAGE Handbook of Research Methods in Political Science and International Relations (Chapter DOI: https://doi.org/10.4135/9781526486387).
Case study methods are a group of approaches in political science and international relations that aim at testing and developing theory. 1 The key characteristic of case study methods is their focus on one or few cases but with the ambition to understand and capture broader and more general underlying dynamics. This implies two core inter-related functions: the first is describing an observed phenomenon in all its depth and complexity; the second is attempting to generalize to a broader universe of cases. A research project using case study methods can rarely attempt to do both at the same time and to the same extent: the attempt to go deep usually has to compromise with trying to describe and study several cases in breadth. Either way, case study methods imply a more or less explicit positivist assumption in terms of the objective of research: in different ways case studies aim at generalizing beyond the case or the set of cases at hand and they acknowledge an attempt to identify and describe patterns of behavior. Notwithstanding the positivist assumption, the spectrum of positions can vary greatly: it ranges from explicit positivist approaches to interpretivist ones, in which the ability to observe causality is somewhat challenged (della Porta, 2008: 32; Vennesson, 2008). Case study methods’ opportunities and challenges partly derive from being tools that can be used by scholars with different understandings of causality and different assumptions about what a researcher wants and can ostensibly know about the world.
Case study methods are a very well established set of methods in political science and international relations. In a 2011 TRIP (Teaching, Research and International Policy) survey, most scholars in international relations declared that their main chosen methods were qualitative. 2 As Bennett and Elman (2010: 499) have noted on several occasions, ‘qualitative research methods are currently enjoying an almost unprecedented popularity and vitality in both the international relations and comparative politics subfields'. Notwithstanding their persistent popularity and traction, the relevance of case study methods is being eroded by a growing focus on causal identification and inference in the positivist social sciences. 3 More narrowly, case study methods seem to fall short even relative to observational quantitative approaches, given the general tendency to strive for generalization. Notwithstanding these concerns, case study methods still hold strong.
Scholars of case study methods have developed a set of sophisticated techniques to make case study research rigorous and to maximize the generalizability potential of case studies (Bennett and Elman, 2010). Partly because of this attempt to engage with other approaches, most of the debate on case studies has focused on case-selection techniques for generalizability purposes and has not engaged as much with case study analysis – that is, with the tools to conduct a case study and interpret the material. Yet, as Seawright and Gerring (2008: 294) argue, ‘case selection and case analysis are intertwined to a much greater extent in case study research than in large-N cross-case analysis'. The need to engage with other approaches has led to a somewhat essentialist understanding of case study methods and has overshadowed the importance of using case-selection techniques in eclectic and creative ways and of engaging more systematically with the case analysis of complex phenomena. Case study methods should be seen as methods in their own right, with their own potentials and pitfalls. I contend that more attention should be paid to discussing case selection together with case analysis. Case study methods are both art and craft and, as such, should combine command of the method with creative thinking in order to leverage inferential power through case selection and to gauge complexity through case analysis. In this chapter, I present the most common strategies of case selection and case analysis and outline ways to combine them more creatively so as to maximize inferential power while at the same time capturing complexity.
This chapter comes in six parts. I first situate case study methods in the broader literature on research methods. Second, I discuss case study methods in relation to what a case is and what it is good for; third, I present different logics of case selection; fourth, I discuss single case study approaches and, fifth, comparative designs. Sixth, I reflect on common practices for conducting case studies and, finally, I draw some conclusions.
Between a Rock and a Hard Place: Case Study Methods between Post-Positivism and Large-N Approaches
Despite being so well established, case study methods find themselves between 'a rock and a hard place'. On the one hand, post-positivist and interpretivist approaches may work well with case studies but rest on different underlying ontological, epistemological and methodological assumptions. By post-positivism and interpretivism, I refer to a wealth of rather diverse ontologies and epistemologies that challenge the possibility of describing and explaining phenomena in terms of causal relations and that question the possibility of separating the object of research from the researcher.[4] The very use of the term 'case study', as in the 'study of a case', may reflect an underlying objective of generalizability, which is why the term itself is not widely used among post-positivists and interpretivists. Some post-positivist and interpretivist scholars have increasingly begun to reflect upon and make use of case study methods, but they remain mainly concerned with issues of generality and question generalizability as a research objective (Salter and Mutlu, 2013).
Among positivist scholarship, case study methods also hold a slightly uncomfortable place. The focus on a small number of cases (small-N) is what makes them distinct from the other, more widespread set of approaches in political science, that is, large-N approaches making use of statistical techniques or experimental approaches. With the increasing traction of observational quantitative and experimental methods, case study methods are often under-appreciated, and their utility and breadth of application are widely misunderstood and misused. Within the field of political science and international relations, case studies are often used and seen in a subordinate and complementary fashion to quantitative techniques.[5] At its extreme, within the burgeoning literature using ever more advanced quantitative techniques, I witness a worrying and growing use of illustrative examples as a substitute for systematic case study evidence.[6] While illustrative examples are useful for giving an indication of the direction and quality of a causal relation, they cannot substitute for full-fledged case study analysis. Embracing and adopting case study methods may indeed not be a strategic choice: at first sight, time constraints and the pressure to publish more and more quickly provide few incentives for graduate students to specialize in case study methods, which are, by their nature, very time-consuming and fieldwork intensive, are best narrated in book-format outputs and find themselves very constrained in article format. I still remember desperately trying to fit my 25,000-word case narrative into a 13,000-word limit for the journal Security Studies – one of the most generous outlets in terms of word count (Ruffa, 2017). At the same time, engaging with quantitative scholars provides great opportunities. For instance, recent debates on qualitative methods have discussed how to make qualitative case studies more transparent, so as to foster dialogue with quantitative scholars, for example through active citation (Barnes and Weller, 2017; Moravcsik, 2010, 2014).
Notwithstanding these considerations, case study methods also provide unique opportunities of their own for understanding, conceptualizing, developing and testing new theories, and they remain a set of widely utilized methods. As Gerring (2009: 65) puts it, the case study approach 'is disrespected but nonetheless regularly employed. Indeed, it remains the workhorse of most disciplines and subfields in the social sciences'. Case study methods also still perform analyses and cover aspects that quantitative techniques are unable to capture. In particular, they are uniquely suited to providing nuance in theories and to observing phenomena up close, so that one does not need to capture and measure proxies. In other words, using case study methods allows for high validity – 'the degree to which a measure accurately represents the concept that it is supposed to measure' (Kellstedt and Whitten, 2013: 126). Importantly, they still hold a lot of promise that is not being fully exploited or always given full attention.
What is a Case? What is it Not? And What is it For?
What is a Case?
In this section, I define what a case is and is not, and reflect upon what it is for. A case is usually defined as an instance of a broader phenomenon under study. George and Bennett (2005: 17) define 'a case as an instance of a class of events' and explicitly refer to its generalizability, since a case study is 'the detailed examination of an aspect of a historical episode to develop or test historical explanations that may be generalizable to other events'. Levy (2008: 2) more precisely describes a case as 'an attempt to understand and interpret a spatially and temporally bounded set of events'. Although 'case' is occasionally used as a synonym for 'observation' in the qualitative realm, the two terms actually differ. As Gerring (2009: 20–21) puts it, in contrast to a case, 'an observation is the most basic element of any empirical endeavor', yet 'in a case study, however, the case under study always provides more than one observation'. Those observations may be scattered across time and space within the same case.
Case studies focus on phenomena of specific interest, such as revolutions, wars, decisions and military interventions. The key here – albeit often neglected – is the importance of identifying a case at the smallest unit of analysis possible given the chosen theory. A case study allows one to observe the theory at play. It is also important to distinguish between the case and the broader population of cases to which the theory aspires to be applicable. For instance, in his seminal work on the Cuban missile crisis, Allison's (1971) 'case' is the Cuban missile crisis. But this specific case could be seen as part of a larger population of cases, which, depending on the theory's focus, could be coercive diplomacy, crisis management or the operational code of political leaders. When I first started to work on military cultures in peacekeeping operations, I was undecided on whether my cases were national troops deployed in peacekeeping missions, or peacekeeping missions themselves, or, even more broadly, international military interventions (Ruffa, 2018). I have only recently realized that what I defined as 'my case' depended on the kind of literature and contribution I was trying to make – in my project, security studies and the peacekeeping literature. My cases ended up being four different national troops deployed in two different missions. Since I made an argument about how military culture influences the ways in which soldiers behave when deployed in peacekeeping missions, it was reasonable to zoom in at the unit level, since each unit deployed to an area of operation and had full responsibility for implementing the UN mandate in that area – which was part of a peacekeeping mission. As such, the cases under analysis can be almost anything, ranging from countries to well-defined time periods. In other words, some self-reflection on what a case is in a particular research project is key in order to clarify the nature of the contribution that is being made. Relatedly, it is important to reflect upon how high or low on the ladder of abstraction one wants to place one's contribution (Sartori, 1970).
Once one has an idea about the objective of research – be it theory testing, theory development or mere description – a research design rests on the assumption that the objective of research is to identify patterns that might lead to generalizable results. This assumption aside, there is still widespread variation within positivist approaches, mainly in terms of the objective of research and of how explicit that objective is. The very use of the term 'case study' entails an underlying positivist assumption, in that the objective is to generalize and find patterns that are valid beyond the case under study. The logic underlying case study methods does not fundamentally differ from that of large-N studies: both are based on observational data, meaning a non-experimental selection of the cases or observations under study. The core difference between these two types of methods relates to the number of cases or observations under investigation. Case study methods range from single case studies (n = 1) up to roughly 12–15 cases, although the cut-off is still widely debated. Finally, case studies do not necessarily equal qualitative approaches. While in political science and international relations case study methods usually rely on qualitative strategies of data collection, they can make use of numbers and statistics for measuring some aspects of the key variables in place (Collier, 2011: 824).
What is a Case… Not?
A case study is not an illustrative example. In the quantitative literature (but not only there), there is a growing use of illustrative examples to suggest the plausibility of certain causal relations and of their underlying mechanisms. Illustrative examples are tremendously useful for suggesting that some causal relation might be at play in the empirical domain, for complementing a significant correlation with empirical material that suggests the plausibility of the relation and of its underlying mechanism(s), and for guiding the early phases of theorizing as reality checks. While illustrations are increasingly required and used, their scope and purpose remain different from those of case studies. A recent paper by Haass and Ansorg (2018) is a good example in this respect. The authors argue that in peace operations with high-quality troops, militaries are better able to protect civilians. They further argue that such better protection happens because 'high-quality militaries are better able to deter violence from state and non-state actors and create buffer zones within conflict areas, can better reach remote locations, and have superior capabilities to monitor the implementation of peace agreements' (Haass and Ansorg, 2018: 1). While their paper is an important contribution to the peacekeeping literature, their use of an illustrative example from the case of Mali misses the broader picture that one would likely get from a full-fledged case study:
Despite a mission strength that was significantly lower at the time than that of MINUSCA (…) in a country about twice the size of the CAR, the UN operation successfully stabilized the situation in Mali and monitored the presidential elections in August 2013, MINUSMA was in a much better position to respond to threats against civilians than MINUSCA in part due to the fact that the mission consisted of, inter alia, highly trained troops from the Netherlands, Denmark, Norway and Finland. (Haass and Ansorg, 2018: 1)
The problem with that illustrative example is that it misses some important underlying dynamics within the case. Dutch, Norwegian and Finnish troops are unlikely to have contributed actively to all the mechanisms described because they were hardly present on the ground: they deployed in very low numbers and served only in intelligence, surveillance and reconnaissance roles, tasked with reporting back to the Force Commander via the intelligence command structure. While Haass and Ansorg's theory is sound and quantitatively supported, further qualitative research could try to understand why we find this result in a mission like Mali, for instance by exploring within-case variation or by gauging qualitatively whether there was an indirect effect.
Along similar lines, deciding to focus on a full-fledged case study rather than on illustrative examples can be beneficial from the early phases of research. For instance, in a recent article, my co-authors and I argue that an increase in terrorist threats or attacks influences the level of military involvement in politics, with the military either pushing its way into politics or being pulled in (Bove, Rivera and Ruffa, 2019). We test this quantitatively and conduct three illustrative case studies: Algeria before its descent into civil war, and France in 1996–98 and in 2015–16. The project started with the idea of only proposing some illustrative examples, but it was only when we decided to turn them into full-fledged case studies and invested time in them that we discovered a second fundamental underlying causal mechanism. We found not only a pushing effect of the military into politics but also a pulling effect – which we then decided to theorize further. If at all possible given time and length constraints, full-fledged case studies are usually preferable, as they allow one to capture dynamics that may remain invisible at a first cursory view.
What is a Case For?
A necessary preparatory step before delving into a case is deciding what kind of research objective one has and its level of ambition. Different case-selection techniques then follow depending on that objective. In their classic book, Case Studies and Theory Development in the Social Sciences, George and Bennett (2005) distinguish between six different kinds of case studies depending on the research objective, which they adapted from the work of Eckstein (1975) and Lijphart (1971). They distinguish between atheoretical/configurative idiographic case studies, disciplined configurative studies, heuristic/theory-development studies, theory-testing studies, plausibility probes and building-block studies (George and Bennett, 2005: 75). Importantly, in a single-outcome study the phenomenon of interest is not seen as an instance of some greater population of cases. A somewhat similar approach is Beach and Pedersen's third form of process tracing, which they describe as a type of process tracing used to explain why a specific outcome occurred in a specific case; this is really the only occasion in which a type of process tracing does not contribute to advancing theory (Beach and Pedersen, 2013).[7] While often dismissed, single-outcome studies can be good descriptions of cases that might be used in subsequent studies for theory building, but by themselves they do not cumulate or contribute directly to theory (George and Bennett, 2005: 75). They have no ambition beyond describing the phenomenon of interest and thereby come closer to historical work. In some instances, single case studies focus on phenomena that are intrinsically important. For instance, the historian Isabel Hull (2005) wrote a book on German institutional extremism during the imperial period until 1918. Even though her research is extremely case-specific, it influenced much theory development within the literature on military and strategic cultures in the following years. In other cases, the potential population of cases appears too heterogeneous to allow for any general statements. For instance, Peter Feaver (2011) published a paper examining the decision-making process that led to the surge in Iraq; in the civil-military relations literature, many argue that civil-military relations in the United States are so particular that they cannot be generalized beyond the case under study. Single-outcome studies are also increasingly used in post-positivist approaches, for instance by focusing on the essence of politics or on power shifts and changes. Aside from single-outcome studies, all other types of case study aim at either theory testing or theory development, at varying levels of ambition. While theory testing has a deductive approach (from theory to empirics), theory development has an inductive spin (from empirics to theory). Disciplined configurative case studies make use of established theories to explain a case, while plausibility probes are theory-testing exercises with a lower level of ambition. In parallel, building-block studies are theory-development exercises with a narrower focus. Even when these categories are not used explicitly, they are useful for clarifying the objective of research given the constraints one has. Once the objective of research has been clarified, we can move on to case-selection techniques.
Case-selection Techniques
Case selection is a powerful tool in the hands of the researcher and the first crucial step towards producing convincing research designs. Case selection should be 'an integral part of a good research strategy to achieve well-defined objectives of the study' (George and Bennett, 2005: 83). In case studies, the researcher 'is asked to perform a heroic role: to stand for (represent) a population of cases that is often much larger than the case itself' (Seawright and Gerring, 2008: 294). Building on existing work, I distinguish among three different kinds of case selection: convenience, random and strategic. For the purpose of this discussion, I am not (yet) distinguishing between single case studies and comparative designs. The first kind of case selection entails selecting a case out of convenience. For instance, one selects a case because one knows the language spoken in a particular country or because one is particularly interested in a case for its policy implications. Such a case might give some idea of how the theory plausibly plays out in the case at hand, but it has not been strategically chosen to maximize the generalizability potential of the theory to a population of cases. This category of cases is problematic in terms of selection bias, as the case is not selected on the basis of its relation to a broader universe of cases.
A second strategy is randomly selecting a case or set of cases. While this is a possibility, it is highly discouraged. In fact, 'serious problems are likely to develop if one chooses a very small sample in a completely random fashion' (Seawright and Gerring, 2008: 295). With two Monte Carlo experiments, Seawright and Gerring (2008: 295) show that selecting cases randomly 'will often produce a sample that is substantially unrepresentative'. Given the considerations above, the third strategy might in fact be the best one. So-called strategic (or purposive) case selection is based on the idea of strategically selecting a case or pair of cases on the basis of its hypothesized characteristics in relation to a broader universe of cases. Following that logic, one selects a case, or compares two or more cases, trying to maximize the chances of capturing causal connections that are occurring within the universe of cases while controlling as much as possible for confounders. Selecting cases is a daunting task, and it is important to be pragmatic about it and to combine convenience (such as profound knowledge of a language or culture) with strategic considerations. A tip I always find useful is to start thinking about cases while one is still taking key decisions about theory. We often hold a romantic idea of first deciding on theory and then moving on to research design and case selection. Yet this is rarely how it works: we often circularly refine theories based on cases and decide on cases based on theory.[8] Doing so allows the researcher to buy time, to reflect upon case selection at an early stage and to weigh trade-offs among selection criteria, feasibility, previous case knowledge and the like. Below, I discuss options for strategic case selection, focusing first on single case studies and then on comparative designs.
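To convey the intuition behind Seawright and Gerring's warning, the following is a minimal, purely illustrative simulation – assuming an invented population of 200 hypothetical cases and a three-case sample, not their original Monte Carlo set-up – showing how often a tiny random sample misrepresents the population it is meant to stand for.

```python
import random
import statistics

# Illustrative population: 200 hypothetical cases with one continuous
# attribute of interest (values are invented for the sketch).
random.seed(42)
population = [random.gauss(5, 2) for _ in range(200)]
pop_mean = statistics.mean(population)

# Draw many random 'case studies' of only three cases each and count how
# often the sample mean lands far from the population mean.
n_draws, sample_size, tolerance = 10_000, 3, 1.0
off_target = 0
for _ in range(n_draws):
    sample = random.sample(population, sample_size)
    if abs(statistics.mean(sample) - pop_mean) > tolerance:
        off_target += 1

print(f"Population mean: {pop_mean:.2f}")
print(f"Share of 3-case random samples off by more than {tolerance}: "
      f"{off_target / n_draws:.1%}")
```

Under these invented parameters, a substantial share of random three-case draws misses the population mean by a wide margin, which is the intuition behind preferring strategic over random selection in small-N designs.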
Single Case Studies
Once the research objective has been clarified, it is important to reflect upon the different kinds of case study that are possible, given the constraints at hand. A first important distinction is between single case studies and comparative designs. As the term suggests, single case studies focus on one single case, exploring the plausibility of a theory or tracing the causal mechanism at play in a particular context. In comparison to designs with more cases, single case studies have higher levels of conceptual validity, allowing us to take into account the complexity of contextual factors. By contrast, they are unable to control for confounders and may suffer from selection bias. Comparative methods (both in their most similar and most different system versions) allow us to control for confounders and have greater external validity.
In line with the general discussion on case selection, single case studies can be selected by convenience or in strategic ways. In the first scenario, while there might be very good reasons for doing so, such cases may suffer from severe selection bias and their generalizability potential will be diminished. When selected strategically, however, single case studies hold the promise of combining the richness of focusing on one case with the ambition of saying something about the broader population of interest. Indeed, single case studies can be selected on the basis of their relation to theory or, in quantitative language, on the basis of whether they are situated on the regression line or not (Gerring, 2007, 2017).
For this reason, I label them 'empowered single case studies', and several kinds are usually distinguished (Gerring, 2007, 2017; Levy, 2008). The distinction is mainly based on whether the values of the dependent or independent variables are known or not. The first possibility is to select a so-called extreme case, which has extreme or unusual values on the independent or the dependent variable. It is used for generating hypotheses or probing a potential causal connection, not for performing a full-fledged test of the theory. Such a case does not score very high in terms of representativeness because it does not give us any indication of how well the theory holds, since we do not yet know anything about the relationship between the independent and dependent variables. A second option is a deviant case, which is an outlier in comparison to the cases that follow the theory because – as the term suggests – it deviates from a cross-case relationship. A deviant case is useful for theory-development purposes or for generating new hypotheses, because in deviating from a cross-case relationship it might help detect neglected variables. For instance, in our work on NGO–military relations in complex humanitarian emergencies, Vennesson and I noticed that identity alone could not explain variation in NGO–military relations (Ruffa and Vennesson, 2014). A hidden variable was at play, but we did not know which. We therefore selected a case that deviated from existing theories, conducted qualitative empirical work and identified a neglected variable of interest – namely, domestic institutional configurations. The deviant case allowed us to identify a new variable, which was then ripe for further testing. A third option, in fact a variation on the deviant case, is an influential case, which displays influential configurations of the independent variable and leads one to accept or refute the theory; it is suited to hypothesis testing, as it helps one make some final decisions about the theory. A fourth option is 'a crucial case, which is most likely or least likely to exhibit a given outcome. It is based on a qualitative assessment of real crucial-ness' (Gerring, 2007: 247). Crucial cases play an important confirmatory or disconfirmatory role in hypothesis testing. While they do not tell us much about representativeness, they are useful for getting a sense of whether the theory holds even in a hard case (least likely) or fails in an easy case (most likely), and therefore whether it deserves further testing, either quantitatively or qualitatively: 'The inferential logic of least likely case design is based on the "Sinatra inference" – if I can make it there I can make it anywhere', and, by contrast, 'the logic of most likely case design is based on the inverse Sinatra inference – if I cannot make it there, I cannot make it anywhere' (Levy, 2002: 442). A fifth option is the so-called pathway case, which is thought to embody a distinct causal path from the independent to the dependent variable. The pathway case is useful for probing a causal mechanism, and a case that is easy to study at length can therefore be particularly useful.[9] The pathway case should embody a typical relation as expected by the theory and is particularly fruitful in mixed-method analysis. Its main characteristic is that it lends itself to exploring causal mechanisms.
It thus shares some traits with the typical (or paradigmatic) case, which entails selecting a case that mirrors the typical example of a cross-case relationship. Such a case lends itself well to testing theories and how they hold, and it is representative of the broader population almost by definition. Pathway and typical cases partly overlap, but a pathway case is not necessarily a typical case: sometimes one may select a pathway case that is not typical because it is easy or practical to study.
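In quantitative language, several of these selections can be read off a fitted cross-case regression: typical (and often pathway) cases sit close to the regression line, deviant cases far from it, and extreme cases display unusual values on the independent or dependent variable. The sketch below is a hypothetical illustration of that logic only – the variables, data and case labels are invented, and crucial or influential cases would still require qualitative judgment.

```python
import numpy as np

# Hypothetical cross-case data (invented): x = explanatory variable,
# y = outcome of interest, for 30 labelled cases.
rng = np.random.default_rng(0)
cases = [f"case_{i:02d}" for i in range(30)]
x = rng.normal(0, 1, 30)
y = 0.8 * x + rng.normal(0, 0.5, 30)

# Fit a simple bivariate regression and compute the residual of each case.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

typical = cases[int(np.argmin(np.abs(residuals)))]      # closest to the line
deviant = cases[int(np.argmax(np.abs(residuals)))]      # furthest from the line
extreme = cases[int(np.argmax(np.abs(x - x.mean())))]   # unusual value on x

print(f"Typical case (candidate pathway case): {typical}")
print(f"Deviant case (theory development):     {deviant}")
print(f"Extreme case (hypothesis generation):  {extreme}")
```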
Single case studies have both strengths and weaknesses. Because they allow for much emphasis on the case, they score high on conceptual validity, in the sense that nuanced and complex concepts can be constructed and measured. Relatedly, single case studies can be useful for developing new hypotheses because they allow us to explore the causal mechanism and embark on process tracing (Bennett and Elman, 2007). The work by Katzenstein (1996), for instance, based on a least-likely case, has produced some of the most influential recent scholarship in international relations. Recent work on a single case, the Democratic Republic of Congo, has changed the way we think about making peace at the local level (Autesserre, 2014).
On the other hand, single case studies also have some disadvantages. The first and main problem is case-selection bias, that is, selecting a case that is already known to display certain characteristics. For instance, the qualitative literature on peacekeeping has suffered from this particular problem by focusing overwhelmingly on cases of failure rather than on cases of success (Howard, 2008; Pouligny, 2005). Single case studies are also not particularly suited to identifying scope conditions or necessity. Typical cases aside, a single case study is unable to tell us whether the phenomenon we are observing is representative of the population of cases. In terms of inferential leverage, 'empowered single case studies' are more powerful than pure single case studies. Single case studies certainly allow for delving into a case and its richness, which is valuable and very important, and with some minor adjustments they can have an improved chance of generalizability. As usual with case study research, much comes down to trade-offs, and it is difficult to provide a single takeaway. Selecting a case based on convenience is weakened by selection bias but also by a lack of objectivity; at the same time, profound knowledge of a case is sometimes necessary to immerse oneself in a project. In practice, we often do both: we strive to follow a systematic case-selection logic while also dealing with practical considerations. For instance, when selecting peace and stability operations to study, I was looking for a traditional UN peacekeeping mission, but I also had to select ongoing missions, since I could only collect ethnographic evidence about soldiers while they were deployed (Ruffa, 2018). Combining case-selection logics and practical considerations, I opted for the UN mission in Lebanon. Finally, single case studies in any case have a fundamental problem with confounders, and if one is particularly concerned about those, one should consider a comparative case study approach, which is the focus of the next section.
Comparative Case Studies
Comparing cases entails selecting two (or more) cases and comparing them to one another. There are two main types of comparative design: the most similar and the most different system design (Lijphart, 1971; Przeworski and Teune, 1970). In a most similar system comparison, which builds on the logic of Mill's 'method of difference', two cases are selected because they present similar characteristics on most potentially confounding variables, apart from the independent or the dependent variable of interest (Mill, 1882: 484). In its purest form, such a design means that, through strategic case selection, the researcher manages to control for confounders: the variables on which the cases are matched cannot carry much explanatory power. The best-known alternative is the most different system comparison, which entails selecting two cases that vary in every possible respect except the independent or dependent variable of interest. A classic example of this is the work by Theda Skocpol (2015). These two designs are mutually exclusive for any one specific case comparison. When it comes to more complex designs, however, they can be fruitfully combined. For instance, in my own work, I combined a most similar system design with a most different system design. I conducted, separately, two distinct most similar system comparisons, between the French and Italian units in the UN mission in Lebanon and between the French and Italian units in the NATO mission in Afghanistan. But I then compared Lebanon and Afghanistan, which were the most different types of peace and stability operations I could find that nevertheless shared a similar outcome and independent variable of interest (Ruffa, 2017, 2018). While in that research the most different system design is notably less developed, it is an attempt to creatively combine the two approaches. The most similar system design within each operation studied allowed me to gauge the plausibility of my theory while controlling for confounders; the most different system design allowed me to check whether my theory held in two very different contexts – thereby maximizing external validity. While cases for comparative designs are usually selected strategically, recent work suggests using matching techniques to systematically select cases for a most similar system design (Nielsen, 2016), as sketched below.
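The matching idea can be illustrated with a minimal sketch: given background covariates one would like to hold constant, one searches for the pair of cases that differ on the variable of interest while being as close as possible on everything else. The cases, covariates and simple Euclidean distance below are illustrative assumptions, not Nielsen's (2016) actual procedure, which uses more refined distance metrics.

```python
from itertools import combinations
import numpy as np

# Hypothetical cases: a binary independent variable of interest ('iv') and
# two standardized background covariates we would like to hold constant.
cases = {
    "mission_A": {"iv": 1, "covariates": np.array([0.2, 1.1])},
    "mission_B": {"iv": 0, "covariates": np.array([0.3, 1.0])},
    "mission_C": {"iv": 1, "covariates": np.array([-1.5, 0.2])},
    "mission_D": {"iv": 0, "covariates": np.array([1.8, -0.9])},
}

# Most similar systems pair: opposite values on the IV, minimal distance on
# the background covariates (so the matched variables cannot do the explaining).
best_pair, best_dist = None, float("inf")
for a, b in combinations(cases, 2):
    if cases[a]["iv"] != cases[b]["iv"]:
        dist = float(np.linalg.norm(cases[a]["covariates"] - cases[b]["covariates"]))
        if dist < best_dist:
            best_pair, best_dist = (a, b), dist

print(f"Most similar pair differing on the IV: {best_pair} (distance {best_dist:.2f})")
```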
When thinking about comparative methods, a useful distinction is between variable-oriented and case-oriented designs. 'Variable-oriented studies mainly aim at establishing generalized relationships between variables, while case-oriented research seeks to understand complex units' (della Porta, 2008: 198); case-oriented research 'aims at rich descriptions of a few instances of a certain phenomenon' (della Porta, 2008: 198). While both approaches are legitimate, disagreements persist over whether the underlying logics (generalization vs. accounting for complexity) differ or are the same. Most research mirrors the tension between case- and variable-oriented comparison, including my own. To illustrate: the foci of my analysis are rather complex units, which might have lent themselves better to a case-oriented approach. I nevertheless opted for a variable-oriented approach, for three reasons. First, notwithstanding the micro-focus, my ambition was to maximize inferential leverage beyond the cases under study. Second, I decided to speak the language of variables as a way to navigate through each case and be able to compare them, since I was using concepts and measurements that had never been used before. Third, I aimed to show that a comparative design with strategic case selection actually ends up close to standard large-N techniques and fundamentally has comparable inferential leverage. When actually conducting my comparative case study in the mission, I realized early on that the complexity of the issue at hand, the richness of the cases and the comparative logic necessitated a rigorous way of conducting research. Other alternatives for controlling for confounders are within-case comparisons and longitudinal and spatial comparisons. Within-case comparisons have the advantage that most social, cultural and structural factors are probably somewhat similar. A similar logic applies to comparisons of the same unit at different moments in time (longitudinal) or in different places (spatial).
Conducting comparative case studies is a great opportunity to strive for greater generalizability and to control for confounders while still maintaining some of the typical richness of small-N approaches. Comparative methods are useful both for developing new hypotheses and for testing existing ones. They suffer, however, from two weaknesses. The first is that selection bias persists and, if anything, is worsened by the necessity of finding good cases with the right characteristics to compare. The second is that they run the risk of overlooking the lack of independence among cases and of comparing apples and oranges.
Similar to large-N approaches, the comparative method takes competing explanations very seriously. Statistical methods assess rival explanations through statistical control, while experimental methods eliminate rival explanations through experimental control and the random assignment of cases to control and treatment groups. Experimental methods have the highest inferential power, but statistical methods based on observational data share with the comparative method a similar understanding of the idea of 'control': where observational statistical methods assess rival explanations through statistical control, comparative methods do so through strategic case selection. Once a case or set of cases has been selected, the case study has to be conducted, which is the focus of the next section.
How to Do a Case Study: A Few Options
While the literature on case selection has burgeoned, relatively little is known about what to do once one has actually selected one's case(s). The widespread assumption is that case selection and case analysis are two distinct phases, but that is almost never true: case study methods entail much more of a circular back-and-forth between case selection and case analysis than is usually recognized. This section briefly reviews three non-exhaustive strategies for conducting case studies: process tracing, structured focused comparison and the congruence method.[10] While process tracing and congruence do not require more than one case and are in fact particularly suited to in-depth single-case analysis, the method of structured focused comparison – as the term suggests – requires at least two cases. These three types of case study analysis entail specific strategies of data collection, ranging from individual qualitative interviews and observation to archival research, focus groups and document analysis, which fall outside the scope of this chapter (Kapiszewski et al., 2015; Ruffa, 2019). In the remainder of this section, I briefly present each approach and then provide a broad-stroke discussion of the advantages and disadvantages of each approach to case study analysis.
Process tracing is one of the most common techniques and entails 'the systematic examination of diagnostic evidence selected and analyzed in light of research questions and hypotheses posed by the investigator' (Collier, 2011: 823). Recent studies have introduced a similar notion, that of causal process observations (CPOs), which 'highlights the contrast between a) the empirical foundation of qualitative research, and b) the data matrices analyzed by quantitative researchers, which may be called data-set observations (DSOs)' (Collier, 2011: 823). Collier and others consider CPOs as equivalent to process tracing.[11] Process tracing is used both for descriptive and for explanatory purposes, and it is particularly well suited for within-case analysis (Beach and Pedersen, 2013; Checkel, 2006; George and Bennett, 2005). Since its first systematic and comprehensive treatment in George and Bennett (2005), different kinds of process tracing have emerged, mainly clustering around positivist vs. more interpretivist research approaches (Beach and Pedersen, 2013). As Vennesson (2008: 232) points out, 'now the most common conceptions of process tracing are more standardized than the original formulation, and they emphasize the identification of a causal mechanism that connects independent and dependent variables. The emphasis is on causality, deduction and causal mechanisms'. In this particular, standardized version of process tracing, four empirical tests have been identified, formulated by Bennett and Elman (2010), who built on Van Evera (1997). Those four tests help us understand how strongly a theory holds: passing a given test is necessary and/or sufficient for accepting the inference. When delving into the empirics and exploring the validity of the hypotheses, the four tests allow us to understand the strength of a hypothesis based on the empirical support found. The idea is that the hypothesis has to pass increasingly difficult tests; the harder the test it passes, the stronger the hypothesis. If a hypothesis passes the straw-in-the-wind test, it is relevant but not confirmed; if it fails, it is not eliminated but weakened. If a hypothesis passes a hoop test, its relevance is affirmed but it is not confirmed; if it fails the hoop test, however, the hypothesis has to be eliminated. If the hypothesis passes the smoking-gun test, it is confirmed; if it fails, it is not eliminated but somewhat weakened. The fourth test is doubly decisive: if the hypothesis passes the test, it is confirmed and the alternatives are eliminated; if it fails, the hypothesis is eliminated (Collier, 2011).
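Because each of the four tests is defined by whether passing it is necessary and/or sufficient for accepting the hypothesis, their logic can be restated compactly. The snippet below is just a restatement of Collier's (2011) summary in code form, offered as an aide-memoire rather than a formalization.

```python
# The four process-tracing tests, classified by whether passing them is
# necessary and/or sufficient for affirming the hypothesis (Collier, 2011).
TESTS = {
    "straw-in-the-wind": {"necessary": False, "sufficient": False},
    "hoop":              {"necessary": True,  "sufficient": False},
    "smoking-gun":       {"necessary": False, "sufficient": True},
    "doubly-decisive":   {"necessary": True,  "sufficient": True},
}

def interpret(test: str, passed: bool) -> str:
    """Rough reading of what passing or failing a given test implies."""
    t = TESTS[test]
    if passed:
        # A sufficient test confirms the hypothesis (the doubly decisive test
        # also eliminates the rivals); otherwise passing only affirms relevance.
        return "confirmed" if t["sufficient"] else "relevant but not confirmed"
    # Failing a necessary test eliminates the hypothesis; otherwise the
    # hypothesis is merely weakened.
    return "eliminated" if t["necessary"] else "weakened but not eliminated"

for name in TESTS:
    print(f"{name:>18}: pass -> {interpret(name, True)}; fail -> {interpret(name, False)}")
```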
This standardized idea of process tracing has gained traction in recent decades, which has meant that 'although the idea of process tracing is often invoked by scholars as they examine qualitative data, too often this tool is neither well understood nor rigorously applied' (Collier, 2011: 823). Sometimes scholars claim they are using process tracing when they are not; alternatively, the process-tracing method has become so technical that it can hardly be applied.
As Vennesson (2008: 232) points out, 'something has been lost in the most recent formulations of process tracing'. A more pluralistic and less mechanistic understanding of process tracing could help capture more complex political phenomena – which can be linear, circular and interacting – that can hardly pass the four tests presented above. Finally, a particularly important yet somewhat less emphasized aspect is the sequencing dimension of process tracing, which gives close attention to phenomena as they unfold over time (Mahoney, 2010). In whatever form, however, tracing a process between two variables still differs from storytelling. As Flyvbjerg (2006: 237–41) suggests, process tracing differs from a pure narrative in three ways: it is focused, it is structured and it aims at providing a narrative explanation of a causal path that leads to a specific outcome. I contend that process tracing is a great framework but that it should be used pragmatically; otherwise there is a risk of missing the bigger picture. Relatedly, it is important that process tracing is not equated with testing causal mechanisms. As Gerring (2010: 1499) wrote, one should avoid 'a dogmatic interpretation of the mechanismic mission' – an explanation that is overly concerned with mechanisms.
A second common approach is less formal and entails systematically developing a set of observable implications to establish which kind of empirical referent one would need in order to find support for the theory. One particular version of it applies only to the comparative method and consists of conducting a structured focused comparison. Such a comparison is structured because the same questions are asked across the cases, and it is focused because only the questions relevant to assessing the plausibility of the theory are asked, thereby concentrating only on certain aspects (Flyvbjerg, 2006; George and Bennett, 2005). The method entails the 'use of a well-defined set of theoretical questions or propositions to structure an empirical inquiry on a particular analytically defined aspect of a set of events across different cases' (George and Bennett, 2005). Structured focused comparisons are widely used in both research and teaching contexts, since they allow for stringent and rigorous comparison and at the same time provide a good toolkit for actually carrying out a comparison (Friesendorf, 2018).
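Since the defining feature of a structured focused comparison is that the same theory-driven questions are put to every case, the protocol can be sketched as little more than a uniform question grid. The questions, case labels and the partially filled answer below are invented placeholders, not drawn from any of the cited studies.

```python
# Theory-driven questions asked identically of every case (the 'structured'
# part), limited to aspects relevant to the theory (the 'focused' part).
questions = [
    "What was the mandate of the mission?",
    "How did the unit interpret its rules on the use of force?",
    "Which patterns of behaviour toward civilians were observed?",
]

cases = ["Unit A, Mission X", "Unit B, Mission X"]

# The comparison is simply the same grid of answers, filled in case by case.
comparison = {case: {q: None for q in questions} for case in cases}

# Example of recording one (invented) answer for one case.
comparison["Unit A, Mission X"][questions[0]] = "Traditional peacekeeping mandate"

for case, answers in comparison.items():
    print(case)
    for q, a in answers.items():
        print(f"  - {q} -> {a or 'to be coded from fieldwork'}")
```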
A third approach is the congruence method, which is often considered an important complement to process tracing: the researcher assesses whether outcomes are congruent with the expectations of a particular theory (Blatter and Haverland, 2012). The congruence method is insufficient on its own when it comes to causal mechanisms, and it can be complemented by checking the consistency of the observable implications of the theory at different stages of the causal mechanism. While less technical than the previous two approaches, congruence in whatever form is a pragmatic way to approach complex fieldwork terrain.
The greater traction of quantitative methods has posed challenges to the role played by case study methods in political science and international relations. Even though case studies can fruitfully complement quantitative techniques, they remain a method in their own right, with their own specificities, advantages and disadvantages. The challenge for the future of case study methods is to strive for greater empirical rigor and greater transparency (Elman et al., 2010). With the cross-fertilization currently unfolding across subfields (Gerring, 2017), further reflection on what case studies are and what they entail is important, as are the calls for more pluralism and pragmatism. This chapter has highlighted three dimensions to bear in mind in all phases of research, be it the set-up, the case selection or the conduct of the case study. First, at the cost of sounding obvious, it is important to clarify and be explicit about the kind of research objective one has and whether one is opting for the best design given the circumstances one is in. Second, within case study methods, it is important to be well aware of the most important trade-offs and of which design to opt for given the constraints, and to be explicit about the inferential leverage that matters, given time, budget and expertise constraints. Ultimately, this boils down to asking: what is the most fruitful way to answer one's research question, given one's constraints? Third, case study research implies the fruitful combination of the craft and art of research in the social sciences. While it is always important to engage with a broader population of cases and to master the techniques available for case selection and case analysis, it is also important to nurture creativity and 'let the case speak to you'. This also means using case study methods to answer research questions without overemphasizing the mechanistic technicism of case-selection techniques, appreciating instead the unique opportunities that case study methods provide for answering complex research questions. The research objective should be to strive for greater transparency and to maximize inferential leverage, so as to be able to dialogue with other approaches but, ultimately, to answer relevant research questions.
Notes
[1] I gratefully acknowledge Annekatrin Deglow's excellent comments on this chapter.
[2] https://trip.wm.edu.
[3] Very crudely, by positivism I refer to a wealth of diverse approaches whose ultimate research objective is to identify causal relations (by developing or testing theories; by conceptualizing or describing phenomena) and which hold that reality is distinct from the observer/researcher and knowable (even if only to some extent). For a more thorough discussion, see Della Porta and Keating (2008: 19–39).
[4] Della Porta and Keating distinguish between positivist, post-positivist, interpretivist and humanistic approaches in terms of ontology and epistemology (Della Porta and Keating, 2008: 19–39). I also note that positivism and post-positivism lie on a continuum.
[5] To the point that several scholars label only quantitative research as 'empirical', notwithstanding the very strong and deep engagement of case study research with the 'empirics'.
[6] Quantitative scholars are often asked to add illustrative examples when going through the review process. Ideally, the system would allow for more systematic mixed-methods approaches, in which quantitatively focused and qualitatively focused scholars could collaborate.
[7] I thank Annekatrin Deglow for pointing this out to me.
[8] This process is often called 'abduction'.
[9] For an alternative perspective on causal mechanisms, see process patterns (Friedrichs, 2016).
[10] Causal process observations (CPOs) are often used but are treated here as synonymous with process tracing.
[11] For an excellent example of CPOs, see Deglow (2018).