Case Study – Methods, Examples and Guide
A case study is an in-depth examination of a single case or a few selected cases within a real-world context. Case study research is widely used across disciplines such as psychology, sociology, business, and education to explore complex phenomena in detail. Unlike other research methods that aim for broad generalizations, case studies offer an intensive understanding of a specific individual, group, event, or situation.
A case study is a research method that involves a detailed examination of a subject (the “case”) within its real-life context. Case studies are used to explore underlying principles, behaviors, or outcomes and their causes, providing insight into the nuances of the phenomena being studied. This approach allows researchers to capture a wide array of factors and interactions that may not be visible with other methods, such as experiments or surveys.
Key Characteristics of Case Studies:
- Focus on a specific case, individual, or event.
- Provide in-depth analysis and contextual understanding.
- Useful for exploring new or complex phenomena.
- Generate rich qualitative data that contributes to theory building.
Types of Case Studies
Case studies can be classified into different types depending on their purpose and methodology. Common types include exploratory, descriptive, explanatory, intrinsic, and instrumental case studies.
1. Exploratory Case Study
Definition: An exploratory case study investigates an area where little is known. It helps to identify questions, variables, and hypotheses for future research.
Characteristics:
- Often used in the early stages of research.
- Focuses on discovery and hypothesis generation.
- Helps clarify research questions.
Example: Examining how remote work affects team dynamics in an organization that has recently transitioned to a work-from-home model.
2. Descriptive Case Study
Definition: A descriptive case study provides a detailed account of a particular case, describing it within its context. The goal is to provide a complete and accurate depiction without necessarily exploring underlying causes.
Characteristics:
- Focuses on describing the case in detail.
- Provides comprehensive data to paint a clear picture of the phenomenon.
- Helps understand “what” happened without delving into “why.”
Example: Documenting the process and outcomes of a corporate restructuring within a company, describing the actions taken and their immediate effects.
3. Explanatory Case Study
Definition: An explanatory case study aims to explain the cause-and-effect relationships of a particular case. It focuses on understanding “how” or “why” something happened.
Characteristics:
- Useful for causal analysis.
- Aims to provide insights into mechanisms and processes.
- Often used in social sciences and psychology to study behavior and interactions.
Example: Investigating why a school’s test scores improved significantly after implementing a new teaching method.
4. Intrinsic Case Study
Definition: An intrinsic case study focuses on a unique or interesting case, not because of what it represents but because of its intrinsic value. The researcher’s interest lies in understanding the case itself.
Characteristics:
- Driven by the researcher’s interest in the particular case.
- Not meant to generalize findings to broader contexts.
- Focuses on gaining a deep understanding of the specific case.
Example: Studying a particularly successful start-up to understand its founder’s unique leadership style.
5. Instrumental Case Study
Definition: An instrumental case study examines a particular case to gain insights into a broader issue. The case serves as a tool for understanding something more general.
Characteristics:
- The case itself is not the focus; rather, it is a vehicle for exploring broader principles or theories.
- Helps apply findings to similar situations or cases.
- Useful for theory testing or development.
Example: Studying a well-known patient’s therapy process to understand the general principles of effective psychological treatment.
Methods of Conducting a Case Study
Case studies can involve various research methods to collect data and analyze the case comprehensively. The primary methods include interviews, observations, document analysis, and surveys.
1. Interviews
Definition: Interviews allow researchers to gather in-depth information from individuals involved in the case. These interviews can be structured, semi-structured, or unstructured, depending on the study’s goals.
- Develop a list of open-ended questions aligned with the study’s objectives.
- Conduct interviews with individuals directly or indirectly involved in the case.
- Record, transcribe, and analyze the responses to identify key themes.
Example: Interviewing employees, managers, and clients in a company to understand the effects of a new business strategy.
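To make the analysis step concrete, here is a minimal Python sketch of tallying a hypothetical codebook of theme keywords across interview excerpts. The themes, keywords, and transcript text are invented for illustration; in practice human coders assign themes, and software only supports the bookkeeping.

```python
# Minimal sketch: tally theme-keyword hits across interview transcripts.
# The codebook and transcripts below are hypothetical placeholders.
from collections import Counter

THEMES = {  # hypothetical codebook: theme -> indicative keywords
    "communication": ["meeting", "email", "feedback"],
    "workload": ["deadline", "overtime", "stress"],
    "autonomy": ["flexible", "decide", "own schedule"],
}

def code_transcript(text: str) -> Counter:
    """Count how often each theme's keywords appear in one transcript."""
    text = text.lower()
    return Counter({theme: sum(text.count(kw) for kw in kws)
                    for theme, kws in THEMES.items()})

transcripts = [
    "The flexible schedule lets me decide when to work, but email volume grew.",
    "Deadlines and overtime increased; feedback now arrives only in the weekly meeting.",
]

totals = Counter()
for transcript in transcripts:
    totals.update(code_transcript(transcript))

for theme, hits in totals.most_common():
    print(f"{theme}: {hits} keyword hits")
```

Keyword counts like these only flag passages for closer reading; the interpretive work of qualitative coding remains with the researcher.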
2. Observations
Definition: Observations involve watching and recording behaviors, actions, and events within the case’s natural setting. This method provides first-hand data on interactions, routines, and environmental factors.
- Define the behaviors and interactions to observe.
- Conduct observations systematically, noting relevant details.
- Analyze patterns and connections in the observed data.
Example: Observing interactions between teachers and students in a classroom to evaluate the effectiveness of a teaching method.
3. Document Analysis
Definition: Document analysis involves reviewing existing documents related to the case, such as reports, emails, memos, policies, or archival records. This provides historical and contextual data that can complement other data sources.
- Identify relevant documents that offer insights into the case.
- Systematically review and code the documents for themes or categories.
- Compare document findings with data from interviews and observations.
Example: Analyzing company policies, performance reports, and emails to study the process of implementing a new organizational structure.
4. Surveys
Definition: Surveys are structured questionnaires administered to a group of people involved in the case. Surveys are especially useful for gathering quantitative data that supports or complements qualitative findings.
- Design survey questions that align with the research goals.
- Distribute the survey to a sample of participants.
- Analyze the survey responses, often using statistical methods.
Example: Conducting a survey among customers to measure satisfaction levels after a service redesign.
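As a rough illustration of the analysis step, the following Python sketch summarizes hypothetical 1–5 satisfaction ratings with a mean and an approximate 95% confidence interval. The response values and the normal-approximation interval are illustrative assumptions, not part of the original guide.

```python
# Minimal sketch: summarize survey ratings on a 1-5 satisfaction scale.
# The responses are invented; a real analysis would also report item-level
# results and, for small samples, use a t-based interval.
import math
import statistics as stats

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]  # hypothetical ratings

n = len(responses)
mean = stats.mean(responses)
sd = stats.stdev(responses)
se = sd / math.sqrt(n)
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se  # rough 95% CI

print(f"n={n}, mean={mean:.2f}, sd={sd:.2f}, approx. 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```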
Case Study Guide: Step-by-Step Process
Step 1: Define the Research Questions
- Clearly outline what you aim to understand or explain.
- Define specific questions that the case study will answer, such as “What factors led to X outcome?”
Step 2: Select the Case(s)
- Choose a case (or cases) relevant to your research question.
- Ensure that the case is feasible to study, accessible, and likely to yield meaningful data.
Step 3: Determine the Data Collection Methods
- Decide which methods (e.g., interviews, observations, document analysis) will best capture the information needed.
- Consider combining multiple methods to gather rich, well-rounded data.
Step 4: Collect Data
- Gather data using your chosen methods, following ethical guidelines such as informed consent and confidentiality.
- Take comprehensive notes and record interviews or observations when possible.
Step 5: Analyze the Data
- Organize the data into themes, patterns, or categories.
- Use qualitative or quantitative analysis methods, depending on the nature of the data.
- Compare findings across data sources to identify consistencies and discrepancies.
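The comparison across data sources in the last step can be supported with simple tooling. The Python sketch below groups hypothetical coded findings by theme and by data source, then flags themes supported by only one source as candidates for follow-up; all theme names and sources are invented for illustration.

```python
# Minimal sketch: check which themes are corroborated by more than one
# data source (interviews, observations, documents, surveys).
from collections import defaultdict

coded_findings = [  # hypothetical (theme, source) pairs produced during coding
    ("improved collaboration", "interview"),
    ("improved collaboration", "observation"),
    ("improved collaboration", "document"),
    ("tool fatigue", "survey"),
    ("tool fatigue", "interview"),
    ("unclear responsibilities", "observation"),
]

sources_by_theme = defaultdict(set)
for theme, source in coded_findings:
    sources_by_theme[theme].add(source)

for theme, sources in sources_by_theme.items():
    status = "consistent across sources" if len(sources) > 1 else "single source - verify"
    print(f"{theme}: {sorted(sources)} -> {status}")
```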
Step 6: Interpret Findings
- Draw conclusions based on the analysis, relating the findings to your research questions.
- Consider alternative explanations and assess the generalizability of your findings.
Step 7: Report Results
- Write a detailed report that presents your findings and explains their implications.
- Discuss the limitations of the case study and potential directions for future research.
Examples of Case Study Applications
Example 1: Business
- Objective: To understand the success factors of a high-growth tech company.
- Methods: Interviews with key executives, analysis of internal reports, and customer satisfaction surveys.
- Outcome: Insights into unique management practices and customer engagement strategies.
Example 2: Education
- Objective: To examine the impact of project-based learning on student engagement.
- Methods: Observations in classrooms, interviews with teachers, and analysis of student performance data.
- Outcome: Evidence of increased engagement and enhanced critical thinking skills among students.
Example 3: Healthcare
- Objective: To explore the effectiveness of a new mental health intervention.
- Methods: Interviews with patients, assessment of clinical outcomes, and reviews of therapist notes.
- Outcome: Identification of factors that contribute to successful treatment outcomes.
Example 4: Environmental Research
- Objective: To assess the impact of urban development on local wildlife.
- Methods: Observations of wildlife, analysis of environmental data, and interviews with residents.
- Outcome: Findings showing the effects of urban sprawl on species distribution and biodiversity.
Case studies are valuable for in-depth exploration and understanding of complex phenomena within their real-life contexts. By using methods such as interviews, observations, document analysis, and surveys, researchers can obtain comprehensive data and generate insights that are specific to the case. Whether exploratory, descriptive, or explanatory, case studies offer unique opportunities for understanding and discovering practical applications for theories.
Advancing the Application and Use of Single-Case Research Designs: Reflections on Articles from the Special Issue
Robert H. Horner and John Ferron
Abstract
This special issue of Perspectives on Behavior Science is a productive contribution to current advances in the use and documentation of single-case research designs. We focus in this article on major themes emphasized by the articles in this issue and suggest directions for improving professional standards for the design, analysis, and dissemination of single-case research.
Keywords: Single-case research design, Methods standards, Analysis advances
The application of single-case research methods is entering a new phase of scientific relevance. Researchers in an increasing array of disciplines are finding single-case methods useful for the questions they are asking and the clinical needs in their fields (Kratochwill et al., 2010; Maggin et al., 2017; Maggin & Odom, 2014; Riley-Tillman, Burns, & Kilgus, 2020). With this special issue the editors have challenged authors to articulate the advances in research design and data analysis that will be needed if single-case methods are to meet these emerging expectations. Each recruited article delves into a specific avenue of concern for advancing the use of single-case methods. The purpose of this discussion is to integrate themes identified by the authors and offer perspective for advancing the application and use of single-case methods. We provide initial context and then focus on the unifying messages the authors provide for both interpreting single-case research results and designing studies that will be of greatest benefit.
A special issue of Perspectives on Behavior Science focused on methodological advances needed for single-case research is a timely contribution to the field. There are growing efforts both to articulate professional standards for single-case methods (Kratochwill et al., 2010; Tate et al., 2016) and to advance new procedures for the analysis and interpretation of single-case studies (Manolov & Moeyaert, 2017; Pustejovsky et al., 2014; Riley-Tillman et al., 2020). Foremost among these trends is the goal of including single-case methods in the identification of empirically validated clinical practices (Slocum et al., 2014). Such practices are often labeled “evidence-based practices,” and the emerging message is that federal, state, and local agencies will join with professional associations in advancing investment in practices that have empirically documented effectiveness, efficiency, and safety. This movement depends on each discipline defining credible protocols for identifying empirically validated procedures and, in the present context, on the use of single-case methods to achieve this goal.
This special issue comes to the field following recent publication of the What Works Clearinghouse 4.1 standards for single-case design (Institute of Education Sciences, 2020). At this time, the repeated demonstrations that single-case methods are useful, valid, and increasingly well-defined hold great promise. For single-case methods to achieve the impact they promise, however, there remains a need for (1) professional acceptance of research design standards, (2) agreement on data analysis standards (both for interpreting individual studies and for larger meta-analyses), and (3) incorporation of these standards in journal review protocols, grant review protocols, and university training programs targeting research design. This special issue offers a useful foundation for advancing the field in each of these areas.
A Role for Experimental Single-Case Designs
One important theme across the articles is recognition that the scientific community is unifying in its acceptance that the core features of experimental single-case designs allow credible documentation of functional relations (experimental control). This is a large message, and one that needs to be more overtly noted across disciplines where single-case methods are less often used. Of special value is the distinction between rigorous single-case experimental designs and clinical case studies or formal descriptive time-series analyses. The iterative collection of data across time, with periodic experimenter manipulation of treatments, is useful both as a clinical tool and, when this approach is linked with designs that control for threats to internal validity, as a contribution to the advancement of science.
Combining Visual Analysis and Statistical Analysis
Another major message from the recruited articles is that interpretation of single-case research designs will benefit from (even require) incorporation of statistical tools. Single-case researchers have used visual analysis as the initial step in examining evidence (Parsonson & Baer, 1978; Ledford & Gast, 2018; Kazdin, 2021; Riley-Tillman et al., 2020). Rigorous use of visual analysis involves (1) examining the data from each phase of the study to define within-phase patterns, (2) comparing data patterns of adjacent phases, (3) comparing data patterns of similar phases, (4) examining the full set of data within a design to assess if the design has been effective at controlling threats to internal validity, and if there are at least three demonstrations of effect (each at a different point in time), and (5) determining if there are instances of noneffect or contra-indicated effect.
When assessing a single phase (or similar phases) of a study, the researcher considers (1) number of data points, (2) level (mean) score, (3) variability of scores, and (4) within-phase trend(s). When comparing adjacent phases, the researcher examines if there is a change in the pattern of data following manipulation of the independent variable. Phase comparisons are done by simultaneously assessing (1) change in level, (2) change in variability, (3) change in trend, (4) immediacy of any change in pattern, (5) degree of overlap in data between the two phases, and (6) similarity in the patterns of data from similar phases (e.g., two baseline phases).
When assessing the overall design, the researcher looks at all the data to determine if an effect (e.g., change in the pattern of the dependent variable following manipulation of the independent variable) is observed at least three different times, each at a different point in time. The researcher also examines if there are manipulations of the independent variable where change in the dependent variable did not occur or occurred in the opposite direction expected by the hypothesis under consideration.
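The within-phase and between-phase features described above can be quantified directly from the plotted data. The following Python sketch uses invented baseline (A) and intervention (B) values to illustrate one way of summarizing level, variability, trend, immediacy, and overlap; the specific formulas (for example, overlap as the proportion of intervention points above the highest baseline point) are illustrative choices, not the standards discussed in this article.

```python
# Minimal sketch: numeric summaries that parallel visual analysis of one
# baseline (A) phase and one intervention (B) phase. Data are invented.
import statistics as stats

baseline = [12, 14, 13, 15, 14]      # phase A observations across sessions
intervention = [16, 18, 20, 21, 22]  # phase B observations across sessions

def phase_summary(data):
    """Number of points, level (mean), variability (SD), and a crude trend."""
    slope = (data[-1] - data[0]) / (len(data) - 1)  # change per session
    return {"n": len(data), "level": stats.mean(data),
            "sd": stats.stdev(data), "trend": slope}

a, b = phase_summary(baseline), phase_summary(intervention)
level_change = b["level"] - a["level"]
immediacy = stats.mean(intervention[:3]) - stats.mean(baseline[-3:])
overlap = sum(x > max(baseline) for x in intervention) / len(intervention)

print("phase A:", a)
print("phase B:", b)
print(f"change in level: {level_change:.2f}")
print(f"immediacy (last 3 A vs. first 3 B): {immediacy:.2f}")
print(f"proportion of B points above all A points: {overlap:.2f}")
```

Summaries like these complement, rather than replace, inspection of the full plotted series, which is where outliers and within-phase shifts become apparent.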
At present there is active discussion about the need for visual analysis as a component in the analysis protocol for single-case studies (Institute of Education Sciences, 2020). It is clear that the number of data points per phase, the mean of these points, variability, and within-phase trend are all easily calculable. As the authors of articles in this issue note, there also are creative approaches to examining whether there is change in the data patterns across adjacent phases. We view these approaches as major advances and positive assets to the task of interpreting single-case evidence. We also recognize, however, that none of the proposed statistical options simultaneously examines the full set of variables traditionally used to guide visual analysis (level, trend, variability, immediacy, overlap, similarity of pattern across similar phases), nor do they include protocols for adjusting the weight given to each variable when assessing an effect (e.g., level is weighted differently in phases with stable data patterns than in phases with strong trends). Most important, visual analysis offers a more nuanced interpretation of data patterns. The role of outliers, within-phase shifts in data patterns, and shifts in data patterns at similar times (within a multiple baseline design) are more apparent via visual analysis, and these are useful sources of information for assessing the stability and clinical relevance of effects. At this point we continue to see visual analysis as the appropriate first step in the assessment of single-case studies, but we strongly support the addition of statistical tools that yield valuable quantitative summaries of specific aspects of the analysis.
Align Data Analysis with Research Purpose
A theme that emerges in this special issue is the importance of aligning the aspects of the analysis that are quantified with the purposes of the study. We are fortunate to see in this special issue a variety of quantitative summaries that are tailored to meet a variety of purposes. There are methods helpful in estimating the size of the average treatment effect, and these vary depending on whether the focus is on quantifying, in a standardized way, the change in level (Cox et al., this issue) or a change in slope or variability (Manolov et al., this issue-a). In addition, there are methods to quantify the consistency of effects across replications (Manolov et al., this issue-b) and other methods to summarize the degree to which the size of effects relates to characteristics of the participants (Moeyaert et al., this issue). There are also estimates of the probability of the observed difference occurring in the absence of a treatment effect (Friedel et al., this issue; Manolov et al., this issue-b), methods that rely on a series of probability estimates to aid in the interpretation of functional analyses (Kranak & Hall, this issue), and summaries used to identify overselectivity (Mason et al., this issue). In each case a strong rationale is available to support specific conditions where the proposed analysis would be useful. The important message is that no one analysis is applicable to all conditions, and clarifying the purpose and structure of a specific study is critical when deciding which analysis to implement.
In addition to aligning the statistical analysis with the study purpose, there is a need to align the logic and assumptions underlying the quantitative summary with the design and data from the single-case study. For example, the interpretation of a change in level as a measure of the size of the effect is made more meaningful when the experimental design controls for threats to internal validity and a visual analysis reveals an absence of trends, an absence of level shifts that are not coincident with intervention, and a problematic level of baseline responding. Probabilities based on randomization tests (Manolov et al., this issue-b) are more meaningfully interpreted when the design incorporates randomization and the data permutations are restricted to the possible random assignments, whereas probabilities based on Monte Carlo resampling methods (Friedel et al., this issue) rest on an assumption of exchangeability and are thus more meaningful when the time series are stable.
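As a concrete illustration of restricting permutations to the possible random assignments, the Python sketch below runs a start-point randomization test on an invented AB series in which the intervention onset is assumed to have been randomly selected from a set of eligible sessions. The data, the eligible start points, and the mean-shift statistic are hypothetical choices for illustration, not the specific procedures proposed by Manolov et al. or Friedel et al.

```python
# Minimal sketch: start-point randomization test for an AB series whose
# intervention onset was randomly drawn from a set of eligible sessions.
# The reference distribution uses only those possible assignments.
import statistics as stats

series = [12, 14, 13, 15, 14, 16, 18, 20, 21, 22]  # invented observations
actual_start = 5                  # intervention began at session index 5
eligible_starts = range(3, 8)     # start points the design allowed

def mean_shift(data, start):
    """Post-start mean minus pre-start mean for a candidate start point."""
    return stats.mean(data[start:]) - stats.mean(data[:start])

observed = mean_shift(series, actual_start)
reference = [mean_shift(series, s) for s in eligible_starts]
p_value = sum(effect >= observed for effect in reference) / len(reference)

print(f"observed mean shift: {observed:.2f}")
print(f"randomization p-value: {p_value:.2f}")
```

Note that the smallest attainable p-value is 1 divided by the number of eligible assignments, so designs intended for such tests need enough randomization points to make the result informative.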
Single-case researchers will increasingly be expected to integrate statistical analyses in their reporting of results. The number of statistical options will continue to expand, and the analyses will become increasingly easy to implement through software applications. For the field to capitalize on these advancements, it will be important for single-case researchers to be flexible, selecting quantifications that are well matched to their purposes, study design, data, and visual analyses. Because single-case researchers cannot routinely rely on one specific quantification, efforts have begun to provide guidance in selecting among quantitative options (Fingerhut et al., 2020; Manolov & Moeyaert, 2017; Manolov et al., this issue-a). These efforts will need to be extended so they include the techniques developed and illustrated by authors of this special issue, as well as methods that will be developed in the future to meet the varied needs of single-case researchers.
Computer Applications Supporting Analysis of Single-Case Designs
We also acknowledge the value of computer applications that can assist in the analysis of data from single-case designs and make the use of statistical tools more accessible. The ExPRT application developed by Joel Levin and colleagues is one such program that provides rapid interpretation of single-case designs that have incorporated randomization criteria (Gafurov & Levin, 2021). The logic used by ExPRT is consistent with the approach to analysis of alternating treatment designs (ATDs) proposed by Manolov et al. (this issue-b) and is likely to prompt single-case researchers to consider incorporation of randomization options in the design of future experiments. The value of computer applications also is apparent in the Automated Nonparametric Statistical Analysis (ANSA) app offered by Kranak and Hall (this issue) as a tool for facilitating the interpretation of functional analysis data using alternating treatment designs. In addition, many of the effect sizes discussed by Manolov et al. (this issue-a) can be readily computed using computer applications, such as the Single-Case Effect Size Calculator (Pustejovsky & Swan, 2018) and SCDHLM (Pustejovsky et al., 2021). We anticipate that an increasing number of computer applications for interpreting single-case data will become available as statistical strategies gain acceptance.
Falligant et al. (this issue) extend this theme by reviewing emerging statistical strategies for improving analysis of time series data. They summarize data analytic methods that will both benefit experimental studies and be especially useful in the interpretation of clinical data (e.g., with designs that may not meet experimental requirements for control of threats to internal validity). Their message is joined by Cox et al. (this issue) in emphasizing the value of collecting rigorous time series data in clinical contexts even when experimental designs are contraindicated. The consistent message is that combining visual analysis with supplemental statistical assessment has value both for clinical decision making and for advancing the science within a discipline. The improved array of statistical options, and the increasing ease with which they can be applied to time series data, make integration of visual and statistical analysis a likely standard for the future.
Implications for Designing Single-Case Research
The articles in this special issue emphasize innovative approaches to the analysis of single-case research data. But the authors also offer important considerations for research designs. Two articles report procedures for identifying the role of intervention components (Cox et al., this issue; Mason et al., this issue). Too little emphasis has been given to the role of single-case designs in examining moderator variables, interaction effects, intervention components, and sustained impact. Few interventions are effective across all population groups, all contexts, and all challenges. Effective research designs need to allow identification not only of the impact of an intervention in a specific context but also of the conditions where the intervention is not effective. Likewise, a growing number of behavioral interventions include multiple procedures. Identifying the respective value of each procedural component, and the most efficient combination of components, is a worthy challenge for researchers and the creative application of single-case designs.
Single-case studies designed to examine component interactions or setting specificity may benefit from use of complex single-case designs that combine multiple baseline, alternating treatment, and/or reversal elements (Kazdin, 2021). In other cases, analysis approaches may be helpful to both document effects and guide future studies. Cox et al. (this issue) offer examples for separating the independent and combined effects of behavioral interventions and medication on reduction of problem behavior for individuals with intellectual disabilities. Mason et al. (this issue) likewise document how statistical modeling can be used to isolate elements of stimulus control and document with greater precision the presence of stimulus overselectivity.
Research Protocols
Three articles in this issue focus on research protocols that will facilitate the inclusion of single-case research in larger meta-analyses documenting evidence-based practices. Aydin and Yassikaya (this issue) focus on the need for transforming graphic data into spreadsheets that can be used for statistical analysis. They report on the value of the PlotDigitizer application for extracting graphic data and provide documentation of the validity and reliability of this tool for delivering the data in a format needed for supplemental statistical analysis. Manolov et al. (this issue-a, b) propose procedures for both selecting and reporting the measures employed in any study to avoid measurement bias and misinterpretation, and Dowdy et al. (this issue) likewise encourage procedures to identify possible publication bias. These authors promote the value of prepublication of research plans as a growing option that is both practical and valuable for maximizing rigorous and ethically implemented research protocols.
The major message from articles in this special issue is that single-case research designs are available and functional for advancing both our basic science and clinical technology. The efforts over the past 15 years to define professional design and analysis standards for single-case methods have been successful. But as the articles in this special issue show, single-case research methods are continuing to evolve. Innovative statistical procedures are improving the precision and credibility of single-case research analysis and posing important considerations for novel research design options. These innovations will continue to challenge prior assumptions, and open new opportunities. Each innovation will receive its own critical review, but collectively, the field is benefiting from the creative recommendations exemplified by the authors of this special issue.
Declarations
Conflict of interest.
We have no known conflict of interest to disclose.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Aydin, O., & Yassikaya, M. Y. (this issue). Validity and reliability analysis of the Plot Digitizer software program for data extraction from single-case graphs. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00284-0
- Cox, A., Pritchard, D., Penney, H., Eiri, L., & Dyer, T. (this issue). Demonstrating an analyses of clinical data evaluating psychotropic medication reductions and the ACHIEVE! Program in adolescents with severe problem behavior. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-020-00279-3
- Dowdy, A., Hantula, D., Travers, J. C., & Tincani, M. (this issue). Meta-analytic methods to detect publication bias in behavior science research. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00303-0
- Falligant, J., Kranak, M., & Hagopian, L. (this issue). Further analysis of advanced quantitative methods and supplemental interpretative aids with single-case experimental designs. Perspectives on Behavior Science. Advance online publication.
- Fingerhut, J., Marbou, K., & Moeyaert, M. (2020). Single-case metric ranking tool (Version 1.2) [Microsoft Excel tool]. 10.17605/OSF.IO/7USBJ
- Friedel, J., Cox, A., Galizio, A., Swisher, M., Small, M., & Perez, S. (this issue). Monte Carlo analyses for single-case experimental designs: An untapped resource for applied behavioral researchers and practitioners. Perspectives on Behavior Science. Advance online publication.
- Gafurov, B. S., & Levin, J. R. (2021, June). ExPRT (Excel Package of Randomization Tests): Statistical analyses of single-case intervention data (Version 4.2.1) [Computer software]. https://ex-prt.weebly.com
- Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, & What Works Clearinghouse. (2020). What Works Clearinghouse procedures handbook (Version 4.1). https://ies.ed.gov/ncee/wwc/Docs/referenceresources/WWC-Procedures-Handbook-v4-1-508.pdf
- Kazdin, A. E. (2021). Single-case research designs: Methods for clinical and applied settings (3rd ed.). Oxford University Press.
- Kranak, M., & Hall, S. (this issue). Implementing automated nonparametric statistical analysis on functional analysis data: A guide for practitioners and researchers. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00290-2
- Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. What Works Clearinghouse. https://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf
- Ledford, J., & Gast, D. (2018). Single case research methodology: Applications in special education and behavioral sciences (3rd ed.). Taylor & Francis.
- Maggin, D., & Odom, S. (2014). Evaluating single-case research data for systematic review: A commentary for the special issue. Journal of School Psychology, 52(2), 237–241. 10.1016/j.jsp.2014.01.002
- Maggin, D. M., Pustejovsky, J. E., & Johnson, A. H. (2017). A meta-analysis of school-based group contingency interventions for students with challenging behavior: An update. Remedial & Special Education, 38(6), 353–370. 10.1177/0741932517716900
- Manolov, R., & Moeyaert, M. (2017). Recommendations for choosing single-case data analytical techniques. Behavior Therapy, 48(1), 97–114. 10.1016/j.beth.2016.04.008
- Manolov, R., Moeyaert, M., & Fingerhut, J. (this issue-a). A priori justification for effect measures in single-case experimental designs. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00282-2
- Manolov, R., Tanious, R., & Onghena, P. (this issue-b). Quantitative techniques and graphical representations for interpreting results from alternating treatment design. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00289-9
- Mason, L., Otero, M., & Andrews, A. (this issue). Cochran’s Q test of stimulus overselectivity within the verbal repertoire of children with autism. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00315-2
- Moeyaert, M., Yang, P., & Xu, X. (this issue). The power to explain variability in intervention effectiveness in single-case research using hierarchical linear modeling. Perspectives on Behavior Science. Advance online publication. 10.1007/s40614-021-00304-z
- Parsonson, B., & Baer, D. (1978). The analysis and presentation of graphic data. In T. R. Kratochwill (Ed.), Single subject research (pp. 101–166). Elsevier.
- Pustejovsky, J. E., & Swan, D. M. (2018). Single-case effect size calculator (Version 0.5.1) [Web application]. https://jepusto.shinyapps.io/SCD-effect-sizes/
- Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational & Behavioral Statistics, 39(5), 368–393. 10.3102/1076998614547577
- Pustejovsky, J. E., Chen, M., & Hamilton, B. (2021). scdhlm: A web-based calculator for between-case standardized mean differences (Version 0.5.2) [Web application]. https://jepusto.shinyapps.io/scdhlm
- Riley-Tillman, T. C., Burns, M. K., & Kilgus, S. (2020). Evaluating educational interventions: Single-case design for measuring response to intervention. Guilford Press.
- Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The evidence-based practice of applied behavior analysis. The Behavior Analyst, 37, 41–56. 10.1007/s40614-014-0005-2
- Tate, R. L., Perdices, M., Rosenkoetter, U., Shadish, W., Vohra, S., Barlow, D. H., Horner, R., Kazdin, A., Kratochwill, T., McDonald, S., Sampson, M., Shamseer, L., Togher, L., Albin, R., Backman, C., Douglas, J., Evans, J. J., Gat, D., Manolov, R., et al. (2016). The Single-Case Reporting Guideline in Behavioural Interventions (SCRIBE) 2016 statement. Physical Therapy, 96(7), e1–e10. 10.2522/ptj.2016.96.7.e1
Single-Case Designs
Breanne Byiers
Single-case designs (also called single-case experimental designs) are a system of research design strategies that can provide strong evidence of intervention effectiveness by using repeated measurement to establish each participant (or case) as his or her own control. The flexibility of the designs, and the focus on the individual as the unit of measurement, have led to an increased interest in the use of single-case design research in many areas of intervention research. The purpose of this chapter is to introduce the reader to the basic logic underlying the conduct and analysis of single-case design research by describing the fundamental features of this type of research, providing examples of several commonly used designs, and reviewing the guidelines for the visual analysis of single-case study data. Additionally, current areas of consensus and disagreement in the field of single-case design research will be discussed.
Single-Case Design, Analysis, and Quality Assessment for Intervention Research
Background and purpose: The purpose of this article is to describe single-case studies and contrast them with case studies and randomized clinical trials. We highlight current research designs, analysis techniques, and quality appraisal tools relevant for single-case rehabilitation research.
Summary of key points: Single-case studies can provide a viable alternative to large group studies such as randomized clinical trials. Single-case studies involve repeated measures and manipulation of an independent variable. They can be designed to have strong internal validity for assessing causal relationships between interventions and outcomes, as well as external validity for generalizability of results, particularly when the study designs incorporate replication, randomization, and multiple participants. Single-case studies should not be confused with case studies/series (ie, case reports), which are reports of clinical management of a patient or a small series of patients.
Recommendations for clinical practice: When rigorously designed, single-case studies can be particularly useful experimental designs in a variety of situations, such as when research resources are limited, studied conditions have low incidences, or when examining effects of novel or expensive interventions. Readers will be directed to examples from the published literature in which these techniques have been discussed, evaluated for quality, and implemented.