  • Acta Universitatis Gothoburgensis
  • Gothenburg Studies in Educational Sciences

Contextual analysis: A research methodology and research approach


Research Design Review

A discussion of qualitative & quantitative research design. This post takes up contextual analysis, a fundamental attribute of qualitative research.

Unique attributes of qualitative research: Contextual analysis

One of the 10 unique or distinctive attributes of qualitative research is contextual, multilayered analysis. This is a fundamental aspect of qualitative research and, in fact, plays a central role in the unique attributes associated with data generation, i.e., the importance of context, the importance of meaning, the participant-researcher relationship, and the researcher as instrument:

“…the interconnections, inconsistencies, and sometimes seemingly illogical input reaped in qualitative research demand that researchers embrace the tangles of their data from many sources. There is no single source of analysis in qualitative research because any one research event consists of multiple variables that need consideration in the analysis phase. The analyzable data from an in-depth interview, for example, are more than just what was said in the interview; they also include a variety of other considerations, such as the context in which certain information was revealed and the interviewee–interviewer relationship.” (Roller & Lavrakas, pp. 7-8)

The ability, and the opportunity, to contextually analyze qualitative data is also tied to basic components of research design. Sample size is one example: relying on saturation "misguides the researcher towards prioritizing manifest content over the pursuit of contextual understanding derived from latent, less obvious data." Context is also a defining differentiator between qualitative and quantitative approaches; in qualitative content analysis, for instance, it is "the inductive strategy in search of latent content, the use of context, the back-and-forth flexibility throughout the analytical process, and the continual questioning of preliminary interpretations that set qualitative content analysis apart from the quantitative method."

There are many ways that context is integrated into the qualitative data analysis process to ensure quality analytical outcomes and interpretations. Various articles in Research Design Review have discussed contextually grounded aspects of the process, such as the following (each header links to the corresponding RDR article).

Unit of Analysis

“Although there is no perfect prescription for every study, it is generally understood that researchers should strive for a unit of analysis that retains the context necessary to derive meaning from the data. For this reason, and if all other things are equal, the qualitative researcher should probably err on the side of using a broader, more contextually based unit of analysis rather than a narrowly focused level of analysis (e.g., sentences).”

Meaning of Words

“How we use our words provides the context that shapes what the receiver hears and the perceptions others associate with our words. Context pertains to apparent as well as unapparent influences that take the meaning of our words beyond their proximity to other words [or] their use in recognized terms or phrases…”

Categorical Buckets

“No one said that qualitative data analysis is simple or straightforward. A reason for this lies in the fact that an important ingredient to the process is maintaining participants’ context and potential multiple meanings of the data. By identifying and analyzing categorical buckets, the researcher respects this multi-faceted reality and ultimately reaps the reward of useful interpretations of the data.”

Use of Transcripts

“Although serving a utilitarian purpose, transcripts effectively convert the all-too-human research experience that defines qualitative inquiry to the relatively emotionless drab confines of black-on-white text. Gone is the profound mood swing that descended over the participant when the interviewer asked about his elderly mother. Yes, there is text in the transcript that conveys some aspect of this mood but only to the extent that the participant is able to articulate it.”

Use of Recordings

“Unlike the transcript, the recording reminds the researcher of how and when the atmosphere in the [focus] group environment shifted from being open and friendly to quiet and inhibited; and how the particular seating arrangement, coupled with incompatible personality types, inflamed the atmosphere and seriously colored participants’ words, engagement, and way of thinking.”

Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York: Guilford Press.



Research Methods in the Social Sciences: An A-Z of key concepts (1st edn)


Contextual Analysis

Putting research into context.

  • Auke Willems
  • https://doi.org/10.1093/hepl/9780198850298.003.0014
  • Published in print: 21 January 2021
  • Published online: August 2021

This chapter reflects on contextual analysis, which examines the environment in which a given phenomenon operates. Contextual analysis is used widely across the social sciences, including history, managerial and leadership studies, organizational theory, business studies, and political science. It is useful for identifying trends and topics within unstructured data (contexts); in a sense, contextual analysis helps create order out of chaos. Its main aim is to assess when and how contexts shape a social phenomenon and vice versa. Contexts can be, inter alia, historical, institutional, cultural, demographic, technological, psychological, ideological, ontological, and epistemological. A wide body of scholarship has developed on the topic of contextual analysis. The chapter reviews this literature briefly and identifies clues and themes relevant to the social sciences.

  • contextual analysis
  • social sciences
  • unstructured data
  • social phenomenon


Printed from Oxford Politics Trove. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).

date: 07 June 2024


  • Open access
  • Published: 14 December 2022

Methodological approaches to study context in intervention implementation studies: an evidence gap map

  • Juliane Mielke,
  • Thekla Brunkert,
  • Franziska Zúñiga,
  • Michael Simon,
  • Leah L. Zullig &
  • Sabina De Geest

BMC Medical Research Methodology, volume 22, article number 320 (2022)


Within implementation science studies, contextual analysis is increasingly recognized as foundational to interventions' successful and sustainable implementation. However, inconsistencies between methodological approaches currently limit progress in studying context and guidance to standardize the use of those approaches is scant. Therefore, this study's objective was to systematically review and map current methodological approaches to contextual analysis in intervention implementation studies. The results would help us both to systematize the process of contextual analysis and identify gaps in the current evidence.

We conducted an evidence gap map (EGM) based on literature data via a stepwise approach. First, using an empirically developed search string, we randomly sampled 20% of all intervention implementation studies available from PubMed per year (2015–2020). Second, we assessed included studies that conducted a contextual analysis. Data extraction and evaluation followed the Basel Approach for CoNtextual ANAlysis (BANANA), using a color-coded rating scheme. Also based on BANANA and on the Context and Implementation of Complex Interventions (CICI) framework (an implementation framework that pays ample attention to context), we created visual maps of various approaches to contextual analysis.

Of 15,286 identified intervention implementation studies and study protocols, 3017 were screened for inclusion. Of those, 110 warranted close examination; 22% of these reported on contextual analysis.

Only one study explicitly applied a framework for contextual analysis. Data were most commonly collected via surveys ( n  = 15) and individual interviews ( n  = 13). Ten studies reported mixed-methods analyses. Twenty-two assessed meso-level contextual and setting factors, with socio-cultural aspects most commonly studied. Eighteen described the use of contextual information for subsequent project phases (e.g., intervention development/adaption, selecting implementation strategies). Nine reported contextual factors' influences on implementation and/or effectiveness outcomes.

Conclusions

This study describes current approaches to contextual analysis in implementation science and provides a novel framework for evaluating and mapping it. By synthesizing our findings graphically in figures, we provide an initial evidence base framework that can incorporate new findings as necessary. We strongly recommend further development of methodological approaches both to conduct contextual analysis and to systematize the reporting of it. These actions will increase the quality and consistency of implementation science research.

Peer Review reports

Successful implementation of interventions in real-world settings depends on the dynamic, multi-dimensional, multi-level interplay between context, intervention and implementation strategies [ 1 , 2 ]. Therefore, a thorough understanding of the implementation context is critical. This is true not only for the initial implementation, but also for sustainability and scale-up [ 3 , 4 , 5 , 6 , 7 ]. Filling this need is the role of contextual analysis, i.e., the mapping of multi-dimensional and multi-level contextual factors relevant for the implementation of an intervention in a specific setting.

Within an implementation science project, we understand contextual analysis as a separate study. It starts well before implementation and continues throughout the project. The in-depth contextual knowledge informs subsequent phases of the project, especially the development or adaptation of an intervention and choices of implementation strategies [ 8 , 9 , 10 ]. Within that setting, contextual analysis helps to interpret the studied intervention's effectiveness and implementation outcomes and guides choices of sustainability strategies [ 11 , 12 ].

Although the importance of context for implementation has been widely emphasized, little attention has been paid to its assessment in studies, partly because funding frameworks do not normally recognize this phase's importance [ 13 , 14 , 15 ]. Conceptual and methodological challenges additionally hamper the assessment of context. Even the concept of context is only partially mature [ 16 , 17 , 18 ]: a recent systematic review revealed inconsistencies in current theoretical and operational definitions [ 18 ].

No unifying definition of context yet exists. Instead, we see terms including setting—sometimes divided into inner and outer setting—environment, or system characteristics, with each signifying a slightly different perspective [ 16 , 19 , 20 ]. Further, no explicit methodological guidance yet describes how to assess, analyze or report context and setting.

Within a postpositivist paradigm, researchers tend to focus on single factors (commonly referred to as facilitators and barriers) to the exclusion of those occupying multiple levels and dimensions [ 18 , 20 , 21 ]. These factors are often selected without theoretical support; and even where contextual analyses are conducted, the findings are rarely used to inform subsequent project phases (e.g., implementation strategy choices). Additionally, no specific methods to study contexts are described, the range of psychometrically sound measurement tools (particularly to assess macro-level factors) is limited, and reporting guidelines (e.g., Standards for Reporting Implementation Studies (StaRI) [ 22 , 23 ]) are ambiguous regarding how to report contextual analysis [ 18 , 24 ].

Based on a methodology reported by Stange and Glasgow [ 5 ] within a series of patient-centered medical home research for the US Agency for Healthcare Research and Quality (AHRQ), we developed the Basel Approach for CoNtextual ANAlysis (BANANA) and applied it successfully in two implementation science projects [ 25 , 26 , 27 ]. BANANA provides methodological guidance for contextual analyses and can point to relevant aspects in reporting them. The approach's theoretical grounding is the Context and Implementation of Complex Interventions (CICI) framework [ 2 ], a meta-framework incorporating insights from previous frameworks (e.g., the Consolidated Framework for Implementation Research [ 19 ]) but also filling previous gaps (e.g., differentiating between context and setting, focusing more on macro-level factors, considering how other interventions can affect implementation). Starting from an ecological perspective, the authors conceptualized context as a "set of characteristics and circumstances that consist of active and unique factors, within which the implementation is embedded" [ 2 ], whereas setting refers to the physical location in which an intervention is to be implemented and interacts with both context and implementation [ 2 ]. Context "is an overarching concept, comprising not only a physical location but also roles, interactions and relationships at multiple levels" [ 2 ]. Contextual factors can be grouped into geographical, epidemiological, socio-cultural, socio-economic, political, legal or ethical domains, and include, e.g., the social structure, financial aspects, or the political climate.

To guide contextual analysis in implementation science projects, BANANA includes six components: (1) choosing a theory, model or framework (TMF) to guide contextual analysis (to enhance analytical granularity, the TMF can be complemented with one that is setting-specific); (2) reviewing empirical evidence about relevant contextual factors, including facilitators and barriers, as well as practice patterns related to the implementation and intervention; (3) involving relevant stakeholders in the contextual analysis. This includes implementation agents, i.e., individuals (or organizations) targeted or affected by the implementation of an intervention (target group, e.g., patients, family caregivers), who implement an intervention (implementers, e.g., healthcare professionals) or who decide on the implementation of an intervention (decision makers, e.g., policy makers and funders) [ 2 ]. Other stakeholders can include experts with advisory roles within the project (e.g., for intervention development); (4) collecting and analyzing data, combining qualitative and quantitative methods where appropriate; (5) identifying and describing the relevance of contextual factors for intervention co-design, implementation strategies and outcomes; and (6) reporting the contextual analysis [ 27 ].

To strengthen the methodology for contextual analysis in implementation science, we recognized that it would be essential first to understand the key methods currently in use. Therefore, we set out to gather an evidence base and, to identify gaps in that base, systematically reviewed and mapped the methodological approaches described. More specifically, we first aimed to determine the percentage of published intervention implementation studies reporting on contextual analysis. Second, we aimed to assess, map and evaluate those studies that reported on contextual analysis. We focused on a) which methodological approaches were used for contextual analyses and what gaps exist in current approaches, and b) which results were used to inform subsequent phases of the associated studies.

To draft an evidence gap map (EGM), we reviewed and categorized the methodologies applied to contextual analyses in the identified studies. The process was essentially a systematic search surveying the current state of methodological approaches to contextual analysis, which, as the name implies, made it well suited to identifying gaps in those approaches [ 28 , 29 , 30 ]. As for the mapping aspect, the results are presented in a user-friendly format, usually combining tables or visualizations with descriptive reports, to summarize existing evidence and facilitate methodological improvements regarding the topic, in this case contextual analysis [ 28 , 29 , 30 , 31 ]. We reported our findings according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses–Scoping Reviews (PRISMA-ScR) Checklist (Additional file 1 ) [ 32 ].

Scope of the evidence gap map (EGM) and development of research questions

As a first step, to develop comprehensive, relevant research questions, this study's authors, all experienced implementation scientists, discussed the scope and focus of the EGM [ 31 , 33 ]. As noted, a stepwise approach helped us identify relevant literature and provide a comprehensive overview of the available evidence (Additional file 2 ): First, we aimed to identify intervention implementation studies and assessed whether they included contextual analyses (Step 1). Second, focusing exclusively on studies that reported contextual analyses, we mapped both the researchers' methods (Step 2a) and how they used the results to inform further phases of their projects (Step 2b).

Inclusion/exclusion criteria

In step 1, we applied ten inclusion criteria to the prospective sample. We included (a) peer-reviewed articles or study protocols (b) concerning intervention implementation studies (c) if they employed experimental or quasi-experimental designs (d) to test intervention effectiveness (e) in real-world settings. They also needed (f) to include at least one of Brown et al.'s "7 Ps" [ 34 ], i.e., programs, practices, principles, procedures, products, pills, and policies, and (g) to report on the evaluation of the implementation pathway, including qualitative or quantitative information on the implementation process and/or on at least one implementation outcome as defined by Proctor et al. [ 35 ] (Additional file 2 ). During screening we identified a large number of feasibility studies that did not fit the scope of our study; therefore, we decided to include feasibility studies only (h) if they assessed at least one implementation outcome in addition to feasibility (e.g., acceptability). Further, only papers (i) written in English or German and (j) with available full texts were included. Because the level of detail of contextual analysis in study protocols is usually limited, we used the "cited by" function in PubMed to determine whether the intervention study had already been published and contained further information on contextual analysis. In cases where we identified both the study protocol and the related intervention implementation study, only the intervention study was included in the review. Further, we excluded studies reporting on context exclusively as part of the process evaluation or retrospectively.

Systematic searching – search strategy development

We applied Hausner et al.'s empirically based approach [ 36 ] to develop our search strategy. Following a four-step process, we first used a precise search string to identify a subset of 163 articles in PubMed that met our EGM's inclusion criteria (Additional file 3 ). Those articles were randomly assigned to a development ( n  = 81) or a validation set ( n  = 82). Second, using PubMed ReMiner ( https://hgserver2.amc.nl/cgi-bin/miner/miner2.cgi ), we identified the search terms (keywords and MeSH terms) most commonly used in the development set articles and used them to develop a search string. In a third step, this string was tested against the validation set. The final search string consisted of 22 keywords (MeSH and free terms) and achieved a sensitivity of 95.1% (i.e., it identified 75 of the 81 development records). The fourth step consisted of documenting the search string development (Additional file 3 ).
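The sensitivity check in step 3 reduces to simple set arithmetic: the share of known-relevant records that the candidate search string retrieves. A minimal sketch, where the `sensitivity` helper and the placeholder PMIDs are illustrative assumptions rather than the authors' code:

```python
# Sensitivity = fraction of known-relevant records retrieved by the search string.
# The PMIDs below are placeholders, not real PubMed identifiers.

def sensitivity(retrieved, relevant):
    """Share of the known-relevant set found by the candidate search string."""
    retrieved, relevant = set(retrieved), set(relevant)
    return len(retrieved & relevant) / len(relevant)

validation_set = ["pmid01", "pmid02", "pmid03", "pmid04"]  # held-out relevant records
retrieved = ["pmid01", "pmid02", "pmid04", "pmid99"]       # hits for the search string

print(f"sensitivity = {sensitivity(retrieved, validation_set):.1%}")  # sensitivity = 75.0%
```

Note that extra, irrelevant hits (`pmid99` above) do not lower sensitivity; they would instead affect precision, which is not the quantity optimized here.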

Our main aim was to identify and map gaps in the current evidence base on approaches to contextual analysis, not to provide an exhaustive overview of all existing evidence. Therefore, we searched only the PubMed electronic database. Further, to maximize timeliness, we limited our search to the past six years (2015–2020). Using a random number generator, we then selected a random sample of 20% of the articles identified from each year. No further filters were applied.
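The per-year 20% random sample can be drawn reproducibly with the standard library alone. This sketch fabricates (PMID, year) records and fixes a seed purely for illustration; the grouping-then-sampling logic is the point:

```python
import random
from collections import defaultdict

# Fabricated records: (pmid, publication_year) pairs spanning 2015-2020.
records = [(f"pmid{i}", 2015 + i % 6) for i in range(100)]

def sample_per_year(records, fraction=0.2, seed=42):
    """Draw a simple random sample of `fraction` of the records from each year."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    by_year = defaultdict(list)
    for pmid, year in records:
        by_year[year].append(pmid)
    return {year: rng.sample(pmids, round(len(pmids) * fraction))
            for year, pmids in sorted(by_year.items())}

sample = sample_per_year(records)
print({year: len(pmids) for year, pmids in sample.items()})  # ~3 records per year here
```

Sampling within each year (rather than from the pooled set) keeps every publication year represented in proportion, which is what the per-year 20% rule is for.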

Study selection

For step 1, using the web application Rayyan ( https://rayyan.qcri.org ), two reviewers (JM, TB) independently screened titles and abstracts of the randomly selected implementation science papers against the described inclusion criteria [ 37 ]. Second, each reviewer (JM, TB) independently screened the full texts of all included papers. In case of disagreement between the two reviewers, a third reviewer (SDG) was consulted to reach consensus. For step 2, the first two reviewers (JM, TB) independently screened the full texts of previously included intervention implementation studies against the respective eligibility criteria. Again, the third reviewer (SDG) was consulted in case of disagreement.
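The inter-rater reliability figures reported in the Results can be read as simple percent agreement between the two screeners' include/exclude calls; the records that disagree are the ones escalated to the third reviewer. The helper and the decision lists below are illustrative assumptions, not the authors' code:

```python
def percent_agreement(decisions_a, decisions_b):
    """Share of records on which two screeners made the same include/exclude call."""
    if len(decisions_a) != len(decisions_b):
        raise ValueError("both reviewers must rate the same records")
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return matches / len(decisions_a)

# Hypothetical screening decisions for four records.
reviewer_a = ["include", "exclude", "include", "exclude"]
reviewer_b = ["include", "exclude", "exclude", "exclude"]

print(f"agreement = {percent_agreement(reviewer_a, reviewer_b):.1%}")  # agreement = 75.0%
```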

Data extraction and analysis

We extracted the general data of all included intervention implementation studies (e.g., design, setting). Guided by BANANA, we extracted specific characteristics of studies that included contextual analyses: general information (e.g., whether context was analyzed at various timepoints, TMFs used), the implementation agents involved in each analysis, and the methods applied to conduct the contextual analysis (i.e., quantitative and qualitative methods). Further, we assessed the results of the contextual analyses, i.e., we noted how those results were used for subsequent study phases and whether the researchers had considered how contextual factors might influence implementation and summative outcomes (Additional file 2 ). As it quickly became clear that few studies explicitly reported the use of hybrid designs, we used Curran et al.'s description to categorize the remainder, i.e., as hybrid type 1/2/3 [ 38 ]. All extracted data were charted in an Excel file. General study characteristics were analyzed descriptively, calculating frequencies and percentages.

Mapping of identified methodological approaches

We mapped the identified approaches to contextual analyses against the components of BANANA. To provide a user-friendly format, we created color coded tables and depicted the information graphically (i.e., in an EGM). The structure of the tables follows the BANANA approach and provides a comprehensive overview of all relevant information. More detailed information on the assessed approaches can be found in the Additional files 4 and 5 .

To provide an overview of the contextual factors assessed, an EGM was developed using two software tools: EPPI-Reviewer Version 4.12.3.0 [ 39 ] and EPPI-Mapper Version 1.2.5 [ 40 ]. As the terminology and conceptualization of contextual factors varied widely across the identified studies, with none differentiating between context and setting, we used the CICI framework to categorize identified micro-, meso- and macro-level aspects [ 2 ]. Contextual factors were grouped into the seven CICI context domains (i.e., geographical, epidemiological, socio-cultural, socio-economic, political, legal and ethical) and subcategories further specifying contextual domains (e.g., infrastructure, organization structure, leadership). Setting factors as part of the context (i.e., those referring to the physical location in which an intervention is implemented) were categorized into three domains: work environment, physical characteristics and practice patterns. Since the included studies did not differentiate setting as a part of context, JM inductively categorized all identified setting factors for each domain (e.g., pertaining to work flow, capacity, availability of resources) to clearly structure and summarize them. These choices were then reviewed by TB; inconsistencies were discussed with SDG and FZ. Using dots, the evidence map concisely depicts which aspects of context and setting were assessed in each implementation and at which level. Each dot's color indicates whether the method used was quantitative or qualitative; its size indicates how many studies investigated this aspect, i.e., the larger the dot, the more studies have considered this specific aspect. As the evidence map is interactive, categories can be shown or hidden to provide simpler or more complex views, and the respective studies' references (including abstracts) can also be displayed.
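The dot logic described above amounts to a cross-tabulation: count studies per (domain, level, method) cell, let the count drive dot size, and let the method drive color. A sketch with fabricated extraction records (the study IDs, domains and structure are assumptions for illustration, not data from the review):

```python
from collections import Counter

# Fabricated extraction records: one entry per (study, contextual aspect) pair.
extractions = [
    {"study": "s1", "domain": "socio-cultural", "level": "meso",  "method": "qualitative"},
    {"study": "s2", "domain": "socio-cultural", "level": "meso",  "method": "quantitative"},
    {"study": "s3", "domain": "socio-cultural", "level": "meso",  "method": "qualitative"},
    {"study": "s2", "domain": "political",      "level": "macro", "method": "quantitative"},
]

# One dot per (domain, level, method) cell: size = number of studies, color = method.
dots = Counter((e["domain"], e["level"], e["method"]) for e in extractions)
for (domain, level, method), n in sorted(dots.items()):
    print(f"{domain:>15} | {level:>5} | color={method:<12} | size={n}")
```

A real EGM tool such as EPPI-Mapper renders these cells interactively, but the underlying aggregation is just this kind of tally.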

Evaluation of identified methodological approaches

To critically evaluate the methodological approaches to contextual analysis reported in the included studies, we grouped the extracted data via five of the six components described in the BANANA approach [ 27 ]. The sixth step of BANANA was not evaluated as it refers to the reporting of the contextual analysis, which was an inclusion criterion for the assessed studies. We applied color-coding to indicate whether each study clearly addressed a component (green), only mentioned it partly (yellow), or failed to address it (red). The color coding was done independently by two researchers (JM, TB). In cases of disagreement, a third researcher (SDG) was consulted to decide on the rating.
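The traffic-light evaluation can be represented as a per-study mapping from BANANA components 1–5 to a rating, then tallied across studies. The component labels and the example ratings below are hypothetical; only the green/yellow/red scheme comes from the text:

```python
from collections import Counter

# The paper's three-level rating scheme.
RATINGS = {"green": "clearly addressed", "yellow": "partly mentioned", "red": "not addressed"}

# Hypothetical rating of one study against BANANA components 1-5
# (component 6, reporting, was an inclusion criterion and is not rated).
study_ratings = {
    "1 TMF guiding the contextual analysis": "green",
    "2 empirical evidence reviewed": "yellow",
    "3 stakeholders involved": "green",
    "4 data collection and analysis": "red",
    "5 relevance for subsequent phases": "yellow",
}

tally = Counter(study_ratings.values())
print(tally)  # Counter({'green': 2, 'yellow': 2, 'red': 1})
```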

We used a two-phase sampling process. In Phase 1, our PubMed search returned 15,286 records. After removing duplicates, we randomly sampled 20% of the remaining studies from each of the six selected publication years (2015–2020) (n = 3017). In Phase 2, we screened this sample via the inclusion criteria noted above. Figure  1 presents a flow chart of the screening process. This left 110 intervention implementation studies for data extraction. For Phase 1, our inter-rater reliability was 76.7%; for Phase 2 it was 91.1%. As the included articles were both original studies and study protocols, in the interests of readability, we describe all results in the past tense.

Figure 1. PRISMA flow chart presenting the stepwise approach to identify relevant studies

General characteristics of included studies (Step 1)

Of the 110 extracted articles, the majority were study protocols ( n  = 90); most ( n  = 82) were from either North America ( n  = 45) or Europe ( n  = 37) (Table 1 ). The studies were conducted in a wide range of settings, the most common being primary care ( n  = 20), community care ( n  = 15), the health care system ( n  = 13) and schools ( n  = 12). Eighty-four of the designs were experimental; twenty-six were quasi-experimental. Further details of the studies are described in Additional file 4 .

Characteristics of studies reporting on contextual analysis and methodological approaches applied (Step 2)

Of the sample's 110 studies, 24 (21.8%) reported conducting contextual analyses (Table 2). As the authors of seven studies had released further information or results elsewhere, we located and extracted those records (n = 15) as well. Based on Curran et al.'s definitions [38], we identified (or, where not described, categorized) 17 hybrid type 1, five hybrid type 2, and two hybrid type 3 designs. Seven of the 24 studies assessed context at one time point, 12 at two, and five at three time points during their projects (Additional file 5).

TMFs used and empirical evidence considered for contextual analysis

The included studies used eleven distinct TMFs. These can be broadly categorized as process models (e.g., Knowledge-to-Action models), determinant frameworks (e.g., CFIR), or classic theories (e.g., social cognitive theory) [80]. One, the RE-AIM (reach, effectiveness, adoption, implementation, maintenance) Planning and Evaluation Framework, is a process and evaluation framework that includes a determinant component [81]. Only one study specifically described how it used a TMF (CFIR) for contextual analysis and how that TMF guided it [59]. The others (n = 15) referred more generally to their TMFs guiding their overall implementation process, with RE-AIM (n = 7) and the Consolidated Framework for Implementation Research (CFIR) (n = 3) cited most often. Four studies reported combining two TMFs, e.g., CFIR and RE-AIM. In addition, seven considered empirical evidence about relevant contextual factors (Fig. 2).

figure 2

Characteristics of studies that performed contextual analyses (CAs)

Note. Color coding: black = reported, white = not reported, grey = unclear; 1 TMF = theory, model, framework; 2 IP = overall implementation process in the assessed study, asterisk indicates combination of two TMFs; 3 asterisk indicates mixed-methods analysis; 4 expert group / advisory panel; quantitative, qualitative; authors described the process of how contextual information was used

Consideration of implementation agents

Only nine studies collected data from all three types of implementation agents; implementers were most often involved in the assessment of context (n = 19) (Fig. 2). Eleven studies established stakeholder groups that functioned as expert panels or advisory boards throughout the project. These included, for example, health care providers from various medical fields, people affected by specific illnesses or health conditions, leaders and administrators, and delegates of non-profit organizations or government departments (Additional file 5).

Methods applied for data collection and analysis

Of the 24 studies that reported contextual analyses, 23 clearly described their methods. Of these 23, ten explicitly reported using mixed-methods analysis, although we found that 13 in fact combined quantitative and qualitative methods. The remaining ten applied either quantitative (n = 2) or qualitative (n = 8) methods alone (Fig. 2). Quantitative data collection methods included purpose-designed surveys (n = 15), behavior mapping (n = 1), and retrospective use of national survey (n = 1) and surveillance (n = 1) data. Seven qualitative data collection methods were used: individual interviews (n = 13), focus groups (n = 13), observations (n = 2), photovoice methodology Footnote 3 (n = 2), telephone interviews (n = 1), yarning Footnote 4 (n = 1), and site visits (n = 1).

Contextual and setting factors assessed

We identified 43 separate factors. Following the CICI framework, we first categorized these as either context (n = 30) or setting factors (n = 13), then mapped them onto an evidence gap map (Additional file 6) [2]. Meso-level factors (n = 22) were assessed most commonly, accounting for almost half of all mentions; the remainder were roughly equally divided between macro- (n = 13) and micro-level factors (n = 12). Fifteen studies considered context on at least two levels. A detailed overview of all assessed factors is given in Additional file 7.
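The categorization and mapping step can be pictured as a simple tally over extracted factor records. The records below are hypothetical stand-ins; the review's actual extraction data are in its additional files.

```python
from collections import Counter

# Hypothetical extraction records: each assessed factor tagged with its
# CICI category (context vs. setting) and system level.
factors = [
    {"factor": "policies",             "category": "context", "level": "macro"},
    {"factor": "social structure",     "category": "context", "level": "meso"},
    {"factor": "practice patterns",    "category": "setting", "level": "meso"},
    {"factor": "physical environment", "category": "setting", "level": "micro"},
]

by_category = Counter(f["category"] for f in factors)          # context vs. setting
by_level = Counter(f["level"] for f in factors)                # macro / meso / micro
cells = Counter((f["category"], f["level"]) for f in factors)  # gap-map cells
```

Each `(category, level)` cell count corresponds to one cell of the evidence gap map; empty cells are the gaps.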

Contextual factors. Within context, socio-cultural factors (e.g., knowledge and perceptions, lifestyle, social structure) were most commonly assessed (n = 20); no studies reported on legal aspects. In descending order of frequency, the other contextual domains were political (e.g., policies, leadership) (n = 12), geographic (e.g., larger infrastructure) (n = 5), epidemiological (e.g., incidence and prevalence of disease) (n = 5), socio-economic (occupational aspects, living conditions), and ethical (ethical principles (n = 2), conflicts of interest (n = 2)). Seven studies described assessing inner or outer context or facilitators and barriers, but did not specify contextual factors in further detail.

Setting factors. Regarding setting, most studies' assessments focused on the work environment (e.g., availability and accessibility of resources) (n = 15). Other setting aspects assessed included practice patterns (e.g., service delivery, care planning) (n = 11) and the setting's physical characteristics (e.g., study site, physical environment) (n = 7).

Use of contextual information for subsequent project phases

Eighteen study protocols described further uses of contextual information: seventeen used it to develop and/or eleven to adapt interventions, eight used it to choose implementation strategies, and six used it to interpret study outcomes. Of these, ten described their processes for doing so. Both original study papers described further use of contextual information; however, only one reported how it was used.

Influences of contextual factors on outcomes

Twelve study protocols and both original studies reported process evaluations. Nine studies explicitly reported contextual factors' influences on implementation and/or effectiveness outcomes (Fig. 2). Various terms were used to signify similar implementation outcomes; even where studies labeled these outcomes similarly, their definitions varied. In five protocol papers, as well as in both original studies, it was unclear whether any association between contextual factors and either implementation or effectiveness outcomes had been considered.

Evaluation of methodological approaches for contextual analysis

Our evaluation of the identified approaches revealed that few studies addressed the key components of contextual analysis described in detail within BANANA (Fig. 3). The components most studies clearly described were the use of quantitative and qualitative methods (n = 12) and the involvement of implementation agents (n = 9); the latter was also partly described in most of the remaining studies (n = 15). The two least-addressed components were the use of contextual information to interpret outcomes (n = 7) and the use of empirical evidence (n = 7).

figure 3

Evaluation of contextual analyses according to the Basel Approach for Contextual Analysis (BANANA)

Note. Colors indicate whether each study clearly addressed a component (green box), mentioned it only partly (yellow box), failed to address it (red box), or whether this was unclear (grey box). 1 Including components 1–5; component five was divided into intervention/implementation strategies and implementation outcomes. As component six refers to the reporting of contextual analysis, it is not included in this figure. Further explanations of the color codes for specific components: 2 green = TMF indicated to specifically guide contextual analysis, yellow = TMF indicated to guide the overall implementation process, red = no TMF indicated; 3 green = all three types of implementation agents involved (i.e., target group, implementers, and decision makers), yellow = at least two types of implementation agents involved, red = no involvement of implementation agents described; 4 green = contextual information used for both intervention and implementation strategy development/adaptation, yellow = used for either intervention or implementation strategy development/adaptation, red = use of contextual information not described. We checked the colors used for color-blind friendliness with the Chromatic Vision Simulator Web Edition 1.3.

This study provides an overview of current methodological approaches to contextual analysis in intervention implementation studies and indicates gaps. Using EGM methodology, we applied a novel approach to summarizing and evaluating the available evidence, developing an initial evidence gap map of contextual analysis methodology. Based on a random sample of 110 intervention implementation studies, we found that fewer than one-quarter (21.8%) reported on analyses of their projects' contexts and settings. Those that did showed high variability in the methodological approaches they used, both in the analyses themselves and in how they were reported.

Using the BANANA approach, one of the first frameworks for evaluating CAs, we found widespread methodological gaps. For example, few contextual analyses were theory based: only one study explicitly reported using a TMF for its contextual analysis, and fewer than half (8/22) provided information on how they used their CA findings to inform their project's subsequent steps.

Lack of TMFs guiding contextual analysis

Building our understanding of context demands a stable theoretical basis. In addition to guiding the selection of multilevel contextual factors, this enables operationalization both of context and of setting. Still, of the 24 studies we reviewed, only one provided both a specific description of its authors' use of a TMF to guide their contextual analysis and their rationale for choosing it [59, 82]. Congruent with our findings, research shows that only 22.5–48% of implementation science studies use TMFs; and of those that do, few explicitly explain their choices [82, 83, 84, 85].

The phenomenon of “underuse, superficial use or misuse” [86] of TMFs has been described elsewhere in the implementation science literature [85, 87, 88, 89]. All of the identified TMFs consider context, but they differ widely in their focus and conceptualization of it [18, 20]. Lacking clear theoretical underpinnings, the reviewed studies' assessments of contextual factors appear arbitrary. Besides limiting both the comparability and the generalizability of their results, this gives the impression of a lack of rigor in the contextual analysis. And as this analysis provides the data for further fine-tuning of the project, any such deficiencies reduce the credibility of all subsequent study phases. The same applies to the emerging focus on differentiating setting from context, which was not reflected in the included studies and complicated our data analysis [2, 16].

Variability in conceptualization and assessment of context

Consistent with other reviews' findings, the assessed studies' conceptualizations of context tended to be vague. For example, while a diverse range of factors was assessed at numerous levels, no definitions accompanied them. The resulting vagueness (e.g., undefined references to inner and outer context, local contextual determinants, environmental-level characteristics, facilitators and barriers) hampered our efforts to understand, summarize, and compare those factors [13, 17, 18].

We noted considerable differences regarding which levels and domains of contextual factors were treated as appropriate targets for investigation. In contrast to Rogers et al.'s review [18] of studies from 2008–2018, which found that micro-level factors were assessed most often, our results for reports published over the last six years (2015–2020) showed a strong focus on the meso level, with socio-cultural contextual factors (e.g., social structure, community structure) captured most frequently. Macro-level factors (including political, legal and socio-economic aspects) were studied less commonly.

This scarcity might also reflect a shortage of tools and frameworks focusing on the macro level [20, 24, 90, 91]. However, evidence points to the importance of macro-level factors for the adoption and successful implementation of interventions. For example, policy dynamics—or rather, competing policy agendas—can create major macro-level barriers to implementation [90, 92, 93]. Further, reviews of projects that resulted in mis-implementation of interventions make clear that the most common causes of premature termination of effective interventions or programs are all funding-related (86–87.6%) [94, 95]. This drives home the point that, to maximize a project's chances of success (e.g., by recognizing changes in funding priorities at an early stage or by acquiring additional funding), its contextual analysis has to consider and closely monitor factors at every level.

However, the choice of which contextual factors to study and which stakeholders to involve at which phases depends largely on the type of intervention. This may also explain why the recorded contextual factors differed so widely between studies.

Furthermore, both the assessment of context and the reporting of contextual analysis might be biased by the analysts' level of pre-existing knowledge, i.e., researchers' inside knowledge may influence the quality or impartiality of their results. For example, researchers working in a specific setting may already be aware of certain contextual determinants (e.g., processes and practice patterns) or may gather important information informally (e.g., via chance meetings with implementation agents or observation of practice). While this information is not explicitly collected for the contextual analysis, it can lead to confirmation bias: it can leave “blind spots” in the analysis, exerting subtle pressure on analysis or interpretation to favor factors that support pre-existing hypotheses or beliefs [96].

Limited involvement of various implementation agents

Both to enhance the quality of a project's research and to ensure the appropriateness of its intervention and implementation strategies through co-design, it is crucial to involve implementation agents in diverse positions [97, 98]. This is true throughout the implementation project, but especially so in the contextual analysis. In the reviewed studies, the most commonly considered implementation agents were implementers; persons affected by the intervention and decision makers often went unrepresented. Implementation science guidelines generally recommend including the most representative possible range of implementation agents' and other stakeholders' voices, the clear assumption being that this improves the likelihood of a successful and sustainable implementation [99]. To benefit fully from implementation agents' views, a stakeholder involvement strategy should be developed, specifying both the tasks performed by the involved implementation agents and the methods used to involve them [100].

Variability in methods used for contextual analysis

For contextual analysis, either a combination of quantitative and qualitative methods or, if possible, a mixed-methods approach is recommended. Merging, connecting or embedding data obtained via various means increases both the breadth and the depth of the analysis [101, 102]. It also improves our practical understanding of how interventions can work and of which implementation strategies are needed to implement them successfully [101, 103]. We found that only 37.5% of the studies used mixed-methods approaches [104, 105]. While Rogers et al.'s sample [18] included a smaller proportion of these approaches (19%), the tendency was the same. Like theirs, our sample also used more qualitative than quantitative methods (75% and 25%, respectively, compared to Rogers et al.'s 53% and 28%).

Likewise, surveys or interviews (with individuals or focus groups) were our sample's most common methods of capturing contextual details. However, recent studies increasingly emphasize the relevance of direct (e.g., ethnographic) observations in implementation research. These allow insider perspectives, including, for example, records of contextual aspects that implementation agents may take for granted and omit to mention, or tasks performed differently than generally reported [ 106 , 107 , 108 , 109 ].

Problematically, as contextual analysis in implementation science is primarily done within a postpositivist paradigm, researchers' understandings of context are often mechanistic and reductionistic. Therefore, we recommend that they also consider constructivist perspectives, particularly rapid ethnographic methods. In addition to probing more deeply into the context (e.g., to uncover hidden processes), these require fewer resources than traditional methods. This efficiency makes them particularly useful for contextual analyses, which are rarely well-resourced [ 108 , 110 , 111 ].

Gaps in reporting and use of contextual information

As noted above, the reviewed studies showed significant gaps in their descriptions of how contextual information was later used. The results mainly informed intervention development. However, reporting gaps may have resulted from the fact that we assessed study protocols almost exclusively.

Another factor influencing the reporting of contextual analyses in study protocols or journal articles is lack of space: a 5000-word article can adequately develop and describe its central topic, but very little more. Therefore, implementation scientists should consider publishing their contextual analyses as separate papers. This would allow detailed descriptions of their methods and results, as well as of how they used those results for further study phases. Detailed reporting guidelines for contextual analysis could help researchers to structure their findings and avoid the types of “blind spots” noted above.

Strengths and weaknesses

The current study's objective was to systematically review and map the methodological approaches currently used for contextual analysis, as well as to identify gaps in those approaches. In this regard, this paper's most notable strength is its empirically developed search string. Given the reported challenges of finding implementation science literature, the string provides both high sensitivity and high specificity [112, 113, 114].

Furthermore, by applying the BANANA approach [27], we provide a novel framework for evaluating existing CA-related evidence. This framework can serve as a monitoring system for the contextual analysis literature while providing quality criteria for evaluating such analyses. Moreover, the developed EGM offers a concise and informative overview of the reviewed studies' results, facilitating comparison between them. The map is a “living document” designed to be updated by future researchers.

However, as we included primarily study protocols, the descriptions of contextual analysis in some cases lacked adequate detail. This affected our analysis of how contextual information informed the studied projects' later phases. Although we searched for study papers related to the protocols, we could not verify in every case the extent to which the planned approaches to contextual analysis were carried out, or whether adaptations were made. We suspect that one major reason for the high number of identified study protocols was publication bias: given that we only included articles reporting contextual analyses as part of intervention implementation studies, it is possible that many contextual analyses were reported in study protocols and then conducted as part of implementation projects, but never published.

The applied random sampling of study papers provided an opportunity to gain an initial overview of current evidence and its gaps. However, this approach may have excluded other relevant study papers that could have provided further insights into approaches to contextual analysis. Another possible weakness is that our strict inclusion criteria might have influenced our results. We focused on contextual analysis as a foundation for further study phases, i.e., prospective assessment of context and setting factors. As studies that conducted their contextual analyses retrospectively (e.g., as part of their process evaluation) would not enhance our understanding of contextual analysis in implementation science, we excluded them. For further research, it would be useful to adapt BANANA by planning a more comprehensive analysis, one that differentiates between the implementation project phases (e.g., the exploration, preparation, implementation and sustainment phases [115]). This would allow us to study differences in contextual analysis approaches that might be related to the different phases of an implementation project (e.g., the contextual factors assessed might differ between the exploration and sustainment phases).

To the best of our knowledge, this is the first study to provide a novel framework for evaluating and mapping methodological approaches to contextual analysis. Our evidence map provides a broad overview of methodologies applied in contextual analysis and shows which aspects of those studies can serve as models for other implementation science projects. The map is dynamic and can be updated as the literature on contextual analysis evolves.

We found wide variation regarding which methods were used for contextual analysis, which contextual factors were assessed, and how the results were applied in later study phases. Such a high level of heterogeneity is a major barrier to inter-study comparison or to later scale-up efforts. To reduce it, we recommend conducting contextual analyses according to TMFs. In addition to providing clear, proven and repeatable methodologies, these both support stronger conceptualization of the assessed context and enhance the rigor of the entire analytical process. If the described gaps are left open, contextual analysis will become a "black box" in many cases, greatly reducing its contribution over the course of implementation projects. Therefore, the implementation science community needs to take concerted action to develop, test and improve straightforward, robust methodologies for contextual analysis and reporting.

Across health care, researchers need to embrace contextual analysis as an essential element of every implementation science project; funding agencies need to develop specific opportunities to improve it; and journals need to demand full reporting on it. And every implementation science research team needs not only practical guidance on how to carry out contextual analysis, but also special guidelines on how to report their findings. Above all, we need to understand that, to achieve the quality and success that implementation science research promises, we will first need to break open the “black box” of contextual analysis.

Availability of data and materials

All data generated or analyzed during this study are included in this published article as supplementary information files.

Implementation science is the scientific study of methods promoting “the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services and care” [116].

Hybrid type 1: primary focus on testing intervention effectiveness; secondary focus on studying implementation. Hybrid type 2: equal focus on testing intervention effectiveness and implementation strategies. Hybrid type 3: primary focus on testing the effectiveness of implementation strategies; secondary focus on assessing the intervention.

Photovoice is a participatory research methodology that allows study participants to record and reflect on their experiences (e.g., of quitting smoking) by taking photos and discussing them in guided discussion sessions [117, 118].

Yarning is a highly structured qualitative research methodology for gaining knowledge from Indigenous people through storytelling [119].

Abbreviations

BANANA: Basel Approach for coNtextual ANAlysis

CICI: Context and Implementation of Complex Interventions framework

EGM: Evidence gap map

TMF: Theory, model or framework

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.


Pfadenhauer LM, Gerhardus A, Mozygemba K, Lysdahl KB, Booth A, Hofmann B, Wahlster P, Polus S, Burns J, Brereton L, et al. Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implement Sci. 2017;12(1):21.

Glasgow RE, Emmons KM. How Can We Increase Translation of Research into Practice? Types of Evidence Needed. Annu Rev Public Health. 2007;28(1):413–33.

Neta G, Glasgow RE, Carpenter CR, Grimshaw JM, Rabin BA, Fernandez ME, Brownson RC. A Framework for Enhancing the Value of Research for Dissemination and Implementation. Am J Public Health. 2015;105(1):49–57.

Stange KC, Glasgow RE. Contextual factors: the importance of considering and reporting on context in research on the patient-centered medical home. Rockville, MD: Agency for Healthcare Research and Quality; 2013. AHRQ Publication No. 13-0045-EF.

Olswang LB, Prelock PA. Bridging the Gap Between Research and Practice: Implementation Science. J Speech Lang Hear Res. 2015;58(6):S1818–26.

Daivadanam M, Ingram M, Sidney Annerstedt K, Parker G, Bobrow K, Dolovich L, Gould G, Riddell M, Vedanthan R, Webster J, et al. The role of context in implementation research for non-communicable diseases: Answering the ‘how-to’ dilemma. PLoS One. 2019;14(4):e0214454.


De Geest S, Zúñiga F, Brunkert T, Deschodt M, Zullig LL, Wyss K, Utzinger J. Powering Swiss health care for the future: implementation science to bridge “the valley of death.” Swiss Med Wkly. 2020;150:w20323.


Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(1):139.

Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):42.

Craig P, Di Ruggiero E, Frolich KL, Mykhalovskiy E, White M, Campbell R, Cummins S, Edwards N, Hunt K, Kee F. Taking account of context in population health intervention research: guidance for producers, users and funders of research. 2018.

Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):117.

Shoveller J, Viehbeck S, Di Ruggiero E, Greyson D, Thomson K, Knight R. A critical examination of representations of context within research on population health interventions. Crit Public Health. 2016;26(5):487–500.

Zullig L, Deschodt M, Liska J, Bosworth HB, De Geest S. Moving from the trial to the real world: improving medication adherence using insights of implementation science. Annu Rev Pharmacol Toxicol. 2018.

Rogers L, De Brún A, McAuliffe E. Development of an integrative coding framework for evaluating context within implementation science. BMC Med Res Methodol. 2020;20(1):158.

Pfadenhauer LM, Mozygemba K, Gerhardus A, Hofmann B, Booth A, Lysdahl KB, Tummers M, Burns J, Rehfuess EA. Context and implementation: A concept analysis towards conceptual maturity. Z Evid Fortbild Qual Gesundhwes. 2015;109(2):103–14.

Squires JE, Graham I, Bashir K, Nadalin-Penno L, Lavis J, Francis J, Curran J, Grimshaw JM, Brehaut J, Ivers N, et al. Understanding context: a concept analysis. J Adv Nurs. 2019;0(0):1–23.

Rogers L, De Brún A, McAuliffe E. Defining and assessing context in healthcare implementation studies: a systematic review. BMC Health Serv Res. 2020;20(1):591.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189.

Mettert K, Lewis C, Dorsey C, Halko H, Weiner B. Measuring implementation outcomes: An updated systematic review of measures’ psychometric properties. Implement Res Pract. 2020;1:2633489520936644.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, Rycroft-Malone J, Meissner P, Murray E, Patel A. Standards for reporting implementation studies (StaRI) statement. BMJ. 2017;356:i6795.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, Rycroft-Malone J, Meissner P, Murray E, Patel A, et al. Standards for Reporting Implementation Studies (StaRI): explanation and elaboration document. BMJ Open. 2017;7(4):e013318.

McHugh S, Dorsey CN, Mettert K, Purtle J, Bruns E, Lewis CC. Measures of outer setting constructs for implementation research: A systematic review and analysis of psychometric quality. Implement Res Pract. 2020;1:2633489520940022.

Leppla L, Mielke J, Kunze M, Mauthner O, Teynor A, Valenta S, Vanhoof J, Dobbels F, Berben L, Zeiser R, et al. Clinicians and patients perspectives on follow-up care and eHealth support after allogeneic hematopoietic stem cell transplantation: A mixed-methods contextual analysis as part of the SMILe study. Eur J Oncol Nurs. 2020;45:101723.

Yip O, Huber E, Stenz S, Zullig LL, Zeller A, De Geest SM, Deschodt M. A contextual analysis and logic model for integrated care for frail older adults living at home: the INSPIRE project. Int J Integr Care . 2021;21(2):9.

Mielke J, Leppla L, Valenta S, Zúñiga F, Zullig LL, Staudacher S, Teynor A, De Geest S. Unravelling implementation context: the Basel Approach for coNtextual ANAlysis (BANANA) in implementation science and its application in the SMILe project. Implement Sci Commun. In press.

Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5(1):28.

Snilstveit B, Bhatia R, Rankin K, Leach B. 3ie evidence gap maps: a starting point for strategic evidence production and use. 3ie Working Paper 28. New Delhi: International Initiative for Impact Evaluation (3ie); 2017.

Saran A, White H. Evidence and gap maps: a comparison of different approaches. Campbell Syst Rev. 2018;14(1):1–38.

Snilstveit B, Vojtkova M, Bhavsar A, Stevenson J, Gaarder M. Evidence & Gap Maps: A tool for promoting evidence informed policy and strategic research agendas. J Clin Epidemiol. 2016;79:120–9.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, Moher D, Peters MDJ, Horsley T, Weeks L, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. The PRISMA-ScR Statement. Ann Intern Med. 2018;169(7):467–73.

Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen RL. The Global Evidence Mapping Initiative: Scoping research in broad topic areas. BMC Med Res Methodol. 2011;11(1):92.

Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, et al. An Overview of Research and Evaluation Designs for Dissemination and Implementation. Ann Rev Public Health. 2017;38(1):1–22.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.

Hausner E, Waffenschmidt S, Kaiser T, Simon M. Routine development of objectively derived search strategies. Syst Rev. 2012;1(1):19.

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217.

Thomas J, Graziosi S, Brunton J, Ghouze Z, O’Driscoll P, Bond M. EPPI-Reviewer: advanced software for systematic reviews, maps and evidence synthesis. London: EPPI-Centre, UCL Social Research Institute; 2020.

Digital Solution Foundry and EPPI-Centre. EPPI-Mapper. London: EPPI-Centre, UCL Social Research Institute, University College London; 2020.

Apers H, Vuylsteke B, Loos J, Smekens T, Deblonde J, Van Beckhoven D, Nöstlinger C. Development and evaluation of an HIV-testing intervention for primary care: protocol for a mixed methods study. JMIR Res Protoc. 2020;9(8):e16486.

Berhanu D, Okwaraji YB, Belayneh AB, Lemango ET, Agonafer N, Birhanu BG, Abera K, Betemariam W, Medhanyie AA, Abera M. Protocol for the evaluation of a complex intervention aiming at increased utilisation of primary child health services in Ethiopia: a before and after study in intervention and comparison areas. BMC Health Serv Res. 2020;20(1):1–12.

Bidwell P, Thakar R, Sevdalis N, Silverton L, Novis V, Hellyer A, Kelsey M, van der Meulen J, Gurol-Urganci I. A multi-centre quality improvement project to reduce the incidence of obstetric anal sphincter injury (OASI): study protocol. BMC Pregnancy Childbirth. 2018;18(1):1–11.

D’Onofrio G, Edelman EJ, Hawk KF, Pantalo MV, Chawarski MC, Owens PH, Martel SH, VanVeldhuisen P, Ode N, Murphy SM. Implementation facilitation to promote emergency department-initiated buprenorphine for opioid use disorder: protocol for a hybrid type III effectiveness-implementation study (Project ED HEALTH). Implement Sci. 2019;14(1):48.

Hawk KF, D’Onofrio G, Chawarski MC, O’Connor PG, Cowan E, Lyons MS, Richardson L, Rothman RE, Whiteside LK, Owens PH, et al. Barriers and Facilitators to Clinician Readiness to Provide Emergency Department-Initiated Buprenorphine. JAMA Netw Open. 2020;3(5):e204561–e204561.

Grazioli VS, Moullin JC, Kasztura M, Canepa-Allen M, Hugli O, Griffin J, Vu F, Hudon C, Jackson Y, Wolff H, et al. Implementing a case management intervention for frequent users of the emergency department (I-CaM): an effectiveness-implementation hybrid trial study protocol. BMC Health Serv Res. 2019;19(1):28.

von Allmen M, Grazioli VS, Kasztura M, Chastonay O, Moullin JC, Hugli O, Daeppen J-B, Bodenmann P. Does Case Management Provide Support for Staff Facing Frequent Users of Emergency Departments? A Comparative Mixed-Method Evaluation of ED Staff Perception. BMC Emerg Med. 2021;21(1):92.

Chastonay OJ, Lemoine M, Grazioli VS, Canepa Allen M, Kasztura M, Moullin JC, Daeppen J-B, Hugli O, Bodenmann P. Health care providers’ perception of the frequent emergency department user issue and of targeted case management interventions: a cross-sectional national survey in Switzerland. BMC Emerg Med. 2021;21(1):4.

Bodenmann P, Kasztura M, Graells M, Schmutz E, Chastonay O, Canepa-Allen M, Moullin J, von Allmen M, Lemoine M, Hugli O, et al. Healthcare Providers’ Perceptions of Challenges with Frequent Users of Emergency Department Care in Switzerland: A Qualitative Study. Inquiry. 2021;58:00469580211028173.

Hartzler B, Lyon AR, Walker DD, Matthews L, King KM, McCollister KE. Implementing the teen marijuana check-up in schools—a study protocol. Implement Sci. 2017;12(1):1–14.

Johnson K, Gilbert L, Hunt T, Wu E, Metsch L, Goddard-Eckrich D, Richards S, Tibbetts R, Rowe JC, Wainberg ML. The effectiveness of a group-based computerized HIV/STI prevention intervention for black women who use drugs in the criminal justice system: study protocol for E-WORTH (Empowering African-American Women on the Road to Health), a Hybrid Type 1 randomized controlled trial. Trials. 2018;19(1):1–19.

Knight DK, Belenko S, Wiley T, Robertson AA, Arrigona N, Dennis M, Bartkowski JP, McReynolds LS, Becan JE, Knudsen HK. Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS): a cluster randomized trial targeting system-wide improvement in substance use services. Implement Sci. 2015;11(1):1–18.

Knight DK, Joe GW, Morse DT, Smith C, Knudsen H, Johnson I, Wasserman GA, Arrigona N, McReynolds LS, Becan JE, et al. Organizational Context and Individual Adaptability in Promoting Perceived Importance and Use of Best Practices for Substance Use. J Behav Health Serv Res. 2019;46(2):192–216.

Kwan BM, Dickinson LM, Glasgow RE, Sajatovic M, Gritz M, Holtrop JS, Nease DE, Ritchie N, Nederveld A, Gurfinkel D. The Invested in Diabetes Study Protocol: a cluster randomized pragmatic trial comparing standardized and patient-driven diabetes shared medical appointments. Trials. 2020;21(1):1–14.

Lakerveld J, Mackenbach JD, De Boer F, Brandhorst B, Broerse JE, De Bruijn G-J, Feunekes G, Gillebaart M, Harbers M, Hoenink J. Improving cardiometabolic health through nudging dietary behaviours and physical activity in low SES adults: design of the Supreme Nudge project. BMC Public Health. 2018;18(1):1–9.

Nahar P, van Marwijk H, Gibson L, Musinguzi G, Anthierens S, Ford E, Bremner SA, Bowyer M, Le Reste JY, Sodi T. A protocol paper: community engagement interventions for cardiovascular disease prevention in socially disadvantaged populations in the UK: an implementation research study. Glob Health Res Policy. 2020;5(1):1–9.

Osilla KC, Becker K, Ecola L, Hurley B, Manuel JK, Ober A, Paddock SM, Watkins KE. Study design to evaluate a group-based therapy for support persons of adults on buprenorphine/naloxone. Addict Sci Clin Pract. 2020;15(1):1–11.

Quintiliani LM, Russinova ZL, Bloch PP, Truong V, Xuan Z, Pbert L, Lasser KE. Patient navigation and financial incentives to promote smoking cessation in an underserved primary care population: A randomized controlled trial protocol. Contemp Clin Trials. 2015;45:449–57.

Rahm AK, Cragun D, Hunter JE, Epstein MM, Lowery J, Lu CY, Pawloski PA, Sharaf RN, Liang SY, Burnett-Hartman AN, et al. Implementing universal Lynch syndrome screening (IMPULSS): protocol for a multi-site study to identify strategies to implement, adapt, and sustain genomic medicine programs in different organizational contexts. BMC Health Serv Res. 2018;18(1):824.

Rotter T, Plishka C, Hansia MR, Goodridge D, Penz E, Kinsman L, Lawal A, O’Quinn S, Buchan N, Comfort P. The development, implementation and evaluation of clinical pathways for chronic obstructive pulmonary disease (COPD) in Saskatchewan: protocol for an interrupted times series evaluation. BMC Health Serv Res. 2017;17(1):1–7.

Sævareid TJL, Lillemoen L, Thoresen L, Førde R, Gjerberg E, Pedersen R. Implementing advance care planning in nursing homes–study protocol of a cluster-randomized clinical trial. BMC Geriatr. 2018;18(1):1–12.

Gjerberg E, Lillemoen L, Weaver K, Pedersen R, Førde R. Advance care planning in Norwegian nursing homes. Tidsskr Nor Laegeforen. 2017;137(6):447–50.

Thoresen L, Ahlzén R, Solbrække KN. Advance Care Planning in Norwegian nursing homes—Who is it for? J Aging Stud. 2016;38:16–26.

Thoresen L, Lillemoen L. “I just think that we should be informed” a qualitative study of family involvement in advance care planning in nursing homes. BMC Med Ethics. 2016;17(1):1–13.

Shanley DC, Hawkins E, Page M, Shelton D, Liu W, Webster H, Moritz KM, Barry L, Ziviani J, Morrissey S, et al. Protocol for the Yapatjarrathati project: a mixed-method implementation trial of a tiered assessment process for identifying fetal alcohol spectrum disorders in a remote Australian community. BMC Health Serv Res. 2019;19(1):649.

Smeltzer MP, Rugless FE, Jackson BM, Berryman CL, Faris NR, Ray MA, Meadows M, Patel AA, Roark KS, Kedia SK. Pragmatic trial of a multidisciplinary lung cancer care model in a community healthcare setting: study design, implementation evaluation, and baseline clinical results. Transl Lung Canc Res. 2018;7(1):88.

Kedia SK, Ward KD, Digney SA, Jackson BM, Nellum AL, McHugh L, Roark KS, Osborne OT, Crossley FJ, Faris N. ‘One-stop shop’: lung cancer patients’ and caregivers’ perceptions of multidisciplinary care in a community healthcare setting. Transl Lung Canc Res. 2015;4(4):456.

Gray CS, Wodchis WP, Upshur R, Cott C, McKinstry B, Mercer S, Palen TE, Ramsay T, Thavorn K. Supporting goal-oriented primary health care for seniors with complex care needs using mobile technology: evaluation and implementation of the health system performance research network, Bridgepoint electronic patient reported outcome tool. JMIR Res Protoc. 2016;5(2):e126.

Steele Gray C, Miller D, Kuluski K, Cott C. Tying eHealth tools to patient needs: exploring the use of eHealth for community-dwelling patients with complex chronic disease and disability. JMIR Res Protoc. 2014;3(4):e3500.

Gray CS, Khan AI, Kuluski K, McKillop I, Sharpe S, Bierman AS, Lyons RF, Cott C. Improving patient experience and primary care quality for patients with complex chronic disease using the electronic patient-reported outcomes tool: adopting qualitative methods into a user-centered design approach. JMIR Res Protoc. 2016;5(1):e28.

Sutherland R, Brown A, Nathan N, Janssen L, Reynolds R, Walton A, Hudson N, Chooi A, Yoong S, Wiggers J. Protocol for an effectiveness-implementation hybrid trial to assess the effectiveness and cost-effectiveness of an m-health intervention to decrease the consumption of discretionary foods packed in school lunchboxes: The ‘SWAP IT’trial. BMC Public Health. 2019;19(1):1–11.

Reynolds R, Sutherland R, Nathan N, Janssen L, Lecathelinais C, Reilly K, Walton A, Wolfenden L. Feasibility and principal acceptability of school-based mobile communication applications to disseminate healthy lunchbox messages to parents. Health Promot J Austr. 2019;30(1):108–13.

Sutherland R, Brown A, Nathan N, Yoong S, Janssen L, Chooi A, Hudson N, Wiggers J, Kerr N, Evans N, et al. A Multicomponent mHealth-Based Intervention (SWAP IT) to Decrease the Consumption of Discretionary Foods Packed in School Lunchboxes: Type I Effectiveness-Implementation Hybrid Cluster Randomized Controlled Trial. J Med Internet Res. 2021;23(6):e25256.

Taylor RS, Hayward C, Eyre V, Austin J, Davies R, Doherty P, Jolly K, Wingham J, Van Lingen R, Abraham C. Clinical effectiveness and cost-effectiveness of the Rehabilitation Enablement in Chronic Heart Failure (REACH-HF) facilitated self-care rehabilitation intervention in heart failure patients and caregivers: rationale and protocol for a multicentre randomised controlled trial. BMJ Open. 2015;5(12):e009994.

Greaves CJ, Wingham J, Deighan C, Doherty P, Elliott J, Armitage W, Clark M, Austin J, Abraham C, Frost J. Optimising self-care support for people with heart failure and their caregivers: development of the Rehabilitation Enablement in Chronic Heart Failure (REACH-HF) intervention using intervention mapping. Pilot Feasibility Stud. 2016;2(1):1–17.

Van Delft LMM, Bor P, Valkenet K, Veenhof C. Hospital in Motion, a multidimensional implementation project to improve patients’ physical behavior during hospitalization: protocol for a mixed-methods study. JMIR Res Protoc. 2019;8(4):e11341.

Van Dongen BM, Ridder MAM, Steenhuis IHM, Renders CM. Background and evaluation design of a community-based health-promoting school intervention: Fit Lifestyle at School and at Home (FLASH). BMC Public Health. 2019;19(1):1–11.

van Dongen BM, de Vries IM, Ridder MAM, Renders CM, Steenhuis IHM. Opportunities for Capacity Building to Create Healthy School Communities in the Netherlands: Focus Group Discussions With Dutch Pupils. Front Public Health. 2021;9.

Verjans-Janssen S, Van Kann DH, Gerards SM, Vos SB, Jansen MW, Kremers SP. Study protocol of the quasi-experimental evaluation of “KEIGAAF”: a context-based physical activity and nutrition intervention for primary school children. BMC Public Health. 2018;18(1):1–12.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.

Holtrop JS, Estabrooks PA, Gaglio B, Harden SM, Kessler RS, King DK, Kwan BM, Ory MG, Rabin BA, Shelton RC, et al. Understanding and applying the RE-AIM framework: Clarifications and resources. J Clin Transl Sci. 2021;5(1):e126.

Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci. 2010;5(1):1–6.

Tinkle M, Kimball R, Haozous EA, Shuster G, Meize-Grochowski R. Dissemination and Implementation Research Funded by the US National Institutes of Health, 2005–2012. Nurs Res Pract. 2013;2013:15.

Birken SA, Bunger AC, Powell BJ, Turner K, Clary AS, Klaman SL, Yu Y, Whitaker DJ, Self SR, Rostad WL, et al. Organizational theory for dissemination and implementation research. Implement Sci. 2017;12(1):62.

Liang L, Bernhardsson S, Vernooij RW, Armstrong MJ, Bussières A, Brouwers MC, Gagliardi AR. Use of theory to plan or evaluate guideline implementation among physicians: a scoping review. Implement Sci. 2017;12(1):1–12.

Birken SA, Powell BJ, Shea CM, Haines ER, Alexis Kirk M, Leeman J, Rohweder C, Damschroder L, Presseau J. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12(1):124.

Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging Research and Practice: Models for Dissemination and Implementation Research. Am J Prev Med. 2012;43(3):337–50.

Moullin JC, Sabater-Hernandez D, Fernandez-Llimos F, Benrimoj SI. A systematic review of implementation frameworks of innovations in healthcare and resulting generic implementation framework. Health Res Policy Syst. 2015;13:16.

Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the consolidated framework for implementation research. Implement Sci. 2015;11(1):72.

Watson DP, Adams EL, Shue S, Coates H, McGuire A, Chesher J, Jackson J, Omenka OI. Defining the external implementation context: an integrative systematic literature review. BMC Health Serv Res. 2018;18(1):209.

Ziemann A, Brown L, Sadler E, Ocloo J, Boaz A, Sandall J. Influence of external contextual factors on the implementation of health and social care interventions into practice within or across countries—a protocol for a ‘best fit’ framework synthesis. Syst Rev. 2019;8(1):258.

Albers B, Shlonsky A. When Policy Hits Practice – Learning from the Failed Implementation of MST-EA in Australia. Hum Serv Organ Manag Leadersh Gov. 2020;44(4):381–405.

Bruns EJ, Parker EM, Hensley S, Pullmann MD, Benjamin PH, Lyon AR, Hoagwood KE. The role of the outer setting in implementation: associations between state demographic, fiscal, and policy factors and use of evidence-based treatments in mental healthcare. Implement Sci. 2019;14(1):96.

Allen P, Jacob RR, Parks RG, Mazzucca S, Hu H, Robinson M, Dobbins M, Dekker D, Padek M, Brownson RC. Perspectives on program mis-implementation among U.S. local public health departments. BMC Health Serv Res. 2020;20(1):258.

Padek MM, Mazzucca S, Allen P, Rodriguez Weno E, Tsai E, Luke DA, Brownson RC. Patterns and correlates of mis-implementation in state chronic disease public health practice in the United States. BMC Public Health. 2021;21(1):101.

Nickerson RS. Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Rev Gen Psychol. 1998;2(2):175–220.

Ramanadhan S, Davis MM, Armstrong R, Baquero B, Ko LK, Leng JC, Salloum RG, Vaughn NA, Brownson RC. Participatory implementation science to increase the impact of evidence-based cancer prevention and control. Cancer Causes Control. 2018;29(3):363–9.

Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, Suleman R. Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expect. 2014;17(5):637–50.

Bombard Y, Baker GR, Orlando E, Fancott C, Bhatia P, Casalino S, Onate K, Denis J-L, Pomey M-P. Engaging patients to improve quality of care: a systematic review. Implement Sci. 2018;13(1):98.

Mason RJ, Searle KM, Bombard Y, Rahmadian A, Chambers A, Mai H, Morrison M, Chan KKW, Jerzak KJ. Evaluation of the impact of patient involvement in health technology assessments: A scoping review. Int J Technol Assess Health Care. 2020;36(3):217–23.

Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed Method Designs in Implementation Research. Adm Policy Ment Health. 2011;38(1):44–53.

Beidas RS, Wolk CL, Walsh LM, Evans AC Jr, Hurford MO, Barg FK. A complementary marriage of perspectives: understanding organizational social context using mixed methods. Implement Sci. 2014;9:175.

Palinkas LA. Qualitative and Mixed Methods in Mental Health Services and Implementation Research. J Clin Child Adolesc Psychol. 2014;43(6):851–61.

Green CA, Duan N, Gibbons RD, Hoagwood KE, Palinkas LA, Wisdom JP. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities. Adm Policy Ment Health. 2015;42(5):508–23.

Palinkas LA, Mendon SJ, Hamilton AB. Innovations in Mixed Methods Evaluations. Ann Rev Public Health. 2019;40(1):423–42.

Gertner AK, Franklin J, Roth I, Cruden GH, Haley AD, Finley EP, et al. A scoping review of the use of ethnographic approaches in implementation research and recommendations for reporting. Implement Res Pract. 2021;2:1–13.

Eldh AC, Rycroft-Malone J, van der Zijpp T, McMullan C, Hawkes C. Using Nonparticipant Observation as a Method to Understand Implementation Context in Evidence-Based Practice. Worldviews Evid Based Nurs. 2020;17(3):185–92.

Haines ER, Kirk MA, Lux L, Smitherman AB, Powell BJ, Dopp A, Stover AM, Birken SA. Ethnography and user-centered design to inform context-driven implementation. Transl Behav Med. 2021.

Mielke J, De Geest S, Zúñiga F, Brunkert T, Zullig LL, Pfadenhauer LM, Staudacher S. Understanding dynamic complexity in context—Enriching contextual analysis in implementation science from a constructivist perspective. Front Health Serv. 2022;2:953731.

Conte KP, Shahid A, Grøn S, Loblay V, Green A, Innes-Hughes C, Milat A, Persson L, Williams M, Thackway S, et al. Capturing implementation knowledge: applying focused ethnography to study how implementers generate and manage knowledge in the scale-up of obesity prevention programs. Implement Sci. 2019;14(1):91.

Haines ER, Dopp A, Lyon AR, Witteman HO, Bender M, Vaisson G, Hitch D, Birken S. Harmonizing evidence-based practice, implementation context, and implementation strategies with user-centered design: a case example in young adult cancer care. Implementation Sci Comm. 2021;2(1):45.

Lokker C, McKibbon KA, Wilczynski NL, Haynes RB, Ciliska D, Dobbins M, Davis DA, Straus SE. Finding knowledge translation articles in CINAHL. Medinfo. 2010;160:1179–83.

McKibbon KA, Lokker C, Wilczynski NL, Haynes RB, Ciliska D, Dobbins M, Davis DA, Straus SE. Search filters can find some but not all knowledge translation articles in MEDLINE: An analytic survey. J Clin Epidemiol. 2012;65(6):651–9.

Mielke J, Brunkert T, Zullig LL, Bosworth HB, Deschodt M, Simon M, De Geest S. Relevant Journals for Identifying Implementation Science Articles: Results of an International Implementation Science Expert Survey. Front Public Health. 2021;9:639192.

Aarons GA, Hurlburt M, Horwitz SM. Advancing a Conceptual Model of Evidence-Based Practice Implementation in Public Service Sectors. Adm Policy Ment Health. 2011;38(1):4–23.

Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1(1):1.

Wang C, Burris MA. Photovoice: Concept, Methodology, and Use for Participatory Needs Assessment. Health Educ Behav. 1997;24(3):369–87.

Quintiliani LM, Russinova ZL, Bloch PP, Truong V, Xuan Z, Pbert L, Lasser KE. Patient navigation and financial incentives to promote smoking cessation in an underserved primary care population: A randomized controlled trial protocol. Contemp Clin Trials. 2015;45(Pt B):449–57.

Barlo S, Boyd WE, Pelizzon A, Wilson S. Yarining as protected space: principles and protocols. AlterNative. 2020;16(2):90–8.


Acknowledgements

The authors would like to thank Sarah Musy for her support in developing the empirical search strategy and Mieke Deschodt for providing feedback on the study conceptualization. We would also like to thank Chris Shultis for language editing.

Author information

Juliane Mielke and Thekla Brunkert shared first authorship.

Authors and Affiliations

Institute of Nursing Science, Department Public Health, University of Basel, Bernoullistrasse 28, CH-4056, Basel, Switzerland

Juliane Mielke, Thekla Brunkert, Franziska Zúñiga, Michael Simon & Sabina De Geest

University Department of Geriatric Medicine FELIX PLATTER, Basel, Switzerland

Thekla Brunkert

Center for Innovation to Accelerate Discovery and Practice Transformation (ADAPT), Durham Veterans Affairs Health Care System and Department of Population Health Sciences, Duke University Medical Center, Durham, NC, USA

Leah L. Zullig

Department of Public Health and Primary Care, KU Leuven, Academic Center for Nursing and Midwifery, Louvain, Belgium

Sabina De Geest


Contributions

The study was conceptualized by JM, SDG and TB. The search strategy was developed by JM, MS and SDG. JM and TB conducted the screening, data extraction and analysis. The manuscript was drafted by JM; TB, SDG, LLZ, FZ and MS provided ongoing feedback on the interpretation of results and on the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sabina De Geest.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

JM, TB, FZ and MS have no competing interests. LLZ reports research support from Sanofi, Proteus Digital Health, and the PhRMA Foundation, as well as consulting for Novartis. SDG consults for Sanofi and Novartis – all activities are unrelated to the current work.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Checklist.

Additional file 2.

Research questions and screening tool inclusion-/exclusion criteria.

Additional file 3.

Empirical search string development.

Additional file 4.

STEP 1 - General characteristics of identified implementation intervention studies (n = 110).

Additional file 5.

STEP 2 - Study characteristics of implementation intervention studies that performed contextual analyses.

Additional file 6.

Evidence gap map.

Additional file 7.

Overview of contextual factors identified in implementation intervention studies (mapped according to the Context and Implementation of Complex Interventions (CICI) framework (1)).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mielke, J., Brunkert, T., Zúñiga, F. et al. Methodological approaches to study context in intervention implementation studies: an evidence gap map. BMC Med Res Methodol 22, 320 (2022). https://doi.org/10.1186/s12874-022-01772-w


Received : 13 July 2022

Accepted : 25 October 2022

Published : 14 December 2022

DOI : https://doi.org/10.1186/s12874-022-01772-w


  • Implementation science
  • Contextual analysis
  • Dissemination

BMC Medical Research Methodology

ISSN: 1471-2288


Contextual Analysis

  • By: Gudmund R. Iversen
  • Publisher: SAGE Publications, Inc.
  • Series: Quantitative Applications in the Social Sciences
  • Publication year: 1991
  • Online pub date: January 01, 2011
  • Discipline: Anthropology
  • Methods: Ordinary least squares, Dependent variables, Regression analysis
  • DOI: https://doi.org/10.4135/9781412983785
  • Keywords: equations, parameters
  • Print ISBN: 9780803942721
  • Online ISBN: 9781412983785

Effective use of contextual analysis - the study of how group context shapes the actions and attitudes of individuals - often requires overcoming issues such as group boundaries, the mobility of individuals within a group, overlapping groups, missing individual-level data, and the choice of statistical models. This straightforward volume offers researchers a guide for selecting the best model to use, and discusses topics such as contextual analysis with absolute effects, contextual analysis with relative effects, and the choice between treating regression coefficients as fixed parameters or as random variables.

Front Matter

  • Series Editor's Introduction
  • Introduction
  • Contingency Tables
  • Contextual Analysis With Absolute Effects
  • Contextual Analysis With Relative Effects
  • Examples of Absolute and Relative Data
  • Other Approaches

Back Matter

  • About the Author

  • Methodology
  • Open access
  • Published: 01 October 2022

Unraveling implementation context: the Basel Approach for coNtextual ANAlysis (BANANA) in implementation science and its application in the SMILe project

  • Juliane Mielke 1,
  • Lynn Leppla 1, 2,
  • Sabine Valenta 1, 3,
  • Leah L. Zullig 4,
  • Franziska Zúñiga 1,
  • Sandra Staudacher 1, 5,
  • Alexandra Teynor 6 &
  • Sabina De Geest (ORCID: orcid.org/0000-0001-6596-7237) 1, 7

Implementation Science Communications volume 3, Article number: 102 (2022)


Designing intervention and implementation strategies with careful consideration of context is essential for successful implementation science projects. Although the importance of context has been emphasized and methodology for its analysis is emerging, researchers have little guidance on how to plan, perform, and report contextual analysis. Therefore, our aim was to describe the Basel Approach for coNtextual ANAlysis (BANANA) and to demonstrate its application in an ongoing multi-site, multiphase implementation science project to develop/adapt, implement, and evaluate an integrated care model in allogeneic SteM cell transplantatIon facILitated by eHealth (the SMILe project).

BANANA builds on guidance for assessing context by Stange and Glasgow (Contextual factors: the importance of considering and reporting on context in research on the patient-centered medical home, 2013). Based on a literature review, BANANA was developed in ten discussion sessions with implementation science experts and a medical anthropologist to guide the SMILe project’s contextual analysis. BANANA’s theoretical basis is the Context and Implementation of Complex Interventions (CICI) framework. Working from an ecological perspective, CICI acknowledges contextual dynamics and distinguishes between context and setting (the implementation’s physical location).

BANANA entails six components: (1) choose a theory, model, or framework (TMF) to guide the contextual analysis; (2) use empirical evidence derived from primary and/or secondary data to identify relevant contextual factors; (3) involve stakeholders throughout contextual analysis; (4) choose a study design to assess context; (5) determine contextual factors’ relevance to implementation strategies/outcomes and intervention co-design; and (6) report findings of contextual analysis following appropriate reporting guidelines. The first three components, which partly run in parallel, form the basis both for identifying relevant contextual factors and for the remaining components of the BANANA approach.

Understanding context is indispensable for a successful implementation science project. BANANA provides much-needed methodological guidance for contextual analysis. In subsequent phases, it helps researchers apply the results to intervention development/adaptation and to the choice of contextually tailored implementation strategies. For future implementation science projects, BANANA’s principles will guide researchers first to gather relevant information on their target context and then to use it to inform all subsequent phases of the project, strengthening every part of their work and supporting their implementation goals.

Peer Review reports

Contributions to the literature

We provide a comprehensive, multi-component approach to guide contextual analysis in implementation science, i.e., the Basel Approach for coNtextual ANAlysis (BANANA).

BANANA specifically provides guidance on how to combine theories, models, and frameworks; how to use empirical evidence in contextual analysis; how to choose a study design for assessing context and setting; how to use findings from the contextual analysis to inform subsequent phases of an implementation science project (e.g., intervention development/adaptation, implementation strategies); and how to report contextual analyses.

Using a case example, we demonstrate a successful application of BANANA in an ongoing implementation science research project.

The importance of context Footnote 1 for successful and sustainable implementation has gained significant attention in implementation science (IS), with contextual analysis increasingly recognized as vital to IS methodology [ 1 , 2 , 3 , 4 ]. While the value of contextual analyses is widely accepted, guidance on how to conduct one is lacking, and no unified definition of contextual analysis in IS exists. We understand contextual analysis as a foundational phase within IS projects to which specific research questions and IS theories, models, or frameworks (TMFs) are applied [ 3 , 5 , 6 ]. It entails the mapping of relevant qualitative and quantitative information about the context (e.g., multilevel implementation determinants, practice patterns) in which an intervention will be delivered. Starting (prospectively) at the beginning of each IS project, the results of the contextual analysis become the basis of all subsequent phases: they inform intervention development or adaptation, guide choices regarding implementation strategies, help interpret implementation and effectiveness outcomes, and guide selection of sustainability strategies [ 7 , 8 , 9 , 10 ]. As context is dynamic and evolving, continuous monitoring of context, for example via repeated assessments throughout the project, is important. In our view, contextual analysis requires methodological strengthening, as very limited guidance is available so far and conceptual and methodological clarity on contextual analysis is lacking.

Conceptual inconsistencies between the applied methods and approaches hamper the development of a standardized approach [ 11 ]. In their systematic review of 64 empirical implementation studies, Rogers et al. identified over 40 distinct strategies to study context via quantitative, qualitative, and mixed-methods approaches [ 11 ]. Whereas assessment of contextual factors often focuses on the meso-level (e.g., organizational culture and climate, readiness for implementation), macro-level factors (e.g., political and economic climate) are rarely considered [ 12 , 13 , 14 , 15 , 16 ].

TMFs provide guidance on which contextual factors to study, but not on how to study context per se [ 17 , 18 , 19 ]. Commonly applied TMFs that incorporate context include the Consolidated Framework for Implementation Research (CFIR) [ 20 ], the Integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework [ 21 ], and the Theoretical Domains Framework (TDF) [ 22 ]. This emphasis on theory contrasts with an increasing number of IS studies that focus on mapping single facilitators and barriers to implementation but follow no specific theory. This absence both obscures the researchers’ rationale for choosing their variables and limits theoretical development based on empirical evidence. Such studies also commonly lack a multilevel perspective, as well as assessments of interactions between context, intervention, and implementation [ 11 , 15 , 23 , 24 , 25 , 26 , 27 ]. And what contextual information they generate is rarely linked to subsequent phases of the IS project [ 28 ]. In cases where contextual analysis is treated not as a separate foundational phase of an IS project but as an add-on, contextual data are commonly collected only retrospectively, as part of a process evaluation [ 19 , 29 ]. This precludes applying any contextual information to the IS project’s subsequent phases. Furthermore, as findings of contextual analyses are rarely published (particularly meso-level factors, e.g., regarding practice patterns), a project’s transparency may be limited: valuable methodological observations, e.g., regarding intervention development, that might be important to other researchers’ projects go unreported ([ 30 ]; Mielke J, Brunkert T, Zúñiga F, Simon M, Zullig L, De Geest S: Methodological approaches to study context in implementation science studies: an evidence gap map, Under review). 
To conclude, comprehensive guidance that covers more than individual aspects of a contextual analysis (e.g., theoretical underpinning or methods to assess context) is lacking in implementation science.

As part of a series of guidelines commissioned by the Agency for Healthcare Research and Quality, Stange and Glasgow [ 31 ] provided an initial step-wise approach to assessing and reporting contextual factors throughout the phases of patient-centered medical home research. Since this approach was not initially developed for IS projects, it lacks further details on the operationalization of context, specific methods to assess context, and the use of contextual information to inform later IS project phases. To fill these gaps and to provide guidance on “how to do” contextual analysis in IS projects, we developed the Basel Approach for coNtextual ANAlysis (BANANA), which entails six components. Accordingly, this paper has two objectives: first, to describe the six components of BANANA; and second, to describe its application to the SMILe project, i.e., the development/adaptation, implementation, and evaluation of an integrated care model (ICM) in allogeneic stem cell transplantation facilitated by eHealth (SMILe) (Table 1 ) [ 32 , 33 ].

To develop BANANA, we used a multiphase approach. First, we conducted a literature review focusing on methodological IS papers available via major electronic databases (PubMed, EMBASE, and Web of Science). To identify existing methodological approaches for contextual analysis, we screened the identified papers’ reference lists. The only authors to provide a methodological description of an entire contextual analysis were Stange and Glasgow [ 31 ]; others addressed only individual aspects, e.g., the use of TMFs or methods to study context [ 3 , 11 ]. Although Stange and Glasgow’s approach is not IS specific, we used it as a basis to develop BANANA, adapting it as necessary to guide a contextual analysis for the SMILe project and for IS projects in general. Briefly, SMILe is a multi-site, multiphase IS project (Table 1 ): Phase A entailed analyzing the context and target users’ technology acceptance, as well as developing/adapting and extending the SMILe-ICM and its setting-specific implementation strategies [ 34 , 36 , 37 ]. In fact, BANANA was originally developed to guide the contextual analysis in this phase, informing intervention development (German setting) [ 34 ] and intervention adaptation (Swiss setting). Phase B focused on the SMILe-ICM’s implementation and evaluation using a hybrid effectiveness-implementation randomized controlled trial [ 32 , 33 ].

Second, research group members (SDG, LL, SV, AT, JM) consulted with IS experts (LLZ, FZ) and a medical anthropologist (SS) in iterative discussion sessions about the identified literature and how to elaborate Stange and Glasgow’s approach for application to IS projects. We used their three-step approach as a basis, adapted it to IS (e.g., IS terminology, relevance of context for different phases of an IS project), and complemented it with identified evidence on context and its assessment (e.g., methods for data collection) as well as other relevant aspects (e.g., use of empirical evidence) (Additional file 1 ). Our understanding of context and how it is reflected in BANANA was conceptually based on the Context and Implementation of Complex Interventions (CICI) framework [ 1 ] and is described in detail elsewhere [ 38 ]. CICI is a meta-framework that, in contrast to other frameworks, explicitly focuses on the multilevel, dynamic nature of context, i.e., on interactions between intervention, implementation, and context. CICI operationalizes context across seven domains (geographical, epidemiological, socio-cultural, socio-economic, political, legal, ethical), each of which entails micro-, meso-, and macro-level contextual factors. CICI differentiates between setting and context, defining the setting as the physical location, within a context, in which an intervention is implemented [ 1 ]. It is in the setting that interactions between the intervention, the implementation, and other contextual factors occur. Thus, contextual analysis includes an assessment not only of contextual factors but also of the setting in which the implementation takes place. BANANA is built around the CICI framework’s constructs (i.e., context as multidimensional, multilevel, and dynamic) and acknowledges the differences between context and setting in its methodological guidance. 
Starting with an initial version of BANANA compiled by the first and last authors (JM, SDG), we further refined each of the components (e.g., specifying the reporting of contextual findings). After ten discussion rounds among all study authors, we reached a consensus on BANANA (i.e., its conceptual underpinning, components, and methodological aspects). The initial and final versions of BANANA, as well as the aspects of BANANA informed by the CICI framework, can be found in Additional file 1 .

The BANANA approach entails six components (Table 2 ): (1) choosing a TMF, (2) using empirical evidence, (3) involving stakeholders, (4) designing a study specifically for the contextual analysis, (5) determining the relevance of contextual factors for implementation strategies/outcomes and intervention co-design, and (6) reporting on the contextual analysis. Stakeholder involvement represents a key element of contextual analysis and thus relates to all other components. BANANA’s components are presented linearly, but depending on the project, components 1–3 can occur concurrently and together determine the need for and focus of component 4. That is, depending on the project aims and the context information already available, researchers need to carefully review whether component 4 is relevant for gaining further contextual information and to reflect on how to realize it. Working in an interdisciplinary research team that combines different expertise and skills can therefore be instrumental in planning and executing a contextual analysis. BANANA is explained in detail below; for each component, an example from the SMILe project is provided. Additional file 2 lists key resources that provide further guidance on each of BANANA’s components. A detailed description of the SMILe study and its methods for contextual analysis can be found elsewhere [ 34 ].

Component 1: Choosing a TMF to guide contextual analysis

Identification and selection of TMFs

In general, the use of TMFs is essential to inform and guide all phases of IS projects and increase the findings’ generalizability [ 24 , 41 , 42 , 43 ]. Regarding contextual analysis, a TMF can serve as “a comprehensive starting point” to identify contextual factors that influence implementation.

The selection of a framework is often perceived as difficult, as a large number of IS and other TMFs are available [ 24 , 41 , 44 , 45 ]. Therefore, following Moullin et al.’s recommendations [ 46 ], we suggest considering four criteria when selecting a TMF: (1) it is intended/designed for contextual analysis; (2) it acknowledges the multidimensional, multilevel, and dynamic nature of context; (3) it includes guidance on operationalization of concepts (e.g., contextual factors); (4) it fits the intervention and setting.

Resources that provide an overview of TMFs or support the identification, selection, and combining of TMFs include key IS papers [ 3 , 24 , 44 , 47 ] and tools such as the D&I Models Webtool [ 48 ]. To justify and report TMF selection, the implementation theory comparison and selection tool (T-CaST) can provide useful guidance [ 41 ]. Based on 16 criteria relating to applicability, usability, testability, and acceptability, T-CaST provides a first attempt to select and compare TMFs [ 41 ]. Furthermore, to ensure a TMF’s fit and applicability for a specific setting and/or context, stakeholders can be involved (cf. component 3) [ 46 ].

Combining TMFs for context and setting

As context differs from the setting Footnote 1 , we suggest combining a context-specific and a setting-specific TMF, as such combinations enhance the granularity of contextual analysis. While context-specific TMFs provide an overview of factors that may influence implementation (e.g., socio-cultural characteristics), setting-specific TMFs indicate factors that influence a specific intervention’s implementation in a specific setting (e.g., site characteristics, practice patterns, workflows, and processes within that setting). A broad variety of TMFs are available for specific settings and/or interventions, e.g., the Chronic Care Model [ 49 ] or the Primary Care Practice Development Model [ 39 ].

Case example — use of TMFs in the SMILe project

In the SMILe project, we chose the CICI framework [ 1 ] as the overarching framework for contextual analysis. In our view, as it acknowledges contextual dynamics and distinguishes between context and setting, it is currently the most mature framework available. Working with the CICI framework, we assessed relevant micro- and meso-level contextual factors from three context domains—geographic (i.e., Internet access, type of connection), epidemiological (i.e., patient demographics), and socio-cultural (i.e., self-management, health behavior). We did not explicitly assess further contextual factors: as the SMILe project leaders (LL, SV) had themselves been working for years within the SMILe-ICM’s implementation setting, they already held implicit and explicit contextual knowledge (e.g., of organizational culture, leadership, and legal aspects).

As the SMILe project’s focus is on developing and implementing an eHealth-facilitated ICM, we combined the CICI framework with the eHealth Enhanced Chronic Care Model (eCCM) to gain a deeper understanding of the target setting (the stem cell transplant center) (Fig. 1 ) [ 1 , 35 ]. The eCCM supports the re-design of acute care-oriented processes towards chronic illness management [ 35 , 49 ]. The SMILe researchers assessed factors from the model’s five building blocks (i.e., self-management support, delivery system design, clinical decision support, clinical information systems, eHealth education) [ 34 ]. Micro-level factors of interest included self-management support and technology openness; on the meso-level, they included transplant center structural characteristics, practice patterns in follow-up care, the level of chronic illness management, team composition, and clinician demographics [ 35 ]. Macro-level factors were considered but not explicitly assessed and reported (e.g., legal aspects).

Figure 1

Combination of the Context and Implementation of Complex Interventions (CICI) framework [ 1 ] and the eHealth Enhanced Chronic Care Model (eCCM) [ 35 ] to guide contextual analysis within the SMILe project. Figure adapted from Pfadenhauer et al. [ 1 ] and Gee et al. [ 35 ]

Component 2: Using empirical evidence for contextual analysis

TMFs provide a comprehensive overview of how context is conceptualized and which contextual factors are relevant for implementation. However, not all aspects mentioned in TMFs are relevant to each IS project. Therefore, using available empirical evidence can help to determine what is already known about the specific implementation context and relevant contextual factors. Four sources of evidence exist, i.e., (1) local data and information, (2) professional knowledge/clinical experience, (3) patient experiences and preferences, and (4) research [ 50 ]. The first three can be considered through stakeholder input (cf. component 3); local data can also be identified, e.g., by studying audit and performance data. To assess evidence from research, a literature review on relevant contextual factors should be conducted [ 1 , 11 , 51 ].

Additional file 3 provides an overview of the micro-, meso-, and macro-level contextual factors that our literature review identified as most commonly reported to influence implementation [ 11 , 13 , 27 , 52 ]. Additionally, Rogers et al. identified team-level factors (e.g., team characteristics and teamwork, team stability, morale) important to implementation [ 11 ]. Reviews such as these provide broad views of relevant contextual factors. However, whenever possible, researchers need to consider evidence on implementation determinants for specific interventions, target groups, or settings, e.g., Evans et al.’s research toolkit for studying organizational contextual factors influencing the implementation of ICMs [ 53 ]. That toolkit includes a framework and measurement tools to study those factors.

Empirical evidence can shed light on gaps in our understanding of context and can point to aspects of context that need further inquiry. It thus informs the focus and extent of the primary data collection in component 4; in certain circumstances, it may even reveal that no further primary data collection is required to inform subsequent steps of the implementation project [ 1 ].

Case example — use of empirical evidence in the SMILe project

In order to optimally inform the SMILe contextual analysis, all four sources of evidence were used. First, a literature review revealed limited evidence on follow-up practice patterns in allogeneic stem cell transplantation (research evidence). Other identified studies reported challenges with eHealth implementation (e.g., low adoption rates) [ 54 , 55 , 56 , 57 ], including relevant contextual factors that tend to hinder implementation (e.g., technology acceptance, interoperability of technology, financial resources) [ 58 , 59 , 60 , 61 , 62 ]. The findings of the literature review served as the basis for deciding which factors to explore in more detail as part of our contextual analysis [ 63 ]. Based on the factors identified, questionnaire surveys and interview guides for the contextual analysis were chosen—and, if necessary, complemented—to clarify our picture of the studied context. For example, as part of the contextual analysis, target patients’ technology openness was assessed, and patients’ and clinicians’ experiences of using eHealth to support health or healthy behaviors were explored [ 34 , 64 ].

Second, in addition to the literature review, our own studies in allogeneic stem cell transplantation, the SMILe team’s clinical experience, and patient feedback highlighted the challenges patients face in trying to improve their self-management behavior, e.g., regarding medication adherence or physical activity [ 63 , 65 ]. Based on this evidence, we added specific questions about self-management challenges and how to overcome them to our interviews and focus groups [ 34 ].

Furthermore, the SMILe project leaders’ knowledge of, among other things, the organizational culture, leadership, work processes, and structures in the implementation setting helped (1) to specify factors relevant to the contextual analysis (e.g., level of chronic illness management); (2) to identify relevant stakeholders in the clinical setting and to rate their influence on the implementation; (3) to provide a basic understanding of the setting’s readiness for change and its openness towards eHealth technology; and (4) to inform the study design (e.g., feasibility of the recruitment strategy and inclusion/exclusion criteria) and the selection of appropriate methods (e.g., a combination of individual and focus group interviews with clinicians was selected due to conflicting time schedules). Finally, information on the setting’s resources (e.g., financial, staffing) and the operability of the IT system was gained in individual, informal stakeholder meetings (local data and information). Those meetings were essential for the context-based development and adaptation of the SMILe-ICM, for identifying potential hindering factors for its implementation in the specific setting, and for planning ahead for its sustainability.

Component 3: Stakeholder involvement in contextual analysis

Stakeholder involvement is essential in every component of a contextual analysis. Stakeholders are “those individuals [or organizations] without whose support and feedback an organization, or a project within an organization [or beyond] cannot subsist” [ 40 ]. They can be targeted or affected by an intervention (e.g., patients and caregivers), actively implementing an intervention (e.g., healthcare practitioners), or deciding on whether it will be implemented (e.g., organizational leaders, policy makers) [ 1 , 66 , 67 , 68 , 69 ]. It is also possible to ask specific experts (e.g., epidemiologists, researchers) for input on dedicated topics.

Identification of stakeholders and development of a stakeholder strategy

Which stakeholders to involve in contextual analysis always depends on the project’s specific focus [ 40 ]. To help ensure productive and robust stakeholder involvement, developing a stakeholder strategy is key. Such a strategy indicates which stakeholders will be involved, and how, at each step of the contextual analysis; specifies each group’s tasks; and outlines methods or tools to involve each group. Essentially, stakeholder selection must be systematic, i.e., involving multiple stakeholder perspectives from every relevant level (micro, meso, and/or macro), while balancing power and bridging inter-group disparities, e.g., between patients and care specialists. Identified stakeholders can be mapped on a matrix (i.e., an influence-interest-capacity matrix) that specifies their characteristics, e.g., role, degree of influence, anticipated effects, and outcomes of involving them [ 40 ]. Throughout the project, continuous changes in context require continuous stakeholder involvement, e.g., via regular stakeholder meetings [ 10 , 30 ]. Furthermore, stakeholders’ needs must be continuously evaluated and the strategy adapted as necessary.

Stakeholder tasks and tools for involvement

Since no specific guidance is available regarding stakeholder involvement in IS projects, general guidelines such as INVOLVE [ 70 ] or the PARADIGM Patient Engagement Toolbox [ 71 ] can support researchers in planning stakeholder tasks and choosing tailored tools for stakeholder involvement. Within a contextual analysis, stakeholder tasks can include helping to choose a TMF, identifying/selecting relevant contextual factors for analysis, and evaluating and monitoring those factors throughout the project. By helping research teams interpret the findings of the contextual analysis, stakeholders can also deepen the teams’ understanding of inherent inter-factor relationships. Further tasks include supporting the development of contextually adapted interventions and implementation strategies [ 31 , 46 , 66 , 72 ]. In these ways, stakeholder involvement can contribute to an intervention’s acceptance, adoption, and feasibility. That is, engaged stakeholders add considerably to an intervention’s successful implementation and sustainment [ 73 ].

Case example — stakeholder involvement in the SMILe project

The SMILe project involves stakeholders at multiple levels throughout the project, including in the contextual analysis [ 34 ]. Potential stakeholders were identified by the SMILe research team, project leaders, and clinicians working in the field in brainstorming sessions and via one-to-one in-depth discussions. Selections were based on their expert opinions and their perceptions of each stakeholder’s influence on the implementation process and sustainability of the SMILe-ICM, as well as of each stakeholder’s interest in the SMILe-ICM. In the second center, a stakeholder matrix was developed, indicating each stakeholder’s impact/influence and the resources necessary for their involvement (Valenta S, Ribaut J, Leppla L, Mielke J, Teynor A, Koehly K, Gerull S, Grossmann F, Witzig-Brändli V, De Geest S, for the SMILe study team: Context-specific adaptation of an eHealth-facilitated, integrated care model and tailoring its implementation strategies – a mixed-methods study as a part of the SMILe implementation science project, Under review). The final stakeholder group included the target group (stem cell-transplanted patients) and implementers (transplant team members, e.g., in- and outpatient nurses, physicians, and psycho-oncologists). Furthermore, extending beyond the “typical end-users,” we also involved decision-makers (e.g., transplant directors, nursing directors, and head nurses) and other stakeholders, including hospital IT staff, medical controllers, and patients’ family members. In addition to being tremendously useful in identifying the most appropriate participants for focus groups, these stakeholders participated in data collection, supported the interpretation of study findings, helped develop/adapt the SMILe-ICM, and helped choose/adapt implementation strategies [ 34 ]. Stakeholders were involved longitudinally (i.e., over the whole project period) to keep track of changes in the dynamic, evolving context. 
Over the course of the contextual analysis, other stakeholders, including experts in medical device regulation and health insurers, were involved via individual in-depth interviews.

Component 4: Study design for contextual analysis

As contextual analysis is, in our view, a foundational phase that functions as a separate study within IS projects, it requires additional considerations regarding research methods and study design. Data collection concerning relevant contextual factors is informed by components 1–3 of BANANA, i.e., theory, empirical evidence, and stakeholder input. The choice of methods is driven by the research questions. In addition, considering that available resources (time, funding, personnel) for contextual analysis are usually constrained, researchers need to strike a “balance between speed and rigor” [ 74 ]. This balance will influence how extensively a contextual analysis can be carried out and which methods can be applied [ 75 ].

Methods and measurement tools to study context

To deepen the research team’s understanding of the context, a combination of quantitative and qualitative methods is typically used [ 11 , 76 , 77 ]. Where mixed-methods approaches are used [ 78 , 79 , 80 , 81 ], the overall focus can be on quantitative data, qualitative data, or any supported mix of the two [ 78 , 82 ].

Quantitative methods include numerous types of surveys (e.g., online surveys, paper-and-pencil questionnaires, telephone surveys), systematic interviews, direct observations, and analyses of routine data. Quantitatively assessed contextual factors include, e.g., implementation climate, organizational culture and climate, available resources, and readiness for change. Several reviews provide overviews of current measurement tools and their psychometric properties [ 14 , 15 , 25 , 83 , 84 , 85 , 86 , 87 , 88 ]. Furthermore, measurement and data extraction tools to assess aspects of context mentioned in specific frameworks are available, for instance, on the CFIR [ 89 ] and EPIS framework [ 90 ] project websites. However, before applying any such measurement tool, research teams must ensure that it is appropriate for their intended use, produces psychometrically sound results, and will be used consistently over time to ensure the comparability of results [ 16 ].

To explore qualitative contextual factors, interviews (unstructured or semi-structured), focus groups, observations, or document analysis Footnote 2 can be applied [ 91 ]. Qualitative methods are particularly suitable to identify stakeholders’ preferences and needs, values, beliefs, and attitudes and how these influence their behavior. Published recommendations for the use of qualitative methods in IS include a white paper by the National Cancer Institute [ 91 , 92 , 93 , 94 ]. Furthermore, for certain frameworks, such as the CFIR or CICI, interview guides have been developed to guide the exploration of context constructs [ 1 , 89 ].

Some of these quantitative and qualitative approaches have been criticized for focusing only on specific levels (e.g., the meso/organizational level) or for “only provid[ing] a cursory view of complex and dynamic contexts” [ 29 ]. Alongside quantitative and/or qualitative methods, however, ethnographic methods can complement both types of data, facilitating in-depth insights into organizational and contextual processes that influence implementation. An ethnographic approach can help highlight interactions within the context that remain undetected by other methods but that may have a substantial impact on the intended implementation [ 29 ]. Furthermore, details that may not be obvious to an interviewee (e.g., ritualized everyday actions, cultural and social norms), or differences between what is said and what is done, can be identified via ethnographic methods [ 95 , 96 , 97 ].

Considering the limited resources available for contextual analysis, there is an increasing trend toward rapid qualitative and rapid ethnographic approaches. For example, the Rapid Assessment Procedure-Informed Clinical Ethnography (RAPICE) method combines rapid assessment procedures with ethnography [ 74 , 98 ]. Initial evidence suggests that rapid research methods can be as effective and rigorous as traditional approaches while being more time- and cost-effective [ 98 , 99 , 100 ]. However, a research team planning to use these methods for the first time should be aware that applying them effectively and efficiently may require special training, multiple attempts, and methodological adaptations to fit their research setting [ 99 , 100 , 101 , 102 ].

Timepoints for data collection

BANANA focuses on the prospective assessment of context; however, as context is dynamic and continues to evolve, timepoints for re-assessing context should be planned throughout the IS project. This does not mean starting over with component 1 of BANANA, but rather “keeping the thermometers in the system.” This can be achieved, for instance, through repeated assessments (e.g., surveys or use of routine data), other methods (e.g., observations, site visits, document analysis), and regular exchanges or informal conversations with stakeholders [ 38 ]. Currently, little guidance is available regarding which contextual factors to record at which timepoints and how frequently [ 31 , 103 ]. Further insights may be gained from Ariadne Labs’ Atlas Initiative, which aims to develop a data repository of contextual factors related to the implementation success of different interventions in different settings and at different timepoints of analysis (before implementation, 6 weeks after implementation, and monthly thereafter) [ 104 , 105 ].

Case example — SMILe project data collection and analysis

For the SMILe contextual analysis, an explanatory mixed-methods (quantitative/qualitative) design was applied based on the research aims formulated [ 34 ]. The specific aims of this analysis are described in Table 1 . Data collection and analysis were guided by the eCCM and the CICI framework. First, questionnaire surveys were conducted with each participating center’s patients, clinicians, and transplant director. These questionnaires allowed us to assess each center’s structural characteristics, practice patterns regarding chronic illness management, overall level of chronic illness management, current self-management behavior, and technology openness and acceptance [ 34 ]. We also gathered the demographic characteristics of patients and clinicians.

The questionnaires cover the eCCM’s five building blocks and had been applied by the research team in previous work in heart and solid organ transplantation [ 64 , 106 , 107 , 108 , 109 ]. All questionnaires were adapted as appropriate to the allogeneic stem cell transplant setting. In some cases, we supplemented the questionnaires with further contextually relevant factors (e.g., patients’ acceptance of symptom monitoring and data sharing), based on aspects described in the context domain of the CICI framework ([ 35 ]; Valenta S, Ribaut J, Leppla L, Mielke J, Teynor A, Koehly K, Gerull S, Grossmann F, Witzig-Brändli V, De Geest S, for the SMILe study team: Context-specific adaptation of an eHealth-facilitated, integrated care model and tailoring its implementation strategies – a mixed-methods study as a part of the SMILe implementation science project, Under review).

Second, to support our understanding of the quantitative findings and to allow a deeper understanding of further aspects relevant to the development/adaptation and implementation of the SMILe-ICM (e.g., patients’ performance of self-management tasks, patients’ and clinicians’ barriers to technology use), we conducted focus groups with clinicians as well as focus groups and individual interviews with patients. In both cases, our interview guides followed the eCCM’s building blocks [ 34 ]. In the second center, where we implemented an adapted version of the SMILe-ICM, we explored factors facilitating or hindering the SMILe-ICM’s implementation by means of focus group discussions ([ 35 ]; Valenta S, Ribaut J, Leppla L, Mielke J, Teynor A, Koehly K, Gerull S, Grossmann F, Witzig-Brändli V, De Geest S, for the SMILe study team: Context-specific adaptation of an eHealth-facilitated, integrated care model and tailoring its implementation strategies – a mixed-methods study as a part of the SMILe implementation science project, Under review). An overview of the variables assessed in the surveys and the themes explored during the focus group and individual interviews can be found in Additional file 4 .

Quantitative and qualitative data were collected over a 1-year period. Ongoing changes in context (e.g., changes in leadership) were noted and documented by the SMILe project leaders. The team also maintained regular exchanges with stakeholders via informal conversations and official stakeholder meetings. The data analysis followed three eCCM-guided steps: (1) analysis of quantitative (descriptive tables) and qualitative results (meta-maps), (2) mapping of quantitative and qualitative findings in a joint display, and (3) reflection on findings and their implications for intervention development and choices of implementation strategies (again in a joint display) [ 34 ].

Component 5: Identifying and describing the relevance of contextual and setting factors for intervention co-design, implementation strategies, and outcomes

Implementation success and the sustainment of an intervention and its implementation strategy depend heavily on how well they align with the target context [ 8 , 110 , 111 , 112 , 113 , 114 ].

Intervention development and selection of implementation strategies

Numerous frameworks/guidelines help researchers develop interventions and select implementation strategies. One of these is the Medical Research Council (MRC) guidance for the development and evaluation of complex interventions in healthcare. Whereas context was previously considered mainly during process evaluation, i.e., retrospectively, the MRC guidance now recommends examining interactions between the intervention and context across all phases of intervention development, implementation, and evaluation (see footnote 3) [ 114 ].

Another resource addressing both intervention development and implementation strategies is Bartholomew, Parcel, and Kok’s “Intervention Mapping”—a five-step process whose foundation is a contextual analysis [ 115 ]. Other methods that can be applied to match implementation strategies to contextual factors are concept mapping, group model building, and conjoint analysis [ 116 ].

Furthermore, the CFIR–ERIC Implementation Strategy Matching Tool, originally designed to facilitate implementation strategy choices, speeds the identification of implementation strategies available in the Expert Recommendations for Implementing Change (ERIC) compilation. The ERIC compilation’s strategies address specific constructs described in the CFIR framework [ 7 ]. Just as with implementation strategies, specific sustainability strategies can also be selected to ensure that a successfully implemented intervention remains in clinical practice.

Adaptation of interventions and implementation strategies

Even where proven interventions or implementation strategies are available, adaptations are usually required to ensure their effectiveness in a new context [ 110 , 111 , 117 ]. However, before making changes, it is necessary to distinguish between core intervention components—which have to be implemented as they are to achieve a desired effect—and those adaptable to various contexts [ 111 ]. Building on the idea of Intervention Mapping, Implementation Mapping was developed for use with interventions that have already been developed and tested [ 25 ]. To ensure that an adaptation is transparent and reproducible, a description should be given of which contextual details necessitate it and how the proposed adaptation addresses those details [ 118 ]. Another resource for adapting interventions is the three-step ADAPT guidance [ 110 ]. When adapting an intervention, it is always necessary to record which intervention components or implementation strategies were adapted, in which ways, and why. Frameworks such as FRAME [ 118 ] and FRAME-IS [ 119 ] can support this process.

Interpretation of implementation and effectiveness outcomes

An intervention’s likely effects will vary across contexts and settings [ 120 ]. The findings of the contextual analysis help researchers understand the mechanisms that influence the implementation process (i.e., what was implemented and how well) and how these mechanisms are likely to influence the intervention’s intended effectiveness. Usually, this component is part of a process evaluation [ 120 ].

To describe how and why a specific intervention leads to its expected effects, as well as to trace causal pathways between intervention components, implementation strategies, and contextual factors, it will be necessary to develop a program theory [ 114 , 121 ].

Case example — relevance of the contextual analysis for development/adaptation of the SMILe-ICM and implementation strategies

Contextual analysis guided the development/adaptation of the SMILe-ICM and the selection of implementation strategies. All quantitative and qualitative findings were synthesized in a joint display and the implications for the intervention summarized. Identified gaps both in self-management (e.g., symptom recognition) and in delivery system design (e.g., chronic care delivery, continuity of care) highlighted a need to re-engineer the current acute care model towards an ICM [ 34 ]. Following the Behavior Change Wheel methodology, we considered the identified determinants to help us choose intervention functions and behavioral change techniques [ 33 , 122 ]. As patients and clinicians were open to the use of an eHealth technology but expressed concerns that technology might replace human contact, the SMILe-ICM includes both human- and technology-based components [ 33 ]. The adaptations of the SMILe-ICM and its implementation strategies followed the FRAME and FRAME-IS frameworks (Valenta S, Ribaut J, Leppla L, Mielke J, Teynor A, Koehly K, Gerull S, Grossmann F, Witzig-Brändli V, De Geest S, for the SMILe study team: Context-specific adaptation of an eHealth-facilitated, integrated care model and tailoring its implementation strategies – a mixed-methods study as a part of the SMILe implementation science project, Under review). The ERIC taxonomy was used to choose and describe context-specific implementation strategies (e.g., conducting local consensus discussions, creating new clinical teams) [ 34 , 123 ]. In addition, the contextual analysis itself represented a valuable implementation strategy: conducting a local needs assessment. Finally, as part of phase B, the implementation pathway and outcomes (i.e., acceptability, appropriateness, feasibility, and fidelity) will be assessed from patients’ and healthcare providers’ perspectives, and likely influences of context will be considered [ 32 ].

Component 6: Reporting of contextual analysis

As contextual analysis informs subsequent phases of an IS project—affecting, for example, intervention development—it is a critical component of that project and needs to be reported as such [ 124 , 125 ]. However, given the limited space available in journal articles, detailed findings of contextual analyses and their uses should be reported in separate papers. These are by no means restricted to dedicated IS journals but can also include journals with a clinical focus [ 126 ]. A more serious impediment to the reporting and dissemination of contextual findings, however, is the lack of clear, comprehensive guidelines on how to report contextual analyses [ 124 , 125 ]. For instance, the Standards for Reporting Implementation Studies (StaRI checklist) recommend the CFIR for reporting relevant contextual factors, but do not specify which aspects of the contextual analysis to report [ 20 , 127 , 128 ].

Case example — reporting of the SMILe contextual analysis

The SMILe project’s contextual analysis findings for its first study site were published in a separate paper. The same paper described the research team’s implementation strategies and outlined their findings’ implications for re-engineering stem cell transplant follow-up care [ 34 ]. A second paper described how the research team had based their intervention component and mode-of-delivery choices on information from their contextual analysis [ 33 ]. In applying the BANANA approach to the SMILe project, we focused sharply on making our decision-making processes and results transparent and replicable. That is, at each step, we ensured that both the results and the processes used to achieve them can be employed by other researchers (e.g., for scale-up).

In our view, contextual analysis should be the foundation of every IS project. As noted above, contextual analysis results inform all subsequent project phases, enhancing interventions’ implementation and sustainability in real-world settings. In comparison to previous studies on facilitators and barriers, the BANANA approach does not merely describe individual methods to study context (e.g., surveys or interviews); it provides methodological guidance on planning and conducting contextual analyses in IS projects. Furthermore, BANANA describes how contextual information can be reported and used to inform further project phases (e.g., intervention development). While we have described BANANA in terms of six individual components, not all components have to be performed in every contextual analysis, nor do they always operate sequentially (Fig. 2 ). The first three in particular—choosing a TMF, identifying empirical evidence, and involving stakeholders—are partly concurrent or can be executed in a different order, with stakeholder involvement being a key component linked to all other components. Once in place, they form a firm foundation upon which to identify and assess relevant contextual factors (component 4).

Fig. 2 Overview of the six components of the Basel Approach for coNtextual ANAlysis (BANANA)

When presenting BANANA at conferences or in workshops, participants often ask us for a checklist they can apply to their project’s contextual analysis. However, we have deliberately avoided a “checklist approach”: the aspects of context to be studied and the methods chosen always depend on the individual research project and its research questions. Applying a checklist risks oversimplifying the context and undervaluing the complex interconnections of contextual factors, many of which differ from one setting to the next [ 77 , 129 ]. In the worst cases, only superficial contextual knowledge would be generated, limiting the contextual analysis’ capacity either to inform later phases of the IS project or to ensure the implementation’s success [ 77 , 130 ]. Therefore, planning and conducting a contextual analysis usually requires a high degree of reflexivity and an experienced transdisciplinary research team with expertise in the field of IS (e.g., knowledge and use of implementation TMFs, understanding of all implementation phases) and broad knowledge of how to apply research methods.

In addition to the project and research questions, however, pre-existing contextual knowledge and the researchers’ roles influence the planning and conduct of a more targeted contextual analysis [ 38 , 131 ]. The SMILe project leaders (LL, SV) had both worked for several years in the SMILe-ICM’s target setting. For example, both have ample experience in the care of stem cell-transplanted patients as well as implicit and explicit knowledge of the target context and setting (e.g., work processes, resources available, leadership, organizational culture, and legal aspects). These experiences and their roles within the clinical setting may have been influential in shaping the focus of our contextual analysis [ 38 ]. Given their background as advanced practice nurses, specific values, assumptions, and beliefs (i.e., mental models) drive their understanding of context. This includes, for example, which contextual factors they perceive as relevant to implementation, or how they interpret specific findings of the contextual analysis (i.e., confirmation bias). However, the use of a theoretical underpinning to guide contextual analyses (i.e., the eCCM and CICI framework), as well as various evidence sources in addition to the project leaders’ professional knowledge, might counteract potential bias [ 38 ]. Furthermore, by being embedded in the context, they have an insider perspective that may ultimately be helpful not only for conducting the contextual analysis (e.g., involving stakeholders from the setting), but also for supporting the implementation and sustainability of the intervention in practice [ 38 , 132 ].

Second, although the importance of context in IS projects has been widely emphasized, funding agencies remain hesitant to fund contextual analyses, as these are not yet recognized as a crucial part of an IS project. A contextual analysis’ rigor and thoroughness both reflect the available resources, such as time, personnel, and especially funding [ 74 , 75 ]. These circumstances should be considered when evaluating a contextual analysis and interpreting its results. In projects where resources are constrained, we do not recommend omitting components of BANANA that are relevant to the individual project, but rather rethinking how the components are carried out. In terms of empirical evidence, for example, informal meetings with key stakeholders could provide sufficient understanding of relevant contextual factors, allowing the amount of data collected during the contextual analysis to be narrowed down. Other aspects that can be adjusted to the available resources include the timeframe of the contextual analysis (e.g., one vs. more timepoints), the number of participants involved, and the methods chosen (e.g., use of rapid qualitative methods). Informal conversations with stakeholders, for example, can also complement data collection or recurrent measures of context when changes in context need to be observed. In addition, funding agencies need to acknowledge contextual analysis as a foundational phase in IS projects and provide specific funding mechanisms to adequately resource this phase.

Strengths and limitations

BANANA was developed based on evidence and expert discussion and was successfully applied within the SMILe project to guide intervention development [ 34 ] as well as intervention adaptation ([ 35 ]; Valenta S, Ribaut J, Leppla L, Mielke J, Teynor A, Koehly K, Gerull S, Grossmann F, Witzig-Brändli V, De Geest S, for the SMILe study team: Context-specific adaptation of an eHealth-facilitated, integrated care model and tailoring its implementation strategies – a mixed-methods study as a part of the SMILe implementation science project, Under review). Its six components provide overall, theory-based guidance on “how to do” contextual analyses in IS projects and raise questions regarding contextual analysis that need to be answered individually for each project. BANANA is not limited to the stem cell transplant population or to any particular clinical setting; contextual analyses based on its principles have already been conducted in community-based care [ 133 ] and geriatric care [ 134 ]. Further applications of BANANA, for example in the community pharmacy setting, are planned. Based on our operationalization of contextual analysis, BANANA is particularly useful for earlier-stage work (e.g., preparatory work, hybrid 1 studies) as well as for IS studies that include the development/adaptation, implementation, and evaluation of an intervention and its implementation strategies. For studies focusing on the sustainment or scale-up of interventions, the extent to which BANANA’s components are relevant might differ. Additional testing will be necessary to ensure its reliability for other project phases (e.g., sustainability, scale-up) and other settings (e.g., in low- and middle-income countries). Furthermore, we are considering methods of finding a broader consensus among implementation experts regarding BANANA’s six components, e.g., by applying a Delphi approach.
Another limitation of BANANA is that interactions in context—particularly regarding how individuals are embedded within a context, and how they are influenced by and shape that context—require more consideration than was possible within the scope of this study. Therefore, we plan to further develop BANANA and complement it with social science elements [ 38 ].

Implications for research and funders

Improving researchers’ consideration of context and their reporting of it in IS studies will clearly require conceptual and methodological developments; however, further measures are also required. First, coupled with the acknowledgment of contextual analysis as the foundational first phase of every IS project, its relevance to implementation success requires funding agencies to rethink how to support this phase. That is, adequately resourcing contextual analyses will require specific funding schemes [ 77 ], including timelines that realistically accommodate a thorough contextual analysis as well as further components (e.g., intervention development) [ 77 ]. Second, the reporting of context should be a condition for the publication of IS projects. Appropriate standards and guidelines must be developed to support researchers in meeting this requirement.

Conclusions

Contextual analysis is a foundational phase of every IS project, providing essential information to all subsequent phases. The BANANA approach successfully guided the SMILe project’s contextual analysis. To help researchers make sense of their target contexts, and to strengthen every part of their work, this approach’s principles can also be applied to other IS projects. However, further adaptation and testing of BANANA in other projects are required. Equally importantly, considering the vast heterogeneity of the studies we reviewed, a coordinated campaign will be required to unify and enhance IS researchers’ efforts to conduct and report on contextual analyses. As a first step, a common set of analysis and reporting guidelines will do much to improve the success and quality of implementation efforts.

Availability of data and materials

Data analyzed during this study are available from the corresponding author upon reasonable request.

Notes

1. Context can be defined as “a set of characteristics and circumstances that consist of active and unique factors, within which the implementation is embedded” [ 1 ]. Context is multilevel, multidimensional, and dynamic. It interacts with an intervention and its implementation in the setting, i.e., the “physical location, in which an intervention is put into practice” [ 1 ].

2. Document analysis can be quantitative as well.

3. Four phases of the MRC guidance: (1) developing a new intervention or identifying an already existing intervention, (2) assessing the feasibility and acceptability of an intervention, (3) assessing an intervention (evaluation), and (4) implementing an intervention [ 114 ].

Abbreviations

BANANA: Basel Approach for coNtextual ANAlysis in implementation science

CICI: Context and Implementation of Complex Interventions framework

eCCM: eHealth Enhanced Chronic Care Model

ICM: Integrated care model

IS: Implementation science

TMF: Theory, model, or framework

Pfadenhauer LM, Gerhardus A, Mozygemba K, Lysdahl KB, Booth A, Hofmann B, et al. Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implement Sci. 2017;12(1):21.

Rogers L, De Brún A, McAuliffe E. Development of an integrative coding framework for evaluating context within implementation science. BMC Med Res Methodol. 2020;20(1):158.


Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189.


Dryden-Palmer KD, Parshuram CS, Berta WB. Context, complexity and process in the implementation of evidence-based innovation: a realist informed review. BMC Health Serv Res. 2020;20(1):81.

De Geest S, Zúñiga F, Brunkert T, Deschodt M, Zullig LL, Wyss K, et al. Powering Swiss health care for the future: implementation science to bridge “the valley of death”. Swiss Med Wkly. 2020;150:w20323.

Davis M, Beidas RS. Refining contextual inquiry to maximize generalizability and accelerate the implementation process. Implement Res Pract. 2021;2:2633489521994941.


Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):42.

Craig P, Di Ruggiero E, Frolich KL, Mykhalovskiy E, White M, Campbell R, et al. Taking account of context in population health intervention research: guidance for producers, users and funders of research; 2018.

Squires JE, Aloisio LD, Grimshaw JM, Bashir K, Dorrance K, Coughlin M, et al. Attributes of context relevant to healthcare professionals’ use of research evidence in clinical practice: a multi-study analysis. Implement Sci. 2019;14(1):52.

Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):117.

Rogers L, De Brún A, McAuliffe E. Defining and assessing context in healthcare implementation studies: a systematic review. BMC Health Serv Res. 2020;20(1):591.

Allen JD, Towne SD, Maxwell AE, DiMartino L, Leyva B, Bowen DJ, et al. Measures of organizational characteristics associated with adoption and/or implementation of innovations: a systematic review. BMC Health Serv Res. 2017;17(1):591.

Watson DP, Adams EL, Shue S, Coates H, McGuire A, Chesher J, et al. Defining the external implementation context: an integrative systematic literature review. BMC Health Serv Res. 2018;18(1):209.

Chor KHB, Wisdom JP, Olin S-CS, Hoagwood KE, Horwitz SM. Measures for predictors of innovation adoption. Adm Policy Ment Health. 2015;42(5):545–73.

Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci. 2013;8:22.

Lewis CC, Stanick CF, Martinez RG, Weiner BJ, Kim M, Barwick M, et al. The Society for Implementation Research Collaboration Instrument Review Project: a methodology to promote rigorous evaluation. Implement Sci. 2015;10(1):2.

Davidoff F. Understanding contexts: how explanatory theories can help. Implement Sci. 2019;14(1):23.

Johns G. Reflections on the 2016 decade award: incorporating context in organizational research. Acad Manage Rev. 2017;42(4):577–95.


Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the consolidated framework for implementation research. Implement Sci. 2015;11(1):72.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11:33.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12(1):77.

Moullin JC, Sabater-Hernandez D, Fernandez-Llimos F, Benrimoj SI. A systematic review of implementation frameworks of innovations in healthcare and resulting generic implementation framework. Health Res Policy Syst. 2015;13:16.

Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50.

Fernandez ME, Walker TJ, Weiner BJ, Calo WA, Liang S, Risendal B, et al. Developing measures to assess constructs from the Inner Setting domain of the Consolidated Framework for Implementation Research. Implement Sci. 2018;13(1):52.

Pfadenhauer LM, Mozygemba K, Gerhardus A, Hofmann B, Booth A, Lysdahl KB, et al. Context and implementation: a concept analysis towards conceptual maturity. Z Evid Fortbild Qual Gesundhwes. 2015;109(2):103–14.

Squires JE, Graham I, Bashir K, Nadalin-Penno L, Lavis J, Francis J, et al. Understanding context: a concept analysis. J Adv Nurs. 2019;0(0):1–23.

Szymczak JE. Beyond barriers and facilitators: the central role of practical knowledge and informal networks in implementing infection prevention interventions. BMJ Qual Saf. 2018;27(10):763–5.


Haines ER, Kirk MA, Lux L, Smitherman AB, Powell BJ, Dopp A, et al. Ethnography and user-centered design to inform context-driven implementation. Transl Behav Med. 2022;12(1):ibab077.

Neta G, Glasgow RE, Carpenter CR, Grimshaw JM, Rabin BA, Fernandez ME, et al. A framework for enhancing the value of research for dissemination and implementation. Am J Public Health. 2015;105(1):49–57.

Stange KC, Glasgow RE. Contextual factors: the importance of considering and reporting on context in research on the patient-centered medical home. Rockville: Agency for Healthcare Research and Quality; 2013. ARHQ Publication No. 13-0045-EF

De Geest S, Valenta S, Ribaut J, Gerull S, Mielke J, Simon M, Bartakova J, Kaier K, Eckstein J, Leppla L, et al. The SMILe Integrated Care Model in Allogeneic SteM Cell TransplantatIon faciLitated by eHealth: a protocol for a hybrid 1 effectiveness-implementation randomized controlled trial. BMC Health Serv Res. 2022;22(1):1067.

Leppla L, Schmid A, Valenta S, Mielke J, Beckmann S, Ribaut J, et al. Development of an integrated model of care for allogeneic stem cell transplantation facilitated by eHealth-the SMILe study. Support Care Cancer. 2021;29:8045–57.

Leppla L, Mielke J, Kunze M, Mauthner O, Teynor A, Valenta S, et al. Clinicians and patients perspectives on follow-up care and eHealth support after allogeneic hematopoietic stem cell transplantation: a mixed-methods contextual analysis as part of the SMILe study. Eur J Oncol Nurs. 2020;45:101723.

Gee PM, Greenwood DA, Paterniti DA, Ward D, Miller LM. The eHealth enhanced chronic care model: a theory derivation approach. J Med Internet Res. 2015;17(4):e86.

Ribaut J, Leppla L, Teynor A, Valenta S, Dobbels F, Zullig LL, et al. Theory-driven development of a medication adherence intervention delivered by eHealth and transplant team in allogeneic stem cell transplantation: the SMILe implementation science project. BMC Health Serv Res. 2020;20(1):827.

Leppla L, Hobelsberger S, Rockstein D, Werlitz V, Pschenitza S, Heidegger P, et al. Implementation science meets software development to create eHealth components for an integrated care model for allogeneic stem cell transplantation facilitated by eHealth: the SMILe study as an example. J Nurs Scholarsh. 2020;53(1):35–45.

Mielke J, De Geest S, Zúñiga F, Brunkert T, Zullig LL, Pfadenhauer LM, et al. Understanding dynamic complexity in context—enriching contextual analysis in implementation science from a constructivist perspective. Front Health Serv. 2022;2:1–7.

Miller WL, Crabtree BF, Nutting PA, Stange KC, Jaén CR. Primary care practice development: a relationship-centered approach. Ann Fam Med. 2010;8(Suppl 1):S68–79; s92.

Barkhordarian A, Demerjian G, Jan A, Sama N, Nguyen M, Du A, et al. Stakeholder engagement analysis - a bioethics dilemma in patient-targeted intervention: patients with temporomandibular joint disorders. J Transl Med. 2015;13(1):15.

Birken SA, Rohweder CL, Powell BJ, Shea CM, Scott J, Leeman J, et al. T-CaST: an implementation theory comparison and selection tool. Implement Sci. 2018;13(1):143.

Birken SA, Powell BJ, Shea CM, Haines ER, Alexis Kirk M, Leeman J, et al. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12(1):124.

Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015;24(3):228–38.

Strifler L, Cardoso R, McGowan J, Cogo E, Nincic V, Khan PA, et al. Scoping review identifies significant number of knowledge translation theories, models, and frameworks with limited use. J Clin Epidemiol. 2018;100:92–102.

Liang L, Bernhardsson S, Vernooij RW, Armstrong MJ, Bussières A, Brouwers MC, et al. Use of theory to plan or evaluate guideline implementation among physicians: a scoping review. Implement Sci. 2017;12(1):1–12.

Moullin JC, Dickson KS, Stadnick NA, Albers B, Nilsen P, Broder-Fingert S, et al. Ten recommendations for using implementation frameworks in research and practice. Implement Sci Commun. 2020;1(1):42.


Acknowledgements

The authors would like to thank Thekla Brunkert, Hélène Schoemans, and Kristina Arnahoutova for their critical feedback on the manuscript and Chris Shultis for the manuscript editing.

Funding

Not applicable.

Author information

Authors and Affiliations

Institute of Nursing Science (INS), Department Public Health (DPH), Faculty of Medicine, University of Basel, Bernoullistrasse 28, CH-4056, Basel, Switzerland

Juliane Mielke, Lynn Leppla, Sabine Valenta, Franziska Zúñiga, Sandra Staudacher & Sabina De Geest

Department of Medicine I, Faculty of Medicine, Medical Center University of Freiburg, Freiburg im Breisgau, Germany

Lynn Leppla

Department of Hematology, University Hospital Basel, Basel, Switzerland

Sabine Valenta

Center for Innovation to Accelerate Discovery and Practice Transformation (ADAPT), Durham Veterans Affairs Health Care System, and Department of Population Health Sciences, School of Medicine, Duke University, Durham, NC, USA

Leah L. Zullig

Department of Health Services Research, Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands

Sandra Staudacher

University of Applied Sciences Augsburg, Faculty of Computer Science, Augsburg, Germany

Alexandra Teynor

Academic Center for Nursing and Midwifery, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium

Sabina De Geest


Contributions

JM and SDG conceptualized the study and developed an initial version of BANANA. BANANA was iteratively further developed by LL, SV, LLZ, FZ, SS, AT, SDG, and JM. The manuscript was drafted by JM; SDG, FZ, LLZ, LL, SV, SS, and AT provided ongoing feedback and critically revised the manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Sabina De Geest.

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

LLZ reports research support from Proteus Digital Health, and the PhRMA Foundation, as well as consulting for Pfizer and Novartis. SDG consults for Sanofi and Novartis. All activities are unrelated to the current work.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

The Basel Approach for coNtextual ANAlysis (BANANA): Overview of its development process and theoretical underpinning.

Additional file 2.

Key resources for each component of the Basel Approach for coNtextual ANAlysis (BANANA).

Additional file 3.

Overview of contextual factors most commonly reported in empirical evidence to influence implementation.

Additional file 4.

Overview of variables assessed and themes explored in the SMILe project.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mielke, J., Leppla, L., Valenta, S. et al. Unraveling implementation context: the Basel Approach for coNtextual ANAlysis (BANANA) in implementation science and its application in the SMILe project. Implement Sci Commun 3, 102 (2022). https://doi.org/10.1186/s43058-022-00354-7


Received : 07 April 2022

Accepted : 20 September 2022

Published : 01 October 2022

DOI : https://doi.org/10.1186/s43058-022-00354-7


Keywords: Contextual analysis

Implementation Science Communications

ISSN: 2662-2211



  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Neuroscience
  • Cognitive Psychology
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Strategy
  • Business History
  • Business Ethics
  • Business and Government
  • Business and Technology
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Systems
  • Economic Methodology
  • Economic History
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Natural Disasters (Environment)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • Ethnic Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Politics and Law
  • Politics of Development
  • Public Administration
  • Public Policy
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

The Oxford Handbook of Qualitative Research (2nd edn)


19 Content Analysis

Lindsay Prior, School of Sociology, Social Policy, and Social Work, Queen's University

  • Published: 02 September 2020

In this chapter, the focus is on ways in which content analysis can be used to investigate and describe interview and textual data. The chapter opens with a contextualization of the method and then proceeds to an examination of the role of content analysis in relation to both quantitative and qualitative modes of social research. Following the introductory sections, four kinds of data are subjected to content analysis. These include data derived from a sample of qualitative interviews (N = 54), textual data derived from a sample of health policy documents (N = 6), data derived from a single interview relating to a “case” of traumatic brain injury, and data gathered from fifty-four abstracts of academic papers on the topic of “well-being.” Using a distinctive and somewhat novel style of content analysis that calls on the notion of semantic networks, the chapter shows how the method can be used either independently or in conjunction with other forms of inquiry (including various styles of discourse analysis) to analyze data and also how it can be used to verify and underpin claims that arise from analysis. The chapter ends with an overview of the different ways in which the study of “content”—especially the study of document content—can be positioned in social scientific research projects.

What Is Content Analysis?

In his 1952 text on the subject of content analysis, Bernard Berelson traced the origins of the method to communication research and then listed what he called six distinguishing features of the approach. As one might expect, the six defining features reflect the concerns of social science as taught in the 1950s, an age in which the calls for an “objective,” “systematic,” and “quantitative” approach to the study of communication data were first heard. The reference to the field of “communication” was nothing less than a reflection of a substantive social scientific interest over the previous decades in what was called public opinion and specifically attempts to understand why and how a potential source of critical, rational judgment on political leaders (i.e., the views of the public) could be turned into something to be manipulated by dictators and demagogues. In such a context, it is perhaps not so surprising that in one of the more popular research methods texts of the decade, the terms content analysis and communication analysis are used interchangeably (see Goode & Hatt, 1952, p. 325).

Academic fashions and interests naturally change with available technology, and these days we are more likely to focus on the individualization of communications through Twitter and the like, rather than on mass newspaper readership or mass radio audiences, yet the prevailing discourse on content analysis has remained much the same as it was in Berelson’s day. Thus, Neuendorf (2002), for example, continued to define content analysis as “the systematic, objective, quantitative analysis of message characteristics” (p. 1). Clearly, the centrality of communication as a basis for understanding and using content analysis continues to hold, but in this chapter I will try to show that, rather than locate the use of content analysis in disembodied “messages” and distantiated “media,” we would do better to focus on the fact that communication is a building block of social life itself and not merely a system of messages that are transmitted—in whatever form—from sender to receiver. To put that statement in another guise, we must note that communicative action (to use the phraseology of Habermas, 1987) rests at the very base of the lifeworld, and one very important way of coming to grips with that world is to study the content of what people say and write in the course of their everyday lives.

My aim is to demonstrate various ways in which content analysis (henceforth CTA) can be used and developed to analyze social scientific data as derived from interviews and documents. It is not my intention to cover the history of CTA or to venture into forms of literary analysis or to demonstrate each and every technique that has ever been deployed by content analysts. (Many of the standard textbooks deal with those kinds of issues much more fully than is possible here. See, for example, Babbie, 2013; Berelson, 1952; Bryman, 2008; Krippendorff, 2004; Neuendorf, 2002; and Weber, 1990). Instead, I seek to recontextualize the use of the method in a framework of network thinking and to link the use of CTA to specific problems of data analysis. As will become evident, my exposition of the method is grounded in real-world problems. Those problems are drawn from my own research projects and tend to reflect my academic interests—which are almost entirely related to the analysis of the ways in which people talk and write about aspects of health, illness, and disease. However, lest the reader be deterred from going any further, I should emphasize that the substantive issues that I elect to examine are secondary if not tertiary to my main objective—which is to demonstrate how CTA can be integrated into a range of research designs and add depth and rigor to the analysis of interview and inscription data. To that end, in the next section I aim to clear our path to analysis by dealing with some issues that touch on the general position of CTA in the research armory, especially its location in the schism that has developed between quantitative and qualitative modes of inquiry.

The Methodological Context of Content Analysis

Content analysis is usually associated with the study of inscription contained in published reports, newspapers, adverts, books, web pages, journals, and other forms of documentation. Hence, nearly all of Berelson’s (1952) illustrations and references to the method relate to the analysis of written records of some kind, and where speech is mentioned, it is almost always in the form of broadcast and published political speeches (such as State of the Union addresses). This association of content analysis with text and documentation is further underlined in modern textbook discussions of the method. Thus, Bryman (2008), for example, defined CTA as “an approach to the analysis of documents and texts, that seek to quantify content in terms of pre-determined categories” (p. 274, emphasis in original), while Babbie (2013) stated that CTA is “the study of recorded human communications” (p. 295), and Weber (1990) referred to it as a method to make “valid inferences from text” (p. 9). It is clear then that CTA is viewed as a text-based method of analysis, though extensions of the method to other forms of inscriptional material are also referred to in some discussions. Thus, Neuendorf (2002), for example, rightly referred to analyses of film and television images as legitimate fields for the deployment of CTA and by implication analyses of still—as well as moving—images such as photographs and billboard adverts. Oddly, in the traditional or standard paradigm of CTA, the method is solely used to capture the “message” of a text or speech; it is not used for the analysis of a recipient’s response to or understanding of the message (which is normally accessed via interview data and analyzed in other and often less rigorous ways; see, e.g., Merton, 1968). So, in this chapter I suggest that we can take things at least one small step further by using CTA to analyze speech (especially interview data) as well as text.

Standard textbook discussions of CTA usually refer to it as a “nonreactive” or “unobtrusive” method of investigation (see, e.g., Babbie, 2013, p. 294), and a large part of the reason for that designation is its focus on already existing text (i.e., text gathered without intrusion into a research setting). More important, however (and to underline the obvious), CTA is primarily a method of analysis rather than of data collection. Its use, therefore, must be integrated into wider frames of research design that embrace systematic forms of data collection as well as forms of data analysis. Thus, routine strategies for sampling data are often required in designs that call on CTA as a method of analysis. These latter can be built around random sampling methods or even techniques of “theoretical sampling” (Glaser & Strauss, 1967) so as to identify a suitable range of materials for CTA. Content analysis can also be linked to styles of ethnographic inquiry and to the use of various purposive or nonrandom sampling techniques. For an example, see Altheide (1987).

The use of CTA in a research design does not preclude the use of other forms of analysis in the same study, because it is a technique that can be deployed either in parallel with other methods or sequentially. For example, and as I will demonstrate in the following sections, one might use CTA as a preliminary analytical strategy to get a grip on the available data before moving into specific forms of discourse analysis. In this respect, it can be as well to think of using CTA in, say, the frame of a priority/sequence model of research design as described by Morgan (1998).

As I shall explain, there is a sense in which CTA rests at the base of all forms of qualitative data analysis, yet the paradox is that the analysis of content is usually considered a quantitative (numerically based) method. In terms of the qualitative/quantitative divide, however, it is probably best to think of CTA as a hybrid method, and some writers have in the past argued that it is necessarily so (Kracauer, 1952). That was probably easier to do in an age when many recognized the strictly drawn boundaries between qualitative and quantitative styles of research to be inappropriate. Thus, in their widely used text Methods in Social Research, Goode and Hatt (1952), for example, asserted that “modern research must reject as a false dichotomy the separation between ‘qualitative’ and ‘quantitative’ studies, or between the ‘statistical’ and the ‘non-statistical’ approach” (p. 313). This position was advanced on the grounds that all good research must meet adequate standards of validity and reliability, whatever its style, and the message is well worth preserving. However, there is a more fundamental reason why it is nonsensical to draw a division between the qualitative and the quantitative. It is simply this: All acts of social observation depend on the deployment of qualitative categories—whether gender, class, race, or even age; there is no descriptive category in use in the social sciences that connects to a world of “natural kinds.” In short, all categories are made, and therefore when we seek to count “things” in the world, we are dependent on the existence of socially constructed divisions. How the categories take the shape that they do—how definitions are arrived at, how inclusion and exclusion criteria are decided on, and how taxonomic principles are deployed—constitute interesting research questions in themselves. From our starting point, however, we need only note that “sorting things out” (to use a phrase from Bowker & Star, 1999) and acts of “counting”—whether it be of chromosomes or people (Martin & Lynch, 2009)—are activities that connect to the social world of organized interaction rather than to unsullied observation of the external world.

Some writers deny the strict division between the qualitative and quantitative on grounds of empirical practice rather than of ontological reasoning. For example, Bryman (2008) argued that qualitative researchers also call on quantitative thinking, but tend to use somewhat vague, imprecise terms rather than numbers and percentages—referring to frequencies via the use of phrases such as “more than” and “less than.” Kracauer (1952) advanced various arguments against the view that CTA was strictly a quantitative method, suggesting that very often we wished to assess content as being negative or positive with respect to some political, social, or economic thesis and that such evaluations could never be merely statistical. He further argued that we often wished to study “underlying” messages or latent content of documentation and that, in consequence, we needed to interpret content as well as count items of content. Morgan (1993) argued that, given the emphasis that is placed on “coding” in almost all forms of qualitative data analysis, the deployment of counting techniques is essential and we ought therefore to think in terms of what he calls qualitative as well as quantitative content analysis. Naturally, some of these positions create more problems than they seemingly solve (as is the case with considerations of “latent content”), but given the 21st-century predilection for mixed methods research (Creswell, 2007), it is clear that CTA has a role to play in integrating quantitative and qualitative modes of analysis in a systematic rather than merely ad hoc and piecemeal fashion. In the sections that follow, I will provide some examples of the ways in which “qualitative” analysis can be combined with systematic modes of counting. First, however, we must focus on what is analyzed in CTA.

Units of Analysis

So, what is the unit of analysis in CTA? A brief answer is that analysis can be focused on words, sentences, grammatical structures, tenses, clauses, ratios (of, say, nouns to verbs), or even “themes.” Berelson (1952) gave examples of all of the above and also recommended a form of thematic analysis (cf. Braun & Clarke, 2006) as a viable option. Other possibilities include counting column length (of speeches and newspaper articles), amounts of (advertising) space, or frequency of images. For our purposes, however, it might be useful to consider a specific (and somewhat traditional) example. Here it is: an extract from what has turned out to be one of the most important political speeches of the current century.

Iraq continues to flaunt its hostility toward America and to support terror. The Iraqi regime has plotted to develop anthrax and nerve gas and nuclear weapons for over a decade. This is a regime that has already used poison gas to murder thousands of its own citizens, leaving the bodies of mothers huddled over their dead children. This is a regime that agreed to international inspections then kicked out the inspectors. This is a regime that has something to hide from the civilized world. States like these, and their terrorist allies, constitute an axis of evil, arming to threaten the peace of the world. By seeking weapons of mass destruction, these regimes pose a grave and growing danger. They could provide these arms to terrorists, giving them the means to match their hatred. They could attack our allies or attempt to blackmail the United States. In any of these cases, the price of indifference would be catastrophic. (George W. Bush, State of the Union address, January 29, 2002)

A number of possibilities arise for analyzing the content of a speech such as the one above. Clearly, words and sentences must play a part in any such analysis, but in addition to words, there are structural features of the speech that could also figure. For example, the extract takes the form of a simple narrative—pointing to a past, a present, and an ominous future (catastrophe)—and could therefore be analyzed as such. There are, in addition, several interesting oppositions in the speech (such as those between “regimes” and the “civilized” world), as well as a set of interconnected present participles such as “plotting,” “hiding,” “arming,” and “threatening” that are associated both with Iraq and with other states that “constitute an axis of evil.” Evidently, simple word counts would fail to capture the intricacies of a speech of this kind. Indeed, our example serves another purpose—to highlight the difficulty that often arises in dissociating CTA from discourse analysis (of which narrative analysis and the analysis of rhetoric and trope are subspecies). So how might we deal with these problems?
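The limitation of the counting baseline can be made concrete. The following is a minimal sketch of a Berelson-style word-frequency count applied to three sentences from the extract above; the chapter itself contains no code, so Python and the crude tokenizing rule are my own illustrative assumptions.

```python
from collections import Counter
import re

# Three sentences from the State of the Union extract quoted above.
speech = (
    "This is a regime that has already used poison gas to murder "
    "thousands of its own citizens. "
    "This is a regime that agreed to international inspections "
    "then kicked out the inspectors. "
    "This is a regime that has something to hide from the civilized world."
)

# Tokenize crudely into lowercase words and tally them.
words = re.findall(r"[a-z']+", speech.lower())
counts = Counter(words)

# The repeated "This is a regime that" frame dominates the tally.
print(counts.most_common(5))
```

Every top-ranked token belongs to the repeated rhetorical frame, which is precisely the structural feature (narrative, opposition, repetition) that a bare frequency table cannot represent.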

One approach that can be adopted is to focus on what is referenced in text and speech, that is, to concentrate on the characters or elements that are recruited into the text and to examine the ways in which they are connected or co-associated. I shall provide some examples of this form of analysis shortly. Let us merely note for the time being that in the previous example we have a speech in which various “characters”—including weapons in general, specific weapons (such as nerve gas), threats, plots, hatred, evil, and mass destruction—play a role. Be aware that we need not be concerned with the veracity of what is being said—whether it is true or false—but simply with what is in the speech and how what is in there is associated. (We may leave the task of assessing truth and falsity to the jurists). Be equally aware that it is a text that is before us and not an insight into the ex-president’s mind, or his thinking, or his beliefs, or any other subjective property that he may have possessed.
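The co-association idea can be sketched computationally: if each sentence of an account is coded for the elements it recruits, then pairs of elements co-occurring in a sentence form the weighted edges of a semantic network. The coding below is an invented, simplified reduction of the extract for illustration only (it is not the chapter's own coding scheme), and Python is my own choice of vehicle.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coding: each sentence of the account reduced to the set
# of "characters" (elements) it recruits.
coded_sentences = [
    {"Iraq", "hostility", "terror"},
    {"regime", "anthrax", "nerve gas", "nuclear weapons"},
    {"regime", "poison gas", "citizens"},
    {"regime", "inspections", "inspectors"},
    {"states", "terrorists", "weapons", "axis of evil"},
    {"regimes", "weapons", "terrorists", "hatred"},
]

# An edge links two elements that appear in the same sentence; its weight
# is the number of sentences in which the pair co-occurs.
edges = Counter()
for sentence in coded_sentences:
    for pair in combinations(sorted(sentence), 2):
        edges[pair] += 1

# The heaviest edge shows which elements are most strongly co-associated.
print(edges.most_common(1))
```

The resulting edge list can be handed directly to a network-drawing or graph-analysis tool, which is one way of moving from counts of elements to a picture of how an account hangs together.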

In the introductory paragraph, I made brief reference to some ideas of the German philosopher Jürgen Habermas (1987). It is not my intention here to expand on the detailed twists and turns of his claims with respect to the role of language in the “lifeworld.” However, I do intend to borrow what I regard as some particularly useful ideas from his work. The first is his claim—influenced by a strong line of 20th-century philosophical thinking—that language and culture are constitutive of the lifeworld (Habermas, 1987, p. 125), and in that sense we might say that things (including individuals and societies) are made in language. That is a simple justification for focusing on what people say rather than what they “think” or “believe” or “feel” or “mean” (all of which have been suggested at one time or another as points of focus for social inquiry and especially qualitative forms of inquiry). Second, Habermas argued that speakers and therefore hearers (and, one might add, writers and therefore readers), in what he calls their speech acts, necessarily adopt a pragmatic relation to one of three worlds: entities in the objective world, things in the social world, and elements of a subjective world. In practice, Habermas (1987, p. 120) suggested all three worlds are implicated in any speech act, but that there will be a predominant orientation to one of them. To rephrase this in a crude form, when speakers engage in communication, they refer to things and facts and observations relating to external nature, to aspects of interpersonal relations, and to aspects of private inner subjective worlds (thoughts, feelings, beliefs, etc.). One of the problems with locating CTA in “communication research” has been that the communications referred to are but a special and limited form of action (often what Habermas called strategic acts). In other words, television, newspaper, video, and Internet communications are just particular forms (with particular features) of action in general. Again, we might note in passing that the adoption of the Habermassian perspective on speech acts implies that much of qualitative analysis in particular has tended to focus only on one dimension of communicative action—the subjective and private. In this respect, I would argue that it is much better to look at speeches such as George W. Bush’s 2002 State of the Union address as an “account” and to examine what has been recruited into the account, and how what has been recruited is connected or co-associated, rather than use the data to form insights into his (or his adviser’s) thoughts, feelings, and beliefs.

In the sections that follow, and with an emphasis on the ideas that I have just expounded, I intend to demonstrate how CTA can be deployed to advantage in almost all forms of inquiry that call on either interview (or speech-based) data or textual data. In my first example, I will show how CTA can be used to analyze a group of interviews. In the second example, I will show how it can be used to analyze a group of policy documents. In the third, I shall focus on a single interview (a “case”), and in the fourth and final example, I will show how CTA can be used to track the biography of a concept. In each instance, I shall briefly introduce the context of the “problem” on which the research was based, outline the methods of data collection, discuss how the data were analyzed and presented, and underline the ways in which CTA has sharpened the analytical strategy.

Analyzing a Sample of Interviews: Looking at Concepts and Their Co-associations in a Semantic Network

My first example of using CTA is based on a research study that was initially undertaken in the early 2000s. It was a project aimed at understanding why older people might reject the offer to be immunized against influenza (at no cost to them). The ultimate objective was to improve rates of immunization in the study area. The first phase of the research was based on interviews with 54 older people in South Wales. The sample included people who had never been immunized, some who had refused immunization, and some who had accepted immunization. Within each category, respondents were randomly selected from primary care physician patient lists, and the data were initially analyzed “thematically” and published accordingly (Evans, Prout, Prior, Tapper-Jones, & Butler, 2007). A few years later, however, I returned to the same data set to look at a different question—how (older) lay people talked about colds and flu, especially how they distinguished between the two illnesses and how they understood the causes of the two illnesses (see Prior, Evans, & Prout, 2011). Fortunately, in the original interview schedule, we had asked people about how they saw the “differences between cold and flu” and what caused flu, so it was possible to reanalyze the data with such questions in mind. In that frame, the example that follows demonstrates not only how CTA might be used on interview data, but also how it might be used to undertake a secondary analysis of a preexisting data set (Bryman, 2008).

As with all talk about illness, talk about colds and flu is routinely set within a mesh of concerns—about causes, symptoms, and consequences. Such talk comprises the base elements of what has at times been referred to as the “explanatory model” of an illness (Kleinman, Eisenberg, & Good, 1978 ). In what follows, I shall focus almost entirely on issues of causation as understood from the viewpoint of older people; the analysis is based on the answers that respondents made in response to the question, “How do you think people catch flu?”

Semistructured interviews of the kind undertaken for a study such as this are widely used and are often characterized as akin to “a conversation with a purpose” (Kahn & Cannell, 1957 , p. 97). One of the problems of analyzing the consequent data is that, although the interviewer holds to a planned schedule, the respondents often reflect in a somewhat unstructured way about the topic of investigation, so it is not always easy to unravel the web of talk about, say, “causes” that occurs in the interview data. In this example, causal agents of flu, inhibiting agents, and means of transmission were often conflated by the respondents. Nevertheless, in their talk people did answer the questions that were posed, and in the study referred to here, that talk made reference to things such as “bugs” (and “germs”) as well as viruses, but the most commonly referred to causes were “the air” and the “atmosphere.” The interview data also pointed toward means of transmission as “cause”—so coughs and sneezes and mixing in crowds figured in the causal mix. Most interesting, perhaps, was the fact that lay people made a nascent distinction between facilitating factors (such as bugs and viruses) and inhibiting factors (such as being resistant, immune, or healthy), so that in the presence of the latter, the former are seen to have very little effect. Here are some shorter examples of typical question–response pairs from the original interview data.

(R:32): “How do you catch it [the flu]? Well, I take it it’s through ingesting and inhaling bugs from the atmosphere. Not from sort of contact or touching things. Sort of airborne bugs. Is that right?”

(R:3): “I suppose it’s [the cause of flu] in the air. I think I get more diseases going to the surgery than if I stayed home. Sometimes the waiting room is packed and you’ve got little kids coughing and spluttering and people sneezing, and air conditioning I think is a killer by and large I think air conditioning in lots of these offices.”

(R:46): “I think you catch flu from other people. You know in enclosed environments in air conditioning which in my opinion is the biggest cause of transferring diseases is air conditioning. Worse thing that was ever invented that was. I think so, you know. It happens on aircraft exactly the same you know.”

Alternatively, it was clear that for some people being cold, wet, or damp could also serve as a direct cause of flu. Thus:

Interviewer: “OK, good. How do you think you catch the flu?”

(R:39): “Ah. The 65 dollar question. Well, I would catch it if I was out in the rain and I got soaked through. Then I would get the flu. I mean my neighbour up here was soaked through and he got pneumonia and he died. He was younger than me: well, 70. And he stayed in his wet clothes and that’s fatal. Got pneumonia and died, but like I said, if I get wet, especially if I get my head wet, then I can get a nasty head cold and it could develop into flu later.”

As I suggested earlier, despite the presence of bugs and germs, viruses, the air, and wetness or dampness, “catching” the flu is not a matter of simple exposure to causative agents. Thus, some people hypothesized that within each person there is a measure of immunity or resistance or healthiness that comes into play and that is capable of counteracting the effects of external agents. For example, being “hardened” to germs and harsh weather can prevent a person getting colds and flu. Being “healthy” can itself negate the effects of any causative agents, and healthiness is often linked to aspects of “good” nutrition and diet and not smoking cigarettes. These mitigating and inhibiting factors can either mollify the effects of infection or prevent a person “catching” the flu entirely. Thus, (R:45) argued that it was almost impossible for him to catch flu or cold “cos I got all this resistance.” Interestingly, respondents often used possessive pronouns in their discussion of immunity and resistance (“my immunity” and “my resistance”)—and tended to view them as personal assets (or capital) that might be compromised by mixing with crowds.

By implication, having a weak immune system can heighten the risk of contracting colds and flu and might therefore spur one to take preventive measures, such as accepting a flu shot. Yet some people believe that the flu shot itself can cause the flu and other illnesses. An example of what might be called lay “epidemiology” (Davison, Davey-Smith, & Frankel, 1991 ) is evident in the following extract.

(R:4): “Well, now it’s coincidental you know that [my brother] died after the jab, but another friend of mine, about 8 years ago, the same happened to her. She had the jab and about six months later, she died, so I know they’re both coincidental, but to me there’s a pattern.”

Normally, results from studies such as this are presented in exactly the same way as has just been set out. Thus, the researcher highlights given themes that are said to have emerged from the data and then provides appropriate extracts from the interviews to illustrate and substantiate the relevant themes. However, one reasonable question that any critic might ask about the selected data extracts concerns the extent to which they are “representative” of the material in the data set as a whole. Maybe, for example, the author has been unduly selective in his or her use of both themes and quotations. Perhaps, as a consequence, the author has ignored talk that does not fit the argument, or has left out extracts that might be considered dull and uninteresting compared to more exotic material. And these kinds of issues and problems are certainly common to the reporting of almost all forms of qualitative research. However, the adoption of CTA techniques can help to mitigate such problems. This is so because, by using CTA, we can indicate the extent to which we have used all or just some of the data, and we can provide a view of the content of the entire sample of interviews rather than just the content and flavor of merely one or two interviews. In this light, we must consider Figure 19.1 , which is based on counting the number of references in the 54 interviews to the various “causes” of the flu, though references to the flu shot (i.e., inoculation) as a cause of flu have been ignored for the purpose of this discussion. The node sizes reflect the relative importance of each cause as determined by the concept count (frequency of occurrence). The links between nodes reflect the degree to which causes are co-associated in interview talk and are calculated according to a co-occurrence index (see, e.g., SPSS, 2007 , p. 183).

What causes flu? A lay perspective. Factors listed as causes of colds and flu in 54 interviews. Node size is proportional to number of references “as causes.” Line thickness is proportional to co-occurrence of any two “causes” in the set of interviews.

Given this representation, we can immediately assess the relative importance of the different causes as referred to in the interview data. Thus, we can see that such things as (poor) “hygiene” and “foreigners” were mentioned as potential causes of flu—but mention of hygiene and foreigners was nowhere near as important as references to “the air” or to “crowds” or to “coughs and sneezes.” In addition, we can determine the strength of the connections that interviewees made between one cause and another. Thus, there are relatively strong links between “resistance” and “coughs and sneezes,” for example.
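The counts and link strengths behind a diagram such as Figure 19.1 need no specialist software. The following is a minimal sketch: the interview codings are invented for illustration, and the Jaccard-style index is just one plausible co-occurrence measure (the index actually used by SPSS may be defined differently).

```python
from collections import Counter
from itertools import combinations

# Hypothetical codings: each interview reduced to the set of "causes" it mentions.
interviews = [
    {"air", "bugs", "crowds"},
    {"air", "coughs/sneezes", "resistance"},
    {"damp", "air", "crowds"},
    {"resistance", "coughs/sneezes"},
]

# Node size: number of interviews mentioning each cause.
freq = Counter(cause for interview in interviews for cause in interview)

def co_occurrence(a, b):
    """A simple Jaccard-style co-occurrence index (an assumption, not
    necessarily the SPSS formula): interviews mentioning both causes
    divided by interviews mentioning either."""
    both = sum(1 for itv in interviews if a in itv and b in itv)
    either = sum(1 for itv in interviews if a in itv or b in itv)
    return both / either if either else 0.0

# Link strength for every pair of causes that ever co-occur.
links = {
    (a, b): co_occurrence(a, b)
    for a, b in combinations(sorted(freq), 2)
    if co_occurrence(a, b) > 0
}

print(freq["air"])              # "air" is mentioned in 3 of the 4 toy interviews
print(links[("air", "crowds")])
```

Node sizes in the figure would then be scaled by the values in `freq`, and line thicknesses by the values in `links`.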

In fact, Figure 19.1 divides causes into the “external” and the “internal,” or the facilitating and the impeding (lighter and darker nodes). Among the former I have placed such things as crowds, coughs, sneezes, and the air, while among the latter I have included “resistance,” “immunity,” and “health.” That division is a product of my conceptualizing and interpreting the data, but whichever way we organize the findings, it is evident that talk about the causes of flu belongs in a web or mesh of concerns that would be difficult to represent using individual interview extracts alone. Indeed, it would be impossible to demonstrate how the semantics of causation belong to a culture (rather than to individuals) in any other way. In addition, I would argue that the counting involved in the construction of the diagram functions as a kind of check on researcher interpretations and provides a source of visual support for claims that an author might make about, say, the relative importance of “damp” and “air” as perceived causes of disease. Finally, the use of CTA techniques allied with aspects of conceptualization and interpretation has enabled us to approach the interview data as a set and to consider the respondents as belonging to a community, rather than regarding them merely as isolated and disconnected individuals, each with their own views. It has also enabled us to squeeze some new findings out of old data, and I would argue that it has done so with advantage. There are other advantages to using CTA to explore data sets, which I will highlight in the next section.

Analyzing a Sample of Documents: Using Content Analysis to Verify Claims

Policy analysis is a difficult business. To begin, it is never entirely clear where (social, health, economic, environmental) policy actually is. Is it in documents (as published by governments, think tanks, and research centers), in action (what people actually do), or in speech (what people say)? Perhaps it rests in a mixture of all three realms. Yet, wherever it may be, it is always possible, at the very least, to identify a range of policy texts and to focus on the conceptual or semantic webs in terms of which government officials and other agents (such as politicians) talk about the relevant policy issues. Furthermore, insofar as policy is recorded—in speeches, pamphlets, and reports—we may begin to speak of specific policies as having a history or a pedigree that unfolds through time (think, e.g., of U.S. or U.K. health policies during the Clinton years or the Obama years). And, insofar as we consider “policy” as having a biography or a history, we can also think of studying policy narratives.

Though firmly based in the world of literary theory, narrative method has been widely used for both the collection and the analysis of data concerning ways in which individuals come to perceive and understand various states of health, ill health, and disability (Frank, 1995 ; Hydén, 1997 ). Narrative techniques have also been adapted for use in clinical contexts and allied to concepts of healing (Charon, 2006 ). In both social scientific and clinical work, however, the focus is invariably on individuals and on how individuals “tell” stories of health and illness. Yet narratives can also belong to collectives—such as political parties and ethnic and religious groups—just as much as to individuals, and in the latter case there is a need to collect and analyze data that are dispersed across a much wider range of materials than can be obtained from the personal interview. In this context, Roe ( 1994 ) demonstrated how narrative method can be applied to an analysis of national budgets, animal rights, and environmental policies.

An extension of the concept of narrative to policy discourse is undoubtedly useful (Newman & Vidler, 2006 ), but how might such narratives be analyzed? What strategies can be used to unravel the form and content of a narrative, especially in circumstances where the narrative might be contained in multiple (policy) documents, authored by numerous individuals, and published across a span of time rather than in a single, unified text such as a novel? Roe ( 1994 ), unfortunately, was not in any way specific about analytical procedures, apart from offering the useful rule to “never stray too far from the data” (p. xii). So, in this example, I will outline a strategy for tackling such complexities. In essence, it is a strategy that combines techniques of linguistically (rule) based CTA with a theoretical and conceptual frame that enables us to unravel and identify the core features of a policy narrative. My substantive focus is on documents concerning health service delivery policies published from 2000 to 2009 in the constituent countries of the United Kingdom (that is, England, Scotland, Wales, and Northern Ireland—all of which have different political administrations).

Narratives can be described and analyzed in various ways, but for our purposes we can say that they have three key features: they point to a chronology, they have a plot, and they contain “characters.”

All narratives have beginnings; they also have middles and endings, and these three stages are often seen as comprising the fundamental structure of narrative text. Indeed, in his masterly analysis of time and narrative, Ricoeur ( 1984 ) argued that it is in the unfolding chronological structure of a narrative that one finds its explanatory (and not merely descriptive) force. By implication, one of the simplest strategies for the examination of policy narratives is to locate and then divide a narrative into its three constituent parts—beginning, middle, and end.

Unfortunately, while it can sometimes be relatively easy to locate or choose a beginning to a narrative, it can be much more difficult to locate an end point. Thus, in any illness narrative, a narrator might be quite capable of locating the start of an illness process (in an infection, accident, or other event) but unable to see how events will be resolved in an ongoing and constantly unfolding life. As a consequence, both narrators and researchers usually find themselves in the midst of an emergent present—a present without a known and determinate end (see, e.g., Frank, 1995 ). Similar considerations arise in the study of policy narratives where chronology is perhaps best approached in terms of (past) beginnings, (present) middles, and projected futures.

According to Ricoeur ( 1984 ), our basic ideas about narrative are best derived from the work and thought of Aristotle, who in his Poetics sought to establish “first principles” of composition. For Ricoeur, as for Aristotle, plot ties things together. It “brings together factors as heterogeneous as agents, goals, means, interactions, circumstances, unexpected results” (p. 65) into the narrative frame. For Aristotle, it is the ultimate untying or unraveling of the plot that releases the dramatic energy of the narrative.

Characters are most commonly thought of as individuals, but they can be considered in much broader terms. Thus, the French semiotician A. J. Greimas ( 1970 ), for example, suggested that, rather than think of characters as people, it would be better to think in terms of what he called actants and of the functions that such actants fulfill within a story. In this sense, geography, climate, and capitalism can be considered characters every bit as much as aggressive wolves and Little Red Riding Hood. Further, he argued that the same character (actant) can be considered to fulfill many functions, and the same function may be performed by many characters. Whatever else, the deployment of the term actant certainly helps us to think in terms of narratives as functioning and creative structures. It also serves to widen our understanding of the ways in which concepts, ideas, and institutions, as well as “things” in the material world, can influence the direction of unfolding events every bit as much as conscious human subjects. Thus, for example, the “American people,” “the nation,” “the Constitution,” “the West,” “tradition,” and “Washington” can all serve as characters in a policy story.

As I have already suggested, narratives can unfold across many media and in numerous arenas—speech and action, as well as text. Here, however, my focus is solely on official documents—all of which are U.K. government policy statements, as listed in Table 19.1 . The question is, How might CTA help us unravel the narrative frame?

It might be argued that a simple reading of any document should familiarize the researcher with elements of all three policy narrative components (plot, chronology, and character). However, in most policy research, we are rarely concerned with a single and unified text, as is the case with a novel; rather, we have multiple documents written at distinctly different times by multiple (usually anonymous) authors that notionally can range over a wide variety of issues and themes. In the full study, some 19 separate publications were analyzed across England, Wales, Scotland, and Northern Ireland.

Naturally, listing word frequencies—still less identifying co-occurrences and semantic webs in large data sets (covering hundreds of thousands of words and footnotes)—cannot be done manually, but rather requires the deployment of complex algorithms and text-mining procedures. To this end, I analyzed the 19 documents using “Text Mining for Clementine” (SPSS, 2007 ).

Text-mining procedures begin by providing an initial list of concepts drawn from the lexicon of the text; the list can be weighted according to word frequency and takes account of elementary word associations. For example, learning disability, mental health, and performance management indicate three concepts, not six words. Using such procedures on the aforementioned documents gives the researcher an initial grip on the most important concepts in the document set of each country. Note that this is much more than a straightforward concordance analysis of the text and is more akin to what Ryan and Bernard ( 2000 ) referred to as semantic analysis and what Carley ( 1993 ) referred to as concept and mapping analysis.
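The treatment of learning disability as one concept rather than two words can be illustrated with a greedy longest-match pass over a concept lexicon. This is a deliberately simple sketch: the mini-lexicon and the matching rule are my own illustrative assumptions, not Clementine's actual algorithm (which induces its lexicon from the documents themselves).

```python
import re

# Hypothetical mini-lexicon; a real text-mining tool would build this
# from the document set rather than take it as given.
LEXICON = ["learning disability", "mental health", "performance management",
           "health", "management"]

def extract_concepts(text):
    """Greedy longest-match extraction: prefer multiword concepts, so that
    'mental health' is counted as one concept rather than two words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    by_length = sorted(LEXICON, key=lambda c: -len(c.split()))
    found, i = [], 0
    while i < len(tokens):
        for concept in by_length:
            words = concept.split()
            if tokens[i:i + len(words)] == words:
                found.append(concept)
                i += len(words)
                break
        else:
            i += 1  # no concept starts at this token; move on by one
    return found

# -> ['performance management', 'mental health']
print(extract_concepts("Performance management of mental health services"))
```

A frequency count over the extracted concepts, rather than over raw words, then yields the weighted concept list described above.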

So, the first task was to identify and then extract the core concepts, thus identifying what might be called “key” characters or actants in each of the policy narratives. For example, in the Scottish documents, such actants included “Scotland” and the “Scottish people,” as well as “health” and the “National Health Service (NHS),” among others, while in the Welsh documents it was “the people of Wales” and “Wales” that figured largely—thus emphasizing how national identity can play every bit as important a role in a health policy narrative as concepts such as “health,” “hospitals,” and “well-being.”

Having identified key concepts, it was then possible to track concept clusters in which particular actants or characters are embedded. Such cluster analysis is dependent on the use of co-occurrence rules and the analysis of synonyms, whereby it is possible to get a grip on the strength of the relationships between the concepts, as well as the frequency with which the concepts appear in the collected texts. In Figure 19.2 , I provide an example of a concept cluster. The diagram indicates the nature of the conceptual and semantic web in which various actants are discussed. The diagrams further indicate strong (solid line) and weaker (dashed line) connections between the various elements in any specific mix, and the numbers indicate frequency counts for the individual concepts. Using Clementine , the researcher is unable to specify in advance which clusters will emerge from the data. One cannot, for example, choose to have an NHS cluster. In that respect, these diagrams not only provide an array in terms of which concepts are located, but also serve as a check on and to some extent validation of the interpretations of the researcher. None of this tells us what the various narratives contained within the documents might be, however. They merely point to key characters and relationships both within and between the different narratives. So, having indicated the techniques used to identify the essential parts of the four policy narratives, it is now time to sketch out their substantive form.

Concept cluster for “care” in six English policy documents, 2000–2007. Line thickness is proportional to the strength of the co-occurrence coefficient. Node size reflects the relative frequency of a concept, and (numbers) refer to the frequency of the concept. Solid lines indicate relationships between terms within the same cluster, and dashed lines indicate relationships between terms in different clusters.
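The notion of a concept cluster can be approximated by thresholding link strength: concepts joined by sufficiently strong links fall into the same cluster (connected components of the "strong" link graph). The link weights and the 0.4 threshold below are invented for illustration; Clementine's own clustering rules, which also draw on synonym analysis, are proprietary and more elaborate.

```python
# Invented co-occurrence coefficients between policy concepts.
links = {
    ("care", "choice"): 0.8,
    ("choice", "patients"): 0.7,
    ("care", "partnership"): 0.6,
    ("waiting times", "access"): 0.5,
    ("choice", "waiting times"): 0.1,  # weak link: falls below the threshold
}

def clusters(links, threshold=0.4):
    """Merge concepts joined by links at or above the threshold into
    clusters; any group of groups bridged by a strong link is unified."""
    groups = []
    for (a, b), weight in links.items():
        if weight < threshold:
            continue
        hits = [g for g in groups if a in g or b in g]
        merged = {a, b}.union(*hits)
        groups = [g for g in groups if g not in hits] + [merged]
    return groups

for group in clusters(links):
    print(sorted(group))
```

With these toy weights, "care," "choice," "patients," and "partnership" form one cluster, while "waiting times" and "access" form another: the researcher cannot dictate in advance which clusters will emerge, only the threshold.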

It may be useful to note that Aristotle recommended brevity in matters of narrative—deftly summarizing the whole of the Odyssey in just seven lines. In what follows, I attempt—albeit somewhat weakly—to emulate that example by summarizing a key narrative of English health services policy in just four paragraphs. Note how the narrative unfolds in relation to the dates of publication. In the English case (though not so much in the other U.K. countries), it is a narrative that is concerned to introduce market forces into what is and has been a state-managed health service. Market forces are justified in terms of improving opportunities for the consumer (i.e., the patients in the service), and the pivot of the newly envisaged system is something called “patient choice” or “choice.” This is how the story unfolds as told through the policy documents between 2000 and 2008 (see Table 19.1 ). The citations in the following paragraphs are to the Department of Health publications (by year) listed in Table 19.1 .

The advent of the NHS in 1948 was a “seminal event” (2000, p. 8), but under successive Conservative administrations, the NHS was seriously underfunded (2006, p. 3). The (New Labour) government will invest (2000) or already has (2003, p. 4) invested extensively in infrastructure and staff, and the NHS is now on a “journey of major improvement” (2004, p. 2). But “more money is only a starting point” (2000, p. 2), and the journey is far from finished. Continuation requires some fundamental changes of “culture” (2003, p. 6). In particular, the NHS remains unresponsive to patient need, and “all too often, the individual needs and wishes are secondary to the convenience of the services that are available. This ‘one size fits all’ approach is neither responsive, equitable nor person-centred” (2003, p. 17). In short, the NHS is a 1940s system operating in a 21st-century world (2000, p. 26). Change is therefore needed across the “whole system” (2005, p. 3) of care and treatment.

Above all, we must recognize that we “live in a consumer age” (2000, p. 26). People’s expectations have changed dramatically (2006, p. 129), and people want more choice, more independence, and more control (2003, p. 12) over their affairs. Patients are no longer, and should not be considered, “passive recipients” of care (2003, p. 62), but wish to be and should be (2006, p. 81) actively “involved” in their treatments (2003, p. 38; 2005, p. 18)—indeed, engaged in a partnership (2003, p. 22) of respect with their clinicians. Furthermore, most people want a personalized service “tailor made to their individual needs” (2000, p. 17; 2003, p. 15; 2004, p. 1; 2006, p. 83)—“a service which feels personal to each and every individual within a framework of equity and good use of public money” (2003, p. 6).

To advance the necessary changes, “patient choice” must be and “will be strengthened” (2000, p. 89). “Choice” must be made to “happen” (2003), and it must be “real” (2003, p. 3; 2004, p. 5; 2005, p. 20; 2006, p. 4). Indeed, it must be “underpinned” (2003, p. 7) and “widened and deepened” (2003, p. 6) throughout the entire system of care.

If “we” expand and underpin patient choice in appropriate ways and engage patients in their treatment systems, then levels of patient satisfaction will increase (2003, p. 39), and their choices will lead to a more “efficient” (2003, p. 5; 2004, p. 2; 2006, p. 16) and effective (2003, p. 62; 2005, p. 8) use of resources. Above all, the promotion of choice will help to drive up “standards” of care and treatment (2000, p. 4; 2003, p. 12; 2004, p. 3; 2005, p. 7; 2006, p. 3). Furthermore, the expansion of choice will serve to negate the effects of the “inverse care law,” whereby those who need services most tend to get catered to the least (2000, p. 107; 2003, p. 5; 2006, p. 63), and it will thereby help in moderating the extent of health inequalities in the society in which we live. “The overall aim of all our reforms,” therefore, “is to turn the NHS from a top down monolith into a responsive service that gives the patient the best possible experience. We need to develop an NHS that is both fair to all of us, and personal to each of us” (2003, p. 5).

We can see how most—though not all—of the elements of this story are represented in Figure 19.2. In particular, we can see strong (co-occurrence) links between care and choice and how partnership, performance, control, and improvement have a prominent profile. There are some elements of the web that have a strong profile (in terms of node size and links), but to which we have not referred; access, information, primary care, and waiting times are four. As anyone well versed in English healthcare policy would know, these elements have important roles to play in the wider, consumer-driven narrative. However, by rendering the excluded as well as included elements of that wider narrative visible, the concept web provides a degree of verification on the content of the policy story as told herein and on the scope of its “coverage.”

In following through on this example, we have moved from CTA to a form of discourse analysis (in this instance, narrative analysis). That shift underlines aspects of both the versatility of CTA and some of its weaknesses—versatility in the sense that CTA can be readily combined with other methods of analysis and in the way in which the results of the CTA help us to check and verify the claims of the researcher. The weakness of the diagram compared to the narrative is that CTA on its own is a somewhat one-dimensional and static form of analysis, and while it is possible to introduce time and chronology into the diagrams, the diagrams themselves remain lifeless in the absence of some form of discursive overview. (For a fuller analysis of these data, see Prior, Hughes, & Peckham, 2012 ).

Analyzing a Single Interview: The Role of Content Analysis in a Case Study

So far, I have focused on using CTA on a sample of interviews and a sample of documents. In the first instance, I recommended CTA for its capacity to tell us something about what is seemingly central to interviewees and for demonstrating how what is said is linked (in terms of a concept network). In the second instance, I reaffirmed the virtues of co-occurrence and network relations, but this time in the context of a form of discourse analysis. I also suggested that CTA can serve an important role in the process of verification of a narrative and its academic interpretation. In this section, however, I am going to link the use of CTA to another style of research—case study—to show how CTA might be used to analyze a single “case.”

Case study is a term used in multiple and often ambiguous ways. However, Gerring ( 2004 ) defined it as “an intensive study of a single unit for the purpose of understanding a larger class of (similar) units” (p. 342). As Gerring pointed out, case study does not necessarily imply a focus on N = 1, although that is indeed the most logical number for case study research (Ragin & Becker, 1992 ). Naturally, an N of 1 can be immensely informative, and whether we like it or not, we often have only one N to study (think, e.g., of the 1986 Challenger shuttle disaster or of the 9/11 attack on the World Trade Center). In the clinical sciences, case studies are widely used to represent the “typical” features of a wider class of phenomena and often used to define a kind or syndrome (as in the field of clinical genetics). Indeed, at the risk of mouthing a tautology, one can say that the distinctive feature of case study is its focus on a case in all of its complexity—rather than on individual variables and their interrelationships, which tends to be a point of focus for large N research.

There was a time when case study was central to the science of psychology. Breuer and Freud’s (2001) famous studies of “hysteria” (originally published in 1895) provide an early and outstanding example of the genre in this respect, but as with many of the other styles of social science research, the influence of case studies waned with the rise of much more powerful investigative techniques—including experimental methods—driven by the deployment of new statistical technologies. Idiographic studies consequently gave way to the current fashion for statistically driven forms of analysis that focus on causes and cross-sectional associations between variables rather than idiographic complexity.

In the example that follows, we will look at the consequences of a traumatic brain injury (TBI) on just one individual. The analysis is based on an interview with a person suffering from such an injury, and it was one of 32 interviews carried out with people who had experienced a TBI. The objective of the original research was to develop an outcome measure for TBI that was sensitive to the sufferer’s (rather than the health professional’s) point of view. In our original study (see Morris et al., 2005 ), interviews were also undertaken with 27 carers of the injured with the intention of comparing their perceptions of TBI to those of the people for whom they cared. A sample survey was also undertaken to elicit views about TBI from a much wider population of patients than was studied via interview.

In the introduction, I referred to Habermas and the concept of the lifeworld. Lifeworld ( Lebenswelt ) is a concept that first arose from 20th-century German philosophy. It constituted a specific focus for the work of Alfred Schutz (see, e.g., Schutz & Luckmann, 1974 ). Schutz ( 1974 ) described the lifeworld as “that province of reality which the wide-awake and normal adult simply takes-for-granted in an attitude of common sense” (p. 3). Indeed, it was the routine and taken-for-granted quality of such a world that fascinated Schutz. As applied to the worlds of those with head injuries, the concept has particular resonance because head injuries often result in that taken-for-granted quality being disrupted and fragmented, ending in what Russian neuropsychologist A. R. Luria ( 1975 ) once described as “shattered” worlds. As well as providing another excellent example of a case study, Luria’s work is also pertinent because he sometimes argued for a “romantic science” of brain injury—that is, a science that sought to grasp the worldview of the injured patient by paying attention to an unfolding and detailed personal “story” of the individual with the head injury as well as to the neurological changes and deficits associated with the injury itself. In what follows, I shall attempt to demonstrate how CTA might be used to underpin such an approach.

In the original research, we began analysis by a straightforward reading of the interview transcripts. Unfortunately, a simple reading of a text or an interview can, strangely, mislead the reader into thinking that some issues or themes are more important than is warranted by the contents of the text. How that comes about is not always clear, but it probably has something to do with a desire to develop “findings” and our natural capacity to overlook the familiar in favor of the unusual. For that reason alone, it is always useful to subject any text to some kind of concordance analysis—that is, generating a simple frequency list of words used in an interview or text. Given the current state of technology, one might even speak these days of using text-mining procedures such as the aforementioned Clementine to undertake such a task. By using Clementine , and as we have seen, it is also possible to measure the strength of co-occurrence links between elements (i.e., words and concepts) in the entire data set (in this example, 32 interviews), though for a single interview these aims can just as easily be achieved using much simpler, low-tech strategies.
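For a single transcript, such a concordance-style frequency list really does need only a few lines of standard-library code. The toy transcript below is invented; the phrase patterns anticipate the past- versus present-tense self-reference examined in the original analysis.

```python
import re
from collections import Counter

# A toy transcript standing in for one TBI interview (invented text).
transcript = """
I was struck by a car. I was in hospital for months.
Before the accident I used to walk everywhere. I am slower now.
"""

# Concordance-style frequency list of all words in the transcript.
words = re.findall(r"[a-z']+", transcript.lower())
word_freq = Counter(words)

# Phrase counts surface constructions that a bare word list hides,
# such as the contrast between "I was"/"I used to" and "I am".
i_was = len(re.findall(r"\bI (?:was|used to)\b", transcript))
i_am = len(re.findall(r"\bI am\b", transcript))

print(word_freq.most_common(3))
print("past:", i_was, "present:", i_am)
```

With a full set of 32 transcripts, the same counters would simply be accumulated across files; co-occurrence links could then be computed as in the earlier interview example.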

When all 32 interviews were entered into the database, several common themes emerged. For example, it was clear that “time” entered into the semantic web in a prominent manner, linked to such things as “change,” “injury,” “the body,” and what can only be called the “I was.” Indeed, time runs through the 32 stories in many guises, and the centrality of time is a reflection of storytelling and narrative recounting in general—chronology, as we have noted, being a defining feature of all storytelling (Ricoeur, 1984 ). Thus, sufferers both recounted the events surrounding their injury and provided accounts as to how the injuries affected their current life and future hopes. As to time present, much of the patient story circled around activities of daily living—walking, working, talking, looking, feeling, remembering, and so forth.

Understandably, the word and the concept of “injury” featured prominently in the interviews, though it was a word most commonly associated with discussions of the physical consequences of injury. There were many references in that respect to injured arms, legs, hands, and eyes. There were also references to “mind”—though far less frequently than references to the body and to body parts. Perhaps none of this is surprising. However, one of the most frequent concepts in the semantic mix was the “I was” (716 references). The statement “I was,” or “I used to,” was, in turn, strongly connected to terms such as “the accident” and “change.” Interestingly, the “I was” overwhelmingly eclipsed the “I am” in the interview data (the latter with just 63 references). This focus on the “I was” appears in many guises. For example, it is often associated with the use of the passive voice: “I was struck by a car,” “I was put on the toilet,” “I was shipped from there then, transferred to [Cityville],” “I got told that I would never be able …,” “I was sat in a room,” and so forth. In short, the “I was” is often associated with things, people, and events acting on the injured person. More important, however, the appearance of the “I was” is often used to preface statements signifying a state of loss or change in the person’s course of life—that is, as an indicator for talk about the patient’s shattered world. For example, Patient 7122 stated,

The main (effect) at the moment is I’m not actually with my children, I can’t really be their mum at the moment. I was a caring Mum, but I can’t sort of do the things that I want to be able to do like take them to school. I can’t really do a lot on my own. Like crossing the roads.

Another patient stated,

Everything is completely changed. The way I was … I can’t really do anything at the moment. I mean my German, my English, everything’s gone. Job possibilities is out the window. Everything is just out of the window … I just think about it all the time actually every day you know. You know it has destroyed me anyway, but if I really think about what has happened I would just destroy myself.

Each of these quotations, in its own way, serves to emphasize how life has changed and how the patient’s world has changed. In that respect, we can say that one of the major outcomes arising from TBI may be substantial “biographical disruption” (Bury, 1982 ), whereupon key features of an individual’s life course are radically altered forever. Indeed, as Becker ( 1997 , p. 37) argued in relation to a wide array of life events, “When their health is suddenly disrupted, people are thrown into chaos. Illness challenges one’s knowledge of one’s body. It defies orderliness. People experience the time before their illness and its aftermath as two separate entities.” Indeed, this notion of a cusp in personal biography is particularly well illustrated by Luria’s patient Zasetsky; the latter often refers to being a “newborn creature” (Luria, 1975 , pp. 24, 88), a shadow of a former self (p. 25), and as having his past “wiped out” (p. 116).
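The relative weight of the “I was” against the “I am” (716 versus 63 references in the full data set) is exactly the kind of count that can be mechanized. A minimal sketch, run here against one of the quoted fragments above; the helper name is my own invention:

```python
import re

def phrase_count(text, phrase):
    """Count occurrences of a multiword phrase, ignoring case and
    variable whitespace between the words."""
    pattern = r"\b" + r"\s+".join(map(re.escape, phrase.split())) + r"\b"
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

fragment = ("I was a caring Mum, but I can't sort of do the things "
            "that I want to be able to do like take them to school.")
print(phrase_count(fragment, "I was"))  # occurrences of the 'I was'
print(phrase_count(fragment, "I am"))   # occurrences of the 'I am'
```

Applied across all transcripts, counts of this kind give exactly the sort of frequency contrast reported above, though the qualitative reading of what the “I was” prefaces still has to be done by the analyst.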

However, none of this tells us about how these factors come together in the life and experience of one individual. When we focus on an entire set of interviews, we necessarily lose the rich detail of personal experience and tend instead to rely on a conceptual rather than a graphic description of effects and consequences (to focus on, say, “memory loss,” rather than loss of memory about family life). The contents of Figure 19.3 attempt to correct that vision. Figure 19.3 records all the things that a particular respondent (Patient 7011) used to do and liked doing. It records all the things that he says he can no longer do (at 1 year after injury), and it records all the consequences that he suffered from his head injury at the time of the interview. Thus, we see references to epilepsy (his “fits”), paranoia (the patient spoke of his suspicions concerning other people, people scheming behind his back, and his inability to trust others), deafness, depression, and so forth. Note that, although I have inserted a future tense into the web (“I will”), such a statement never appeared in the transcript. I have set it there for emphasis and to show how, for this person, the future fails to connect to any of the other features of his world except in a negative way. Thus, he states at one point that he cannot think of the future because it makes him feel depressed (see Figure 19.3 ). The line thickness of the arcs reflects the emphasis that the subject placed on the relevant “outcomes” in relation to the “I was” and the “now” during the interview. Thus, we see that factors affecting his concentration and balance loom large, but that he is also concerned about his being dependent on others, his epileptic fits, and his being unable to work and drive a vehicle. The schism in his life between what he used to do, what he cannot now do, and his current state of being is nicely represented in the CTA diagram.

The shattered world of Patient 7011. Thickness of lines (arcs) is proportional to the frequency of reference to the “outcome” by the patient during the interview.

What have we gained from executing this kind of analysis? For a start, we have moved away from a focus on variables, frequencies, and causal connections (e.g., a focus on the proportion of people with TBI who suffer from memory problems or memory problems and speech problems) and refocused on how the multiple consequences of a TBI link together in one person. In short, instead of developing a narrative of acting variables, we have emphasized a narrative of an acting individual (Abbott, 1992 , p. 62). Second, it has enabled us to see how the consequences of a TBI connect to an actual lifeworld (and not simply an injured body). So the patient is not viewed just as having a series of discrete problems such as balancing, or staying awake, which is the usual way of assessing outcomes, but as someone struggling to come to terms with an objective world of changed things, people, and activities (missing work is not, for example, routinely considered an outcome of head injury). Third, by focusing on what the patient was saying, we gain insight into something that is simply not visible by concentrating on single outcomes or symptoms alone—namely, the void that rests at the center of the interview, what I have called the “I was.” Fourth, we have contributed to understanding a type, because the case that we have read about is not simply a case of “John” or “Jane” but a case of TBI, and in that respect it can add to many other accounts of what it is like to experience head injury—including one of the best documented of all TBI cases, that of Zasetsky. Finally, we have opened up the possibility of developing and comparing cognitive maps (Carley, 1993 ) for different individuals and thereby gained insight into how alternative cognitive frames of the world arise and operate.

Tracing the Biography of a Concept

In the previous sections, I emphasized the virtues of CTA for its capacity to link into a data set in its entirety—and how the use of CTA can counter any tendency of a researcher to be selective and partial in the presentation and interpretation of information contained in interviews and documents. However, that does not mean that we always must take an entire document or interview as the data source. Indeed, it is possible to select (on rational and explicit grounds) sections of documentation and to conduct the CTA on the chosen portions. In the example that follows, I do just that. The sections that I chose to concentrate on are titles and abstracts of academic papers—rather than the full texts. The research on which the following is based is concerned with a biography of a concept and is being conducted in conjunction with a Ph.D. student of mine, Joanne Wilson. Joanne thinks of this component of the study more in terms of a “scoping study” than of a biographical study, and that, too, is a useful framework for structuring the context in which CTA can be used. Scoping studies (Arksey & O’Malley, 2005 ) are increasingly used in health-related research to “map the field” and to get a sense of the range of work that has been conducted on a given topic. Such studies can also be used to refine research questions and research designs. In our investigation, the scoping study was centered on the concept of well-being. Since 2010, well-being has emerged as an important research target for governments and corporations as well as for academics, yet it is far from clear to what the term refers. Given the ambiguity of meaning, it is clear that a scoping review, rather than either a systematic review or a narrative review of available literature, would be best suited to our goals.

The origins of the concept of well-being can be traced at least as far back as the 4th century BC, when philosophers produced normative explanations of the good life (e.g., eudaimonia, hedonia, and harmony). However, contemporary interest in the concept seems to have been regenerated by the concerns of economists and, most recently, psychologists. These days, governments are equally concerned with measuring well-being to inform policy and conduct surveys of well-being to assess the state of the nation (see, e.g., Office for National Statistics, 2012 )—but what are they assessing?

We adopted a two-step process to address the research question, “What is the meaning of ‘well-being’ in the context of public policy?” First, we explored the existing thesauri of eight databases to establish those higher order headings (if any) under which articles with relevance to well-being might be cataloged. Thus, we searched the following databases: Cumulative Index of Nursing and Allied Health Literature, EconLit, Health Management Information Consortium, Medline, Philosopher’s Index, PsycINFO, Sociological Abstracts, and Worldwide Political Science Abstracts. Each of these databases adopts keyword-controlled vocabularies. In other words, they use inbuilt statistical procedures to link core terms to a set lexis of phrases that depict the concepts contained in the database. Table 19.2 shows each database and its associated taxonomy. The contents of Table 19.2 point toward a linguistic infrastructure in terms of which academic discourse is conducted, and our task was to extract from this infrastructure the semantic web wherein the concept of well-being is situated. We limited the thesaurus terms to well-being and its variants (i.e., wellbeing or well being). If the term was returned, it was then exploded to identify any associated terms.

To develop the conceptual map, we conducted a free-text search for well-being and its variants within the context of public policy across the same databases. We orchestrated these searches across five time frames: January 1990 to December 1994, January 1995 to December 1999, January 2000 to December 2004, January 2005 to December 2009, and January 2010 to October 2011. Naturally, different disciplines use different words to refer to well-being, each of which may wax and wane in usage over time. The searches thus sought to quantitatively capture any changes in the use and subsequent prevalence of well-being and any referenced terms (i.e., to trace a biography).
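Computationally, tracing prevalence across the five time frames amounts to binning dated records and counting mentions of the term's variants. This is a hedged sketch only; the records, and the counts they produce, are invented placeholders rather than the study's data:

```python
import re
from collections import Counter

# The five search windows used in the scoping study.
TIME_FRAMES = [(1990, 1994), (1995, 1999), (2000, 2004), (2005, 2009), (2010, 2011)]

# Matches well-being, wellbeing, and well being.
WELLBEING = re.compile(r"\bwell[- ]?being\b", re.IGNORECASE)

def prevalence(records):
    """Count, per time frame, the abstracts mentioning any variant of well-being.
    Each record is a (publication_year, abstract_text) pair."""
    counts = Counter()
    for year, text in records:
        if not WELLBEING.search(text):
            continue
        for start, end in TIME_FRAMES:
            if start <= year <= end:
                counts[(start, end)] += 1
    return counts

records = [  # hypothetical abstracts
    (1992, "Measuring wellbeing in national policy."),
    (2006, "Subjective well-being and public policy."),
    (2010, "Happiness, well being, and economic growth."),
]
print(prevalence(records))
```

Extending the tally to allied terms (happiness, quality of life, and so on) is a matter of adding further patterns, which is how shifts in vocabulary over time would be made visible.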

It is important to note that we did not intend to provide an exhaustive, systematic search of all the relevant literature. Rather, we wanted to establish the prevalence of well-being and any referenced (i.e., allied) terms within the context of public policy. This has the advantage of ensuring that any identified words are grounded in the literature (i.e., they represent words actually used by researchers to talk and write about well-being in policy settings). The searches were limited to abstracts to increase the specificity, albeit at some expense to sensitivity, with which we could identify relevant articles.

We also employed inclusion/exclusion criteria to facilitate the process by which we selected articles, thereby minimizing any potential bias arising from our subjective interpretations. We included independent, stand-alone investigations relevant to the study’s objectives (i.e., concerned with well-being in the context of public policy), which focused on well-being as a central outcome or process and which made explicit reference to “well-being” and “public policy” in either the title or the abstract. We excluded articles that were irrelevant to the study’s objectives, those that used noun adjuncts to focus on the well-being of specific populations (i.e., children, elderly, women) and contexts (e.g., retirement village), and those that focused on deprivation or poverty unless poverty indices were used to understand well-being as opposed to social exclusion. We also excluded book reviews and abstracts describing a compendium of studies.

Using these criteria, Joanne Wilson conducted the review and recorded the results on a template developed specifically for the project, organized chronologically across each database and timeframe. Results were scrutinized by two other colleagues to ensure the validity of the search strategy and the findings. Any concerns regarding the eligibility of studies for inclusion were discussed among the research team. I then analyzed the co-occurrence of the key terms in the database. The resultant conceptual map is shown in Figure 19.4.

The position of a concept in a network—a study of “well-being.” Node size is proportional to the frequency of terms in 54 selected abstracts. Line thickness is proportional to the co-occurrence of two terms in any phrase of three words (e.g., subjective well-being, economics of well-being, well-being and development).

The diagram can be interpreted as a visualization of a conceptual space. So, when academics write about well-being in the context of public policy, they tend to connect the discussion to the other terms in the matrix. “Happiness,” “health,” “economic,” and “subjective,” for example, are relatively dominant terms in the matrix. The node size of these words suggests that references to such entities are only slightly less frequent than references to well-being itself. However, when we come to analyze how well-being is talked about in detail, we see specific connections come to the fore. Thus, the data imply that talk of “subjective well-being” far outweighs discussion of “social well-being” or “economic well-being.” Happiness tends to act as an independent node (there is only one occurrence of happiness and well-being), probably suggesting that “happiness” is acting as a synonym for well-being. Quality of life is poorly represented in the abstracts, and its connection to most of the other concepts in the space is very weak—confirming, perhaps, that quality of life is unrelated to contemporary discussions of well-being and happiness. The existence of “measures” points to a distinct concern to assess and to quantify expressions of happiness, well-being, economic growth, and gross domestic product. More important, underlying this detail there are grounds for suggesting that there are in fact a number of tensions in the literature on well-being.
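The co-occurrence measure behind Figure 19.4 (two terms appearing within any phrase of three words) can be approximated with a sliding window. A sketch under stated assumptions: the text fragment is invented, and the function name is mine rather than anything from the study:

```python
import re
from collections import Counter
from itertools import combinations

def cooccurrence(text, terms, window=3):
    """Count how often pairs of tracked terms fall within the same
    window of three consecutive words."""
    # Keep hyphens so "well-being" survives as one token.
    words = re.findall(r"[a-z-]+", text.lower())
    pairs = Counter()
    for i in range(len(words) - window + 1):
        # Tracked terms present in this window, in a canonical order.
        present = sorted(set(words[i:i + window]) & terms)
        for pair in combinations(present, 2):
            pairs[pair] += 1
    return pairs

# Hypothetical string of abstract text.
text = "subjective well-being and the economics of well-being"
links = cooccurrence(text, {"subjective", "well-being", "economics"})
print(links)
```

In a map such as Figure 19.4, node sizes would then be drawn from the raw term frequencies and line thicknesses from co-occurrence counts of this kind.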

On the one hand, the results point toward an understanding of well-being as a property of individuals—as something that they feel or experience. Such a discourse is reflected through the use of words like happiness, subjective , and individual . This individualistic and subjective frame has grown in influence over the past decade in particular, and one of the problems with it is that it tends toward a somewhat content-free conceptualization of well-being. To feel a sense of well-being, one merely states that one is in a state of well-being; to be happy, one merely proclaims that one is happy (cf., Office for National Statistics, 2012 ). It is reminiscent of the conditions portrayed in Aldous Huxley’s Brave New World , wherein the rulers of a closely managed society gave their priority to maintaining order and ensuring the happiness of the greatest number—in the absence of attention to justice or freedom of thought or any sense of duty and obligation to others, many of whom were systematically bred in “the hatchery” as slaves.

On the other hand, there is some intimation in our web that the notion of well-being cannot be captured entirely by reference to individuals alone and that there are other dimensions to the concept—that well-being is the outcome or product of, say, access to reasonable incomes, to safe environments, to “development,” and to health and welfare. It is a vision hinted at by the inclusion of those very terms in the network. These different concepts necessarily give rise to important differences concerning how well-being is identified and measured and therefore what policies are most likely to advance well-being. In the first kind of conceptualization, we might improve well-being merely by dispensing what Huxley referred to as “soma” (a superdrug that ensured feelings of happiness and elation); in the other case, however, we would need to invest in economic, human, and social capital as the infrastructure for well-being. In any event and even at this nascent level, we can see how CTA can begin to tease out conceptual complexities and theoretical positions in what is otherwise routine textual data.

Putting the Content of Documents in Their Place

I suggested in my introduction that CTA was a method of analysis—not a method of data collection or a form of research design. As such, it does not necessarily inveigle us into any specific forms of either design or data collection, though designs and methods that rely on quantification are dominant. In this closing section, however, I want to raise the issue as to how we should position a study of content in our research strategies as a whole. We must keep in mind that documents and records always exist in a context and that while what is “in” the document may be considered central, a good research plan can often encompass a variety of ways of looking at how content links to context. Hence, in what follows, I intend to outline how an analysis of content might be combined with other ways of looking at a record or text and even how the analysis of content might be positioned as secondary to an examination of a document or record. The discussion calls on a much broader analysis, as presented in Prior ( 2011 ).

I have already stated that basic forms of CTA can serve as an important point of departure for many types of data analysis—for example, as discourse analysis. Naturally, whenever “discourse” is invoked, there is at least some recognition of the notion that words might play a part in structuring the world rather than merely reporting on it or describing it (as is the case with the 2002 State of the Nation address that was quoted in the section “Units of Analysis”). Thus, for example, there is a considerable tradition within social studies of science and technology for examining the place of scientific rhetoric in structuring notions of “nature” and the position of human beings (especially as scientists) within nature (see, e.g., work by Bazerman, 1988 ; Gilbert & Mulkay, 1984 ; and Kay, 2000 ). Nevertheless, little, if any, of that scholarship situates documents as anything other than inert objects, either constructed by or waiting patiently to be activated by scientists.

However, in the tradition of the ethnomethodologists (Heritage, 1991 ) and some adherents of discourse analysis, it is also possible to argue that documents might be more fruitfully approached as a “topic” (Zimmerman & Pollner, 1971 ) rather than a “resource” (to be scanned for content), in which case the focus would be on the ways in which any given document came to assume its present content and structure. In the field of documentation, these latter approaches are akin to what Foucault ( 1970 ) might have called an “archaeology of documentation” and are well represented in studies of such things as how crime, suicide, and other statistics and associated official reports and policy documents are routinely generated. That, too, is a legitimate point of research focus, and it can often be worth examining the genesis of, say, suicide statistics or statistics about the prevalence of mental disorder in a community as well as using such statistics as a basis for statistical modeling.

Unfortunately, the distinction between topic and resource is not always easy to maintain—especially in the hurly-burly of doing empirical research (see, e.g., Prior, 2003 ). Putting an emphasis on “topic,” however, can open a further dimension of research that concerns the ways in which documents function in the everyday world. And, as I have already hinted, when we focus on function, it becomes apparent that documents serve not merely as containers of content but also very often as active agents in episodes of interaction and schemes of social organization. In this vein, one can begin to think of an ethnography of documentation. Therein, the key research questions revolve around the ways in which documents are used and integrated into specific kinds of organizational settings, as well as how documents are exchanged and how they circulate within such settings. Clearly, documents carry content—words, images, plans, ideas, patterns, and so forth—but the manner in which such material is called on and manipulated, and the way in which it functions, cannot be determined (though it may be constrained) by an analysis of content. Thus, Harper’s ( 1998 ) study of the use of economic reports inside the International Monetary Fund provides various examples of how “reports” can function to both differentiate and cohere work groups. In the same way, Henderson ( 1995 ) illustrated how engineering sketches and drawings can serve as what she calls conscription devices on the workshop floor.

Documents constitute a form of what Latour ( 1986 ) would refer to as “immutable mobiles,” and with an eye on the mobility of documents, it is worth noting an emerging interest in histories of knowledge that seek to examine how the same documents have been received and absorbed quite differently by different cultural networks (see, e.g., Burke, 2000 ). A parallel concern has arisen with regard to the newly emergent “geographies of knowledge” (see, e.g., Livingstone, 2005 ). In the history of science, there has also been an expressed interest in the biography of scientific objects (Latour, 1987 , p. 262) or of “epistemic things” (Rheinberger, 2000 )—tracing the history of objects independent of the “inventors” and “discoverers” to which such objects are conventionally attached. It is an approach that could be easily extended to the study of documents and is partly reflected in the earlier discussion concerning the meaning of the concept of well-being. Note how in all these cases a key consideration is how words and documents as “things” circulate and translate from one culture to another; issues of content are secondary.

Studying how documents are used and how they circulate can constitute an important area of research in its own right. Yet even those who focus on document use can be overly anthropocentric and consequently overemphasize the potency of human action in relation to written text. In that light, it is interesting to consider ways in which we might reverse that emphasis and instead study the potency of text and the manner in which documents can influence organizational activities as well as reflect them. Thus, Dorothy Winsor ( 1999 ), for example, examined the ways in which work orders drafted by engineers not only shape and fashion the practices and activities of engineering technicians but also construct “two different worlds” on the workshop floor.

In light of this, I will suggest a typology (Table 19.3 ) of the ways in which documents have come to be and can be considered in social research.

While accepting that no form of categorical classification can capture the inherent fluidity of the world, its actors, and its objects, Table 19.3 aims to offer some understanding of the various ways in which documents have been dealt with by social researchers. Thus, approaches that fit into Cell 1 have been dominant in the history of social science generally. Therein, documents (especially as text) have been analyzed and coded for what they contain in the way of descriptions, reports, images, representations, and accounts. In short, they have been scoured for evidence. Data analysis strategies concentrate almost entirely on what is in the “text” (via various forms of CTA). This emphasis on content is carried over into Cell 2–type approaches, the key difference being that the analysis is concerned with how document content comes into being. The attention here is usually on the conceptual architecture and sociotechnical procedures by means of which written reports, descriptions, statistical data, and so forth are generated. Various kinds of discourse analysis have been used to unravel the conceptual issues, while a focus on the sociotechnical and rule-based procedures by means of which clinical, police, social work, and other forms of records and reports are constructed has been well represented in the work of ethnomethodologists (see Prior, 2011 ). In contrast, in Cell 3 the research focus is on the ways in which documents are called on as a resource by various and different kinds of “user.” Here, concerns with document content or how a document has come into being are marginal, and the analysis concentrates on the relationship between specific documents and their use or recruitment by identifiable human actors for purposeful ends. I have pointed to some studies of the latter kind in earlier paragraphs (e.g., Henderson, 1995 ). Finally, the approaches that fit into Cell 4 also position content as secondary.
The emphasis here is on how documents as “things” function in schemes of social activity and on how such things can drive, rather than be driven by, human actors. In short, the spotlight is on the vita activa of documentation, and I have provided numerous examples of documents as actors in other publications (see Prior, 2003 , 2008 , 2011 ).

Content analysis was a method originally developed to analyze mass media “messages” in an age of radio and newspaper print, well before the digital age. Unfortunately, CTA struggles to break free of its origins and continues to be associated with the quantitative analysis of “communication.” Yet, as I have argued, there is no rational reason why its use must be restricted to such a narrow field, because it can be used to analyze printed text and interview data (as well as other forms of inscription) in various settings. What it cannot overcome is the fact that it is a method of analysis and not a method of data collection. However, as I have shown, it is an analytical strategy that can be integrated into a variety of research designs and approaches—cross-sectional and longitudinal survey designs, ethnography and other forms of qualitative design, and secondary analysis of preexisting data sets. Even as a method of analysis, it is flexible and can be used either independently of other methods or in conjunction with them. As we have seen, it is easily merged with various forms of discourse analysis and can be used as an exploratory method or as a means of verification. Above all, perhaps, it crosses the divide between “quantitative” and “qualitative” modes of inquiry in social research and offers a new dimension to the meaning of mixed methods research. I recommend it.

Abbott, A. ( 1992 ). What do cases do? In C. C. Ragin & H. S. Becker (Eds.), What is a case? Exploring the foundations of social inquiry (pp. 53–82). Cambridge, England: Cambridge University Press.

Altheide, D. L. ( 1987 ). Ethnographic content analysis.   Qualitative Sociology, 10, 65–77.

Arksey, H. , & O’Malley, L. ( 2005 ). Scoping studies: Towards a methodological framework.   International Journal of Social Research Methodology, 8, 19–32.

Babbie, E. ( 2013 ). The practice of social research (13th ed.) Belmont, CA: Wadsworth.

Bazerman, C. ( 1988 ). Shaping written knowledge. The genre and activity of the experimental article in science . Madison: University of Wisconsin Press.

Becker, G. ( 1997 ). Disrupted lives. How people create meaning in a chaotic world . London, England: University of California Press.

Berelson, B. ( 1952 ). Content analysis in communication research . Glencoe, IL: Free Press.

Bowker, G. C. , & Star, S. L. ( 1999 ). Sorting things out. Classification and its consequences . Cambridge, MA: MIT Press.

Braun, V. , & Clarke, V. ( 2006 ). Using thematic analysis in psychology.   Qualitative Research in Psychology, 3, 77–101.

Breuer, J. , & Freud, S. ( 2001 ). Studies on hysteria. In J. Strachey (Ed.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 2). London, England: Vintage.

Bryman, A. ( 2008 ). Social research methods (3rd ed.). Oxford, England: Oxford University Press.

Burke, P. ( 2000 ). A social history of knowledge. From Gutenberg to Diderot . Cambridge, MA: Polity Press.

Bury, M. ( 1982 ). Chronic illness as biographical disruption.   Sociology of Health and Illness, 4, 167–182.

Carley, K. ( 1993 ). Coding choices for textual analysis. A comparison of content analysis and map analysis.   Sociological Methodology, 23, 75–126.

Charon, R. ( 2006 ). Narrative medicine. Honoring the stories of illness . New York, NY: Oxford University Press.

Creswell, J. W. ( 2007 ). Designing and conducting mixed methods research . Thousand Oaks, CA: Sage.

Davison, C. , Davey-Smith, G. , & Frankel, S. ( 1991 ). Lay epidemiology and the prevention paradox.   Sociology of Health & Illness, 13, 1–19.

Evans, M. , Prout, H. , Prior, L. , Tapper-Jones, L. , & Butler, C. ( 2007 ). A qualitative study of lay beliefs about influenza.   British Journal of General Practice, 57, 352–358.

Foucault, M. ( 1970 ). The order of things. An archaeology of the human sciences . London, England: Tavistock.

Frank, A. ( 1995 ). The wounded storyteller: Body, illness, and ethics . Chicago, IL: University of Chicago Press.

Gerring, J. ( 2004 ). What is a case study, and what is it good for?   The American Political Science Review, 98, 341–354.

Gilbert, G. N. , & Mulkay, M. ( 1984 ). Opening Pandora’s box. A sociological analysis of scientists’ discourse . Cambridge, England: Cambridge University Press.

Glaser, B. G. , & Strauss, A. L. ( 1967 ). The discovery of grounded theory. Strategies for qualitative research . New York, NY: Aldine de Gruyter.

Goode, W. J. , & Hatt, P. K. ( 1952 ). Methods in social research . New York, NY: McGraw–Hill.

Greimas, A. J. ( 1970 ). Du Sens. Essays sémiotiques . Paris, France: Ėditions du Seuil.

Habermas, J. ( 1987 ). The theory of communicative action: Vol. 2, A critique of functionalist reason ( T. McCarthy , Trans.). Cambridge, MA: Polity Press.

Harper, R. ( 1998 ). Inside the IMF. An ethnography of documents, technology, and organizational action . London, England: Academic Press.

Henderson, K. ( 1995 ). The political career of a prototype. Visual representation in design engineering.   Social Problems, 42, 274–299.

Heritage, J. ( 1991 ). Garkfinkel and ethnomethodology . Cambridge, MA: Polity Press.

Hydén, L-C. ( 1997 ). Illness and narrative.   Sociology of Health & Illness, 19, 48–69.

Kahn, R. , & Cannell, C. ( 1957 ). The dynamics of interviewing. Theory, technique and cases . New York, NY: Wiley.

Kay, L. E. ( 2000 ). Who wrote the book of life? A history of the genetic code . Stanford, CA: Stanford University Press.

Kleinman, A. , Eisenberg, L. , & Good, B. ( 1978 ). Culture, illness & care, clinical lessons from anthropologic and cross-cultural research.   Annals of Internal Medicine, 88, 251–258.

Kracauer, S. ( 1952 ). The challenge of qualitative content analysis.   Public Opinion Quarterly, Special Issue on International Communications Research (1952–53), 16, 631–642.

Krippendorf, K. ( 2004 ). Content analysis: An introduction to its methodology (2nd ed.). Thousand Oaks, CA: Sage.

Latour, B. ( 1986 ). Visualization and cognition: Thinking with eyes and hands. Knowledge and Society, Studies in Sociology of Culture, Past and Present, 6, 1–40.

Latour, B. ( 1987 ). Science in action. How to follow scientists and engineers through society . Milton Keynes, England: Open University Press.

Livingstone, D. N. ( 2005 ). Text, talk, and testimony: Geographical reflections on scientific habits. An afterword.   British Society for the History of Science, 38, 93–100.

Luria, A. R. ( 1975 ). The man with the shattered world. A history of a brain wound ( L. Solotaroff , Trans.). Harmondsworth, England: Penguin.

Martin, A. , & Lynch, M. ( 2009 ). Counting things and counting people: The practices and politics of counting.   Social Problems, 56, 243–266.

Merton, R. K. ( 1968 ). Social theory and social structure . New York, NY: Free Press.

Morgan, D. L. ( 1993 ). Qualitative content analysis. A guide to paths not taken.   Qualitative Health Research, 2, 112–121.

Morgan, D. L. ( 1998 ). Practical strategies for combining qualitative and quantitative methods.   Qualitative Health Research, 8, 362–376.

Morris, P. G. , Prior, L. , Deb, S. , Lewis, G. , Mayle, W. , Burrow, C. E. , & Bryant, E. ( 2005 ). Patients’ views on outcome following head injury: A qualitative study,   BMC Family Practice, 6, 30.

Neuendorf, K. A. ( 2002 ). The content analysis guidebook . Thousand Oaks: CA: Sage.

Newman, J. , & Vidler, E. ( 2006 ). Discriminating customers, responsible patients, empowered users: Consumerism and the modernisation of health care,   Journal of Social Policy, 35, 193–210.

Office for National Statistics. ( 2012 ). First ONS annual experimental subjective well-being results . London, England: Office for National Statistics. Retrieved from http://www.ons.gov.uk/ons/dcp171766_272294.pdf

Prior, L. ( 2003 ). Using documents in social research . London, England: Sage.

Prior, L. ( 2008 ). Repositioning documents in social research.   Sociology. Special Issue on Research Methods, 42, 821–836.

Prior, L. ( 2011 ). Using documents and records in social research (4 vols.). London, England: Sage.

Prior, L. , Evans, M. , & Prout, H. ( 2011 ). Talking about colds and flu: The lay diagnosis of two common illnesses among older British people.   Social Science and Medicine, 73, 922–928.

Prior, L. , Hughes, D. , & Peckham, S. ( 2012 ) The discursive turn in policy analysis and the validation of policy stories.   Journal of Social Policy, 41, 271–289.

Ragin, C. C. , & Becker, H. S. ( 1992 ). What is a case? Exploring the foundations of social inquiry . Cambridge, England: Cambridge University Press.

Rheinberger, H.-J. ( 2000 ). Cytoplasmic particles. The trajectory of a scientific object. In Daston, L. (Ed.), Biographies of scientific objects (pp. 270–294). Chicago, IL: Chicago University Press.

Ricoeur, P. ( 1984 ). Time and narrative (Vol. 1., K. McLaughlin & D, Pellauer, Trans.). Chicago, IL: University of Chicago Press.

Roe, E. ( 1994 ). Narrative policy analysis, theory and practice . Durham, NC: Duke University Press.

Ryan, G. W. , & Bernard, H. R. ( 2000 ). Data management and analysis methods. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 769–802). Thousand Oaks, CA: Sage.

Schutz, A. , & Luckman, T. ( 1974 ). The structures of the life-world (R. M. Zaner & H. T. Engelhardt, Trans.). London, England: Heinemann.

SPSS. ( 2007 ). Text mining for Clementine . 12.0 User’s Guide. Chicago, IL: SPSS.

Weber, R. P. ( 1990 ). Basic content analysis . Newbury Park, CA: Sage.

Winsor, D. ( 1999 ). Genre and activity systems. The role of documentation in maintaining and changing engineering activity systems.   Written Communication, 16, 200–224.

Zimmerman, D. H. , & Pollner, M. ( 1971 ). The everyday world as a phenomenon. In J. D. Douglas (Ed.), Understanding everyday life (pp. 80–103). London, England: Routledge & Kegan Paul.

Neurology Research and Practice

How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data

Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in " research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...) " [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as " a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ] . Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods , including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Fig. 1 Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim of identifying possible causes of delay and/or other causes of sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed.
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

Fig. 2 Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with ‘theoretical’ terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

Fig. 3 From data collection to data analysis

Attributions for icons: see Fig. 2, also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project
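As a minimal illustration of how coding makes raw data sortable, the sketch below indexes a handful of invented, pre-coded segments (the sources, texts and codes are hypothetical) so that all material tagged with a given code can be pulled together across data sources: the core operation that tools such as NVivo or MAXQDA perform at scale.

```python
from collections import defaultdict

# Hypothetical coded segments: each tuple is (data source, raw text, codes).
# In practice these tags would be assigned during analysis in a tool
# such as NVivo, MAXQDA or ATLAS.ti.
segments = [
    ("SOP", "Tele-neurology consults are requested via the intranet form.",
     ["tele-neurology", "procedure"]),
    ("ER observation", "The resident phoned the stroke specialist directly.",
     ["tele-neurology", "workaround"]),
    ("Staff interview", "The form is too slow at night, so we just call.",
     ["tele-neurology", "workaround", "delay"]),
]

def index_by_code(segments):
    """Group coded segments by code, making the raw data sortable."""
    index = defaultdict(list)
    for source, text, codes in segments:
        for code in codes:
            index[code].append((source, text))
    return dict(index)

# Extract every segment about tele-neurology across all data sources:
for source, text in index_by_code(segments)["tele-neurology"]:
    print(f"[{source}] {text}")
```

Grouping, summarising and categorising codes then operates on this index rather than on the raw transcripts.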

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

Fig. 4 Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this are pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.
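As a minimal sketch of this consolidation step (with hypothetical codes and transcript segments), a simple agreement check can flag the segments that need team discussion:

```python
# Sketch: comparing two coders' independent code assignments on the same
# transcript segments (hypothetical codes; real projects use a coding tree).
coder_a = ["trust", "delay", "trust", "cost", "delay", "trust"]
coder_b = ["trust", "delay", "cost", "cost", "delay", "trust"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = agreements / len(coder_a)
print(f"{percent_agreement:.0%} agreement")  # share of identically coded segments

# Segments where the coders disagree are flagged for team discussion,
# after which code definitions are refined and the segments re-coded.
disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print("discuss segments:", disagreements)
```

Note that such a count supports the discussion between coders; it does not replace it.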

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding overlaps between co-coders. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate those who recruited the study participants from those who collected and analysed the data. Experience even shows that it may be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher who conducted the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for the interpretation of the data, e.g. on whether something might have been meant as a joke [ 18 ].

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is applied irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In that case, the same criterion should be applied to quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points

Acknowledgements

Abbreviations

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding

No external funding.

Availability of data and materials

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

How To Conduct Content Analysis: A Comprehensive Guide

Unlock hidden meanings! Learn how to conduct content analysis, determining the presence of words, themes, or concepts in your data.


Content analysis, a diverse research method, provides an organized approach for dissecting and comprehending communication in its multiple forms. Whether evaluating textual documents, visual images, social media content, or audio recordings, content analysis provides researchers with the tools they need to discover hidden meanings, identify common themes, and expose underlying patterns in varied datasets. 

Through this guide, researchers will understand how to conduct content analysis. This guide aims to serve as a beacon for researchers navigating the complicated landscape of content analysis, providing not only a thorough definition and explanation of its significance, but also practical insights into its application across qualitative and quantitative research paradigms. As methods of communication expand and diversify, knowing and mastering content analysis becomes increasingly important for researchers looking to delve deeper into the complexities of human expression and societal dynamics.

Understanding Content Analysis

As previously stated, content analysis is a robust research process used to evaluate and interpret various types of communication, including text and images, with meticulous attention to detail. 

Before understanding how to conduct content analysis, it’s important to recognize the profound significance this methodology holds in both qualitative and quantitative research paradigms, offering unique advantages and insights to researchers across diverse disciplines.

Related article: Research Paradigm: An Introduction with Examples

Content Analysis In Qualitative Research

  • Exploration of Complex Phenomena: Qualitative research seeks to understand both the breadth and depth of human experiences, points of view, and behaviors. Content analysis is a systematic method for analyzing textual, visual, or audio data, allowing researchers to identify complex meanings, patterns, and themes in qualitative information.
  • Comprehending Context and Culture: A common goal of qualitative research is to comprehend phenomena with regard to their sociocultural environment. Researchers can study how language, symbols, and representations are created and understood within certain social or cultural contexts by using content analysis.
  • Theory Building and Grounded Theory: Grounded theory methods, which allow researchers to construct theories based on empirical data, heavily rely on content analysis. Through methodical examination of qualitative data, researchers can identify emerging themes, enhance theoretical frameworks, and formulate theories based on empirical findings.
  • Flexibility and Adaptability: Researchers can customize their approach to the details of their research setting by using content analysis, which provides flexibility in data collecting and analysis. Content analysis can be tailored to accommodate a diverse array of qualitative data sources, including but not limited to interview transcripts, social media posts, and historical documents.

Content Analysis In Quantitative Research

  • Standardization and Objectivity: When gathering and analyzing data, quantitative research places a strong emphasis on standardization and objectivity. Textual or visual material can be methodically coded and categorized into quantifiable characteristics by researchers using content analysis, which offers an organized framework for quantifying qualitative data.
  • Large-Scale Data Analysis: Content analysis can be scaled up to analyze large volumes of data efficiently. Researchers can examine large datasets and reach statistically significant conclusions by using quantitative content analysis, whether the dataset is online forums, news articles, or survey replies.
  • Comparative Analysis and Generalizability: Researchers can find trends, patterns, or discrepancies in content across several contexts by using quantitative content analysis to assist comparative study across texts or historical periods. By quantifying textual data, researchers can also assess the generalizability of findings to broader populations or phenomena.
  • Integration with Statistical Methods: To improve data analysis and interpretation, quantitative content analysis can be combined with statistical techniques. Techniques such as frequency counts, chi-square tests, or regression analysis can be applied to analyze coded content and test hypotheses derived from theoretical frameworks.
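As a small illustration of the statistical side, a chi-square test of independence can be computed directly from a coded 2×2 table; the counts below are hypothetical, and in practice a statistics library would also supply the p-value:

```python
# Hypothetical coded counts: a theme "present"/"absent" in content from
# two news outlets. A chi-square test checks whether theme frequency is
# independent of the outlet.
table = [[30, 70],   # outlet A: theme present, theme absent
         [50, 50]]   # outlet B

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        # Expected count under independence of outlet and theme.
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (observed - expected) ** 2 / expected

print(round(chi2, 2))  # compare against the critical value for df = 1
```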

Types Of Content Analysis

  • Manifest Content Analysis: Manifest content analysis focuses on analyzing the explicit, surface-level content of communication. It involves identifying and categorizing visible, tangible elements such as words, phrases, or images within the text or other forms of media. The goal is to describe and quantify the visible characteristics of communication without delving into deeper meanings or interpretations.
  • Latent Content Analysis: Latent content analysis goes beyond the explicit content to uncover underlying meanings, themes, and interpretations embedded within the communication. It involves interpreting the implicit, hidden messages, symbols, or metaphors conveyed through language, imagery, or other forms of representation. The aim is to uncover deeper insights into the underlying motives, beliefs, or attitudes reflected in the communication.
  • Thematic Analysis: Thematic analysis involves identifying, analyzing, and interpreting recurring themes or patterns within the content. It focuses on discovering commonalities, differences, and relationships between concepts or ideas expressed within the communication. The goal is to uncover overarching themes or conceptual categories that capture the essence of the data and provide insights into the underlying phenomena being studied.
  • Narrative Analysis: Narrative analysis focuses on analyzing the structure, content, and meaning of narratives or stories within the communication. It involves examining the plot, characters, settings, and other narrative elements to uncover the underlying themes, ideologies, or cultural meanings embedded within the stories. The aim is to understand how narratives shape identity, culture, and social discourse.
  • Discourse Analysis: Discourse analysis examines the language, rhetoric, and power dynamics inherent in communication practices. It involves analyzing how language is used to construct social realities, shape identities, and negotiate power relations within specific contexts. The goal is to uncover how language structures and reflects social norms, ideologies, and power dynamics within society.
  • Visual Content Analysis: Visual content analysis focuses on analyzing visual elements such as images, symbols, or graphics within communication media. It involves examining the composition, content, and meaning of visual representations to uncover underlying themes, messages, or cultural meanings conveyed through imagery. The aim is to understand how visuals influence perception, cognition, and communication processes.

Preparing For Content Analysis

Before embarking on the journey of content analysis, researchers must lay a solid groundwork by carefully selecting materials for analysis and defining clear categories for coding. This preparatory phase is crucial for ensuring the relevance, reliability, and validity of the content analysis process.

Material Selection

Criteria for choosing materials.

  • Relevance to Research Objectives: Select materials that are directly relevant to the research questions or objectives. Ensure that the content aligns with the scope and focus of the study.
  • Diversity and Representation: Choose materials that provide a diverse range of perspectives, viewpoints, or contexts relevant to the research topic. Seek to include a variety of sources to capture different dimensions of the phenomenon under study.
  • Accessibility and Availability: Prioritize materials that are readily accessible and available for analysis. Consider factors such as copyright restrictions, data availability, and ethical considerations when selecting materials.
  • Quality and Authenticity: Verify the credibility and authenticity of the materials to ensure the accuracy and reliability of the data. Use reputable sources and validate the authenticity of primary data sources where applicable.

How To Acquire Materials

  • Literature Review: Conduct a comprehensive literature review to identify relevant sources, studies, or datasets related to the research topic. Utilize academic databases, libraries, and online repositories to access scholarly articles, books, reports, and other relevant materials.

Also read: What is a literature review? Get the concept and start using it

  • Data Collection: Collect primary data through methods such as interviews, surveys, observations, or document analysis, depending on the research design. Use systematic sampling techniques to ensure representativeness and diversity in the selection of materials.
  • Digital Sources: Explore digital sources such as online databases, social media platforms, websites, or multimedia archives to access digital content for analysis. Use web scraping tools, APIs, or data extraction techniques to gather digital data in a structured format.
  • Ethical Considerations: Adhere to ethical guidelines and obtain necessary permissions or approvals for accessing and using copyrighted materials or sensitive data. Protect the privacy and confidentiality of participants and respect intellectual property rights when acquiring materials for analysis.

Defining And Identifying Categories

How to define categories.

  • Define Research Objectives: Clarify the research questions, objectives, and hypotheses to guide the development of coding categories. Determine the key concepts, themes, or variables of interest that will be coded and analyzed.
  • Conduct Preliminary Analysis: Review the selected materials to identify recurring patterns, themes, or topics relevant to the research focus. Use open coding techniques to generate initial categories based on the content of the materials.
  • Conceptualize Categories: Organize the initial codes into conceptual categories or thematic domains that encapsulate the main dimensions of the phenomenon under study. Group related codes together and refine the category labels to ensure clarity and coherence.
  • Establish Coding Rules: Develop clear and concise coding rules or definitions for each category to guide the coding process. Define inclusion and exclusion criteria, coding criteria, and examples to illustrate the application of each category.
  • Pilot Test Categories: Conduct a pilot test or inter-coder reliability assessment to evaluate the clarity, reliability, and validity of the coding categories. Revise and refine the categories based on feedback from pilot testing to improve coding consistency and accuracy.

Best Practices To Identify Categories

  • Iterative Process: Approach category development as an iterative process, refining and revising categories based on ongoing analysis and feedback. Continuously review and update categories to capture emerging themes or insights.
  • Triangulation: Use multiple sources of data or multiple coders to triangulate findings and ensure the reliability and validity of coding categories. Compare and cross-reference coding results to identify discrepancies or inconsistencies.
  • Peer Review: Seek feedback from colleagues, mentors, or experts in the field to validate the relevance and appropriateness of coding categories. Engage in peer review sessions to discuss and refine coding schemes collaboratively.
  • Reflexivity: Maintain reflexivity throughout the category development process, critically reflecting on your assumptions, biases, and interpretations. Consider alternative perspectives and interpretations to enhance the richness and depth of coding categories.
  • Consult Existing Frameworks: Draw upon existing theoretical frameworks, conceptual models, or coding schemes relevant to the research topic. Adapt and modify existing frameworks to suit the specific context and objectives of the study.

How To Conduct Content Analysis

Mastering content analysis empowers researchers to uncover insights and contribute to scholarly discourse across various disciplines. By following the guidelines outlined in this guide, researchers can conduct meaningful analyses that advance knowledge and inform decision-making processes.

Coding Content

To create an effective coding system, start by identifying the key concepts, themes, or variables you want to analyze within your content. Develop clear and concise code definitions and coding rules to guide the coding process. Ensure that your coding system is comprehensive, covering all relevant aspects of the content you are analyzing. Once your coding system is in place, apply it consistently and systematically to the entire dataset.

Let’s say you’re conducting a content analysis on customer reviews of a product. Your coding system may include categories such as “product quality,” “customer service,” and “value for money.” As you analyze each review, you’ll assign codes to relevant segments of text based on these categories. For example, a positive comment about the product’s durability may be coded under “product quality,” while a complaint about slow shipping may be coded under “customer service.”
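This first pass can even be sketched in code. The snippet below is a minimal, hypothetical illustration (the categories and keywords are invented for this example): keyword matching only surfaces candidate segments, and a human coder still reviews each assignment.

```python
# Sketch of a rule-based first pass over customer reviews, using the
# hypothetical categories from the example above.
coding_rules = {
    "product quality": ["durable", "broke", "sturdy", "defect"],
    "customer service": ["shipping", "support", "refund", "response"],
    "value for money": ["price", "expensive", "cheap", "worth"],
}

def code_review(text):
    """Return the sorted list of candidate codes whose keywords appear."""
    text = text.lower()
    return sorted(
        category
        for category, keywords in coding_rules.items()
        if any(keyword in text for keyword in keywords)
    )

print(code_review("Very durable, but shipping was slow."))
# ['customer service', 'product quality']
```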

Analyzing And Interpreting Results

Once you’ve coded your content, you can begin analyzing it to identify patterns, trends, and insights. Common techniques for analyzing content include frequency analysis, thematic analysis, and comparative analysis. Use these techniques to uncover key themes, relationships between variables, and variations across different segments of your dataset.
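As a small, hypothetical illustration of frequency analysis, the applied codes can be tallied directly (the category names are the ones invented in the earlier review example):

```python
from collections import Counter

# Frequency analysis over coded segments: how often each category was
# applied across the dataset (hypothetical codes).
coded_segments = [
    "product quality", "customer service", "product quality",
    "value for money", "product quality", "customer service",
]

frequencies = Counter(coded_segments)
total = sum(frequencies.values())
for category, count in frequencies.most_common():
    print(f"{category}: {count} ({count / total:.0%})")
```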

When interpreting your content analysis results, consider the context in which the content was produced and the characteristics of your sample. Look for overarching patterns and trends, but also pay attention to outliers or unexpected findings. Consider how your findings relate to existing literature and theories in your field, and be transparent about any limitations or biases in your analysis.

Validating The Results

Validating results in content analysis involves assessing the reliability and validity of your findings to ensure they accurately reflect the underlying content. This may include measures to ensure inter-coder reliability, triangulation with other data sources, and sensitivity analyses to test the robustness of your results.

Common methods used to validate results in content analysis include inter-coder reliability tests, where multiple coders independently code a subset of the data to assess consistency. Triangulation involves comparing findings from content analysis with other methods or sources of data to confirm or refute conclusions. Additionally, sensitivity analyses involve testing the impact of different coding decisions or analytical approaches on the results to assess their robustness.
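A common statistic for inter-coder reliability is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below, with hypothetical codes, shows the calculation for two coders:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: agreement between two coders corrected for chance."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # Chance agreement: probability both coders pick the same code at random.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

coder_a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
coder_b = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.75
```

Values near 1 indicate strong agreement; values near 0 indicate agreement no better than chance.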

Reporting Findings

In reporting findings, researchers distill the essence of their content analysis, presenting insights and conclusions clearly and concisely. This section is an essential part of how to conduct content analysis, providing guidance on structuring reports, writing effectively, and using visual aids to convey results with clarity and impact.

Writing And Structuring The Report

When writing your content analysis report, start by clearly stating your research objectives and methodology. Present your findings in a logical and organized manner, using descriptive statistics, tables, and visual aids to support your analysis. Discuss the implications of your findings for theory, practice, or policy, and conclude by summarizing the key insights and contributions of your study.

An effective content analysis report should be concise, clear, and well-structured. Use headings and subheadings to guide the reader through the report, and provide sufficient detail to support your conclusions. Be transparent about your methods and any limitations of your analysis, and use language that is accessible to your intended audience.

Organize your report into sections that mirror the steps of your content analysis process, such as coding, analysis, and interpretation. Use descriptive titles and subheadings to clearly delineate each section, and provide ample context and explanation for your findings. Consider including visual aids such as charts or graphs to enhance the clarity and readability of your report.

Visualising Data

Visualizing data is an effective way to communicate your findings and insights to your audience. Common visualizations used in content analysis include bar charts, pie charts, line graphs, and heat maps. Choose the visualization method that best represents the patterns and trends in your data and is most suitable for your audience.

Consider the nature of your data and the preferences of your audience when selecting visualization methods. For example, bar charts are useful for comparing frequencies or proportions across categories, while line graphs are suitable for showing trends over time. Choose visualization methods that are intuitive, informative, and visually appealing to effectively convey your content analysis results.
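Before committing to a plotting library, it can help to eyeball proportions with a quick text-based bar chart; the counts below are hypothetical:

```python
# Quick text-based bar chart of code frequencies, useful for checking
# proportions before building a figure in a plotting library.
counts = {"product quality": 12, "customer service": 8, "value for money": 4}

width = 30  # characters allotted to the longest bar
largest = max(counts.values())
for category, count in sorted(counts.items(), key=lambda kv: -kv[1]):
    bar = "#" * round(count / largest * width)
    print(f"{category:<18} {bar} {count}")
```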

Related article: Art Of Describing Graphs And Representing Numbers Visually

Tips For A Successful Content Analysis

  • Document Your Process: Keeping detailed records of your content analysis process can prove invaluable, aiding in transparency, reproducibility, and troubleshooting. Record decisions made during material selection, category definition, and coding, as well as any challenges encountered and their resolutions. This documentation not only enhances the rigor of your analysis but also facilitates communication with collaborators and reviewers.
  • Embrace Iteration: Content analysis is rarely a linear process. Embrace iteration and refinement throughout each stage, from material selection to reporting findings. Regularly revisit and revise coding categories, analytical techniques, and interpretations in response to emerging insights or challenges. Iterative refinement ensures that your analysis remains dynamic and responsive to the complexities of the data.
  • Utilize Software Tools: While content analysis can be conducted manually, leveraging software tools can streamline and enhance the process. Explore software options tailored to content analysis tasks, such as qualitative data analysis software (QDAS) or text analysis tools. These tools often offer features for organizing data, coding text, and visualizing results, saving time and enhancing analytical capabilities.
  • Prioritize Inter-Coder Reliability: Inter-coder reliability, or the consistency of coding among multiple coders, is crucial for ensuring the validity and reliability of your analysis. Prioritize inter-coder reliability assessments early in the process, involving multiple coders in coding tasks and comparing their results. Establishing clear coding guidelines and conducting regular reliability checks can mitigate discrepancies and enhance the credibility of your findings.
  • Consider Cultural Sensitivity: When analyzing content that reflects cultural or linguistic diversity, it’s essential to approach the process with sensitivity and awareness. Consider the cultural context of the content, including language nuances, symbolism, and cultural norms, when interpreting and coding data. Engage with diverse perspectives and seek input from stakeholders to ensure that your analysis accurately reflects the complexity of the cultural landscape.
  • Be Mindful of Bias: Conscious and unconscious biases can influence every stage of the content analysis process, from material selection to interpretation of results. Stay vigilant for biases related to personal beliefs, disciplinary perspectives, or preconceived notions about the topic under study. Implement strategies to mitigate bias, such as peer review, reflexivity exercises, and triangulation with multiple data sources.
  • Foster Collaboration: Content analysis can benefit from interdisciplinary collaboration and diverse perspectives. Engage with colleagues, mentors, or experts from different fields to enrich your analysis and challenge assumptions. Collaborative approaches can foster creativity, rigor, and innovation, leading to more robust and nuanced findings.
  • Stay Open to Serendipity: While content analysis often involves systematic data collection and analysis, don’t overlook the potential for serendipitous discoveries. Remain open to unexpected insights, patterns, or connections that emerge during the analysis process. Serendipity can lead to novel research directions, enriching your understanding of the phenomenon under study.

Science Figures, Graphical Abstracts, And Infographics For Your Research

Mind the Graph is a valuable resource for scientists seeking to enhance the visual communication of their research through science figures, graphical abstracts, and infographics. With its user-friendly interface, extensive template library, and customizable design tools, the platform empowers researchers to create visually compelling and scientifically accurate visualizations that effectively communicate complex ideas and findings to a diverse audience.



About Jessica Abbadia

Jessica Abbadia is a lawyer that has been working in Digital Marketing since 2020, improving organic performance for apps and websites in various regions through ASO and SEO. Currently developing scientific and intellectual knowledge for the community's benefit. Jessica is an animal rights activist who enjoys reading and drinking strong coffee.


  • Open access
  • Published: 03 June 2024

Offering extended use of the contraceptive implant via an implementation science framework: a qualitative study of clinicians’ perceived barriers and facilitators

  • Nicole Rigler1,
  • Gennifer Kully2,3,
  • Marisa C. Hildebrand2,
  • Sarah Averbach2,3 &
  • Sheila K. Mody2

BMC Health Services Research, volume 24, Article number: 697 (2024)


Background

The etonogestrel contraceptive implant is currently approved by the United States Food and Drug Administration (FDA) for the prevention of pregnancy for up to 3 years. However, studies suggest efficacy for up to 5 years. There is little information on the prevalence of extended use and the factors that influence clinicians in offering it. We investigated clinician perspectives on the barriers and facilitators to offering extended use of the contraceptive implant.

Methods

Using the Consolidated Framework for Implementation Research (CFIR), we conducted semi-structured qualitative interviews. Participants were recruited from a nationwide survey study of reproductive health clinicians on their knowledge and perspectives regarding extended use of the contraceptive implant. To optimize the diversity of perspectives, we purposefully sampled participants from this study. We used content analysis and consensual qualitative research methods to inform our coding and data analysis. Themes arose both deductively and inductively.

Results

We interviewed 20 clinicians, including advanced practice clinicians, family medicine physicians, obstetrician-gynecologists, and complex family planning sub-specialists. Themes regarding barriers and facilitators to extended use of the contraceptive implant emerged. Barriers included the FDA approval for 3 years and clinician concern about liability in the context of off-label use of the contraceptive implant. Educational materials and a champion of extended use were facilitators.

Conclusions

There is opportunity to expand access to extended use of the contraceptive implant by developing educational materials for clinicians and patients, identifying a champion of extended use, and providing information on extended use prior to replacement appointments at 3 years.

Background

The etonogestrel contraceptive implant is currently approved by the U.S. Food and Drug Administration (FDA) for 3 years of continuous use for the prevention of pregnancy [ 1 ]. However, there is evidence to support its use for up to 5 years while maintaining a low risk of pregnancy [ 2 , 3 , 4 ]. The off-label use of the contraceptive implant past its FDA-approved duration and up to 5 years is known as extended use. Importantly, the FDA supports off-label use of marketed drugs and medical devices so long as there is strong, relevant published evidence [ 5 ]. Off-label use such as extended use of the contraceptive implant is common with many other reproductive devices and medications, including misoprostol for labor induction, the copper intrauterine device (IUD) for emergency contraception, and, prior to its recent FDA approval for extended use, the 52 mg levonorgestrel (LNG) IUD for pregnancy prevention. The 52 mg LNG IUD was previously FDA-approved for 5 years; however, strong published evidence demonstrated efficacy for up to 8 years, leading clinicians to counsel on extended use and eventually contributing to updated federal guidelines [ 6 , 7 ].

Though there are clinicians who counsel patients on extended use of the contraceptive implant, many patients still undergo implant replacement after only 3 years of use [ 8 , 9 ]. Continuation rates of the contraceptive implant after 1 and 2 years of use are estimated at 81.7% and 68.7%, respectively, with the most common reason for discontinuation before 3 years being changes to bleeding pattern [ 10 , 11 , 12 , 13 ]. Ali et al. report the most common reasons that patients decided to stop implant use in years 4 and 5: unspecified personal reasons, desired fertility, bleeding problems, and other medical reasons [ 4 ]. Additionally, a recent nationwide, web-based survey among a diverse group of reproductive health clinicians investigated the barriers and facilitators regarding extended use of the contraceptive implant up to 5 years [ 14 ]. The most common barriers found in the study were provider concerns about pregnancy risk and the current FDA approval for only 3 years of use. The key facilitators included strong published evidence supporting extended use and patient and clinician education on extended use. Beyond these studies, the patient and clinician factors that facilitate and hinder widespread implementation of extended use of the contraceptive implant have not been explored.

Increasing implementation of extended use of the contraceptive implant across practice settings may decrease unnecessary procedures, devices, and healthcare visits, and could improve access to, and satisfaction with, the contraceptive implant. Long-acting reversible contraceptive (LARC) methods such as the contraceptive implant and LNG IUD have significantly higher continuation and approval rates and are more efficacious at preventing pregnancy than non-LARC methods such as oral contraceptive pills and depot medroxyprogesterone acetate injection [ 12 , 15 , 16 ]. Given the continued high rates of unintended pregnancies in the United States and the consequential increase in healthcare costs and poor outcomes secondary to pregnancy complications, efficacious pregnancy prevention is an important public health objective and cost-saving measure [ 17 ].

Using a qualitative approach guided by an implementation science framework, the Consolidated Framework for Implementation Research (CFIR), [ 18 ] we sought to explore clinician perspectives on extended use of the contraceptive implant up to 5 years as well as the perceived barriers and facilitators for clinicians to offer extended use.

Methods

We conducted semi-structured interviews with 20 clinicians including obstetrics and gynecology generalists, family medicine physicians, complex family planning sub-specialists, and advanced practice clinicians. We recruited interview participants from a nationwide, web-based survey that assessed the prevalence of extended use of the contraceptive implant [ 14 ]. This study recruited respondents through email listservs for the Fellowship in Complex Family Planning, the Ryan Residency Training in Family Planning Program, women’s health nurse practitioners, and family medicine physicians, as well as private social media groups for obstetrician-gynecologists. The total reach of the survey was unknown; however, the survey completion rate was 66.6% ( n  = 300/450). Of the 300 completed surveys, 290 respondents (96.7%) indicated their interest in being interviewed.

Among the survey respondents, we invited 24 clinicians to participate in interviews, yielding an 83.3% response rate. We selectively recruited interview participants to enrich our sample, specifically focusing on clinician type, practice setting, and region of practice within the United States (U.S.). We also selected interview participants based on whether they always, sometimes, or never counsel on extended use to investigate a broad range of perspectives. For this study, offering extended use is defined as counseling on use past the current FDA-approved duration of 3 years and up to 5 years of use. Offering extended use can occur at any clinical encounter, including insertion appointments, replacement and removal appointments at or before 3 years, and general reproductive health appointments. Clinicians who always offer extended use were defined as those who counsel on extended use to patients who are considering or currently have the contraceptive implant. Clinicians who sometimes offer extended use were defined as those who counsel on extended use, but only to particular patients based on patient-specific factors such as body mass index or insurance coverage. Clinicians who never offer extended use were defined as those who never counsel on use of the contraceptive implant past 3 years of use.

The interview guide was created utilizing an implementation science framework that identifies factors for effectively enacting interventions [ 18 ]. The Consolidated Framework for Implementation Research (CFIR) is organized into 5 major domains: characteristics of the intervention, individual characteristics, inner setting, outer setting, and the process of implementation. The first domain, intervention characteristics, relates to the inherent qualities of the intervention, such as pharmacologic properties and side effects of the contraceptive implant when used up to 5 years. The individual characteristics domain relates to the roles and characteristics of the individual patients and clinicians interacting with the intervention, such as educational background and type of insurance coverage. The inner setting domain assesses the immediate setting in which an intervention will be implemented (i.e., clinic type, culture, and policies). The broader context in which an intervention will be implemented, including national policies and social norms, is evaluated within the outer setting domain. Finally, the process of implementation domain explores the activities and strategies used to implement the intervention, such as educational materials or clinician and staff trainings on extended use.
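To make the deductive frame concrete, a codebook keyed to these five domains might be organized as in the sketch below. The domain names are CFIR's; the example codes and the `domain_for` helper are hypothetical illustrations, not the study's actual codebook.

```python
# Hypothetical sketch of a deductive codebook keyed by the five CFIR domains.
# Domain names follow CFIR; the example codes are illustrative only.
CFIR_CODEBOOK = {
    "intervention characteristics": ["bleeding pattern", "efficacy in years 4-5"],
    "individual characteristics": ["insurance coverage", "patient education"],
    "inner setting": ["clinic policies", "champion of extended use"],
    "outer setting": ["FDA approval", "liability concerns"],
    "process of implementation": ["educational materials", "staff training"],
}

def domain_for(code):
    """Return the CFIR domain a deductive code belongs to, or None.

    Codes matching no domain are candidates for inductive themes,
    which arise from the interviews rather than the framework.
    """
    for domain, codes in CFIR_CODEBOOK.items():
        if code in codes:
            return domain
    return None
```

In this sketch, a code such as "FDA approval" maps to the outer setting domain, while a code with no deductive home would be flagged for inductive theme development.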

We designed the interview guide around these specific domains with questions that aimed to identify targeted strategies to support successful implementation. The complete interview guide is in Appendix A . The interview guide was designed with input from clinicians who regularly prescribe contraception, including extended use of the contraceptive implant, as well as CFIR and implementation science experts. The Human Research Protection Program at our institution approved the study.

A single research team member conducted semi-structured interviews via secure video conference between July and August 2021. Interview participants provided informed consent. All participants were asked a full set of open-ended questions based on the interview guide, with focused follow-up questions to further investigate potential themes or to clarify points. All interviews were audio recorded, then transcribed. For data analysis, we used a content analysis approach to identify concepts and patterns within the dataset [ 19 ]. Themes arose deductively and inductively, with deductive themes identified from the CFIR domains and inductive themes arising from interview insights. Consensual qualitative research methods informed both our data analysis and coding process [ 20 ]. Three authors were involved in the thematic coding of the transcripts. Initially, 5 transcripts were independently coded then checked for inter-coder reliability. Any disagreements were discussed, and a consensus was achieved. The remaining transcripts were then coded by one of the three authors. Once all interviews were coded, major themes and representative quotes were identified. The research team utilized ATLAS.ti for analysis [ 21 ].
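The inter-coder reliability check on the first five transcripts can be illustrated with a small sketch. Percent agreement and Cohen's kappa are standard measures for two coders; the paper does not state which statistic the team used, so both the measures and the example code labels here are assumptions for illustration.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of segments assigned the same code by both coders."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    # Chance agreement: product of each coder's marginal code frequencies.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    if expected == 1:  # both coders used a single identical code throughout
        return 1.0
    return (observed - expected) / (1 - expected)
```

For example, if two coders label four interview segments as ["barrier", "facilitator", "barrier", "barrier"] and ["barrier", "facilitator", "facilitator", "barrier"], percent agreement is 0.75 and kappa is 0.5; disagreements like the third segment would then be discussed to consensus, as described above.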

Results

Between July and August 2021, we interviewed 20 clinicians from a variety of clinical settings, regions, and women’s health professions, achieving the intended diversity of perspectives (Table  1 ). Among participants, 7 (35.0%) always, 8 (40.0%) sometimes, and 5 (25.0%) never offer extended use of the contraceptive implant (Table 2 ).

Characteristics of the intervention

We found that changes to bleeding pattern in or after the third year of use were a barrier to clinicians offering extended use of the contraceptive implant. The participants in this study noted that perceived increases in the irregularity or frequency of a patient’s bleeding make extended use of the implant difficult for patients to accept. One clinician noticed that some patients correlate changes in their bleeding pattern with a perceived decrease in the efficacy of their implant:

"People who do start noticing changes in bleeding pattern […] [and] associating that with, ‘Oh, my implant is wearing out or becoming expired. I need to get this changed out."

-Complex Family Planning Specialist, Southwest, Academic Setting, sometimes offers extended use

The same clinician discussed that more research on bleeding patterns in the extended use period and potential treatments for implant-associated irregularities could be a facilitator of extended use:

"For bleeding, I think it would be awesome if there is a research study, looking at use of OCPs [oral contraceptive pills] to manage bleeding near the end of the use of an implant or near that three-year mark,, […] So that we could give people… Honestly, either a natural history or a, ‘Here’s how you can manage that if you do want to keep using your implant longer.’"

- Complex Family Planning Specialist, Southwest, Academic Setting, sometimes offers extended use

Information on the bleeding pattern in years 4 and 5 of use and how clinicians can address irregular bleeding during implant use may increase acceptability of extended use.

Individual characteristics

We found that insurance impacts whether a clinician offers extended use:

"I do sometimes have patients saying, ‘I might be changing jobs or I’m going to be turning 27 or whatever.’ And so insurance is a barrier and so they’re like, ‘I want the new one while I still have this insurance.’"

- Family Medicine Physician, Midwest, Community Setting, sometimes offers extended use

Many participants agreed with this concept and stated that acceptability of extended use depends on a patient’s perception of their future insurance status. Clinicians observed that if a patient believes they will have coverage for a replacement or removal in the future, they are more likely to pursue extended use of their implant. Conversely, one clinician discussed how lack of current insurance coverage could be a facilitator of extended use:

"So, I would generally offer extended use to people that didn’t have insurance and would have to self-pay. I would like go through the data with them so they wouldn’t have to pay like $1,000 to get a new implant because it could work another year, or people that were concerned about changing side effects at that time."

- Obstetrician-Gynecologist, Southwest, Academic Setting, sometimes offers extended use

Overall, clinicians perceived that patients’ concerns about current and future insurance coverage may affect acceptance of extended use.

Inner setting

This study found that having a champion of extended use at a clinician’s home or affiliate institution was a facilitator of extended use. Most clinicians in the study stated that it is or would be helpful to have someone working with them clinically who is knowledgeable about the data on extended use. When asked which factor would promote extended use of the implant the most, one clinician stated:

"…having a champion who is really ready to present the evidence, because the evidence can be there, but people don’t have time to read it. If it’s not brought to them, they’re not really going to know about it."

- Obstetrician-Gynecologist, West Coast, Community Setting, does not offer extended use

Potential champions identified were physicians, nurses, medical directors, or other clinicians in leadership positions, but participants generally believed that the position should be held by someone who is passionate about contraception, highly familiar with the specific setting, and knowledgeable about the clinical studies on extended use.

A barrier noted by a few participants was the effect of discordant counseling by different clinicians, sometimes within the same clinic, on acceptability of extended use:

"I mean, I guess like getting everyone on the same page, like in your practice can be a barrier. Especially in the practice I’ve been at, which like I said was in a state that was very litigious, so people weren’t always willing to like go outside guidelines that were… So getting your whole group on the same page so patients get like a more consistent message."

- Obstetrician-Gynecologist, Southwest, Academic Setting, sometimes offers extended use.

Participants discussed that it is important for clinician teams to relay a cohesive message to patients, especially in settings where patients may see multiple clinicians for their contraceptive care.

Outer setting

Lack of FDA approval for extended use was identified as a barrier by many clinicians, and some clinicians counseled patients only on the FDA-approved duration of the contraceptive implant:

"So, generally in our practice we don’t really talk about extended use. We say this is FDA approved for three years."

- Advanced Practice Clinician, Southeast, Community Setting, sometimes offers extended use.

Even clinicians who do offer extended use of the implant noted that off-label use can be confusing to patients, making it difficult to counsel on extended use:

"So I have patients all the time, who’ll say, ‘Well, what do you mean I can keep X, Y or Z in for an extra year?’ And I’ll say, ‘We have big studies that tell us that this is an okay thing to do.’ But that just feels weird. People don’t necessarily understand the role of the FDA or sort of how it works. And so it’s something like extended use just might be a really such a foreign concept. Right? It’s so far outside. But I think that there are also, there are lay outlets that cover this stuff. So it’s not that it’s impossible to access. It’s just that the patient has to be interested just like the provider has to be interested."

- Complex Family Planning Specialist, East Coast, Academic Setting, sometimes offers extended use.

Clinicians also observed that certain clinics must follow official guidelines without the flexibility to offer extended use, regardless of a clinician’s perspective or willingness to counsel on extended use. Interestingly, patient confusion as well as mistrust of the healthcare system may impact patient acceptability of extended use in the context of a three-year FDA-approved duration:

"The other thing is the FDA approval because the box says three years, but then like I tell people, you can take it out in five years. And then they don’t believe… Like who is right. Is it my doctor who’s getting in front of me right or the box, right?"

- Family Medicine Physician, West Coast, Community Setting, always offers extended use.

This clinician noted that a disconnect between a clinician’s counseling and prescription information may lead patients to be confused about the recommendation for extended use.

Another barrier mentioned by a few participants was provider concern about liability in the event of an unintended pregnancy. Participants discussed fear of both legal and interpersonal repercussions of unintended pregnancy after counseling on off-label use of a contraceptive device:

"Even though there’s a slim chance that a patient would get pregnant on Nexplanon [the contraceptive implant], I feel like if we were to say, ‘Yeah, you can use it beyond the four years,’ and they come up and they get pregnant, they’re that 1% chance that gets pregnant, I feel like there could be a little bit of blame laid on us if we were to tell them that they’re able to it beyond the three years when the label doesn’t say that yet."

- Advanced Practice Clinician, Southeast, Private Practice, does not offer extended use.

Some participants felt that they would “have no ground to stand on” in the event of a lawsuit (OBGYN Physician, Midwest, Private Practice), making them concerned about the possibility of increased liability in counseling on off-label use without FDA approval.

Interestingly, multiple clinicians also discussed abortion restrictions in the United States as influencing patients in their decision to pursue extended use or not:

"In the past four years [2017–2021] have also had a lot of patients express concern about the administration. And so wanting to kind of be as current as they can be with their devices and so potentially exchanging them sooner than they need."

- Complex Family Planning Specialist, West Coast, Academic Setting, always offers extended use.

Clinicians observed that patients are noticing and reacting to abortion restrictions when making their contraceptive decisions, which may impact the widespread implementation of extended use.

Process of implementation

Many clinicians reported that a barrier to implementing extended use was patient preference for removal when they are already in clinic for a scheduled removal or replacement procedure, regardless of being counseled on extended use at that time:

“‘Oh, I’m already here. I’m approved. Let’s just go ahead and get it done.’ So there’s probably not a whole lot you can do about that either, once they’re already in the clinic, and have their mind set on it.”

- Obstetrician-Gynecologist, Southeast, Academic Setting, does not offer extended use.

Many participants in this study noted that patients have made logistical arrangements prior to their appointments, including paid time off, childcare, or prior authorization. It can be difficult for clinicians to offer extended use in this context; counseling may therefore be better delivered before a patient comes in for a replacement appointment.

A frequently mentioned facilitator of extended use was clear, concise clinician education in the form of services or materials that illustrate existing data on efficacy and risks. Clinicians believed that this education could take the form of continuing medical education, targeted trainings, or written summaries of relevant studies, data, and recommendations. One consistency across interviews was that education on extended use must be integrated into regular practice and be easily understood by busy clinicians:

"I think that when we get a pamphlet or a brochure or a one page, something that just has everything condensed so it’s a really quick, oh, okay, this is something that we can be offering patients. And these are the reasons why it would be a benefit to them, and these are the patients that maybe would fall out of not offering this to. I think because of how busy we are, that’s the best way for us to make change."

- Advanced Practice Clinician, Southwest, Academic Setting, does not offer extended use.

Participants reported that these resources should be widely distributed beyond the complex family planning and obstetrician-gynecology community to increase accessibility to extended use.

Another potential facilitator identified was effective patient educational materials, such as flyers stating the 5-year efficacy of the contraceptive implant, though producing these might require FDA approval. Participants in this study reported that patients rely on clinicians to provide information on the efficacy and duration of their contraceptive implant. However, it is difficult for patients to accept extended use when there are inconsistencies across multiple sources of information:

"I mean, if online, there was information where it said you can keep it in for three to five years and they’re able to back that up. You know, people like to do their own research. I think that would be helpful, versus it says everywhere three, three, three, three, three, and then you’re the only person telling them something different, then it’s a little more tricky."

- Obstetrician-Gynecologist, West Coast, Community Setting, does not offer extended use.

Overall, participants in this study expressed that it would be helpful to have easily understood information for clinicians and patients that explained the evidence for extended use.

Discussion

Our results demonstrate that there is an opportunity to increase widespread implementation of extended use through multiple interventions. Clinicians reported that patients prefer to have their implants replaced when they are already in clinic for the procedure. Therefore, intervening prior to replacement appointments at 3 years, in the form of telemedicine visits or notifications from scheduling staff, may make extended use of the contraceptive implant more acceptable to patients. Further, clinician and patient education on extended use that is easily understood and widely disseminated would likely increase use of the contraceptive implant up to 5 years.

The implementation of extended use of the contraceptive implant up to 5 years likely decreases healthcare costs secondary to fewer procedures and unintended pregnancies, and expands reproductive choices for patients seeking contraception. Clinicians who offer extended use report that most of their patients accept it when offered [ 14 ]. The reasons why a patient may or may not accept extended use remain unclear but may include changes in bleeding and concerns about use past the FDA-approved duration. Research on bleeding patterns in the extended use period may facilitate counseling and give patients a better expectation of possible changes they may see in years 4 and 5. Additionally, research on the patient perspective and acceptability of using the contraceptive implant past its FDA-approved timeframe is needed.

This study focused on clinicians and their perspectives on extended use. However, it is important to note that patients may be fully informed about extended use and choose to replace their implant at or before 3 years of duration. All discussions regarding contraception, including extended use of the implant, should always occur within a patient-centered and shared decision-making model. Widespread offering of extended use may allow for more patients to make fully informed decisions about the duration and use of their contraceptive devices, therefore expanding reproductive choice and agency in addition to potentially sparing patients from unnecessary procedures and extra healthcare costs.

Interestingly, although there are data to reflect high implant efficacy in years 4 and 5 [ 2 , 3 , 4 ], some participants in this study believe there is increased liability in counseling on off-label use without FDA approval. Importantly, off-label use is common among reproductive clinicians and is protected by the FDA if there is strong published evidence supporting off-label use [ 5 ]. Additionally, the Society of Family Planning supports extended use of the contraceptive implant up to 5 years [ 22 ]. The FDA requires implant training for clinicians before they can insert or remove the implant. This training includes the FDA product labeling indicating the maximum duration of use for pregnancy prevention as 3 years [ 1 ]. It is possible that clinician training and product labels that advertise a 3-year duration dissuade clinicians from offering extended use of the contraceptive implant due to concerns about legal repercussions in the event of an unintended pregnancy with extended use. Therefore, organization- or systems-level guidelines, policy changes, and trainings in support of extended use may allow clinicians to feel comfortable offering off-label use of the implant. Additionally, FDA approval of the contraceptive implant for use up to 5 years would likely greatly facilitate implementation of extended use.

Changing the FDA label to reflect extended use can be expensive, and contraceptive companies may not be incentivized to change the label. However, extending the FDA-approved duration of the contraceptive implant would give companies a longer-acting contraceptive device that is more directly comparable to other LARC devices such as the 52 mg LNG IUD, which can be used for up to 8 years. If FDA approval for 5 years of use were to occur, it is not known whether the barriers described in this study would continue to apply. However, it is likely that the facilitators of extended use from this study would support implementation of extended use irrespective of the federally approved duration.

One strength of the study is the national sample and the diversity of clinician types and settings. There is also representation of clinicians who consistently offer extended use and those who do not offer extended use. Another strength of this study is that it was designed utilizing a framework focusing on implementation, thus yielding results that can be used to create effective interventions.

Limitations of this study include the small sample size and selection bias from recruiting from a prior study that utilized listservs and social media. Additionally, we recruited from a population that was specifically interested in family planning and identified mostly as Caucasian and female. Because of this, our results may not be generalizable to the national population of clinicians who offer contraceptive implant services. However, our direct selection of participants who only sometimes or never offer extended use allowed us to hear diverse perspectives regardless of prior knowledge of or interest in extended use. Another limitation is that we did not ask advanced practice clinicians what their specific training was (i.e., nurse practitioner or physician assistant). As the training for advanced practice clinicians can vary greatly, our results may not be generalizable to all advanced practice clinicians.

In conclusion, this study describes the barriers and facilitators to widespread implementation of extended use of the contraceptive implant. These results offer new perspectives and potential strategies to increase widespread implementation of extended use of the contraceptive implant up to 5 years of use. Based on our findings, there is opportunity to expand access to extended use by developing educational materials for clinicians and patients, identifying a champion of extended use, and counseling on extended use prior to removal appointments at 3 years. Of note, these results should be viewed in the context of recent policy access issues regarding reproductive health and used to support patient-centered contraceptive choices, regardless of a patient’s decision to extend use of their contraceptive implant up to 5 years. It is important that clinicians and patients utilize shared decision making when discussing extended use of the contraceptive implant.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available due to being stored in a private, HIPAA-compliant database, but are available from the corresponding author on reasonable request.

Abbreviations

CFIR: Consolidated Framework for Implementation Research

FDA: Food and Drug Administration

IUD: Intrauterine device

LARC: Long-acting reversible contraception

LNG: Levonorgestrel

OBGYN: Obstetrician-Gynecologist

U.S.: United States

Nexplanon® Prescribing Information. Organon. 2021. https://www.organon.com/product/usa/pi_circulars/n/nexplanon/nexplanon_pi.pdf . Accessed 20 Feb 2023.

McNicholas C, Swor E, Wan L, Peipert JF. Prolonged use of the etonogestrel implant and levonorgestrel intrauterine device: 2 years beyond food and drug administration-approved duration. Am J Obstet Gynecol. 2017;216(6):586e.


McNicholas C, Maddipati R, Zhao Q, Swor E, Peipert JF. Use of the etonogestrel implant and levonorgestrel intrauterine device beyond the U.S. food and drug administration-approved duration. Obstet Gynecol. 2015;125(3):599–604. https://doi.org/10.1097/AOG.0000000000000690 .


Ali M, Akin A, Bahamondes L, Brache V, Habib N, Landoulsi S, Hubacher D, WHO study group on subdermal contraceptive implants for women. Extended use up to 5 years of the etonogestrel-releasing subdermal contraceptive implant: comparison to levonorgestrel-releasing subdermal implant. Hum Reprod. 2016;31(11):2491–8. https://doi.org/10.1093/humrep/dew222 .

U.S. Food and Drug Administration. (1998). Off-label and investigational use of marketed drugs, biologics, and medical devices: guidance for institutional review boards and clinical investigators . Retrieved from https://www.fda.gov/regulatory-information/search-fda-guidance-documents/label-and-investigational-use-marketed-drugs-biologics-and-medical-devices . Accessed 20 Dec 2022.

Jensen JT, Lukkari-Lax E, Schulze A, Wahdan Y, Serrani M, Kroll R. Contraceptive efficacy and safety of the 52-mg levonorgestrel intrauterine system for up to 8 years: findings from the mirena extension trial. Am J Obstet Gynecol. 2022;227(6):873. https://doi.org/10.1016/j.ajog.2022.09.007 .

Article   CAS   Google Scholar  

O’Dwyer MC. Contraceptive Efficacy of the Mirena Intrauterine System Through 8 Years of Use. NEJM Journal Watch. Retrieved from https://www.jwatch.org/na55371/2022/10/04/contraceptive-efficacy-mirena-intrauterine-system-through . Accessed 7 Mar 2024.

Teunissen AM, Grimm B, Roumen FJ. Continuation rates of the subdermal contraceptive Implanon(®) and associated influencing factors. Eur J Contracept Reprod Health Care. 2014;19(1):15–21. https://doi.org/10.3109/13625187.2013.862231 .

Article   CAS   PubMed   Google Scholar  

Moray KV, Chaurasia H, Sachin O, Joshi B. A systematic review on clinical effectiveness, side-effect profile and meta-analysis on continuation rate of etonogestrel contraceptive implant. Reprod Health. 2021;18(1):4. https://doi.org/10.1186/s12978-020-01054-y .

Article   PubMed   PubMed Central   Google Scholar  

Blumenthal PD, Gemzell-Danielsson K, Marintcheva-Petrova M. Tolerability and clinical safety of implanon. Eur J Contracept Reprod Health Care. 2008;13(Suppl 1):29–36. https://doi.org/10.1080/13625180801960012 .

Mansour D, Korver T, Marintcheva-Petrova M, Fraser IS. The effects of implanon on menstrual bleeding patterns. Eur J Contracept Reprod Health Care. 2008;13(Suppl 1):13–28. https://doi.org/10.1080/13625180801959931 .

Diedrich JT, Zhao Q, Madden T, Secura GM, Peipert JF. Three-year continuation of reversible contraception. Am J Obstet Gynecol. 2015;213(5):e6621-6628. https://doi.org/10.1016/j.ajog.2015.08.001 .

Funk S, Miller MM, Mishell DR Jr, Archer DF, Poindexter A, Schmidt J, Zampaglione E, Implanon US Study Group. Safety and efficacy of implanon, a single-rod implantable contraceptive containing etonogestrel. Contraception. 2005;71(5):319–26. https://doi.org/10.1016/j.contraception.2004.11.007 .

Rigler N, Averbach S, Sandoval S, Meurice M, Hildebrand M, Mody SK. Barriers and facilitators of extended use of the contraceptive arm implants: a cross-sectional survey of clinicians. Obstet Gynecol. 2022;139:4S. https://doi.org/10.1097/01.AOG.0000826380.60071.b8 .

Hubacher D, Spector H, Monteith C, Chen PL. Not seeking yet trying long-acting reversible contraception: a 24-month randomized trial on continuation, unintended pregnancy and satisfaction. Contraception. 2018;97(6):524–32. https://doi.org/10.1016/j.contraception.2018.02.001 .

Winner B, Peipert JF, Zhao Q, Buckel C, Madden T, Allsworth JE, Secura GM. Effectiveness of long-acting reversible contraception. N Engl J Med. 2012;366(21):1998–2007. https://doi.org/10.1056/NEJMoa1110855 .

Monea E, Thomas A. Unintended pregnancy and taxpayer spending. Perspect Sex Reprod Health. 2011;43(2):88–93. https://doi.org/10.1363/4308811 .

Article   PubMed   Google Scholar  

CFIR Research Team-Center for Clinical Management Research. The Consolidated Framework for Implementation Research: Constructs. 2023. https://cfirguide.org/constructs/ . Accessed 24 Jan 2023.

Forman J, Damschroder LJ. Qualitative content analysis. In: Jacoby L, Siminoff L, editors. Empirical research for bioethics: A primer, vol. 11. Oxford: Elsevier Publishing; 2008. p. 39–62.

Chapter   Google Scholar  

Hill CE, Knox S, Thompson BJ, Williams EN, Hess SA. Consensual qualitative research: an update. J Couns Psychol. 2005;52:1–25.

Soratto J, Pires DEP, Friese S. Thematic content analysis using ATLAS.ti software: potentialities for researchs in health. Rev Bras Enferm. 2020;73(3):e20190250. https://doi.org/10.1590/0034-7167-2019-0250 .

Dethier D, Qasba N, Kaneshiro B. Society of family planning clinical recommendation: extended use of long-acting reversible contraception. Contraception. 2022;113:13–8. https://doi.org/10.1016/j.contraception.2022.06.003 .

Download references

Acknowledgements

We thank the participants in this study.

This study was funded by Organon (Study #201908). The funder had no role in the study design, analysis, or interpretation of findings.

Author information

Authors and Affiliations

School of Medicine, University of California San Diego, San Diego, CA, USA

Nicole Rigler

Division of Complex Family Planning, Department of Obstetrics, Gynecology and Reproductive Sciences, University of California San Diego, 9300 Campus Point Dr. MC 7433, La Jolla, San Diego, CA, USA

Gennifer Kully, Marisa C. Hildebrand, Sarah Averbach & Sheila K. Mody

Center on Gender Equity and Health, University of California, San Diego, CA, USA

Gennifer Kully & Sarah Averbach


Contributions

SM is the principal investigator; she led the data analysis, including qualitative coding, and the dissemination of findings, and was also involved in study design and participant recruitment. NR was the primary interviewer and was involved in study design, recruitment, data management, data analysis, and dissemination of findings. GK and MH were involved in study design, recruitment, coordination of the study, IRB documentation, data analysis, and dissemination of findings. SA was involved in study design and dissemination of findings. All authors read and approved the final draft of the manuscript.

Corresponding author

Correspondence to Sheila K. Mody .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Institutional Review Board at University of California, San Diego (Study #201908). All participants gave written informed consent.

Consent for publication

Not applicable.

Competing interests

S.M. is a consultant for Bayer and Merck. She has grant funding from Organon and receives authorship royalties from UpToDate. S.A. has served as a consultant for Bayer on immediate postpartum IUD use. The remaining authors report no conflict of interest.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Rigler, N., Kully, G., Hildebrand, M.C. et al. Offering extended use of the contraceptive implant via an implementation science framework: a qualitative study of clinicians’ perceived barriers and facilitators. BMC Health Serv Res 24 , 697 (2024). https://doi.org/10.1186/s12913-024-10991-4

Download citation

Received : 19 December 2023

Accepted : 15 April 2024

Published : 03 June 2024

DOI : https://doi.org/10.1186/s12913-024-10991-4


  • Contraceptive implant
  • Long-acting contraception
  • Extended use
  • Contraceptive access
  • Implementation science
  • Consolidated framework for implementation research
  • Off-label use

BMC Health Services Research

ISSN: 1472-6963


It feels so real! Situated authentic language learning in immersive virtual reality

Published: 07 June 2024

Cite this article


  • Sangmin-Michelle Lee   ORCID: orcid.org/0000-0002-7686-3537 1 ,
  • Xiaoran Wang   ORCID: orcid.org/0009-0002-1643-7621 2 ,
  • Ilee Park   ORCID: orcid.org/0009-0001-1091-4039 3 &
  • Riski Lestiono   ORCID: orcid.org/0000-0002-9320-9158 4  

While traditional English as a foreign language (EFL) classrooms often lack context and authenticity, immersive virtual reality (iVR) provides a valuable alternative opportunity to practice and acquire the target language in a replica of the real world. The present study used iVR ( Immerse ) for communicative tasks and investigated how iVR, as an authentic learning environment, promoted students’ speaking. For six weeks, Korean EFL university students ( N  = 19) participated in various communicative tasks in simulated real-life-like places in iVR. The study employed a qualitative research method and analyzed the students’ physical and verbal behaviors based on the recordings and their perceptions of learning in iVR through interviews. Analysis of the recordings revealed that the students engaged in authentic conversations in response to the surrounding virtual environments, manipulated virtual objects to engage in dynamic and contextualized conversations, and showed enjoyment and comfort. Triangulating the results of the recording analysis, the interviews confirmed the students’ preference for iVR over traditional teaching due to more realistic English-speaking contexts, more natural interactions, more opportunities to speak, and the connection of each scene to their own real-life experiences. Finally, pedagogical implications are proposed to integrate iVR to facilitate students’ authentic and meaningful learning.


Data availability

Recordings cannot be shared openly because they contain the students’ personal information. Anonymous interview data are available from the authors upon request.


Author information

Authors and Affiliations

Kyung Hee University/School of Global Communication, 201 College of Foreign Languages & Culture, Yonginsi, Gyeonggido, Korea

Sangmin-Michelle Lee

Graduate School of British-American Language and Culture, Kyung Hee University, Yonginsi, Gyeonggido, Korea

Xiaoran Wang

Bongeun Middle School, Kyung Hee University/Graduate School of British-American Language and Culture, 59, Yeongdong-Daero 128-Gil, Gangnam-Gu, Seoul, Korea

Ilee Park

Universitas Muhammadiyah Malang/English Language Education Department, Kyung Hee University/Graduate School of British-American Language and Culture, Jl. Raya Tlogomas 246, Malang, Indonesia

Riski Lestiono

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Riski Lestiono .

Ethics declarations

The participants in this study gave their informed consent prior to its initiation.

Conflicts of interest

There are no conflicts of interest associated with this study.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A. Prompt samples

Bar : You’re waiting for a friend at a bar and there’s a really cool guy/girl sitting alone nearby. You think it would be nice to be friends, so you go over to talk to the person, but the person seems to be bothered by you. One of you expresses that you’re interested in that person, and the other one tries to decline the offer politely.

Gym: It’s your first time at the gym and you have no idea how to use some of the equipment. You’re eager to try out a piece of equipment (e.g. barbell bench, squat rack or spin bike). However, there’s someone who's been using it for a long time. You'd like to talk to him/her to see if you can take turns using it and ask for instructions on how to use it. If possible, ask him/her how to use other equipment in the gym that you are interested in.

Fast food restaurant: You’ve been waiting for your order in the fast food restaurant for over 15 min, and it still hasn’t been served. Approach one of the restaurant staff to express your dissatisfaction with the delay, ask why the wait is longer than usual, and kindly request that they expedite your meal.

Appendix B. Interview questions

Describe the advantages, disadvantages, and difficulties of using iVR.

Describe your interaction with your partners. What did you like and what didn’t you like about it?

Which scene did you like best and why?

How often did you interact with the virtual objects? For what purposes?

What do you want to change about the iVR environment?

Evaluate your experience and learning outcomes in iVR compared to the traditional English classroom.

Appendix C. Coding sample of the interview

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Lee, SM., Wang, X., Park, I. et al. It feels so real! Situated authentic language learning in immersive virtual reality. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12807-8

Download citation

Received : 31 January 2024

Accepted : 17 May 2024

Published : 07 June 2024

DOI : https://doi.org/10.1007/s10639-024-12807-8


  • Immersive virtual reality
  • Authenticity
  • Situated learning
  • Communicative tasks

IMAGES

  1. Contextual Analysis: A Fundamental Attribute of Qualitative Research

    contextual analysis research methodology

  2. Project Contextual Analysis framework

    contextual analysis research methodology

  3. PPT

    contextual analysis research methodology

  4. 10 Easy Steps: How to Write Research Methodology in 2023

    contextual analysis research methodology

  5. Project Contextual Analysis framework unpacked

    contextual analysis research methodology

  6. (PDF) Contextual Analysis in Practice

    contextual analysis research methodology

VIDEO

  1. Textual Analysis as a Resaerch Method- Fall21

  2. Content Analysis || Research Methodology || Dr.vivek pragpura || sociology with vivek ||

  3. How to present the research context when writing the methodology #shorts

  4. GESI Contextual Analysis Tool

  5. interpretation of data , analysis and thesis writing (Nta UGC net sociology)

  6. #1 Introduction & Contextual Analysis 2/10 . 26th August 2020 2/2 . #AE-RM3-201

COMMENTS

  1. Contextual analysis: A research methodology and research approach

    Contextual analysis starts from a preliminary delimitation of research objects as wholes, and continues with discerning and delimiting main parts of the objects, and parts within the main parts. It is characteristic of the analysis that the research objects are seen as wholes of related and organized parts. In contextual analysis, information ...

  2. Contextual Analysis: A Fundamental Attribute of Qualitative Research

    One of the 10 unique or distinctive attributes of qualitative research is contextual, multilayered analysis. This is a fundamental aspect of qualitative research and, in fact, plays a central role in the unique attributes associated with data generation, i.e., the importance of context, the importance of meaning, the participant-researcher relationship, and researcher as instrument —…

  3. Contextual Analysis: Putting Research into Context

    This chapter reflects on contextual analysis, which examines the environment in which a given phenomenon operates. Contextual analysis is used widely in social sciences, such as history, managerial and leadership studies, organizational theory, business studies, and political sciences. It is useful for identifying trends and topics within unstructured data (contexts). In a sense, contextual ...

  4. Methodological approaches to study context in intervention

    Using EGM methodology, we applied a novel approach for summarizing and evaluating available evidence on contextual analysis to develop an initial evidence gap map on contextual analysis methodology. Based on a random sample drawn from 110 intervention implementation studies, we found that fewer than one-quarter of those studies (21.8%) reported ...

  5. Context analysis—A technique for analysing research in a ...

    Context analysis is a new method for appraising a body of publications. The process consists of creating a database of attributes assigned to each paper by the reviewer and then looking for interesting relationships in the data. Assigning the attributes requires an understanding of the subject matter of the papers. We present findings about one particular research field, Management of R&D at ...

  6. SAGE Research Methods: Find resources to answer your research methods

    Learn how to use contextual analysis to understand social phenomena and interpret data in this comprehensive book by SAGE Publications.

  7. Unraveling implementation context: the Basel Approach for coNtextual

    The importance of context for a successful and sustainable implementation has gained significant attention in implementation science (IS), with contextual analysis increasingly recognized as vital to IS methodology [1,2,3,4]. While the value of contextual analysis is widely accepted, guidance on how to conduct one is lacking and no unified definition of contextual analysis in IS exists.

  8. Contextual and Analytic Qualities of Research Methods Exemplified in

    Contextual analysis is presented as a useful methodology meeting those requirements. The aim is to discuss the most fundamental characteristics of contextual analysis and not to give a detailed description of the use of the methodology. ... one of the researchers who originally developed the research orientation called Phenomenography and has ...

  9. PDF Context analysis—A technique for analysing research in a field, a

    The research method used must first be identified by reading a sample of the papers, and definitions developed. Then the balance of the papers in the field must be read, and a research method assigned to each paper. Typically five methods might be identified, named and defined.

  10. Contextual Analysis: Putting Research into Context

    Hartson & Pyla (2012) defined contextual analysis as an in-depth study and analysis of existing work practice and its context. Willems (2021) explains the importance of the method.

  11. Context analysis

    Context analysis - A technique for analysing research in a field, applied to literature on the management of R & D at the section level. Falkingham, Leslie T.; Reeves, Richard. ... Context Analysis is a new method for surveying a body of publications as a whole. The process consists of creating a database of information about the publications ...

  12. Contextual Research

    The contextual interview is a one-on-one interview which lasts about 1.5-2 h with a user in context. The basic concept of contextual inquiry is quite simple: "[…] go to the user, watch them do activities you care about, and talk with them about what they're doing right then". However, it is a sophisticated technique which requires an experienced interviewer and is embedded in a ...

  13. Understanding 'context' in realist evaluation and synthesis

    The problem of context in research. In the social sciences, conceptual confusion and inconsistent use of terms is a fundamental source of difficulty in theoretical frameworks and empirical analysis (Collier et al., 2006). We wish to avoid collapsing the debate into a discussion of what 'context is' or 'is not' (for reasons we elaborate on further later in the paper) but ...

  14. Citation Context Analysis as a Method for Conducting Rigorous and

    A novel and emerging form of literature review called citation context analysis can help scholars explore and describe—at a specific and detailed level—how important ideas are used by and spread from a source text to subsequent citing works. This type of review involves gathering all the citations to a given work, set of works, or author, and then using content analysis techniques to ...
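
    The gathering step this snippet describes, collecting every passage in which a source work is cited before content-analyzing those passages, can be roughly sketched as follows. The citing texts and the "(Smith, 1990)" marker are invented; a real pipeline would query a citation index rather than hard-coded strings:

```python
import re

# Hypothetical citing passages; "(Smith, 1990)" stands in for the source work.
citing_texts = [
    "Earlier models were limited. Our design builds on (Smith, 1990). We extend it.",
    "We adopt the framework of (Smith, 1990) for coding. Results follow.",
]

marker = "(Smith, 1990)"
contexts = []
for text in citing_texts:
    # Naive sentence split on terminal punctuation; adequate for this toy corpus.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if marker in sentence:
            contexts.append(sentence)

print(contexts)
```

    Each collected sentence is a "citation context" that can then be coded with ordinary content-analysis techniques, e.g. whether the citation is supportive, critical, or merely perfunctory.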

  15. Critical Discourse Analysis

    Critical discourse analysis (or discourse analysis) is a research method for studying written or spoken language in relation to its social context. It aims to understand how language is used in real life situations. When you conduct discourse analysis, you might focus on: The purposes and effects of different types of language.

  16. Content Analysis

    Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual: Books, newspapers and magazines. Speeches and interviews. Web content and social media posts. Photographs and films.
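
    The "systematically collect data from a set of texts" step can be illustrated with a minimal coding sketch. The category dictionary and the two texts are hypothetical, chosen only to show the mechanics of tallying pattern occurrences:

```python
import re
from collections import Counter

# Hypothetical coding frame: category -> indicator terms.
coding_frame = {
    "context": {"context", "setting", "environment"},
    "method": {"interview", "survey", "observation"},
}

texts = [
    "The interview explored the setting and environment of daily work.",
    "A survey captured context alongside structured observation.",
]

# Systematically tally category occurrences across the corpus.
tally = Counter()
for text in texts:
    tokens = re.findall(r"[a-z]+", text.lower())
    for category, terms in coding_frame.items():
        tally[category] += sum(token in terms for token in tokens)

print(dict(tally))
```

    Quantitative content analysis stops near counts like these; the qualitative variant, as the surrounding sections stress, keeps returning to the texts to question what the counted patterns mean in context.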

  17. (PDF) Contextual Analysis in Practice

    The framework for Strategic Systemic Thinking (SST) provides a method for investigating an organisation's perceptions and goals, identifying missing resources, and posing possible solutions using ...

  18. (PDF) Contextual Research

    Contextual research is part of field study methods and is used when exploring the context of usage of a product or service, or the cultural context. It is applied when users' tasks are ...

  19. Case Study Methodology of Qualitative Research: Key Attributes and

    A case study is one of the most commonly used methodologies of social research. This article attempts to look into the various dimensions of a case study research strategy, the different epistemological strands which determine the particular case study type and approach adopted in the field, discusses the factors which can enhance the effectiveness of a case study research, and the debate ...

  20. Content Analysis

    Abstract. In this chapter, the focus is on ways in which content analysis can be used to investigate and describe interview and textual data. The chapter opens with a contextualization of the method and then proceeds to an examination of the role of content analysis in relation to both quantitative and qualitative modes of social research.

  21. How to use and assess qualitative research methods

    How to conduct qualitative research? Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [13, 14]. As Fossey puts it: "sampling, data collection, analysis and interpretation are related to each other in a cyclical ...

  22. Contextualizing Your Research Project

    The term 'context' comes from a Latin root meaning 'to knit together', 'to make a connection' or 'to link'. In research, contextualization is a way of approaching your research, or linking your research project to the relevant research and to the specific setting of the study (Rousseau & Fried, 2001, p. 1). Research contextualization is a vital aspect of any research project ...

  23. How To Conduct Content Analysis: A Comprehensive Guide

    Content analysis is a systematic method for analyzing textual, visual, or audio data, allowing researchers to identify complex meanings, patterns, and themes in qualitative information. Comprehending Context and Culture: A common goal of qualitative research is to comprehend phenomena with regard to their sociocultural environment. Researchers ...

  26. The Historical Methodological Foundation of Phenomenography

    The book, Contextual Analysis. A Research Methodology and Research Approach (Svensson, 2021) gives a comprehensive description of contextual analysis as a general research methodology and research approach. In the book, contextual analysis is described and argued for in relation to a broad field of scientific work.
