• Open access
  • Published: 25 September 2020

The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects

  • Justin D. Smith   ORCID: orcid.org/0000-0003-3264-8082 1 , 2 ,
  • Dennis H. Li 3 &
  • Miriam R. Rafferty 4  

Implementation Science volume 15, Article number: 84 (2020)


A Letter to the Editor to this article was published on 17 November 2021

Background

Numerous models, frameworks, and theories exist for specific aspects of implementation research, including for determinants, strategies, and outcomes. However, implementation research projects often fail to provide a coherent rationale or justification for how these aspects are selected and tested in relation to one another. Despite this need to better specify the conceptual linkages between the core elements involved in projects, few tools or methods have been developed to aid in this task. The Implementation Research Logic Model (IRLM) was created for this purpose and to enhance the rigor and transparency of describing the often-complex processes of improving the adoption of evidence-based interventions in healthcare delivery systems.

Methods

The IRLM structure and guiding principles were developed through a series of preliminary activities with multiple investigators representing diverse implementation research projects in terms of contexts, research designs, and implementation strategies being evaluated. The utility of the IRLM was evaluated during a 2-day training of over 130 implementation researchers and healthcare delivery system partners.

Results

Preliminary work with the IRLM produced a core structure and multiple variations for common implementation research designs and situations, as well as guiding principles and suggestions for use. Survey results indicated high utility of the IRLM for multiple purposes, such as improving the rigor and reproducibility of projects; serving as a "roadmap" for how a project is to be carried out; clearly reporting and specifying how a project is to be conducted; and understanding the connections between determinants, strategies, mechanisms, and outcomes for a given project.

Conclusions

The IRLM is a semi-structured, principle-guided tool designed to improve the specification, rigor, reproducibility, and testable causal pathways involved in implementation research projects. The IRLM can also aid implementation researchers and implementation partners in the planning and execution of practice change initiatives. Adaptation and refinement of the IRLM are ongoing, as is the development of resources for use and applications to diverse projects, to address the challenges of this complex scientific field.


Contributions to the literature

Drawing from and integrating existing frameworks, models, and theories, the IRLM advances the traditional logic model for the requirements of implementation research and practice.

The IRLM provides a means of describing the complex relationships between critical elements of implementation research and practice in a way that can be used to improve the rigor and reproducibility of research and implementation practice, and the testing of theory.

The IRLM offers researchers and partners a useful tool for the purposes of planning, executing, reporting, and synthesizing processes and findings across the stages of implementation projects.

Background

In response to a call to address noted problems with transparency, rigor, openness, and reproducibility in biomedical research [ 1 ], the National Institutes of Health issued guidance in 2014 pertaining to the research it funds ( https://www.nih.gov/research-training/rigor-reproducibility ). The field of implementation science has recognized a similar need for better specification [ 2 ]. However, integrating the necessary conceptual elements of implementation research, which often involves multiple models, frameworks, and theories, remains an ongoing challenge. A conceptually grounded organizational tool could improve the rigor and reproducibility of implementation research while offering additional utility for the field.

This article describes the development and application of the Implementation Research Logic Model (IRLM). The IRLM can be used with various types of implementation studies and at various stages of research, from planning and executing to reporting and synthesizing. Example IRLMs are provided for common study designs and scenarios, including hybrid designs and studies involving multiple service delivery systems [ 3 , 4 ]. Last, we describe the preliminary use of the IRLM and provide results from a post-training evaluation. An earlier version of this work was presented at the 2018 AcademyHealth/NIH Conference on the Science of Dissemination and Implementation in Health, and the abstract appeared in Implementation Science [ 5 ].

Specification challenges in implementation research

An imprecise understanding of what was done, and why, during the implementation of a new innovation makes it difficult to identify the factors responsible for successful implementation and to learn from what contributed to failed implementation. Thus, improving the specification of phenomena in implementation research is necessary to inform our understanding of how implementation strategies work, for whom, under what determinant conditions, and on what implementation and clinical outcomes. One challenge is that implementation science uses numerous models and frameworks (hereafter, "frameworks") to describe, organize, and aid in understanding the complexity of changing practice patterns and integrating evidence-based health interventions across systems [ 6 ]. These frameworks typically address implementation determinants, implementation process, or implementation evaluation [ 7 ]. Although many frameworks incorporate two or more of these broad purposes, researchers often find it necessary to use more than one framework to describe the various aspects of an implementation research study. The conceptual connections and relationships between multiple frameworks are often difficult to describe and to link to theory [ 8 ].

Similarly, reporting guidelines exist for some of these implementation research components, such as strategies [ 9 ] and outcomes [ 10 ], as well as for entire studies (i.e., Standards for Reporting Implementation Studies [ 11 ]); however, they generally help describe the individual components and not their interactions. To facilitate causal modeling [ 12 ], which can be used to elucidate mechanisms of change and the processes involved in both successful and unsuccessful implementation research projects, investigators must clearly define the relations among variables in ways that are testable with research studies [ 13 ]. Only then can we open the “black box” of how specific implementation strategies operate to predict outcomes.

Logic models

Logic models, graphic depictions that present the shared relationships among various elements of a program or study, have been used for decades in program development and evaluation [ 14 ] and are often required by funding agencies when proposing studies involving implementation [ 15 ]. Used to develop agreement among diverse stakeholders about the "what" and the "how" of proposed and ongoing projects, logic models have been shown to improve planning by highlighting theoretical and practical gaps, to support the development of meaningful process indicators for tracking, and to aid in both reproducing successful studies and identifying the failures of unsuccessful ones [ 16 ]. They are also useful at other stages of research and for program implementation, such as organizing a project, grant application, or study protocol; presenting findings from a completed project; and synthesizing the findings of multiple projects [ 17 ].

Logic models can also be used in the context of program theory, an explicit statement of how a project/strategy/intervention/program/policy is understood to contribute to a chain of intermediate results that eventually produce the intended/observed impacts [ 18 ]. Program theory specifies both a Theory of Change (i.e., the central processes or drivers by which change comes about, following a formal theory or tacit understanding) and a Theory of Action (i.e., how program components are constructed to activate the Theory of Change) [ 16 ]. Inherent within program theory is causal chain modeling. In implementation research, Fernandez et al. [ 19 ] applied mapping methods to implementation strategies to postulate the ways in which changes to the system affect downstream implementation and clinical outcomes. Their work presents an implementation mapping logic model, based on Proctor et al. [ 20 , 21 ], that is focused primarily on the selection of implementation strategy(ies) rather than a complete depiction of the conceptual model linking all implementation research elements (i.e., determinants, strategies, mechanisms of action, implementation outcomes, clinical outcomes) in the detailed manner we describe in this article.

Development of the IRLM

The IRLM grew out of a recognition that implementation research presents unique challenges due to the field's distinct and still-codifying terminology [ 22 ] and its use of implementation-specific and non-specific (borrowed from other fields) theories, models, and frameworks [ 7 ]. The development of the IRLM occurred through a series of case applications. It began with a collaboration between investigators at Northwestern University and the Shirley Ryan AbilityLab, in which the IRLM was used to study the implementation of a new model of patient care in a new hospital and in other related projects [ 23 ]. Next, the IRLM was used with three already-funded implementation research projects to plan for and describe the prospective aspects of the trials, as well as with an ongoing randomized roll-out implementation trial of the Collaborative Care Model for depression management [Smith JD, Fu E, Carroll AJ, Rado J, Rosenthal LJ, Atlas JA, Burnett-Zeigler I, Carlo A, Jordan N, Brown CH, Csernansky J: Collaborative care for depression management in primary care: a randomized rollout trial using a type 2 hybrid effectiveness-implementation design, submitted for publication]. It was also applied in the later stages of a nearly completed implementation research project of a family-based obesity management intervention in pediatric primary care, to describe what had occurred over the course of the 3-year trial [ 24 ]. Last, the IRLM was used as a training tool in a 2-day training of 63 grantees of NIH planning grants awarded under the Ending the HIV Epidemic initiative [ 25 ]. Results from a survey of the training participants are reported in the "Results" section. From these preliminary activities, we identified a number of ways that the IRLM can be used, described in the section "Using the IRLM for different purposes and stages of research."

The Implementation Research Logic Model

In developing the IRLM, we began with the common "pipeline" logic model format used by AHRQ, CDC, NIH, PCORI, and others [ 16 ]. This structure was chosen because it is familiar to funders, investigators, readers, and reviewers. Although a number of characteristics of the pipeline logic model can be applied to implementation research studies, there is an overall misfit, because implementation research focuses on the systems that support adoption and delivery of health practices, involves multiple levels within one or more systems, and has its own unique terminology and frameworks [ 3 , 22 , 26 ]. We therefore adapted the typical evaluation logic model to integrate existing implementation science frameworks as its core elements while keeping the same aim of facilitating causal modeling.

The most common IRLM format is depicted in Fig. 1; Additional File A1 is a fillable PDF version of Fig. 1. In certain situations, it might be preferable to also include the evidence-based intervention (EBI; defined as a clinical, preventive, or educational protocol or a policy, principle, or practice whose effects are supported by research [ 27 ]) in the model (Fig. 2), both to demonstrate alignment of contextual factors (determinants) and strategies with the components and characteristics of the clinical intervention/policy/program and to disentangle it from the implementation strategies. Chief among these situations are "home-grown" interventions, whose components and theory of change may not have been previously described, and novel interventions that are early in the translational pipeline, which may require greater detail for the reader/reviewer. Variant formats are provided as Additional Files A2 to A4 for situations and study designs commonly encountered in implementation research, including comparative implementation studies (A2), studies involving multiple service contexts (A3), and implementation optimization designs (A4). Further, three illustrative IRLMs are provided, with brief descriptions of the projects and the utility of the IRLM (A5, A6, and A7).

Fig. 1. Implementation Research Logic Model (IRLM) Standard Form. Notes: Domain names in the determinants section were drawn from the Consolidated Framework for Implementation Research. The format of the outcomes column is from Proctor et al. 2011

Fig. 2. Implementation Research Logic Model (IRLM) Standard Form with Intervention. Notes: Domain names in the determinants section were drawn from the Consolidated Framework for Implementation Research. The format of the outcomes column is from Proctor et al. 2011

Core elements and theory

The IRLM specifies the relationships between determinants of implementation, implementation strategies, the mechanisms of action resulting from the strategies, and the implementation and clinical outcomes affected. These core elements are germane to every implementation research project in some way. Accordingly, the generalized theory of the IRLM posits that (1) implementation strategies selected for a given EBI are related to implementation determinants (context-specific barriers and facilitators), (2) strategies work through specific mechanisms of action to change the context or the behaviors of those within the context, and (3) implementation outcomes are the proximal impacts of the strategy and its mechanisms, which then relate to the clinical outcomes of the EBI. Articulated in part by others [ 9 , 12 , 21 , 28 , 29 ], this causal pathway theory is largely explanatory and details the Theory of Change and the Theory of Action of the implementation strategies in a single model. The EBI Theory of Action can also be displayed within a modified IRLM (see Additional File A 4 ). We now briefly describe the core elements and discuss conceptual challenges in how they relate to one another and to the overall goals of implementation research.
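To make these hypothesized linkages concrete, the sketch below renders the four core elements and one causal pathway as a minimal Python data structure. It is purely illustrative: the class names, fields, and example entries are assumptions made for this sketch, not part of the IRLM itself or any published schema.

```python
# Illustrative only: a minimal data structure for the IRLM's four core
# elements and their hypothesized links. All names are invented for this
# sketch rather than drawn from the IRLM templates.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CausalPathway:
    determinant: str              # context-specific barrier or facilitator
    strategy: str                 # implementation strategy addressing it
    mechanism: str                # process through which the strategy works
    implementation_outcome: str   # proximal impact of the strategy
    clinical_outcome: str         # distal outcome of the EBI


@dataclass
class IRLM:
    ebi: str
    pathways: List[CausalPathway] = field(default_factory=list)


model = IRLM(ebi="Hypothetical depression care EBI")
model.pathways.append(CausalPathway(
    determinant="Limited provider knowledge of the EBI (barrier)",
    strategy="Conduct educational meetings",
    mechanism="Increased knowledge and self-efficacy",
    implementation_outcome="Adoption; fidelity of delivery",
    clinical_outcome="Reduced depressive symptoms",
))
print(f"{model.ebi}: {len(model.pathways)} specified pathway(s)")
```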

Determinants

Determinants of implementation are factors that might prevent or enable implementation (i.e., barriers and facilitators). Determinants may act as moderators, “effect modifiers,” or mediators, thus indicating that they are links in a chain of causal mechanisms [ 12 ]. Common determinant frameworks are the Consolidated Framework for Implementation Research (CFIR) [ 30 ] and the Theoretical Domains Framework [ 31 ].

Implementation strategies

Implementation strategies are supports, changes to, and interventions on the system to increase adoption of EBIs into usual care [ 32 ]. Determinants are commonly considered when selecting and tailoring implementation strategies [ 28 , 29 , 33 ], and providing the theoretical or conceptual reasoning for strategy selection is recommended [ 9 ]. The IRLM can be used to specify the proposed relationships between strategies and the other elements (determinants, mechanisms, and outcomes), and it assists with considering, planning, and reporting all strategies in place during an implementation research project that could contribute to the outcomes and resulting changes.

Because implementation research occurs within dynamic delivery systems with multiple factors that determine success or failure, the field has had difficulty identifying consistent links between individual barriers and the specific strategies to overcome them. For example, the Expert Recommendations for Implementing Change (ERIC) compilation of strategies [ 32 ] was used to determine which strategies would best address contextual barriers identified by the CFIR [ 29 ]. An online CFIR–ERIC matching process completed by implementation researchers and practitioners resulted in a large degree of heterogeneity and few consistent relationships between barrier and strategy, meaning the relationship is rarely one-to-one (e.g., a single strategy is often linked to multiple barriers, and more than one strategy may be needed to address a single barrier). Moreover, when implementation outcomes are considered, researchers often find that improving one outcome requires addressing more than one contextual barrier, which might in turn require one or more strategies.

Frequently, the reporting of implementation research studies focuses on the strategy or strategies that were introduced for the research study, without due attention to other strategies already used in the system or additional supporting strategies that might be needed to implement the target strategy. The IRLM allows for the comprehensive specification of all introduced and present strategies, as well as their changes (adaptations, additions, discontinuations) during the project.

Mechanisms of action

Mechanisms of action are processes or events through which an implementation strategy operates to affect desired implementation outcomes [ 12 ]. The mechanism can be a change in a determinant, a proximal implementation outcome, an aspect of the implementation strategy itself, or a combination of these in a multiple-intervening-effect model. An example of a causal process might be using training and fidelity-monitoring strategies to improve delivery agents' knowledge and self-efficacy about the EBI in response to knowledge-related barriers in the service delivery system. This could raise the acceptability of the EBI, increase the likelihood of adoption, improve the fidelity of delivery, and lead to sustainment. Relatively few implementation studies formally test mechanisms of action, but this area of investigation has received significant attention recently as the field recognizes the necessity of understanding how strategies operate [ 33 , 34 , 35 ].

Implementation outcomes

Implementation outcomes are the effects of deliberate and purposive actions to implement new treatments, practices, and services [ 21 ]. They can be indicators of implementation processes or key intermediate outcomes in relation to service or clinical outcomes. Glasgow et al. [ 36 , 37 , 38 ] describe the interrelated nature of implementation outcomes as occurring in a logical, but not necessarily linear, sequence: adoption by a delivery agent, delivery of the innovation with fidelity, reach of the innovation to the intended population, and sustainment of the innovation over time. The combined impact of these nested outcomes, coupled with the size of the effect of the EBI, determines the population or public health impact of implementation [ 36 ]. Outcomes earlier in the sequence can be conceptualized as mediators and mechanisms of strategies on later implementation outcomes. Specifying which strategies are theoretically intended to affect which outcomes, through which mechanisms of action, is crucial for improving the rigor and reproducibility of implementation research and for testing theory.

Using the Implementation Research Logic Model

Guiding principles

One of the critical insights from our preliminary work was that the use of the IRLM should be guided by a set of principles rather than governed by rules. These principles are intended to be flexible both to allow for adaptation to the various types of implementation studies and evolution of the IRLM over time and to address concerns in the field of implementation science regarding specification, rigor, reproducibility, and transparency of design and process [ 5 ]. Given this flexibility of use, the IRLM will invariably require accompanying text and other supporting documents. These are described in the section “Use of Supporting Text and Documents.”

Principle 1: Strive for comprehensiveness

Comprehensiveness increases transparency, can improve rigor, and allows for a better understanding of alternative explanations to the conclusions drawn, particularly in the presence of null findings for an experimental design. Thus, all relevant determinants, implementation strategies, and outcomes should be included in the IRLM.

Concerning determinants, the valence should be noted as being either a barrier, a facilitator, neutral, or variable by study unit. This can be achieved by simply adding plus (+) or minus (−) signs for facilitators and barriers, respectively, or by using a coding system such as that developed by Damschroder et al. [ 39 ], which indicates the relative strength of the determinant on a scale: −2 (strong negative impact), −1 (weak negative impact), 0 (neutral or mixed influence), +1 (weak positive impact), and +2 (strong positive impact). The use of such a coding system could yield better specification compared with using study-specific adjectives or changing the name of the determinant (e.g., greater relative priority, addresses patient needs, good climate for implementation). It is critical to include all relevant determinants and not simply limit reporting to those that are hypothesized to be related to the strategies and outcomes, as there are complex interrelationships between determinants.
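As a concrete illustration of this coding system, the short sketch below encodes a handful of determinants on the −2 to +2 scale; the determinant names and ratings are invented for the example.

```python
# Sketch of the Damschroder et al. [39] valence coding applied to
# determinants. The determinant names and ratings are hypothetical.
VALENCE = {
    -2: "strong negative impact",
    -1: "weak negative impact",
    0: "neutral or mixed influence",
    1: "weak positive impact",
    2: "strong positive impact",
}

determinants = {
    "Relative priority": 2,       # facilitator
    "Available resources": -1,    # barrier
    "Implementation climate": 0,  # mixed across sites
}

for name, rating in determinants.items():
    print(f"{name}: {rating:+d} ({VALENCE[rating]})")
```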

Implementation strategies should be reported in their entirety. When using the IRLM to plan a study, it is important first to list all strategies in the system, including those already in use and those to be initiated for the purposes of the study, often in the experimental condition of the design. Strategies should then be labeled to indicate whether they were (a) in place in the system prior to the study, (b) initiated prospectively for the purposes of the study (particularly for experimental study designs), (c) removed as a result of being ineffective or onerous, or (d) introduced during the study to address an emergent barrier or to supplement other strategies because of low initial impact. This labeling is relevant when using the IRLM for planning, as an ongoing tracking system, for retrospective application to a completed study, and in the final reporting of a study. A number of processes have been proposed for tracking the use of, and adaptations to, implementation strategies over time [ 40 , 41 ]. Each of these is more detailed than would be necessary for the IRLM, but the processes described provide a method for accurately tracking the temporal aspects of strategy use that fulfills the comprehensiveness principle.
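A minimal sketch of such a status-labeling scheme follows; the strategy names and their assigned statuses are hypothetical, and the categories are the (a) through (d) labels listed above.

```python
# Sketch: labeling strategies with the temporal statuses (a)-(d) described
# in the text. Strategy names and status assignments are hypothetical.
from enum import Enum


class Status(Enum):
    PRE_EXISTING = "a: in place prior to the study"
    INITIATED = "b: initiated for the purposes of the study"
    REMOVED = "c: removed as ineffective or onerous"
    ADDED = "d: introduced mid-study for an emergent barrier"


strategies = [
    ("Audit and provide feedback", Status.PRE_EXISTING),
    ("Facilitation", Status.INITIATED),
    ("Remind clinicians", Status.ADDED),
]

for name, status in strategies:
    print(f"{name} [{status.value}]")
```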

Although most studies will indicate a primary implementation outcome, other outcomes will almost assuredly be measured, so they too ought to be included in the IRLM. This guidance is given in large part due to the interdependence of implementation outcomes, such that adoption relates to delivery with fidelity, reach of the intervention, and potential for sustainment [ 36 ]. Similarly, the overall public health impact (defined as reach multiplied by the effect size of the intervention [ 38 ]) is inextricably tied to adoption, fidelity, acceptability, cost, and so on. Although a study might justifiably focus on only one or two implementation outcomes, the others are nonetheless relevant and should be specified and reported. For example, it is important to capture potential unintended consequences and indicators of adverse effects that could result from the implementation of an EBI.
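The public health impact heuristic cited above (reach multiplied by the effect size of the intervention [ 38 ]) can be illustrated with simple arithmetic; the numbers below are hypothetical.

```python
# Worked arithmetic for the RE-AIM public health impact heuristic [38]:
# impact = reach x effect size. All numbers are hypothetical.
reach = 0.40        # proportion of the intended population receiving the EBI
effect_size = 0.50  # standardized effect of the EBI when delivered well

impact = reach * effect_size
print(f"Population impact index: {impact:.2f}")

# Doubling reach at the same effect size doubles the index, which is why
# adoption, fidelity, and reach are treated as interdependent outcomes.
print(f"With reach = 0.80: {0.80 * effect_size:.2f}")
```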

Principle 2: Indicate key conceptual relationships

Although the IRLM has a generalized theory (described earlier), the relationships between elements need to be indicated in a manner aligning with the specific theory of change for the study. Researchers ought to use some form of notation to indicate these conceptual relationships, such as color-coding, superscripts, arrows, or a combination of the three. Such notations in the IRLM facilitate reference in the text to the study hypotheses, tests of effects, causal chain modeling, and other forms of elaboration (see "Supporting Text and Resources"). We prefer superscripts to colors or arrows in grant proposals and articles for practical purposes, as colors can be difficult to distinguish and arrows can obscure text and contribute to visual convolution. When presenting the IRLM using presentation programs (e.g., PowerPoint, Keynote), colors and arrows can be helpful, and animations can make these connections dynamic and sequential without adding to visual complexity. This principle could also prove useful in synthesizing across similar studies to build the science of tailored implementation, where strategies are selected based on the presence of specific combinations of determinants. As previously indicated [ 29 ], there is much work to be done in this area.

Principle 3: Specify critical study design elements

The critical design elements will vary by study design (e.g., hybrid effectiveness-implementation trial, observational study, which units are assigned to which strategies). This principle concerns not only researchers but also service systems and communities, whose consent is necessary to carry out any implementation design [ 3 , 42 , 43 ].

Primary outcome(s)

Indicate the primary outcome(s) at each level of the study design (e.g., clinician, clinic, organization, county, state, nation). The levels should align with the specific aims of a grant application or the stated objective of a research report. In the case of a process evaluation or an observational study including the RE-AIM evaluation components [ 38 ] or the Proctor et al. [ 21 ] taxonomy of implementation outcomes, the primary outcome may be the product of the conceptual or theoretical model used when a priori outcomes are not clearly indicated. We also suggest including downstream health services and clinical outcomes even if they are not measured, as these are important for understanding the logic of the study and the ultimate health-related targets.

For quasi/experimental designs

When quasi/experimental designs [ 3 , 4 ] are used, the independent variable(s) (i.e., the strategies that are introduced or manipulated or that otherwise differentiate study conditions) should be clearly labeled. This is important for internal validity and for differentiating conditions in multi-arm studies.

For comparative implementation trials

In comparative implementation trials [ 3 , 4 ], two or more competing implementation strategies are introduced for the purposes of the study (i.e., the comparison is not implementation-as-usual), and there is a need to indicate the determinants, strategies, mechanisms, and potentially the outcomes that differentiate the arms (see Additional File A2). As comparative implementation can involve multiple service delivery systems, the determinants, mechanisms, and outcomes might also differ, though there must be at least one comparable implementation outcome. In our preliminary work applying the IRLM to a large-scale comparative implementation trial, a single IRLM was not feasible, so we used one IRLM for each arm, because the strategies being tested spanned two delivery systems and, by design, were very different. This is an example of the flexible use of the IRLM.

For implementation optimization designs

A number of designs are now available that aim to test processes of optimizing implementation. These include factorial, Sequential Multiple Assignment Randomized Trial (SMART) [ 44 ], adaptive [ 45 ], and roll-out implementation optimization designs [ 46 ]. These designs allow for (a) building time-varying adaptive implementation strategies based on the order in which components are presented [ 44 ], (b) evaluating the additive and combined effects of multiple strategies [ 44 , 47 ], and (c) incorporating data-driven iterative changes to improve implementation in successive units [ 45 , 46 ]. The IRLM in Additional File A4 can be used for such designs.
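As an illustration of the design space these approaches cover, the sketch below enumerates the conditions of a hypothetical 2×2×2 factorial implementation experiment [ 44 , 47 ]; the three strategy components are assumptions made for the example.

```python
# Illustrative enumeration of a 2x2x2 factorial implementation design:
# each condition is a combination of three on/off strategy components.
# The component names are hypothetical.
from itertools import product

components = ["facilitation", "audit_feedback", "learning_collaborative"]

for i, combo in enumerate(product([0, 1], repeat=len(components)), start=1):
    active = [c for c, flag in zip(components, combo) if flag] or ["none"]
    print(f"condition {i}: {', '.join(active)}")
```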

Additional specification options

Users of the IRLM can specify any number of additional elements that may be important to their study. For example, one could notate those elements of the IRLM that have been or will be measured versus those that were based on the researcher's prior studies or inferred from findings reported in the literature. Users can also indicate when implementation strategies differ by level or unit within the study. In large multisite studies, strategies might not be uniform across all units, particularly those strategies that already exist within the system. Similarly, there might be a need to increase the dose of certain strategies to address the relative strengths of different determinants within units.

Using the IRLM for different purposes and stages of research

Commensurate with logic models more generally, the IRLM can be used for planning and organizing a project, carrying out a project (as a roadmap), reporting and presenting the findings of a completed project, and synthesizing the findings of multiple projects or of a specific area of implementation research, such as what is known about how learning collaboratives are effective within clinical care settings.

When the IRLM is used for planning, the process of populating each of the elements often begins with the known parameter(s) of the study. For example, if the problem is improving the adoption and reach of a specific EBI within a particular clinical setting, the implementation outcomes and context, as well as the EBI, are clearly known. The downstream clinical outcomes of the EBI are likely also known. Working from the two “bookends” of the IRLM, the researchers and community partners and/or organization stakeholders can begin to fill in the implementation strategies that are likely to be feasible and effective and then posit conceptually derived mechanisms of action. In another example, only the EBI and primary clinical outcomes were known. The IRLM was useful in considering different scenarios for what strategies might be needed and appropriate to test the implementation of the EBI in different service delivery contexts. The IRLM was a tool for the researchers and stakeholders to work through these multiple options.

When we used the IRLM to plan for the execution of funded implementation studies, the majority of the parameters had already been proposed in the grant application. However, in completing the IRLM prior to the start of the study, we found that a number of important contextual factors had not been considered, that additional implementation strategies were needed to complement the primary ones proposed in the grant, and that mechanisms needed to be added and measured. At the time of award, mechanisms were not an expected component of implementation research projects, as they likely will be in the future.

For another project, the IRLM was applied retrospectively to report on the findings and overall logic of the study. Because nearly all elements of the IRLM were known, we approached completion of the model as a means of showing what happened during the study and to accurately report the hypothesized relationships that we observed. These relationships could be formally tested using causal pathway modeling [ 12 ] or other path analysis approaches with one or more intervening variables [ 48 ].
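As a sketch of what such a test might look like, the example below fits a single-mediator path model (strategy exposure X, mechanism M, implementation outcome Y) using the product-of-coefficients approach reviewed by MacKinnon et al. [ 48 ]. It is not the study's analysis: the data are simulated, and the variable names and effect sizes are invented for the illustration.

```python
# Minimal single-mediator path analysis sketch on simulated data:
# X (strategy delivered) -> M (mechanism) -> Y (implementation outcome),
# estimated with two OLS regressions; the indirect effect is a*b.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.binomial(1, 0.5, n)                  # strategy delivered (0/1)
m = 0.6 * x + rng.normal(size=n)             # hypothesized mechanism
y = 0.5 * m + 0.1 * x + rng.normal(size=n)   # implementation outcome

fit_a = sm.OLS(m, sm.add_constant(x)).fit()                          # X -> M
fit_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()    # M, X -> Y

indirect = fit_a.params[1] * fit_b.params[1]  # effect through the mechanism
print(f"indirect (mediated) effect: {indirect:.3f}")
print(f"direct effect of X: {fit_b.params[2]:.3f}")
```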

Synthesizing

In our preliminary work with the IRLM, we used it in each of the first three ways; the fourth (synthesizing) is ongoing within the National Cancer Institute’s Improving the Management of symPtoms during And Following Cancer Treatment (IMPACT) research consortium. The purpose is to draw conclusions for the implementation of an EBI in a particular context (or across contexts) that are shared and generalizable to provide a guide for future research and implementation.

Use of supporting text and documents

While the IRLM provides a good deal of information about a project in a single visual, researchers will need to convey additional details about an implementation research study through the use of supporting text, tables, and figures in grant applications, reports, and articles. Some elements that require elaboration are (a) preliminary data on the assessment and valence of implementation determinants; (b) operationalization/detailing of the implementation strategies being used or observed, using established reporting guidelines [ 9 ] and labeling conventions [ 32 ] from the literature; (c) hypothesized or tested causal pathways [ 12 ]; (d) process, service, and clinical outcome measures, including the psychometric properties, method, and timing of administration, respondents, etc.; (e) study procedures, including subject selection, assignment to (or observation of natural) study conditions, and assessment throughout the conduct of the study [ 4 ]; and (f) the implementation plan or process for following established implementation frameworks [ 49 , 50 , 51 ]. By utilizing superscripts, subscripts, and other notations within the IRLM, as previously suggested, it is easy to refer to (a) hypothesized causal paths in theoretical overviews and analytic plan sections, (b) planned measures for determinants and outcomes, and (c) specific implementation strategies in text, tables, and figures.

Evidence of IRLM utility and acceptability

The IRLM was used as the foundation for a training in implementation research methods for a group of 65 planning projects awarded under the national Ending the HIV Epidemic initiative. One investigator (project director or co-investigator) and one implementation partner (i.e., a collaborator from a community service delivery system) from each project were invited to attend a 2-day in-person summit in Chicago, IL, in October 2019. One hundred thirty-two participants attended, representing 63 of the 65 projects. A survey, which included demographics and questions pertaining to the Ending the HIV Epidemic initiative, was sent to potential attendees prior to the summit; 129 individuals responded, including all 65 project directors, 13 co-investigators, and 51 implementation partners (62% female). Those who indicated an investigator role (n = 78) received additional questions about prior implementation research training (e.g., formal coursework, workshop, self-taught), related experience (e.g., involvement in a funded implementation project, program implementation, program evaluation, quality improvement), and the stage of their project (i.e., exploration, preparation, implementation, sustainment [ 50 ]).

Approximately 6 weeks after the summit, 89 attendees (69%) completed a post-training survey comprising more than 40 questions about their overall experience. Though the invitation to complete the survey made no mention of the IRLM, the survey included 10 items related to the IRLM and one item more generally about the logic of implementation research, each rated on a 4-point scale (1 = not at all, 2 = a little, 3 = moderately, 4 = very much; see Table 1). Forty-two investigators (65% of projects) and 24 implementation partners (68.2% female overall) indicated attending the training and completed the survey. Of the 66 respondents who attended the training, 100% completed all 11 IRLM items, suggesting little potential for response bias.

Table 1 provides the means, standard deviations, and percentage of respondents endorsing either the "moderately" or "very" response option. Results were promising for the utility of the IRLM on the majority of the dimensions assessed: more than 50% of respondents indicated that the IRLM was "moderately" or "very" helpful on every question. Overall, 77.6% (M = 3.18, SD = 0.827) of respondents indicated that their knowledge of the logic of implementation research had increased either moderately or very much after the 2-day training. At the time of the survey, when respondents were about 2.5 months into their 1-year planning projects, 44.6% indicated that they had already been able to complete a full draft of the IRLM.

Additional analyses using one-way analysis of variance indicated no statistically significant differences between investigators and implementation partners in responses to the IRLM questions. However, three items approached significance: planning the project (F = 2.460, p = .055), clearly reporting and specifying how the project is to be conducted (F = 2.327, p = .066), and knowledge of the logic of implementation research (F = 2.107, p = .091). In each case, scores were higher for investigators than for implementation partners, suggesting that the knowledge gap in implementation research may lie more in the academic realm than among community partners, who may not have a focus on research but whose day-to-day roles include implementing EBIs in the real world. Lastly, analyses using ordinal logistic regression did not yield any significant relationship between responses to the IRLM survey items and prior training (n = 42 investigators who attended the training and completed the post-training survey), prior related research experience (n = 42), or project stage of implementation (n = 66). This suggests that the IRLM is a useful tool for both investigators and implementers with varying levels of prior exposure to implementation research concepts and across all stages of implementation research. As a result of this training, the IRLM is now a required element in the FY2020 Ending the HIV Epidemic Centers for AIDS Research/AIDS Research Centers Supplement Announcement released in March 2020 [ 15 ].
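For readers less familiar with these analyses, the sketch below reproduces the form of the group comparison reported above (a one-way ANOVA on a single 4-point IRLM item) using simulated responses rather than the study's data.

```python
# Sketch of a one-way ANOVA comparing investigators with implementation
# partners on a 4-point IRLM utility item. Responses are simulated and
# do not correspond to the study's data.
from scipy import stats

investigators = [4, 3, 4, 3, 3, 4, 2, 4, 3, 4]
partners = [3, 3, 2, 3, 4, 2, 3, 3, 2, 3]

f_stat, p_value = stats.f_oneway(investigators, partners)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```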

Resources for using the IRLM

As the use of the IRLM for different study designs and purposes continues to expand and evolve, we envision supporting researchers and other program implementers in applying the IRLM to their own contexts. Our team at Northwestern University hosts web resources on the IRLM that include completed examples and tools to assist users in completing their model, including templates in various formats (Figs. 1 and 2; Additional Files A1, A2, A3, and A4, among others), a Quick Reference Guide (Additional File A8), and a series of worksheets that provide guidance on populating the IRLM (Additional File A9). These will be available at https://cepim.northwestern.edu/implementationresearchlogicmodel/.

Discussion

The IRLM provides a compact visual depiction of an implementation project and is a useful tool for academic-practice collaboration and partnership development. Used in conjunction with supporting text, tables, and figures to detail each of the primary elements, the IRLM has the potential to improve a number of aspects of implementation research, as identified in the results of the post-training survey. The usability of the IRLM is high for seasoned and novice implementation researchers alike, as evidenced by our survey results and preliminary work. Its use in the planning, executing, reporting, and synthesizing of implementation research could increase the rigor and transparency of complex studies and ultimately improve reproducibility, a challenge in the field, by offering a common structure to increase consistency and a method for more clearly specifying links and pathways to test theories.

Implementation occurs across the gamut of contexts and settings. The IRLM can be used when large organizational change is being considered, such as a new strategic plan with multifaceted strategies and outcomes. Within a narrower scope of a single EBI in a specific setting, the larger organizational context still ought to be included as inner setting determinants (i.e., the impact of the organizational initiative on the specific EBI implementation project) and as implementation strategies (i.e., the specific actions being done to make the organizational change a reality that could be leveraged to implement the EBI or could affect the success of implementation). The IRLM has been used by our team to plan for large systemic changes and to initiate capacity building strategies to address readiness to change (structures, processes, individuals) through strategic planning and leadership engagement at multiple levels in the organization. This aspect of the IRLM continues to evolve.

One drawback of the IRLM is that it might be viewed as an overly simplified format. This reflects the challenge of balancing depth and detail against parsimony, ease of comprehension, and ease of use. The structure of the IRLM may also inhibit creative thinking if applied too rigidly, which is among the reasons we provide numerous examples of different ways to tailor the model to the specific needs of different project designs and parameters. Relatedly, we encourage users to iterate on the design of the IRLM to increase its utility.

The promise of implementation science lies in the ability to conduct rigorous and reproducible research, to clearly understand the findings, and to synthesize findings from which generalizable conclusions can be drawn and actionable recommendations for practice change emerge. As scientists and implementers have worked to better define the core methods of the field, the need for theory-driven, testable integration of the foundational elements involved in impactful implementation research has become more apparent. The IRLM is a tool that can aid the field in addressing this need and moving toward the ultimate promise of implementation research to improve the provision and quality of healthcare services for all people.

Availability of data and materials

Not applicable.

Abbreviations

CFIR: Consolidated Framework for Implementation Research

EBI: Evidence-based intervention

ERIC: Expert Recommendations for Implementing Change

IRLM: Implementation Research Logic Model

References

Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, Buck S, Chambers CD, Chin G, Christensen G, et al. Promoting an open research culture. Science. 2015;348:1422–5.


Slaughter SE, Hill JN, Snelgrove-Clarke E. What is the extent and quality of documentation and reporting of fidelity to implementation strategies: a scoping review. Implement Sci. 2015;10:1–12.


Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.

Hwang S, Birken SA, Melvin CL, Rohweder CL, Smith JD. Designs and methods for implementation research: advancing the mission of the CTSA program. J Clin Transl Sci. 2020; available online.

Smith JD. An Implementation Research Logic Model: a step toward improving scientific rigor, transparency, reproducibility, and specification. Implement Sci. 2018;14:S39.


Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43:337–50.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.

Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2019.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8.

Kessler RS, Purcell EP, Glasgow RE, Klesges LM, Benkeser RM, Peek CJ. What does it mean to “employ” the RE-AIM model? Evaluation & the Health Professions. 2013;36:44–66.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, Rycroft-Malone J, Meissner P, Murray E, Patel A, et al. Standards for Reporting Implementation Studies (StaRI): explanation and elaboration document. BMJ Open. 2017;7:e013318.

Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, Walsh-Bailey C, Weiner B. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6.

Glanz K, Bishop DB. The role of behavioral science theory in development and implementation of public health interventions. Annu Rev Public Health. 2010;31:399–418.

WK Kellogg Foundation. Logic model development guide. Battle Creek, MI: WK Kellogg Foundation; 2004.

CFAR/ARC Ending the HIV Epidemic Supplement Awards [ https://www.niaid.nih.gov/research/cfar-arc-ending-hiv-epidemic-supplement-awards ].

Funnell SC, Rogers PJ. Purposeful program theory: effective use of theories of change and logic models. San Francisco, CA: John Wiley & Sons; 2011.

Petersen D, Taylor EF, Peikes D. The logic model: the foundation to implement, study, and refine patient-centered medical home models (issue brief). Princeton, NJ: Mathematica Policy Research; 2013.

Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Quality & Safety. 2015;24:228–38.

Fernandez ME, ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, Ruiter RAC, Markham CM, Kok G. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7.

Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Admin Pol Ment Health. 2009;36.

Proctor EK, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38.

Rabin BA, Brownson RC. Terminology for dissemination and implementation research. In: Brownson RC, Colditz G, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York, NY: Oxford University Press; 2017. p. 19–45.

Smith JD, Rafferty MR, Heinemann AW, Meachum MK, Villamar JA, Lieber RL, Brown CH. Evaluation of the factor structure of implementation research measures adapted for a novel context and multiple professional roles. BMC Health Serv Res. 2020.

Smith JD, Berkel C, Jordan N, Atkins DC, Narayanan SS, Gallo C, Grimm KJ, Dishion TJ, Mauricio AM, Rudo-Stern J, et al. An individually tailored family-centered intervention for pediatric obesity in primary care: study protocol of a randomized type II hybrid implementation-effectiveness trial (Raising Healthy Children study). Implement Sci. 2018;13:1–15.

Fauci AS, Redfield RR, Sigounas G, Weahkee MD, Giroir BP. Ending the HIV epidemic: a plan for the United States [editorial]. JAMA. 2019;321:844–5.

Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50.

Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.

Krause J, Van Lieshout J, Klomp R, Huntink E, Aakhus E, Flottorp S, Jaeger C, Steinhaeuser J, Godycki-Cwirko M, Kowalczyk A, et al. Identifying determinants of care for tailoring implementation in chronic diseases: an evaluation of different methods. Implement Sci. 2014;9:102.

Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14:42.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, Foy R, Duncan EM, Colquhoun H, Grimshaw JM, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:77.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10.

Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, McHugh SM, Weiner BJ. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7.

PAR-19-274: Dissemination and implementation research in health (R01 Clinical Trial Optional) [ https://grants.nih.gov/grants/guide/pa-files/PAR-19-274.html ].

Edmondson D, Falzon L, Sundquist KJ, Julian J, Meli L, Sumner JA, Kronish IM. A systematic review of the inclusion of mechanisms of action in NIH-funded intervention trials to improve medication adherence. Behav Res Ther. 2018;101:12–9.

Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. Am J Public Health. 2013;103:e38–46.

Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, Ory MG, Estabrooks PA. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

Damschroder LJ, Reardon CM, Sperber N, Robinson CH, Fickel JJ, Oddone EZ. Implementation evaluation of the Telephone Lifestyle Coaching (TLC) program: organizational factors associated with successful implementation. Transl Behav Med. 2016;7:233–41.

Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Research Policy and Systems. 2017;15:15.

Boyd MR, Powell BJ, Endicott D, Lewis CC. A method for tracking implementation strategies: an exemplar implementing measurement-based care in community behavioral health clinics. Behav Ther. 2018;49:525–37.

Brown CH, Kellam S, Kaupert S, Muthén B, Wang W, Muthén L, Chamberlain P, PoVey C, Cady R, Valente T, et al. Partnerships for the design, conduct, and analysis of effectiveness, and implementation research: experiences of the Prevention Science and Methodology Group. Adm Policy Ment Health Ment Health Serv Res. 2012;39:301–16.

McNulty M, Smith JD, Villamar J, Burnett-Zeigler I, Vermeer W, Benbow N, Gallo C, Wilensky U, Hjorth A, Mustanski B, et al. Implementation research methodologies for achieving scientific equity and health equity. Ethn Dis. 2019;29:83–92.

Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. 2007;32:S112–8.

Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthén B, Gibbons RD. Adaptive designs for randomized trials in public health. Annu Rev Public Health. 2009;30:1–25.

Smith JD. The roll-out implementation optimization design: integrating aims of quality improvement and implementation sciences. Submitted for publication; 2020.

Dziak JJ, Nahum-Shani I, Collins LM. Multilevel factorial experiments for developing behavioral interventions: power, sample size, and resource considerations. Psychol Methods. 2012;17:153–75.

MacKinnon DP, Lockwood CM, Hoffman JM, West SG, Sheets V. A comparison of methods to test mediation and other intervening variable effects. Psychol Methods. 2002;7:83–104.

Graham ID, Tetroe J. Planned action theories. In: Straus S, Tetroe J, Graham ID, editors. Knowledge translation in health care: moving from evidence to practice. Hoboken, NJ: Wiley-Blackwell; 2009.

Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implement Sci. 2019;14:1.

Rycroft-Malone J. The PARIHS framework—a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19:297–304.


Acknowledgements

The authors wish to thank our colleagues who provided input at different stages of developing this article and the Implementation Research Logic Model, and for providing the examples included in this article: Hendricks Brown, Brian Mustanski, Kathryn Macapagal, Nanette Benbow, Lisa Hirschhorn, Richard Lieber, Piper Hansen, Leslie O’Donnell, Allen Heinemann, Enola Proctor, Courtney Wolk-Benjamin, Sandra Naoom, Emily Fu, Jeffrey Rado, Lisa Rosenthal, Patrick Sullivan, Aaron Siegler, Cady Berkel, Carrie Dooyema, Lauren Fiechtner, Jeanne Lindros, Vinny Biggs, Gerri Cannon-Smith, Jeremiah Salmon, Sujata Ghosh, Alison Baker, Jillian MacDonald, Hector Torres and the Center on Halsted in Chicago, Michelle Smith, Thomas Dobbs, and the pastors who work tirelessly to serve their communities in Mississippi and Arkansas.

This study was supported by grant P30 DA027828 from the National Institute on Drug Abuse, awarded to C. Hendricks Brown; grant U18 DP006255 to Justin Smith and Cady Berkel; grant R56 HL148192 to Justin Smith; grant UL1 TR001422 from the National Center for Advancing Translational Sciences to Donald Lloyd-Jones; grant R01 MH118213 to Brian Mustanski; grant P30 AI117943 from the National Institute of Allergy and Infectious Diseases to Richard D’Aquila; grant UM1 CA233035 from the National Cancer Institute to David Cella; a grant from the Woman’s Board of Northwestern Memorial Hospital to John Csernansky; grant F32 HS025077 from the Agency for Healthcare Research and Quality; grant NIFTI 2016-20178 from the Foundation for Physical Therapy; the Shirley Ryan AbilityLab; and by the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis, through grant R25 MH080916 from the National Institute of Mental Health and the Department of Veterans Affairs, Health Services Research & Development Service, and Quality Enhancement Research Initiative (QUERI) to Enola Proctor. The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of the National Institutes of Health, the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality, the Department of Veterans Affairs, or any other part of the US Department of Health and Human Services.

Author information

Authors and Affiliations

Department of Population Health Sciences, University of Utah School of Medicine, Salt Lake City, Utah, USA

Justin D. Smith

Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences, Department of Preventive Medicine, Department of Medical Social Sciences, and Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA

Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, and Institute for Sexual and Gender Minority Health and Wellbeing, Northwestern University, Chicago, Illinois, USA

Dennis H. Li

Shirley Ryan AbilityLab and Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences and Department of Physical Medicine and Rehabilitation, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA

Miriam R. Rafferty


Contributions

JDS conceived of the Implementation Research Logic Model. JDS, MR, and DL collaborated in developing the Implementation Research Logic Model as presented and in the writing of the manuscript. All authors approved the final version.

Corresponding author

Correspondence to Justin D. Smith .

Ethics declarations

Ethics approval and consent to participate

Not applicable. This study did not involve human subjects.

Consent for publication

Not applicable.

Competing interests

None declared.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

IRLM Fillable PDF form

Additional file 2.

IRLM for Comparative Implementation

Additional file 3.

IRLM for Implementation of an Intervention Across or Linking Two Contexts

Additional file 4.

IRLM for an Implementation Optimization Study

Additional file 5.

IRLM example 1: Faith in Action: Clergy and Community Health Center Communication Strategies for Ending the Epidemic in Mississippi and Arkansas

Additional file 6.

IRLM example 2: Hybrid Type II Effectiveness–Implementation Evaluation of a City-Wide HIV System Navigation Intervention in Chicago, IL

Additional file 7.

IRLM example 3: Implementation, spread, and sustainment of Physical Therapy for Mild Parkinson’s Disease through a Regional System of Care

Additional file 8.

IRLM Quick Reference Guide

Additional file 9.

IRLM Worksheets

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Smith, J.D., Li, D.H. & Rafferty, M.R. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implementation Sci 15, 84 (2020). https://doi.org/10.1186/s13012-020-01041-8

Received: 03 April 2020

Accepted: 03 September 2020

Published: 25 September 2020

DOI: https://doi.org/10.1186/s13012-020-01041-8

Keywords

  • Program theory
  • Integration
  • Study specification

  • Open access
  • Published: 16 August 2022

Developing an implementation research logic model: using a multiple case study design to establish a worked exemplar

  • Louise Czosnek   ORCID: orcid.org/0000-0002-2362-6888 1 ,
  • Eva M. Zopf 1 , 2 ,
  • Prue Cormie 3 , 4 ,
  • Simon Rosenbaum 5 , 6 ,
  • Justin Richards 7 &
  • Nicole M. Rankin 8 , 9  

Implementation Science Communications volume 3, Article number: 90 (2022)

9195 Accesses

4 Citations

21 Altmetric

Metrics details

Implementation science frameworks explore, interpret, and evaluate different components of the implementation process. By using a program logic approach, implementation frameworks with different purposes can be combined to detail complex interactions. The Implementation Research Logic Model (IRLM) facilitates the development of causal pathways and mechanisms that enable implementation. Critical elements of the IRLM vary across different study designs, and its applicability to synthesizing findings across settings is also under-explored. The dual purpose of this study is to develop an IRLM from an implementation research study that used case study methodology and to demonstrate the utility of the IRLM to synthesize findings across case sites.

The method used in the exemplar project and the alignment of the IRLM to case study methodology are described. Cases were purposely selected using replication logic and represent organizations that have embedded exercise in routine care for people with cancer or mental illness. Four data sources were selected: semi-structured interviews with purposely selected staff, organizational document review, observations, and a survey using the Program Sustainability Assessment Tool (PSAT). Framework analysis was used, and an IRLM was produced at each case site. Similar elements within the individual IRLMs were identified, extracted, and reproduced to synthesize findings across sites and represent the generalized, cross-case findings.

The IRLM was embedded within multiple stages of the study, including data collection, analysis, and reporting transparency. Between 33 and 44 determinants and between 36 and 44 implementation strategies were identified per site and informed the individual IRLMs. An example of generalized findings describing “intervention adaptability” demonstrated similarities in determinant detail and in the mechanisms of implementation strategies across sites. However, different strategies were applied to address similar determinants. Dependent and bi-directional relationships operated along the causal pathway and influenced implementation outcomes.

Conclusions

Case study methods help address implementation research priorities, including developing causal pathways and mechanisms. Embedding the IRLM within the case study approach provided structure and added to the transparency and replicability of the study. Identifying similar elements across sites helped synthesize findings and give a general explanation of the implementation process. Detailing the methods provides an example for replication that can build generalizable knowledge in implementation research.

Peer Review reports

Contributions to the literature

Logic models can help explain how and why evidence-based interventions (EBIs) work to produce intended outcomes.

The Implementation Research Logic Model (IRLM) provides a method to understand causal pathways, including determinants, implementation strategies, mechanisms, and implementation outcomes.

We describe an exemplar project using a multiple case study design that embeds the IRLM at multiple stages. The exemplar explains how the IRLM helped synthesize findings across sites by identifying the common elements within the causal pathway.

By detailing the exemplar methods, we offer insights into how this approach of using the IRLM is generalizable and can be replicated in other studies.

The practice of implementation aims to get “someone…, somewhere… to do something differently” [ 1 ]. Typically, this involves changing individual behaviors and organizational processes to improve the use of evidence-based interventions (EBIs). To understand this change, implementation science applies different theories, models, and frameworks (hereafter “frameworks”) to describe and evaluate the factors and steps in the implementation process [ 2 , 3 , 4 , 5 ]. Implementation science provides much-needed theoretical frameworks and a structured approach to process evaluations. One or more frameworks are often used within a program of work to investigate the different stages and elements of implementation [ 6 ]. Researchers have acknowledged that the dynamic implementation process could benefit from using logic models [ 7 ]. Logic models offer a systematic approach to combining multiple frameworks and to building causal pathways that explain the mechanisms behind individual and organizational change.

Logic models visually represent how an EBI is intended to work [ 8 ]. They link the available resources with the activities undertaken, the immediate outputs of this work, and the intermediate outcomes and longer-term impacts [ 8 , 9 ]. Through this process, causal pathways are identified. For implementation research, the causal pathway provides the interconnection between a chosen EBI, determinants, implementation strategies, and implementation outcomes [ 10 ]. Testing causal mechanisms in the research translation pathway will likely dominate the next wave of implementation research [ 11 , 12 ]. Causal mechanisms (or mechanisms of change) are the “process or event through which an implementation strategy operates to affect desired implementation outcomes” [ 13 ]. Identifying mechanisms can improve implementation strategies’ selection, prioritization, and targeting [ 12 , 13 ]. This provides an efficient and evidence-informed approach to implementation.

Implementation researchers have proposed several methods to develop and examine causal pathways [ 14 , 15 ] and mechanisms [ 16 , 17 ]. This includes formalizing the inherent relationship between frameworks by developing the Implementation Research Logic Model (IRLM) [ 7 ]. The IRLM is a logic model designed to improve the rigor and reproducibility of implementation research. It specifies the relationship between elements of implementation (determinants, strategies, and outcomes) and the mechanisms of change. To do this, it recommends linking implementation frameworks or relevant taxonomies (e.g., determinant and evaluation frameworks and an implementation strategy taxonomy). The IRLM authors suggest the tool has multiple uses, including planning, executing, and reporting on the implementation process and synthesizing implementation findings across different contexts [ 7 ]. During its development, the IRLM was tested to confirm its utility in planning, executing, and reporting; however, testing of its utility in synthesizing findings across different contexts is ongoing. Users of the tool are encouraged to consider three principles: (1) comprehensiveness in reporting determinants, implementation strategies, and implementation outcomes; (2) specifying the conceptual relationships via diagrammatic tools such as colors and arrows; and (3) detailing important elements of the study design. Further, the authors also recognize that critical elements of the IRLM will vary across different study designs.

This study describes the development of an IRLM from a multiple case study design. Case study methodology can answer “how and why” questions about implementation, enabling researchers to develop a rich, in-depth understanding of a contemporary phenomenon within its natural context [ 18 , 19 , 20 , 21 ]. These methods can create coherence in the dynamic context in which EBIs exist [ 22 , 23 ]. Case studies are common in implementation research [ 24 , 25 , 26 , 27 , 28 , 29 , 30 ], with multiple case study designs suitable for undertaking comparisons across contexts [ 31 , 32 ]. However, they are infrequently applied to establish mechanisms [ 11 ] or to combine implementation elements and synthesize findings across contexts (as is possible through the IRLM). Hollick and colleagues [ 33 ] undertook a comparative case study, guided by a determinant framework, to explore how context influences successful implementation. The authors contrasted determinants across sites where implementation succeeded versus sites where it failed; the study did not extend to identifying implementation strategies or mechanisms. By contrast, van Zelm et al. [ 31 ] undertook a theory-driven evaluation of successful implementation across ten hospitals. They used joint displays to present mechanisms of change aligned with evaluation outcomes; however, they did not identify the implementation strategies within the causal pathway. Our study seeks to build on these works and explore the utility of the IRLM in synthesizing findings across sites. The dual objectives of this paper were to:

Describe how case study methods can be applied to develop an IRLM

Demonstrate the utility of the IRLM in synthesizing implementation findings across case sites.

In this section, we describe the methods used in the exemplar case study and the alignment of the IRLM to this approach. The exemplar study investigated the integration of exercise EBIs within routine mental illness or cancer care in the context of the Australian healthcare system. The therapeutic benefits of exercise for non-communicable diseases such as cancer and mental illness are extensively documented [ 34 , 35 , 36 ], but exercise is inconsistently implemented as part of routine care [ 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 ].

Additional file 1 provides the completed Standards for Reporting Qualitative Research (SRQR) checklist.

Case study approach

We adopted an approach to case studies based on the methods described by Yin [ 18 ]. This approach is said to have post-positivist philosophical leanings, typically associated with the quantitative paradigm [ 19 , 45 , 46 ], as evidenced by its structured, deductive methods and its constant lens on objectivity, validity, and generalization [ 46 ]. Yin’s approach to case studies aligns with the IRLM for several reasons. First, the IRLM is designed to use established implementation frameworks. The two frameworks and one taxonomy applied in our exemplar were the Consolidated Framework for Implementation Research (CFIR) [ 47 ], the Expert Recommendations for Implementing Change (ERIC) taxonomy [ 48 ], and Proctor et al.’s implementation outcomes framework [ 49 ]. These frameworks guided multiple aspects of our study (see Table 1). Commencing an implementation study with a preconceived plan based upon established frameworks is deductive [ 22 ]. Second, the IRLM has its foundation in logic modeling to develop cause-and-effect relationships [ 8 ]. Yin advocates using logic models to analyze case study findings [ 18 ], arguing that developing logic models encourages researchers to iterate and consider plausible counterfactual explanations before upholding the causal pathway. Further, Yin notes that case studies are particularly valuable for explaining the transitions and context within the cause-and-effect relationship [ 18 ]; in our exemplar, the transition was the mechanism between the implementation strategy and the implementation outcome. Finally, the proposed function of the IRLM to synthesize findings across sites aligns with the exemplar study’s multiple case approach, as multiple case studies aim to develop generalizable knowledge [ 18 , 50 ].

Case study selection and boundaries

A unique feature of Yin’s approach to multiple case studies is using replication logic to select cases [ 18 ]. Cases are chosen to demonstrate similarities (literal replication) or differences for anticipated reasons (theoretical replication) [ 18 ]. In the exemplar study, the cases were purposely selected using literal replication and displayed several common characteristics. First, all cases had delivered exercise EBIs within normal operations for at least 12 months. Second, each case site delivered exercise EBIs as part of routine care for a non-communicable disease (cancer or mental illness diagnosis). Finally, each site delivered the exercise EBI within the existing governance structures of the Australian healthcare system. That is, the organizations used established funding and service delivery models of the Australian healthcare system.

Using replication logic, we posited that sites would exhibit some similarities in the implementation process across contexts (literal replication). However, based on existing implementation literature [ 32 , 51 , 52 , 53 ], we expected sites to adapt the EBIs through the implementation process; the determinant analysis, informed by the CFIR, should explain these adaptations (theoretical replication). Finally, in case study methods, clearly defining the boundaries of each case and the unit of analysis (such as the individual, the organization, or the intervention) helps focus the research. We considered each healthcare organization as a separate case. Within each case, an organizational-level unit of analysis [ 18 , 54 ] and the operationalized implementation outcomes focused the inquiry (Table 1).

Data collection

During study conceptualization for the exemplar, we mapped the data sources to the different elements of the IRLM (Fig. 1). Four primary data sources were used: (1) semi-structured interviews with staff; (2) document review (such as meeting minutes, strategic plans, and consultant reports); (3) naturalistic observations; and (4) a validated survey (the Program Sustainability Assessment Tool (PSAT)). A case study database was developed using Microsoft Excel to manage and organize data collection [ 18 , 54 ].

Fig. 1 Conceptual frame for the study

Semi-structured interviews

An interview guide was developed, informed by the CFIR interview guide tool [ 55 ]. Questions were selected across the five domains of the CFIR, which aligned with the delineation of determinant domains in the IRLM. Purposeful selection was used to identify staff for the interviews [ 56 ]. Adequate sample size in qualitative studies, particularly regarding the number of interviews, is often determined by when data saturation is reached [ 57 , 58 ]. Unfortunately, there is little consensus on the definition of saturation [ 59 ], how to determine when it has occurred [ 57 ], or whether it can be pre-determined in qualitative studies [ 60 ]. The number of participants in this study was instead determined based on the staff’s differential experience with the exercise EBI and their role in the organization. This approach sought to obtain a rounded view of how the EBI operated at each site [ 23 , 61 ]. Focusing on staff experiences also aligned with the organizational lens that bounded the study. Typical roles identified for the semi-structured interviews included the health professional delivering the EBI, the program manager responsible for the EBI, an organizational executive, referral sources, and other health professionals (e.g., nurses, allied health). Between five and ten interviews were conducted at each site. Interviews ranged from 16 to 72 min, with most lasting around 40 min.

Document review

A checklist informed by the case study literature was developed, outlining the typical documents the research team was seeking [ 18 ]. Documents sought for review included job descriptions, strategic plans/planning documents, operating procedures and organizational policies, communications (e.g., website, media releases, email, meeting minutes), annual reports, administrative databases/files, evaluation reports, third-party consultant reports, and routinely collected numerical data that measured implementation outcomes [ 27 ]. As each document was identified, it was numbered, dated, and recorded in the case study database with a short description of its content related to the research aims and the corresponding IRLM construct. Between 24 and 33 documents were accessed at each site, for a total of 116 documents across the case sites.
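
To make the record-keeping concrete, the sketch below shows one way such a document log could be represented in code. It is a minimal illustration only: the study used a Microsoft Excel database, and the `DocumentRecord` fields, the `register_document` helper, and the sample entry are hypothetical stand-ins for the numbering, dating, and description workflow described above.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class DocumentRecord:
    """One row of the case study document log (hypothetical schema)."""
    doc_id: int          # sequential number assigned when the document is identified
    doc_date: date       # date recorded for the document
    doc_type: str        # e.g., "meeting minutes", "strategic plan", "annual report"
    summary: str         # short description relevant to the research aims
    irlm_construct: str  # corresponding IRLM construct the content speaks to

# A minimal in-memory stand-in for the Excel case study database.
case_study_db: List[DocumentRecord] = []

def register_document(doc_type: str, doc_date: date, summary: str,
                      irlm_construct: str) -> DocumentRecord:
    """Number, date, and record a document, mirroring the workflow in the text."""
    record = DocumentRecord(
        doc_id=len(case_study_db) + 1,
        doc_date=doc_date,
        doc_type=doc_type,
        summary=summary,
        irlm_construct=irlm_construct,
    )
    case_study_db.append(record)
    return record

register_document(
    doc_type="meeting minutes",
    doc_date=date(2019, 6, 3),
    summary="Steering committee discussion of exercise referral workflow",
    irlm_construct="determinant: inner setting (networks and communications)",
)
```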

Naturalistic observations

The onsite observations occurred over 1 week, during which typical organizational operations were observed. The research team interacted with staff, asked questions, and sought clarification of what was being observed; however, they did not disrupt the usual work routines. Observations allowed us to understand how the exercise EBI operated and to contrast that with documented processes and procedures. They also provided the opportunity to observe non-verbal cues and interactions between staff. While onsite, case notes were recorded directly into the case study database [ 62 , 63 ]. Between 15 and 40 h were spent on observations per site, for a total of 95 h of direct observation across sites.

Program Sustainability Assessment Tool (survey)

The PSAT is a planning and evaluation tool that assesses the sustainability of an intervention across eight domains [ 64 , 65 , 66 ]: (1) environmental support, (2) funding stability, (3) partnerships, (4) organizational capacity, (5) program evaluation, (6) program adaptation, (7) communication, and (8) strategic planning [ 64 , 65 ]. The PSAT was administered to a subset of at least three participants per site who had completed the semi-structured interview, and the results were pooled to provide an organization-wide view of EBI sustainability. Administering the PSAT to three participants per case site is consistent with previous studies that have used the tool [ 67 , 68 ] and with recommendations for appropriate use [ 65 , 69 ].
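
As an illustration of the pooling step, the sketch below averages hypothetical domain scores across three respondents to produce an organization-wide view. The domain list follows the eight PSAT domains named above; the scores, respondent labels, and the assumption of a 1–7 rating scale are illustrative rather than taken from the study.

```python
from statistics import mean

# The eight PSAT domains named in the text.
PSAT_DOMAINS = [
    "environmental support", "funding stability", "partnerships",
    "organizational capacity", "program evaluation", "program adaptation",
    "communication", "strategic planning",
]

# Hypothetical domain scores for three respondents at one site
# (PSAT items are commonly rated on a 1-7 scale; higher = greater capacity).
responses = {
    "respondent_1": [5.4, 3.2, 4.8, 5.9, 4.1, 5.6, 4.9, 3.7],
    "respondent_2": [5.9, 2.8, 5.1, 6.2, 3.8, 5.2, 5.3, 4.0],
    "respondent_3": [5.1, 3.5, 4.6, 5.7, 4.4, 5.8, 4.7, 3.5],
}

# Pool across respondents: the per-domain mean gives the organization-wide view.
pooled = {
    domain: round(mean(scores[i] for scores in responses.values()), 2)
    for i, domain in enumerate(PSAT_DOMAINS)
}

for domain, score in pooled.items():
    print(f"{domain}: {score}")
# environmental support: 5.47, funding stability: 3.17, ...
```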

We included a validated measure of sustainability, recognizing calls to improve understanding of this aspect of implementation [ 70 , 71 , 72 ]. Noting the limited number of measurement tools for evaluating sustainability [ 73 ], the PSAT’s characteristics displayed the best alignment with the study aims. To determine “best alignment,” we deferred to a study by Lennox and colleagues that helps researchers select suitable measurement tools based on how sustainability is conceptualized in the study [ 71 ]. The PSAT provides a multi-level view of sustainability. It is a measurement tool that can be triangulated with other implementation frameworks, such as the CFIR [ 74 ], to better interrogate and understand the later stages of implementation. Further, the tool provides a contemporary account of an EBI’s capacity for sustainability [ 75 ]. This is consistent with case study methods, which explore complex, contemporary, real-life phenomena.

The voluminous data collection possible through case studies, often viewed as a challenge of the method [ 19 ], was advantageous for developing the IRLM in the exemplar and identifying the causal pathways. First, it aided three types of triangulation through the study: method, theory, and data source triangulation [ 76 ]. Method triangulation involved collecting evidence via four methods: interviews, observations, document review, and survey. Theoretical triangulation involved applying two frameworks and one taxonomy to understand and interpret the findings. Data source triangulation involved selecting participants with different roles within the organization to gain multiple perspectives on the phenomena being studied. Second, data collection facilitated depth and nuance in detailing determinants and implementation strategies. For the determinant analysis, this illuminated the subtleties within context and improved confidence and accuracy in prioritizing determinants. As case studies are essentially “naturalistic” studies, they provide insight into strategies that are implementable in pragmatic settings. Finally, the design’s flexibility enabled the integration of a survey and routinely collected numerical data as evaluation measures for implementation outcomes. This allowed us to contrast “numbers” against participants’ subjective experience of implementation [ 77 ].

Data analysis

Descriptive statistics were calculated for the PSAT and combined with the three other data sources; framework analysis [ 78 , 79 ] was then used to analyze the data. Framework analysis includes five main phases: familiarization, identifying a thematic framework, indexing, charting, and mapping and interpretation [ 78 ]. Familiarization occurred concurrently with data collection, and the thematic frame was aligned to the two frameworks and one taxonomy we applied to the IRLM. To index and chart the data, the raw data were uploaded into NVivo 12 [ 80 ]. Codes aligned with the thematic frame were established to guide indexing: determinants within the CFIR [ 47 ], implementation strategies listed in ERIC [ 48 ], and the implementation outcomes [ 49 ] of acceptability, fidelity, penetration, and sustainability. This process produced a framework matrix that summarized the information housed under each code at each case site.
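
The sketch below illustrates how such a thematic frame can function as a coding dictionary during indexing. The code labels are a small illustrative subset of CFIR constructs, ERIC strategies, and Proctor et al.’s outcomes; the `index_excerpt` helper and the sample excerpt are hypothetical approximations of what the NVivo indexing produced, not the study’s actual codebook.

```python
from collections import defaultdict

# Illustrative subset of the thematic frame: codes drawn from CFIR determinants,
# the ERIC strategy taxonomy, and Proctor et al.'s implementation outcomes.
CODEBOOK = {
    "intervention characteristics": "CFIR determinant",
    "inner setting": "CFIR determinant",
    "outer setting": "CFIR determinant",
    "promote adaptability": "ERIC implementation strategy",
    "audit and provide feedback": "ERIC implementation strategy",
    "acceptability": "implementation outcome",
    "fidelity": "implementation outcome",
    "penetration": "implementation outcome",
    "sustainability": "implementation outcome",
}

# Framework matrix: (site, code) -> list of indexed excerpts with their sources.
framework_matrix = defaultdict(list)

def index_excerpt(site: str, code: str, source_id: str, summary: str) -> None:
    """Index a charted excerpt against the thematic frame, as done in NVivo."""
    if code not in CODEBOOK:
        raise ValueError(f"'{code}' is not part of the thematic frame")
    framework_matrix[(site, code)].append({"source": source_id, "summary": summary})

index_excerpt(
    site="case_site_A",
    code="promote adaptability",
    source_id="interview_03",
    summary="Clinicians tailored exercise prescriptions to consumer preferences",
)
```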

The final phase of framework analysis involves mapping and interpreting the data, and we used the IRLM for this purpose. First, we identified the core elements of the implemented exercise EBI. Next, we applied the CFIR valence and strength coding to prioritize the contextual determinants. Then, we identified the implementation strategies used to address the contextual determinants. Finally, we provided a rationale (a causal mechanism) for how these strategies worked to address barriers and contribute to specific implementation outcomes. The systematic approach advocated by the IRLM provided a transparent representation of the causal pathway underpinning the implementation of the exercise EBIs. This process was followed at each case site to produce an IRLM for each organization. To compare, contrast, and synthesize findings across sites, we identified the similarities and differences in the individual IRLMs and then developed an IRLM that explained a generalized process for implementation. In developing the causal pathway and mechanisms, we drew on existing literature seeking to establish these relationships [ 81 , 82 , 83 ]. Aligned with case study methods, this facilitated an iterative process of constant comparison and of challenging the proposed causal relationships. Smith and colleagues advise that the IRLM “might be viewed as a somewhat simplified format,” and users are encouraged to “iterate on the design of the IRLM to increase its utility” [ 7 ]. Thus, we re-designed the IRLM within a traditional logic model structure to help make sense of the data collected through the case studies. Figure 1 depicts the conceptual frame for the study and provides a graphical representation of how the IRLM pathway was produced.
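
To show the shape of the resulting causal pathway, the sketch below represents one IRLM row per determinant → strategy → mechanism → outcome chain. The two example rows echo the adaptability and resourcing findings reported in the results, but the exact wording, the `IRLMPathway` structure, and the site label are illustrative assumptions, not the study’s actual IRLM content.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class IRLMPathway:
    """One determinant -> strategy -> mechanism -> outcome row of a site's IRLM."""
    determinant: str  # prioritized CFIR determinant, tagged (B)arrier or (E)nabler
    strategy: str     # ERIC implementation strategy chosen by the organization
    mechanism: str    # rationale for how the strategy addressed the determinant
    outcome: str      # implementation outcome (Proctor et al.) it contributed to

site_a_irlm: List[IRLMPathway] = [
    IRLMPathway(
        determinant="(E) intervention adaptability",
        strategy="promote adaptability",
        mechanism="tailoring increased perceived fit with consumer needs",
        outcome="acceptability",
    ),
    IRLMPathway(
        determinant="(B) available resources",
        strategy="access new funding",
        mechanism="dedicated funding secured the staffing needed to deliver the EBI",
        outcome="sustainability",
    ),
]
```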

The results are presented with reference to the three principles of the IRLM: comprehensiveness, indicating the key conceptual relationships, and specifying critical study design. The case study method allowed for comprehensiveness through the data collection and analysis described above. The mean number of data sources informing the analysis and development of the causal pathway at each case site was 63.75 (interviews, M = 7; observational hours, M = 23.75; PSAT, M = 4; documents reviewed, M = 29). This resulted in more than 30 determinants and a similar number of implementation strategies identified at each site (determinant range per site = 33–44; implementation strategy range per site = 36–44). Developing a framework matrix meant that each determinant (prioritized and other), implementation strategy, and implementation outcome was captured. The matrix provided a direct link to the data sources that informed the content within each construct, and an example from each construct was collated alongside the summary to evidence the findings.

The key conceptual relationship was articulated in a traditional linear process by aligning determinant → implementation strategy → mechanism → implementation outcome, as per the IRLM. To synthesize findings across sites, we compared and contrasted the results within each of the individual IRLMs and extracted similar elements to develop a generalized IRLM that represents the cross-case findings. By redeveloping the IRLM within a traditional logic model structure, we added visual representations of the bi-directional and dependent relationships, illuminating the dynamism within the implementation process. To illustrate, intervention adaptability was a prioritized determinant and enabler across sites. Healthcare providers recognized that adapting and tailoring exercise EBIs increased “fit” with consumer needs. This also extended to adapting how healthcare providers referred consumers to exercise so that referral was easy in the context of their other work priorities. Successful adaptation was contingent upon a qualified workforce with the required skills and competencies to enact change. Different implementation strategies were used to make adaptations across sites, such as promoting adaptability and using data experts. However, despite the different strategies, successful adaptation created positive bi-directional relationships. That is, healthcare providers’ confidence and trust in the EBI grew as consumer engagement increased and clinical improvements were observed. This triggered greater engagement with the EBI (e.g., acceptability → penetration → sustainability), although the degree of engagement differed across sites. Figure 2 illustrates this relationship within the IRLM and provides a contrasting relationship by highlighting how a prioritized barrier across sites (available resources) was addressed.
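
The extraction of similar elements can be pictured as an intersection over the per-site IRLMs, as in the sketch below. The site labels and determinant sets are hypothetical; only the two shared elements (intervention adaptability as an enabler, available resources as a barrier) mirror the findings described above.

```python
from typing import Dict, Set

# Hypothetical prioritized determinants extracted from each site's IRLM.
site_determinants: Dict[str, Set[str]] = {
    "site_A": {"(E) intervention adaptability", "(B) available resources",
               "(E) leadership engagement"},
    "site_B": {"(E) intervention adaptability", "(B) available resources",
               "(E) external networks"},
    "site_C": {"(E) intervention adaptability", "(B) available resources",
               "(B) staff turnover"},
}

# Elements present at every site become candidates for the generalized IRLM.
common_elements = set.intersection(*site_determinants.values())
print(sorted(common_elements))
# ['(B) available resources', '(E) intervention adaptability']
```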

Fig. 2 Example of intervention adaptability (E) contrasted with available resources (B) within a synthesized IRLM across case sites

The final principle is to specify critical study design, wherein we have described how case study methodology was used to develop the IRLM exemplar. Our intention was to produce an explanatory causal pathway for the implementation process. The implementation outcomes of acceptability and fidelity were measured at the level of the provider, and penetration and sustainability were measured at the organizational level [ 49 ]. Service-level and clinical-level outcomes were not identified for a priori measurement throughout the study, although we did identify, via the document review, evidence of clinical outcomes that supported our overall findings: historical evaluations of the service indicated that patients increased their exercise levels or demonstrated changes in symptomatology/function. The implementation strategies specified in the study were those chosen by the organizations; we did not attempt to augment routine practice or change implementation outcomes by introducing new strategies. Barriers across sites were represented with a (B) symbol and enablers with an (E) symbol in the IRLM. In the individual IRLMs, consistent determinants and strategies were highlighted (via bolding) to support extraction. Finally, within the generalized IRLM, the implementation strategies are grouped according to the ERIC taxonomy category. This accounts for the different strategies applied to achieve similar outcomes across case studies.

This study provides a comprehensive overview of using case study methodology to develop an IRLM in an implementation research project. Using an exemplar that examined implementation in different healthcare settings, we illustrated how the IRLM, which documents the causal pathways and mechanisms, was developed and enabled the synthesis of findings across sites.

Case study methodologies are fraught with inconsistencies in terminology and approach. We adopted the method described by Yin. Its guiding paradigm, rooted in objectivity, means it can be viewed as less flexible than other approaches [ 46 , 84 ]; however, we found the approach offered sufficient flexibility within the frame of a defined process. We argue that the defined process adds to the rigor and reproducibility of the study, which is consistent with the principles of implementation science. That is, accessing multiple sources of evidence, applying replication logic to select cases, maintaining a case study database, and developing logic models to establish causal pathways together demonstrate the reliability and validity of the study. The method was flexible enough to embed the IRLM within multiple phases of the study design, including conceptualization, philosophical alignment, and analysis. Paparini and colleagues [ 85 ] are developing guidance that recognizes the challenges and unmet value of case study methods for implementation research. This work, supported by the UK Medical Research Council, aims to enhance the conceptualization, application, analysis, and reporting of case studies and should encourage researchers to use case study methods in implementation research with increased confidence.

The IRLM produced a relatively linear depiction of the relationship between context, strategies, and outcomes in our exemplar. However, as noted by the authors of the IRLM, the implementation process is rarely linear, and if the tool is applied too rigidly, it may inadvertently depict an overly simplistic view of a complex process. To address this, we redeveloped the IRLM within a traditional logic model structure, adding visual representations of the dependent and bidirectional relationships evident within the general IRLM pathway [ 86 ]. Further, developing a general IRLM of cross-case findings that synthesized results involved a more inductive approach to identifying and extracting similar elements. It required the research team to consider broader patterns in the data before offering a prospective account of the implementation process, in contrast to the earlier analysis phases that directly mapped determinants and strategies to the CFIR and the ERIC taxonomy. We argue that extracting similar elements is analogous to approaches that have variously been described as portable elements [ 87 ], common elements [ 88 ], or generalization by mechanism [ 89 ]. While defined and approached slightly differently, these approaches aim to identify elements frequently shared across effective EBIs that can thus form the basis of future EBIs to increase their utility, efficiency, and effectiveness [ 88 ]. We identified similarities in determinant detail and in the mechanisms of different implementation strategies across sites. This finding supports the view that many implementation strategies could be suitable and that selecting the “right mix” is challenging [ 16 ]. Identifying common mechanisms, such as increased motivation, skill acquisition, or optimized workflow, enabled elucidation of the important functions of strategies and can help inform the selection of appropriate strategies in future implementation efforts.

Finally, by developing individual IRLMs and then re-producing a general IRLM, we synthesized findings across sites and offered generalized findings. The ability to generalize from case studies is debated [ 89 , 90 ], with some considering the concept a fallacy [ 91 ]: the purpose of qualitative research is to develop richness through data situated within a unique context, and trying to extrapolate from findings is at odds with exploring that unique context. We suggest the method described herein and the application of the IRLM are best suited to a form of generalization called “transferability” [ 91 , 92 ], whereby findings from one study can be transferred to another setting or population group. In this approach, the new site takes the information supplied and determines those aspects that would fit with its unique environment. We argue that elucidating the implementation process across multiple sites improves the confidence with which certain “elements” could be applied to future implementation efforts. Our approach may also be helpful for multi-site implementation studies that use methods other than case studies. Developing a general IRLM during study conceptualization could identify consistencies in baseline implementation status across sites. Multi-site implementation projects may seek to introduce and empirically test implementation strategies, such as via a cluster randomized controlled trial [ 93 ]. Within this study design, baseline comparison between control and intervention sites might extend to a comparison of organizational type, location and size, and individual characteristics, but not the chosen implementation strategies [ 94 ]. Applying the approach described within our study could enhance our understanding of how to support effective implementation.

Limitations

After the research team conceived this study, the authors of the PSAT validated another tool for use in clinical settings (the Clinical Sustainability Assessment Tool (CSAT)) [ 95 ]. This tool appears to align better with our study design due to its explicit focus on maintaining structured clinical care practices. The use of multiple data sources and the consistency of some elements across the PSAT and CSAT should minimize the limitations of using the PSAT survey tool. At most case sites, a limited number of staff were involved in developing and implementing the exercise EBI, and participants who self-selected for interviews may have been more invested in ensuring positive outcomes for the exercise EBI. Inviting participants from various roles was intended to reduce this selection bias. Finally, we recognize recent correspondence suggesting the IRLM misses a critical step in the causal pathway: the mechanism between a determinant and the selection of an appropriate implementation strategy [ 96 ]. Similarly, Lewis and colleagues note that additional elements, including pre-conditions, moderators, and mediators (distal and proximal), exist within the causal pathway [ 13 ]. Through the iterative process of developing the IRLM, decisions were made about the determinant → implementation strategy relationship; however, this is not captured in the IRLM. Secondary analysis of the case study data would allow elucidation of these relationships, as this information can be extracted from the case study database, but this was outside the scope of the exemplar study.

Developing an IRLM via case study methods proved useful in identifying causal pathways and mechanisms. The IRLM can complement and enhance the study design by providing a consistent and structured approach. In detailing our approach, we offer an example of how multiple case study designs that embed the IRLM can aid the synthesis of findings across sites, and a method that can be replicated in future studies. Such transparency adds to the quality, reliability, and validity of implementation research.

Availability of data and materials

The data that support the findings of this study are available on request from the corresponding author [LC]. The data are not publicly available due to them containing information that could compromise research participant privacy.

Presseau J, McCleary N, Lorencatto F, Patey AM, Grimshaw JM, Francis JJ. Action, actor, context, target, time (AACTT): a framework for specifying behaviour. Implement Sci. 2019;14(1):102.

Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2020;283:112461.

Bauer M, Damschroder L, Hagedorn H, Smith J, Kilbourne A. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):32.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.

Lynch EA, Mudge A, Knowles S, Kitson AL, Hunter SC, Harvey G. “There is nothing so practical as a good theory”: a pragmatic guide for selecting theoretical approaches for implementation projects. BMC Health Serv Res. 2018;18(1):857.

Birken SA, Powell BJ, Presseau J, Kirk MA, Lorencatto F, Gould NJ, et al. Combined use of the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF): a systematic review. Implement Sci. 2017;12(1):2.

Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci. 2020;15(1):84.

W.K. Kellogg Foundation. Logic model development guide. Michigan, USA; 2004.

McLaughlin JA, Jordan GB. Logic models: a tool for telling your programs performance story. Eval Prog Plann. 1999;22(1):65–72.

Anselmi L, Binyaruka P, Borghi J. Understanding causal pathways within health systems policy evaluation through mediation analysis: an application to payment for performance (P4P) in Tanzania. Implement Sci. 2017;12(1):10.

Lewis C, Boyd M, Walsh-Bailey C, Lyon A, Beidas R, Mittman B, et al. A systematic review of empirical studies examining mechanisms of implementation in health. Implement Sci. 2020;15(1):21.

Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7(3).

Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, Walsh-Bailey C, Weiner B. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6(136).

Bartholomew L, Parcel G, Kok G. Intervention mapping: a process for developing theory and evidence-based health education programs. Health Educ Behav. 1998;25(5):545–63.

Weiner BJ, Lewis MA, Clauser SB, Stitzenberg KB. In search of synergy: strategies for combining interventions at multiple levels. JNCI Monographs. 2012;2012(44):34–41.

Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen J, Proctor EK, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94.

Fernandez ME, ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, Ruiter R, Markham C, Kok G. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7(158).

Yin R. Case study research and applications: design and methods. 6th ed. United States of America: Sage Publications; 2018.

Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Med Res Methodol. 2011;11:100.

Stake R. The art of case study research. United States of America: Sage Publications; 2005.

Thomas G. How to do your case study. 2nd ed. London: Sage Publications; 2016.

Ramanadhan S, Revette AC, Lee RM and Aveling E. Pragmatic approaches to analyzing qualitative data for implementation science: an introduction. Implement Sci Commun. 2021;2(70).

National Cancer Institute. Qualitative methods in implementation science United States of America: National Institutes of Health Services; 2018.

Mathers J, Taylor R, Parry J. The challenge of implementing peer-led interventions in a professionalized health service: a case study of the national health trainers service in England. Milbank Q. 2014;92(4):725–53.

Powell BJ, Proctor EK, Glisson CA, Kohl PL, Raghavan R, Brownson RC, et al. A mixed methods multiple case study of implementation as usual in children’s social service organizations: study protocol. Implement Sci. 2013;8(1):92.

van de Glind IM, Heinen MM, Evers AW, Wensing M, van Achterberg T. Factors influencing the implementation of a lifestyle counseling program in patients with venous leg ulcers: a multiple case study. Implement Sci. 2012;7(1):104.

Greenhalgh T, Macfarlane F, Barton-Sweeney C, Woodard F. “If we build it, will it stay?” A case study of the sustainability of whole-system change in London. Milbank Q. 2012;90(3):516–47.

Urquhart R, Kendell C, Geldenhuys L, Ross A, Rajaraman M, Folkes A, et al. The role of scientific evidence in decisions to adopt complex innovations in cancer care settings: a multiple case study in Nova Scotia, Canada. Implement Sci. 2019;14(1):14.

Herinckx H, Kerlinger A, Cellarius K. Statewide implementation of high-fidelity recovery-oriented ACT: A case study. Implement Res Pract. 2021;2:2633489521994938.

Young AM, Hickman I, Campbell K, Wilkinson SA. Implementation science for dietitians: The ‘what, why and how’ using multiple case studies. Nutr Diet. 2021;78(3):276–85.

van Zelm R, Coeckelberghs E, Sermeus W, Wolthuis A, Bruyneel L, Panella M, et al. A mixed methods multiple case study to evaluate the implementation of a care pathway for colorectal cancer surgery using extended normalization process theory. BMC Health Serv Res. 2021;21(1):11.

Albers B, Shlonsky A, Mildon R. Implementation Science 3.0. Switzerland: Springer; 2020.

Hollick RJ, Black AJ, Reid DM, McKee L. Shaping innovation and coordination of healthcare delivery across boundaries and borders. J Health Organ Manag. 2019;33(7/8):849–68.

Pedersen B, Saltin B. Exercise as medicine – evidence for prescribing exercise as therapy in 26 different chronic diseases. Scand J Med Sci Sports. 2015;25:1–72.

Firth J, Siddiqi N, Koyanagi A, Siskind D, Rosenbaum S, Galletly C, et al. The Lancet Psychiatry Commission: a blueprint for protecting physical health in people with mental illness. Lancet Psychiatry. 2019;6(8):675–712.

Campbell K, Winters-Stone K, Wisekemann J, May A, Schwartz A, Courneya K, et al. Exercise guidelines for cancer survivors: consensus statement from international multidisciplinary roundtable. Med Sci Sports Exerc. 2019;51(11):2375–90.

Deenik J, Czosnek L, Teasdale SB, Stubbs B, Firth J, Schuch FB, et al. From impact factors to real impact: translating evidence on lifestyle interventions into routine mental health care. Transl Behav Med. 2020;10(4):1070–3.

Suetani S, Rosenbaum S, Scott JG, Curtis J, Ward PB. Bridging the gap: What have we done and what more can we do to reduce the burden of avoidable death in people with psychotic illness? Epidemiol Psychiatric Sci. 2016;25(3):205–10.

Stanton R, Rosenbaum S, Kalucy M, Reaburn P, Happell B. A call to action: exercise as treatment for patients with mental illness. Aust J Primary Health. 2015;21(2):120–5.

Rosenbaum S, Hobson-Powell A, Davison K, Stanton R, Craft LL, Duncan M, et al. The role of sport, exercise, and physical activity in closing the life expectancy gap for people with mental illness: an international consensus statement by Exercise and Sports Science Australia, American College of Sports Medicine, British Association of Sport and Exercise Science, and Sport and Exercise Science New Zealand. Transl J Am Coll Sports Med. 2018;3(10):72–3.

Chambers D, Vinson C, Norton W. Advancing the science of implementation across the cancer continuum. United States of America: Oxford University Press Inc; 2018.

Schmitz K, Campbell A, Stuiver M, Pinto B, Schwartz A, Morris G, et al. Exercise is medicine in oncology: engaging clinicians to help patients move through cancer. CA Cancer J Clin. 2019;69(6):468–84.

Santa Mina D, Alibhai S, Matthew A, Guglietti C, Steele J, Trachtenberg J, et al. Exercise in clinical cancer care: a call to action and program development description. Curr Oncol. 2012;19(3):9.

Czosnek L, Rankin N, Zopf E, Richards J, Rosenbaum S, Cormie P. Implementing exercise in healthcare settings: the potential of implementation science. Sports Med. 2020;50(1):1–14.

Harrison H, Birks M, Franklin R, Mills J. Case study research: foundations and methodological orientations. Forum Qual Soc Res. 2017;18(1).

Yazan B. Three approaches to case study methods in education: Yin, Merriam, and Stake. Qual Rep. 2015;20(2):134–52.

Damschroder L, Aaron D, Keith R, Kirsh S, Alexander J, Lowery J. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.

Heale R, Twycross A. What is a case study? Evid Based Nurs. 2018;21(1):7–8.

Brownson R, Colditz G, Proctor E. Dissemination and implementation research in health: translating science to practice. Second ed. New York: Oxford University Press; 2017.

Quiñones MM, Lombard-Newell J, Sharp D, Way V, Cross W. Case study of an adaptation and implementation of a Diabetes Prevention Program for individuals with serious mental illness. Transl Behav Med. 2018;8(2):195–203.

Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.

Baxter P, Jack S. Qualitative case study methodology: study design and implementation for novice researchers. Qual Rep. 2008;13(4):544–59.

Consolidated Framework for Implementation Research. 2018. Available from: http://www.cfirguide.org/index.html. Cited 14 February 2018.

Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Admin Pol Ment Health. 2015;42(5):533–44.

Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.

Teddlie C, Yu F. Mixed methods sampling: a typology with examples. J Mixed Methods Res. 2007;1(1):77–100.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52(4):1893–907.

Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual Res Sport Exerc Health. 2021;13(2):201–16.

Burau V, Carstensen K, Fredens M, Kousgaard MB. Exploring drivers and challenges in implementation of health promotion in community mental health services: a qualitative multi-site case study using Normalization Process Theory. BMC Health Serv Res. 2018;18(1):36.

Phillippi J, Lauderdale J. A guide to field notes for qualitative research: context and conversation. Qual Health Res. 2018;28(3):381–8.

Mulhall A. In the field: notes on observation in qualitative research. J Adv Nurs. 2003;41(3):306–13.

Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, et al. Public health program capacity for sustainability: a new framework. Implement Sci. 2013;8(1):15.

Washington University. The Program Sustainability Assessment Tool. St Louis: Washington University; 2018. Available from: https://sustaintool.org/. Cited 14 February 2018.

Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:E12.

Stoll S, Janevic M, Lara M, Ramos-Valencia G, Stephens TB, Persky V, et al. A mixed-method application of the Program Sustainability Assessment Tool to evaluate the sustainability of 4 pediatric asthma care coordination programs. Prev Chronic Dis. 2015;12:E214.

Kelly C, Scharff D, LaRose J, Dougherty NL, Hessel AS, Brownson RC. A tool for rating chronic disease prevention and public health interventions. Prev Chronic Dis. 2013;10:E206.

Calhoun A, Mainor A, Moreland-Russell S, Maier RC, Brossart L, Luke DA. Using the Program Sustainability Assessment Tool to assess and plan for sustainability. Prev Chronic Dis. 2014;11:E11.

Proctor E, Luke D, Calhoun A, McMillen C, Brownson R, McCrary S, et al. Sustainability of evidence-based healthcare: research agenda, methodological advances, and infrastructure support. Implement Sci. 2015;10(1):88.

Lennox L, Maher L, Reed J. Navigating the sustainability landscape: a systematic review of sustainability approaches in healthcare. Implement Sci. 2018;13(1):27.

Moore JE, Mascarenhas A, Bain J, Straus SE. Developing a comprehensive definition of sustainability. Implement Sci. 2017;12(1):110.

Lewis CC, Fischer S, Weiner BJ, Stanick C, Kim M, Martinez RG. Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria. Implement Sci. 2015;10(1):155.

Shelton RC, Chambers DA, Glasgow RE. An extension of RE-AIM to enhance sustainability: addressing dynamic context and promoting health equity over time. Front Public Health. 2020;8(134).

Moullin JC, Sklar M, Green A, Dickson KS, Stadnick NA, Reeder K, et al. Advancing the pragmatic measurement of sustainment: a narrative review of measures. Implement Sci Commun. 2020;1(1):76.

Denzin N. The research act: A theoretical introduction to sociological methods. New Jersey: Transaction Publishers; 1970.

Grant BM, Giddings LS. Making sense of methodologies: a paradigm framework for the novice researcher. Contemp Nurse. 2002;13(1):10–28.

Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):117.

Pope C, Ziebland S, Mays N. Qualitative research in health care. Analysing qualitative data. BMJ. 2000;320(7227):114–6.

QSR International. NVivo 11 Pro for Windows. 2018. Available from: https://www.qsrinternational.com/nvivo-qualitative-data-analysissoftware/home.

Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):42.

Michie S, Johnston M, Rothman AJ, de Bruin M, Kelly MP, Carey RN, et al. Developing an evidence-based online method of linking behaviour change techniques and theoretical mechanisms of action: a multiple methods study. Southampton (UK): NIHR Journals Library. 2021;9:1.

Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

Ebneyamini S, Sadeghi Moghadam MR. Toward developing a framework for conducting case study research. Int J Qual Methods. 2018;17(1):1609406918817954.

Paparini S, Green J, Papoutsi C, Murdoch J, Petticrew M, Greenhalgh T, et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med. 2020;18(1):301.

Sarkies M, Long JC, Pomare C, Wu W, Clay-Williams R, Nguyen HM, et al. Avoiding unnecessary hospitalisation for patients with chronic conditions: a systematic review of implementation determinants for hospital avoidance programmes. Implement Sci. 2020;15(1):91.

Koorts H, Cassar S, Salmon J, Lawrence M, Salmon P, Dorling H. Mechanisms of scaling up: combining a realist perspective and systems analysis to understand successfully scaled interventions. Int J Behav Nutr Phys Act. 2021;18(1):42.

Engell T, Kirkøen B, Hammerstrøm KT, Kornør H, Ludvigsen KH, Hagen KA. Common elements of practice, process and implementation in out-of-school-time academic interventions for at-risk children: a systematic review. Prev Sci. 2020;21(4):545–56.

Bengtsson B, Hertting N. Generalization by mechanism: thin rationality and ideal-type analysis in case study research. Philos Soc Sci. 2014;44(6):707–32.

Tsang EWK. Generalizing from research findings: the merits of case studies. Int J Manag Rev. 2014;16(4):369–83.

Polit DF, Beck CT. Generalization in quantitative and qualitative research: myths and strategies. Int J Nurs Stud. 2010;47(11):1451–8.

Adler C, Hirsch Hadorn G, Breu T, Wiesmann U, Pohl C. Conceptualizing the transfer of knowledge across cases in transdisciplinary research. Sustain Sci. 2018;13(1):179–90.

Wolfenden L, Foy R, Presseau J, Grimshaw JM, Ivers NM, Powell BJ, et al. Designing and undertaking randomised implementation trials: guide for researchers. BMJ. 2021;372:m3721.

Nathan N, Hall A, McCarthy N, Sutherland R, Wiggers J, Bauman AE, et al. Multi-strategy intervention increases school implementation and maintenance of a mandatory physical activity policy: outcomes of a cluster randomised controlled trial. Br J Sports Med. 2022;56(7):385–93.

Malone S, Prewitt K, Hackett R, Lin JC, McKay V, Walsh-Bailey C, et al. The Clinical Sustainability Assessment Tool: measuring organizational capacity to promote sustainability in healthcare. Implement Sci Commun. 2021;2(1):77.

Sales AE, Barnaby DP, Rentes VC. Letter to the editor on “the implementation research logic model: a method for planning, executing, reporting, and synthesizing implementation projects” (Smith JD, Li DH, Rafferty MR. the implementation research logic model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci. 2020;15 (1):84. Doi:10.1186/s13012-020-01041-8). Implement Sci. 2021;16(1):97.

Acknowledgements

The authors would like to acknowledge the healthcare organizations and staff who supported the study.

SR is funded by an NHMRC Early Career Fellowship (APP1123336). The funding body had no role in the study design, data collection, data analysis, interpretation, or manuscript development.

Author information

Authors and Affiliations

Mary MacKillop Institute for Health Research, Australian Catholic University, Melbourne, Australia

Louise Czosnek & Eva M. Zopf

Cabrini Cancer Institute, The Szalmuk Family Department of Medical Oncology, Cabrini Health, Melbourne, Australia

Eva M. Zopf

Peter MacCallum Cancer Centre, Melbourne, Australia

Prue Cormie

Sir Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne, Australia

Discipline of Psychiatry and Mental Health, University of New South Wales, Sydney, Australia

Simon Rosenbaum

School of Health Sciences, University of New South Wales, Sydney, Australia

Faculty of Health, Victoria University of Wellington, Wellington, New Zealand

Justin Richards

Faculty of Medicine and Health, University of Sydney, Sydney, Australia

Nicole M. Rankin

Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia

You can also search for this author in PubMed   Google Scholar

Contributions

LC, EZ, SR, JR, PC, and NR contributed to the conceptualization of the study. LC undertook the data collection, and LC, EZ, SR, JR, PC, and NR supported the analysis. The first draft of the manuscript was written by LC with NR and EZ providing first review. LC, EZ, SR, JR, PC, and NR commented on previous versions of the manuscript and provided critical review. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Louise Czosnek.

Ethics declarations

Ethics approval and consent to participate

This study is approved by Sydney Local Health District Human Research Ethics Committee - Concord Repatriation General Hospital (2019/ETH11806). Ethical approval is also supplied by Australian Catholic University (2018-279E), Peter MacCallum Cancer Centre (19/175), North Sydney Local Health District - Macquarie Hospital (2019/STE14595), and Alfred Health (516-19).

Consent for publication

Not applicable.

Competing interests

PC is the recipient of a Victorian Government Mid-Career Research Fellowship through the Victorian Cancer Agency. PC is the Founder and Director of EX-MED Cancer Ltd, a not-for-profit organization that provides exercise medicine services to people with cancer. PC is the Director of Exercise Oncology EDU Pty Ltd, a company that provides fee for service training courses to upskill exercise professionals in delivering exercise to people with cancer.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Standards for Reporting Qualitative Research (SRQR).


About this article

Cite this article: Czosnek, L., Zopf, E.M., Cormie, P. et al. Developing an implementation research logic model: using a multiple case study design to establish a worked exemplar. Implement Sci Commun 3, 90 (2022). https://doi.org/10.1186/s43058-022-00337-8

Received: 19 March 2022 | Accepted: 01 August 2022 | Published: 16 August 2022


Keywords

  • Logic model
  • Case study methods
  • Causal pathways
  • Causal mechanisms



Open Access

Peer-reviewed

Research Article

Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews


Dylan Kneale, James Thomas, Katherine Harris

Affiliations: Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre), UCL Institute of Education, University College London, London, United Kingdom; Centre for Paediatrics, Blizard Institute, Queen Mary University of London, London, United Kingdom

Published: November 17, 2015

https://doi.org/10.1371/journal.pone.0142187

Abstract

Logic models are becoming an increasingly common feature of systematic reviews, as is the use of programme theory more generally in systematic reviewing. Logic models offer a framework to help reviewers to ‘think’ conceptually at various points during the review, and can be a useful tool in defining study inclusion and exclusion criteria, guiding the search strategy, identifying relevant outcomes, identifying mediating and moderating factors, and communicating review findings.

Methods and Findings

In this paper we critique the use of logic models in systematic reviews and protocols drawn from two databases representing reviews of health interventions and international development interventions. Programme theory featured in only a minority of the reviews and protocols included. Despite drawing from different disciplinary traditions, reviews and protocols from both sources shared several limitations in their use of logic models and theories of change, which were used almost exclusively to depict pictorially how the intervention worked. Logic models and theories of change were consequently rarely used to communicate the findings of the review.

Conclusions

Logic models have the potential to be an integral aid throughout the systematic reviewing process. The absence of good practice around their use and development may be one reason for the apparent limited utility of logic models in many existing systematic reviews. These concerns are addressed in the second half of this paper, where we offer a set of principles for the use of logic models and an example of how we constructed a logic model for a review of school-based asthma interventions.

Citation: Kneale D, Thomas J, Harris K (2015) Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews. PLoS ONE 10(11): e0142187. https://doi.org/10.1371/journal.pone.0142187

Editor: Paula Braitstein, University of Toronto Dalla Lana School of Public Health, CANADA

Received: February 1, 2015; Accepted: October 19, 2015; Published: November 17, 2015

Copyright: © 2015 Kneale et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data can be found on the Cochrane database of systematic reviews ( http://onlinelibrary.wiley.com/cochranelibrary/search/ ) and the 3ie database of systematic reviews ( http://www.3ieimpact.org/evidence/systematic-reviews/ ).

Funding: This work was supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health. The funders had no direct role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Researchers in academic institutions have historically measured their success by the impact that their research has within their own research communities, and have paid less attention to measuring its broader social impact. This presents a contradiction between the metrics of success of research and its ultimate extrinsic value [1], serving to expose a gulf between ‘strictly objective’ and ‘citizen’ scientists and social scientists [2]: the former believing that research should be objective and independent of external societal influences, and the latter starting from the position that science should benefit society. In recent years the need to link research within broader knowledge utilisation processes has been recognised, or at least accepted, by research councils and increasing numbers of researchers. While some forms of academic enquiry that push disciplinary boundaries or represent ‘blue skies’ thinking remain important, despite being only distally linked to knowledge utilisation, there is little doubt as to the capacity of many other forms of research to influence and transform policy and practice (see [3, 4]).

In many ways, systematic reviews and logic models are both borne of such a need for greater knowledge transference and influence. Policy and practice relevance is integral to most systematic reviews, with the systematic and transparent synthesis of evidence serving to enhance the accessibility of research findings to other researchers and wider audiences [5, 6]. Through an explicit, rigorous and accountable process of discovery, description, quality assessment, and synthesis of the literature according to defined criteria, systematic reviews can help to make research accessible to policy-makers and other stakeholders who may not otherwise engage with voluminous tomes of evidence. Similarly, one of the motivations in evaluation research and programme management for setting out programme theory through a logic model or theory of change was to develop a shared understanding of the processes and underlying mechanisms by which interventions were likely to ‘work’. In the case of logic models, this is undertaken by pictorially depicting the chain of components representing processes and conditions between the initial inputs of an intervention and the outcomes; a similar approach also underlies theories of change, albeit with a greater emphasis on articulating the specific hypotheses of how different parts of the chain result in progression to the next stage. This shared understanding was intended to develop across practitioners and programme implementers, who may otherwise have very different roles in an intervention, as well as among a broader set of stakeholders, including funders and policy-makers.

As others before us have speculated, there is room for the tools of programme theory and the approach of systematic reviewing to converge, or more precisely, for logic models to become a tool to be employed as part of undertaking a systematic review [ 7 – 9 ]. This is not in dispute in this paper. However, even among audiences engaged in systematic research methods, we remain far from a shared understanding about the purpose and potential uses of a logic model, and even its definition. This has also left us without any protocol around how a logic model should be constructed to enhance a systematic review. In this paper we offer:

  • an account of the way in which logic models are used in the systematic review literature
  • an example of a logic model we have constructed to guide our own work and the documented steps taken to construct this
  • a set of principles for good practice in preparing a logic model

Here, we begin with an outline of the introduction of logic models into systematic reviews and their utility as part of the process.

The Use of Programme Theory in Review Literature

As understood in the program evaluation literature, logic models are one way of representing the underlying processes by which an intervention effects a change on individuals, communities or organisations. Logic models themselves have become an established part of evaluation methodology since the late 1960s [10], although documentation that outlines the underlying assumptions addressing the ‘why’ and ‘for whom’ questions that define interventions is found in literature dating back further, to the late 1950s [11].
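To make this chain structure concrete, the sketch below represents a logic model as a simple sequential data structure. This is an illustrative aid only, not part of the original paper: the stage names follow the input-to-outcome chain described above, and all example entries are hypothetical.

```python
# A minimal sketch of a logic model as a data structure, assuming the stages
# described in this paper (inputs, activities, outputs, and proximal,
# intermediate and distal outcomes). All example entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    proximal_outcomes: list = field(default_factory=list)
    intermediate_outcomes: list = field(default_factory=list)
    distal_outcomes: list = field(default_factory=list)

    def chain(self):
        """Return the ordered stages so gaps in the chain are easy to spot."""
        return [
            ("inputs", self.inputs),
            ("activities", self.activities),
            ("outputs", self.outputs),
            ("proximal outcomes", self.proximal_outcomes),
            ("intermediate outcomes", self.intermediate_outcomes),
            ("distal outcomes", self.distal_outcomes),
        ]

model = LogicModel(
    inputs=["trained educators", "curriculum materials"],
    activities=["school-based self-management sessions"],
    outputs=["increased asthma knowledge"],
    proximal_outcomes=["improved self-management behaviour"],
    intermediate_outcomes=["fewer emergency admissions"],
    distal_outcomes=["improved quality of life"],
)

for stage, items in model.chain():
    print(f"{stage}: {', '.join(items) if items else '(unspecified)'}")
```

Reading the model left to right gives the pictorial chain described above; an empty stage signals a component that the review team has yet to specify.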

Despite being established in evaluation research, programme theory and the use of logic models remains an approach that is underutilised by many practitioners who design, run, and evaluate interventions in the UK [ 12 , 13 ]. Furthermore, there is a substantial degree of fragmentation regarding the programme theory approach used. Numerous overlapping approaches have been developed within evaluation literature, including ‘logic model’, ‘theory of change’, ‘theory of action’, ‘outcomes chain’, ‘programme theory’ and ‘program logic’ [ 11 , 13 ]. This very lack of consistency and agreement as to the appropriate tools to be used to conceptualise programme theory has been identified as one reason why, in a survey of 1,000 UK charities running interventions, four-fifths did not report using any formal program theory tool to understand the way in which their interventions effected change on their beneficiaries [ 12 ].

Conversely, within systematic reviewing so far, there has been some degree of consensus on the terminology used to represent the processes that link interventions and their outcomes (for example [7, 8, 9]). Many systematic reviews of health interventions tend to settle on a logic model as the instrument of choice for guiding the review. Alternatively, reviews of international development interventions often include a theory of change, perhaps reflective of the added complexity of such interventions, which often take place on a community, policy or systems basis. Logic models and theories of change sit on the same continuum, although a somewhat ‘fuzzy’ but important distinction exists. While a logic model may set out the chain of activities that are needed, or are expected, to lead to a chain of outcomes, a theory of change will provide a fuller account of the causal processes, indicators, and hypothesised mechanisms linking activities to outcomes. However, how reviews utilise programme theory is relatively unexplored.

Methods and criteria

To examine the use of logic models and theories of change in the systematic review literature, we examined indicative evidence from two sources. The first of these sources, the Cochrane database, publishes reviews that have a largely standardised format following guidelines set out in the Cochrane Handbook (of systematic reviews of interventions) [14]. Currently, the handbook itself does not include a section on the use of programme theory in reviews. Other guidance set out by individual Cochrane review groups (of which there are 53, each focussed on a specific health condition or area) may highlight the utility of using programme theory in the review process. For example, the Public Health Review Group, in their guidance on preparing a review protocol, describe the way in which logic models can be used to describe how interventions work and to justify the focus of a review on a particular part of the intervention or outcome [15]. Meanwhile, in the 2012 annual Cochrane methods-focussed report, the use of logic models was viewed as holding potential to ‘confer benefits for review authors and their readers’ [16], and logic models have also been included in the Cochrane Colloquium programme [17]. However, a definitive recommendation for use is not found, at the time of writing, in the standard guidance provided to review authors. The second source, the 3ie database, includes reviews with a focus on the effectiveness of social and economic interventions in low- and middle-income countries. The database includes reviews that have been funded by 3ie as well as those that are not directly funded but that nevertheless fall within its scope and are deemed to be of sufficient rigour. While the use of programme theory does not form part of the inclusion criteria, its use is encouraged in good practice set out by 3ie [18], and a high degree of importance is attributed to its use in 3ie’s criteria for awarding funding for reviews [19].

To obtain a sample of publications, we searched the Cochrane Library for systematic reviews and protocols that included either the phrase ‘logic model’ or ‘theory of change’ occurring anywhere, published between September 2013 and September 2014 (over this period a total of 1,473 documents were published in the Cochrane Library). We also searched the 3ie (International Initiative for Impact Evaluation) database of systematic reviews published in 2013, manually searching publications for the phrases ‘logic model’ or ‘theory of change’. Both searches were intended to provide a snapshot of review activity by capturing systematic review publications over the course of roughly a year. For the 3ie database, it was not possible to search by month, so we searched for publications by calendar year; to ensure that we obtained a full sample for a year, we selected 2013 as our focus. For the Cochrane database, in order to obtain a more recent snapshot of publications reflecting current trends, we opted to search for publications over a slightly longer window (13 months in this case). All reviews and protocols of reviews that fell within the search parameters were analysed.
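For readers who export database records and screen them programmatically, the phrase-and-date filtering described above can be sketched as follows. The actual searches were run through the Cochrane Library and 3ie web interfaces; the record structure, field names and example entries here are illustrative assumptions.

```python
# A minimal sketch of phrase-and-date screening over exported records.
# Field names and example records are hypothetical; the real searches were
# run through the Cochrane Library and 3ie web interfaces.
from datetime import date

PHRASES = ("logic model", "theory of change")
START, END = date(2013, 9, 1), date(2014, 9, 30)

records = [
    {"title": "Protocol A", "text": "We developed a logic model to guide the review.",
     "published": date(2014, 3, 1)},
    {"title": "Review B", "text": "No programme theory was reported.",
     "published": date(2013, 11, 15)},
]

hits = [
    r for r in records
    if START <= r["published"] <= END
    and any(phrase in r["text"].lower() for phrase in PHRASES)
]

for r in hits:
    print(r["title"])  # -> Protocol A
```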

In the Cochrane database, over this period, four reviews and ten protocols were published that included the phrase ‘logic model’, while two protocols were published that included the phrase ‘theory of change’. It should be noted, therefore, that certainly within reviews of health topics that adhere to Cochrane standards, neither tool has made a substantial impact in this set of literature. This is likely to reflect the mainly clinical nature of many Cochrane reviews: among the 8 publications that were published through the public health group (all of which were protocols), 5 included mention of programme theory. Within the 3ie database of international development interventions, 53 reviews and protocols were published in 2013 (correct as of December 2014), of which 24 included a mention of either a logic model or a theory of change.

We developed a template for summarising the way in which logic models were used in the included protocols and systematic reviews, based on the different stages of undertaking systematic reviews [6] and the potential uses of logic models identified by Anderson and colleagues and Waddington and colleagues [7, 18], who in the former case describe logic models as tools that can help to (i) scope the review; (ii) define and conduct the review; and (iii) make the review relevant to policy and practice, including in communicating results. These template constructs also reflected the way in which logic model usage was described in the publications, which was primarily shaped by reporting conventions for protocols and reports published in Cochrane and 3ie (although the format for the latter source is less structured). Criteria around the constructs included in the template were defined before two reviewers (see S1 Table. Data Coding Template) independently assessed the use of logic models within the included reviews and protocols; the reviewers then met to discuss their findings. What this template approach cannot capture is the extent to which using a logic model shaped the conceptual thinking of the review teams, which, as discussed later in this paper, is one of the major contributions of using a logic model framework.
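The double-coding step lends itself to a short illustration: each reviewer independently records which template constructs apply to each publication, and disagreements are flagged for the consensus discussion. The construct labels below paraphrase the three broad uses attributed to Anderson and colleagues [7]; the publication IDs and example codes are hypothetical.

```python
# A minimal sketch of independent double-coding against the template.
# Construct labels paraphrase the three broad uses in [7]; publication IDs
# and example codes are hypothetical.
CONSTRUCTS = {"scope_review", "define_and_conduct", "policy_practice_relevance"}

reviewer_1 = {"pub_01": {"scope_review"}, "pub_02": set()}
reviewer_2 = {"pub_01": {"scope_review", "define_and_conduct"}, "pub_02": set()}

# Flag disagreements for the consensus meeting rather than resolving them in code.
for pub in sorted(reviewer_1):
    assert (reviewer_1[pub] | reviewer_2[pub]) <= CONSTRUCTS  # guard against typos
    disagreement = reviewer_1[pub] ^ reviewer_2[pub]  # symmetric difference
    if disagreement:
        print(f"{pub}: discuss {sorted(disagreement)}")
```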

While the two databases cover two different disciplines, allowing us to make comparisons between them, some may argue that, because both sources rigidly enforce methodological guidelines in the production of reviews, we are unlikely to encounter innovative approaches to the use of programme theory in the reviews and protocols included here. This is a legitimate concern and a caveat of the results presented here, although even within these sources we observe considerable diversity in the use of programme theory, as we describe in our results.

Results: how are logic models and theories of change used in systematic reviews?

Looking first at publications from the Cochrane database, of the two studies that included some component examining ‘theories of change’, the first described ‘theory of change’ in the context of synthesising different underlying theoretical frameworks [20], while the second used ‘theory of change’ in the context of describing a particular modality of theory [21]. Meanwhile, logic models were incorporated in a variety of ways; most frequently, they were used as a shorthand way to describe how interventions are expected to work, outlining stages from intervention inputs through to expected outcomes at varying levels of detail (Table 1).

[Table 1: https://doi.org/10.1371/journal.pone.0142187.t001]

In around half of the reports and protocols, the authors described in some form how they planned to use, or did in fact use, their logic model in the review. Nevertheless, in the remainder of publications (all of which were protocols), the logic model was presented as a schematic to describe how the intervention may work and was not explicitly referred to further as a tool to guide the review process. Two Cochrane review protocols explicitly outlined the way in which the logic model would be used as a reference tool when considering the types of intervention that would be within the scope of the review [25, 26]; this was also described in a full review [24]. We identified three publications where it was suggested that the logic model was developed through an iterative process in which consensus was developed across the review team [23, 24, 30].

Two Cochrane reviews described how the logic model was used to determine the subgroup analyses a priori [22, 23], helping to avoid some of the statistical pitfalls of running post-hoc sub-group analyses [35]. For example, in their review of psychosocial smoking cessation interventions for pregnant women, Chamberlain and colleagues [22] developed their logic model from a synthesis of information collected before the review process began, with the express purpose of guiding analyses and stating sub-group analyses of interest. In Glenton and colleagues’ study [23], a revision of the originally specified logic model, based on the review findings, was also viewed as a useful tool to guide the sub-group analyses of future reviews. However, none of the protocols (as opposed to reviews) from the Cochrane database explicitly mentioned that the logic model had been used to consider which sub-group analyses should be undertaken. The review by Glenton et al. [23] and the protocol by Ramke et al. [33] provided the only examples where the logic model was to be revised iteratively during the course of the review based on review findings. Of the three Cochrane reviews included in Table 1, Glenton and colleagues’ study [23] can be considered the one to have used a logic model most comprehensively, as a dynamic tool to be refined and used to actively guide the synthesis of results in the review. The authors describe a novel use of the logic model in their mixed methods review as a tool to describe mechanisms by which the intervention barriers and facilitators identified in the qualitative synthesis could impact on the outcomes assessed quantitatively in their review of programme effectiveness.

Among the studies extracted from the 3ie database, the terminology was weighted towards ‘theories of change’ as opposed to ‘logic models’ (as expected, based on the guidance provided). Of the 24 studies that were included (Table 2), fourteen included a logic model and nine included a theory of change, while one report used both terms. Despite more studies including a mention or an actual depiction of a theory of change or logic model, this body of literature shared the same limitations around the extent to which programme theory was used as a tool integral to the review process. The majority of studies used a theory of change/logic model to describe their overall conceptual model or how they viewed the intervention or policy change under review as working, although this was reported at different stages of the review. Of the eleven protocols that were included, eight explicitly mentioned that they planned to return to their model at the end of the review, emphasising the use of programme theory to help design the review and communicate the findings in this field. For example, in Willey and colleagues’ review of approaches to strengthen health services in developing countries [59], the logic model was updated at the end of the review to reflect the strength of the evidence discovered for each of the hypothesised pathways. Seven of the twenty protocols and studies described how a theory of change/logic model would be used to guide the review in terms of the search strategy or more generally as a reference throughout screening and other stages. Finally, two publications [48, 52] described how they would use a theory of change as the basis for synthesising qualitative findings, and two described how they would use a logic model/theory of change to structure sub-group meta-analyses in quantitative syntheses [48, 58]; both of these latter protocols described how programme theory would be used at a number of key decision points in the review itself.

[Table 2: https://doi.org/10.1371/journal.pone.0142187.t002]

Among the Cochrane and 3ie publications, few reviews or protocols described the logic model as being useful in review initiation, in describing study characteristics, or in assessing the quality and relevance of publications. Three Cochrane protocols and one Cochrane review described using existing logic models in full, or examining components of existing logic models or reviews to develop their own, while only one in our sample of international development systematic reviews did so. Most authors appear to develop their own logic models afresh, and largely in the absence of guidance around good practice in the use of logic models. As Glenton and colleagues describe, there is “no uniform template for developing logic models, although the most common approach involves identifying a logical flow that starts with specific planned inputs and activities and ends with specific outcomes or impacts, often with short-term or intermediate outcomes along the way” ([23]; p13).

Developing a Logic Model: A Worked Example from School-Based Asthma Interventions

The second aim of this paper is to provide an example of the development of a logic model in practice. The logic model we describe was developed as part of a systematic review examining the impact of school-based interventions focussing on the self-management of asthma among children. This review is being carried out by a multidisciplinary team comprising members with experience of systematic reviewing as well as trialists with direct experience in the field of asthma and asthma management. Of particular interest in this review are the modifiable intervention factors that can be identified as being associated with improvements in asthma management and health outcomes. The evidence will be used directly in the design of an intervention that will be trialled among London school children. Our approach was to view the development of the logic model as an iterative process, and we present three different iterations (Figs 1–3) that we undertook to arrive at the model we included in our review protocol [60]. Our first model was based on pathways identified by one reviewer through a summary examination of the literature and existing reviews. This was then challenged and refined through the input of a second reviewer and an information scientist, to form a second iteration of the model. Finally, a third iteration was constructed through the input of the wider review team, which included both methodological specialists and clinicians. These steps are summarised in Box 1 and are described in greater detail in the sections below. The example provided here best reflects a process-driven logic model, where the focus is on establishing the key stages of interest and using the identified processes to guide later stages of the review. An alternative approach to developing a logic model may be to focus more on the representation of systems and theory [61], although this approach may be better placed to support reviews of highly complex interventions (such as many of the international development reviews described earlier) or reviews that are more methodological than empirical in nature.

[Fig 1: https://doi.org/10.1371/journal.pone.0142187.g001]

[Fig 2: https://doi.org/10.1371/journal.pone.0142187.g002]

[Fig 3: https://doi.org/10.1371/journal.pone.0142187.g003]

Box 1. Summary of steps taken in developing the logic model for school-based asthma interventions.

  • Synthesis of existing logic models in the field
  • Reviewer 1 identified distal outcomes
  • Working backwards, reviewer 1 then identified the necessary preconditions to reach the distal outcomes; from the distal outcomes, intermediate- and proximal-level outcomes were then identified
  • Once outcomes had been identified, the outputs were defined (necessary pre-conditions but not necessarily goals in themselves); on completion, the change part of the model was complete in draft form
  • Modifiable processes were then specified; these were components that were not expected to be present in each intervention included in the review
  • Continuing to work backwards, intervention inputs (including core pedagogical inputs) were then specified. These were inputs that were expected to be present in each intervention included in the review, although their characteristics would differ between studies
  • In addition, external factors were identified, as were potential moderators
  • Reviewers 1 and 2 then worked together to redevelop the model, paying particular attention to clarity, the conceptual soundness of groupings and the sequencing of aspects
  • The review team and external members were asked to comment on the second iteration, and later agreed a revised third version. This version would provide the structure for some aspects of the quantitative analyses and highlight where qualitative analyses were expected to illuminate areas of ambiguity.
  • The final version was included in the protocol with details on how it would be used in later stages of the review, including the way in which it would be transformed, based on the results uncovered, into a theory of change.
  • Consider undertaking additional/supplementary steps (described in the sections below; a minimal sketch of the backward-chaining process follows this box).
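The backward-chaining logic of Box 1 (start from the distal outcomes and repeatedly ask which preconditions must hold) can be illustrated with a short sketch. This is our illustration rather than tooling used by the authors, and the outcome names are hypothetical placeholders, not the content of the asthma model.

```python
# A minimal sketch of the backward-chaining process in Box 1: start from a
# distal outcome and, for each outcome, record the preconditions that must
# hold at the next level back. All outcome names are hypothetical.
preconditions = {
    "improved quality of life": ["sustained behaviour change", "reduced symptoms"],
    "sustained behaviour change": ["improved self-management behaviour"],
    "reduced symptoms": ["improved self-management behaviour"],
    "improved self-management behaviour": ["increased asthma knowledge"],
    "increased asthma knowledge": ["self-management sessions delivered"],
}

def chain_back(outcome: str, depth: int = 0) -> None:
    """Print the causal chain working backwards from a distal outcome."""
    print("  " * depth + outcome)
    for pre in preconditions.get(outcome, []):
        chain_back(pre, depth + 1)

chain_back("improved quality of life")
```

Walking the chain in this way makes missing preconditions visible early, which mirrors the consensus discussions between reviewers described in steps 8 and 9.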

Step 1, examination and synthesis of existing logic models

The first step we took in developing our logic model was to familiarise ourselves with the existing literature around the way in which improved self-management of asthma leads to improved outcomes among children and the way in which school-based learning can help to foster these. Previous systematic reviews around this intervention did not include a logic model or develop a theory of change but did help to identify some of the outcomes of self-management educational interventions. These included improvements in lung function, self-efficacy, absenteeism from school, days of restricted activity, and number of visits to an emergency department, among others (see [62]). A logic model framework helped to order these sequentially and separate process outputs from proximal, intermediate and distal outcomes. Other studies also pointed towards the school being a good site for teaching asthma self-management techniques among children for several reasons, including the familiar environment for learning that it provides for children and the potential for identification of large numbers of children with asthma at a single location [63–65]. Some individual studies and government action plans also included logic models showing how increased education aimed at improving self-management skills was expected to lead to improvements in asthma outcomes (for example [66, 67, 68]). This evidence was synthesised and was found to be particularly useful in helping to identify some of the intervention processes that could lead to better asthma outcomes, although these were of varying relevance to our specific situation of interest (school-based asthma interventions), as well as being very heavily shaped by local context. We adopted an aggregative approach to the synthesis of the evidence at this point, including all information that was transferable across contexts [69]. After examining the available literature, the first reviewer was able to proceed with constructing a first draft of the logic model.

Step 2, identification of distal outcomes

Reviewer 1 started with the identification of the most distal outcomes that could change as a result of school-based interventions aimed at improving asthma self-management. From these outcomes the reviewer worked backwards and identified the necessary pre-conditions to achieving them, to develop a causal chain. Identifying this set of distal outcomes was analogous to questioning why a potential funding, delivery or hosting organisation (such as a school or health authority) may want to fund such an intervention: the underlying goals of running the intervention. In this case, these outcomes could include potential improvements in population-level health, reductions in health budgets and/or potential increases in measures of school performance (Fig 1). After identifying these macro-level outcomes, we identified the distal child-level outcomes, which were restricted to changes in children’s outcomes that would only be perceptible at long-term follow-up. These included changes in quality of life and academic achievement, which we identified as being modifiable only after sustained periods of behaviour change and a reduction in the physical symptoms of asthma.

Step 3, identification of intermediate and proximal outcomes

Next, reviewer 1 outlined some of the intermediate outcomes: those changes necessary to achieve the distal outcomes. Here our intermediate changes in health were based on observations of events, including emergency admissions and limitations in children’s activity over a period of time (which we left unspecified). The only intermediate educational outcome was school attendance, and we identified this as being the only (or at least the main) pathway through which we may observe (distal) changes in academic achievement as a result of school-based asthma interventions. Working backwards, our proximal outcomes were defined as those pre-conditions necessary to achieve our intermediate outcomes; these revolved around health symptoms and behaviour around asthma and asthma management. We expect these to be observable shortly after the intervention ends (although they may be measured at long-term follow-up). The intention is for the systematic review to be published as a Cochrane review, which requires the identification of 2–3 primary outcomes and approximately 7 outcomes in total; in our case this helped to rationalise the number of outcomes we included, which, left unbounded, could have been many more.

Step 4, identification of outputs

Finally, in the ‘change’ section of the logic model (see Fig 2), we specified the outputs of the intervention, which we define here as those aspects of behaviour or knowledge that are the direct focus for modification within the activities of the intervention but are unlikely to represent the original motivations underlying the intervention. Our outputs are those elements of the intervention where changes will be detectable during the course of the intervention itself. Here, increased knowledge of asthma may be a pre-condition for improved symptomology and would have a direct focus within intervention activities (outputs), but increased knowledge in itself was not viewed as a likely underlying motivation for running the intervention. A different review question may prioritise improved knowledge of a health condition and view increased knowledge as an end-point in itself.

Step 5, specification of modifiable intervention processes

To aid in later stages of the review we placed the modifiable design characteristics in sequence after intervention inputs, as we view these as variants that can occur once the inputs of the intervention have been secured. Separating these from standard intervention inputs was a useful stage when it came to considering the types of process data we might extract and in designing data extraction forms. The number of modifiable design characteristics of the intervention specified was enhanced by examining some of the literature described earlier as well as through discussions with members of the review team who were most involved with designing the intervention that will take place after the review.

Step 6, specification of intervention inputs

Standard intervention inputs were specified, as were the ‘core elements of the intervention’. These core elements represent the pedagogical focus of the intervention and form some of the selection criteria for studies that will be included, although studies will differ in terms of the number of core elements that are included as well as the way in which these are delivered. Studies that did not include any of these core elements were not considered to be interventions focussed on the improvement of asthma self-management skills.

Step 7, specification of intervention moderators including setting and population group

Finally, child-level moderators (population characteristics) and the characteristics of the schools receiving the intervention (context/setting characteristics) were specified. Specifying these early on in the logic model helped to identify the types of subgroup analyses we would conduct to investigate any potential sources of heterogeneity.

Step 8, share initial logic model, review and redraft

Reviewer 1 shared the draft logic model with a second member of the team. Of particular concern in this step was establishing consensus around the clarity and conceptual soundness of the groupings, the sequencing of the change part of the model, and the balance between meeting the design needs of the intervention and the generalisability of the findings to other settings. With respect to the latter, the second reviewer commented that specifying reductions in health budgets reflected our own experiences of the UK context and may not be appropriate for all healthcare contexts likely to be included in our review. Therefore, in our second iteration, Fig 2 only acknowledges that macro-level (beneficial) changes can follow from changes in the distal outcomes of children, but we do not specify what these might be. At this stage it was helpful to have the first reviewer working alongside a second member of the review team who had greater expertise and knowledge of the design and delivery of health-based interventions and who was working directly alongside schools in the preliminary stages of data collection around the intervention itself. Figs 1–3 show the development of the logic model across iterations. The second iteration had a clearer distinction between the action and change aspects of the logic model and refined the number of outcomes that would be explicitly outlined, which had implications for the search strategy. The action part of the model was also altered to better differentiate parts of the model that represented implementation processes from parts that represented implementation measures or metrics.

Step 9, share revised logic model with wider group, review and redraft

The draft logic model was shared with the wider review team and with an information scientist, with comments sought particularly around those aspects of step 8 that had been the source of further discussion. The review team were asked specifically to comment on the content of the different sections of the logic model, the sequencing of different parts, and the balance between meeting the design needs of the intervention and the generalisability of the findings to other settings. Input was sought from an information scientist external to the review to ensure that the model adequately communicated and captured the body of literature that the review team aimed to include, and to make certain that the model was interpretable to those who were not directly part of the review team. For the third (and final) iteration, views were also sought on whether the main moderating factors across which the team might investigate sources of heterogeneity in meta-analyses were included and, for those that would be identified through earlier qualitative analyses, whether these were adequately represented. Once these views were collated, the third iteration was produced and agreed. The third iteration better represents the uncertainty in terms of processes that may be uncovered during qualitative analyses and the way in which these will be used to investigate heterogeneity in subgroup analyses in the quantitative (meta-)analysis.

Step 10, present the final logic model in the protocol

The final version was included in the protocol with details on how it would be used in later stages of the review. At the end of the review, we intend to return to the logic model and represent those factors that are associated with successful interventions from the quantitative and qualitative analyses in a theory of change.

Potential additional and supplementary steps that could be taken elsewhere

Greater consultation or active collaboration with additional stakeholders on the logic model may be beneficial, particularly for complex reviews involving system-based interventions where different stakeholders will bring different types of knowledge [8, 70]. There may also be merit in this approach at the outset in situations where the review findings are intended to inform an intervention in a known setting, to ensure that the elements that will enhance the applicability or transferability of the intervention are represented. In the example given here, as there were members of the review team who were taking part in both the review and the design of the intervention, there was less of a need to undertake this additional stage of consultation, and elements such as the presence of, or change in, school asthma policies were included in the logic model to reflect the interests of the intervention team.

Produce further iterations of the logic model: When there is less consensus among the review team than was the case here, when there are greater numbers of stakeholders being consulted, or when the intervention itself is a more complex systems-based intervention, there may be a need to produce multiple further iterations of the logic model. In programme evaluation, logic models are considered to be iterative tools that reflect cumulative knowledge accrued during the course of running an intervention [11]. While the exact same principle does not apply in the case of systematic reviews, a greater number of iterations may be necessary in order to produce a logic model to guide the review, for example to reflect the different forms of knowledge different stakeholders may bring. Where there are parts of the logic model that are unclear at the outset of a review, or in situations where there is an insurmountable lack of consensus and only the review findings can help to clarify the issue, these can be represented in a less concrete way in the logic model, as with the processes to be examined in our own review in Fig 3.

Multiple logic models: There may also be a need to construct multiple logic models for large interventions to reflect the complexity of the intervention, although it may also be the case that such a large or complex question is unsuitable for a single review and would instead fall across multiple linked reviews. However, where the same question is being examined using different types of evidence (a mixed methods review), multiple logic models representing the same processes in different ways could be useful: for example, a logic model focussing on theory and mechanistic explanations for processes, in addition to a logic model focussing on empirically expected changes, may be necessary for certain forms of mixed methods reviews (dependent on the research question). In other cases, the review may focus on a particular intervention or behaviour change mechanism within a (small) number of defined situations; for example, a review may focus on the impact of mass media in tackling public health issues using smoking cessation, alcohol consumption and sexual health as examples. The review question may be focussed on the transferability of processes between these public health issues, but in order to guide the review itself it may be necessary to produce a separate logic model for each public health issue, which could be synthesised into a unified theory of change for mass media as an intervention at a later stage.

Using the logic model in the review

The logic model described in this paper is being used to guide a review that is currently in progress, and as such we are not able to give a full outline of its potential use. Others in the literature before us have described logic models as having added value for systematic reviewers when (i) scoping the review (refining the question; deciding on lumping or splitting a review topic; identifying review components); (ii) defining and conducting the review (identifying the review study criteria; guiding the search strategy; providing the rationale behind surrogate outcomes; justifying sub-group analyses); and (iii) making the review relevant to policy and practice (structuring the reporting of results; illustrating how harms and feasibility are connected with interventions; interpreting the results based on intervention theory) [7, p35]. Others still have emphasised the utility of a logic model framework in helping reviewers to think conceptually through illustrating the influential relationships and components from inputs to outcomes, suggesting that logic models can help reviewers identify testable hypotheses to focus upon [8, 71]; they have also speculated that logic models could help to identify the parameters of a review as an addition to the well-established PICO framework [8, 9].
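As a concrete illustration of the PICO suggestion in [8, 9], the sketch below reads PICO elements off a logic model. The mapping and all field names are our illustrative assumptions, not a published scheme; as noted in the next paragraph, the comparator may resist representation when usual care varies across settings.

```python
# A minimal sketch of deriving PICO elements from a logic model, under the
# assumption (speculated in [8, 9]) that the model encodes them. Field names
# and entries are hypothetical.
logic_model = {
    "population": "school children with asthma",
    "intervention": "school-based self-management education",
    "comparator": None,  # usual care varied across settings in our review
    "outcomes": ["self-management behaviour", "emergency admissions"],
}

pico = {
    "P": logic_model["population"],
    "I": logic_model["intervention"],
    "C": logic_model["comparator"] or "not represented in the model",
    "O": ", ".join(logic_model["outcomes"]),
}

for element, value in pico.items():
    print(f"{element}: {value}")
```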

Our own experience of using the logic model (Fig 3) in a current systematic review to date is summarised in Table 3 below, which focuses on additions to the uses suggested elsewhere. While this description provides an indication of the potential added value of using a logic model, the use of a logic model has not been without its challenges. Firstly, the use of logic models is relatively novel within the systematic review literature (and even in the programme theory literature, as discussed earlier), and initially there was some apathy towards the logic model, even within the review team. Secondly, while we agree that a logic model could be used to depict the PICO criteria [8, 9], our own logic model did not include a representation of ‘C’, the comparator, as this was the usual care provided across different settings, which could vary substantially. Others may also experience difficulties in representing the comparison element in their logic models. Finally, the utilities of the logic model described here and elsewhere are not unique qualities contingent on using a logic model, but using a logic model accelerates these processes and brings about a shared understanding more quickly; for example, the development of exclusion criteria is not contingent on having a logic model, but rather the logic model facilitates the process of identifying inclusion and exclusion criteria more rationally and helps depict some of the reasoning underlying review decisions. Practically, where the logic model has its advantages is in aiding the initial conceptual thinking around the scope of interventions, its utility in aiding decisions about individual parts of the intervention within the context of the intervention as a whole, its flexibility and use as a reference tool in decision-making, and in communication across the review team.

[Table 3: https://doi.org/10.1371/journal.pone.0142187.t003]

Developing Elements of Good Practice for the Use of Logic Models and Theories of Change

The earlier analysis suggests that many systematic review authors tend to use programme theory tools to depict the conceptual framework pictorially, but may not view either a logic model or theory of change as integral review tools. To prevent logic models and theories of change being included in reviews and protocols as simply part of a tick-box exercise, there is a need to develop good practice on how to use programme theory in systematic reviews, as well as developing good practice on how to develop a logic model/theory of change. This is not to over-complicate the process or to introduce rigidity where rigidity is unwelcome, but to maximise the contribution that programme theory can make to a review.

Here we introduce some elements of good practice that can be applied when considering the use of logic models. These are derived from (i) the literature around the use of logic models in systematic reviews [7, 8, 17]; (ii) the broader literature around the use of theory in systematic reviews [72, 73]; (iii) our analyses contrasting the suggested uses of logic models in systematic reviews with their actual use (presented earlier in this paper); and (iv) the use of logic models in the programme theory literature [11, 13]; as well as broader conceptual debates in the systematic review and programme theory literature. These principles draw from the work of Anderson and colleagues and Baxter and colleagues as well as our own experiences, but are unlikely to represent an exhaustive list, as there is a need to maintain a degree of flexibility in the development and use of logic models. Our main concern is that logic models in the review literature appear to be used in such a limited way that a loose set of principles, such as those proposed here, can be applied with little cost in terms of imposing rigidity but with substantial impact in terms of enhanced transparency in use and benefit to the review concept, structure and communication.

A logic model is a tool and as such its use needs to be described

Logic models provide a framework for ‘thinking’ conceptually before, during and at the end of the review. In addition to the uses highlighted earlier by Anderson and Waddington [7, 18], our own experiences of using the logic model in our review have emphasised its utility in: (i) clarifying the scope of the review and assessing whether a question is too broad to be addressed in a single review; (ii) identifying points of uncertainty that could become focal points of investigation within the review; (iii) clarifying the scope of the study, particularly in distinguishing between different forms of intervention study design (in our own case between a process evaluation and a qualitative outcomes evaluation); (iv) ensuring that there is theoretical inclusivity at an early stage of the review; (v) clarifying inclusion and exclusion criteria, particularly with regard to core elements of the intervention; (vi) informing the search strategy with regard to the databases and scholarly disciplines from which the review may draw literature; (vii) serving as a communication tool and reference point when making decisions about the review design; and (viii) serving as a project management tool that helps to identify dependencies within the review. Sharing the logic model with an information scientist was also a means of communicating the goals of the review itself, while examination of existing logic models was found to be a way of developing expertise around how an intervention was expected to work. Use of a logic model has also been linked with a reduced risk of type III errors occurring, helping to avoid conflation between errors in the implementation and flaws in the intervention [17, 74].

Table 4 summarises our own learning around the uses of the logic model alongside the uses identified by others (primarily Anderson) as a tool in systematic reviews; it highlights that a logic model may have utility primarily at the beginning and end of the systematic review, and may be a useful reference tool throughout.

[Table 4: https://doi.org/10.1371/journal.pone.0142187.t004]

Our analyses suggest that the use of logic models has faltered: our earlier review of the systematic review literature highlighted that (i) logic models were infrequently used as a review tool, and the extent of use is likely to reflect the conventions of different disciplines; and (ii) where logic models were used, they were often used in a very limited way, to represent the intervention pictorially. Often, they did not ostensibly appear to be used as tools integral to the review. There remains the possibility that some of the reviews and protocols featured earlier simply did not report the extent to which they used the logic model, although given that a logic model is both a tool for thinking conceptually and a communication tool, it could be expected that it would be referred to and referenced at different points in the review process. Logic models can be useful review tools, although the limited scope of use described in the literature suggests that they are in danger of becoming a box-ticking exercise included in reviews and protocols rather than methodological aids in their own right.

Terminology is important: Logic models and theories of change

We earlier stated that ‘theories of change’ and ‘logic models’ were used somewhat interchangeably by reviewers, largely dependent on the discipline in which the review was conducted. However, outside the systematic review literature, a distinction often exists. Theories of change are often used for complex interventions where there is a need to identify the assumptions of how and why sometimes disparate elements of large interventions may effect change; they are also used for less complex interventions where assumptions of how and why programme components effect change are pre-specified. Theories of change can also be used to depict how entirely different interventions can lead to the same set of outcomes. Logic models, on the other hand, are used to outline programme components and check whether they are plausible in relation to the outcomes; they do not require the underlying assumptions to be stated [11, 75]. This distinction fits in well with the different stages of a systematic review. A logic model provides a sequential depiction of the components of interventions and their outcomes, but not necessarily the preconditions that are needed to achieve these outcomes, or the relative magnitude of different components. Given that few of the programme theory tools used in current protocols and reviews are derived from or build upon existing tools, for most systematic reviews that do not constitute whole-system reviews of complex intervention strategies, or for reviews that are not testing a pre-existing theory of change, developing a logic model initially may be most appropriate. This assertion does not mean that systematic reviews should be atheoretical or ‘theory-lite’, and different possible conceptual frameworks can be included in logic models. However, the selection of a single conceptual framework upfront, as is implicitly the case when developing a theory of change, may not represent the diversity of disciplines that reviewers are likely to encounter. Except in the cases outlined earlier around highly complex/systems-based interventions (mainly restricted to the development studies literature), theories of change are causal models that are most suitable when developed through the evidence gathered in the review process itself.

Logic models can evolve into theories of change

Once a review has identified the factors associated with different outcomes, their magnitude, and the underlying causal pathways linking intervention components with different outcomes, this evidence can in some cases be used to update a logic model and construct a theory of change. Examples exist in the literature where review evidence has been synthesised to map out the direction and magnitude of evidence (see [ 8 ], although in that case the resulting model was described as a ‘Logic Model’ and not a ‘Theory of Change’), and this serves as a good model for all reviews. Programme theory can effectively be used to represent the change in understanding developed as a result of the review, and in some cases even the learning acquired during the review, although this will not hold for all reviews, and there may be some where this approach is unsuitable or otherwise not possible. A logic model can thus be viewed iteratively, as a preliminary step towards constructing a theory of change at the end of the review, which in turn forms a useful tool to communicate the findings of the review. However, some reviewers may find little to update in the logic model in terms of the theory of the intervention, or may find that the evidence around the outcomes and process of the intervention remains unclear in the literature as it stands. There may also be occasions where the reporting conventions of disciplines or review groups preclude updating the logic model on the basis of the findings of the review.

Programme theory should not be developed in isolation

In our exploration of health-based and international development reviews, we observed just one example where the reviewers described a logic model as having been developed through consensus within the review team [ 24 ]. Other examples are found in the literature where logic models or theories of change have been developed with stakeholders; for example, Baxter and colleagues [ 8 ; p3] record that ‘following development of a draft model we sought feedback from stakeholders regarding the clarity of representation of the findings, and potential uses’. These examples are clearly in the minority in the systematic review literature, even though guidance on programme theory in the evaluation literature is clear that models should be developed through a series of iterations taking into account the views of different stakeholders [ 11 ]. While some of this effect may be due to reporting, as it is likely that at least some of the models included in Tables 1 and 2 were developed by consensus, it is nevertheless important to highlight that a more collaborative approach to developing models could bring benefits to the review itself. Given that systematic review teams are often interdisciplinary in nature and can be engaging with literature that is also interdisciplinary, programme theory should reflect the expertise and experience of all team members, as well as that of external stakeholders where appropriate. Programme theory is also used as a shorthand communication tool, and the process of developing a working theoretical model within a team can help to simplify the conceptual model into a format that is understandable within review teams but can also be used to involve external stakeholders, as is often necessary in the review process [ 70 ].

A logic model should be used as an early warning system

Logic models originated as planning and communication tools in the evaluation literature. In systematic reviews, however, they can also provide the first test of the underlying conceptual model of the review. If a review question cannot be represented in a logic model (or a theory of change in the case of highly complex issues), this can signal that the question itself may be too broad or needs further refining. A series of logic models may sometimes better represent the underlying concepts and the overall research question driving the review, which may in turn reflect a need to undertake a series of reviews rather than a single review, particularly where resources are limited [ 73 ]. Alternatively, as is often the case with complex systems-based interventions (as encountered in many reviews of international development initiatives published on the 3ie database), the intervention may be based on a number of components whose mechanisms are relatively well established and understood, each of which could be represented individually through logic models, and a theory of change may better represent the intervention as a whole. The tool can also help the reviewer to assess the degree to which the review may focus on collecting evidence around particular pathways of outcomes, and the potential contribution the review can make to the field, helping to establish whether the scope of the review is broad and deep (as might be the ideal given sufficient resources) or narrower and more limited in scope and depth [ 73 ]. This can also help to manage the expectations of stakeholders at the outset. Logic models can be used as the basis for developing a systematic review protocol and should be considered living documents, subject to several iterations as the focus of the review is clarified during protocol development. They can both guide and reflect the review scope and focus during the preparation of a review protocol.

There is no set format for a Logic Model (or Theory of Change), but there are key components that should be considered

Most logic models depict, at a minimum, the elements included in the PICO/T/S framework (patient/problem/population; intervention; comparison/control/comparator; outcomes; and time/setting) [ 76 ]. However, a logic model represents a causal chain of events resulting from an intervention (or from exposure, membership of a group, or other ‘treatment’); it is therefore necessary to consider how outcomes may precede or succeed one another sequentially. Dividing outcomes into distal, intermediate, and proximal categories (relative to the intervention) is a strategy often used to help identify the sets of pre-existing conditions or events needed to achieve a given outcome. The result is a causal chain of events, including outputs and outcomes, that represents the preconditions hypothesised to lead to distal outcomes. Outcomes are only achieved once different sets of outputs are met; these outputs may represent milestones of the intervention but are not necessarily success criteria of the intervention in themselves (see, for example, Fig 3 ). In the case of reviews of observational studies, the notion of outputs (and even of interventions and intervention components) may be less relevant; these may instead be better represented as ‘causes’ and potential ‘intervention points’ [ 71 ], also structured sequentially to indicate which are necessary preconditions for later events in the causal chain.

Many of the elements described above refer to the role of the intervention (or condition) in changing the outcomes for an individual (or other study unit), which can also be referred to as a theory of change; the elements of the causal chain that reflect the intervention and its modifiable elements are known as the theory of action [ 11 , 77 ]. The theory of action usually includes the modifiable components of the intervention needed to achieve later outputs and outcomes, such as the study design, resources, and process-level factors such as quality and adherence. Other modifiable elements, including population- or group-level moderators, can also be included, and even the underlying conceptual theories that support different interventions may be represented as potential modifiers. Finally, some of the contextual factors that reflect the environments in which interventions take place can also be represented. Within our example in Fig 3 , these include school-level factors, such as the intake of the school and its local neighbourhood, as well as broader health service factors and local health policies. For some reviews and studies, the influence of these contextual factors may itself be the focus of the review.
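To make this structure concrete, the causal chain can be sketched as a small data structure. The sketch below is illustrative only: the class, field, and example names are invented, loosely echoing the school asthma example in Fig 3 rather than reproducing any reviewed model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    """One node in a logic model's causal chain."""
    name: str
    kind: str  # e.g. "input", "activity", "output", "proximal",
               # "intermediate", or "distal"
    preconditions: List["Element"] = field(default_factory=list)

def print_chain(element: "Element", depth: int = 0) -> None:
    """Walk back through the preconditions hypothesised to produce an outcome."""
    print("  " * depth + f"{element.kind}: {element.name}")
    for pre in element.preconditions:
        print_chain(pre, depth + 1)

# A hypothetical fragment loosely echoing the school asthma example in Fig 3:
training = Element("School staff trained in asthma management", "output")
policy = Element("School asthma policy adopted", "output", [training])
symptoms = Element("Fewer symptom days among pupils", "intermediate", [policy])
admissions = Element("Reduced emergency admissions", "distal", [symptoms])

print_chain(admissions)
```

Printed, the chain reads from the distal outcome back through its hypothesised preconditions, mirroring the sequential output-to-outcome logic described above.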

Summary and Conclusions

In the past, whether justified or not, a critique often levelled at systematic reviews has been the absence of theory when classifying intervention components [ 78 ]. The absence of theory in reviews has clear negative consequences for the robustness of the findings and the applicability and validity of recommendations, and, in the absence of mechanistic theories of why interventions work, it limits the generalisability of the findings. A number of systematic reviewers are beginning to address this critique directly by considering the methodological options available when reviewing theories [ 69 , 78 , 79 ], while others have gone further by exploring the role that differences in taxonomies of theory play in explaining effect sizes [ 78 , 80 , 81 ]. Nevertheless, despite the benefits of, and need for, using theory to guide secondary data analysis, reviewers may be confronted by several situations where conceptualising the theoretical framework itself is problematic. Such instances include those where there is little available detail on the theories underlying interventions; where competing theories or disciplinary differences exist in the articulation of theories for the same interventions (requiring synthesis) [ 79 ]; where a review topic necessitates the grouping and synthesis of very different interventions to address a particular research question; or, more fundamentally, where there is a need to consider alternative definitions, determinants, and outcomes of interventions that goes beyond representing these within ‘black boxes’. In common with others before us [ 7 , 8 ], in this paper we view a logic model as a tool to help reviewers overcome these challenges and meet these needs by providing a framework for ‘thinking’ conceptually.

Much of this paper examines the application of logic models to ‘interventionist’ systematic reviews, and we have not directly considered their use in systematic reviews of observational phenomena. Certainly, while some of the terminology would need to change to reflect the absence of ‘outputs’ and ‘resources’, the benefits to the review process would remain. For some, this idea may simply be too close to that of a graphical depiction of a conceptual framework. However, the logic model is distinct in that it represents only part of a conceptual framework: it does not definitively represent a single ideological perspective or epistemological stance, together with the accompanying assumptions. Arguably, a theory of change often does attempt to represent an epistemological framework, and this is why we view the two tools as distinct. As the goal of a systematic review is to uncover the strength of evidence and the likely mechanisms underlying how different parts of a causal pathway relate to one another, the evidence can then be synthesised into a theory of change; and we maintain the emphasis on this being a ‘theory’, to be investigated and tested across different groups and across different places and times.

In investigating the use of logic models, we found that among the comparatively small number of reviewers who used a theory of change or logic model, many described a limited role, one confined to the beginning of the review rather than extending to communicating review findings. A worked example may help expand their use, as will making their use a formal requirement, but the formation of guidelines will help to ensure that, where they are used, they are used to greater effect. A recommendation of this paper is for greater guidance to be prepared around how programme theory could and should be used in systematic reviewing, incorporating the elements raised here and others. Much of this paper is concerned with the benefits that logic models can bring to reviewers as a pragmatic tool for carrying out the review, as a tool to help strengthen the quality of reviews, and, perhaps most importantly, as a communication tool to disseminate the findings to reviewers and trialists within academic communities and beyond, to policy-makers and funders. With respect to this last purpose in particular, improving the way in which logic models are used in reviews can only serve to increase the impact that systematic reviews can have in shaping policy and influencing practice in healthcare and beyond.

Supporting Information

S1 Checklist. PRISMA Checklist.

https://doi.org/10.1371/journal.pone.0142187.s001

S1 Flowchart. PRISMA Flowchart.

https://doi.org/10.1371/journal.pone.0142187.s002

S1 Table. Data Coding Template.

https://doi.org/10.1371/journal.pone.0142187.s003

Acknowledgments

We would like to acknowledge the contributions of Jonathan Grigg, Toby Lasserson and Vanessa McDonald in helping to shape the development of the logic model.

Author Contributions

Conceived and designed the experiments: DK JT. Analyzed the data: DK KH. Contributed reagents/materials/analysis tools: DK. Wrote the paper: DK JT.

  • 6. Gough D, Oliver S, Thomas J. Introducing systematic reviews. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews. London: Sage; 2012.
  • 11. Funnell SC, Rogers PJ. Purposeful program theory: effective use of theories of change and logic models. San Francisco, CA: John Wiley & Sons; 2011.
  • 12. Ni Ogain E, Lumley T, Pritchard D. Making an Impact. London: NPC, 2012.
  • 14. Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions. Chichester: Wiley-Blackwell; 2011.
  • 15. The Cochrane Public Health Group. Guide for developing a Cochrane protocol. Melbourne, Australia: University of Melbourne, 2011.
  • 16. Brennan S. Using logic models to capture complexity in systematic reviews: Commentary. Oxford: The Cochrane Operations Unit, 2012.
  • 17. Francis D, Baker P. Developing and using logic models in reviews of complex interventions. 19th Cochrane Colloquium; October 18th; Madrid: Cochrane; 2011.
  • 19. Bhavsar A, Waddington H. 3ie tips for writing strong systematic review applications. London: 3ie; 2012. Available: http://www.3ieimpact.org/en/funding/systematic-reviews-grants/3ie-tips-for-writing-systematic-review-applications/. Accessed 3 December 2014.
  • 20. Barlow J, MacMillan H, Macdonald G, Bennett C, Larkin SK. Psychological interventions to prevent recurrence of emotional abuse of children by their parents. The Cochrane Library. 2013.
  • 21. McLaughlin AE, Macdonald G, Livingstone N, McCann M. Interventions to build resilience in children of problem drinkers. The Cochrane Library. 2014.
  • 25. Burns J, Boogaard H, Turley R, Pfadenhauer LM, van Erp AM, Rohwer AC, et al. Interventions to reduce ambient particulate matter air pollution and their effect on health. The Cochrane Library. 2014.
  • 26. Costello JT, Baker PRA, Minett GM, Bieuzen F, Stewart IB, Bleakley C. Whole-body cryotherapy (extreme cold air exposure) for preventing and treating muscle soreness after exercise in adults. The Cochrane Library. 2013.
  • 27. Gavine A, MacGillivray S, Williams DJ. Universal community-based social development interventions for preventing community violence by young people 12 to 18 years of age. The Cochrane Library. 2014.
  • 28. Kuehnl A, Rehfuess E, von Elm E, Nowak D, Glaser J. Human resource management training of supervisors for improving health and well-being of employees. The Cochrane Library. 2014.
  • 29. Land M-A, Christoforou A, Downs S, Webster J, Billot L, Li M, et al. Iodine fortification of foods and condiments, other than salt, for preventing iodine deficiency disorders. The Cochrane Library. 2013.
  • 30. Langbecker D, Diaz A, Chan RJ, Marquart L, Hevey D, Hamilton J. Educational programmes for primary prevention of skin cancer. The Cochrane Library. 2014.
  • 31. Michelozzi P, Bargagli AM, Vecchi S, De Sario M, Schifano P, Davoli M. Interventions for reducing adverse health effects of high temperature and heatwaves. The Cochrane Library. 2014.
  • 32. Peña-Rosas JP, Field MS, Burford BJ, De-Regil LM. Wheat flour fortification with iron for reducing anaemia and improving iron status in populations. The Cochrane Library. 2014.
  • 33. Ramke J, Welch V, Blignault I, Gilbert C, Petkovic J, Blanchet K, et al. Interventions to improve access to cataract surgical services and their impact on equity in low- and middle- income countries. The Cochrane Library. 2014.
  • 34. Sreeramareddy CT, Sathyanarayana TN. Decentralised versus centralised governance of health services. The Cochrane Library. 2013.
  • 37. Brody C, Dworkin S, Dunbar M, Murthy P, Pascoe L. The effects of economic self-help group programs on women's empowerment: A systematic review protocol. Oslo, Norway: The Campbell Collaboration, 2013.
  • 38. Cirera X, Lakshman R, Spratt S. The impact of export processing zones on employment, wages and labour conditions in developing countries. London: 3ie, 2013.
  • 40. Giedion U, Andrés Alfonso E, Díaz Y. The impact of universal coverage schemes in the developing world: a review of the existing evidence. Washington DC: World Bank, 2013.
  • 41. Gonzalez L, Piza C, Cravo TA, Abdelnour S, Taylor L. The Impacts of Business Support Services for Small and Medium Enterprises on Firm Performance in Low-and Middle-Income Countries: A Systematic Review. The Campbell Collaboration, 2013.
  • 42. Higginson A, Mazerolle L, Benier KH, Bedford L. Youth gang violence in developing countries: a systematic review of the predictors of participation and the effectiveness of interventions to reduce involvement. London: 3ie, 2013.
  • 43. Higginson A, Mazerolle L, Davis J, Bedford L, Mengersen K. The impact of policing interventions on violent crime in developing countries. London: 3ie, 2013.
  • 44. Kingdon G, Aslam M, Rawal S, Das S. Are contract and para-teachers a cost effective intervention to address teacher shortages and improve learning outcomes? London: Institute of Education, 2012.
  • 45. Kluve J, Puerto S, Robalino D, Rother F, Weidenkaff F, Stoeterau J, et al. Interventions to improve labour market outcomes of youth: a systematic review of training, entrepreneurship promotion, employment services, mentoring, and subsidized employment interventions. The Campbell Collaboration, 2013.
  • 46. Loevinsohn M, Sumberg J, Diagne A, Whitfield S. Under What Circumstances and Conditions Does Adoption of Technology Result in Increased Agricultural Productivity? A Systematic Review. Brighton: Institute for Development Studies, 2013.
  • 47. Lynch U, Macdonald G, Arnsberger P, Godinet M, Li F, Bayarre H, et al. What is the evidence that the establishment or use of community accountability mechanisms and processes improve inclusive service delivery by governments, donors and NGOs to communities. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, 2013.
  • 48. Molina E, Pacheco A, Gasparini L, Cruces G, Rius A. Community Monitoring to Curb Corruption and Increase Efficiency in Service Delivery: Evidence from Low Income Communities. Campbell Collaboration, 2013.
  • 49. Posthumus H, Martin A, Chancellor T. A systematic review on the impacts of capacity strengthening of agricultural research systems for development and the conditions of success. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2013. ISBN 1907345469.
  • 50. Samarajiva R, Stork C, Kapugama N, Zuhyle S, Perera RS. Mobile phone interventions for improving economic and productive outcomes for farm and non-farm rural enterprises and households in low and middle-income countries. London: 3ie, 2013.
  • 51. Samii C, Lisiecki M, Kulkarni P, Paler L, Chavis L. Impact of Payment for Environmental Services and De-Centralized Forest Management on Environmental and Human Welfare: A Systematic Review. The Campbell Collaboration, 2013.
  • 52. Seguin M, Niño-Zarazúa M. What do we know about non-clinical interventions for preventable and treatable childhood diseases in developing countries? United Nations University, 2013.
  • 53. Spangaro J, Zwi A, Adogu C, Ranmuthugala G, Davies GP, Steinacker L. What is the evidence of the impact of initiatives to reduce risk and incidence of sexual violence in conflict and post-conflict zones and other humanitarian crises in lower-and middle-income countries? London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London, 2013.
  • 56. Tripney J, Roulstone A, Vigurs C, Moore M, Schmidt E, Stewart R. Protocol for a Systematic Review: Interventions to Improve the Labour Market Situation of Adults with Physical and/or Sensory Disabilities in Low-and Middle-Income Countries. The Campbell Collaboration, 2013.
  • 58. Welch VA, Awasthi S, Cumberbatch C, Fletcher R, McGowan J, Krishnaratne S, et al. Deworming and Adjuvant Interventions for Improving the Developmental Health and Well-being of Children in Low- and Middle- income Countries: A Systematic Review and Network Meta-analysis. Campbell Collaboration, 2013.
  • 59. Willey B, Smith Paintain L, Mangham L, Car J, Armstrong Schellenberg J. Effectiveness of interventions to strengthen national health service delivery on coverage, access, quality and equity in the use of health services in low and lower middle income countries. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London 2013.
  • 61. Rohwer AC, Rehfuess E. Logic model templates for systematic reviews of complex health interventions. Cochrane Colloquium; Quebec, Canada; 2013.
  • 66. Alamgir AH. Texas Asthma Control Program: Strategic Evaluation Plan 2011–2014. Austin, TX: Texas Department of State Health Services 2012.
  • 67. AAP. Schooled in Asthma–Physicians and Schools Managing Asthma Together. Elk Grove Village, IL: American Academy of Pediatrics (AAP) 2001.
  • 70. Rees R, Oliver S. Stakeholder perspectives and participation in reviews. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews. London: Sage Publications; 2012.
  • 73. Gough D, Thomas J. Commonality and diversity in reviews. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews. London: Sage; 2012.
  • 74. Waddington H. Response to 'Developing and using logic models in reviews of complex interventions'. 19th Cochrane Colloquium; October 18th; Madrid: Cochrane; 2011.
  • 75. Clark H, Anderson AA. Theories of Change and Logic Models: Telling Them Apart. American Evaluation Association; Atlanta, Georgia; 2004.
  • 76. Brunton G, Stansfield C, Thomas J. Finding relevant studies. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews. London: Sage; 2012.
  • 77. Chen H-T. Theory-driven evaluations. Newbury Park, CA, USA: Sage Publications; 1990.

Creating Logical Flow When Writing Scientific Articles

  • October 2021
  • Journal of Korean medical science 36(40)
  • CC BY-NC 4.0

[Figure: Construction of a well-structured paragraph.]


August 15, 2019

Can We Rely on Our Intuition?

As the world becomes more complex, making decisions becomes harder. Is it best to depend on careful analysis or to trust your gut?

By Laura Kutsch


“I go with my gut feelings,” says investor Judith Williams. Sure, you might think, “so do I,” if the choice is between chocolate and vanilla ice cream. But Williams is dealing with real money in the five and six figures.

Williams is one of the lions on the program The Lions’ Den, a German television show akin to Shark Tank. She and other participants invest their own money in business ideas presented by contestants. She is not the only one who trusts her gut. Intuition, it seems, is on a roll: bookstores are full of guides advising us how to heal, eat or invest intuitively. They promise to unleash our inner wisdom and strengths we do not yet know we have.

But can we really rely on intuition, or is it a counsel to failure? Although researchers have been debating the value of intuition in decision-making for decades, they continue to disagree.


A Source of Error?

Intuition can be thought of as insight that arises spontaneously without conscious reasoning. Daniel Kahneman, who won a Nobel prize in economics for his work on human judgment and decision-making, has proposed that we have two different thought systems: system 1 is fast and intuitive; system 2 is slower and relies on reasoning. The fast system, he holds, is more prone to error. It has its place: it may increase the chance of survival by enabling us to anticipate serious threats and recognize promising opportunities. But the slower thought system, by engaging critical thinking and analysis, is less susceptible to producing bad decisions.  

Kahneman, who acknowledges that both systems usually operate when people think, has described many ways that the intuitive system can cloud judgment. Consider, for example, the framing effect: the tendency to be influenced by the way a problem is posed or a question is asked. In the 1980s Kahneman and his colleague Amos Tversky presented a hypothetical public health problem to volunteers and framed the set of possible solutions in different ways to different volunteers. In all cases, the volunteers were told to imagine that the U.S. was preparing for an outbreak of an unusual disease expected to kill 600 people and that two alternative programs for combating the disease had been proposed.

For one group, the choices were framed by Tversky and Kahneman in terms of gains—how many people would be saved:

If Program A is adopted, 200 people will be saved.

If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved.

The majority of volunteers selected the first option, Program A.

For another group, the choices were framed in terms of losses—how many people would die:

If Program C is adopted, 400 people will die.

If Program D is adopted, there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die.

In this case, the vast majority of volunteers were willing to gamble and selected the second option, Program D.

In fact, the options presented to both groups were the same: the first program would save 200 people and lose 400. The second offered a one-in-three chance that everyone would live and a two-in-three chance that everyone would die. Framing the alternatives in terms of lives saved or lives lost is what made the difference. When choices are framed in terms of gains, people often become risk-averse, whereas when choices are framed in terms of losses, people often become more willing to take risks.
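The equivalence of the two framings is easy to verify with a little arithmetic. The short sketch below computes the expected number of survivors under each program using the figures from the scenario above; it is purely illustrative.

```python
TOTAL = 600  # expected deaths if nothing is done

# Gain frame: how many are saved
program_a = 200                        # 200 saved for certain
program_b = (1/3) * TOTAL + (2/3) * 0  # gamble: save everyone or no one

# Loss frame: how many die
program_c = TOTAL - 400                # 400 die for certain => 200 survive
program_d = (1/3) * TOTAL + (2/3) * 0  # gamble: nobody dies or everyone dies

# All four programs have the same expected number of survivors: 200
print(program_a, program_b, program_c, program_d)
```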

Intuition’s Benefits

Other cognitive scientists argue that intuition can lead to effective decision-making more commonly than Kahneman suggests. Gerd Gigerenzer of the Max Planck Institute for Human Development in Berlin is among them. He, too, says that people rarely make decisions on the basis of reason alone, especially when the problems faced are complex. But he thinks intuition’s merit has been vastly underappreciated. He views intuition as a form of unconscious intelligence.

Intuitive decisions can be grounded in heuristics: simple rules of thumb. Heuristics screen out large amounts of information, thereby limiting how much needs to be processed. Such rules of thumb may be applied consciously, but in general we simply follow them without being aware that we are doing so. Although they can lead to mistakes, as Kahneman points out, Gigerenzer emphasizes that they can be based on reliable information while leaving out unnecessary information. For example, an individual who wants to buy a good pair of running shoes might bypass research and brain work by simply purchasing the same running shoes used by an acquaintance who is an experienced runner.
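A heuristic of this kind can be written almost verbatim as code. The sketch below implements the running-shoe rule of thumb from the example above; the function and data names are invented for illustration.

```python
def imitate_the_expert(options, expert_choice):
    """Rule of thumb: skip the product research and copy a trusted expert.

    The heuristic deliberately ignores most available information
    (price, reviews, specifications) and relies on a single cue that
    tends to be reliable: what the experienced runner actually wears.
    """
    return expert_choice if expert_choice in options else None

available_shoes = ["Model A", "Model B", "Model C"]
print(imitate_the_expert(available_shoes, "Model B"))  # -> Model B
```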

In 2006 a paper by Ap Dijksterhuis and his colleagues, then at the University of Amsterdam, came to a similarly favorable view of intuition’s value. The researchers tested what they called the “deliberation without attention” hypothesis: although conscious thought makes the most sense for simple decisions (for example, what size skillet to use), it can actually be detrimental when considering more complex matters, such as buying a house.

In one of their experiments, test subjects were asked to select which of four cars was the best, taking into account four characteristics, among them gas consumption and luggage space. One set of subjects had four minutes to think about the decision; another set was distracted by solving brainteasers. The distracted group made the wrong choice (according to the researchers’ criteria for the best car) more often than those who were able to think without being distracted. But if participants were asked to assess 12 characteristics, the opposite happened: undisturbed reflection had a negative effect on decision-making, and only 25 percent selected the best car. In contrast, 60 percent of the subjects distracted by brainteasers got it right.
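The researchers’ criterion for the “best” car amounted to counting positive attributes: the best option was the one with the most desirable characteristics. The sketch below is a hypothetical reconstruction of that scoring rule, not the study’s actual materials.

```python
# Each car is rated on a set of attributes: 1 = positive, 0 = negative.
# The attribute counts here are invented to mirror the study's design,
# in which one car was objectively best (most positive attributes).
cars = {
    "Car 1": [1, 1, 1, 0],
    "Car 2": [1, 0, 1, 0],
    "Car 3": [0, 1, 0, 0],
    "Car 4": [1, 0, 0, 0],
}

best_car = max(cars, key=lambda name: sum(cars[name]))
print(best_car)  # -> Car 1, the normatively correct choice
```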

Investigators have been unable to replicate these findings, however. And in a 2014 review Ben R. Newell of the University of New South Wales and David R. Shanks of University College London concluded that the effect of intuition has been overrated by many researchers and that there is little evidence that conscious thought arrives at worse solutions in complex situations.

What about Real Life?

Of course, problems in the real world can be considerably more complicated than the artificially constructed ones often presented in laboratory experiments. In the late 1980s this difference sparked the Naturalistic Decision Making movement, which seeks to determine how people make decisions in real life. With questionnaires, videos and observations, it studies how firefighters, nurses, managers and pilots use their experience to deal with challenging situations involving time pressure, uncertainty, unclear goals and organizational constraints.

Researchers in the field found that highly experienced individuals tend to compare patterns when making decisions. They are able to recognize regularities, repetitions and similarities between the information available to them and their past experiences. They then imagine how a given situation might play out. This combination enables them to make relevant decisions quickly and competently. It further became evident that the certainty of the decider did not necessarily increase with an increase in information. On the contrary: too much information can prove detrimental.

Gary Klein, one of the movement’s founders, has called pattern matching “the intuitive part” and mental simulation “the conscious, deliberate and analytical part.” He has explained the benefits of the combination this way: “A purely intuitive strategy relying only on pattern matching would be too risky because sometimes the pattern matching generates flawed options. A completely deliberative and analytic strategy would be too slow.” In the case of firefighters, he notes, if a slow, systematic approach were used, “the fires would be out of control by the time the commanders finished deliberating.”
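Klein’s description of this combination, pattern matching to generate a candidate action and mental simulation to vet it, can be caricatured in a few lines of code. Everything in the sketch below (the case data, the matching rule, the simulation stub) is an invented illustration, not Klein’s actual recognition-primed decision model.

```python
def recognition_primed_decision(situation, past_cases, simulate):
    """Pattern-match against experience, then vet the first match mentally.

    `past_cases` maps remembered situation patterns to candidate actions;
    `simulate` stands in for the deliberate mental-simulation step.
    """
    for pattern, action in past_cases.items():
        if pattern in situation:             # fast, intuitive pattern matching
            if simulate(action, situation):  # slower, deliberate vetting
                return action                # first workable option is taken
    return "no pattern recognised: fall back to slow analysis"

cases = {
    "smoke from basement": "ventilate, then attack via the stairwell",
    "roof sagging": "withdraw and fight defensively",
}
choice = recognition_primed_decision(
    "thick smoke from basement stairs", cases, lambda a, s: True
)
print(choice)
```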

Intuition Is Not Irrational

Kamila Malewska of the Poznań University of Economics and Business in Poland has also studied intuition in real-world settings and likewise finds that people often apply a combination of strategies. She asked managers at a food company how they use intuition in their everyday work. Almost all of them stated that, in addition to rational analyses, they tapped gut feelings when making decisions. More than half tended to lean on rational approaches; about a quarter used a strategy that blended rational and intuitive elements; and about a fifth generally relied on intuition alone. Interestingly, upper-level managers tended more toward intuition than their lower-level colleagues.

Malewska thinks that intuition is neither irrational nor the opposite of logic. Rather it is a quicker and more automatic process that plumbs the many deep resources of experience and knowledge that people have gathered over the course of their lives. Intuition, she believes, is an ability that can be trained and can play a constructive role in decision-making.

Field findings published in 2017 by Lutz Kaufmann of the Otto Beisheim School of Management in Germany and his co-workers support the view that a mixture of thinking styles can be helpful in decision-making. The participants in their study, all purchasing managers, indicated how strongly they agreed or disagreed with various statements relating to their decision-making over the prior three months. For example: “I looked extensively for information before making a decision” (rational), “I did not have time to decide analytically, so I relied on my experience” (experience-based), or “I was not completely sure how to decide, so I decided based on my gut feeling” (emotional). The researchers, who consider experience-based and emotional processes as “two dimensions of intuitive processing,” also rated the success of a manager based on the unit price the person negotiated for a purchased product, as well as on the quality of the product and the punctuality of delivery.

Rational decision-making was associated with good performance. A mixture of intuitive and rational approaches also proved useful; however, a purely experience-based and a purely emotional approach did not work well. In other words, a blending of styles, which is frequently seen in everyday life, seems beneficial.

Economists Marco Sahm of the University of Bamberg and Robert K. von Weizsäcker of the Technical University of Munich study the extent to which our background knowledge determines whether rationality or gut feeling is more effective. Both Sahm and Weizsäcker are avid chess players, and they brought this knowledge to bear on their research. As children, they both learned intuitively by imitating the moves of their opponents and seeing where they led. Later, they approached the game more analytically, by reading chess books that explained and illustrated promising moves. Over time Weizsäcker became a very good chess player and has won international prizes. These days he bases his play mainly on intuition.

The two economists developed a mathematical model that takes the costs and benefits of both strategies into account. They have come to the conclusion that whether it is better to rely more on rational assessments or intuition depends both on the complexity of a particular problem and on the prior knowledge and cognitive abilities of the person. Rational decisions are more precise but entail higher costs than intuitive ones—for example, they involve more effort spent gathering and then analyzing information. This additional cost can decrease over time, but it will never disappear. The cost may be worth it if the problem is multifaceted and the decision maker gains a lot of useful information quickly (if the decision maker’s “learning curve is steep”). Once a person has had enough experience with related problems, though, intuitive decision-making that draws on past learning is more likely to yield effective decisions, Sahm and Weizsäcker say. The intuitive approach works better in that case because relying on accumulated experience and intuitive pattern recognition spares one the high costs of rational analysis.
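A toy version of this trade-off can be written down directly. In the sketch below, each strategy’s payoff is accuracy minus deliberation cost, and intuition’s accuracy grows with experience; the functional forms and numbers are assumptions chosen for illustration, not the economists’ actual model.

```python
def payoff_rational(analysis_cost=0.3):
    # Rational analysis is precise regardless of experience,
    # but always pays a deliberation cost (information gathering, effort).
    return 1.0 - analysis_cost

def payoff_intuitive(complexity, experience):
    # Intuition is cheap, but its accuracy depends on how much
    # accumulated experience the decision maker has with similar problems.
    return min(1.0, experience / (experience + complexity))

for experience in (1, 5, 50):
    rational = payoff_rational()
    intuitive = payoff_intuitive(complexity=10, experience=experience)
    winner = "intuition" if intuitive > rational else "rational analysis"
    print(f"experience={experience:>2}: {winner} "
          f"({intuitive:.2f} vs {rational:.2f})")
```

Run, the loop shows rational analysis winning at low experience and intuition overtaking it once experience accumulates, matching the chess players’ trajectory described above.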

One thing is clear: intuition and rationality are not necessarily opposites. Rather it is advantageous to master both intuition and analytic skills. Let us not follow our inner voice blindly, but let us not underestimate it either.

Laura Kutsch is a communications psychologist and journalist based in Lebach, Germany.

What's the difference between deductive reasoning and inductive reasoning?

Deductive reasoning and inductive reasoning are easy to mix up. Learn what the difference is and see examples of each type of scientific reasoning.

Sherlock Holmes, the fictional sleuth who famously resides on Baker Street, is known for his impressive powers of logical reasoning. With a quick visual sweep of a crime scene, he generates hypotheses, gathers observations and draws inferences that ultimately reveal the responsible criminal's methods and identity.

Holmes is often said to be a master of deductive reasoning, but he also leans heavily on inductive reasoning. Because of their similar names, however, these concepts are easy to mix up.

So what's the difference between deductive and inductive reasoning? Read on to learn the key distinctions between these two modes of logic used by literary detectives and real-life scientists alike. 


What is deductive reasoning?

Deductive reasoning, also known as deduction, is a basic form of reasoning that uses a general principle or premise as grounds to draw specific conclusions. 

This type of reasoning leads to valid conclusions when the premise is known to be true — for example, "all spiders have eight legs" is known to be a true statement. Based on that premise, one can reasonably conclude that, because tarantulas are spiders, they, too, must have eight legs.

The scientific method uses deduction to test scientific hypotheses and theories, which predict certain outcomes if they are correct, said Sylvia Wassertheil-Smoller, a researcher and professor emerita at Albert Einstein College of Medicine.

"We go from the general — the theory — to the specific — the observations," Wassertheil-Smoller told Live Science. In other words, theories and hypotheses can be built on past knowledge and accepted rules, and then tests are conducted to see whether those known principles apply to a specific case.

Deductive reasoning begins with a first premise, which is followed by a second premise and an inference, or a conclusion based on reasoning and evidence. A common form of deductive reasoning is the "syllogism," in which two statements — a major premise and a minor premise — together reach a logical conclusion. 

For example, the major premise "Every A is B" could be followed by the minor premise "This C is A." Those statements would lead to the conclusion that "This C is B." Syllogisms are considered a good way to test deductive reasoning to make sure the argument is valid.
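The “Every A is B” form can be checked mechanically. In the minimal sketch below, categories are modelled as Python sets; the premises are asserted, and the conclusion then follows necessarily. The example data are invented.

```python
def syllogism_conclusion(A, B, c):
    """Major premise: every A is B (A is a subset of B).
    Minor premise: c is an A.
    Conclusion: c is a B, guaranteed whenever both premises hold."""
    assert A <= B, "major premise fails: not every A is B"
    assert c in A, "minor premise fails: c is not an A"
    return c in B  # always True given the premises above

spiders = {"tarantula", "wolf spider"}
eight_legged_things = {"tarantula", "wolf spider", "scorpion"}
print(syllogism_conclusion(spiders, eight_legged_things, "tarantula"))  # True
```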

In deductive reasoning, if something is true of a class of things in general, it is also true for all members of that class. 

Deductive conclusions are reliable provided that the premises they're based on are true, but you run into trouble if they're false, according to Norman Herr, a professor of secondary education at California State University, Northridge. For instance, the argument "All bald men are grandfathers. Harold is bald. Therefore, Harold is a grandfather" is logically valid, but it is untrue because the original premise is false.


Deductive reasoning examples

Here are some examples of deductive reasoning:

Major premise: All mammals have backbones. Minor premise: Humans are mammals. Conclusion: Humans have backbones.

Major premise: All birds lay eggs. Minor premise: Pigeons are birds. Conclusion: Pigeons lay eggs.

Major premise: All plants perform photosynthesis. Minor premise: A cactus is a plant. Conclusion: A cactus performs photosynthesis. 

What is inductive reasoning?

Inductive reasoning uses specific and limited observations to draw general conclusions that can be applied more widely. So while deductive reasoning is more of a top-down approach — moving from a general premise to a specific case — inductive reasoning is the opposite. It uses a bottom-up approach to generate new premises, or hypotheses, based on observed patterns, according to the University of Illinois.

Inductive reasoning is also called inductive logic or inference. "In inductive inference, we go from the specific to the general," Wassertheil-Smoller told Live Science. "We make many observations, discern a pattern, make a generalization, and infer an explanation or a theory." 

In science, she added, there is a constant interplay between inductive and deductive reasoning that leads researchers steadily closer to a truth that can be verified with certainty.

The reliability of a conclusion made with inductive logic depends on the completeness of the observations. For instance, let's say you have a bag of coins; you pull three coins from the bag, and each coin is a penny. Using inductive logic, you might then propose that all of the coins in the bag are pennies.

Even though all of the initial observations — that each coin taken from the bag was a penny — are correct, inductive reasoning does not guarantee that the conclusion will be true. The next coin you pull could be a quarter. 
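The coin example is easy to simulate, which makes induction's failure mode concrete. The bag composition below is an invented assumption (eight pennies, two quarters); the sketch counts how often a three-coin sample would lead to the false generalization that the bag holds only pennies.

```python
import random

bag = ["penny"] * 8 + ["quarter"] * 2  # hypothetical mix; induction can fail

trials = 10_000
over_generalized = 0
for _ in range(trials):
    sample = random.sample(bag, 3)
    if all(coin == "penny" for coin in sample):
        # Inductive leap: "all coins in the bag are pennies" -- false here,
        # since the bag always contains two quarters.
        over_generalized += 1

print(f"All-penny samples (false generalization): {over_generalized/trials:.0%}")
```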

Here's another example: "Penguins are birds. Penguins can't fly. Therefore, no birds can fly." The conclusion does not follow logically from the statements, because the only birds included in the sample were penguins.

Despite this inherent limitation, inductive reasoning has its place in the scientific method, and scientists use it to form hypotheses and theories. Researchers then use deductive reasoning to apply the theories to specific situations.


Inductive reasoning examples

Here are some examples of inductive reasoning:

Data: I see fireflies in my backyard every summer. Hypothesis: This summer, I will probably see fireflies in my backyard.

Data: I tend to catch colds when people around me are sick. Hypothesis: Colds are infectious.

Data: Every dog I meet is friendly. Hypothesis: Most dogs are usually friendly. 

What is abductive reasoning?

Another form of scientific reasoning that diverges from inductive and deductive reasoning is called abductive. Abductive reasoning is a form of logic that starts with an incomplete set of observations and proceeds to the likeliest possible explanation for that data, according to Butte College in Oroville, California. 

It is based on making and testing hypotheses using the best information available. It often entails making an educated guess after observing a phenomenon for which there is no clear explanation. 

For example, a person walks into their living room and finds torn-up papers all over the floor. The person's dog has been alone in the apartment all day. The person concludes that the dog tore up the papers because it is the most likely scenario. It's possible that a family member with a key to the apartment swung by and destroyed the papers, or it may have been done by the landlord. But the dog theory is the most likely conclusion based on the data at hand. 

Abductive reasoning is useful for forming hypotheses to be tested. For instance, abductive reasoning is used by doctors when they're assessing which ailment a patient likely has based on their symptoms. They then check which potential diagnosis is correct using medical tests. Jurors also use abductive reasoning to make decisions based on the select evidence presented to them by lawyers and witnesses.
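Inference to the likeliest explanation can be caricatured as choosing the hypothesis with the highest plausibility score given the observation. The scores below are invented for the torn-paper example and simply encode the reasoning described above; a real diagnostic system would estimate such numbers from evidence.

```python
# Plausibility of each explanation for "torn-up papers on the floor",
# on a 0-1 scale. The numbers are illustrative, not derived from data.
explanations = {
    "the dog tore up the papers": 0.85,
    "a family member came by and destroyed them": 0.10,
    "the landlord destroyed them": 0.05,
}

best_explanation = max(explanations, key=explanations.get)
print(best_explanation)  # abduction selects the likeliest account
```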


Abductive reasoning examples

Here are some examples of abductive reasoning:


Observation: The grass is wet outside when you get up in the morning, but you haven't recently watered the lawn. Best-guess explanation: It likely rained last night.

Observation: At a restaurant, you see a bag and a half-eaten sandwich at an empty table. Best-guess explanation: The table's occupant is probably in the restroom.

Observation: You enter a basketball court and see a group of people in red shirts celebrating while another group in blue shirts sulks. Best-guess explanation: The red team probably just beat the blue team in a game.

Editor's note: This article was updated on March 7, 2024.



Positive thinking: Stop negative self-talk to reduce stress

Positive thinking helps with stress management and can even improve your health. Practice overcoming negative self-talk with examples provided.

Is your glass half-empty or half-full? How you answer this age-old question about positive thinking may reflect your outlook on life, your attitude toward yourself, and whether you're optimistic or pessimistic — and it may even affect your health.

Indeed, some studies show that personality traits such as optimism and pessimism can affect many areas of your health and well-being. The positive thinking that usually comes with optimism is a key part of effective stress management. And effective stress management is associated with many health benefits. If you tend to be pessimistic, don't despair — you can learn positive thinking skills.

Understanding positive thinking and self-talk

Positive thinking doesn't mean that you ignore life's less pleasant situations. Positive thinking just means that you approach unpleasantness in a more positive and productive way. You think the best is going to happen, not the worst.

Positive thinking often starts with self-talk. Self-talk is the endless stream of unspoken thoughts that run through your head. These automatic thoughts can be positive or negative. Some of your self-talk comes from logic and reason. Other self-talk may arise from misconceptions you create because of a lack of information, or from expectations based on preconceived ideas of what may happen.

If the thoughts that run through your head are mostly negative, your outlook on life is more likely pessimistic. If your thoughts are mostly positive, you're likely an optimist — someone who practices positive thinking.

The health benefits of positive thinking

Researchers continue to explore the effects of positive thinking and optimism on health. Health benefits that positive thinking may provide include:

  • Increased life span
  • Lower rates of depression
  • Lower levels of distress and pain
  • Greater resistance to illnesses
  • Better psychological and physical well-being
  • Better cardiovascular health and reduced risk of death from cardiovascular disease and stroke
  • Reduced risk of death from cancer
  • Reduced risk of death from respiratory conditions
  • Reduced risk of death from infections
  • Better coping skills during hardships and times of stress

It's unclear why people who engage in positive thinking experience these health benefits. One theory is that having a positive outlook enables you to cope better with stressful situations, which reduces the harmful health effects of stress on your body.

It's also thought that positive and optimistic people tend to live healthier lifestyles — they get more physical activity, follow a healthier diet, and don't smoke or drink alcohol in excess.

Identifying negative thinking

Not sure if your self-talk is positive or negative? Some common forms of negative self-talk include:

  • Filtering. You magnify the negative aspects of a situation and filter out all the positive ones. For example, you had a great day at work. You completed your tasks ahead of time and were complimented for doing a speedy and thorough job. That evening, you focus only on your plan to do even more tasks and forget about the compliments you received.
  • Personalizing. When something bad occurs, you automatically blame yourself. For example, you hear that an evening out with friends is canceled, and you assume that the change in plans is because no one wanted to be around you.
  • Catastrophizing. You automatically anticipate the worst without any facts suggesting that the worst will happen. The drive-through coffee shop gets your order wrong, and then you think that the rest of your day will be a disaster.
  • Blaming. You hold someone else responsible for what happened to you rather than taking responsibility yourself. You avoid being responsible for your thoughts and feelings.
  • Saying you "should" do something. You think of all the things you think you should do and blame yourself for not doing them.
  • Magnifying. You make a big deal out of minor problems.
  • Perfectionism. Holding yourself to impossible standards and trying to be perfect sets you up for failure.
  • Polarizing. You see things only as either good or bad. There is no middle ground.

Focusing on positive thinking

You can learn to turn negative thinking into positive thinking. The process is simple, but it does take time and practice — you're creating a new habit, after all. Following are some ways to think and behave in a more positive and optimistic way:

  • Identify areas to change. If you want to become more optimistic and engage in more positive thinking, first identify areas of your life that you usually think negatively about, whether it's work, your daily commute, life changes or a relationship. You can start small by focusing on one area to approach in a more positive way. Think of a positive thought to manage your stress instead of a negative one.
  • Check yourself. Periodically during the day, stop and evaluate what you're thinking. If you find that your thoughts are mainly negative, try to find a way to put a positive spin on them.
  • Be open to humor. Give yourself permission to smile or laugh, especially during difficult times. Seek humor in everyday happenings. When you can laugh at life, you feel less stressed.
  • Follow a healthy lifestyle. Aim to exercise for about 30 minutes on most days of the week. You can also break it up into 5- or 10-minute chunks of time during the day. Exercise can positively affect mood and reduce stress. Follow a healthy diet to fuel your mind and body. Get enough sleep. And learn techniques to manage stress.
  • Surround yourself with positive people. Make sure those in your life are positive, supportive people you can depend on to give helpful advice and feedback. Negative people may increase your stress level and make you doubt your ability to manage stress in healthy ways.
  • Practice positive self-talk. Start by following one simple rule: Don't say anything to yourself that you wouldn't say to anyone else. Be gentle and encouraging with yourself. If a negative thought enters your mind, evaluate it rationally and respond with affirmations of what is good about you. Think about things you're thankful for in your life.

Here are some examples of negative self-talk and how you can apply a positive thinking twist to them:

Putting positive thinking into practice (negative self-talk → positive thinking)

  • "I've never done it before." → "It's an opportunity to learn something new."
  • "It's too complicated." → "I'll tackle it from a different angle."
  • "I don't have the resources." → "Necessity is the mother of invention."
  • "I'm too lazy to get this done." → "I couldn't fit it into my schedule, but I can re-examine some priorities."
  • "There's no way it will work." → "I can try to make it work."
  • "It's too radical a change." → "Let's take a chance."
  • "No one bothers to communicate with me." → "I'll see if I can open the channels of communication."
  • "I'm not going to get any better at this." → "I'll give it another try."

Practicing positive thinking every day

If you tend to have a negative outlook, don't expect to become an optimist overnight. But with practice, eventually your self-talk will contain less self-criticism and more self-acceptance. You may also become less critical of the world around you.

When your state of mind is generally optimistic, you're better able to handle everyday stress in a more constructive way. That ability may contribute to the widely observed health benefits of positive thinking.



Concepts and Reasoning: a Conceptual Review and Analysis of Logical Issues in Empirical Social Science Research

  • Published: 08 July 2023
  • Volume 58, pages 502–530 (2024)


  • Qingjiang Yao   ORCID: orcid.org/0000-0002-0550-4211 1  

429 Accesses


A substantial number of social science studies show a lack of conceptual clarity, an inadequate understanding of the nature of empirical research approaches, and an undue preference for deduction, which together have caused confusion, created paradigmatic incommensurability, and impeded scientific advancement. Through a conceptual review and analysis of canonical discussions of concepts, of the reasoning approaches of deduction and induction, and of their applications in social science theorization by philosophers and social scientists, this study aims to unveil the logical nature of empirical research and to examine the legitimacy of social scientists' preference for deduction. The findings suggest that conceptual clarity, the foundation of social science research, exchange, and replication, can be achieved through an interdisciplinary emphasis on conceptual analysis to establish universal measurements, and that the primacy of deduction in the social sciences needs to concede to, or be balanced with, induction in order to produce new knowledge, more discoveries, and scientific advancement. The study recommends that institutions and researchers in the social sciences invest more in conceptual analysis and inductive research, through both collaboration and separate efforts.
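To make the deduction–induction contrast at the heart of the abstract concrete, the standard textbook schemata (a gloss added here, not an excerpt from the paper; F and G are schematic predicates and a, a_1, ..., a_n are observed individuals) can be written as:

```latex
% Deduction: truth-preserving; the conclusion is already contained in the premises.
\frac{\forall x\,\bigl(F(x) \rightarrow G(x)\bigr) \qquad F(a)}{\therefore\ G(a)}

% Induction: ampliative; the conclusion generalizes beyond the observed cases
% and may be overturned by the next observation.
\frac{F(a_1) \wedge G(a_1),\ \; F(a_2) \wedge G(a_2),\ \dots,\ F(a_n) \wedge G(a_n)}
     {\therefore\ \forall x\,\bigl(F(x) \rightarrow G(x)\bigr)}
```

Deduction cannot yield a conclusion with more content than its premises; only the inductive schema adds new, and fallible, knowledge. That trade-off is exactly what the abstract weighs when it argues for rebalancing the two.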


Data availability

This conceptual review paper involves no empirical data.



Acknowledgements

The author is grateful to Dr. Steven H. Chaffee, whose work on explication inspired this study.

Author information

Authors and Affiliations

Department of Communication & Media, Lamar University, P.O. Box 10050, Beaumont, TX, 77710, USA

Qingjiang Yao


Contributions

This paper is solely authored by Qingjiang (Q. J.) Yao, who bears all responsibility related to the paper.

Corresponding author

Correspondence to Qingjiang Yao.

Ethics declarations

Competing Interests

The author has no financial or non-financial interests that are directly or indirectly related to the work submitted.

Ethical Approval

This conceptual research study was conducted in accordance with all applicable guidelines and regulations and involves no human participants.


Yao, Q. Concepts and Reasoning: a Conceptual Review and Analysis of Logical Issues in Empirical Social Science Research. Integr. psych. behav. 58, 502–530 (2024). https://doi.org/10.1007/s12124-023-09792-x


Keywords

  • Conceptual analysis
  • Hypothetico-deductive model
  • Falsification
  • Social sciences


New research reveals bodily changes space tourists may experience compared to seasoned astronauts.

Space tourists experience some of the same body changes as astronauts who spend months in orbit , according to new studies published Tuesday.

Those shifts mostly returned to normal once the amateurs returned to Earth, researchers reported.

Research on four space tourists is included in a series of studies on the health effects of space travel, down to the molecular level.

The findings paint a clearer picture of how people — who don’t undergo years of astronaut training — adapt to weightlessness and space radiation, the researchers said.

Jared Isaacman and Hayley Arceneaux prepare to head to launchpad 39A for a launch on a SpaceX Falcon 9

“This will allow us to be better prepared when we’re sending humans into space for whatever reason,” said Allen Liu, a mechanical engineering professor at the University of Michigan who was not involved with the research.

NASA and others have long studied the toll of space travel on astronauts, including yearlong residents of the International Space Station, but there’s been less attention on space tourists.

The first tourist visit to the space station was in 2001, and opportunities for private space travel have expanded in recent years.

A three-day chartered flight in 2021 gave researchers the chance to examine how quickly the body reacts and adapts to spaceflight, said Susan Bailey, a radiation expert at Colorado State University who took part in the research.

While in space, the four passengers on the SpaceX flight , dubbed Inspiration4, collected samples of blood, saliva, skin and more.

A SpaceX Falcon 9 rocket is launched, carrying 23 Starlink satellites into low Earth orbit in Cape Canaveral, Florida, U.S. May 6, 2024

Researchers analyzed the samples and found wide-ranging shifts in cells and changes to the immune system.

Most of these shifts stabilized in the months after the four returned home, and the researchers found that the short-term spaceflight didn’t pose significant health risks.

“This is the first time we’ve had a cell-by-cell examination of a crew when they go to space,” said researcher and co-author Chris Mason with Weill Cornell Medicine.

Sian Proctor, right, talks to a friend from a car window before a trip to Kennedy Space Center's Launch Pad 39-A and a planned liftoff on a SpaceX Falcon 9 rocket Wednesday, Sept. 15, 2021

The papers, which were published Tuesday in Nature journals and are now part of a database, examine the impact of spaceflight on the skin, kidneys and immune system.

The results could help researchers find ways to counteract the negative effects of space travel, said Afshin Beheshti, a researcher with the Blue Marble Space Institute of Science who took part in the work.

AP videojournalist Mary Conlon contributed from New York.



Is Trending Stock QuickLogic Corporation (QUIK) a Buy Now?

QuickLogic (QUIK) is one of the stocks most watched by Zacks.com visitors lately. So, it might be a good idea to review some of the factors that might affect the near-term performance of the stock.

Over the past month, shares of this maker of chips for mobile and portable electronics manufacturers have returned -4%, compared to the Zacks S&P 500 composite's +3.5% change. During this period, the Zacks Electronics - Semiconductors industry, which QuickLogic falls in, has gained 6%. The key question now is: What could be the stock's future direction?

While media releases or rumors about a substantial change in a company's business prospects usually make its stock 'trending' and lead to an immediate price change, there are always some fundamental facts that eventually dominate the buy-and-hold decision-making.

Revisions to Earnings Estimates

Rather than focusing on anything else, we at Zacks prioritize evaluating the change in a company's earnings projection. This is because we believe the fair value for its stock is determined by the present value of its future stream of earnings.
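As a rough illustration of that valuation idea, here is a minimal sketch of discounting a projected per-share earnings stream back to today (the 10% discount rate and the three-year EPS figures are made up for illustration; this is not Zacks' actual model):

```python
def present_value(eps_stream, discount_rate):
    """Discount a projected per-share earnings stream back to today."""
    return sum(eps / (1 + discount_rate) ** year
               for year, eps in enumerate(eps_stream, start=1))

# Hypothetical three-year EPS projections, before and after an upward revision.
before = present_value([0.50, 0.61, 0.70], discount_rate=0.10)
after = present_value([0.55, 0.67, 0.77], discount_rate=0.10)
print(f"implied fair value before revision: ${before:.2f} per share")
print(f"implied fair value after revision:  ${after:.2f} per share")
```

The point of the sketch is simply that when analysts raise the estimates in the stream, the implied fair value rises with them, which is the mechanism the next paragraphs trace for QuickLogic.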

We essentially look at how sell-side analysts covering the stock are revising their earnings estimates to reflect the impact of the latest business trends. And if earnings estimates go up for a company, the fair value for its stock goes up. A higher fair value than the current market price drives investors' interest in buying the stock, leading to its price moving higher. This is why empirical research shows a strong correlation between trends in earnings estimate revisions and near-term stock price movements.

For the current quarter, QuickLogic is expected to post earnings of $0.01 per share, indicating a change of +108.3% from the year-ago quarter. The Zacks Consensus Estimate has changed -600% over the last 30 days.

For the current fiscal year, the consensus earnings estimate of $0.50 points to a change of +194.1% from the prior year. Over the last 30 days, this estimate has changed +29.4%.

For the next fiscal year, the consensus earnings estimate of $0.61 indicates a change of +22% from what QuickLogic is expected to report a year ago. Over the past month, the estimate has changed +5.2%.
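The percentage arithmetic in the three paragraphs above can be reproduced with one convention worth flagging: when the year-ago base is negative, the change is typically expressed against the absolute value of the base. The article does not state its formula, so treat this as an assumption:

```python
def pct_change(old, new):
    """Percent change from old to new, measured against abs(old) so the
    sign stays meaningful when the starting figure is negative."""
    return (new - old) / abs(old) * 100

# Next fiscal year's $0.61 estimate vs. this year's $0.50 consensus:
print(f"{pct_change(0.50, 0.61):+.0f}%")    # +22%, matching the article

# A +108.3% swing to $0.01 implies a year-ago loss near -$0.12
# (an inferred figure; the article does not report it):
print(f"{pct_change(-0.12, 0.01):+.1f}%")   # +108.3%
```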

With an impressive externally audited track record, our proprietary stock rating tool -- the Zacks Rank -- is a more conclusive indicator of a stock's near-term price performance, as it effectively harnesses the power of earnings estimate revisions. The size of the recent change in the consensus estimate, along with three other factors related to earnings estimates, has resulted in a Zacks Rank #3 (Hold) for QuickLogic.

The chart below shows the evolution of the company's forward 12-month consensus EPS estimate:

[Chart: 12-month consensus EPS estimate for QUIK]

Perspect Med Educ. 2018 Jun; 7(3).

Using rhetorical appeals to credibility, logic, and emotions to increase your persuasiveness

Lara Varpio

Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD USA

In the Writer’s Craft section we offer simple tips to improve your writing in one of three areas: Energy, Clarity and Persuasiveness. Each entry focuses on a key writing feature or strategy, illustrates how it commonly goes wrong, teaches the grammatical underpinnings necessary to understand it and offers suggestions to wield it effectively. We encourage readers to share comments on or suggestions for this section on Twitter, using the hashtag: #how’syourwriting?

Wherever there is meaning there is persuasion. —Kenneth Burke [ 1 ]

Scientific research is, for many, the epitome of objectivity and rationality. But, as Burke reminds us, conveying the meaning of our research to others involves persuasion. In other words, when I write a research manuscript, I must construct an argument to persuade the reader to accept my rationality.

While asserting that scientific findings must be persuasively conveyed may seem contradictory, it is simply a consequence of how we conduct research. Scientific research is a social activity centred on answering challenging questions. When these questions are answered, the solutions we propose are just that—propositions. Our solutions are accepted by the community until another, better proposition offers a more compelling explanation. In other words, everything we know is accepted for now but not forever.

This means that when we write up our research findings, we need to be persuasive. We must convince readers to accept our findings and the conclusions we draw from them. That acceptance may require dethroning widely held perspectives. It may require having the reader adopt new ways of thinking about a phenomenon. It may require convincing the audience that other, highly respected researchers are wrong. Regardless of the argument I want the reader to accept, I have to persuade the reader to agree with me.

Therefore, being a successful researcher requires developing the skills of persuasion—the skills of a rhetorician. Fortunately for the readers of Perspectives on Medical Education, The Writer’s Craft series offers a treasure trove of rhetorical tools that health professions education researchers can mine.

A primary lesson of rhetoric was developed by Aristotle. He studied rhetoric analytically, investigating all the means of persuasion available in a given situation. He identified three appeals at play in all acts of persuasion: ethos, logos and pathos. The first is focused on the author, the second on the argument, the third on the reader. Together, they support effective persuasion, and so can be harnessed by researchers to powerfully convey the meaning of their research.

Ethos is the appeal focused on the writer. It refers to the character of the writer, including her credibility and trustworthiness. The reader must be convinced that the author is an authority and merits attention. In scientific research, the author must establish her credibility as a rigorous and expert researcher. Much of an author’s ethos, then, lies in using well-reasoned and justified research methodologies and methods. But, a writer’s credibility can be bolstered using a number of rhetorical techniques including similitude and deference.

Similitude appeals to similarities between the author and the reader to create a sense of mutual identification. Using pronouns like we and us, the writer reinforces commonality with the reader and so encourages a sense of cohesion and community. To illustrate, consider the following:

While burnout continues to plague our residents, medical educators have yet to identify the root causes of this problem. We owe it to our residents to delve into this area of inquiry to secure their wellbeing over their lifetime of clinical service.
While burnout continues to plague residents, medical educators have yet to identify the root causes of this problem. Medical educators owe it to their residents to delve into this area of inquiry to secure their wellbeing over their lifetime of clinical service.

In the first sentence, the author aligns herself with the community of medical educators involved in residency education. The writer is part of the we who has to support residents. She makes the burnout problem something she and the reader are both called upon to address. In the second sentence, the author separates herself from this community of educators. She creates social distance between herself and the reader, and thus places the burden of resolving the problem more squarely on the shoulders of the reader than herself.

Both phrasings are equally correct, grammatically. One creates social connection, the other social distance.

Deference is a way for the author to signal respect for others, and personal humility. The writer can demonstrate deference by using phrases such as in my opinion, or through the use of adjectives (e.g., Smith rigorously studied) or adverbs (e.g., the important work by Jones). For example:

The thoughtful research conducted by Jane Doe et al. suggests that resident burnout is more prevalent among those learners who were shamed by attending physicians. Echoing the calls of others [ 1 ], we contend that this work should be extended to also consider the role of fellow learners as potential contributors to resident experiences of burnout.

In this sentence, the author does not present Jane Doe and colleagues as weak researchers, nor as developing findings that should be rejected. Instead, it shows deference to these researchers by acknowledging the quality of their research and a willingness to build on the foundation provided by their findings. (Note how the author also builds ethos via similitude with other scholars by calling the reader’s attention to the fact that other researchers have also called for more research on the author’s suggested extension of Doe’s work).

Readers pick up on the respect authors pay to other researchers. Being rude or unkind in our writing rarely achieves anything except reflecting poorly on the writer.

In sum, as my grandmother used to say: ‘You’ll slide farther on honey than gravel.’ Establishing similitude and showing deference helps to establish your ethos as an author. They help the writer make honey, not gravel.

Logos is the rhetorical appeal that focuses on the argument being presented by the author. It is an appeal to rationality, referring to the clarity and logical integrity of the argument. Logos is, therefore, primarily rooted in the reasoning that holds different elements of the manuscript’s argument together. Do the findings logically connect to support the conclusion being drawn? Are there errors in the author’s reasoning (i.e., logical fallacies) that undermine the logic presented in the manuscript? Logical fallacies will undercut the persuasive power of a manuscript. Authors are well advised to spend time mapping out the premises of their arguments and how they logically lead to the conclusions being drawn, avoiding common errors in reasoning (see Purdue’s on-line writing lab [ 2 ] for 12 of the most common logical fallacies that plague authors, complete with definitions and examples).
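One classic fallacy makes the point concrete (a standard textbook illustration added here, not one of the Purdue lab's examples; P and Q are schematic propositions): modus ponens is valid, while its look-alike, affirming the consequent, is not.

```latex
% Valid (modus ponens): from "if P then Q" and P, conclude Q.
\frac{P \rightarrow Q \qquad P}{\therefore\ Q}

% Invalid (affirming the consequent): from "if P then Q" and Q, P does not follow.
\frac{P \rightarrow Q \qquad Q}{\therefore\ P}
```

In manuscript terms: ‘if the intervention works, outcomes improve; outcomes improved; therefore the intervention works’ quietly ignores every rival explanation for the improvement.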

However, logos is not merely contained in the logic of the argument itself. Logos is only achieved if the reader is able to follow the author’s logic. To support the reader’s ability to process the logical argument presented in the manuscript, authors can use signposting. Signposting is often accomplished via words (e.g., first, next, specifically, alternatively, also, consequently, etc.) and phrases (e.g., as a result, and yet, for example, in conclusion, etc.) that help the reader to follow the line of reasoning as it moves through the manuscript. Signposts indicate to the reader the structure of the argument to come, where they are in the argument at the moment, and/or what they can expect to come next. Consider the following sentence from one of my own manuscripts. This is the last sentence in the Introduction [ 3 ]:

This study addresses these gaps by investigating the following questions: How often are residents taught informally by physicians and nurses in clinical settings? What competencies are informally taught to residents by physicians and nurses? What teaching techniques are used by physicians and nurses to deliver informal education?

At the end of the Introduction, this sentence offers a map to the reader of how the paper’s argument will develop. The reader can now expect that the manuscript will address each of these questions, in this order. I could also use large-scale signposting, such as sub-headings in the Results, to organize the reading of data related to each of these questions. In the Discussion, I can use small-scale signpost terms and phrases (i.e., however, in contrast, in addition, finally, etc.) to help the reader follow the progression of the argument I am presenting.

I must offer one word of caution here: be sure to use your signposts precisely. If not, your writing will not be logically developed and you will weaken the logos at work in the manuscript. For instance, however signposts a contrasting or contradicting idea:

I enjoy working with residents; however , I loathe completing in-training evaluation reports.

If the writer uses the wrong signpost, the meaning of the sentence falls apart, and so does the logos:

I enjoy working with residents; alternatively , I loathe completing in-training evaluation reports.

Alternatively indicates a different option or possibility. This sentence does not present two different alternatives; it presents two contrasting ideas. Using alternatively confuses the meaning of the sentence, and thus impairs logos.

With clear and precise signposting, the reader will easily follow your argument across the manuscript. This supports the logos you develop as you guide the reader to your conclusions.

Pathos is the rhetorical appeal that focuses on the reader. Pathos refers to the emotions that are stirred in the reader while reading the manuscript. The author should seek to trigger specific emotional reactions in their writing. And, yes, there is room for emotions in scientific research articles. Some of my favourite manuscripts in The Writer’s Craft series are those that help authors elicit specific emotions from the reader.

For instance, in Joining the conversation: the problem/gap/hook heuristic Lingard highlights the importance of ‘hooking’ your audience. The hook ‘convinces readers that this gap [in the current literature] is of consequence’ [ 4 ]. The author must persuade the reader that the argument is important and worthy of the reader’s attention. This is an appeal to the readers’ emotions.

Another example is found in Bonfire red titles. As Lingard explains, the title of your manuscript is ‘advertising for what is inside your research paper’ [ 5 ]. The title must attract the readers’ attention and create a desire within them to read your manuscript. Here, again, is pathos in action in a scientific research paper: grab the reader’s attention from the very first word of the title.

Beyond those already addressed in The Writer’s Craft series, another rhetorical technique that appeals to the emotions of the reader is the strategic use of God-terms [ 1 ]. Burke defined God-terms as words or phrases that are ‘the ultimates of motivation,’ embodying characteristics that are fundamentally valued by humans. To use an analogy from card games (e.g., bridge or euchre), God-terms are like emotional trump cards. God-terms like freedom, justice, and duty call on shared human values, trumping contradictory feelings. By alluding to God-terms in our research, we increase the emotional appeal of our writing. Let us reconsider the example from above:

While burnout continues to plague our residents, medical educators have yet to identify the root causes of this problem. We owe it to our residents to delve into this area of inquiry to secure their wellbeing over their lifetime of clinical service.

Here, the author reminds the reader that residents will be in service as physicians for their lifetime, and that we have a duty (i.e., we owe it) to support them in that calling to meet the public’s healthcare needs. Invoking the God-terms of service and duty, the writer taps into the reader’s sense of responsibility to support these learners.

It is important not to overplay pathos in a scientific research paper—i.e., readers are keenly intelligent scholars who will easily identify emotional exaggeration. Consider this variation on the previous example:

While burnout continues to ruin the lives of our residents, medical educators have neglected to identify the root causes of this problem. We have a moral obligation to our residents to delve into this area of inquiry to secure their wellbeing over their lifetime of clinical service.

This rephrasing is likely to create a sense of unease in the reader because of the emotional exaggerations it uses. By over-amplifying the appeals to emotion, this rephrasing elicits feelings of refusal and rejection in the reader. Instead of drawing the reader in, it pushes the reader away. When it comes to pathos, a light hand is best.

Peter Gould famously stated: ‘data can never speak for themselves’ [ 6 ]. Researchers must explain them. In that explaining, we endeavour to convince the audience that our propositions should be accepted. While the science in our research is at the core of that persuasion, there are techniques from rhetoric that can help us convince readers to accept our arguments. Ethos, logos and pathos are appeals that, when used intentionally and judiciously, can buoy the persuasive power of your manuscripts.

The views expressed herein are those of the authors and do not necessarily reflect those of the United States of America’s Department of Defense or other federal agencies.

Lara Varpio, PhD, is a professor in the Department of Medicine at the Uniformed Services University of the Health Sciences, Bethesda, MD. Her program of research investigates the many kinds of teams involved in health professions education (e.g., interprofessional clinical care teams, health professions education scholarship unit teams, etc.). A self-professed ‘theory junky’, she uses theories from the social sciences and humanities, and qualitative methods/methodologies to build practical, theory-based knowledge.

Meet the anti-abortion group using white coats and research to advance its cause

Photo illustration of Dr. James Studnicki, Charles Donovan, and Dr. Ingrid Skop and a background image of a sonogram

On a winter day less than two years after the fall of Roe v. Wade, Dr. Ingrid Skop beamed at a crowd of anti-abortion activists gathered at the Texas Capitol.

“The sun is shining on us. I think someone is happy with what we’re doing,” said Skop, a longtime OB-GYN, clad in a white doctor's coat.

Her smile dropped as she launched into a speech attacking the Food and Drug Administration’s regulation of mifepristone for medication abortions. “One out of 20 women ends up needing emergency surgery with these dangerous pills,” she said.

The statistic isn’t far off, but the procedure that Skop warned of is a vacuum aspiration to clear the uterus, considered routine in miscarriage care and low-risk.

Skop’s warnings about abortion extend far beyond this rally. She is the vice president and director of medical affairs of the Charlotte Lozier Institute, established in 2011 as the research arm of Susan B. Anthony Pro-Life America, a nonprofit group that works to elect anti-abortion candidates. 

In a movement where many adherents are guided by religious or ideological beliefs, the institute has tried to win on secular grounds by offering research and studies aimed at countering the well-established scientific consensus that abortion care is safe.

Since Roe v. Wade was overturned in 2022, the institute has gained visibility and notoriety as it has worked to justify abortion bans the majority of Americans don’t support. Two studies led by its vice president, James Studnicki, were cited in a federal ruling challenging the approval of mifepristone. Skop is part of a group that brought the original suit. The Supreme Court is expected to rule on a narrower version of the case this month.

But the institute has also taken heat. The two studies were later retracted — unfairly, the authors argued — by the journal that published them.

In May, Skop's appointment to the Texas maternal mortality review committee drew the ire of maternal health advocates and abortion rights supporters who see her positions as ideological and in conflict with the committee’s mission to improve maternal health.

Ingrid Skop speaks during a hearing

The institute is often described as the anti-abortion movement’s answer to the Guttmacher Institute, a research and policy group that supports abortion rights. Skop and her peers have provided conservative officials with their own bench of experts.

“Abortion activists dominate the scientific community,” Skop told NBC News by email. “CLI research is one of the only voices to counter the biased, abortion-affirming research.”

Over the past decade, the institute’s studies, including ones assailing abortion medication and promoting crisis pregnancy centers that counsel women against abortion, have been cited by politicians and judges alike.

Mary Ziegler, a historian and expert on abortion law, said the group’s work may help give legislators “political cover.”

“The scientific arguments that CLI is making," she said, "are just one more arrow in the quiver of legislators who already think abortion is contrary to God’s law."

Named for one of the first female physicians in the U.S., the Charlotte Lozier Institute was launched in a different era for abortion rights. Roe v. Wade was still the law of the land, and Susan B. Anthony Pro-Life America was working to elect candidates who would make it harder to access abortions.

The institute’s job, under the guidance of its founder and first president, Chuck Donovan, a veteran of the anti-abortion movement, was to provide data to help them. In a 2018 promotional video, a series of state lawmakers praised the group as a source of “facts” and “credibility.”

Kristi Hamrick, a spokesperson for Students for Life of America, said the institute plays a “very important role” by providing “an alternative scientific voice that looks at data” that groups like hers can use in their campaigns.

The institute’s influence extends to state legislatures, where its team testified in favor of bills that would prohibit most abortions after 20 weeks, and require that patients be told about a process called “abortion reversal,” a disputed treatment that abortion opponents claim can undo a medication abortion. 

Skop and others on the institute’s roster of staff and representatives are open about their religious beliefs. On a recent episode of a podcast affiliated with the American Family Association, a right-wing Christian activist group, she launched into a fierce critique of other abortion research.

“The abortion industry drives the narrative. They publish poor-quality studies. The mainstream media, of course, promotes abortion and picks it up,” she said. “So the American people have been gaslighted.”

In an interview, Rachel Jones, a principal research scientist with the Guttmacher Institute, said its work holds up under scrutiny. “We’ve been doing research for over 50 years on abortion, and we haven’t had any studies retracted,” Jones said, noting that the group is transparent about its data and its shortcomings. “Our track record speaks for itself.”

As patients with pregnancy complications in restrictive states like Texas go public with experiences of being denied treatment, the Charlotte Lozier Institute, like many anti-abortion groups, has argued that these have resulted from a misreading of the laws, rather than the bans themselves.

“Rather than blame pro-life laws when confused physicians have withheld emergency medical care, a result of abortion advocates’ fear mongering, state medical boards must provide guidance to clarify confusion, but many have not done so,” Skop said in a statement to NBC News.

But doctors have said the bans — which call for stripping medical licenses, and imposing fines or criminal charges on violators — create a chilling effect, and the institute itself cites guidance that discourages abortions as an emergency intervention.

When it’s necessary to perform what Skop calls a “separation of the mother and her unborn child” in the second trimester, she has cited the American Association of Pro-Life Obstetricians and Gynecologists, of which she is a member, in arguing that doctors should perform cesarean sections or induce labor, rather than an abortion procedure commonly called dilation and evacuation (D&E). More OB-GYNs have the skills to perform C-sections and induction, Skop has noted, and in some cases they could preserve a chance of saving the fetus’ life.

Dr. Ghazaleh Moayedi, an OB-GYN who practices in Texas, serves as the board chair for Physicians for Reproductive Health, which supports abortion rights. She sees recommendations like these as an attempt to limit doctors’ ability to provide necessary care. In many cases, she added, it’s clear that a fetus won’t survive, and to imply otherwise is misleading.

“They view it as more dignified in some way for the fetus,” she said of the institute’s stance that doctors should avoid D&Es. “What’s left unsaid in that statement is that it’s at the expense of any dignity, humanity or care for pregnant people themselves.”

Skop pushed back on the assertion that some patients might prefer a D&E to induction in these cases, referring to it by a term commonly used by abortion opponents.

“When experiencing a tragic loss, I have never had a pregnant mother prefer a dismemberment abortion over induction because mothers want to hold and bury their babies, which assists in their grief,” she wrote.

[Photo: A sign at a protest reads, “Follow the Science, Life begins at Conception.”]

In March, the institute named a new leader, Karen Czarnecki, who previously worked for the American Legislative Exchange Council and the Heritage Foundation.

It continues to rely on private funding. In recent years, its donors have included the Alliance Defending Freedom, which brought the mifepristone case; The 85 Fund, a group tied to the conservative judicial activist Leonard Leo; and the Knights of Columbus, a Catholic fraternal order.

In a recent video appearance, Studnicki highlighted the institute’s recent wins. “If you look at our record in the last three or four years, we’ve been very, very successful,” he said.

But it also suffered a major blow this February when the medical publisher Sage retracted three studies on which Studnicki was a lead author. In a blog post, Sage said it flagged problems including conflicts of interest, “misleading presentations of data,” and “fundamental problems with the study design and methodology.”

One of the studies, published in 2021, made an alarming claim: that hospital visits had skyrocketed between 2002 and 2015 among Medicaid patients who had medication abortions. 

Ushma Upadhyay, who researches medication abortion at the University of California, San Francisco, published a paper that reviewed the study. She said the institute conflated visits to the ER within 30 days of an abortion with “adverse events.” But patients may go to the ER just to get reassurance that the procedure has gone smoothly, Upadhyay said.

Her research has shown that more than half of these visits don’t result in treatment. Lozier's study, she said, didn’t address whether patients were admitted or received treatments, which would present a fuller picture of what, if any, complications occurred.
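
To make the distinction concrete, here is a purely illustrative sketch in Python of the two ways of counting that Upadhyay describes; the records, column names, and 30-day window are invented for the example and are not drawn from either study’s data.

```python
import pandas as pd

# Invented post-abortion ER visit records, for illustration only.
visits = pd.DataFrame({
    "patient_id":    [101, 102, 103, 104, 105, 106],
    "days_after_rx": [3, 12, 28, 45, 9, 21],   # days since the medication abortion
    "admitted":      [False, False, True, False, False, False],
    "treated":       [False, True, True, False, False, False],
})

# The conflation critiqued above: every ER visit within 30 days is
# counted as an "adverse event," regardless of what happened there.
within_30 = visits[visits["days_after_rx"] <= 30]

# The fuller picture: only the visits that led to admission or treatment.
with_intervention = within_30[within_30["admitted"] | within_30["treated"]]

print(f"ER visits within 30 days: {len(within_30)}")                        # 5
print(f"Of which led to admission or treatment: {len(with_intervention)}")  # 2
```

Under these invented records, the first count is more than double the second, which is the gap Upadhyay argues a claims analysis must account for.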

In response to that critique, Skop said ER visits, even without treatment, showed that patients “don’t know what to expect because they are not receiving adequate informed consent.”

She called the retractions “meritless,” contending that the group’s anti-abortion stance has led to “unprovoked and partisan attacks.”

The Supreme Court will soon rule on mifepristone. It opted not to hear arguments challenging the FDA’s initial approval of the drug, but will consider whether to restrict its availability. The proposed changes would require patients to visit a doctor in person to get the medication, as was the case before the Covid pandemic, and would mandate that the pills be used only up to seven weeks of pregnancy, rather than 10.

The retractions so far don’t seem to have damaged the institute’s standing in the anti-abortion movement. 

“These findings have been used in legal action in many of the states,” Studnicki recently said in a video response to the retractions. “We have become visible. People are quoting us. And for that reason, we are dangerous.”

Bracey Harris is a national reporter for NBC News, based in Jackson, Mississippi. 

Gun Violence Widely Viewed as a Major – and Growing – National Problem

U.S. public evenly split on whether gun ownership does more to increase or decrease safety

Pew Research Center conducted this study to better understand Americans’ views of gun policy. For this analysis, we surveyed 5,115 adults from June 5-11, 2023. Everyone who took part in this survey is a member of the Center’s American Trends Panel (ATP), an online survey panel recruited through national, random sampling of residential addresses, which gives nearly all U.S. adults a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology.
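
Pew describes weighting the sample to population benchmarks across several dimensions at once. As a minimal sketch of the general technique this implies, iterative proportional fitting (often called raking), and not Pew’s actual procedure, the snippet below adjusts invented respondent weights to invented targets:

```python
import pandas as pd

def rake(df, targets, weight_col="weight", iters=25):
    """Scale weights until weighted margins match the target shares.

    targets maps each column name to {category: population share}.
    """
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(iters):
        for col, shares in targets.items():
            # Weighted share of each category under the current weights
            current = df.groupby(col)[weight_col].sum() / df[weight_col].sum()
            # Scale each respondent by (target share / current share)
            df[weight_col] *= df[col].map(lambda cat: shares[cat] / current[cat])
    return df

# Invented respondents and benchmarks, for illustration only
respondents = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "M", "M"],
    "educ":   ["BA", "HS", "HS", "BA", "HS", "HS", "BA", "HS"],
})
targets = {
    "gender": {"F": 0.51, "M": 0.49},
    "educ":   {"BA": 0.35, "HS": 0.65},
}
weighted = rake(respondents, targets)
# The weighted gender margin now matches the 51/49 target
print(weighted.groupby("gender")["weight"].sum() / weighted["weight"].sum())
```

Real panel weighting uses many more dimensions and handles trimming and nonresponse adjustments, but the core loop is the same: repeatedly rescale until each margin lines up.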

Here are the questions used for the report and its methodology.

[Chart: Majority of Americans expect gun violence to increase; public divided on impact of gun ownership on safety]

With total gun-related deaths reaching new highs in recent years, growing shares of Americans view both gun violence and violent crime as very big national problems.

Looking ahead, twice as many Americans expect the level of gun violence to increase rather than stay about the same over the next five years (62% vs. 31%). Just 7% say it will decrease.

The question of whether gun ownership does more to increase or decrease safety evenly divides Americans: 49% say it increases safety by allowing law-abiding citizens to protect themselves; an identical share says it reduces safety by giving too many people access to firearms and increasing misuse.

The new survey, conducted June 5-11, 2023, among 5,115 members of Pew Research Center’s nationally representative American Trends Panel, also finds:

[Chart: Public support for stricter gun laws has ticked up since 2021 and is similar to 2019]

  • A majority of Americans (58%) say gun laws in the country should be stricter; 26% say they are about right, while just 15% say they should be less strict. Support for stricter gun laws has ticked up since 2021 and is at about the same level as in 2019.
  • Large majorities favor preventing mentally ill people from buying guns (88%) and increasing the minimum age for buying guns to 21 years old (79%).
  • Other gun policy proposals, including banning high-capacity magazines (66%) and banning assault-style weapons (64%), continue to draw majority support.

Large divides by party, community type in views of impact of gun ownership on safety

Gun policy continues to be one of the most polarizing issues in American politics. Republicans and Democrats are sharply divided over the impact of gun ownership on public safety: 79% of Republicans and independents who lean toward the Republican Party say that gun ownership increases safety, while a nearly identical share of Democrats and Democratic leaners (78%) say it decreases safety.

Views of gun ownership are also closely tied to where one lives, with those who say they live in rural areas about twice as likely as those who live in urban areas to say that gun ownership increases safety (65% vs. 34%). And those who personally own guns are nearly twice as likely as non-owners to say this (71% vs. 37%).

Overall, 32% of Americans report owning a gun.

Gun violence and violent crime increasingly viewed as major problems

While there are wide partisan gaps in views of the impact of gun ownership and in views of many gun policies, Republicans and Democrats also differ over whether gun violence is a major problem for the country. About twice as many Democrats as Republicans say it is “a very big” national problem (81% vs. 38%).

[Chart: Growing shares of Americans say gun violence, violent crime are ‘very big’ national problems]

Over the past year, however, there have been 11-percentage-point increases in the shares of both parties saying gun violence is a very big problem.

Views of whether violent crime is a major problem have tended to be less partisan. And growing shares in both parties also view crime as a very big problem.

Since last year, the share of Republicans who say violent crime is a major problem has slightly increased from 60% to 64%. There has been a comparable shift among Democrats, from 47% to 52%.

Both violent crime and gun violence rank high on the public’s list of top national problems. Refer to our recent report for more.

CORRECTION (June 28, 2023): In the chart “Growing shares of Americans say gun violence, violent crime are ‘very big’ national problems,” a previous version of the chart omitted a July 2021 data point of the shares saying violent crime was a very big problem for the country. The chart has now been updated to include the following: 61% of Americans (including 67% of Republicans and 55% of Democrats) said violent crime was a very big problem for the country in July 2021.

The following sentence was also updated to reflect the above additions: “Since last year, the share of Republicans who say violent crime is a major problem has slightly increased from 60% to 64%. There has been a comparable shift among Democrats, from 47% to 52%.”

Views of gun policies

[Chart: Large majority of Americans support raising minimum age for buying guns to 21]

There continues to be wide public support for various specific gun policy proposals. For example, 88% of Americans favor preventing people with mental illnesses from purchasing guns, including 72% who strongly favor this.

This is the only policy proposal among the eight asked about in the survey that draws overwhelming bipartisan support (89% of Democrats, 88% of Republicans).

While opinions about most gun policies have not changed much in recent years, an increasing share of the public favors allowing teachers and school officials to arm themselves.

Half of adults now favor allowing teachers and other school officials to carry guns in K-12 schools, up from 43% two years ago.
