The Consolidated Framework for Implementation Research

You have come to the right place if you are looking for more information about the Consolidated Framework for Implementation Research (CFIR). This site was created for individuals who are considering using the CFIR to evaluate an implementation or to design an implementation study.


The CFIR was originally published in 2009 and was updated in 2022 based on user feedback. New users will find it helpful to read the 2009 article first, specifically the Background, Methods, and Overview of the CFIR sections, and then read the 2022 Updated CFIR article.

This site is under construction. We are updating its content to reflect the updated CFIR. Please be patient while this work is in progress.


The CFIR provides a menu of constructs arranged across five domains that can be used in a range of applications. It is a practical framework that guides systematic assessment of potential barriers and facilitators. This information can then be used to tailor implementation strategies and needed adaptations and/or to explain outcomes.
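
For teams that plan to code qualitative data (for example, interview transcripts) against the framework, the construct menu can be treated as a simple codebook. The sketch below is a minimal, hypothetical Python illustration: the five domain names follow the 2022 update, but the constructs listed are only an illustrative subset, and the tag_excerpt helper is our own invention, not part of any official CFIR tooling.

```python
# Hypothetical sketch of a CFIR-style codebook for qualitative coding.
# Domain names follow the 2022 update; the constructs listed are an
# illustrative subset only (consult the CFIR site for the full menu).
CFIR_CODEBOOK = {
    "Innovation": ["Relative Advantage", "Adaptability", "Complexity", "Cost"],
    "Outer Setting": ["Policies & Laws", "Financing", "Partnerships & Connections"],
    "Inner Setting": ["Culture", "Compatibility", "Available Resources"],
    "Individuals": ["Innovation Deliverers", "Innovation Recipients", "Implementation Leads"],
    "Implementation Process": ["Planning", "Engaging", "Reflecting & Evaluating"],
}

def tag_excerpt(excerpt: str, domain: str, construct: str, valence: int) -> dict:
    """Attach a domain/construct code and a barrier (-) / facilitator (+) rating to an excerpt."""
    assert construct in CFIR_CODEBOOK[domain], "unknown construct for this domain"
    return {"excerpt": excerpt, "domain": domain, "construct": construct, "valence": valence}

# Example: code one interview excerpt as a facilitator related to Available Resources.
coded = tag_excerpt(
    "Leadership freed up two nurses' time to support the rollout.",
    domain="Inner Setting", construct="Available Resources", valence=+1,
)
print(coded["domain"], coded["construct"], coded["valence"])
```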

The Updated CFIR builds on the 2009 version, which drew constructs from 19 frameworks and related theories, including Everett Rogers’ Diffusion of Innovations Theory and Greenhalgh and colleagues’ compilation based on their review of 500 published sources across 13 scientific disciplines. The CFIR considered the spectrum of construct terminology and definitions across these sources and compiled them into one organizing framework.

The 2022 Updated CFIR draws on more recent literature and feedback from users. As part of the update process, a CFIR Outcomes Addendum was published to establish conceptual distinctions between implementation and innovation outcomes and their potential determinants.

The CFIR was developed by implementation researchers affiliated with the Veterans Affairs (VA) Quality Enhancement Research Initiative (QUERI).

By providing a framework of constructs, the CFIR promotes consistent use of constructs, systematic analysis, and organization of findings from implementation studies. Users must, however, critique the framework and publish recommendations for improving it. This reciprocity is at the heart of building valid and useful theory. See Kislov et al.’s call for researchers to engage in theoretically informative implementation research.

  • “…while the CFIR’s utility as a framework to guide empirical research is not fully established, it is consistent with the vast majority of frameworks and conceptual models in dissemination and implementation research in its emphasis of multilevel ecological factors… Examining research (and real-world implementation efforts) through the lens of the CFIR gives us some indication of how comprehensively strategies address important aspects of implementation.”

The CFIR has most often been used within healthcare settings but has also been applied across a diverse array of settings, including low-income contexts and farming.

As of November 2023, the 2009 article had been cited over 10,000 times in Google Scholar and over 4,600 times in PubMed.


Research Update


Research Update Organization

Create new knowledge: learn, present, and publish.


Clinical Research Education

Clinical research training programs · Clinical research consultation & support · Evidence-based public health awareness

Our recently published articles:

  • Elevated Cardiac Troponin I as a Predictor of Outcomes in COVID-19 Hospitalizations: A Meta-Analysis
  • Prevalence and Outcomes Associated with Vitamin D Deficiency Among Indexed Hospitalizations with Cardiovascular Disease and Cerebrovascular Disorder—A Nationwide Study
  • Chronic Periodontitis Is Associated with Cerebral Atherosclerosis—A Nationwide Study
  • Intracerebral Hemorrhage Outcomes in Patients Using Direct Oral Anticoagulants Versus Vitamin K Antagonists: A Meta-Analysis
  • Liver Disease and Outcomes Among COVID-19 Hospitalized Patients—A Systematic Review and Meta-Analysis
  • Early Epidemiological Indicators, Outcomes, and Interventions of COVID-19 Pandemic: A Systematic Review
  • An Objective Histopathological Scoring System for Placental Pathology in Pre-Eclampsia and Eclampsia
  • Age-Adjusted Risk Factors Associated with Mortality and Mechanical Ventilation Utilization Amongst COVID-19 Hospitalizations—A Systematic Review and Meta-Analysis
  • Biomarkers and Outcomes of COVID-19 Hospitalisations: Systematic Review and Meta-Analysis
  • A Rare Case of Round Cell Sarcoma with CIC-DUX4 Mutation Mimicking a Phlegmon: Review of Literature
  • Sex and Racial Disparity in Utilization and Outcomes of t-PA and Thrombectomy in Acute Ischemic Stroke
  • Is There a Smoker’s Paradox in COVID-19?

Our Philosophy


Urvish Patel, MD, MPH

Founder, Director, and Chief Education Officer: “To advance the knowledge of medicine and clinical research by breaking down the research process into a simple yet effective framework that is easy to follow. Research Update provides the necessary resources so that anyone can execute impactful and empirical research in a timely manner.”

What People Say About Us

Kulin Patel, MBBS: “I got the opportunity to learn research from the Research Update team, and I enjoyed working with them. This experience is helping me to envision clinical research during residency…”

Shivani Sharma, MD, BS: “I am a PGY1 FM resident. I enjoyed working with the Research Update team. My research mentor, Urvish, and his team supported my research work throughout my journey to residency. Thank you, RxU.”

Arsalan Anwar, MBBS: “Dr. Patel has indeed invested his heart and soul in Research Update. He has helped us in fulfilling our goal of becoming independent clinical researchers. The Telegram group discussion also helped me be persistent in my goals. I highly recommend Research Update…”

Sidra Saleem, MBBS: “I want to thank the Research Update team for their invaluable help. They have guided and supported me. One of the projects that I worked on was on headache. It was a smooth and enriching experience working with the team.”

Deep Mehta, MBBS: “I worked on an interesting research paper with the team. Learning software like SPSS and RevMan has helped me become an independent researcher. I recommend everyone join Research Update for excellent guidance in clinical research.”

Dhaivat Shah, MBBS: “Research Update helped me gain tremendous research experience. The Telegram group discussions were very insightful. Dr. Patel helped me navigate my MSCR course and has been a guiding torch for me. I highly recommend Research Update to every student who is keen to learn clinical research.”

About Research Update Organization

Research Update Organization [IRS 501(c)(3) registered tax-exempt, EIN 83-3619272] is a non-profit educational organization founded to promote clinical research and its application to enrich community health. Founding principles: (1) clinical research education & training; (2) clinical research consultation & support; (3) utilization of clinical research for evidence-based public health awareness.

Volume 10 Supplement 1

7th Annual Conference on the Science of Dissemination and Implementation in Health

  • Meeting abstract
  • Open access
  • Published: 14 August 2015

The Consolidated Framework for Implementation Research (CFIR): progress to date, tools and resources, and plans for the future

  • Laura Damschroder 1 ,
  • Carmen Hall 2 ,
  • Leah Gillon 1 ,
  • Caitlin Reardon 1 ,
  • Caitlin Kelley 1 ,
  • Jordan Sparks 1 &
  • Julie Lowery 1  

Implementation Science volume 10, Article number: A12 (2015)


The objective of this presentation is to introduce the Consolidated Framework for Implementation Research (CFIR), present results of a literature synthesis of studies citing the CFIR, highlight improvements expected in a second version of the framework, and present tools and resources available for researchers using the CFIR that will be available on a newly revamped website.

In a series of four interactive virtual panels, we elicited user feedback from implementation research novices and experts on needed CFIR tools. In addition, we searched multiple databases to identify articles that cited the CFIR. Articles were characterized as original research, syntheses, study protocols, or general background references.

347 published articles cited the CFIR, with an average growth rate of four additional articles per week; fifty-one were original research, protocols, or syntheses. Recommendations were extracted from these articles and used to inform updates for CFIR V2. Refinements will include improved clarity in definitions for existing constructs, addition of new constructs, and better framing of the purpose and uses of the CFIR. The CFIR website was significantly redesigned with the addition of new tools and resources, including: 1) an interview guide creation tool; 2) links to a periodically updated bibliography of articles applying the CFIR; 3) two published quantitative measures related to organizational change mapped to CFIR constructs; 4) in-depth guidance on how to apply the CFIR; and 5) plans for the future. A demonstration of the publicly available website will be provided along with the URL.

Advances for D&I

The CFIR brings clarity to commonly studied constructs by suggesting clear and consistent terms and definitions that can be applied across diverse settings, within and outside healthcare. Use of the CFIR is growing. CFIR V2, along with tools, resources, and published applications, will help researchers collectively advance the science of implementation.

Department of Veterans Affairs, Health Services Research & Development Quality Enhancement Research Initiative (QUERI) (Grant # RRP 12-494).

Author information

Authors and Affiliations

Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA

Laura Damschroder, Leah Gillon, Caitlin Reardon, Caitlin Kelley, Jordan Sparks & Julie Lowery

Gusek Hall Consulting, Roseville, MN, 55113, USA

Carmen Hall


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article.

Damschroder, L., Hall, C., Gillon, L. et al. The Consolidated Framework for Implementation Research (CFIR): progress to date, tools and resources, and plans for the future. Implementation Sci 10 (Suppl 1), A12 (2015). https://doi.org/10.1186/1748-5908-10-S1-A12


Published: 14 August 2015

DOI: https://doi.org/10.1186/1748-5908-10-S1-A12


  • Veteran Affair
  • Health Service Research
  • Implementation Research
  • User Feedback
  • Quality Enhancement

Implementation Science

ISSN: 1748-5908



How to Use a Conceptual Framework for Better Research


A conceptual framework in research is not just a tool but a vital roadmap that guides the entire research process. It integrates various theories, assumptions, and beliefs to provide a structured approach to research. By defining a conceptual framework, researchers can focus their inquiries and clarify their hypotheses, leading to more effective and meaningful research outcomes.

What is a Conceptual Framework?

A conceptual framework is essentially an analytical tool that combines concepts and sets them within an appropriate theoretical structure. It serves as a lens through which researchers view the complexities of the real world. The importance of a conceptual framework lies in its ability to serve as a guide, helping researchers to not only visualize but also systematically approach their study.

Key Components to Be Analyzed During Research

  • Theories: These are the underlying principles that guide the hypotheses and assumptions of the research.
  • Assumptions: These are the accepted truths that are not tested within the scope of the research but are essential for framing the study.
  • Beliefs: These often reflect the subjective viewpoints that may influence the interpretation of data.

Together, these components help to define the conceptual framework that directs the research towards its ultimate goal. This structured approach not only improves clarity but also enhances the validity and reliability of the research outcomes. By using a conceptual framework, researchers can avoid common pitfalls and focus on essential variables and relationships.

For practical examples and to see how different frameworks can be applied in various research scenarios, you can Explore Conceptual Framework Examples.

Different Types of Conceptual Frameworks Used in Research

Understanding the various types of conceptual frameworks is crucial for researchers aiming to align their studies with the most effective structure. Conceptual frameworks in research vary primarily between theoretical and operational frameworks, each serving distinct purposes and suiting different research methodologies.

Theoretical vs Operational Frameworks

Theoretical frameworks are built upon existing theories and literature, providing a broad and abstract understanding of the research topic. They help in forming the basis of the study by linking the research to already established scholarly works. On the other hand, operational frameworks are more practical, focusing on how the study’s theories will be tested through specific procedures and variables.

  • Theoretical frameworks are ideal for exploratory studies and can help in understanding complex phenomena.
  • Operational frameworks suit studies requiring precise measurement and data analysis.

Choosing the Right Framework

Selecting the appropriate conceptual framework is pivotal for the success of a research project. It involves matching the research questions with the framework that best addresses the methodological needs of the study. For instance, a theoretical framework might be chosen for studies that aim to generate new theories, while an operational framework would be better suited for testing specific hypotheses.

Benefits of choosing the right framework include enhanced clarity, better alignment with research goals, and improved validity of research outcomes. Tools like Table Chart Maker can be instrumental in visually comparing the strengths and weaknesses of different frameworks, aiding in this crucial decision-making process.

Real-World Examples of Conceptual Frameworks in Research

Understanding the practical application of conceptual frameworks in research can significantly enhance the clarity and effectiveness of your studies. Here, we explore several real-world case studies that demonstrate the pivotal role of conceptual frameworks in achieving robust research conclusions.

  • Healthcare Research: In a study examining the impact of lifestyle choices on chronic diseases, researchers used a conceptual framework to link dietary habits, exercise, and genetic predispositions. This framework helped in identifying key variables and their interrelations, leading to more targeted interventions.
  • Educational Development: Educational theorists often employ conceptual frameworks to explore the dynamics between teaching methods and student learning outcomes. One notable study mapped out the influences of digital tools on learning engagement, providing insights that shaped educational policies.
  • Environmental Policy: Conceptual frameworks have been crucial in environmental research, particularly in studies on climate change adaptation. By framing the relationships between human activity, ecological changes, and policy responses, researchers have been able to propose more effective sustainability strategies.

Adapting conceptual frameworks based on evolving research data is also critical. As new information becomes available, it’s essential to revisit and adjust the framework to maintain its relevance and accuracy, ensuring that the research remains aligned with real-world conditions.

For those looking to visualize and better comprehend their research frameworks, Graphic Organizers for Conceptual Frameworks can be an invaluable tool. These organizers help in structuring and presenting research findings clearly, enhancing both the process and the presentation of your research.

Step-by-Step Guide to Creating Your Own Conceptual Framework

Creating a conceptual framework is a critical step in structuring your research to ensure clarity and focus. This guide will walk you through the process of building a robust framework, from identifying key concepts to refining your approach as your research evolves.

Building Blocks of a Conceptual Framework

  • Identify and Define Main Concepts and Variables: Start by clearly identifying the main concepts, variables, and their relationships that will form the basis of your research. This could include defining key terms and establishing the scope of your study.
  • Develop a Hypothesis or Primary Research Question: Formulate a central hypothesis or question that guides the direction of your research. This will serve as the foundation upon which your conceptual framework is built.
  • Link Theories and Concepts Logically: Connect your identified concepts and variables with existing theories to create a coherent structure. This logical linking helps in forming a strong theoretical base for your research (see the sketch below).
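
To make the three building blocks above concrete, here is a small, purely illustrative Python sketch that records a central hypothesis, the main variables, and theory-backed relationships between them. The variable and theory names are hypothetical (loosely echoing the educational example earlier) and are not tied to any particular tool.

```python
# Hypothetical sketch: a conceptual framework as variables plus directed,
# theory-backed relationships. All names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ConceptualFramework:
    hypothesis: str
    variables: set[str] = field(default_factory=set)
    # Each relationship: (cause, effect, supporting theory or source)
    relationships: list[tuple[str, str, str]] = field(default_factory=list)

    def link(self, cause: str, effect: str, theory: str) -> None:
        """Register the concepts (step 1) and the theory that links them (step 3)."""
        self.variables.update({cause, effect})
        self.relationships.append((cause, effect, theory))

# Step 2: a primary hypothesis drives the framework.
fw = ConceptualFramework(hypothesis="Digital tool use increases learning engagement.")
fw.link("digital tool use", "learning engagement", "constructivist learning theory")
fw.link("teacher training", "digital tool use", "diffusion of innovations")

for cause, effect, theory in fw.relationships:
    print(f"{cause} -> {effect}   [{theory}]")
```

A structure like this can then be diagrammed directly on a visual canvas, since each relationship corresponds to one arrow in the framework diagram.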

Visualizing and Refining Your Framework

Using visual tools can significantly enhance the clarity and effectiveness of your conceptual framework. Decision Tree Templates for Conceptual Frameworks can be particularly useful in mapping out the relationships between variables and hypotheses.

Map Your Framework: Utilize tools like Creately’s visual canvas to diagram your framework. This visual representation helps in identifying gaps or overlaps in your framework and provides a clear overview of your research structure.


Analyze and Refine: As your research progresses, continuously evaluate and refine your framework. Adjustments may be necessary as new data comes to light or as initial assumptions are challenged.

By following these steps, you can ensure that your conceptual framework is not only well-defined but also adaptable to the changing dynamics of your research.

Practical Tips for Utilizing Conceptual Frameworks in Research

Effectively utilizing a conceptual framework in research not only streamlines the process but also enhances the clarity and coherence of your findings. Here are some practical tips to maximize the use of conceptual frameworks in your research endeavors.

  • Setting Clear Research Goals: Begin by defining precise objectives that are aligned with your research questions. This clarity will guide your entire research process, ensuring that every step you take is purposeful and directly contributes to your overall study aims.
  • Maintaining Focus and Coherence: Throughout the research, consistently refer back to your conceptual framework to maintain focus. This will help in keeping your research aligned with the initial goals and prevent deviations that could dilute the effectiveness of your findings.
  • Data Analysis and Interpretation: Use your conceptual framework as a lens through which to view and interpret data. This approach ensures that the data analysis is not only systematic but also meaningful in the context of your research objectives. For more insights, explore Research Data Analysis Methods.
  • Presenting Research Findings: When it comes time to present your findings, structure your presentation around the conceptual framework. This will help your audience understand the logical flow of your research and how each part contributes to the whole.
  • Avoiding Common Pitfalls: Be vigilant about common errors such as overcomplicating the framework or misaligning the research methods with the framework’s structure. Keeping it simple and aligned ensures that the framework effectively supports your research.

By adhering to these tips and utilizing tools like 7 Essential Visual Tools for Social Work Assessment, researchers can ensure that their conceptual frameworks are not only robust but also practically applicable in their studies.

How Creately Enhances the Creation and Use of Conceptual Frameworks

Creating a robust conceptual framework is pivotal for effective research, and Creately’s suite of visual tools offers unparalleled support in this endeavor. By leveraging Creately’s features, researchers can visualize, organize, and analyze their research frameworks more efficiently.

  • Visual Mapping of Research Plans: Creately’s infinite visual canvas allows researchers to map out their entire research plan visually. This helps in understanding the complex relationships between different research variables and theories, enhancing the clarity and effectiveness of the research process.
  • Brainstorming with Mind Maps: Using Mind Mapping Software , researchers can generate and organize ideas dynamically. Creately’s intelligent formatting helps in brainstorming sessions, making it easier to explore multiple topics or delve deeply into specific concepts.
  • Centralized Data Management: Creately enables the importation of data from multiple sources, which can be integrated into the visual research framework. This centralization aids in maintaining a cohesive and comprehensive overview of all research elements, ensuring that no critical information is overlooked.
  • Communication and Collaboration: The platform supports real-time collaboration, allowing teams to work together seamlessly, regardless of their physical location. This feature is crucial for research teams spread across different geographies, facilitating effective communication and iterative feedback throughout the research process.

Moreover, the ability to Explore Conceptual Framework Examples directly within Creately inspires researchers by providing practical templates and examples that can be customized to suit specific research needs. This not only saves time but also enhances the quality of the conceptual framework developed.

In conclusion, Creately’s tools for creating and managing conceptual frameworks are indispensable for researchers aiming to achieve clear, structured, and impactful research outcomes.

Join the thousands of organizations that use Creately to brainstorm, plan, analyze, and execute their projects successfully.


Chiraag George is a communication specialist here at Creately. He is a marketing junkie who is fascinated by how brands occupy consumer mind space. A lover of all things tech, he writes a lot about the intersection of technology, branding, and culture at large.


The Community-Engaged Research Framework


  • Anmol Sanghera
  • Sabrina Avripas
  • Ashani Johnson-Turbes


This Equity Brief describes the Community-Engaged Research Framework and highlights strategies for applying the principles of the Framework in practice. The Framework consists of six principles, grounded in theory and practice, that inform community engagement. It serves as a conceptual model to guide researchers in authentically engaging community members and organizations in social and behavioral science research.

Introduction

This Equity Brief describes the Community-Engaged Research (CEnR) Framework, or “the Framework,” six principles for engaging communities throughout the full research process and strategies for applying the principles in practice. The Framework is grounded in theory and existing community engagement literature and frameworks (e.g., inclusive research, community-based participatory research, community-based participatory action research, community-directed research, emancipatory research). [1-6] It serves as a conceptual model for researchers and communities to use to authentically engage each other in social and behavioral science research.

Community-Engaged Research

Community-engaged research is an approach to inclusive and equitable research [i] that joins researchers with communities as partners throughout the full cycle of the research process. [1,5,7,8] Its emphasis is on the relationship between researchers and communities, not on the methodological approach used to conduct the research; teams [ii] can use qualitative, quantitative, or mixed methods. [7,8] Community-engaged research may improve the validity and relevance of data and results from the study, increase the data’s cultural relevance to community needs, enhance use of the data to create behavioral, social, services, or policy change, and increase the capacity of both communities and researchers. [7,8]

Exhibit 1: Continuum of Community Engagement in Research


Source: Adapted from the ATSDR Principles of Community Engagement and Wilder Involving Community Members in Evaluation: A Planning Framework

Community-engaged research exists along a continuum (Exhibit 1) that ranges from less community involvement (community as advisor) to more (community as equal partner or as leader). [1,5,7,8] Teams should strive to reach a level of shared leadership; however, time and resource constraints, historical mistrust, and competing priorities may make this level of engagement difficult in every project. [7-9]

The Community-Engaged Research Framework (Exhibit 2) consists of six principles for researcher and community partnerships to apply when engaging throughout the full research process. The inner circle displays the six principles essential to community engagement throughout each phase of the research process. The principles are not listed in any specific order and apply to all steps of the research process. These principles apply regardless of where a research study is on the continuum of community engagement. [1,7,8,10] The outer ring lists the phases of the research process, adapted from the Culturally Responsive Evaluation Framework, which centers both the theory and practice of “evaluation in culture” and ensures evaluation is responsive to values and beliefs. We have modified this evaluation framework to include the research process more broadly.

Exhibit 2: Community-Engaged Research Framework


© 2023 NORC. Source: Adapted from the Culturally Responsive Evaluation Framework and based on principles adapted from various frameworks for community-engaged research.

This section describes each principle and the actionable strategies teams can use to apply the principle throughout the research process. While we describe strategies within a specific principle, many are applicable across principles.

PRINCIPLE: AVOIDANCE OF HARM

All team members understand the immediate and broader implications of the research in context (e.g., community, society, systems) and actively avoid harming [iii] or marginalizing the communities in which the project is embedded. [5]

All team members recognize their own conscious and unconscious biases, how the research process can impact communities, and how the community and researchers benefit. [5,10-12] Avoidance of harm also requires listening to and respecting community expertise to better understand harm and strategies for avoidance. [5] Avoiding or doing no harm is especially important in research with historically and contemporarily marginalized and minoritized populations. [5,10]

Avoidance of harm prevents researchers from perpetuating a cycle of negative or exploitative interactions between communities and researchers, governments, and other systems, which has resulted in distrust among historically marginalized and minoritized communities. [5,8,9] It also helps teams develop appropriate protections to mitigate risks.

Actionable Strategies

Understand historical and contemporary contexts and their impact on community(ies). [5,10] Understanding communities’ context, needs, and sociopolitical environment is iterative; it requires remaining open, asking questions, conducting needs assessments, and stepping back when needed. [5,11]  

  • Define community and harm in partnership with communities, and understand key principles and trauma. [4,5] Understand how aspects of racism and other systems of oppression influence study design, implementation, and dissemination, and adapt research processes and analysis to this context. [5,13]
  • Critically deliberate on and pursue opportunities that address inequities due to race, ethnicity, class, caste, religion, sex, gender, sexual orientation, physical ability, and other social constructs. [4]
  • Actively challenge systems of oppression and injustice, including those lingering in some research traditions, by improving coordination, enhancing existing services, and identifying, mobilizing, and strengthening assets and resources that enhance community’s capacity to make decisions.

Implement strategies to mitigate harm. Researchers’ actions may unknowingly or unintentionally harm communities. 

  • Develop in partnership with communities or use existing frameworks [iv] to mitigate harm if there are adverse effects of research actions. 
  • Prioritize the expertise of communities most affected by the harm when developing solutions to mitigate harms and challenges. 

Maintain community-researcher relationships beyond one project or funding period. Allocate adequate resources to maintain relationships with communities over the long-term. Continually reflect, assess, and communicate to maintain and deepen relationships for long-term action and sustainability. Take part in community meetings and events, meet community leaders, and build and foster relationships.

PRINCIPLE: SHARED POWER AND EQUITY IN DECISION-MAKING

All team members participate collaboratively, equitably, and cooperatively in all decisions within each phase of the research process. [5,14]

Shared power and equity in decision-making ensures teams incorporate the experiences and needs of communities into every aspect of the research process, from conception to dissemination, and use of findings to inform policies, programs, and services. Teams establish a governance structure that includes the voices of communities directly impacted by the issue or topic they are researching and employ equitable structures of decision-making and contribution. [4-6,13,14] This approach helps overcome non-participatory governance structures that are researcher-led with little room for community input or involvement, which can result in research that does not address community needs or interests. [2,7]  

Create a diverse and inclusive team. Include people with subject matter expertise and lived experiences to ensure the team reflects the community in which the project is embedded. Identify gaps in expertise and engage additional partners to fill gaps. [6]

Establish governance structures that eliminate non-participatory power hierarchies that devalue community experience and expertise.

  • Create structures that promote equity and power sharing to overcome power differentials. Include avenues for shared decision-making (e.g., co-principal investigators, equal representation on steering committees). [4,5,13,14]
  • Overcome relational dynamics that limit opportunities for economically and socially marginalized and disadvantaged groups that are part of project teams. Treat all team members with integrity and respect (e.g., do not undermine or invalidate people’s experiences, thoughts, or ideas; practice active listening; be considerate of others’ time, schedules, language, and cultural norms). [4,8,9]

Discuss up front what communities want to contribute and ultimately get from the research. Collectively establish parameters for data ownership and dissemination of findings. Be inclusive of communities’ right to access their collective data and research protocols by giving data and results back to the communities in which the research takes place. [6,14]

PRINCIPLE: TRANSPARENCY & OPEN COMMUNICATION

Researchers and community partners communicate openly and honestly about power dynamics and decision-making processes around project objectives and research processes, resources and finances, challenges and limitations, data, research findings, and dissemination strategies. [4,5,15]

Transparency and open communication require that all team members know who is involved in the study and why; the intent and purpose of a project; how resources are shared and allocated; and the apparent and hidden potential benefits, harms, and limitations of a project. [4,5]  

Lack of transparency may result in lack of trust if communities feel like they are being taken advantage of or do not understand researchers’ motivations and intentions. [16-18] Transparency and open communication create more authentic working relationships, build trust, and help mend relationships between researchers and communities; build on avoidance of harm to reduce the risk of unintentionally harming communities; demonstrate integrity for working through difficult issues; and improve investment in the relationship to promote sustainability. [16-18]

Collaboratively establish open communication approaches and channels. 

  • Determine methods, cadence, and mode of communication and meeting coordination. 
  • Set schedules, establish points of contacts and preferred formats for communication, and set timelines and frequencies of communication. 

Minimize hierarchy in communication processes, “gatekeepers,” and barriers to lines of communication. Share information readily with each member of the team about research processes and objectives, roles, motives, resources and finances, progress, timelines, etc. at every stage and at every level of the project. [4,5,17]

PRINCIPLE: MUTUAL ACCOUNTABILITY & RESPECT

Develop an equitable structure of incorporating input into decision-making processes, promoting commitment, and addressing discord directly.

Teams collaboratively define roles and decision-making authority, establishing a shared vision for the partnership and the research. [14,15] They also continually assess progress towards achieving that vision throughout the decision-making process. Teams facilitate discussions that allow for respectful discord and a process for reconciling discord in every phase of the research process. [4]

Non-participatory research that lacks mutual accountability and respect risks members losing interest and investment in the work, leading to a lack of respect for values and needs. Mutual accountability and respect promote a more equitable collaboration and continued involvement of members throughout all phases of the research. [19]

Collectively develop charters and establish ground rules. 

  • Develop partnership arrangements (e.g., memorandum of understanding) that document the scope and nature of the partnership and align scope with each member’s capacity. Determine where on the continuum of engagement the study and relationships lie and set expectations for that relationship early and often.
  • Delineate responsibilities and expectations for each person on the team. [14,15] Set realistic commitments and provide opportunities to share progress towards those commitments. [19]
  • Develop a vision statement for the work and a charter for upholding and making progress towards that vision. Revise the charter as needed. [20]
  • Create and implement decision-making protocols to promote follow through and commitment to roles and responsibilities, ways to track progress on achieving the goals and vision of the partnership, and continually share lessons learned. [20]

Establish structures to overcome discord. 

  • Develop ground rules for reconciling discord. Make time and space for individuals to speak comfortably and express discord without fear. 
  • Acknowledge missteps, challenges, and limitations and work openly to address them. Be willing to adapt throughout the partnership and process. [5]

PRINCIPLE: ACCESSIBILITY & DEMONSTRATED VALUE

Value time and contributions of all team members and develop flexible and equitable methods of engagement. [5,13]

Teams demonstrate accessibility and demonstrated value through fair and equitable compensation, reasonable and thoughtful requests for time, and flexibility and accessibility in methods of engagement and communication. [5,15]

Non-participatory research may prioritize researcher views, perspectives, and methods of engagement. Participatory research recognizes that each team member brings their own unique perspectives and skills and adds valuable experiences, resources, and social networks to the research process. [19] It also considers each team member’s barriers to engagement and establishes approaches to overcome those barriers. Accessibility and demonstrated value promote greater acceptance of alternative perspectives and trust, inclusivity, and engagement.

Acknowledge all team members and value their expertise, skills, and contributions. 

  • Create a shared space that equally values all team members’ contributions and voices to facilitate co-design, co-creation, and shared decision-making, and to advance individual and collective development, growth, and learning. [15]
  • Integrate opportunities for relationship-building activities, informal networking, team building, and engagement outside of project activities. [19]
  • Ask how individuals and communities would like to be acknowledged and give credit for contributions. Create publication and data use guidelines.
  • Collaboratively determine adequate compensation structures for all members’ contribution and time in their preferred method and form of value. [15]

Demonstrate cultural responsiveness [v]  and inclusivity. 

  • Understand that engagement and relationship building take time. Allow sufficient time to establish relationships and account for the limited time some members have to engage in research. 
  • Practice cultural humility. [vi] Conduct self-reflection about your own biases, power, and privileges. [4,5,10] Ask questions and take time to understand local and cultural practices and nuances. [4,5]
  • Understand and address barriers to engagement. Provide accessible modalities of participation and access, including flexibility in meeting times and location, interpreters and translated materials, plain language materials, childcare, transportation, and technology support. [15] Conduct engagements at times and in places convenient to communities. Offer disability accommodations and be flexible with requests for time commitment and deadlines. [5,15]

PRINCIPLE: CAPACITY BRIDGING & CO-LEARNING

All team members learn from each other and engage in bi-directional feedback and conversation. [vii]

Capacity bridging and co-learning expands tools, resources, skills, and knowledge among all team members. [21–23] It also promotes sustainability beyond one research project or funding opportunity. [23] Embedded throughout the research process are educational opportunities for all team members to become agents for community change. Teams should work together to re-define the research process and relationship, not to transform community partners into researchers (unless that is the ask of community partners). [15] Non-participatory research that focuses solely on building the capacity of community members falls short in fostering bi-directional knowledge, skills, and capacity. Researchers should learn about historical and contemporary local culture and context, lived experiences of community partners, and community engagement strategies. [14]

Facilitate the reciprocal transfer of knowledge, skills, and capacity. [21,22] Maintain open dialogue, conduct and receive trainings, and bi-directionally share information, tools, and data. [14]

Translate knowledge into action. Document and share lessons learned about what works and what does not work about the process, and partnership successes, weaknesses, and challenges to further facilitate co-learning. [19] Understand how results from the study can improve programs, policies, or services to benefit both the advancement of science and the community. [19]

Affirm community strengths and assets. Conduct activities like community asset mapping and strengths, weaknesses, opportunities, and threats (SWOT) analyses, and practice positive marginality [viii] to understand each team member’s perspectives, knowledge, and expertise. Highlight and affirm community strengths. [4] Employ multiple methods and forums for community involvement beyond inclusion of community members on the project team (e.g., advisory boards, town halls, listening sessions, public comment).

The Community-Engaged Research Framework is a conceptual model that guides community engagement using the following six key principles: (1) Avoidance of harm; (2) Shared Power and Equity in Decision-Making; (3) Transparency and Open Communication; (4) Mutual Accountability and Respect; (5) Accessibility and Demonstrated Value; and (6) Capacity Bridging and Co-learning. 

Applying these principles and their associated actionable strategies facilitates conduct of inclusive and equitable research and evaluation that centers people’s cultures and community. Community-engaged research will vary depending on the community, project, client, capacity, and available funding and resources. The Community-Engaged Research Framework is a model that teams can tailor as needed to their specific research, needs, context, and communities under inquiry. This Equity Brief shares NORC’s Community-Engaged Research Framework. A subsequent equity brief will discuss strategies for putting the framework into practice.


Acknowledgements

This Community-Engaged Research Framework was made possible with funding from NORC’s Diversity, Racial Equity, and Inclusion (DREI) Research Innovation Fund. We thank the following:

  • Work group members: Manal Sidi, Anna Schlissel, Chandria Jones, James Iveniuk, Jocelyn Wilder, and Stefan Vogler for their contributions to framework development. 
  • NORC reviewers: Roy Ahn, Michelle Johns, Carly Parry, and Vince Welch. 
  • External reviewers: Carmen Hughes, Health IT Division Director, National Center for Primary Care, Morehouse School of Medicine and Hager Shawkat, Program Director, Sauti Yetu Center for African Women. 

[i] Inclusive & Equitable Research are “the methods of practice for Equity Science that is collaborative research embracing a range of theoretical frameworks and mixed methods that are focused on centering and empowering people and communities under inquiry and democratizing the research process to promote equity.” Johnson-Turbes, A., Jones, C., Johns, M.M., & Welch, V. (2022). Inclusive and Equitable Research Framework [Unpublished Manuscript]. Center on Equity Research, NORC at the University of Chicago, Chicago, Illinois.

[ii] A “team” consists of individuals, community-based organizations, researchers, evaluators, community leaders, and other key individual or entity partners, as determined by the project.

[iii] “Do No Harm,” a principle requiring healthcare providers to consider whether the risk of their actions will hurt a patient versus improve a patient’s condition, is central to healthcare. Its origins trace back to the Hippocratic Oath and to the approach Mary Anderson developed in the 1990s for working in conflict-affected situations. The term is widely used (and sometimes misused) in the design and conduct of research to ensure inclusivity and advance equity. In social science research, the interpretation of “do no harm” should also weigh the risk of harming an individual against the potential benefits of data collection, analysis, or results dissemination. As in medicine, the goal of research should be to advance equity and promote wellbeing, in line with beneficence. See Kinsinger FS. Beneficence and the professional’s moral imperative. J Chiropr Humanit. Published online 2009.

[iv] For example, Glover et al 2020’s Framework for Identifying and Mitigating the Equity Harms of COVID-19 Policy Interventions adapts the idea of “duty to warn” for research to inform communities about potential harm.

[v] Cultural responsiveness is the “ability to learn from and relate respectfully to people from your own and other cultures,” which promotes an increased level of comfort, knowledge, freedom, capacity, and resources. [23]

[vi] Cultural humility is the practice of self-evaluation and self-reflection to examine our own biases, acknowledge and shift power dynamics and imbalances, and take accountability for one’s own actions as well as those of one’s organization or institution. [12]

[vii] Capacity building refers to a research team building the capacity, knowledge, and skills of someone, usually a community member. [21] Capacity bridging expands this notion to acknowledge that one person can bring many things to their position on a team. [21] It also acknowledges the reciprocity of knowledge sharing between academics, researchers, community-based researchers, and individuals, so that all members are learning from each other. [21] This term was coined by the AHA Centre.

[viii] Positive Marginality promotes the idea that injustice is rooted in structural determinants rather than personal or community behavior. It promotes the idea that “belonging to a non-dominant cultural or demographic group can be advantageous rather than oppressive.” [24]

[1]  Wilder Foundation. Using a Framework for Community-Engaged Research. Published 2018. Accessed December 12, 2022.

[2] Nind M. What Is Inclusive Research. Bloomsbury Academic; 2014.

[3] International Collaboration for Participatory Health Research. What Is Participatory Health Research? (PDF) ; 2013. Accessed January 1, 2024.

[4] New York City Department of Health and Mental Hygiene. Community Engagement Framework (PDF) ; 2017. Accessed January 2, 2024.

[5] Michigan Public Health Institute (MPHI), Michigan Health Endowment Fund. Community Engagement & Collective Impact Phase 1 Environmental Scan (PDF). Accessed January 2, 2024.

[6] Black Health Equity Working Group. A Data Governance Framework for Health Data Collected from Black Communities in Ontario; 2021. Accessed January 2, 2024.

[7] McDonald MA. Practicing Community-Engaged Research (PDF). Duke Center for Community Research. Published 2009. Accessed December 11, 2022. 

[8] NIH Publication No. 11-7782. Principles of Community Engagement Second Edition (PDF) ; 2011. Accessed December 12, 2022. 

[9] Agency for Toxic Substances and Disease Registry. ATSDR’s Community Engagement Playbook (PDF). Accessed January 1, 2024.

[10] Ross L, Brown J, Chambers J, et al. Key Practices for Community Engagement in Research on Mental Health or Substance Use (PDF). Accessed December 11, 2022.

[11] Centers for Disease Control and Prevention. A Practitioners Guide for Advancing Health Equity: Community Strategies for Preventing Chronic Disease (PDF) ; 2013. Accessed December 12, 2022. 

[12] Hughes-Hassell S, Rawson CH, Hirsh K. Project READY: Reimagining Equity & Access for Diverse Youth, Module 8: Cultural Competence & Cultural Humility. University of North Carolina, Institute of Museum and Library Services. Accessed December 13, 2022. 

[13] NORC. Community Engagement Panel: Community Engagement through Participatory Analysis.

[14] Wilder J, Agboola F, Vogler S, Rugg G, Iveniuk J. Chicago Community Alliance: Guidelines for Creating Community Engaged Research.; 2022.

[15] Sheridan S, Schrandt S, Forsythe L, Hilliard T, Paez K, Advisory Panel on Patient Engagement (2013 inaugural panel). The PCORI Engagement Rubric: Promising Practices for Partnering in Research (PDF). Ann Fam Med. Published online 2017:165-170. Accessed February 14, 2024.

[16] Jamshidi E, Morasae EK, Shahandeh K, et al. Ethical Considerations of Community-based Participatory Research: Contextual Underpinnings for Developing Countries. Int J Prev Med. 2014;5(10):1328-1336.

[17] Jones Mcmaughan D, Dolwick Grieb SM, Kteily-Hawa R, Key KD. Promoting and Advocating for Ethical Community Engagement: Transparency in the Community-Engaged Research Spectrum (PDF). Accessed February 6, 2024. 

[18] Goodman LA, Thomas KA, Serrata JV, et al. Power through Partnerships: A CBPR Toolkit for Domestic Violence Researchers. National Resource Center on Domestic Violence ; 2017. Accessed February 6, 2024. 

[19] Marquez E, Smith S, Tu T, Ayele S, Haboush-Deloye A, Lucero J. A Step-by-Step Guide to Community-Based Participatory Research (PDF); 2022. Accessed February 6, 2024.

[20] Lo L, Aron LY, Pettit KLS, Scally CP. Mutual Accountability Is the Key to Equity-Oriented Systems Change: How Initiatives Can Create Durable Shifts in Policies and Practices. Background and Mutual Accountability Framework; 2021.

[21] AHA Centre. Capacity Bridging Fact Sheet ; 2018. 

[22] CDAC Network. The CDAC Capacity Bridging Initiative Facilitating Inclusion and Maximising Collaboration in CCE/AAP (PDF). Accessed February 6, 2024.

[23] Kozleski E, Harry B. Cultural, Social, and Historical Frameworks That Influence Teaching and Learning in U.S. Schools ; 2005.

[24] Streets VN. Reconceptualizing Women’s STEM Experiences: Building a Theory of Positive Marginality. Vol Dissertation ; 2016. Accessed December 13, 2022. 

NORC at the University of Chicago conducts research and analysis that decision-makers trust. As a nonpartisan research organization and a pioneer in measuring and understanding the world, we have studied almost every aspect of the human experience and every major news event for more than eight decades. Today, we partner with government, corporate, and nonprofit clients around the world to provide the objectivity and expertise necessary to inform the critical decisions facing society.



Processes for updating guidelines: protocol for a systematic review

  • Karen Cardwell – Roles: Conceptualization, Methodology, Writing – Original Draft Preparation
  • Joan Quigley – Roles: Conceptualization, Methodology, Supervision, Writing – Original Draft Preparation, Writing – Review & Editing
  • Barbara Clyne – Roles: Conceptualization, Methodology, Writing – Original Draft Preparation, Writing – Review & Editing
  • Barrie Tyner – Roles: Methodology, Writing – Original Draft Preparation, Writing – Review & Editing
  • Marie Carrigan – Roles: Methodology, Writing – Review & Editing
  • Susan Smith – Roles: Writing – Review & Editing
  • Máirín Ryan – Roles: Writing – Review & Editing
  • Michelle O'Neill – Roles: Conceptualization, Methodology, Supervision, Writing – Review & Editing

Keywords: Systematic review, guideline methodology, guideline update, prioritization methodology.

Abbreviations

CG, Clinical guideline; CIMO, Context, intervention, mechanism, outcome; GIN, Guidelines International Network; NCG, National Clinical Guideline; PRISMA-P, Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols.

Introduction

In Ireland, the National Clinical Effectiveness Committee (NCEC), established in September 2010, works to prioritise and quality assure National Clinical Guidelines (NCGs) so as to recommend them to the Minister for Health to become part of a suite of NCGs 1. Clinical guidelines (CGs) are systematically developed statements, based on a thorough evaluation of the evidence, to assist practitioners’ and service users’ decisions about appropriate healthcare for specific clinical circumstances across the entire clinical system 1. The recommendations contained within CGs are underpinned by evidence syntheses, that is, systematic reviews or adaptation of existing CGs and/or recommendations 2. Ongoing evolution of the scientific literature brings the emergence of new evidence which can change the findings of a systematic review and, as a consequence, change the recommendations made within a CG. As such, CGs need to be updated regularly to ensure the validity of the recommendations contained within them 3.

Updating CGs is an iterative process that is both resource-intensive and time-consuming. Typically, CGs are updated in accordance with a pre-defined time period. For example, the National Institute for Health and Care Excellence 4, the American College of Physicians 5 and the US Preventive Services Task Force 6 indicate that CGs should be updated every five years; in Ireland, the National Clinical Effectiveness Committee advises updating CGs every three years 1. However, it is also acknowledged that deciding to update a CG depends on factors other than pre-defined time periods, such as the volume and potential impact of new research published on the topic, the clinical burden of the topic, economic impact, and the resources available to update a guideline 1. For that reason, policy makers and other stakeholders are advocating for a move away from updating guidelines based on a pre-defined time period and towards updating guidelines based on prioritisation criteria, to ensure appropriate investment of resources 7.
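
Purely as an illustration of what prioritisation criteria could look like in practice (this is not a method proposed in this protocol or by the bodies cited above), the factors listed in the preceding paragraph could, for example, be combined into a simple weighted score:

```python
# Purely illustrative: combining hypothetical prioritisation criteria into a
# score for ranking guidelines for updating. Criteria ratings and weights are
# invented for illustration and are not drawn from the protocol.
def priority_score(new_evidence_volume: float, clinical_burden: float,
                   economic_impact: float, resources_available: float) -> float:
    """Each input is a 0-1 rating; higher scores suggest updating sooner."""
    return (0.4 * new_evidence_volume
            + 0.3 * clinical_burden
            + 0.2 * economic_impact
            + 0.1 * resources_available)

guidelines = {
    "Guideline A": priority_score(0.9, 0.7, 0.5, 0.8),
    "Guideline B": priority_score(0.2, 0.4, 0.3, 0.9),
}
for name, score in sorted(guidelines.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```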

Just as evolution of the scientific literature brings new clinical evidence that can impact the recommendations within a CG, it also brings advancement in methodologies used in development and updating of CGs 3 . One such advancement has been the emergence of rapid and living guidelines which aim to provide timely advice for decision-makers by optimising the guideline development process whereby individual recommendations can be updated as soon as new relevant evidence becomes available 8 . The use of rapid and living guidelines has been especially evident throughout the COVID-19 pandemic where the emphasis was on development and implementation of strategies to manage the rapidly evolving evidence base in response to a public health emergency 9 .

Previous systematic reviews on this topic have summarised guidance from methodological handbooks for updating clinical practice guidelines 3 , strategies for prioritisation of clinical guidelines that require updating 7 and prioritisation processes for the de novo development, updating or adaptation of guidelines 10 . However, the evidence synthesised was largely published over a decade ago and related to update processes developed for a particular disease-specific guideline or specifically to updating systematic reviews, not updating clinical guidelines.

Therefore, the purpose of this systematic review is to describe the most recent guideline update processes, including up-to-date prioritisation methods, used by international or national groups who provide methods guidance for developing and updating clinical guidelines. The focus of this systematic review is not on adaptation, contextualization, or de novo development of guidelines, but instead on updating processes for existing guidelines. This will support guideline development groups nationally and internationally in considering amendments to current update processes.

Details of this protocol have been submitted to the PROSPERO database (registration number: CRD42021274400). Any amendments made to the protocol will be acknowledged on PROSPERO and in any subsequent publications. This protocol outlines the proposed approach to achieve the stated purpose and has been informed by the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guidelines 11 .

Criteria for considering publications for this review

This systematic review protocol has been developed to answer the review question:

What are the most recent guideline update processes, including up-to-date prioritisation methods, used by international or national groups who provide methods guidance for developing and updating clinical guidelines?

The review question was formulated in line with the CIMO (Context, Intervention, Mechanism, Outcome) framework 12 , as presented in Table 1. The CIMO framework describes “the problematic Context, for which the design proposition suggests a certain Intervention type, to produce, through specified generative Mechanisms, the intended Outcome(s)” 12 . The context describes the environment within which change occurs; the intervention is what influences a change; the mechanism is triggered by the intervention; and this produces the outcome 12 .

Table 1. Context, Intervention, Mechanism, Outcome.

  ▪    Clinical guidelines require updating to maintain relevancy.
  ▪    International or national groups provide methods guidance (in published handbooks and/or peer-reviewed articles) for developing and updating clinical guidelines, as well as prioritising clinical guidelines for updating.
  ▪    Clinical guidelines considered for updating (includes full, modular, rapid updates).
  ▪    Tools or guidance available to support prioritisation.
  ▪    Description of update (or retirement) process (including roles and responsibilities at each stage)
             ○    types of update that exist
             ○    criteria used to determine if update necessary
             ○    process for retiring a guideline
             ○    criteria to prioritise which guideline is updated first
             ○    criteria to prioritise which clinical questions within a guideline are updated
             ○    evidence synthesis methodologies used to update clinical questions
             ○    dissemination of updated guideline
             ○    resources required to undertake update
             ○    differences between review process for updated guideline versus original guideline
             ○    differences between approval and endorsement process for updated guideline versus original guideline
  ▪    Evaluation of the process
             ○    usability and or critique of the updating methodology
             ○    timeliness, that is, specific processes that enable a more efficient and timely update.

The types of publications eligible for inclusion will be:

▪ methodological handbooks that provide updating guidance, including prioritisation methods, for clinical practice guidelines

▪ peer-reviewed articles that describe or have implemented updating guidance, including prioritisation methods.

Only publications from 2011 onwards will be considered for inclusion; publications from before 2011 will have been captured in the index documents 3 , 7 and so will not be included in this review.

Exclusion criteria. The following exclusion criteria will be applied:

▪ Disease-specific publications (handbooks and/or peer-reviewed publications which describe, or have implemented, guidance for updating disease-specific guidelines), due to issues relating to the transferability of guidelines developed for specific diseases.

▪ Editorials/commentaries/opinion pieces.

▪ Abstracts only.

▪ Animal studies.

▪ Non-English language publications.

Search methods for identification of studies

Due to changes in guideline development processes and methodologies over the last 10 years, the overall search span for this review will be the last 10 years (2011–2021). The primary data source for this review will be methodological handbooks which detail update processes, including prioritisation methods, used by international or national groups who provide methods guidance for developing and updating clinical guidelines. Through scoping searches, we identified a published systematic review of methodological handbooks that provide guidance for updating clinical practice guidelines 3 . This systematic review by Vernooij et al. 3 was published in 2014 and will be considered an index document: for methodological handbooks, data from 2011–2012 will be taken from Vernooij et al. 3 and data from 2013–2021 will be gathered through a new search of organisations’ websites and grey literature.

The secondary data source will be peer-reviewed articles which detail the development of, and/or evaluation of, guideline update processes. For peer-reviewed articles, data from 2011–2021 will be gathered through a database search. While peer-reviewed articles will not be the primary data source for this systematic review, they may serve as “sign-posts” to the handbooks and may also provide qualitative data relating to the usability of the handbooks and update processes.

In 2017, Martínez García et al. 7 published a systematic review of prioritisation processes for updating guidelines which focused on peer-reviewed articles rather than methodological handbooks. Data specific to prioritisation methods from 2011–2015 will be taken from Martínez García et al. 7 , and data from 2016–2021 will be gathered from the new database search.
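
As a purely illustrative sketch (not part of the protocol; the record structure and field names are assumptions), the way records from the index documents and the new searches are combined by date range might look as follows:

```python
# Illustrative only: combining records from index documents with records from new
# searches, using the date splits described above. Field names ("source", "year",
# "title") are assumed for illustration and are not specified in the protocol.

def within(record, start, end):
    """Keep a record only if its publication year falls in [start, end]."""
    return start <= record["year"] <= end

def combine_sources(vernooij, martinez_garcia, new_handbook_search, new_database_search):
    included = []
    # Methodological handbooks: 2011-2012 from Vernooij et al., 2013-2021 from new searches.
    included += [r for r in vernooij if within(r, 2011, 2012)]
    included += [r for r in new_handbook_search if within(r, 2013, 2021)]
    # Prioritisation methods: 2011-2015 from Martinez Garcia et al., 2016-2021 from the new database search.
    included += [r for r in martinez_garcia if within(r, 2011, 2015)]
    included += [r for r in new_database_search if within(r, 2016, 2021)]
    # Simple (title, year) de-duplication; real de-duplication would be done in EndNote.
    seen, unique = set(), []
    for r in included:
        key = (r["title"].lower(), r["year"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique
```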

Organisations. The websites of the organisations listed in Table 2 will be searched for relevant methodological handbooks. The organisations were chosen based on advice from the Clinical Effectiveness Unit of the Department of Health (which supports the work of the National Clinical Effectiveness Committee), identification of the organisation in previous systematic reviews on this topic, and the availability of their guidance in English.

Where guideline manuals are not found online, or where any data gaps are identified, these will be addressed by contacting organisations (via email) to gather information relating to guideline update processes (including prioritisation methods). Other relevant organisations identified during the searching process will also have their websites searched.

Table 2. Organisations that will be searched for relevant methodological handbooks.

Organisation name
Agency for Healthcare Research and Quality, USA
Australian National Health and Medical Research Council, Australia
Belgian Health Care Knowledge Centre, Belgium
Canadian Agency for Drugs and Technologies in Health, Canada
European Network for Health Technology Assessment
Finnish Institute for Health and Welfare, Finland
Guidelines International Network
Institute of Medicine, USA
McMaster GRADE centre, Canada
National Institute for Health and Care Excellence, UK
Ravijuhend, Estonia
Scottish Intercollegiate Guidelines Network, Scotland
National Board of Health and Welfare, Sweden
Public Health Agency of Sweden, Sweden
The Best Practice Advocacy Centre New Zealand, New Zealand
World Health Organization

Grey literature. Other sources of grey literature will be searched for relevant methodological handbooks. These are listed in Table 3 .

Table 3. Grey literature that will be searched for relevant methodological handbooks.

Grey literature source
Google (first 10 pages of results)
Open Grey
Reference chasing

Databases. The following databases will be searched for peer-reviewed articles using the search strategy defined in Supplementary file 1:

▪ Medline (EBSCO)

▪ Embase (OVID)

▪ The Cochrane Methodology Register

Selection of eligible publications

Methodological handbooks will be identified through searching the websites of eligible organisations (see Table 2 ) and through screening the methodological handbooks included in the index document 3 . This will be done by one reviewer, and relevant handbooks will be imported into EndNote (Version X8). Imported handbooks will be reviewed by a second reviewer to confirm their eligibility.

All citations identified from the collective search strategy (see Supplementary file 1), and through screening the peer-reviewed articles included in the index document 7 , will be exported to EndNote (Version X8) for reference management, where duplicates will be identified and removed. Using Covidence ( www.covidence.org ), two reviewers will independently review the titles and abstracts of the remaining citations to identify those for full-text review. The full texts will be obtained and independently evaluated by two reviewers applying the defined inclusion and exclusion criteria. Where disagreements occur, discussions will be held to reach consensus and where necessary, a third reviewer will be involved. Citations excluded during the full-text review stage will be documented alongside the reasoning for their exclusion and included in the PRISMA flow diagram.

Data extraction and management

Data will be extracted from methodological handbooks by one reviewer and checked for accuracy and omissions by a second reviewer. Where disagreements occur, discussions will be held to reach consensus and, where necessary, a third reviewer will be involved. Data extraction will be conducted in Microsoft Word, using a data extraction form (Supplementary file 2). The data extraction form will be piloted first and refined as necessary.

Relevant data to be extracted will include:

▪ the types of update that exist, for example partial or full

▪ criteria used to determine if an update is necessary

▪ process for retiring a guideline

▪ criteria used to prioritise which guideline to update first

▪ criteria used to prioritise clinical questions to be updated within a guideline

▪ evidence synthesis methodologies used to update the clinical guideline and clinical questions

▪ dissemination of updated clinical guideline

▪ resources required to undertake update

▪ process of reviewing the updated guideline

▪ process of approving and endorsing the updated guideline.

Peer-reviewed articles will not be the primary data source for this systematic review; the primary data source is most likely to be the methodological handbooks. However, in addition to signposting to methodological handbooks, and providing supplemental data relating to update and prioritisation processes, peer-reviewed articles may also provide usability and process evaluation data (relating to the associated handbook); these data will be extracted (see Supplementary file 2).

Quality assessment

Methodological handbooks will be quality assessed independently by two reviewers and any disagreements will be resolved by deliberation or, if necessary, a third reviewer. In the absence of an appropriate quality assessment tool specific to methodological handbooks or guidance, quality will be assessed using the GIN-McMaster Guideline Development Checklist, a checklist of items to consider during the development of guidelines. Specifically, we will use the six criteria relating to updating guidelines 13 . Reviewers will assess whether the methodological handbook covers the following areas:

1.   Addresses policy, procedure and timeline for routinely monitoring and reviewing whether the guideline needs to be updated.

2.   Addresses who will be responsible for routinely monitoring the literature and assessing whether new significant evidence is available.

3.   Sets the conditions that will determine when a partial or a full update of the guideline is required.

4.   Makes arrangements for guideline group membership and participation after completion of the guideline.

5.   Addresses plans for the funding and logistics for updating the guideline in the future.

6.   Addresses documentation of the plan and proposed methods for updating the guideline to ensure they are followed 13 .
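
As an illustrative sketch only (not part of the protocol; the handbook judgements shown are placeholders), the six GIN-McMaster updating items could be recorded per handbook as yes/no judgements and summarised as a simple coverage count:

```python
# Illustrative only: recording whether a handbook addresses each of the six
# GIN-McMaster updating items and summarising coverage. Judgements below are
# placeholders, not findings of the review.

GIN_MCMASTER_UPDATING_ITEMS = [
    "Policy, procedure and timeline for monitoring/reviewing the need to update",
    "Responsibility for routinely monitoring the literature",
    "Conditions determining a partial or full update",
    "Arrangements for guideline group membership after completion",
    "Plans for funding and logistics of future updates",
    "Documentation of the plan and proposed updating methods",
]

def coverage(assessment):
    """Return (number of items addressed, total items) for one handbook."""
    addressed = sum(1 for item in GIN_MCMASTER_UPDATING_ITEMS if assessment.get(item, False))
    return addressed, len(GIN_MCMASTER_UPDATING_ITEMS)

example = {item: True for item in GIN_MCMASTER_UPDATING_ITEMS[:4]}  # placeholder judgements
print(coverage(example))  # (4, 6)
```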

Methodological quality of peer-reviewed articles will be independently assessed by two reviewers. Depending on study design, an appropriate version of the Newcastle-Ottawa Scale 14 or the Appraisal tool for Cross-Sectional studies (AXIS) 15 will be used. The tools will be piloted first on a small number of included studies, and modifications made if needed, before standardising for the remaining studies. Any disagreements will be resolved by deliberation or, if necessary, a third reviewer.

Data synthesis

As the main data to be extracted for this review are descriptive in nature, a narrative synthesis will be undertaken. Data will be summarised under the following headings:

▪ Description of update (or retirement) process (including roles and responsibilities at each stage)

▪ Evaluation of the process.

Dissemination

The PRISMA checklist will be used to report findings of the review. We will communicate the findings to the NCEC to inform updating processes in Ireland. Findings will also be communicated by publication in a peer-reviewed journal, and by participation in scientific meetings and national and international conferences.

Study status

Agreement on the study protocol; searching of organisations, grey literature and databases; data extraction; and quality assessment are complete. Data synthesis is ongoing.

Updating clinical guidelines is an iterative process that is both resource intensive and time-consuming. This systematic review will summarise the most recent updating and prioritisation processes for clinical guidelines. The findings will support guideline development organisations nationally and internationally to ensure appropriate investment of resources. It will support them in considering and/or modifying their current methodologies for updating clinical guidelines. This will be of particular interest in light of the new and updated methodologies that have emerged during the COVID-19 pandemic.

Data availability

Underlying data.

No data are associated with this article.

Extended data

Figshare: Supplementary files, https://doi.org/10.6084/m9.figshare.16723063 .

This project contains the following extended data:

- Supplementary file 1: Search strategy

- Supplementary file 2: Data extraction tables

Reporting guidelines

Figshare: PRISMA-P Checklist for “Processes for updating guidelines: protocol for a systematic review”, https://doi.org/10.6084/m9.figshare.16723021 .

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

Authors’ contribution

KC: Conceptualization, Methodology, Writing – Original Draft Preparation; JQ: Conceptualization, Methodology, Supervision, Writing – Original Draft Preparation; Writing – Review & Editing; BC: Conceptualization, Methodology, Writing – Original Draft Preparation; Writing – Review & Editing; BT: Methodology, Writing – Original Draft Preparation; Writing – Review & Editing; MC: Methodology, Writing – Review & Editing; SS: Writing – Review & Editing; MR: Writing – Review & Editing; MON: Conceptualization, Methodology, Supervision, Writing – Review & Editing.

Acknowledgements

The authors would like to thank the Clinical Effectiveness Unit of the Department of Health and acknowledge the support of the Health Technology Assessment directorate at HIQA.

  • 1. Government of Ireland: How to develop a national clinical guideline: a manual for guideline developers. Updated 2019 Jan; cited 2021 Jul 29.
  • 2. Sharp MK, Tyner B, Awang Baki DAB, et al.: Evidence synthesis summary formats for clinical guideline development group members: a mixed-methods systematic review protocol [version 1; peer review: 1 approved with reservations]. HRB Open Res. 2021; 4: 76.
  • 3. Vernooij RW, Sanabria AJ, Solà I, et al.: Guidance for updating clinical practice guidelines: a systematic review of methodological handbooks. Implement Sci. 2014; 9: 3.
  • 4. National Institute for Health and Care Excellence: Developing NICE guidelines: the manual (PMG20). Updated 2020 Oct 15; cited 2021 Sep 28.
  • 5. Qaseem A, Kansagara D, Lin JS, et al.: The Development of Clinical Guidelines and Guidance Statements by the Clinical Guidelines Committee of the American College of Physicians: Update of Methods. Ann Intern Med. 2019; 170(12): 863–70.
  • 6. US Preventive Services Task Force: Procedure Manual. Updated 2021 May; cited 2021 Sep 28.
  • 7. Martínez García L, Pardo-Hernández H, Sanabria AJ, et al.: Guideline on terminology and definitions of updating clinical guidelines: The Updating Glossary. J Clin Epidemiol. 2018; 95: 28–33.
  • 8. Akl EA, Meerpohl JJ, Elliott J, et al.: Living systematic reviews: 4. Living guideline recommendations. J Clin Epidemiol. 2017; 91: 47–53.
  • 9. Garritty C, Gartlehner G, Nussbaumer-Streit B, et al.: Rapid review methods guidance aids in Cochrane’s quick response to the COVID-19 crisis. In: Collaborating in response to COVID-19: editorial and methods initiatives across Cochrane. Cochrane Database of Systematic Reviews. 2021; 28–31.
  • 10. El-Harakeh A, Lotfi T, Ahmad A, et al.: The implementation of prioritization exercises in the development and update of health practice guidelines: A scoping review. PLoS One. 2020; 15(3): e0229249.
  • 11. Moher D, Shamseer L, Clarke M, et al.: Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015; 4(1): 1.
  • 12. Denyer D, Tranfield D, van Aken JE: Developing Design Propositions through Research Synthesis. Organization Studies. 2008; 29(3): 393–413.
  • 13. GIN-McMaster: GIN-McMaster Guideline Development Checklist. Updated 2014 Jun 02; cited 2021 Jul 29.
  • 14. Wells GA, Shea B, O'Connell D, et al.: The Newcastle-Ottawa Scale (NOS) for assessing the quality of non-randomised studies in meta-analyses. Updated 2001; cited 2021 Jul 29.
  • 15. Downes MJ, Brennan ML, Williams HC, et al.: Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open. 2016; 6(12): e011458.


Open peer review.

Is the rationale for, and objectives of, the study clearly described?

Is the study design appropriate for the research question?

Are sufficient details of the methods provided to allow replication by others?

Are the datasets clearly presented in a useable and accessible format?

Not applicable

Competing Interests: No competing interests were disclosed.

Reviewer Expertise: Public health, health services research, clinical guideline development

  • Reference is made to “national clinical guidelines” in the abstract and the first paragraph of the introduction. However, the remit of the proposed work seems broader. Guidelines may be in place for jurisdictions that are sub-national, and for professional organizations. In addition, I was left uncertain whether preventive health care and public health were within scope.  
  • In the second paragraph of the introduction, signal detection methods have been proposed for triggering updates of evidence syntheses that underpin guidelines. See for example Newberry SJ et al. 1 , Shekelle PG et al.  2 . Briefly showing the readers that these have been considered in developing the searches in the systematic review will increase confidence in the evidence as to whether or not these methods have been used; if they have been used, how much and what has been the experience; if there is no evidence that they have been used, is there any on why not.  
  • I was surprised that there was no mention of the concept of living guidelines – see Akl EA et al. 3 (the same comment as for the signal detection methods; the COVID-19 pandemic has catalyzed interest)   or the AGREE statement 4 ( https://www.agreetrust.org/agree-ii/ ), which includes a point about updating procedures (domain 3, point 14). The AGREE II instrument is a tool to assess the quality and reporting of practice guidelines and is internationally used. Having guidelines based on up-to-date evidence is obviously an important domain of guideline quality, and I think it's important to make the link to this in the review.  
  • Table 1 – what belongs under “Mechanism” and under “Outcome” is unclear.  
  • Table 2 – consideration might be given to Canadian and US Task Forces on Preventive Health Care, the UK National Screening Committee , and the IARC Cancer Prevention Handbooks (e.g. cervical cancer screening is covered in Vol. 10 and the upcoming Vol. 18; breast cancer in Vols. 7 and 15). This links back to the comments I made on the scope of the review. If preventive health is within scope, these strike me as well recognized international sources to consider.  
  • There has been a large investment in clinical research in China, and there are many clinical guidelines 5 . I think this work needs at least to be recognized. There are Chinese databases, e.g. CNKI and WanFang (see Cohen JF et al. 6 ). China accounts for more than 18% of the world’s population. There are also large proportions in India and in African countries, but there is less investment in clinical research. There has been a lot of activity in clinical guideline development in China 5 . I do not have comprehensive knowledge of guidelines in China, but having seen a few, my sense is that they consider Chinese evidence, evidence from other jurisdictions, and also consider Chinese traditional medicine, so they have a mixture of unique attributes and those common to other jurisdictions. I’m not sure how much the Chinese guidelines will be picked up in the search strategy proposed in the protocol, which is why I suggested the Chinese databases. It may be that the investigators decide that investigating Chinese guidelines other than what would be picked up in the search strategy presented in the protocol is out of scope, but I think it's important to be clear in reporting this decision and the rationale for it. The guideline development in India 7 and Africa 8 (this reference on hypertension guidelines may be helpful) may in large part be based on adapting guidelines from other jurisdictions, and thus be more likely to be picked up in the strategy in the protocol.

Reviewer Expertise: Epidemiology and Prevention of Congenital Anomalies; Psychosis and affective psychosis; Cancer Epidemiology and Prevention; Molecular and Human Genome Epidemiology; Evidence synthesis related to public health; Health services research

Reviewer Reports (01 Nov 21)

  • Julian Little, University of Ottawa, Ottawa, Canada
  • Brenda Wilson, Memorial University of Newfoundland, St. John's, Canada



Simplifying Review of Research Project Grant Applications

RESOURCES AVAILABLE

Join us to learn how the simplified review framework will affect new and existing funding opportunities.

NIH is implementing a simplified framework for the peer review of the majority of competing research project grant (RPG) applications, beginning with submissions with due dates of January 25, 2025. The simplified peer review framework aims to better facilitate the mission of scientific peer review – identification of the strongest, highest-impact research – by:

  • Focusing reviewers on central questions such as: should the proposed research project be conducted, and can it be conducted?
  • Mitigating the effect of reputational bias by refocusing the evaluation of investigator/environment to within the context of the proposed research.
  • Reducing reviewer burden by shifting policy compliance activities to NIH staff.

Learn more about the NIH peer review process and how we developed the simplified peer review framework.

Simplified Peer Review Framework

Learn about the simplified review framework that will apply to most research project grant activity codes for application due dates of January 25, 2025 or later. 

Reviewer Guidance (coming in early 2025)

Reviewers will be provided training and guidance materials in Spring 2025 in time for the first review meetings that include the simplified peer review framework, in Summer 2025.

Applicant Guidance

Although the simplified review framework has little impact on what is included in an application, it does have significant impact on the funding opportunities used to apply. This page provides practical guidance for applicants navigating funding opportunities through this transition.

Have questions about the simplified peer review framework? We have answers.

Training and Resources

Presentations, webinars, and more to help you understand the simplified peer review framework.

Notices, Statements, and Reports 

Find Guide Notices, blog posts, press releases, and more on the simplified review framework.

Those with questions about the simplified framework for NIH peer review may contact [email protected] .

This page last updated on: October 19, 2023


Antecedents and Consequences of App Update: An Integrated Research Framework

  • Conference paper
  • First Online: 04 September 2018

  • Hengqi Tian & Jing Zhao

Part of the book series: Lecture Notes in Business Information Processing ((LNBIP,volume 328))

Included in the following conference series:

  • Workshop on E-Business


E-commerce firms now compete intensively on mobile applications (apps). The transparency of the digital environment has made customers and competitors major external driving forces of app updates. However, app-related studies mainly focus on how to succeed in the hyper-competitive app market and how platform governance influences app evolution, overlooking the interaction among customers, competitors, and the focal firm that shapes continuous app updates. Moreover, extant studies on app updates have drawn inconsistent conclusions regarding the impact of update frequency on market performance. We therefore propose an integrated research framework to explore antecedents and consequences of app updates. We test it empirically by tracking customer reviews, update notes, and ranks of 20 iOS apps in the travel category in China over 60 months. The results indicate that extreme sentiment expressed by customers urges the focal firm to update frequently, and that the focal firm incorporates useful customer feedback when releasing a major update. Interestingly, we find that the focal firm is reluctant to release superfluous updates or to perform major updates when more high-ranking competitors have updated earlier. Our findings also confirm the dual role of the total number of apps the focal firm owns in facilitating update frequency and volume while constraining the days between two subsequent releases. Lastly, frequent updates induce a higher degree of rank volatility, while long update intervals decrease ranks. Our study has important implications for firms seeking to succeed in the fierce competition in mobile commerce.
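
As a rough, hypothetical sketch of the kind of panel analysis the abstract describes (the file name, column names, and model specification below are assumptions for illustration and do not reproduce the authors' actual models), update behaviour could be related to rank volatility with app fixed effects:

```python
# Hypothetical sketch only: relating update behaviour to rank volatility in a
# monthly app panel. Column names and specification are illustrative and are not
# taken from the paper.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: app_id, month, update_count, review_sentiment,
# competitor_updates, rank_volatility.
panel = pd.read_csv("app_panel.csv")

# App fixed effects via dummies; cluster-robust standard errors by app.
model = smf.ols(
    "rank_volatility ~ update_count + review_sentiment + competitor_updates + C(app_id)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["app_id"]})
print(model.summary())
```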





Acknowledgments

This research has been supported by grants from the National Natural Science Foundation of China under Grant 71372174 and 71702176 and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) under Grant G1323541816.

Author information

Authors and affiliations.

Research Center for Digital Business Management, School of Economics and Management, China University of Geosciences, Wuhan, 430074, People’s Republic of China

Hengqi Tian & Jing Zhao


Corresponding author

Correspondence to Jing Zhao .

Editor information

Editors and affiliations.

W. Cho, University of Seoul, Seoul, Korea (Republic of)
M. Fan, University of Washington, Seattle, WA, USA
M. J. Shaw, University of Illinois, Urbana-Champaign, IL, USA
B. Yoo, Seoul National University, Seoul, Korea (Republic of)
H. Zhang, Georgia Institute of Technology, Atlanta, GA, USA


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper.

Tian, H., Zhao, J. (2018). Antecedents and Consequences of App Update: An Integrated Research Framework. In: Cho, W., Fan, M., Shaw, M., Yoo, B., Zhang, H. (eds) Digital Transformation: Challenges and Opportunities. WEB 2017. Lecture Notes in Business Information Processing, vol 328. Springer, Cham. https://doi.org/10.1007/978-3-319-99936-4_6


The Current State of Diversity, Equity, and Inclusion

  • Posted on: November 17, 2021
  • Topic: K–12 Education

District leaders around the nation are struggling to address systemic failures and inequities affecting the students and communities they serve. These challenges only grew more glaring following the COVID-19 pandemic, making diversity, equity, and inclusion (DEI) more critical than ever.

To help districts in their efforts to ensure system-wide DEI, Hanover Research has created the DEI Dashboard, an extensive collection of data that compiles findings from 45 surveys conducted with districts around the country.

Our report, The Current State of Diversity, Equity, and Inclusion , draws from this data to provide a framework for assessing and addressing inequities, with an approach grounded in continuous improvement.

Download the report for a framework to:

  • Diagnose and develop a structure for discussing diversity, equity, and inclusion
  • Explore root causes for inequities
  • Suggest reforms and monitor changes in the system


Embed Equity in Your Continuous Improvement Plans

After thousands of interactions with school districts, we believe the most effective means for assessing and addressing inequities in a school district is through a thorough equity audit.

Download the report for insights on assessing and addressing system-wide inequities drawn from our extensive, members-only DEI Dashboard.


Policy Update: Congressional Interest in NIH Reform Continues with New Framework, Engagement Opportunities

Lewis-Burke Associates has provided campus with a report on continuing congressional interest in NIH reform, including a new framework and engagement opportunities. In recent months, congressional interest in policy changes for biomedical research, specifically focused on the National Institutes of Health (NIH), has increased substantially. Several factors have contributed to this interest, including scrutiny of the agency’s response to the COVID-19 pandemic, a perceived lack of transparency and oversight of certain research areas, and the fact that NIH has not been subject to formal reauthorization in almost twenty years. Please read the report for more information.

  • Open access
  • Published: 30 April 2020

Ten recommendations for using implementation frameworks in research and practice

  • Joanna C. Moullin 1 , 2 ,
  • Kelsey S. Dickson 2 , 3 ,
  • Nicole A. Stadnick 2 , 4 , 5 ,
  • Bianca Albers 6 , 7 ,
  • Per Nilsen 8 ,
  • Sarabeth Broder-Fingert 9 ,
  • Barbara Mukasa 10 &
  • Gregory A. Aarons 2 , 4 , 5  

Implementation Science Communications volume 1, Article number: 42 (2020)


Recent reviews of the use and application of implementation frameworks in implementation efforts highlight the limited use of frameworks, despite the value in doing so. As such, this article aims to provide recommendations to enhance the application of implementation frameworks, for implementation researchers, intermediaries, and practitioners.

Ideally, an implementation framework, or multiple frameworks, should be used prior to and throughout an implementation effort. This applies both to implementation science research studies and to real-world implementation projects. To guide this application, we outline ten recommendations for using implementation frameworks across the implementation process. The recommendations are presented in the rough chronological order of an implementation effort; however, we understand these may vary depending on the project or context: (1) select a suitable framework(s), (2) establish and maintain community stakeholder engagement and partnerships, (3) define the issue and develop research or evaluation questions and hypotheses, (4) develop an implementation mechanistic process model or logic model, (5) select research and evaluation methods, (6) determine implementation factors/determinants, (7) select and tailor, or develop, implementation strategy(s), (8) specify implementation outcomes and evaluate implementation, (9) use a framework(s) at the micro level to conduct and tailor implementation, and (10) write the proposal and report. Ideally, a framework(s) would be applied to each of the recommendations. For this article, we begin by discussing each recommendation within the context of frameworks broadly, followed by specific examples using the Exploration, Preparation, Implementation, Sustainment (EPIS) framework.

The use of conceptual and theoretical frameworks provides a foundation from which generalizable implementation knowledge can be advanced. On the contrary, superficial use of frameworks hinders the ability to use, learn from, and build sequentially upon them to progress the field. By following the ten recommendations provided, we hope to assist researchers, intermediaries, and practitioners to improve the use of implementation science frameworks.


Contributions to the literature

Provision of recommendations and concrete approaches to enhance the use of implementation science frameworks, models, and theories by researchers, intermediaries, and practitioners

Increase the ability of implementation researchers to produce generalizable implementation knowledge through comprehensive application of implementation frameworks, models, and theories

Increase implementation intermediaries’ and practitioners’ ability to use implementation frameworks as a shared language to familiarize stakeholders with implementation and as practical tools for planning, executing, and evaluating real-world implementation efforts

Provision of a worksheet to assist the application of our recommendations for comprehensive framework use

Provision of a checklist to assist in reviewing ways in which the selected framework(s) are used

There is great value in effectively using implementation frameworks, models, and theories [ 1 , 2 ]. When used in research, they can guide the design and conduct of studies, inform the theoretical and empirical thinking of research teams, and aid interpretation of findings. For intermediaries and practitioners, they can provide shared language to familiarize stakeholders with implementation and function as practical tools for planning, executing, and evaluating real-world implementation efforts. Implementation frameworks, models, and theories have proliferated, and there are concerns that they are not used optimally to substantiate or advance implementation science and practice.

Theories are generally specific and predictive, with directional relationships between concepts, making them suitable for hypothesis testing as they may guide what may or may not work [ 3 ]. Models are also specific in scope; however, they are more often prescriptive, for example, delineating a series of steps. Frameworks, on the other hand, tend to organize, explain, or describe information and the range of, and relationships between, concepts, including some which delineate processes, and therefore are useful for communication. While we acknowledge the need for greater use of implementation frameworks, models, and potentially even more so theories, we use the term frameworks to encompass the broadest organizing structure.

Suboptimal use of frameworks can impact the viability and success of implementation efforts [ 4 ]. This can result in wasted resources, erroneous conclusions, specification errors in implementation methods and data analyses, and attenuated reviews of funding applications [ 5 ]. There can be a lack of theory or poorly articulated assumptions (i.e., program theory/logic model), guiding which constructs or processes are involved, operationalized, measured, and analyzed. While guidance for effective grant applications [ 4 ] and standards for evaluating implementation science proposals exist [ 6 ], the poor use of frameworks goes beyond proposals and projects and can slow or misguide the progress of implementation science as a field. Consistent terms and constructs aid communication and synthesis of findings and therefore are keys to replication and to building the evidence base. In real-world practice, the suboptimal use of implementation frameworks can lead stakeholders to misjudge their implementation context or develop inappropriate implementation strategies. Just as important, poor use of frameworks can slow the translation of research evidence into practice, and thereby limit public health impact.

Frameworks are graphical or narrative representations of the factors, concepts, or variables of a phenomenon [ 3 ]. In the case of implementation science, the phenomenon of interest is implementation. Implementation frameworks can provide a structure for the following: (1) describing and/or guiding the process of translating effective interventions and research evidence into practice (process frameworks), (2) analyzing what influences implementation outcomes (determinant frameworks), and (3) evaluating implementation efforts (outcome frameworks) [ 2 ]. Concepts within implementation frameworks may therefore include the following: the implementation process, often delineated into a series of phases; factors influencing the implementation process, frequently referred to as determinants or barriers and facilitators/enablers; implementation strategies to guide the implementation process; and implementation outcomes. The breadth and depth to which the concepts are described within frameworks vary [ 7 ].

Recent analyses of implementation science studies show suboptimal use of implementation frameworks [ 1 , 8 ]. Suboptimal use of a framework is where it is applied conceptually, but not operationalized or incorporated throughout the phases of an implementation effort, such as limited use to guide research methods [ 1 , 9 ]. While there is some published guidance on the use of specific frameworks such as the Theoretical Domains Framework (TDF) [ 10 ], RE-AIM [ 11 ], the Consolidated Framework for Implementation Research (CFIR) [ 12 ], the Exploration, Preparation, Implementation, Sustainment (EPIS) framework [ 1 ], and combined frameworks [ 13 ], there is a need for explicit guidance on the use of frameworks generally. As such, this article provides recommendations and concrete approaches to enhance the use of implementation science frameworks by researchers, intermediaries, and practitioners.

Recommendations for using implementation framework(s)

Ideally, implementation frameworks are used prior to and throughout an implementation effort, which includes both implementation research and real-world implementation projects. Below, we present ten recommendations for the use of implementation frameworks, ordered roughly chronologically across an implementation effort. The sequence is not prescriptive, to accommodate flexibility in project design and objectives; the order of recommendations one to three in particular may vary or occur concurrently. The key is that all recommendations are considered and that ideally a framework(s) would be applied to each recommendation. This may mean one framework is used across all recommendations or multiple frameworks are employed. We recognize that this may be unrealistic when working under real-world resource constraints, and instead strategic selection of frameworks may be necessary (e.g., based on the greatest needs or strongest preferences of stakeholders).

Depending on the stage in the implementation process, it may not be necessary to apply all the recommendations. The full list is suitable for implementation efforts that will progress at least to the implementation stage, whereby implementation strategies are being employed. However, for those who are early in the exploration phase of implementation or perhaps at the point of trying to establish implementation determinants, they may not be able to produce process or logic models or articulate mechanisms yet. This does not mean a framework is not very informative, but the order of the recommendations would vary and the full list may only be applicable as the implementation project progresses in future work.

We begin by discussing each recommendation within the context of frameworks broadly, followed by specific examples using the EPIS framework. The EPIS framework acknowledges the dynamic nature of implementation by defining important outer context, inner context, bridging, and innovation factors that influence or are influenced by an implementation effort throughout the phases of implementation. These applied examples are based on the results of a recent systematic review [ 1 ], and the collective experience of the co-authors applying the EPIS framework in national and international implementation efforts. In addition, we provide two tools that summarize each recommendation along with key questions to consider for optimal framework application within research, evaluation, and practice projects (Additional files 1 and 2 ).

To ensure that the recommendations are clear, practical, and comprehensive, we invited an international stakeholder panel representing different perspectives (e.g., researcher, NGO administrator, intermediary, provider/physician) to review the recommendations and consider their utility applied to their implementation efforts. Our four-member panel included at least one stakeholder from each target audience for this article, including implementation researchers, whose work spans diverse contexts, populations, and academic disciplines; evidence-based practice (EBP); intermediaries; and practitioners. Stakeholders reported extensive applied and training experience using multiple frameworks (e.g., CFIR and the Capability, Opportunity, Motivation (COM-B) component of the Behaviour Change Wheel (BCW)). Specifically, the goal of the stakeholder input was to critically review the paper, making any additions, edits, and comments, by concentrating their thinking on (i) Would they be able to apply these recommendations as written to their implementation work (proposals, studies, projects, evaluations, reports, etc.)? (ii) Would they, as a researcher, administrator, intermediary, or provider, know what to do to use an implementation framework for each recommendation? In addition, we felt one area that needed extra attention was the two tools, which aim to assist readers in applying the recommendations. The panel was asked to test the tools with any projects that they or a colleague had, to ensure the tools were functional. The tools were refined according to their suggestions.

Select a suitable framework(s)

The process for selecting implementation framework(s) for a particular implementation effort should consider the following: (i) the purpose of the framework (describing/guiding the implementation process, analyzing what influences outcomes [barriers and facilitators], or evaluating the implementation effort); (ii) the level(s) included within the framework (e.g., provider, organization, system); (iii) the degree of inclusion and depth of analysis or operationalization of implementation concepts (process, determinants [barriers and facilitators], strategies, evaluation); and (iv) the framework’s orientation, which includes the setting and type of intervention (i.e., EBP generally, a specific intervention, a guideline, a public health program being implemented) for which the framework was originally designed [ 7 ]. Reviews and websites of implementation frameworks provide lists of potential options [ 1 , 2 , 14 , 15 ], and the Theory Comparison and Selection Tool (T-CaST) defines specific framework selection criteria [ 16 ]. Frameworks may be evaluated against these four criteria to see if they fit the implementation effort’s purpose (aims and objectives) and context (setting in which implementation is to occur). If for example a project was aiming to implement an educational program in a school setting, a framework that includes factors associated with the healthcare system or patient characteristics would not be a good fit.
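
As a purely illustrative sketch (the candidate frameworks, criteria names, and scores below are placeholders, not ratings drawn from the literature or from T-CaST), such a comparison could be operationalized as a simple scoring matrix across the four criteria:

```python
# Illustrative only: ranking candidate frameworks against the four selection
# criteria described above. All scores are placeholders for demonstration.
CRITERIA = ["purpose_fit", "levels_fit", "concept_depth", "orientation_fit"]

candidates = {
    "EPIS": {"purpose_fit": 3, "levels_fit": 3, "concept_depth": 2, "orientation_fit": 3},
    "CFIR": {"purpose_fit": 2, "levels_fit": 3, "concept_depth": 3, "orientation_fit": 2},
    "RE-AIM": {"purpose_fit": 3, "levels_fit": 2, "concept_depth": 2, "orientation_fit": 2},
}

def total(scores):
    """Sum a framework's scores across all criteria."""
    return sum(scores[c] for c in CRITERIA)

for name, scores in sorted(candidates.items(), key=lambda kv: total(kv[1]), reverse=True):
    print(name, total(scores))
```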

It may be necessary and desirable to use multiple frameworks. Confusing matters, some frameworks fit neatly within one framework category, while others cross multiple framework “types.” For example, EPIS is both a process as well as a determinant framework with its focus on inner and outer context determinants across the phases of implementation. Furthermore, frameworks include different concepts and operationalize these to varying degrees. Put simply, some frameworks are more general, while others are more context or intervention specific; some frameworks are more comprehensive than others. Selecting a given framework can simultaneously expand and limit consideration of factors and processes likely to be important in an implementation effort. For expansion, frameworks can enumerate issues that might not have been considered for a given effort. On the other hand, limiting consideration of implementation issues to only the theories, constructs, and/or processes identified in a given framework may attenuate or curtail the degree to which factors affecting implementation are considered. Thus, it is sometimes desirable to use multiple frameworks for specific purposes, or alternatively expand on a current framework. For example, researchers may use a framework for understanding and testing determinants (e.g., EPIS [ 17 ], CFIR [ 18 ], TDF [ 10 , 19 , 20 ]) and another for evaluating outcomes (e.g., RE-AIM [ 21 ] or Proctor’s [ 22 ]).

Finally, we recommend that framework users invest in knowledge of the service setting in which they are working. This includes knowing, or seeking involvement from, stakeholders who understand the outer context (e.g., community norms and culture, policy and government processes) as well as the inner context (e.g., organizational culture and climate, employee expectations, and attitudes towards innovations). Framework use in isolation, without a deep understanding of context-specific issues, can result in a mismatch between the selected framework and its applicability in research and practice. Furthermore, it is vital to seek permission from both inner and outer context leadership.

EPIS application

A mixed-methods developmental project aimed to systematically adapt and test an EBP for youth with Autism Spectrum Disorder in publicly funded mental health settings and to develop a corresponding implementation plan [ 23 ]. EPIS was selected by the research team because of its focus on public service settings: it specifies multi-level inner and outer contextual factors and the bridging factors between them, addresses the implementation process, and emphasizes innovation fit. EPIS was therefore an apt fit for the project aims and context. In combination with the EPIS framework, and as one example of a bridging factor, a community partnership model [ 24 ] was also applied to inform the community-academic partnership integrated throughout this study.

Establish and maintain community stakeholder engagement and partnerships

Stakeholder engagement is an integral component of implementation [ 25 , 26 ]. There are growing calls for [ 27 ], and examples of, embedded research models, such as practice-based research networks, learning health systems, and implementation laboratories [ 28 ], which foster collaborations between researchers, implementers, and policy-makers integrated within a healthcare system to conduct research. Frameworks help inform discussions about the types and specific roles of stakeholders who should be engaged and the timing of that engagement. Stakeholders should include not only those who are proximally involved in EBP service delivery and receipt (consumers, providers, and administrative staff), but also those who are distally involved in oversight, the structuring of organizations, legislative action, policy design, and the financing of EBP delivery [ 29 ]. Engaging stakeholders across the multiple levels of an implementation ecosystem (e.g., policy/legislative, funders, community, organizational, provider, client/patient) is recommended best practice for implementation researchers [ 30 ] and is reflected in the multi-level nature of most implementation frameworks. Implementation frameworks generally encourage stakeholder engagement before funding is secured and its continuation during the implementation effort and into future implementation iterations and adaptations; a framework can bring clarity to these engagement roles and their timing. Stakeholders can be engaged in the application of an implementation framework by, for example, involving them in defining local health system needs and selecting EBP(s) and/or implementation strategies in the EPIS implementation phase, as these activities enhance their collaboration with, and ownership of, the implementation effort [ 26 ].

Several implementation and improvement science frameworks explicitly include stakeholder engagement as a key construct or process (e.g., EPIS framework, PRECEDE-PROCEED, Plan-Do-Study-Act cycles, Promoting Action on Research Implementation in Health Services [PARIHS]). Additionally, there are pragmatic tools drawn from frameworks that can facilitate stakeholder engagement. For example, key criteria within the aforementioned T-CaST tool include the extent to which stakeholders are able to understand, apply, and operationalize a given implementation framework, and the degree to which the framework is familiar to stakeholders [ 16 ]. Methods, such as concept mapping [ 31 ], nominal group technique [ 32 ], and design thinking [ 33 ], may be used to guide stakeholder engagement meetings and define the issue or gap to be addressed. Other frameworks, such as the BCW [ 34 ], EPIS [ 17 ], or CFIR [ 18 ], may be used to prioritize and define implementation outcomes, determinants, and strategies together with stakeholders.

The EPIS framework explicitly highlights the importance of engaging multiple levels of stakeholders to influence implementation efforts longitudinally and contextually, from the initial identification of a need through to sustainment of the EBP delivered to address that need. While the duration or depth of stakeholder engagement is not explicitly prescribed in EPIS, when combined with, for example, a designated partnership engagement model [ 24 ], EPIS has been shown to enable the conceptualization and characterization of the roles and levels of stakeholder engagement (system leaders, program managers, providers) within system-driven implementation efforts [ 35 ].

Define issue and develop research or evaluation questions and hypotheses

Use of frameworks to inform the articulation of an implementation need (i.e., a research-practice gap) and the development of practice-related or research questions and hypotheses has the potential to optimize implementation efforts and outcomes [ 2 ]. Specifically, frameworks facilitate the framing and formulation of implementation questions, including those related to needs assessment (e.g., what is the clinical or implementation issue needing to be addressed?), process (e.g., what phases will the implementation undergo to translate an intervention into practice, or when is an organization ready to implement a new intervention?), implementation effectiveness (e.g., do the proposed implementation strategies work in the local context?), mechanisms of success (e.g., did an increase in implementation climate improve implementation intentions?), and associated impact on outcomes (e.g., how did the implementation effort perform in terms of adoption or reach?). Ideally, these questions, whether related to research projects or to practice issues that providers want to resolve, should be closely linked with the selected framework to maximize impact. For example, selecting the BCW as a guiding framework requires that a question or issue be described in behavioral terms and, in many cases, refined to be more specific. Being specific about the problem to be addressed entails being precise about the behaviors you are trying to change and whose behavior is involved [ 36 ].

Frameworks also provide guidance for translating the implementation literature into research or evaluation questions. For example, the literature indicates that education, used alone as a single implementation strategy, is not sufficient for successful implementation. An implementation framework can help identify the implementation determinants that remain to be addressed and therefore guide the selection of additional implementation strategies. This can be challenging given the presence of multiple factors spanning different levels that vary across contexts and phases of implementation. Further, frameworks contextualize and provide critical links between theory and the individual experience gained through practice, such as supporting the perceived value of targeting leadership in promoting the adoption and use of effective interventions or research evidence [ 37 ].

Finally, and perhaps most relevant to many implementation efforts, frameworks provide explicit guidance and justification for proposed hypotheses to be tested, strengthening proposals, projects, trials, and products, both research and practice based [ 2 , 4 ]. Despite their explanatory power, frameworks are used to explicitly guide hypothesis formation in only a minority of studies, even within implementation efforts that use theory to guide other aspects of the research process [ 38 , 39 , 40 ]. Increased use of frameworks to inform implementation questions and hypotheses is therefore sorely needed.

EPIS application

Work by Becan and colleagues [ 41 ] provides an example of a comprehensive application of the EPIS framework to inform hypothesis development in their US National Institute on Drug Abuse-funded study, Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS). JJ-TRIALS utilized EPIS to inform the identification of outer and inner context determinants, the measures to assess those determinants, predictions based on theory, and the tracking of progress through the EPIS phases, including what constitutes the transition from each phase to the next. Specifically, the trial applied EPIS to develop four tiers of questions related to the following: (1) the differential effect of two implementation strategies, (2) the factors that impacted and supported the transition across implementation phases, (3) the impact of this process on key implementation outcomes, and (4) tracking progress through the EPIS phases. For example, relevant determinants were identified at the outer context (system) level and the inner context (organizational) level. Specific hypotheses were then developed to test how determinants (e.g., independent variables) influenced mechanisms (e.g., mediators/moderators) and ultimately “targets” (e.g., dependent variables), that is, implementation outcomes and outcomes with clinical relevance.

Develop implementation program theory or logic model

Within research and practice projects, implementation frameworks can inform the program logic that describes the anticipated relationships between inputs, activities, outputs, and implementation and client outcomes, thereby supporting the explicit formulation of key assumptions and the outlining of crucial project details.

In addition, implementation frameworks guide the design of models for testing, for example, mediation and moderation of the various influences on the process and outcomes of implementation. Despite an increasing emphasis on understanding key mechanisms of change in implementation [ 4 , 42 , 43 ], few evaluations examine implementation change mechanisms and targets [ 44 ]. Change mechanisms explain how or why underlying processes create change, whereas targets are defined as the identified focus or end aim of implementation efforts [ 45 ]. From a public health perspective, mechanism and target evaluation is critical to facilitate replication and scaling up of implementation protocols to more effectively change healthcare practice and achieve broader public health impact. Mechanism measurement and evaluation is also critical to increasing the rigor and relevance of implementation science [ 46 ]. Frameworks can go beyond simple evaluation of key determinants to highlight fundamental single-level (e.g., organizational characteristics, individual adopter characteristics) and cross-cutting mechanisms of change that span contexts, settings, and levels [ 4 ]. Frameworks also illuminate the complex and evolving nature of determinants, mechanisms, and targets, which vary across implementation phases. As an example, leadership may determine organizational climate during implementation in one service setting or context but serve as a change mechanism impacting implementation targets during the exploration phase in a different setting. Frameworks provide the necessary roadmap for understanding these complex associations by offering prescriptive guidance for the evolving nature of these determinants.
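
One way to make the determinant-mechanism-target logic concrete is the standard single-mediator model from the general mediation literature; the notation below is illustrative only and is not prescribed by this article. Here X is a determinant (e.g., leadership), M a hypothesized mechanism (e.g., implementation climate), and Y an implementation target (e.g., fidelity):

```latex
% Single-mediator model (standard product-of-coefficients notation; illustrative only).
\begin{align}
  M &= i_M + aX + e_M \\
  Y &= i_Y + c'X + bM + e_Y
\end{align}
% Indirect (mediated) effect = ab; total effect c = c' + ab.
% A moderator Z would enter through an interaction term such as M x Z.
```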

The EPIS framework was applied to predict implementation leadership, implementation climate, and provider attitudes as key mechanisms of change in two linked Hybrid Type 3 cluster randomized trials testing the effectiveness of multi-level implementation strategies targeting leadership and attitudes (Brookman-Frazee and Stahmer [ 47 ]; see Fig. 1). Consistent with the explanatory nature of EPIS, this work highlights the interconnected nature of these mechanisms, with leadership hypothesized both as a mechanism impacting outcomes and as a predictor (determinant) of further mechanisms, such as provider attitudes, during implementation [ 47 ].

Fig. 1: TEAMS intervention, mechanisms, and outcomes [ 47 ]

Determine research and evaluation methods (overall design, data collection, data analysis)

The distinct aims and purposes of implementation efforts require distinct evaluation designs, such as mixed-methods, hybrid effectiveness-implementation, and quality improvement approaches, including formative evaluations and Plan-Do-Study-Act cycles [ 48 ]. Implementation frameworks should be used to inform the development of such designs across all phases, from the broader study design down to measurement and analysis.

In the design of an evaluation, frameworks should be used to inform decisions about which constructs to assess, what data to collect, and which measures to use. In this process, frameworks can help to identify and/or expand the implementation determinants assumed to impact the implementation process at different levels and across multiple phases, for consideration or measurement. They can also help to operationalize the constructs of importance to an evaluation and to identify suitable measures. Fortunately, there is expanding work in implementation science to develop and catalog tools tied to existing frameworks that aid in this application (e.g., EPIS, see episframework.com/measures [ 1 ]; CFIR, see cfirguide.org/evaluation-design [ 49 ]; RE-AIM, see re-aim.org/resources-and-tools [ 50 ]).
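
As a small illustration of this step, the mapping below pairs a few inner-context constructs with published measures cited later in this article (the ILS, ICS, and MISII); the selection shown is an illustrative assumption, and any real project would tailor the construct list and measure choices to its own framework and setting.

```python
# Illustrative construct-to-measure map; the measures are cited in this article,
# but their pairing with constructs here is an example, not a prescription.
construct_measures = {
    "implementation leadership": "Implementation Leadership Scale (ILS)",
    "implementation climate": "Implementation Climate Scale (ICS)",
    "implementation intentions": "Measure of Innovation-Specific Implementation Intentions (MISII)",
}

for construct, measure in sorted(construct_measures.items()):
    print(f"- {construct}: {measure}")
```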

For the collection and analysis of qualitative data, frameworks such as EPIS or CFIR provide well-developed and freely available data analytic tools, including pre-populated coding templates and data aggregation matrices [ 1 , 49 ]. Again, the use of framework-informed tools permits better alignment of the concepts examined with the broader implementation science literature. Analytically, frameworks can inform decisions about the sequencing and directionality of implementation processes and strategies. Beyond identifying and analyzing key implementation determinants, theory should be applied alongside frameworks to describe important implementation determinants (e.g., independent variables), implementation mechanisms (e.g., mediators), and their associated impacts on implementation targets (e.g., dependent variables) across the phases of the implementation process.
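
A minimal sketch of what a framework-informed coding template might look like is shown below, organized by the EPIS domains named in this article; the specific codes and the example excerpt are hypothetical and do not reproduce the downloadable templates referenced above.

```python
# Hypothetical framework-informed qualitative coding template keyed to EPIS domains.
coding_template = {
    "outer context": ["policy and funding environment", "inter-organizational networks"],
    "inner context": ["leadership", "organizational climate", "staffing resources"],
    "bridging factors": ["community-academic partnership", "intermediaries/purveyors"],
    "innovation factors": ["innovation fit", "adaptability"],
}
phases = ["exploration", "preparation", "implementation", "sustainment"]

# A coded excerpt can then be stored as (phase, domain, code, quote):
coded_excerpt = ("preparation", "inner context", "leadership",
                 "Our program manager built protected time for the training.")
```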

The EPIS framework was used to inform the development of key informant interviews and focus groups, and the data coding and analytic procedures, to capture the key outer context, inner context, and innovation factor influences across the implementation phases of two large-scale community effectiveness trials [ 51 ]. Within the trials themselves, EPIS informed the selection of quantitative inner context organizational and provider measures [ 52 ]. Such thorough and integrated framework use is needed to build a cumulative body of knowledge about effective implementation strategies.

Determine implementation determinants

Implementation frameworks often include several implementation determinants (i.e., barriers and enablers) that have been found to influence implementation outcomes [ 1 , 2 ]. Such lists of potential determinants are useful for exploratory work, for example, identifying key factors for applying an intervention in a particular context. This may occur early in an implementation process to guide implementation strategy selection or EBP adaptation, or further along to aid in the development of an implementation plan or in tailoring implementation strategies to support the EBP implementation or adaptation. The implementation science literature includes numerous examples of using frameworks in this manner across health contexts (see Birken et al. (2017) [ 13 ]; Helfrich et al. (2010) [ 53 ]). Examples of relevant determinant frameworks include the EPIS [ 1 , 17 ], CFIR [ 18 ], integrated checklist to identify determinants of practice (TICD checklist) [ 54 ], TDF [ 19 ], and BCW [ 36 ].

Another important reason for assessing implementation determinants using a theoretical framework is to specify the targets of the implementation effort. It is neither possible nor necessary to target all determinants. Often, due to funding or other constraints, it is important to consider individual beneficiaries and community or government needs when prioritizing which determinants to target. For example, the BCW methodology guides users to conduct a thorough behavioral diagnosis using COM-B and then to prioritize which behaviors to address. In research, changes to pre-specified determinants included in the protocol require amendments to be documented, justified, and possibly approved by a research ethics committee. Prospective framework application may also reveal different determinants and aid the selection of particular influencing factors to target in subsequent implementation studies.

The Leadership and Organizational Change for Implementation (LOCI) intervention employed the EPIS framework to select key implementation determinants to test in a large cluster randomized controlled trial [ 55 ]. In this study, implementation leadership from first-level team leaders/managers, organizational climate and culture, implementation climate, and psychological safety climate were selected as determinants to test their influence on the fidelity of the EBP being implemented. In addition to informing the implementation model and implementation strategy, EPIS was used to code qualitative data and to select quantitative survey measures.

Select and tailor, or develop, an implementation strategy(s)

Implementation frameworks are necessary for selecting, tailoring, or developing implementation strategies. Defined as methods or techniques to aid the adoption, implementation, sustainment, and scale-up of evidence-based public health or clinical interventions [ 8 ], implementation strategies are the linchpin of successful implementation efforts. Implementation strategies vary in purpose and complexity, ranging from discrete strategies [ 56 ] such as audit and feedback [ 57 ] to multifaceted, and often branded, strategies that integrate at least two discrete strategies, such as the Leadership and Organizational Change for Implementation (LOCI) intervention [ 37 ], the Availability, Responsiveness and Continuity model (ARC) [ 58 ], Replicating Effective Programs (REP) [ 59 ], Getting to Outcomes (GTO) [ 60 ], and the Quality Implementation Framework (QIF) [ 61 ]. Powell and colleagues have outlined four primary methods for matching implementation strategies to barriers (conjoint analysis, intervention mapping, concept mapping, group model building) [ 62 ]. Each approach is highly participatory, though the approaches vary in their strengths and weaknesses. Additionally, comprehensive framework application can help address identified priorities (e.g., methods for tailoring strategies, specifying, and testing mechanisms) for enhancing the impact of implementation strategies [ 63 ]. Taxonomies of strategies, such as the Expert Recommendations for Implementing Change (ERIC) discrete strategies list [ 64 ], the behavior change techniques (BCT) taxonomy [ 65 ], and the Effective Practice and Organisation of Care (EPOC) taxonomy [ 66 ], are useful to promote uniform communication and synthesis across implementation science.

Following the identification and prioritization of important barriers and facilitators (see recommendation 5), an implementation framework can support the process of matching determinants to implementation strategies. For example, the PARIHS framework [ 67 ] can be used to identify critical evidentiary (e.g., patient experience, information from the local setting) and contextual (e.g., leadership, receptive context) elements that may impact EBP implementation. This evidentiary and contextual analysis is then used to develop or tailor implementation strategies, primarily focused on facilitation as the anchoring approach. Use of frameworks like PARIHS to guide selection and tailoring of implementation strategies may be particularly suitable for implementation efforts and settings that have a strong need for facilitation to support the engagement and participation of a wide range or number of stakeholders.
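
To illustrate the matching step, the sketch below pairs locally identified determinants with candidate strategies drawn from the ERIC compilation mentioned above; the pairings themselves are illustrative assumptions, since real matching would use the participatory methods described by Powell and colleagues.

```python
# Illustrative (not validated) mapping from locally identified barriers to
# candidate strategies from the ERIC compilation; a real project would derive
# this mapping through participatory methods such as concept or intervention mapping.
barrier_to_strategies = {
    "low leadership support": ["Recruit, designate, and train for leadership",
                               "Identify and prepare champions"],
    "weak implementation climate": ["Alter incentive/allowance structures",
                                    "Audit and provide feedback"],
    "limited provider knowledge": ["Conduct educational meetings",
                                   "Provide ongoing consultation"],
}

def candidate_strategies(identified_barriers):
    """Return the union of candidate strategies for barriers found in a local assessment."""
    out = set()
    for barrier in identified_barriers:
        out.update(barrier_to_strategies.get(barrier, []))
    return sorted(out)

print(candidate_strategies(["low leadership support", "limited provider knowledge"]))
```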

The EPIS framework and the Dynamic Adaptation Process (DAP) were used in a cluster randomized trial to implement school nursing EBPs in US high schools to reduce LGBTQ adolescent suicide [ 68 ]. The DAP [ 69 ] is a multicomponent implementation strategy directly drawn from the EPIS framework. The DAP uses an iterative, data-informed approach to facilitate implementation across each phase of EPIS. A critical and core component of the DAP is the creation of an Implementation Resource Team, a multi-stakeholder collaborative designed to support implementation and data interpretation and to explicitly address adaptations during the implementation process. Within this study, the EPIS framework and the DAP were used to (1) inform the constructs measured in the multi-level needs assessment during the exploration phase, (2) support the identification of the stakeholders and activities involved in the Implementation Resource Team that was developed in the preparation phase, (3) guide the tracking and integration of adaptations to the EBP strategy training and delivery during the implementation phase, and (4) inform the constructs and measurement of the implementation outcomes in the sustainment phase.

Specify implementation outcomes and evaluate implementation

Implementation evaluation may include evaluation of progression through implementation stages, formative and summative evaluation of factors and strategies, as well as evaluation of the degree of implementation success as reflected in implementation outcomes. These may be measured at micro (individual), meso (team or organization), and macro (system) levels. Regardless of the particular scope and design of implementation evaluations, they should be informed by implementation frameworks.

As outlined by Nilsen et al. [ 2 ], there are a few implementation frameworks that have the express purpose of evaluating implementation, including RE-AIM [ 21 ], PRECEDE-PROCEED [ 70 ], and frameworks by Stetler et al. [ 71 ], Moullin et al. [ 72 ], and Proctor et al. [ 22 ]. Furthermore, there are particular implementation process measures, such as the Stages of Implementation Completion (SIC), which may be used as both a formative and a summative tool to measure the rate and depth of implementation [ 73 ]. In addition, there is an increasing number of measures of implementation determinants [ 74 , 75 ] (e.g., implementation leadership [ 76 ], implementation climate [ 77 , 78 ], or implementation intentions [ 79 ]). Changes in these factors over time may serve as indicators of implementation success. Beyond the dedicated evaluation frameworks noted above, other frameworks also include evaluation elements to varying degrees [ 7 ]. For example, the conceptual framework for sustainability of public health programs by Scheirer and Dearing [ 80 ], the framework of dissemination in health services intervention research by Mendel et al. [ 81 ], and the integrated 2-phase Texas Christian University (TCU) approach to strategic system change by Lehman [ 82 ] include comprehensive evaluation of the influencing factors depicted in the corresponding frameworks. Frameworks that do not explicitly include measurement components can be used alongside evaluation frameworks to determine which measures to select for each influencing factor chosen for study and for the nominated implementation outcomes.

While the EPIS framework is not primarily an evaluation framework, its website includes a list of measures for quantitative analysis and definitions for qualitative work. After selecting implementation determinants and developing specific implementation questions and/or hypotheses, measures should be selected for the chosen determinants as mediators of implementation success. In addition, measures of movement through the EPIS phases and measures of implementation outcomes (e.g., fidelity) may be included. Both JJ-TRIALS (Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System) [ 83 ] and the LOCI study [ 37 ] provide examples of using EPIS in implementation evaluation. From a practice perspective, teams should measure at baseline and periodically throughout the project to determine how process measures and outcomes have changed over time. These evaluations help determine the rate of progress, which can inform other recommendations, such as recommendations 5 and 7.
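
As a simple illustration of measuring at baseline and at follow-ups, the sketch below computes change in a hypothetical implementation climate score over a project; the timepoints and values are invented for illustration and do not come from JJ-TRIALS, LOCI, or any other study.

```python
# Hypothetical repeated measurement of an implementation climate score (0-4 scale);
# values are invented to illustrate computing change from baseline.
scores = {"baseline": 2.1, "month 6": 2.6, "month 12": 3.2}

baseline = scores["baseline"]
change_from_baseline = {t: round(v - baseline, 2)
                        for t, v in scores.items() if t != "baseline"}
print(change_from_baseline)  # {'month 6': 0.5, 'month 12': 1.1}
```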

Use a framework(s) at the micro level to conduct and tailor implementation

Implementation is a dynamic, context-specific process. Each layer of a context (e.g., organization, profession, team, individual) requires ongoing tailoring of implementation strategies. Implementation frameworks, therefore, should be used to guide the overarching implementation plan and, at the micro level, processes such as site-specific implementation team creation, barrier and facilitator assessment, implementation planning, and goal setting. This may be done by formatively evaluating implementation determinants, either qualitatively or quantitatively as described above, and then using the results to select or adapt implementation strategies for the particular context. Stetler et al. [ 71 ] provide four progressive yet integrated stages of formative evaluation. Another method is to conduct barrier and facilitator assessments at different levels within the implementation context and subsequently tailor the implementation strategies accordingly. For example, coaching calls may reveal that a range of different behavior change techniques [ 34 ] is needed, suited to each provider or leader.

In the aforementioned LOCI study, the goal was to improve first-level leaders’ leadership and implementation climate to facilitate EBP adoption and use [ 55 ]. Baseline and ongoing 360-degree evaluations (in which individuals, such as mid-level managers, rate themselves and receive ratings from their boss and staff) were performed, and implementation plans were subsequently adapted for each agency and team leader based on the data and on emergent issues in the implementation process. This process was broadly informed by the EPIS framework’s focus on innovation fit and its emphasis on leadership across levels. The climate embedding mechanisms [ 84 ] were then used in combination with EPIS to formulate the individual, leader-specific implementation plans.

Write the proposal and report

Documenting an implementation effort, whether as a research proposal, a scientific article, or a practice report, is key for any project. As part of this documentation, detailing the use of the implementation framework(s) is vital for the implementation project to be replicable and analyzable. Use of the selected framework(s) should be documented across the proposal and report, including the description or selection of appropriate methods to assess the selected implementation determinants. Furthermore, as outlined by Proctor et al. [ 8 ], implementation strategies should be named, defined, and specified according to seven components that enable their measurement and replication: actor, action, action target, temporality (when), dose (duration and frequency), outcomes, and theory/justification. Similarly, outcomes should be named, specified, measured, and reported. Again, the work of Proctor and colleagues [ 22 ] provides a useful taxonomy for classifying and reporting types of implementation research outcomes; it also includes guidance on level of analysis, measurement, and theoretical basis, and maps the salience of each outcome onto the phases of implementation.
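
The seven specification components listed above can be captured as a simple structured record when drafting a proposal or report. The sketch below is one possible representation; the example values are hypothetical and are included only to show the kind of detail each component calls for.

```python
# Structured record of the seven strategy-specification components from Proctor et al. [ 8 ].
# The example values are hypothetical, not drawn from a real study.
from dataclasses import dataclass

@dataclass
class StrategySpecification:
    actor: str
    action: str
    action_target: str
    temporality: str            # when the strategy is used
    dose: str                   # duration and frequency
    implementation_outcome: str
    justification: str          # theory or empirical rationale

example = StrategySpecification(
    actor="external facilitator",
    action="monthly coaching calls with site supervisors",
    action_target="supervisors' implementation leadership",
    temporality="preparation and implementation phases",
    dose="60 minutes, monthly, for 12 months",
    implementation_outcome="fidelity to the EBP",
    justification="leadership is hypothesized to shape implementation climate",
)
```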

Consistent with these recommendations are existing standards and guidelines to improve transparent and accurate reporting of implementation studies, such as the Standards for Reporting Implementation Studies (StaRI; Pinnock et al. [ 85 ]). Ideally, incorporating these standards will strengthen the comprehensive use and reporting of frameworks to inform the formulation, planning, and reporting of implementation studies. Our recommendation is to explicitly document the use of implementation frameworks in research proposals, scientific outputs, and evaluation reports. To aid this process, Additional file 1 provides the Implementation Framework Application Worksheet, which offers examples of key questions to assist implementation scientists and practitioners in applying our recommendations for comprehensive framework application. Finally, Additional file 2 provides the Implementation Framework Utilization Checklist to assist in thinking through and reviewing the ways in which the selected framework(s) are used. In combination with the Worksheet, the Checklist may inform revisions to a project (proposal, active project, or dissemination materials) and facilitate comprehensive framework application. The Checklist may also serve to document framework utilization (e.g., for inclusion in project proposals, reports, and manuscripts).

An example of EPIS framework reporting is the “ATTAIN” (Access to Tailored Autism Integrated Care) study protocol [ 86 ]. Within this example, the authors display an adapted EPIS framework to highlight the unique outer context (e.g., American Academy of Pediatrics recommendation for mental health screening) and inner context (e.g., organizational and technological capacity for innovation) determinants relevant to the phases of implementation included in the study (Exploration through Implementation). In addition, the authors describe how the unique contextual determinants and proposed implementation strategies (e.g., inter-organizational relationships among stakeholders) were conceptualized and how they will be measured across the study’s lifespan.

Conclusions

The use of implementation frameworks provides a structure for describing, guiding, analyzing, and evaluating implementation efforts, thus facilitating the advancement of generalizable implementation science knowledge. Superficial use of frameworks hinders researchers’ and practitioners’ learning and slows efforts to progressively advance the field. By following the ten recommendations provided, we hope researchers, intermediaries, and practitioners will bolster their use of implementation science frameworks.

Availability of data and materials

Not Applicable

Abbreviations

ARC: Availability, Responsiveness and Continuity model

ATTAIN: Access to Tailored Autism Integrated Care

BCW: Behaviour Change Wheel

CFIR: Consolidated Framework for Implementation Research

COM-B: Capability, Opportunity, Motivation - Behaviour

DAP: Dynamic Adaptation Process

EBP: Evidence-Based Practice

EPIS: Exploration, Preparation, Implementation, Sustainment framework

ERIC: Expert Recommendations for Implementing Change

GTO: Getting to Outcomes

JJ-TRIALS: Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System

LOCI: Leadership and Organizational Change for Implementation

PARIHS: Promoting Action on Research Implementation in Health Services

QIF: Quality Implementation Framework

RE-AIM: Reach, Effectiveness, Adoption, Implementation, Maintenance

REP: Replicating Effective Programs

StaRI: Standards for Reporting Implementation Studies

TCU: Texas Christian University

TDF: Theoretical Domains Framework

Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the exploration, preparation, implementation, sustainment (EPIS) framework. Implement Sci. 2019;14:1.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.

Rycroft-Malone J, Bucknall T. Theory, frameworks, and models: laying down the groundwork. In: Rycroft-Malone J, Bucknall T, editors. Models and frameworks for implementing evidence-based practice: Linking evidence to action. Oxford: Wiley-Blackwell; 2010. p. 23–50.

Proctor EK, Powell BJ, Baumann AA, Hamilton AM, Santens RL. Writing implementation research grant proposals: ten key ingredients. Implement Sci. 2012;7:96.

Pedhazur EJ. Multiple regression in behavioral research: explanation and prediction. 2nd ed. Fort Worth, TX: Harcourt Brace; 1982.

Crable EL, Biancarelli D, Walkey AJ, Allen CG, Proctor EK, Drainoni M. Standardizing an approach to the evaluation of implementation science proposals. Implement Sci. 2018;13:71.

Moullin JC, Sabater-Hernández D, Fernandez-Llimos F, Benrimoj SI. A systematic review of implementation frameworks of innovations in healthcare and resulting generic implementation framework. Health Res Policy Syst. 2015;13:16.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the consolidated framework for implementation research. Implement Sci. 2016;11:72.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, Foy R, Duncan EM, Colquhoun H, Grimshaw JM. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:77.

Glasgow RE, Estabrooks PE. Pragmatic applications of RE-AIM for health care initiatives in community and clinical settings. Prev Chronic Dis. 2018;15.

Keith RE, Crosson JC, O’Malley AS, Cromp D, Taylor EF. Using the consolidated framework for implementation research (CFIR) to produce actionable findings: a rapid-cycle evaluation approach to improving implementation. Implement Sci. 2017;12:15.

Birken SA, Powell BJ, Presseau J, Kirk MA, Lorencatto F, Gould NJ, Shea CM, Weiner BJ, Francis JJ, Yu Y. Combined use of the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF): a systematic review. Implement Sci. 2017;12:2.

Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43:337–50.

Dissemination & Implementation Models in Health Research & Practice [ http://dissemination-implementation.org/content/aboutUs.aspx ].

Birken SA, Rohweder CL, Powell BJ, Shea CM, Scott J, Leeman J, Grewe ME, Kirk MA, Damschroder L, Aldridge WA. T-CaST: an implementation theory comparison and selection tool. Implement Sci. 2018;13:143.

Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Hlth. 2011;38:4–23.

Damschroder L, Aron D, Keith R, Kirsh S, Alexander J, Lowery J. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50–64.

Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: a consensus approach. BMJ Qual Saf. 2005;14:26–33.

Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:37.

Glasgow RE, Vogt T, Boles S. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

Proctor EK, Silmere H, Raghavan R, Hovmand P, Aarons GA, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Hlth. 2011;38:65–76.

Dickson KS, Aarons GA, Anthony LG, Kenworthy L, Crandal BR, Williams K, Brookman-Frazee L. Adaption and pilot implementation of an autism executive functioning intervention in children’s mental health services: a mixed-methods study protocol. Under review.

Brookman-Frazee L, Stahmer AC, Lewis K, Feder JD, Reed S. Building a research-community collaborative to improve community care for infants and toddlers at-risk for autism spectrum disorders. J Community Psychol. 2012;40:715–34.

Drahota A, Meza R, Brikho G, Naaf M, Estabillo J, Spurgeon E, Vejnoska S, Dufek E, Stahmer AC, Aarons GA. Community-academic partnerships: a systematic review of the state of the literature and recommendations for future research. Milbank Q. 2016;94:163–214.

Miller WL, Rubinstein EB, Howard J, Crabtree BF. Shifting implementation science theory to empower primary care practices. Ann Fam Med. 2019;17:250–6.

World Health Organization. Changing mindsets: strategy on health policy and systems research. Geneva, Switzerland: World Health Organization; 2012.

Ivers NM, Grimshaw JM. Reducing research waste with implementation laboratories. Lancet. 2016;388:547–8.

Green AE, Aarons GA. A comparison of policy and direct practice stakeholder perceptions of factors affecting evidence-based practice implementation using concept mapping. Implement Sci. 2011;6:104.

Brookman-Frazee L, Stahmer A, Stadnick N, Chlebowski C, Herschell A, Garland AF. Characterizing the use of research-community partnerships in studies of evidence-based interventions in children’s community services. Adm Policy Ment Hlth. 2016;43:93–104.

Trochim WM. An introduction to concept mapping for planning and evaluation. Eval Program Plann. 1989;12:1–16.

Rankin NM, McGregor D, Butow PN, White K, Phillips JL, Young JM, Pearson SA, York S, Shaw T. Adapting the nominal group technique for priority setting of evidence-practice gaps in implementation science. BMC Med Res Methodol. 2016;16:110.

Mintrom M, Luetjens J. Design thinking in policymaking processes: opportunities and challenges. Aust J Public Adm. 2016;75:391–402.

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.

Lau AS, Rodriguez A, Bando L, Innes-Gomberg D, Brookman-Frazee L. Research community collaboration in observational implementation research: complementary motivations and concerns in engaging in the study of implementation as usual. Adm Policy Ment Hlth. 2019:1–17.

Michie S, Atkins L, West R. The behaviour change wheel: a guide to designing interventions. Great Britain: Silverback Publishing; 2014.

Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt MS. Leadership and organizational change for implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implement Sci. 2015;10:11.

Birken SA, Powell BJ, Shea CM, Haines ER, Alexis Kirk M, Leeman J, Rohweder C, Damschroder L, Presseau J. Criteria for selecting implementation science theories and frameworks: results from an international survey. Implement Sci. 2017;12:124.

Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci. 2010;5:14.

Johnson AM, Moore JE, Chambers DA, Rup J, Dinyarian C, Straus SE. How do researchers conceptualize and plan for the sustainability of their NIH R01 implementation projects? Implement Sci. 2019;14:50.

Becan JE, Bartkowski JP, Knight DK, Wiley TR, DiClemente R, Ducharme L, Welsh WN, Bowser D, McCollister K, Hiller M. A model for rigorously applying the Exploration, Preparation, Implementation, Sustainment (EPIS) framework in the design and measurement of a large scale collaborative multi-site study. Health & Justice. 2018;6:9.

Lewis CC, Stanick C, Lyon A, Darnell D, Locke J, Puspitasari A, Marriott BR, Dorsey CN, Larson M, Jackson C, et al. Proceedings of the Fourth Biennial Conference of the Society for Implementation Research Collaboration (SIRC) 2017: implementation mechanisms: what makes implementation work and why? Part 1. Implement Sci. 2018;13:30.

National Institute of Mental Health. Strategic Plan for Research. 2015. Retrieved from http://www.nimh.nih.gov/about/strategic-planning-reports/index.shtml .

Lewis CC, Klasnja P, Powell B, Tuzzio L, Jones S, Walsh-Bailey C, Weiner B. From classification to causality: advancing understanding of mechanisms of change in implementation science. Frontiers in Public Health. 2018;6:136.

Lewis C, Boyd M, Beidas R, Lyon A, Chambers D, Aarons G, Mittman B. A research agenda for mechanistic dissemination and implementation research. In: Conference on the Science of Dissemination and Implementation, Bethesda, MD; 2015.

Geng E, Peiris D, Kruk ME. Implementation science: relevance in the real world without sacrificing rigor. PLOS Med. 2017;14:e1002288.

Brookman-Frazee L, Stahmer AC. Effectiveness of a multi-level implementation strategy for ASD interventions: study protocol for two linked cluster randomized trials. Implement Sci. 2018;13:66.

Landsverk J, Brown CH, Chamberlain P, Palinkas L, Ogihara M, Czaja S, Goldhaber-Fiebert JD, Rolls Reutz J, McCue Horwitz S. Design and analysis in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. New York, NY: Oxford University Press; 2012.

Consolidated Framework for Implementation Research (CFIR) [ http://www.cfirguide.org/ ].

Reach Effectiveness Adoption Implementation Maintenance (RE-AIM) [ http://www.re-aim.org/ ].

Brookman-Frazee L, Chlebowski C, Suhrheinrich J, Finn N, Dickson KS, Aarons GA, Stahmer A. Characterizing shared and unique implementation influences in two community services systems for autism: applying the EPIS framework to two large-scale autism intervention community effectiveness trials. Adm Policy Ment Hlth. 2020;47(2):176–87.

Suhrheinrich J, et al. Exploring inner-context factors associated with implementation outcomes in a randomized trial of classroom pivotal response teaching. Under Review.

Helfrich CD, Damschroder LJ, Hagedorn HJ, Daggett GS, Sahay A, Ritchie M, Damush T, Guihan M, Ullrich PM, Stetler CB. A critical synthesis of literature on the promoting action on research implementation in health services (PARIHS) framework. Implement Sci. 2010;5:82.

Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, Baker R, Eccles MP. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8:35.

Aarons GA, Ehrhart MG, Moullin JC, Torres EM, Green AE. Testing the leadership and organizational change for implementation (LOCI) intervention in substance abuse treatment: a cluster randomized trial study protocol. Implement Sci. 2017;12:29.

Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, Glass JE, York JL. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69:123–57.

Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, O’Brien MA, Johansen M, Grimshaw J, Oxman AD. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012.

Glisson C, Schoenwald S. The ARC organizational and community intervention strategy for implementing evidence-based children’s mental health treatments. Ment Health Serv Res. 2005;7:243–59.

Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, Stall R. Implementing evidence-based interventions in health care: application of the replicating effective programs framework. Implement Sci. 2007;2:42.

Chinman M, Imm P, Wandersman A. Getting to outcomes™ 2004: promoting accountability through methods and tools for planning, implementation, and evaluation. Santa Monica: Rand Corporation; 2004.

Meyers DC, Durlak JA, Wandersman A. The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol. 2012;50:462–80.

Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, Mandell DS. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44:177–94.

Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, McHugh SM, Weiner BJ. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7:3.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder L, Smith JL, Matthieu MM, Proctor E, Kirchner JE. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci. 2015;10:21.

Abraham C, Michie S. A taxonomy of behavior change techniques used in interventions. Health Psychol. 2008;27:379–87.

Effective Practice and Organisation of Care (EPOC) Taxonomy [ epoc.cochrane.org/epoc-taxonomy ].

Kitson A, Harvey G, McCormack B. Enabling the implementation of evidence based practice: a conceptual framework. BMJ Qual Saf. 1998;7:149–58.

Willging CE, Green AE, Ramos MM. Implementing school nursing strategies to reduce LGBTQ adolescent suicide: a randomized cluster trial study protocol. Implement Sci. 2016;11:145.

Aarons GA, Green AE, Palinkas LA, Self-Brown S, Whitaker DJ, Lutzker JR, Silovsky JF, Hecht DB, Chaffin MJ. Dynamic adaptation process to implement an evidence-based child maltreatment intervention. Implement Sci. 2012;7:32.

Green L, Kreuter M. Health program planning: an educational and ecological approach. Boston: McGraw Hill; 2005.

Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, Kimmel B, Sharp ND, Smith JL. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21:S1–8.

Moullin JC, Sabater-Hernandez D, Benrimoj SI. Model for the evaluation of implementation programs and professional pharmacy services. Res Social Adm Pharm. 2016;12:515–22.

Chamberlain P, Brown CH, Saldana L. Observational measure of implementation progress in community based settings: the stages of implementation completion (SIC). Implement Sci. 2011;6:116–23.

Lewis CC, Weiner BJ, Stanick C, Fischer SM. Advancing implementation science through measure development and evaluation: a study protocol. Implement Sci. 2015;10:102.

Rabin BA, Purcell P, Naveed S, MR P, Henton MD, Proctor EK, Brownson RC, Glasgow RE. Advancing the application, quality and harmonization of implementation science measures. Implement Sci. 2012;7:119.

Aarons GA, Ehrhart MG, Farahnak LR. The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci. 2014;9:157.

Ehrhart MG, Aarons GA, Farahnak LR. Assessing the organizational context for EBP implementation: the development and validity testing of the Implementation Climate Scale (ICS). Implement Sci. 2014;9:157.

Weiner BJ, Belden CM, Bergmire DM, Johnston M. The meaning and measurement of implementation climate. Implement Sci. 2011;6:78.

Moullin JC, Ehrhart MG, Aarons GA. Development and testing of the Measure of Innovation-Specific Implementation Intentions (MISII) using Rasch measurement theory. Implement Sci. 2018;13:89.

Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101:2059–67.

Mendel P, Meredith L, Schoenbaum M, Sherbourne C, Wells K. Interventions in organizational and community context: a framework for building evidence on dissemination and implementation in health services research. Adm Policy Ment Hlth. 2008;35:21–37.

Lehman WE, Simpson DD, Knight DK, Flynn PM. Integration of treatment innovation planning and implementation: strategic process models and organizational challenges. Psychol Addict Behav. 2011;25:252.

Knight DK, Belenko S, Wiley T, Robertson AA, Arrigona N, Dennis M, Wasserman GA. Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS): a cluster randomized trial targeting system-wide improvement in substance use services. Implement Sci. 2016;11:57.

Schein EH. Organizational culture. Am Psychol. 1990;45:109–19.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, Rycroft-Malone J, Meissner P, Murray E, Patel A, Sheikh A. Standards for reporting implementation studies (StaRI) statement. BMJ. 2017;356:i6795.

Stadnick NA, Brookman-Frazee L, Mandell DS, Kuelbs CL, Coleman KJ, Sahms T, Aarons GA. A mixed methods study to adapt and implement integrated mental healthcare for children with autism spectrum disorder. Pilot Feasibility Stud. 2019;5:51.

Acknowledgements

Dr. Aarons is core faculty, and Dr. Dickson, Dr. Stadnick, and Dr. Broder-Fingert are fellows, with the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis, through an award from the National Institute of Mental Health (5R25MH08091607).

Trial registration

Not applicable

Funding

This project was supported in part by the US National Institute of Mental Health R03MH117493 (Aarons and Moullin), K23MH115100 (Dickson), K23MH110602 (Stadnick), K23MH109673 (Broder-Fingert), and National Institute on Drug Abuse R01DA038466 (Aarons). The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of the NIMH, NIDA, or any other part of the US Department of Health and Human Services.

Author information

Authors and Affiliations

Faculty of Health Sciences, School of Pharmacy and Biomedical Sciences, Curtin University, Kent Street, Bentley, Western Australia, 6102, Australia

Joanna C. Moullin

Child and Adolescent Services Research Center, 3665 Kearny Villa Rd., Suite 200N, San Diego, CA, 92123, USA

Joanna C. Moullin, Kelsey S. Dickson, Nicole A. Stadnick & Gregory A. Aarons

San Diego State University, 5500 Campanile Drive, San Diego, CA, 92182, USA

Kelsey S. Dickson

Department of Psychiatry, University of California San Diego, 9500 Gilman Drive (0812), La Jolla, CA, 92093-0812, USA

Nicole A. Stadnick & Gregory A. Aarons

UC San Diego Dissemination and Implementation Science Center, 9452 Medical Center Dr, La Jolla, CA, 92037, USA

European Implementation Collaborative, Odense, Denmark

Bianca Albers

School of Health Sciences, University of Melbourne, 161 Barry St, Carlton, VIC, 3053, Australia

Department of Health, Medicine and Caring Sciences, Linköping University, 58183, Linköping, Sweden

School of Medicine, Department of Pediatrics, Boston Medical Center and Boston University, 801 Albany Street, Boston, MA, 02114, USA

Sarabeth Broder-Fingert

Mildmay Uganda, 24985 Lweza, Entebbe Road, Kampala, Uganda

Barbara Mukasa

Contributions

GAA, KSD, NS, and JCM conceptualized the debate and drafted the manuscript. BA, PN, SBF, and BM provided expert opinion and guidance on the manuscript. All authors edited and approved the final manuscript.

Corresponding author

Correspondence to Joanna C. Moullin.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was not required.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1:

Table S1. Implementation Framework Application Worksheet.

Additional file 2:

Table S2. Implementation Framework Utilization Tool.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Moullin, J.C., Dickson, K.S., Stadnick, N.A. et al. Ten recommendations for using implementation frameworks in research and practice. Implement Sci Commun 1, 42 (2020). https://doi.org/10.1186/s43058-020-00023-7

Received : 06 November 2019

Accepted : 26 February 2020

Published : 30 April 2020

DOI : https://doi.org/10.1186/s43058-020-00023-7

Keywords: Implementation

New Healthcare AI Framework Incorporates Medical Knowledge, Values

A newly published artificial intelligence framework advocates for a “sociotechnical” approach to advance the technology’s integration into healthcare.

Shania Kennedy, Assistant Editor

A novel normative framework for healthcare artificial intelligence (AI), described in a recent issue of Patterns, asserts that medical knowledge, procedures, practices, and values should be considered when integrating the technology into clinical settings.

The approach—developed by researchers from Carnegie Mellon University, The Hospital for Sick Children, the Dalla Lana School of Public Health, Columbia University, and the University of Toronto—is designed to help stakeholders holistically evaluate AI in healthcare.

“Regulatory guidelines and institutional approaches have focused narrowly on the performance of AI tools, neglecting knowledge, practices, and procedures necessary to integrate the model within the larger social systems of medical practice,” explained co-author Alex John London, PhD, the K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon, in a press release. “Tools are not neutral—they reflect our values—so how they work reflects the people, processes, and environments in which they are put to work.”

The framework advocates for healthcare AI to be viewed as part of a larger “intervention ensemble,” or a set of practices, procedures, and knowledge that enable care delivery. This conceptual shift characterizes AI models as “sociotechnical systems,” a term that describes how the tool’s computational functioning reflects the values and processes of the people and environment surrounding it.

By viewing healthcare AI in this way, the researchers hope that the framework can help advance responsible implementation of these tools.

The authors noted that previous studies and frameworks exploring ethical AI integration in healthcare have been largely descriptive, focusing on how human systems and AI systems interact.

Conversely, their framework was developed to take a more proactive approach by guiding stakeholders on how to integrate AI tools into workflows with the highest potential to benefit patients.

The researchers indicated that their framework can be utilized to drive institutional insights and to guide regulation, in addition to appraising and evaluating already-deployed health AI tools to ensure that they are being used ethically and responsibly.

To demonstrate how their approach can be used, the authors applied it to a case study of the IDx-DR system, a well-known AI tool designed to screen for and detect mild diabetic retinopathy. For this illustration, the researchers defined the intervention ensemble for the system to help connect the intended benefits and goals of the tool to the evidence base for the empirical claims surrounding it.

“Only a small majority of models evaluated through clinical trials have shown a net benefit,” said co-author Melissa McCradden, PhD, a bioethicist at the Hospital for Sick Children and assistant professor of Clinical and Public Health at the Dalla Lana School of Public Health. “We hope our proposed framework lends precision to evaluation and interests regulatory bodies exploring the kinds of evidence needed to support the oversight of AI systems.”

As interest in AI grows across the healthcare sector, researchers and other stakeholders are increasingly concerned with how these tools can be developed and deployed responsibly.

This week, the American Medical Association (AMA) published seven principles to guide the development, deployment, and use of healthcare augmented intelligence, also called artificial intelligence.

The guidance builds on existing AI policies and seeks to support the establishment of a national governance structure for health AI.

The principles also act as a cornerstone for the AMA’s advocacy strategy around these technologies, an approach that has thus far prioritized the implementation of national policies to ensure health AI is ethical, equitable, responsible, and transparent.


Introducing the Cognitive Foundations Framework

Learn about the specific cognitive abilities students need in order to succeed with language comprehension and word recognition, and ultimately arrive at the goal of reading comprehension.

Related Topics

Vocabulary Assessment and Intervention

See how to assess vocabulary learning and how to intensify supports for students with vocabulary learning difficulties by connecting reading between school and home.

Vocabulary Instruction

Learn about some of the many different ways you can help students improve their vocabulary!

The Importance of Vocabulary

Find out why vocabulary is known as the “keystone of reading comprehension,” and how students need to know so much more than just a word’s meaning.

Common Misconceptions About Multisyllabic Word Reading

Get essential clarity on some of the misconceptions about multisyllabic word reading, including why syllable rules are not always the gold standard.

Data-Based Decision Making for Multisyllabic Word Reading

Discover what you can learn from student oral reading fluency data and see the five steps for data-based decision making.

Strategies for Teaching Multisyllabic Word Reading

Get useful tips for helping struggling readers and learn six big instructional ideas to help your students with multisyllabic word reading.

Building Fluent Readers

Learn about reading fluency, how to build fluent readers in the K–1 and 2–8 grade ranges, and why reading faster does not equal reading fluency!



Considerations in Demonstrating Interchangeability With a Reference Product: Update; Draft Guidance for Industry; Availability

A Notice by the Food and Drug Administration on 06/21/2024


AGENCY: Food and Drug Administration, HHS.

ACTION: Notice of availability.

SUMMARY: The Food and Drug Administration (FDA, Agency, or we) is announcing the availability of a draft guidance for industry entitled “Considerations in Demonstrating Interchangeability With a Reference Product: Update.” This draft guidance describes considerations regarding a switching study or studies intended to support a demonstration that a proposed therapeutic protein product is interchangeable with a reference product for the purposes of submitting a marketing application or supplement under the Public Health Service Act (PHS Act). After considering any comments received in the docket for this draft guidance, we intend to revise the final guidance for industry entitled “Considerations in Demonstrating Interchangeability With a Reference Product” issued on May 14, 2019, to amend sections in that document regarding the subject addressed in this draft guidance.

DATES: Submit either electronic or written comments on the draft guidance by August 20, 2024 to ensure that the Agency considers your comment on this draft guidance before it begins work on the final version of the guidance.

ADDRESSES: You may submit comments on any guidance at any time as follows:

Submit electronic comments in the following way:

  • Federal eRulemaking Portal: https://www.regulations.gov. Follow the instructions for submitting comments. Comments submitted electronically, including attachments, to https://www.regulations.gov will be posted to the docket unchanged. Because your comment will be made public, you are solely responsible for ensuring that your comment does not include any confidential information that you or a third party may not wish to be posted, such as medical information, your or anyone else's Social Security number, or confidential business information, such as a manufacturing process. Please note that if you include your name, contact information, or other information that identifies you in the body of your comments, that information will be posted on https://www.regulations.gov.
  • If you want to submit a comment with confidential information that you do not wish to be made available to the public, submit the comment as a written/paper submission and in the manner detailed (see “Written/Paper Submissions” and “Instructions”).

Submit written/paper submissions as follows:

  • Mail/Hand Delivery/Courier (for written/paper submissions): Dockets Management Staff (HFA-305), Food and Drug Administration, 5630 Fishers Lane, Rm. 1061, Rockville, MD 20852.
  • For written/paper comments submitted to the Dockets Management Staff, FDA will post your comment, as well as any attachments, except for information submitted, marked and identified, as confidential, if submitted as detailed in “Instructions.”

Instructions: All submissions received must include the Docket No. FDA-2017-D-0154 for “Considerations in Demonstrating Interchangeability With a Reference Product: Update.” Received comments will be placed in the docket and, except for those submitted as “Confidential Submissions,” publicly viewable at https://www.regulations.gov or at the Dockets Management Staff between 9 a.m. and 4 p.m., Monday through Friday, 240-402-7500.

  • Confidential Submissions—To submit a comment with confidential information that you do not wish to be made publicly available, submit your comments only as a written/paper submission. You should submit two copies total. One copy will include the information you claim to be confidential with a heading or cover note that states “THIS DOCUMENT CONTAINS CONFIDENTIAL INFORMATION.” The Agency will review this copy, including the claimed confidential information, in its consideration of comments. The second copy, which will have the claimed confidential information redacted/blacked out, will be available for public viewing and posted on https://www.regulations.gov . Submit both copies to the Dockets Management Staff. If you do not wish your name and contact information to be made publicly available, you can provide this information on the cover sheet and not in the body of your comments and you must identify this information as “confidential.” Any information marked as “confidential” will not be disclosed except in accordance with 21 CFR 10.20 and other applicable disclosure law. For more information about FDA's posting of comments to public dockets, see 80 FR 56469 , September 18, 2015, or access the information at: https://www.govinfo.gov/​content/​pkg/​FR-2015-09-18/​pdf/​2015-23389.pdf .

Docket: For access to the docket to read background documents or the electronic and written/paper comments received, go to https://www.regulations.gov and insert the docket number, found in brackets in the heading of this document, into the “Search” box and follow the prompts and/or go to the Dockets Management Staff, 5630 Fishers Lane, Rm. 1061, Rockville, MD 20852, 240-402-7500.

You may submit comments on any guidance at any time (see 21 CFR 10.115(g)(5) ).

Submit written requests for single copies of the draft guidance to the Division of Drug Information, Center for Drug Evaluation and Research, Food and Drug Administration, 10001 New Hampshire Ave., Hillandale Building, 4th Floor, Silver Spring, MD 20993-0002; or the Office of Communication, Outreach, and Development, Center for Biologics Evaluation and Research, Food and Drug Administration, 10903 New Hampshire Ave., Bldg. 71, Rm. 3128, Silver Spring, MD 20993-0002. Send one self-addressed adhesive label to assist that office in processing your requests. See the SUPPLEMENTARY INFORMATION section for electronic access to the draft guidance document.

FOR FURTHER INFORMATION CONTACT: Mustafa Unlu, Center for Drug Evaluation and Research, Food and Drug Administration, 10903 New Hampshire Ave., Bldg. 22, Rm. 1139, Silver Spring, MD 20993, 301-796-3396; or James Myers, Center for Biologics Evaluation and Research, Food and Drug Administration, 10903 New Hampshire Ave., Bldg. 71, Rm. 7301, Silver Spring, MD 20993-0002, 240-402-7911.

SUPPLEMENTARY INFORMATION:

I. Background

FDA is announcing the availability of a draft guidance for industry entitled “Considerations in Demonstrating Interchangeability With a Reference Product: Update.” This draft guidance describes considerations regarding a switching study or studies intended to support a demonstration that a proposed therapeutic protein product is interchangeable with a reference product for the purposes of submitting a marketing application or supplement under section 351(k) of the PHS Act (42 U.S.C. 262(k)). After considering any comments received in the docket for this draft guidance, we intend to revise the final guidance for industry entitled “Considerations in Demonstrating Interchangeability With a Reference Product” (Interchangeability Guidance) issued on May 14, 2019 (84 FR 21342) to amend sections in that document regarding the subject addressed in this draft guidance.

FDA issued the Interchangeability Guidance before receiving and reviewing any biologics license applications submitted under section 351(k) of the PHS Act for a proposed interchangeable biosimilar product. Since publication of the Interchangeability Guidance, experience has shown that for the products approved as biosimilars to date, the risk in terms of safety or diminished efficacy is insignificant following single or multiple switches between a reference product and a biosimilar product. Accordingly, FDA's scientific approach to when a switching study or studies may be needed to support a demonstration of interchangeability has evolved.

This draft guidance is not intended to be finalized as a standalone guidance. Instead, the recommendations in this draft guidance, when finalized, are intended to revise the Interchangeability Guidance and to replace sections in that document, such as sections VI.A and VII, to reflect FDA's current thinking regarding the subject addressed in this guidance. FDA is issuing this draft guidance to seek public comment through the accompanying docket.

This draft guidance is being issued consistent with FDA's good guidance practices regulation ( 21 CFR 10.115 ). It does not establish any rights for any person and is not binding on FDA or the public. You can use an alternative approach if it satisfies the requirements of the applicable statutes and regulations.

II. Paperwork Reduction Act of 1995

While this guidance contains no collection of information, it does refer to previously approved FDA collections of information. The previously approved collections of information are subject to review by the Office of Management and Budget (OMB) under the Paperwork Reduction Act of 1995 (PRA) (44 U.S.C. 3501-3521). The collections of information for the submission of a biologics license application or supplemental application under section 351(k) of the PHS Act have been approved under OMB control number 0910-0718. The collections of information in 21 CFR part 312 for the submissions of investigational new drug applications have been approved under OMB control number 0910-0014. The collections of information in 21 CFR part 314 for the submissions of new drug applications have been approved under OMB control number 0910-0001. The collections of information in 21 CFR part 601 for the submissions of biologics license application and supplemental applications have been approved under OMB control number 0910-0338.

III. Electronic Access

Persons with access to the internet may obtain the draft guidance at https://www.fda.gov/drugs/guidance-compliance-regulatory-information/guidances-drugs, https://www.fda.gov/vaccines-blood-biologics/guidance-compliance-regulatory-information-biologics/biologics-guidances, https://www.fda.gov/regulatory-information/search-fda-guidance-documents, or https://www.regulations.gov.

Dated: June 13, 2024.

Lauren K. Roth,

Associate Commissioner for Policy.

[ FR Doc. 2024-13429 Filed 6-20-24; 8:45 am]

BILLING CODE 4164-01-P


Google Introduces Project Naptime for AI-Powered Vulnerability Research

Google has developed a new framework called Project Naptime that it says enables a large language model (LLM) to carry out vulnerability research with an aim to improve automated discovery approaches.

"The Naptime architecture is centered around the interaction between an AI agent and a target codebase," Google Project Zero researchers Sergei Glazunov and Mark Brand said . "The agent is provided with a set of specialized tools designed to mimic the workflow of a human security researcher."

The initiative is so named for the fact that it allows humans to "take regular naps" while it assists with vulnerability research and automating variant analysis.

The approach, at its core, seeks to take advantage of advances in code comprehension and general reasoning ability of LLMs, thus allowing them to replicate human behavior when it comes to identifying and demonstrating security vulnerabilities.

It encompasses several components such as a Code Browser tool that enables the AI agent to navigate through the target codebase, a Python tool to run Python scripts in a sandboxed environment for fuzzing, a Debugger tool to observe program behavior with different inputs, and a Reporter tool to monitor the progress of a task.
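The articles do not include any source code, but the division of labour described above can be pictured as a small set of tool wrappers handed to an LLM agent. The Python sketch below is purely illustrative: the class names, method signatures, and behaviours are assumptions made for this sketch, not Google's actual Project Naptime implementation.

```python
# Purely illustrative sketch of the four tools described above, exposed to an LLM
# agent as callable wrappers. Names, signatures, and behaviour are assumptions for
# this sketch, not Google's actual Project Naptime code.
import subprocess
import tempfile
from dataclasses import dataclass

@dataclass
class ToolResult:
    output: str            # text handed back to the agent
    success: bool = True

class CodeBrowser:
    """Lets the agent read slices of the target codebase, like a human using code search."""
    def __init__(self, source_root: str):
        self.source_root = source_root

    def show(self, path: str, start: int, end: int) -> ToolResult:
        with open(f"{self.source_root}/{path}") as f:
            lines = f.readlines()[start - 1:end]
        return ToolResult("".join(lines))

class PythonSandbox:
    """Runs agent-written Python (e.g. to craft fuzzing inputs) in a separate process.
    A real sandbox would isolate the process far more strictly than this sketch does."""
    def run(self, script: str, timeout_s: int = 10) -> ToolResult:
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(script)
        proc = subprocess.run(["python3", f.name], capture_output=True,
                              text=True, timeout=timeout_s)
        return ToolResult(proc.stdout + proc.stderr, proc.returncode == 0)

class Debugger:
    """Feeds an input to the target binary and reports how the program behaved."""
    def __init__(self, target_binary: str):
        self.target_binary = target_binary

    def run_with_input(self, data: bytes) -> ToolResult:
        proc = subprocess.run([self.target_binary], input=data, capture_output=True)
        crashed = proc.returncode < 0   # killed by a signal, e.g. SIGSEGV
        return ToolResult(f"exit={proc.returncode}, stderr={proc.stderr[:200]!r}", crashed)

class Reporter:
    """Lets the agent signal progress or a candidate success so a controller can verify it."""
    def report_crash(self, reproducer: bytes) -> None:
        print(f"candidate crash input reported ({len(reproducer)} bytes)")
```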

Google said Naptime is also model-agnostic and backend-agnostic, and better at flagging buffer overflow and advanced memory corruption flaws, according to CyberSecEval 2 benchmarks. CyberSecEval 2, released earlier this April by researchers from Meta, is an evaluation suite to quantify LLM security risks.

In tests carried out by the search giant to reproduce and exploit the flaws, the two vulnerability categories achieved new top scores of 1.00 and 0.76, up from 0.05 and 0.24, respectively, for OpenAI's GPT-4 Turbo.

"Naptime enables an LLM to perform vulnerability research that closely mimics the iterative, hypothesis-driven approach of human security experts," the researchers said. "This architecture not only enhances the agent's ability to identify and analyze vulnerabilities but also ensures that the results are accurate and reproducible."

Google framework helps LLMs perform basic vulnerability research

Google’s Project Zero team has developed a framework to enable large language models (LLMs) to perform basic vulnerability research autonomously.

A recent blog post explained how the “Project Naptime” framework builds on research by Meta, which set benchmarks for the ability of LLMs to discover and exploit memory vulnerabilities, namely advanced memory corruption and buffer overflow flaws.

The project sought to address a fundamental shortcoming in LLMs when it comes to assessing security flaws. In the Meta experiments, dubbed “CyberSecEval 2,” LLMs were found to score low in their ability to perform basic vulnerability discovery, with none coming close to “passing” the benchmark challenge.

However, Google’s Project Zero researchers found that the Naptime framework, named for the idea that LLMs may one day allow security researchers to “take regular naps” during automated processes, improved the performance of LLMs on CyberSecEval 2 tests by up to 20-fold.

Project Naptime gives LLMs access to tools to mimic human workflows

The Naptime architecture designed by Project Zero includes a toolset consisting of a debugger, code browser, Python tool and reporter tool that enhance LLMs’ abilities to evaluate code, exploit vulnerabilities and verify successful exploitation autonomously.

For example, the code browser enables LLMs to navigate the target program’s source code similarly to how a human researcher would use something like Chromium Code Search to better identify the locations of referenced functions or variables.

The Python tool enables the LLMs to run Python scripts within a sandbox in order to both perform precise calculations and generate complex inputs to test and exploit the target program.

The debugger grants the LLMs the ability to better observe, record and understand the behavior of the target program in response to different inputs, and the reporter provides a mechanism for the LLM to signal its progress to a controller, which will verify whether or not a success condition, such as a crash, has been achieved.
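Neither article includes the controller's code, but the verification step described above is straightforward to picture: the agent reports a candidate input, and the controller independently re-runs the target to confirm the success condition. The sketch below is a guess at that shape, with a hypothetical agent.step() API rather than Project Zero's real one.

```python
# Hypothetical controller-side check (not Project Zero's code): the agent's reporter
# hands back a candidate input, and the controller independently re-runs the target
# to confirm that the success condition (here, a crash) really holds.
import subprocess
from typing import Optional

def verify_success(target_binary: str, candidate_input: bytes) -> bool:
    """Return True if the candidate input actually crashes the target."""
    proc = subprocess.run([target_binary], input=candidate_input, capture_output=True)
    return proc.returncode < 0   # negative return code: process died on a signal

def controller_loop(agent, target_binary: str, max_steps: int = 50) -> Optional[bytes]:
    """Drive the (hypothetical) agent until it produces a verified reproducer or gives up."""
    for _ in range(max_steps):
        candidate = agent.step()             # assumed agent API returning bytes or None
        if candidate is not None and verify_success(target_binary, candidate):
            return candidate                 # verified crash reproducer
    return None
```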

The Naptime framework also aims to let LLMs work more like a human researcher by giving them more flexibility to use "reasoning" processes. For example, the framework encourages the LLMs to produce long explanations for their decisions, which has been shown to increase accuracy.

GPT 4 Turbo, Gemini 1.5 Pro excel in basic vulnerability research

The Naptime test results published by Project Zero reveal that GPT 4 Turbo performed best in the CyberSecEval 2 buffer overflow test, which required exploiting a buffer overflow vulnerability to trigger a program output outside of the program’s “normal” execution, while Gemini 1.5 Pro scored highest in the advanced memory corruption test, in which triggering a program crash signaled success.

In the buffer overflow test, GPT 4 Turbo was the only LLM to receive a “passing” score of 1.00, with Gemini 1.5 Pro coming in at a close second with a score of 0.99 over 20 test completions.

In the advanced memory corruption test, the researchers discovered that the LLMs achieved an unexpectedly high success rate by discovering and exploiting a separate unintended, easy-to-exploit vulnerability in the target program, with GPT 4 Turbo achieving the best results.

However, when this unintended flaw was removed, leaving only the original target vulnerability, Gemini 1.5 Pro came out on top with a score of 0.58 after 20 test completions.

The other models tested were GPT 3.5 Turbo and Gemini 1.5 Flash, which scored a maximum of 0.21 and 0.26 in the buffer overflow test and a maximum of 0.56 and 0.53 in the advanced memory corruption test, respectively.

“When provided with the right tools, current LLMs can really start to perform (admittedly rather basic) vulnerability research!” the researchers wrote.

However, the Project Zero team acknowledged that LLMs are still far from achieving the ability to autonomously aid researchers in real-life vulnerability research scenarios, which involve greater ambiguity and complexity than the benchmark tests of CyberSecEval 2.

“Solving these challenges is closer to the typical usage of targeted, domain-specific fuzzing performed as part of a manual review workflow than a fully autonomous researcher,” the authors concluded. “More importantly, we believe that in tasks where an expert human would rely on multiple iterative steps of reasoning, hypothesis formation, and validation, we need to provide the same flexibility to the models; otherwise, the results cannot reflect the true capability level of the models.”

Laura French



Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme

Health Technology Assessment, No. 20.76

James Raftery, Steve Hanney, Trish Greenhalgh, Matthew Glover, and Amanda Blatch-Jones.


The Payback Framework remains the most widely used approach to assessing the value of research investment, although the field has expanded considerably. Monitoring of impact in the changing context of health services, and ongoing review of alternative methods of achieving this, should be high priorities for research funders.

Background:

This report reviews approaches and tools for measuring the impact of research programmes, building on, and extending, a 2007 review.

Objectives:

(1) To identify the range of theoretical models and empirical approaches for measuring the impact of health research programmes; (2) to develop a taxonomy of models and approaches; (3) to summarise the evidence on the application and use of these models; and (4) to evaluate the different options for the Health Technology Assessment (HTA) programme.

Data sources:

We searched databases including Ovid MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature and The Cochrane Library from January 2005 to August 2014.

Review methods:

This narrative systematic literature review comprised an update, extension and analysis/discussion. We systematically searched eight databases, supplemented by personal knowledge, in August 2014 through to March 2015.

Results:

The literature on impact assessment has much expanded. The Payback Framework, with adaptations, remains the most widely used approach. It draws on different philosophical traditions, enhancing an underlying logic model with an interpretative case study element and attention to context. Besides the logic model, other ideal type approaches included constructionist, realist, critical and performative. Most models in practice drew pragmatically on elements of several ideal types. Monetisation of impact, an increasingly popular approach, shows a high return from research but relies heavily on assumptions about the extent to which health gains depend on research. Despite usually requiring systematic reviews before funding trials, the HTA programme does not routinely examine the impact of those trials on subsequent systematic reviews. The York/Patient-Centered Outcomes Research Institute and the Grading of Recommendations Assessment, Development and Evaluation toolkits provide ways of assessing such impact, but need to be evaluated. The literature, as reviewed here, provides very few instances of a randomised trial playing a major role in stopping the use of a new technology. The few trials funded by the HTA programme that may have played such a role were outliers.
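To see why monetised estimates hinge on attribution assumptions, consider a deliberately simplified calculation (the figures and the formula below are invented for illustration and are not drawn from the review): the monetised return is roughly the value of the health gain, multiplied by the fraction of that gain attributed to the research, net of the research spend.

```python
# Deliberately simplified, illustrative sketch of a monetised return-on-research
# calculation. All numbers are invented; the point is that the answer is driven
# almost entirely by the assumed attribution fraction (the share of the health
# gain credited to the research being assessed).

def return_on_research(health_gain_qalys: float,
                       value_per_qaly: float,
                       attribution_fraction: float,
                       research_spend: float) -> float:
    """Net monetised benefit per unit of research spend."""
    monetised_gain = health_gain_qalys * value_per_qaly * attribution_fraction
    return (monetised_gain - research_spend) / research_spend

spend = 10_000_000   # hypothetical programme cost
gain = 5_000         # hypothetical QALYs gained downstream of the research
value = 25_000       # illustrative monetary value per QALY

for attribution in (0.05, 0.10, 0.25):
    roi = return_on_research(gain, value, attribution, spend)
    print(f"attribution {attribution:.0%}: net return {roi:.2f}x")
# Roughly: 5% -> -0.38x, 10% -> 0.25x, 25% -> 2.12x
```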

Discussion:

The findings of this review support the continued use of the Payback Framework by the HTA programme. Changes in the structure of the NHS, the development of NHS England and changes in the National Institute for Health and Care Excellence’s remit pose new challenges for identifying and meeting current and future research needs. Future assessments of the impact of the HTA programme will have to take account of wider changes, especially as the Research Excellence Framework (REF), which assesses the quality of universities’ research, seems likely to continue to rely on case studies to measure impact. The HTA programme should consider how the format and selection of case studies might be improved to aid more systematic assessment. The selection of case studies, such as in the REF, but also more generally, tends to be biased towards high-impact rather than low-impact stories. Experience from other industries indicates that much can be learnt from the latter. The adoption of researchfish® (researchfish Ltd, Cambridge, UK) by most major UK research funders has implications for future assessments of impact. Although the routine capture of indexed research publications has merit, the degree to which researchfish will succeed in collecting other, non-indexed outputs and activities remains to be established.

Limitations:

There were limitations in how far we could address challenges that faced us as we extended the focus beyond that of the 2007 review, and well beyond a narrow focus just on the HTA programme.

Conclusions:

Research funders can benefit from continuing to monitor and evaluate the impacts of the studies they fund. They should also review the contribution of case studies and expand work on linking trials to meta-analyses and to guidelines.

Funding:

The National Institute for Health Research HTA programme.

  • Plain English summary
  • Scientific summary
  • Evidence explaining why this research is needed now
  • Structure of the report
  • Review methods
  • Search strategies
  • Update to the previous review methods
  • Extension of the literature methods
  • Review findings
  • Summary of the literature identified
  • Conceptual frameworks developed and/or used
  • Comparing frameworks
  • Methods used in empirical impact assessment studies
  • Summary findings from multiproject programmes
  • Different philosophical roots: five ‘ideal types’
  • ‘Logic models’ of research impact: strengths and limitations
  • Alternatives to the logic model approach
  • Constructivist models of impact (developed in social sciences)
  • Realist models: impact as ‘theory of change’
  • Participatory and critical emancipatory models of impact
  • Co-production models (e.g. multistakeholder research partnerships)
  • Introduction
  • Literature searches
  • First in class
  • Updated systematic review
  • Taxonomy of approaches
  • Monetary value on the impact of health research
  • Assessing the impact of trials on systematic reviews
  • Impact of randomised trials on stopping the use of particular health technologies
  • Limitations of the Health Technology Assessment review
  • Options for the National Institute for Health Research/Health Technology Assessment to take research impact assessment forward
  • Options for the National Institute for Health Research/Health Technology Assessment for health research impact and research recommendations
  • Chapter 9. Conclusions
  • Acknowledgements
  • Appendix 1. Literature search strategies
  • Appendix 2. Data extraction sheet
  • Appendix 3. The included studies in the updated review
  • Appendix 4. List of interesting studies
  • Appendix 5. Frameworks included in previous analyses by RAND Europe
  • Appendix 6. Summary of methods for estimating the monetary value of the impact of health research
  • Appendix 7. Studies of impact assessment in the 2014 Research Excellence Framework
  • List of abbreviations

Article history

The research reported in this issue of the journal was funded by the HTA programme as project number 14/72/01. The contractual start date was in June 2014. The draft report began editorial review in May 2015 and was accepted for publication in December 2015. The authors have been wholly responsible for all data collection, analysis and interpretation, and for writing up their work. The HTA editors and publisher have tried to ensure the accuracy of the authors’ report and would like to thank the reviewers for their constructive comments on the draft document. However, they do not accept liability for damages or losses arising from material published in this report.

Declared competing interests of authors

James Raftery is a member of the National Institute for Health Research (NIHR) Health Technology Assessment Editorial Board and the NIHR Journals Library Editorial Group. He was previously Director of the Wessex Institute and Head of the NIHR Evaluation, Trials and Studies Co-ordinating Centre (NETSCC). Amanda Blatch-Jones is a senior researcher at NETSCC.

Last reviewed: May 2015; Accepted: December 2015.

Included under terms of UK Non-commercial Government License.

Cite this page: Raftery J, Hanney S, Greenhalgh T, et al. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Southampton (UK): NIHR Journals Library; 2016 Oct. (Health Technology Assessment, No. 20.76.) doi: 10.3310/hta20760

