The Consolidated Framework for Implementation Research

You have come to the right place if you are looking for more information about the Consolidated Framework for Implementation Research (CFIR). This site was created for individuals who are considering using the CFIR to evaluate an implementation or to design an implementation study.


The CFIR was originally published in 2009 and was updated in 2022 based on user feedback. New users will find it helpful to read the 2009 article first (specifically the Background, Methods, and Overview of the CFIR sections) and then the 2022 Updated CFIR article.

This site is under construction. We are updating its content to reflect the updated CFIR; please be patient while this work is in progress.


The CFIR provides a menu of constructs, arranged across five domains, that can be used in a range of applications. It is a practical framework that helps guide systematic assessment of potential barriers and facilitators. Knowing this information can help guide the tailoring of implementation strategies and needed adaptations, and/or help explain outcomes.
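To make this concrete, a barrier/facilitator assessment organized by CFIR domain can be represented as simple structured data. The Python sketch below is a hypothetical illustration, assuming the five domains of the 2022 updated CFIR and a small, non-exhaustive set of constructs; the ratings and notes are invented example data, not an official CFIR coding scheme.

```python
# Hypothetical sketch: organizing a CFIR-style barrier/facilitator assessment.
# Domain names follow the 2022 updated CFIR; the constructs shown are a small
# illustrative subset, and all ratings/notes are invented.
from dataclasses import dataclass

@dataclass
class ConstructRating:
    domain: str     # one of the five CFIR domains
    construct: str  # construct within that domain
    valence: str    # "barrier", "facilitator", or "neutral"
    note: str       # brief evidence note from interviews or observation

assessment = [
    ConstructRating("Inner Setting", "Available Resources", "barrier",
                    "No protected staff time for delivery"),
    ConstructRating("Outer Setting", "Policies & Laws", "facilitator",
                    "Reimbursement policy rewards the new workflow"),
    ConstructRating("Implementation Process", "Engaging", "facilitator",
                    "Clinical champion recruited early"),
]

# Group findings by domain to guide tailoring of implementation strategies.
by_domain: dict[str, list[ConstructRating]] = {}
for rating in assessment:
    by_domain.setdefault(rating.domain, []).append(rating)

for domain, ratings in by_domain.items():
    print(domain, [f"{r.construct} ({r.valence})" for r in ratings])
```

Grouping by domain also makes it easy to see which domains have no data yet.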

The Updated CFIR builds on the 2009 version, which compiled constructs from 19 frameworks and related theories, including Everett Rogers’ Diffusion of Innovations Theory and Greenhalgh and colleagues’ compilation based on their review of 500 published sources across 13 scientific disciplines. The CFIR considered the spectrum of construct terminology and definitions and brought them together into one organizing framework.

The 2022 Updated CFIR draws on more recent literature and feedback from users. As part of the update process, a CFIR Outcomes Addendum was published to establish conceptual distinctions between implementation and innovation outcomes and their potential determinants.

The CFIR was developed by implementation researchers affiliated with the Veterans Affairs (VA) Quality Enhancement Research Initiative (QUERI).

By providing a framework of constructs, the CFIR promotes consistent use of constructs, systematic analysis, and organization of findings from implementation studies. Users must, however, critique the framework and publish recommendations for improving it. This reciprocity is at the heart of building valid and useful theory. See Kislov et al.’s call for researchers to engage in theoretically informative implementation research.

  • “…while the CFIR’s utility as a framework to guide empirical research is not fully established, it is consistent with the vast majority of frameworks and conceptual models in dissemination and implementation research in its emphasis of multilevel ecological factors… Examining research (and real-world implementation efforts) through the lens of the CFIR gives us some indication of how comprehensively strategies address important aspects of implementation.”

The CFIR has most often been used within healthcare settings but has also been applied across a diverse array of settings, including low-income contexts and farming.

As of November 2023, the 2009 article had been cited over 10,000 times in Google Scholar and over 4,600 times in PubMed.


A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance

Kathryn Skivington, research fellow 1; Lynsay Matthews, research fellow 1; Sharon Anne Simpson, professor of behavioural sciences and health 1; Peter Craig, professor of public health evaluation 1; Janis Baird, professor of public health and epidemiology 2; Jane M Blazeby, professor of surgery 3; Kathleen Anne Boyd, reader in health economics 4; Neil Craig, acting head of evaluation within Public Health Scotland 5; David P French, professor of health psychology 6; Emma McIntosh, professor of health economics 4; Mark Petticrew, professor of public health evaluation 7; Jo Rycroft-Malone, faculty dean 8; Martin White, professor of population health research 9; Laurence Moore, unit director 1

1 MRC/CSO Social and Public Health Sciences Unit, Institute of Health and Wellbeing, University of Glasgow, Glasgow, UK
2 Medical Research Council Lifecourse Epidemiology Unit, University of Southampton, Southampton, UK
3 Medical Research Council ConDuCT-II Hub for Trials Methodology Research and Bristol Biomedical Research Centre, Bristol, UK
4 Health Economics and Health Technology Assessment Unit, Institute of Health and Wellbeing, University of Glasgow, Glasgow, UK
5 Public Health Scotland, Glasgow, UK
6 Manchester Centre for Health Psychology, University of Manchester, Manchester, UK
7 London School of Hygiene and Tropical Medicine, London, UK
8 Faculty of Health and Medicine, Lancaster University, Lancaster, UK
9 Medical Research Council Epidemiology Unit, University of Cambridge, Cambridge, UK

Correspondence to: K Skivington, Kathryn.skivington@glasgow.ac.uk

Accepted: 9 August 2021

The UK Medical Research Council’s widely used guidance for developing and evaluating complex interventions has been replaced by a new framework, commissioned jointly by the Medical Research Council and the National Institute for Health Research, which takes account of recent developments in theory and methods and the need to maximise the efficiency, use, and impact of research.

Complex interventions are commonly used in health and social care services, public health practice, and other areas of social and economic policy that have consequences for health. Such interventions are delivered and evaluated at different levels, from the individual to the societal. Examples include a new surgical procedure, the redesign of a healthcare programme, and a change in welfare policy. The UK Medical Research Council (MRC) published a framework for researchers and research funders on developing and evaluating complex interventions in 2000 and revised guidance in 2006. 1 2 3 Although these documents continue to be widely used and are now accompanied by a range of more detailed guidance on specific aspects of the research process, 4 5 6 7 8 several important conceptual, methodological, and theoretical developments have taken place since 2006. These developments have been incorporated into a new framework commissioned by the National Institute for Health Research (NIHR) and the MRC. 9 The framework aims to help researchers work with other stakeholders to identify the key questions about complex interventions, and to design and conduct research with a diversity of perspectives and an appropriate choice of methods.

Summary points

Complex intervention research can take an efficacy, effectiveness, theory based, and/or systems perspective, the choice of which is based on what is known already and what further evidence would add most to knowledge

Complex intervention research goes beyond asking whether an intervention works in the sense of achieving its intended outcome—to asking a broader range of questions (eg, identifying what other impact it has, assessing its value relative to the resources required to deliver it, theorising how it works, taking account of how it interacts with the context in which it is implemented, how it contributes to system change, and how the evidence can be used to support real world decision making)

A trade-off exists between precise unbiased answers to narrow questions and more uncertain answers to broader, more complex questions; researchers should answer the questions that are most useful to decision makers rather than those that can be answered with greater certainty

Complex intervention research can be considered in terms of phases, although these phases are not necessarily sequential: development or identification of an intervention, assessment of feasibility of the intervention and evaluation design, evaluation of the intervention, and impactful implementation

At each phase, six core elements should be considered to answer the following questions:

How does the intervention interact with its context?

What is the underpinning programme theory?

How can diverse stakeholder perspectives be included in the research?

What are the key uncertainties?

How can the intervention be refined?

What are the comparative resource and outcome consequences of the intervention?

The answers to these questions should be used to decide whether the research should proceed to the next phase, return to a previous phase, repeat a phase, or stop

Development of the Framework for Developing and Evaluating Complex Interventions

The updated Framework for Developing and Evaluating Complex Interventions is the culmination of a process that included four stages:

A gap analysis to identify developments in the methods and practice since the previous framework was published

A full day expert workshop, in May 2018, of 36 participants to discuss the topics identified in the gap analysis

An open consultation on a draft of the framework in April 2019, whereby we sought stakeholder opinion by advertising via social media, email lists and other networks for written feedback (52 detailed responses were received from stakeholders internationally)

A redraft using findings from the previous stages, followed by a final expert review.

We also sought stakeholder views at various interactive workshops throughout the development of the framework: at the annual meetings of the Society for Social Medicine and Population Health (2018), the UK Society for Behavioural Medicine (2017, 2018), and internationally at the International Congress of Behavioural Medicine (2018). The entire process was overseen by a scientific advisory group representing the range of relevant NIHR programmes and MRC population health investments. The framework was reviewed by the MRC-NIHR Methodology Research Programme Advisory Group and then approved by the MRC Population Health Sciences Group in March 2020 before undergoing further external peer and editorial review through the NIHR Journals Library peer review process. More detailed information and the methods used to develop this new framework are described elsewhere. 9 This article introduces the framework and summarises the main messages for producers and users of evidence.

What are complex interventions?

An intervention might be considered complex because of properties of the intervention itself, such as the number of components involved; the range of behaviours targeted; expertise and skills required by those delivering and receiving the intervention; the number of groups, settings, or levels targeted; or the permitted level of flexibility of the intervention or its components. For example, the Links Worker Programme was an intervention in primary care in Glasgow, Scotland, that aimed to link people with community resources to help them “live well” in their communities. It targeted individual, primary care (general practitioner (GP) surgery), and community levels. The intervention was flexible in that it could differ between primary care GP surgeries. In addition, the Link Workers did not support just one specific health or wellbeing issue: bereavement, substance use, employment, and learning difficulties were all included. 10 11 The complexity of this intervention had implications for many aspects of its evaluation, such as the choice of appropriate outcomes and processes to assess.

Flexibility in intervention delivery and adherence might be permitted to allow for variation in how, where, and by whom interventions are delivered and received. Standardisation of interventions could relate more to the underlying process and functions of the intervention than to the specific form of the components delivered. 12 For example, in surgical trials, protocols can be designed with flexibility for intervention delivery. 13 Interventions require a theoretical deconstruction into components and then agreement about permissible and prohibited variation in the delivery of those components. This approach allows implementation of a complex intervention to vary across different contexts yet maintain the integrity of the core intervention components. Drawing on this approach in the ROMIO pilot trial, core components of minimally invasive oesophagectomy were agreed and subsequently monitored during main trial delivery using photography. 14

Complexity might also arise through interactions between the intervention and its context, by which we mean “any feature of the circumstances in which an intervention is conceived, developed, implemented and evaluated.” 6 15 16 17 Much of the criticism of, and extensions to, the existing framework and guidance have focused on the need for greater attention to understanding how and under what circumstances interventions bring about change. 7 15 18 The importance of interactions between the intervention and its context emphasises the value of identifying mechanisms of change, where mechanisms are the causal links between intervention components and outcomes; and contextual factors, which determine and shape whether and how outcomes are generated. 19

Thus, attention is given not only to the design of the intervention itself but also to the conditions needed to realise its mechanisms of change and/or the resources required to support intervention reach and impact in real world implementation. For example, in a cluster randomised trial of ASSIST (a peer led, smoking prevention intervention), researchers found that the intervention worked particularly well in cohesive communities that were served by one secondary school where peer supporters were in regular contact with their peers—a key contextual factor consistent with diffusion of innovation theory, which underpinned the intervention design. 20 A process evaluation conducted alongside a trial of robot assisted surgery identified key contextual factors to support effective implementation of this procedure, including engaging staff at different levels and surgeons who would not be using robot assisted surgery, whole team training, and an operating theatre of suitable size. 21

With this framing, complex interventions can helpfully be considered as events in systems. 16 Thinking about systems helps us understand, in a dynamic way, the interaction between an intervention and the context in which it is implemented. 22 Systems can be thought of as complex and adaptive, 23 characterised by properties such as emergence, feedback, adaptation, and self-organisation (table 1).

Table 1: Properties and examples of complex adaptive systems

For complex intervention research to be most useful to decision makers, it should take into account the complexity that arises both from the intervention’s components and from its interaction with the context in which it is being implemented.
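To give a feel for why feedback makes outcomes context dependent, here is a toy simulation (our illustration, not part of the framework). It assumes a single invented "cohesion" parameter that scales contact between people reached and not yet reached by an intervention, loosely echoing the ASSIST finding that a peer led intervention worked best in cohesive communities; all numbers are made up.

```python
# Toy sketch: the same intervention in three contexts. Uptake spreads through
# contact (a positive feedback loop), so a contextual "cohesion" parameter
# changes the final reach non-linearly. All values are illustrative.
def simulate(cohesion: float, seed_fraction: float = 0.05, steps: int = 30) -> float:
    reached = seed_fraction  # fraction of the population reached so far
    for _ in range(steps):
        # New uptake requires contact between reached and unreached people;
        # cohesion scales how often such contacts occur.
        reached += cohesion * reached * (1.0 - reached)
        reached = min(reached, 1.0)
    return reached

for cohesion in (0.1, 0.3, 0.6):
    print(f"cohesion={cohesion}: reach after 30 steps = {simulate(cohesion):.2f}")
```

Small differences in the contextual parameter produce disproportionate differences in final reach; that non-linearity is exactly what a systems perspective asks evaluators to anticipate.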

Research perspectives

The previous framework and guidance were based on a paradigm in which the salient question was to identify whether an intervention was effective. Complex intervention research driven primarily by this question could fail to deliver interventions that are implementable, cost effective, transferable, and scalable in real world conditions. To deliver solutions for real world practice, complex intervention research requires strong and early engagement with patients, practitioners, and policy makers, shifting the focus from the “binary question of effectiveness” 26 to whether and how the intervention will be acceptable, implementable, cost effective, scalable, and transferable across contexts. In line with a broader conception of complexity, the scope of complex intervention research needs to include the development, identification, and evaluation of whole system interventions and the assessment of how interventions contribute to system change. 22 27 The new framework therefore takes a pluralistic approach and identifies four perspectives that can be used to guide the design and conduct of complex intervention research: efficacy, effectiveness, theory based, and systems (table 2).

Although each research perspective prompts different types of research question, they should be thought of as overlapping rather than mutually exclusive. For example, theory based and systems perspectives to evaluation can be used in conjunction, 33 while an effectiveness evaluation can draw on a theory based or systems perspective through an embedded process evaluation to explore how and under what circumstances outcomes are achieved. 34 35 36

Most complex health intervention research so far has taken an efficacy or effectiveness perspective and for some research questions these perspectives will continue to be the most appropriate. However, some questions equally relevant to the needs of decision makers cannot be answered by research restricted to an efficacy or effectiveness perspective. A wider range and combination of research perspectives and methods, which answer questions beyond efficacy and effectiveness, need to be used by researchers and supported by funders. Doing so will help to improve the extent to which key questions for decision makers can be answered by complex intervention research. Example questions include:

Will this effective intervention reproduce the effects found in the trial when implemented here?

Is the intervention cost effective?

What are the most important things we need to do that will collectively improve health outcomes?

In the absence of evidence from randomised trials, and where conducting such a trial is infeasible, what does the existing evidence suggest is the best option now, and how can this be evaluated?

What wider changes will occur as a result of this intervention?

How are the intervention effects mediated by different settings and contexts?

Phases and core elements of complex intervention research

The framework divides complex intervention research into four phases: development or identification of the intervention, feasibility, evaluation, and implementation ( fig 1 ). A research programme might begin at any phase, depending on the key uncertainties about the intervention in question. Repeating phases is preferable to automatic progression if uncertainties remain unresolved. Each phase has a common set of core elements—considering context, developing and refining programme theory, engaging stakeholders, identifying key uncertainties, refining the intervention, and economic considerations. These elements should be considered early and continually revisited throughout the research process, and especially before moving between phases (for example, between feasibility testing and evaluation).

Fig 1: Framework for developing and evaluating complex interventions.

Context = any feature of the circumstances in which an intervention is conceived, developed, evaluated, and implemented.

Programme theory = describes how an intervention is expected to lead to its effects and under what conditions; the programme theory should be tested and refined at all stages and used to guide the identification of uncertainties and research questions.

Stakeholders = those who are targeted by the intervention or policy, involved in its development or delivery, or more broadly those whose personal or professional interests are affected (that is, who have a stake in the topic); this includes patients and members of the public as well as those linked in a professional capacity.

Uncertainties = the key uncertainties that exist, given what is already known and what the programme theory, research team, and stakeholders identify as being most important to discover; these judgments inform the framing of research questions, which in turn govern the choice of research perspective.

Refinement = the process of fine tuning or making changes to the intervention once a preliminary version (prototype) has been developed.

Economic considerations = determining the comparative resource and outcome consequences of the interventions for those people and organisations affected.
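The shape of the framework, four phases, six recurring core elements, and an explicit decision point between phases, can be paraphrased in code. The sketch below is our hypothetical rendering: the phase and element names come from the framework, but the decision rule is an invented stand-in, since the framework leaves that judgment to the research team and stakeholders.

```python
# Hypothetical sketch of the framework's structure: four phases, six core
# elements revisited in each phase, and a proceed/repeat/return/stop decision.
PHASES = ["development or identification", "feasibility",
          "evaluation", "implementation"]

CORE_ELEMENTS = ["context", "programme theory", "stakeholders",
                 "key uncertainties", "refinement", "economic considerations"]

def next_step(unresolved: set[str]) -> str:
    """Invented stand-in decision rule: proceed only when no core element
    still carries an unresolved key uncertainty."""
    if not unresolved:
        return "proceed to next phase"
    if unresolved <= {"refinement"}:
        return "repeat this phase with a refined intervention"
    return "return to an earlier phase, or stop if no longer worth pursuing"

# Example: after a feasibility study, recruitment questions are answered but
# questions about context and programme theory remain open.
print(next_step({"context", "programme theory"}))
```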

Core elements

Context

The effects of a complex intervention might often be highly dependent on context, such that an intervention that is effective in some settings could be ineffective or even harmful elsewhere. 6 As the examples in table 1 show, interventions can modify the contexts in which they are implemented, by eliciting responses from other agents, or by changing behavioural norms or exposure to risk, so that their effects will also vary over time. Context can be considered as both dynamic and multi-dimensional. Key dimensions include physical, spatial, organisational, social, cultural, political, or economic features of the healthcare, health system, or public health contexts in which interventions are implemented. For example, the evaluation of the Breastfeeding In Groups intervention found that the context of the different localities (eg, staff morale and suitable premises) influenced policy implementation and was an explanatory factor in why breastfeeding rates increased in some intervention localities and declined in others. 37

Programme theory

Programme theory describes how an intervention is expected to lead to its effects and under what conditions. It articulates the key components of the intervention and how they interact, the mechanisms of the intervention, the features of the context that are expected to influence those mechanisms, and how those mechanisms might influence the context. 38 Programme theory can be used to promote shared understanding of the intervention among diverse stakeholders, and to identify key uncertainties and research questions. Where an intervention (such as a policy) is developed by others, researchers still need to theorise the intervention before attempting to evaluate it. 39 Best practice is to develop programme theory at the beginning of the research project with involvement of diverse stakeholders, based on evidence and theory from relevant fields, and to refine it during successive phases. The EPOCH trial tested a large scale quality improvement programme aimed at improving 90 day survival rates for patients undergoing emergency abdominal surgery; it included a well articulated programme theory at the outset, which supported the tailoring of programme delivery to local contexts. 40 Developing and implementing the programme theory, and reflecting on it after the study, yielded suggested improvements for future implementation of the quality improvement programme.

A refined programme theory is an important evaluation outcome and is the principal aim where a theory based perspective is taken. Improved programme theory will help inform transferability of interventions across settings and help produce evidence and understanding that is useful to decision makers. In addition to full articulation of the programme theory in text, visual representations can help: for example, a logic model, 41 42 43 realist matrix, 44 or system map, 45 with the choice depending on which is most appropriate for the research perspective and research questions. Although useful, any single visual representation is unlikely to articulate the programme theory sufficiently on its own; the theory should always be articulated well within the text of publications, reports, and funding applications.
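Because a programme theory names components, mechanisms, contextual influences, and outcomes, it can also be sketched as a small directed graph. The example below is our hypothetical illustration, with invented node names loosely modelled on a peer led prevention intervention; it is not a published logic model.

```python
# Hypothetical sketch: a programme theory as edges from intervention
# components through mechanisms to outcomes, with contextual factors
# attached to the mechanisms they are expected to influence.
theory_edges = {
    "peer supporter training": ["conversations with peers"],
    "conversations with peers": ["changed social norms"],
    "changed social norms": ["reduced smoking uptake"],
}
context_moderators = {
    "conversations with peers": ["community cohesion", "single school catchment"],
}

def causal_chains(node: str, graph: dict[str, list[str]]) -> list[list[str]]:
    """Enumerate causal chains starting at a component (simple depth-first walk)."""
    if node not in graph:
        return [[node]]
    return [[node] + rest
            for successor in graph[node]
            for rest in causal_chains(successor, graph)]

for chain in causal_chains("peer supporter training", theory_edges):
    print(" -> ".join(chain))
for mechanism, factors in context_moderators.items():
    print(f"context shaping '{mechanism}': {', '.join(factors)}")
```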

Stakeholders

Stakeholders include those individuals who are targeted by the intervention or policy, those involved in its development or delivery, or those whose personal or professional interests are affected (that is, all those who have a stake in the topic). Patients and the public are key stakeholders. Meaningful engagement with appropriate stakeholders at each phase of the research is needed to maximise the potential of developing or identifying an intervention that is likely to have positive impacts on health, and to enhance the prospects of achieving changes in policy or practice. For example, patient and public involvement 46 activities in the PARADES programme, which evaluated approaches to reduce harm and improve outcomes for people with bipolar disorder, were wide ranging and central to the project. 47 Involving service users with lived experience of bipolar disorder had many benefits: it enhanced the intervention and also improved the evaluation and dissemination methods. Service users involved in the study also had positive outcomes, including more settled employment and progression to further education. Broad thinking and consultation are needed to identify a diverse range of appropriate stakeholders.

The purpose of stakeholder engagement will differ depending on the context and phase of the research, but engagement is essential for prioritising research questions, co-developing programme theory, choosing the most useful research perspective, and overcoming practical obstacles to evaluation and implementation. Researchers should nevertheless be mindful of conflicts of interest among stakeholders and use transparent methods to record potential conflicts. Research should not only elicit stakeholder priorities, but also consider why they are priorities. Careful consideration of the appropriateness and methods of identification and engagement of stakeholders is needed. 46 48

Key uncertainties

Many questions could be answered at each phase of the research process. The design and conduct of research need to engage pragmatically with the multiple uncertainties involved and offer a flexible and emergent approach to exploring them. 15 Therefore, researchers should spend time developing the programme theory, clearly identifying the remaining uncertainties, given what is already known and what the research team and stakeholders identify as being most important to determine. Judgments about the key uncertainties inform the framing of research questions, which in turn govern the choice of research perspective.

Efficacy trials of relatively uncomplicated interventions in tightly controlled conditions, where research questions are answered with great certainty, will always be important, but translation of the evidence into the diverse settings of everyday practice is often highly problematic. 27 For intervention research in healthcare and public health settings to take on more challenging evaluation questions, greater priority should be given to mixed methods, theory based, or systems evaluation that is sensitive to complexity and that emphasises implementation, context, and system fit. This approach could help improve understanding and identify important implications for decision makers, albeit with caveats, assumptions, and limitations. 22 Rather than maintaining the established tendency to prioritise strong research designs that answer some questions with certainty but are unsuited to resolving many important evaluation questions, this more inclusive, deliberative process could place greater value on equivocal findings that nevertheless inform important decisions where evidence is sparse.

Intervention refinement

Within each phase of complex intervention research and on transition from one phase to another, the intervention might need to be refined, on the basis of data collected or development of programme theory. 4 The feasibility and acceptability of interventions can be improved by engaging potential intervention users to inform refinements. For example, an online physical activity planner for people with diabetes mellitus was found to be difficult to use, resulting in the tool providing incorrect personalised advice. To improve usability and the advice given, several iterations of the planner were developed on the basis of interviews and observations. This iterative process led to the refined planner demonstrating greater feasibility and accuracy. 49

Refinements should be guided by the programme theory, with acceptable boundaries agreed and specified at the beginning of each research phase, and with transparent reporting of the rationale for change. Scope for refinement might also be limited by the policy or practice context. Refinement will be rare in the evaluation phase of efficacy and effectiveness research, where interventions will ideally not change or evolve within the course of the study. However, between the phases of research, and within systems and theory based evaluation studies, refinement of interventions in response to accumulated data, or as an adaptive and variable response to context and system change, is likely to be a desirable feature of the intervention and a key focus of the research.

Economic considerations

Economic evaluation—the comparative analysis of alternative courses of action in terms of both costs (resource use) and consequences (outcomes, effects)—should be a core component of all phases of intervention research. Early engagement of economic expertise will help identify the scope of costs and benefits to assess in order to answer questions that matter most to decision makers. 50 Broad ranging approaches such as cost benefit analysis or cost consequence analysis, which seek to capture the full range of health and non-health costs and benefits across different sectors, 51 will often be more suitable for an economic evaluation of a complex intervention than narrower approaches such as cost effectiveness or cost utility analysis. For example, evaluation of the New Orleans Intervention Model for infants entering foster care in Glasgow included short and long term economic analysis from multiple perspectives (the UK’s health service and personal social services, public sector, and wider societal perspectives); and used a range of frameworks, including cost utility and cost consequence analysis, to capture changes in the intersectoral costs and outcomes associated with child maltreatment. 52 53 The use of multiple economic evaluation frameworks provides decision makers with a comprehensive, multi-perspective guide to the cost effectiveness of the New Orleans Intervention Model.
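The core arithmetic behind any of these economic frameworks can be shown in a few lines. The sketch below uses invented numbers: it computes an incremental cost-effectiveness ratio (ICER) and a net monetary benefit at a placeholder willingness-to-pay threshold. A real analysis would also discount costs and outcomes, characterise uncertainty, and adopt the multiple perspectives described above.

```python
# Minimal sketch of cost-effectiveness arithmetic. All numbers are invented.
cost_new, effect_new = 12_000.0, 6.30   # mean cost and mean outcome (eg, QALYs)
cost_old, effect_old = 10_500.0, 6.05   # comparator (usual care)

delta_cost = cost_new - cost_old
delta_effect = effect_new - effect_old

icer = delta_cost / delta_effect        # incremental cost-effectiveness ratio
wtp = 20_000.0                          # placeholder willingness to pay per unit of effect
nmb = wtp * delta_effect - delta_cost   # incremental net monetary benefit

print(f"ICER = {icer:,.0f} per unit of effect")
print(f"Incremental net monetary benefit at threshold {wtp:,.0f} = {nmb:,.0f}")
```

A positive net monetary benefit at the chosen threshold points the same way as an ICER below that threshold; broader cost consequence analyses instead report the disaggregated costs and outcomes rather than collapsing them into one ratio.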

Developing or identifying a complex intervention

Development refers to the whole process of designing and planning an intervention, from initial conception through to feasibility, pilot, or evaluation study. Guidance on intervention development has recently been produced through the INDEX study. 4 Here, however, we highlight that complex intervention research does not always begin with new or researcher led interventions. For example:

A key source of intervention development might be an intervention that has been developed elsewhere and has the possibility of being adapted to a new context. Adaptation of existing interventions could include adapting to a new population, to a new setting, 54 55 or to target other outcomes (eg, a smoking prevention intervention being adapted to tackle substance misuse and sexual health). 20 56 57 A well developed programme theory can help identify what features of the antecedent intervention(s) need to be adapted for different applications, and the key mechanisms that should be retained even if delivered slightly differently. 54 58

Policy or practice led interventions are an important focus of evaluation research. Again, uncovering the implicit theoretical basis of an intervention and developing a programme theory is essential to identifying key uncertainties and working out how the intervention might be evaluated. This step is important, even if rollout has begun, because it supports the identification of mechanisms of change, important contextual factors, and relevant outcome measures. For example, researchers evaluating the UK soft drinks industry levy developed a bounded conceptual system map, drawing on stakeholder views and document review, to articulate their understanding of how the intervention was expected to work. This system map guided the evaluation design and helped identify data sources to support evaluation. 45 Another example is a recent analysis of the implicit theory of the NHS diabetes prevention programme, based on documentation from NHS England and four providers; it found no explicit theoretical basis for the programme and no logic model showing how the intervention was expected to work, which left the justification for including particular intervention components unclear. 59

Intervention identification and intervention development represent two distinct pathways of evidence generation, 60 but in both cases, the key considerations in this phase relate to the core elements described above.

Feasibility

A feasibility study should be designed to assess predefined progression criteria that relate to the evaluation design (eg, reducing uncertainty around recruitment, data collection, retention, outcomes, and analysis) or the intervention itself (eg, around optimal content and delivery, acceptability, adherence, likelihood of cost effectiveness, or capacity of providers to deliver the intervention). If the programme theory suggests that contextual or implementation factors might influence the acceptability, effectiveness, or cost effectiveness of the intervention, these factors should also be examined at this stage.
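Progression criteria of this kind are often written as explicit numeric thresholds agreed in advance. The sketch below shows one hypothetical way to encode and check them, using an invented traffic light convention (green = proceed, amber = amend, red = stop); neither the thresholds nor the convention is prescribed by the framework.

```python
# Hypothetical sketch: checking predefined feasibility progression criteria.
# Each criterion maps to (red_below, green_at) thresholds; values are invented.
criteria = {
    "recruitment_rate": (0.5, 0.8),  # recruits per site per week, scaled
    "retention":        (0.6, 0.8),  # fraction retained at follow-up
    "adherence":        (0.5, 0.7),  # fraction receiving intervention as intended
}
observed = {"recruitment_rate": 0.9, "retention": 0.75, "adherence": 0.65}

def rating(value: float, red_below: float, green_at: float) -> str:
    if value >= green_at:
        return "green: proceed"
    return "amber: amend and reassess" if value >= red_below else "red: stop"

for name, (red_below, green_at) in criteria.items():
    print(f"{name}: {rating(observed[name], red_below, green_at)}")
```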

Although feasibility testing has often been overlooked or rushed in the past, its value is now widely accepted, and key terms and concepts are well defined. 61 62 Before initiating a feasibility study, researchers should consider conducting an evaluability assessment to determine whether and how an intervention can usefully be evaluated. Evaluability assessment involves collaboration with stakeholders to reach agreement on the expected outcomes of the intervention, the data that could be collected to assess processes and outcomes, and the options for designing the evaluation. 63 The end result is a recommendation on whether an evaluation is feasible, whether it can be carried out at a reasonable cost, and by which methods. 64

Economic modelling can be undertaken at the feasibility stage to assess the likelihood that the expected benefits of the intervention justify the costs (including the cost of further research), and to help decision makers decide whether proceeding to a full scale evaluation is worthwhile. 65 Depending on the results of the feasibility study, further work might be required to progressively refine the intervention before embarking on a full scale evaluation.
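One standard way to weigh the expected benefits of further research against its cost is the expected value of perfect information (EVPI): the gap between the expected net benefit if the uncertain parameters could be known before deciding and the expected net benefit of the best decision made now. Below is a minimal Monte Carlo sketch with invented prior distributions and a placeholder threshold; a real analysis would model the decision problem in much more detail.

```python
# Minimal EVPI sketch. Net benefit NB = wtp * effect - cost; inputs are invented.
import random

random.seed(1)
wtp = 20_000.0
n_sims = 10_000

def simulate_net_benefits() -> tuple[float, float]:
    """One simulated world: (NB of usual care, NB of intervention), both
    measured relative to usual care, so the first entry is zero."""
    effect_gain = random.gauss(0.25, 0.15)  # uncertain incremental effect
    extra_cost = random.gauss(1_500, 300)   # uncertain incremental cost
    return 0.0, wtp * effect_gain - extra_cost

sims = [simulate_net_benefits() for _ in range(n_sims)]

# Deciding now: pick the option with the highest expected net benefit.
mean_nb = [sum(world[i] for world in sims) / n_sims for i in range(2)]
value_now = max(mean_nb)

# With perfect information: pick the best option in each simulated world.
value_perfect = sum(max(world) for world in sims) / n_sims

print(f"EVPI per decision = {value_perfect - value_now:,.0f}")
```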

Evaluation

The new framework defines evaluation as going beyond asking whether an intervention works (in the sense of achieving its intended outcome), to a broader range of questions including identifying what other impact it has, theorising how it works, taking account of how it interacts with the context in which it is implemented, how it contributes to system change, and how the evidence can be used to support decision making in the real world. This implies a shift from an exclusive focus on obtaining unbiased estimates of effectiveness 66 towards prioritising the usefulness of information for decision making in selecting the optimal research perspective and in prioritising answerable research questions.

A crucial aspect of evaluation design is the choice of outcome measures or evidence of change. Evaluators should work with stakeholders to assess which outcomes are most important, and how to deal with multiple outcomes in the analysis with due consideration of statistical power and transparent reporting. A sharp distinction between one primary outcome and several secondary outcomes is not necessarily appropriate, particularly where the programme theory identifies impacts across a range of domains. Where needed to support the research questions, prespecified subgroup analyses should be carried out and reported. Even where such analyses are underpowered, they should be included in the protocol because they might be useful for subsequent meta-analyses, or for developing hypotheses for testing in further research. Outcome measures could capture changes to a system rather than changes in individuals. Examples include changes in relationships within an organisation, the introduction of policies, changes in social norms, or normalisation of practice. Such system level outcomes include how changing the dynamics of one part of a system alters behaviours in other parts, such as the potential for displacement of smoking into the home after a public smoking ban.
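Where statistical power does need to be weighed, the conventional two-arm sample size calculation, inflated by the design effect when randomising clusters such as schools (as several of the examples here do), gives a feel for the stakes. The effect size, intracluster correlation, and cluster size below are invented illustrations.

```python
# Conventional sample size per arm for comparing two means, then inflated by
# the design effect for cluster randomisation. All inputs are illustrative.
from math import ceil
from statistics import NormalDist

alpha, power = 0.05, 0.80
z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_beta = NormalDist().inv_cdf(power)

effect_size = 0.30  # standardised mean difference (Cohen's d)
n_per_arm = 2 * ((z_alpha + z_beta) / effect_size) ** 2

icc, cluster_size = 0.02, 25  # intracluster correlation, participants per cluster
design_effect = 1 + (cluster_size - 1) * icc

print(f"Individually randomised: {ceil(n_per_arm)} per arm")
print(f"Cluster randomised (design effect {design_effect:.2f}): "
      f"{ceil(n_per_arm * design_effect)} per arm")
```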

A helpful illustration of the use of system level outcomes is the evaluation of the Delaware Young Health Program—an initiative to improve the health and wellbeing of young people in Delaware, USA. The intervention aimed to change underlying system dynamics, structures, and conditions, so the evaluation identified systems oriented research questions and methods. Three systems science methods were used: group model building and viable systems model assessment to identify underlying patterns and structures; and social network analysis to evaluate change in relationships over time. 67
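System level outcomes such as "change in relationships over time" can be quantified with standard network measures. The sketch below assumes the widely used networkx library and two invented snapshots of a collaboration network; a real social network analysis would of course use surveyed relationship data and more than two summary statistics.

```python
# Sketch: comparing two snapshots of a relationship network (invented data)
# using standard summary measures from networkx.
import networkx as nx

before = nx.Graph([("A", "B"), ("B", "C")])
after = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")])

for label, graph in (("before", before), ("after", after)):
    print(label,
          f"density={nx.density(graph):.2f}",
          f"average clustering={nx.average_clustering(graph):.2f}")
```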

Researchers have many study designs to choose from, and different designs are best suited to different research questions and circumstances. 68 Extensions to standard designs of randomised controlled trials (including adaptive designs, SMART trials (sequential multiple assignment randomised trials), n-of-1 trials, and hybrid effectiveness-implementation designs) are important areas of methods development to improve the efficiency of complex intervention research. 69 70 71 72 Non-randomised designs and modelling approaches might work best if a randomised design is not practical, for example, in natural experiments or systems evaluations. 5 73 74 A purely quantitative approach, using an experimental design with no additional elements such as a process evaluation, is rarely adequate for complex intervention research, where qualitative and mixed methods designs might be necessary to answer questions beyond effectiveness. In many evaluations, the nature of the intervention, the programme theory, or the priorities of stakeholders could lead to a greater focus on improving theories about how to intervene. In this view, effect estimates are inherently context bound, so average effects are not a useful guide to decision makers working in different contexts. Contextualised understandings of how an intervention induces change might be more useful, along with details of the most important enablers of, and constraints on, its delivery across a range of settings. 7

Process evaluation can answer questions around fidelity and quality of implementation (eg, what is implemented and how?), mechanisms of change (eg, how does the delivered intervention produce change?), and context (eg, how does context affect implementation and outcomes?). 7 Process evaluation can help determine why an intervention fails unexpectedly or has unanticipated consequences, or why it works and how it can be optimised. Such findings can facilitate further development of the intervention programme theory. 75 In a theory based or systems evaluation, there is not necessarily such a clear distinction between process and outcome evaluation as there is in an effectiveness study. 76 These perspectives could prioritise theory building over evidence production and use case study or simulation methods to understand how outcomes or system behaviour are generated through intervention. 74 77

Implementation

Early consideration of implementation increases the potential of developing an intervention that can be widely adopted and maintained in real world settings. Implementation questions should be anticipated in the intervention programme theory, and considered throughout the phases of intervention development, feasibility testing, process evaluation, and outcome evaluation. Alongside implementation specific outcomes (such as reach or uptake of services), attention to the components of the implementation strategy, and to the contextual factors that support or hinder the achievement of impacts, is key. Some flexibility in intervention implementation might support intervention transferability into different contexts (an important aspect of long term implementation 78 ), provided that the key functions of the programme are maintained and that the adaptations made are clearly understood. 8

In the ASSIST study, 20 a school based, peer led intervention for smoking prevention, researchers considered implementation at each phase. The intervention was developed to cause minimal disruption to school resources; the feasibility study resulted in intervention refinements to improve acceptability and reach among male students; and in the evaluation (a cluster randomised controlled trial), the intervention was delivered as closely as possible to real world implementation. Drawing on the process evaluation, the implementation materials included an intervention manual that identified critical components, and other components that could be adapted or dropped to allow flexible implementation while still delivering the key mechanisms of change; a training manual for the trainers; and ongoing quality assurance built into rollout for the longer term.

In a natural experimental study, evaluation takes place during or after the implementation of the intervention in a real world context. Highly pragmatic effectiveness trials or specific hybrid effectiveness-implementation designs also combine effectiveness and implementation outcomes in one study, with the aim of reducing time for translation of research on effectiveness into routine practice. 72 79 80

Implementation questions should be included in economic considerations during the early stages of intervention and study development. How the results of economic analyses are reported and presented to decision makers can affect whether and how they act on the results. 81 A key consideration is how to deal with interventions across different sectors, where those paying for interventions and those receiving the benefits of them could differ, reducing the incentive to implement an intervention, even if shown to be beneficial and cost effective. Early engagement with appropriate stakeholders will help frame appropriate research questions and could anticipate any implementation challenges that might arise. 82

Conclusions

One of the motivations for developing this new framework was to answer calls for a change in research priorities, towards allocating greater effort and funding to research that can have the optimum impact on healthcare or population health outcomes. The framework challenges the view that unbiased estimates of effectiveness are the cardinal goal of evaluation. It asserts that improving theories and understanding how interventions contribute to change, including how they interact with their context and wider dynamic systems, is an equally important goal. For some complex intervention research problems, an efficacy or effectiveness perspective will be the optimal approach, and a randomised controlled trial will provide the best design to achieve an unbiased estimate. For others, alternative perspectives and designs might work better, or might be the only way to generate new knowledge to reduce decision maker uncertainty.

What is important for the future is that the scope of intervention research is not constrained by an unduly limited set of perspectives and approaches that might be less risky to commission and more likely to produce a clear and unbiased answer to a specific question. A bolder approach is needed—to include methods and perspectives where experience is still quite limited, but where we, supported by our workshop participants and respondents to our consultations, believe there is an urgent need to make progress. This endeavour will involve mainstreaming new methods that are not yet widely used, as well as undertaking methodological innovation and development. The deliberative and flexible approach that we encourage is intended to reduce research waste, 83 maximise usefulness for decision makers, and increase the efficiency with which complex intervention research generates knowledge that contributes to health improvement.

Monitoring the use of the framework and evaluating its acceptability and impact is important but has been lacking in the past. We encourage research funders and journal editors to support the diversity of research perspectives and methods that are advocated here and to seek evidence that the core elements are attended to in research design and conduct. We have developed a checklist to support the preparation of funding applications, research protocols, and journal publications. 9 This checklist offers one way to monitor impact of the guidance on researchers, funders, and journal editors.

We recommend that the guidance is continually updated, and future updates continue to adopt a broad, pluralist perspective. Given its wider scope, and the range of detailed guidance that is now available on specific methods and topics, we believe that the framework is best seen as meta-guidance. Further editions should be published in a fluid, web based format, and more frequently updated to incorporate new material, further case studies, and additional links to other new resources.

Acknowledgments

We thank the experts who provided input at the workshop, those who responded to the consultation, and those who provided advice and review throughout the process. The many people involved are acknowledged in the full framework document. 9 Parts of this manuscript have been reproduced (some with edits and formatting changes), with permission, from that longer framework document.

Contributors: All authors made a substantial contribution to all stages of the development of the framework—they contributed to its development, drafting, and final approval. KS and LMa led the writing of the framework, and KS wrote the first draft of this paper. PC, SAS, and LMo provided critical insights to the development of the framework and contributed to writing both the framework and this paper. KS, LMa, SAS, PC, and LMo facilitated the expert workshop, KS and LMa developed the gap analysis and led the analysis of the consultation. KAB, NC, and EM contributed the economic components to the framework. The scientific advisory group (JB, JMB, DPF, MP, JR-M, and MW) provided feedback and edits on drafts of the framework, with particular attention to process evaluation (JB), clinical research (JMB), implementation (JR-M, DPF), systems perspective (MP), theory based perspective (JR-M), and population health (MW). LMo is senior author. KS and LMo are the guarantors of this work and accept the full responsibility for the finished article. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting authorship criteria have been omitted.

Funding: The work was funded by the National Institute for Health Research (Department of Health and Social Care 73514) and Medical Research Council (MRC). Additional time on the study was funded by grants from the MRC for KS (MC_UU_12017/11, MC_UU_00022/3), LMa, SAS, and LMo (MC_UU_12017/14, MC_UU_00022/1); PC (MC_UU_12017/15, MC_UU_00022/2); and MW (MC_UU_12015/6 and MC_UU_00006/7). Additional time on the study was also funded by grants from the Chief Scientist Office of the Scottish Government Health Directorates for KS (SPHSU11 and SPHSU18); LMa, SAS, and LMo (SPHSU14 and SPHSU16); and PC (SPHSU13 and SPHSU15). KS and SAS were also supported by an MRC Strategic Award (MC_PC_13027). JMB received funding from the NIHR Biomedical Research Centre at University Hospitals Bristol NHS Foundation Trust and the University of Bristol and by the MRC ConDuCT-II Hub (Collaboration and innovation for Difficult and Complex randomised controlled Trials In Invasive procedures - MR/K025643/1). DF is funded in part by the NIHR Manchester Biomedical Research Centre (IS-BRC-1215-20007) and NIHR Applied Research Collaboration - Greater Manchester (NIHR200174). MP is funded in part as director of the NIHR’s Public Health Policy Research Unit. This project was overseen by a scientific advisory group that comprised representatives of NIHR research programmes, of the MRC/NIHR Methodology Research Programme Panel, of key MRC population health research investments, and authors of the 2006 guidance. A prospectively agreed protocol, outlining the workplan, was agreed with MRC and NIHR, and signed off by the scientific advisory group. The framework was reviewed and approved by the MRC/NIHR Methodology Research Programme Advisory Group and MRC Population Health Sciences Group and completed NIHR HTA Monograph editorial and peer review processes.

Competing interests: All authors have completed the ICMJE uniform disclosure form at http://www.icmje.org/coi_disclosure.pdf and declare: support from the NIHR, MRC, and the funders listed above for the submitted work; KS has project grant funding from the Scottish Government Chief Scientist Office; SAS is a former member of the NIHR Health Technology Assessment Clinical Evaluation and Trials Programme Panel (November 2016 - November 2020) and member of the Chief Scientist Office Health HIPS Committee (since 2018) and NIHR Policy Research Programme (since November 2019), and has project grant funding from the Economic and Social Research Council, MRC, and NIHR; LMo is a former member of the MRC-NIHR Methodology Research Programme Panel (2015-19) and MRC Population Health Sciences Group (2015-20); JB is a member of the NIHR Public Health Research Funding Committee (since May 2019), and a core member (since 2016) and vice chairperson (since 2018) of a public health advisory committee of the National Institute for Health and Care Excellence; JMB is a former member of the NIHR Clinical Trials Unit Standing Advisory Committee (2015-19); DPF is a former member of the NIHR Public Health Research programme research funding board (2015-2019), the MRC-NIHR Methodology Research Programme panel member (2014-2018), and is a panel member of the Research Excellence Framework 2021, subpanel 2 (public health, health services, and primary care; November 2020 - February 2022), and has grant funding from the European Commission, NIHR, MRC, Natural Environment Research Council, Prevent Breast Cancer, Breast Cancer Now, Greater Sport, Manchester University NHS Foundation Trust, Christie Hospital NHS Trust, and BXS GP; EM is a member of the NIHR Public Health Research funding board; MP has grant funding from the MRC, UK Prevention Research Partnership, and NIHR; JR-M is programme director and chairperson of the NIHR’s Health Services Delivery Research Programme (since 2014) and member of the NIHR Strategy Board (since 2014); MW received a salary as director of the NIHR PHR Programme (2014-20), has grant funding from NIHR, and is a former member of the MRC’s Population Health Sciences Strategic Committee (July 2014 to June 2020). There are no other relationships or activities that could appear to have influenced the submitted work.

Patient and public involvement: This project was methodological; views of patients and the public were included at the open consultation stage of the update. The open consultation, involving access to an initial draft, was promoted to our networks via email and digital channels, such as our unit Twitter account (@theSPHSU). We received five responses from people who identified as service users (rather than researchers or professionals in a relevant capacity). Their input included helpful feedback on the main complexity diagram, the different research perspectives, the challenge of moving interventions between different contexts, and the overall readability and accessibility of the document. Several respondents also highlighted useful signposts to include for readers. Various dissemination events are planned, but as this project is methodological we will not specifically disseminate to patients and the public beyond the planned dissemination activities.

Provenance and peer review: Not commissioned; externally peer reviewed.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/ .


National Institute of Standards and Technology

NIST Technical Series Publication

NIST SP 1500-18r2

NIST Research Data Framework (RDaF)

Version 2.0

Robert J. Hanisch

Office of Data and Informatics

Material Measurement Laboratory

Debra L. Kaiser

Alda Yuan

Andrea Medina-Smith

Bonnie C. Carroll

Eva M. Campo

Campostella Research and Consulting

Alexandria, VA

This publication is available free of charge from:

https://doi.org/10.6028/NIST.SP.1500-18r2

February 2024

The NIST Research Data Framework (RDaF) is a multifaceted and customizable tool that aims to help shape the future of open data access and research data management (RDM). The RDaF will allow organizations and individual researchers to develop their own RDM strategy. Though NIST is leading the RDaF, most of the content in the current version 2.0, which supersedes preliminary V1.0 and interim V1.5, was obtained via engagement with national and international leaders in the research data community. NIST held a series of three plenary and 15 stakeholder workshops from October 2021 to September 2023. Workshop attendees represented many stakeholder sectors: US government agencies, national laboratories, academia, industry, non-profit organizations, publishers, professional societies, trade organizations, and funders (public and private), including international organizations. The audience for the RDaF is the entire research data community in all disciplines—the biological, chemical, medical, social, and physical sciences and the humanities. The RDaF is applicable from the organization to the project level and encompasses a wide array of job roles involving RDM, from executives and Chief Data Officers to publishers, funders, and researchers. The RDaF is a map of the research data space that uses a lifecycle approach with six stages to organize key information concerning RDM and research data dissemination. Through a community-driven and in-depth process, NIST identified and defined specific, high-priority topics and subtopics for each lifecycle stage. The topics and subtopics are programmatic and operational activities, concepts, and other important factors relevant to RDM, which form the foundation of the framework. This foundation enables organizations and individual researchers to use the RDaF for self-assessment of their RDM status. Each subtopic has several informative references—resources such as guidelines, standards, and policies—to help a user understand or implement that subtopic. As such, the RDaF may be considered a “best practices” document. Fourteen overarching themes—topic areas identified as pervasive throughout the framework—illustrate the connections among the six lifecycle stages. Finally, the RDaF includes eight sample profiles for common job functions or roles. Each profile contains topics and subtopics an individual in the given role needs to consider in fulfilling their RDM responsibilities. Individual researchers and organizations involved in the research data lifecycle will be able to tailor these sample profiles or generate entirely new profiles for their specific job function. The methodologies used to generate the content of this publication, RDaF V2.0, are described in detail. An interactive web application has been developed and released that provides an interface for all the components of the RDaF mentioned above and replicates this document. The web application is easy and intuitive to navigate and provides new functionality enabled by the interactive environment.

Publications in the SP1500 subseries are intended to capture external perspectives related to NIST standards, measurement, and testing-related efforts. These external perspectives can come from industry, academia, government, and others. These reports are intended to document external perspectives and do not represent official NIST positions. The opinions, recommendations, findings, and conclusions in this publication do not necessarily reflect the views or policies of NIST or the United States Government.

Certain commercial entities, equipment, or materials may be identified in this document to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose.

NIST Technical Series Policies

Copyright, Fair Use, and Licensing Statements

NIST Technical Series Publication Identifier Syntax

Publication History

Approved by the NIST Editorial Review Board on 2023-12-21

Supersedes NIST Series 1500-18 version 1.5 (May 2023) https://doi.org/10.6028/NIST.SP.1500-18r1 ; NIST Series 1500-18 (February 2021) https://doi.org/10.6028/NIST.SP.1500-18

How to Cite this NIST Technical Series Publication

Hanisch, RJ; Kaiser, D; Yuan, A; Medina-Smith, A; Carroll, B; Campo, E (2023) NIST Research Data Framework (RDaF) Version 2.0. (National Institute of Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 1500-18r2. https://doi.org/10.6028/NIST.SP.1500-18r2

NIST Author ORCID IDs

Robert Hanisch: 0000-0002-6853-4602

Debra Kaiser: 0000-0001-5114-7588

Alda Yuan: 0000-0001-9619-306X

Andrea Medina-Smith: 0000-0002-1217-701X

Bonnie Carroll: 0000-0001-8924-1000

Eva Campo: 0000-0002-9808-4112

Contact Information

[email protected]

Version 2.0 of the NIST Research Data Framework builds on the preliminary version 1.0 released in February 2021 and on the interim version 1.5 released in May 2023, and incorporates input from many stakeholders. Version 2.0 has more than twice as many topics and subtopics as V1.0 and includes new sections. The major new sections are overarching themes, terms prevalent in multiple lifecycle stages, and profiles, which provide a list of the most relevant topics and subtopics for a given job function or role within the research data management ecosystem. A Request for Information (RFI) based on interim V1.5 was posted in the Federal Register in early June 2023. All comments received in response to this RFI were considered and the RDaF V1.5 was revised as appropriate. A draft of this modified version was presented at a stakeholder workshop held in September 2023.

Author Contributions

Robert Hanisch: Conceptualization, Methodology, Supervision, Writing – review and editing; Debra Kaiser: Formal Analysis, Methodology, Writing – review and editing; Alda Yuan: Formal Analysis, Methodology, Project Administration, Writing – original draft, Writing – review and editing, Visualization; Andrea Medina-Smith: Data Curation, Formal Analysis, Visualization, Software, Writing – review and editing; Bonnie Carroll: Conceptualization, Supervision, Writing – review and editing; Eva M. Campo: Data Curation, Visualization, Writing – review and editing.

Acknowledgments

The completeness, relevance, and success of the NIST RDaF is wholly dependent on the input and participation of the broad research data community. NIST is grateful to all the workshop participants and others who have provided input to this effort. First and foremost, NIST thanks the members of the RDaF Steering Committee, past and present, who have given sound advice and shared their invaluable expertise since the inception of the RDaF in December 2019: Laura Biven, Cate Brinson, Bonnie Carroll (Chair), Mercè Crosas, Anita de Waard, Chris Erdmann, Joshua Greenberg, Martin Halbert, Hilary Hanahoe, Heather Joseph, Mark Leggott, Barend Mons, Sarah Nusser, Beth Plale, and Carly Strasser.

The RDaF team is also grateful to Susan Makar from the NIST Research Library for assistance with the informative references and to Angela Lee for development of the V2.0 interactive web application. Thanks to Eric Lin and James St. Pierre for their critical advice.

Thanks to the former members of the RDaF team, including Breeze Dorsey, Laura Espinal, and Tamae Wong. Thanks as well to Campostella Research and Consulting for providing administrative support for the project and technical support for the natural language processing work. Our appreciation also goes to the NIST Material Measurement Laboratory (MML) leadership for their support and to all participants of the various workshops held to solicit community feedback, particularly those individuals who volunteered to serve as discussion leaders.

And finally, thanks to all involved with the NIST Cybersecurity Framework, which provided an initial model for development of the RDaF.

Keywords: research data; research data ecosystem; research data framework; research data lifecycle; research data management; research data dissemination, use, and reuse; research data governance; research data sharing; research data stewardship; open data.

1 Introduction

NIST’s Research Data Framework (RDaF) is designed to help shape the future of research data management (RDM) and open data access. Research data are defined here as “the recorded factual material commonly accepted in the scientific community as necessary to validate research findings.”[1] The motivation for the RDaF as articulated in the first RDaF publication V1.0 [2]—that the research data ecosystem is complicated and requires a comprehensive approach to assist organizations and individuals in attaining their RDM goals—has not changed since the project was initiated in 2019. Developed through active involvement and input from national and international leaders in the research data community, the RDaF provides a customizable strategy for the management of research data. The audience for the RDaF is the entire research data community, including all organizations and individuals engaged in any activities concerned with RDM, from Chief Data Officers and researchers to publishers and funders. The RDaF builds upon previous data-focused frameworks but is distinct through its emphasis on research data, the community-driven nature of its formulation, and its broad applicability to all disciplines, including the social sciences and humanities.

The RDaF is a map of the research data space that uses a lifecycle approach with six high-level lifecycle stages to organize key information concerning RDM and research data dissemination. Through a community-driven and in-depth process, stakeholders identified topics and subtopics—programmatic and operational activities, concepts, and other important factors relevant to RDM. These topics and subtopics, identified via stakeholder input, are nested under the six stages of the research data lifecycle. A partial example of this structure is illustrated in Fig. 1.


Fig. 1 — Partial organizational structure of the framework foundation

The components of the RDaF foundation shown in Fig. 1—lifecycle stages and their associated topics and subtopics—are defined in this document. In addition, most subtopics have several informative references—resources such as guidelines, standards, and policies—that assist stakeholders in addressing that subtopic. Specific standards and protocols provided in the text or informative references may only be relevant for certain RDM situations. A link to the complete list of informative references is given in Appendix A.
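
To make this nesting concrete, the sketch below models the foundation hierarchy in Python. The class names and the example topic, subtopic, and reference are illustrative inventions for this sketch, not terms or entries taken from the RDaF tables.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal model of the RDaF foundation hierarchy:
# lifecycle stage -> topics -> subtopics -> informative references.

@dataclass
class Subtopic:
    name: str
    definition: str = ""
    informative_references: List[str] = field(default_factory=list)  # e.g., guidelines, standards, policies

@dataclass
class Topic:
    name: str
    subtopics: List[Subtopic] = field(default_factory=list)

@dataclass
class LifecycleStage:
    name: str  # one of the six stages, e.g., "Plan"
    topics: List[Topic] = field(default_factory=list)

# A hypothetical fragment of the Plan stage for illustration.
plan = LifecycleStage(
    name="Plan",
    topics=[
        Topic(
            name="Data management planning",
            subtopics=[
                Subtopic(
                    name="Data management plans (DMPs)",
                    definition="Plans describing how research data will be handled.",
                    informative_references=["https://dmptool.org/"],
                )
            ],
        )
    ],
)
```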

The RDaF is not prescriptive; it does not instruct stakeholders to take any specific approach or action. Rather, the RDaF provides stakeholders with a structure for understanding the various components of RDM and for selecting components relevant to their RDM goals. The RDaF also includes sample profiles, which contain topics and subtopics an individual in a job role or function is encouraged to consider in fulfilling their RDM responsibilities. Researchers and organizations involved in the research data lifecycle will be able to tailor these profiles using a supplementary document and online tools that will be available on the RDaF homepage. Entirely new profiles may be generated using a blank online template available in this supplementary document. Other uses of the RDaF include self-assessment and improvement of RDM infrastructure and practices for both organizations and individuals.

The RDaF was designed to be applicable to all stakeholders involved in research data. An organization seeking to review their data management policies may use the subtopics to create their own metrics for RDM assessment. Researchers who wish to ensure that their data are open access may use the framework to create a “checklist” of RDM considerations and tasks. A research project leader seeking guidance on how to assign data management roles may use the eight sample profiles as a starting point to create customized lists of responsibilities for individual researchers in their lab.
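
Continuing the sketch above, one way to realize the “checklist” use described here is to filter the foundation structure for the subtopics a researcher or organization has marked relevant; the selection set below is invented for illustration.

```python
from typing import Iterable, List, Set, Tuple

def build_checklist(
    stages: Iterable["LifecycleStage"], relevant: Set[str]
) -> List[Tuple[str, str, str]]:
    """Return (stage, topic, subtopic) rows for subtopics named in `relevant`."""
    rows = []
    for stage in stages:
        for topic in stage.topics:
            for sub in topic.subtopics:
                if sub.name in relevant:
                    rows.append((stage.name, topic.name, sub.name))
    return rows

# Using the hypothetical `plan` stage from the previous sketch:
for stage_name, topic_name, sub_name in build_checklist(
    [plan], {"Data management plans (DMPs)"}
):
    print(f"[ ] {stage_name} > {topic_name} > {sub_name}")
```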

Since the first publication of the RDaF in 2021 (V1.0 [2]), NIST has expanded and enriched the framework through extensive engagement with stakeholders in the research data community. This publication, RDaF V2.0, includes updates to V1.0 and new features. Definitions and informative references for each subtopic have been added to improve the usability and applicability of the RDaF. In addition to the profiles discussed in the previous paragraph, this document includes overarching themes that appear across multiple lifecycle stages and a list of many of the key organizations in the RDM space (see Appendix B). The methodology used to generate the content of V2.0 is described in detail in the following section.

Note that the terms “data,” “datasets,” “data assets,” “digital objects,” and “digital data objects” are used throughout the framework depending on the context. Data is the most general and frequently used term. Dataset means a specific collection of data having related content. A data asset is “any entity that is comprised of data which may be a system or application output file, database, document, and web page.”[3] Digital objects and digital data objects typically have a structure such that they can be understood without the need for separate documentation. In addition, the terms “organization” and “institution” used throughout the framework are synonymous, and the terms “RDaF team” and “team” refer to the authors of this publication. Finally, a list that spells out the full names of acronyms and initialisms used throughout this document is provided in Appendix C.

2 Methodology

This section describes the approaches used to develop RDaF V2.0, including brief descriptions of activities since the inception of the project in 2019. Throughout the lifetime of the RDaF project, the Steering Committee members noted previously in the Acknowledgments section were consulted, took leadership roles as discussion leaders at workshops, and provided valuable input and feedback on all aspects of the project.

2.1 Framework Development Through Stakeholder Input

The RDaF is driven by the research data stakeholder community, which can use the framework for multiple purposes such as identifying best practices for research data management (RDM) and dissemination and changing the research data culture in an organization. To ensure that the RDaF is a consensus document, NIST held stakeholder engagement workshops as the primary mechanism to gather input on the framework. The workshops have taken place in three phases, each resulting in further examination and refinement of the framework.

2.1.1 Phase 1: Plenary Scoping Workshop and Publication of the Preliminary RDaF V1.0

In the plenary scoping workshop held in December 2019, a group of about 50 distinguished research data experts selected a research data lifecycle approach as the organizing principle of the RDaF. The RDaF team subsequently selected six lifecycle stages—Envision, Plan, Generate/Acquire, Process/Analyze, Share/Use/Reuse, and Preserve/Discard—from a larger pool of stages suggested by workshop break-out groups. Feedback from this workshop contributed to the publication of the RDaF V1.0, which provides a structured and customizable approach to developing a strategy for the management of research data. The framework core (subsequently renamed the foundation in V2.0), consisting of these six lifecycle stages and their associated topics and subtopics, is the main result of that publication.

2.1.2 Phase 2: Opening Plenary Workshops

The second phase of the RDaF development began with two virtual plenary workshops held in late 2021. Each workshop had approximately 70 attendees and focused on two cohorts. The university cohort (UC) workshop, co-hosted by the Association of American Universities, the Association of Public and Land-grant Universities, and the Association of Research Libraries, was a horizontal cut across various stakeholder roles in universities (e.g., vice presidents of research, deans, professors, and librarians), publishing organizations, data-based trade organizations, and professional societies. In contrast, the materials cohort (MC) workshop, held in cooperation with the Materials Research Data Alliance, was a vertical cut across stakeholder organizations engaged in materials science, including academia, government agencies, industry, publishers, and professional societies.

Prior to the workshops, the attendees selected, or were assigned to, one of six breakout sessions, each focused on a stage in the RDaF research data lifecycle. A NIST coordinator sent the attendees a link to the RDaF publication V1.0, a list of the participants, and definitions of the topics for that session’s lifecycle stage. The agenda for the two workshops included an overview talk by Robert Hanisch on the RDaF, a one-hour breakout session, and a plenary session with summaries presented by an attendee of each breakout and with closing remarks. During the breakout sessions, a discussion leader, recruited by the RDaF team, solicited input from the 10 to 12 participants on the following questions:

What are the most important (two or three) topics and the least important one?

Are there any missing topics?

Should any topics be modified or moved to another lifecycle stage?

The identical questions were posed regarding the subtopics for each topic. Attendee input was captured in notes taken by the session rapporteur and the NIST coordinator, and in an audio recording. After the two opening plenary workshops, the RDaF team revised the topics and subtopics for the lifecycle stages based on input from the workshops. All six of the lifecycle stages were then reviewed side-by-side for consistency and completeness.

The collective review revealed 14 overarching themes that appeared in multiple lifecycle stages. These themes include metadata and provenance, data quality, the FAIR (Findable, Accessible, Interoperable, and Reusable) data principles, software tools, and cost implications. Section 4 of this document addresses all overarching themes in detail.

2.1.3 Phase 3: Stakeholder Workshops

The next step in obtaining community input involved a series of two-hour stakeholder workshops focused on specific roles, equivalent to job functions or position titles. To secure a broad range of feedback, the RDaF team compiled a list of more than 200 invitees, including attendees of previous workshops and additional experts. These invitees were assigned to one of the following 15 roles:

Academic mid-level executive/head of research

Budget/cost expert

Data/IT leader

Data/research governance leader

Institute/center/program director

Open data expert

Professional society/trade organization leader

Provider of data tools/services/infrastructure

Senior executive

Unlike the first two RDaF workshops, these role-focused workshops were composed of smaller groups. The goal of these workshops was to develop profiles, i.e., lists of topics and subtopics important for individuals in a specific role with respect to RDM. Though the target size of these two-hour workshops was 10 to 12 participants, the actual number ranged from four to 14. For each workshop, the RDaF team identified and invited an expert to serve as the discussion leader. Two members of the team were assigned to each workshop: a presenter and a rapporteur.

During the workshops, after a brief presentation covering the purpose and structure of the RDaF, participants selected the lifecycle stages most relevant to their assigned role. For each lifecycle stage, participants reviewed the topics and subtopics, and discussed any that were missing, misplaced, or unclear. Depending on the length of the discussion, each workshop covered two to four of the lifecycle stages. In addition to requesting input on the topics and subtopics, the NIST coordinators asked participants to consider which topics and subtopics had the greatest influence on their role and those over which they had the greatest influence.

2.2 Framework Revisions per Stakeholder Workshop Input

Most of the input from participants at the Stakeholder Workshops concerned the topics and subtopics, and this input was used to revise them.

2.2.1 Stakeholder Workshop Note Aggregation

After the Stakeholder Workshops, the RDaF team designed a common methodology for collecting and analyzing the feedback, using a template to record the input from each workshop. This template contained the following sections (a minimal sketch follows the list):

A column for topics and subtopics in a lifecycle stage that were missing, misplaced, or unclear

A column for topics and subtopics relevant to, or missing from, the profile for a role

A section on feedback that addressed the definition of the role

A section on “takeaways” regarding the framework as a whole

A section on proposed new overarching themes
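
As a rough illustration only, such a template could be represented as a simple record; the field names below paraphrase the sections listed above and are not NIST's actual template.

```python
# Illustrative note-aggregation record for one stakeholder workshop.
workshop_notes = {
    "foundation_feedback": [],       # topics/subtopics that were missing, misplaced, or unclear
    "profile_feedback": [],          # topics/subtopics relevant to, or missing from, the role's profile
    "role_definition_feedback": [],  # comments addressing the definition of the role
    "takeaways": [],                 # observations about the framework as a whole
    "proposed_themes": [],           # proposed new overarching themes
}
workshop_notes["takeaways"].append("Example comment recorded by the rapporteur.")
```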

To analyze the feedback from each stakeholder workshop, selected RDaF team members first reviewed the rapporteur’s notes to familiarize themselves with the discussion. Then these team members viewed the recording of the workshop, read through any written comments provided in the workshop chat, and noted every comment in the appropriate section of the template. After the first draft of the template notes was completed, the team members viewed the recording a second time, added any missing comments, and converted each comment and suggestion concerning a topic or subtopic into a potential change for review. Finally, the entire RDaF team considered each potential change and generated an updated interim V1.5 of the framework foundation.

2.2.2 Input for Profile Development

After updating the framework foundation based on the stakeholder feedback, the next step involved the generation of a sample profile for each role addressed by a workshop. As the feedback from the stakeholder workshops concerning profiles was limited and varied in form and specificity, more data were needed to develop these profiles.

The updated topics and subtopics were used to develop blank checklists of topics and subtopics for the lifecycle stages discussed at each of the 15 stakeholder workshops. The appropriate spreadsheet was sent to the participants of a given workshop with instructions to mark those topics and subtopics that were most relevant to the role addressed at that workshop. About 60 participants submitted a spreadsheet with their responses for the workshop they attended.

The responses were analyzed for similarities, and several roles were modified. For example, professors and researchers were grouped together to form one role, as professors are typically involved in their groups’ research. After consideration of the participants’ responses, the RDaF team selected eight common job roles for the generation of sample profiles. These roles are AI expert, curator, budget/cost expert, data and IT expert, provider of data tools, publisher, research organization leader, and researcher.

For each sample profile, the RDaF team first calculated the percentage of responses that labeled a subtopic as relevant. When 50% or more of the respondents considered a subtopic to be relevant, it was presumptively deemed relevant for the sample profile. Next, the team considered all comments received with the profile responses as well as all the notes from the Stakeholder Workshop to further flesh out the sample profile. Lastly, the RDaF team consulted with experts in these roles to finalize the profiles.
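
The 50% rule lends itself to a simple tally. The sketch below illustrates the calculation with invented response data; the subtopic names and votes are placeholders.

```python
# Each subtopic maps to one boolean per respondent: True = marked relevant.
responses = {
    "Data management plans (DMPs)": [True, True, False, True],
    "Hosting and storage, cloud storage": [False, True, False, False],
}

THRESHOLD = 0.5  # at least 50% of respondents

profile = [
    name
    for name, votes in responses.items()
    if sum(votes) / len(votes) >= THRESHOLD
]
print(profile)  # ['Data management plans (DMPs)']
```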

2.2.3 Request for Information on Interim Version 1.5

Interim V1.5 of the RDaF was published in May 2023 [4]. This publication included the entire list of topics and subtopics for the six lifecycle stages, definitions, informative references for most of the subtopics, 14 overarching themes, and eight sample profiles.

The RDaF team developed a Request for Information (RFI) that was posted in the Federal Register on June 6, 2023, to communicate updates to the RDaF and receive additional feedback on V1.5. The public had 30 days after release of the RFI to comment on any aspect of the RDaF. The RDaF team reviewed and distilled the comments into almost 70 possible action items, which were considered individually within the context of the intent of the framework. All comments received were considered in generating V2.0 of the framework.

2.3 Development of an Interactive Web Application

A web application has been developed and released that presents an interface to the RDaF components—lifecycle stages, topics, subtopics, definitions, informative references, overarching themes, and sample profiles—and thus replicates this RDaF V2.0 document in an interactive environment. In addition to providing an easy means of navigating through the various components and the relationships among them, the web application has new functionality such as the capability to link subtopics to their corresponding informative references and to direct a user to the original source of any reference.

The web application runs on a variety of platforms including Windows, MacOS, and Linux. Development of the software—database design, Entity Framework Core, web application framework, search strategies, and user interface—is the subject of a separate publication in preparation.

3 Framework Foundation – Lifecycle Stages, Topics, and Subtopics

The foundation of the RDaF consists of lifecycle stages, topics, and subtopics selected by the RDaF team using a vast amount of stakeholder input as described in Section 2. The RDaF research data lifecycle graphic depicted in Fig. 2 is cyclical rather than linear and has six stages defined below. Each stage is interconnected with all the other stages, i.e., a stage can lead into any other stage. An organization or individual may initially approach the lifecycle from any stage and subsequently address any other stage. It is likely that an organization or individual will be involved in all lifecycle stages simultaneously, though with different levels of intensity or capacity.

Envision – This lifecycle stage encompasses a review of the overall strategies and drivers of an organization’s research data program. In this lifecycle stage, choices and decisions are made that together chart a high-level course of action to achieve desired organizational goals, including how the research data program is incorporated into an organization’s data governance strategy.

Plan – This lifecycle stage encompasses the activities associated with preparing for data acquisition, selection of data formats and storage solutions, and anticipation of data sharing and dissemination strategies and policies, including how a research data program is incorporated into an organization’s data management plan.

Generate/Acquire – This lifecycle stage covers the generation of raw research data, both experimentally and computationally, within an organization or by an individual, and the collection or acquisition of research data produced outside of an organization.

Process/Analyze – This lifecycle stage concerns the actions performed on generated or externally acquired research data to yield processed research data, typically using software, from which observations and conclusions can be made.

Share/Use/Reuse – This lifecycle stage outlines how raw and processed research data are disseminated, used, and reused within an organization or by an individual and any constraints or encouragements to use/reuse such data. This stage also includes the dissemination, use, and reuse of raw and processed research data outside an organization.

Preserve/Discard – This lifecycle stage delineates the end-of-use and end-of-life provisions for research data by an organization or individual and includes records management, archiving, and safe disposal.


Fig. 2 — Research data framework lifecycle stages
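
One way to picture the cyclic, fully interconnected lifecycle is as a complete directed graph over the six stages; the enumeration below is an illustrative sketch, not an RDaF artifact.

```python
from enum import Enum

class Stage(Enum):
    ENVISION = "Envision"
    PLAN = "Plan"
    GENERATE_ACQUIRE = "Generate/Acquire"
    PROCESS_ANALYZE = "Process/Analyze"
    SHARE_USE_REUSE = "Share/Use/Reuse"
    PRESERVE_DISCARD = "Preserve/Discard"

# Any stage can lead into any other stage, so the transition relation
# is a complete directed graph over the six stages.
transitions = {s: [t for t in Stage if t is not s] for s in Stage}
assert all(len(nexts) == 5 for nexts in transitions.values())
```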

Tables 1-6 presented below each cover one research data lifecycle stage and its associated topics and subtopics. The goal of the framework is to be comprehensive while remaining flexible. An organization or individual may find that not every topic and subtopic in a lifecycle stage is relevant to their work. The selection of subtopics to generate a profile for a job or function will be described in Section 5.

Many lexicons are used in the research data management space. Though the RDaF does not intend to introduce an entirely new vocabulary, it is important to be precise with the use of key terms. For each topic and subtopic, the RDaF provides definitions to assist users in understanding what tasks and responsibilities are associated with that topic or subtopic. To derive these definitions, the RDaF team performed a search of common data lexicons such as CODATA’s Research Data Management Terminology and Techopedia [5, 6]. Additionally, the team searched more broadly for common and research data management-specific definitions, including ones for the informative references that provide guidance in the implementation of the RDaF. Some definitions are general or commonly understood and as such have no references. The definitions were checked for consistency with stakeholder feedback. Individual researchers and organizations should keep in mind that these definitions are not prescriptive and should consider their own context when determining whether the definitions provided are appropriate.

Table 1. Envision lifecycle stage

Table 2. Plan lifecycle stage

Table 3. Generate/Acquire lifecycle stage

Table 4. Process/Analyze lifecycle stage

Table 5. Share/Use/Reuse lifecycle stage

Table 6. Preserve/Discard lifecycle stage

4 Overarching Themes

The RDaF was refined from the preliminary V1.0 using input from the two opening plenary workshops and the 15 stakeholder workshops. During this refinement process, 14 themes that spanned the various lifecycle stages were identified. Rather than repeating these themes in each stage, they are listed here with a brief explanation of their meaning in the context of research data and research data management (RDM). Following the explanatory narrative, the specific lifecycle stages/topics/subtopics in which each theme appears are shown in tabular form.

In most cases, the overarching themes are supported by explicit references in the framework. In other cases, the themes are implicit. For example, the cost implications and sustainability theme touches on every topic and subtopic even though it is not called out in any lifecycle stage: there is a financial implication to every decision and action taken by those working with research data in any capacity. Note that while these 14 themes emerge from the general definitions of the topics and subtopics, other themes may emerge when the scope of RDM is considered from the perspective of a specific individual or organization. Such custom themes can serve an additional organizing function for job roles, tasks, and other activities represented by the topics and subtopics in the framework.

Separate tables generated for each overarching theme document the topics and subtopics most closely associated with that theme (see Tables 7-20 below). There are also two graphics that provide summary information. Figure 3 is a Sankey diagram that visualizes the relationship between each lifecycle stage and each overarching theme. Figure 4 is a matrix table that gives a high-level overview of the relationships between the overarching themes and the topics for each lifecycle stage. (Some of the overarching theme names in Figs. 3 and 4 have been truncated or abbreviated for visualization purposes.)


Fig. 3 — Sankey diagram of the relationships between lifecycle stages and overarching themes


Fig. 4 — Matrix diagram of topics and overarching themes
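
Readers who want to reproduce a Fig. 3-style view for their own selections can assemble a Sankey diagram with a plotting library such as Plotly; the stages, themes, and link weights below are invented placeholders, not values from the RDaF tables.

```python
import plotly.graph_objects as go

stages = ["Envision", "Plan"]
themes = ["Community engagement", "Data quality"]
labels = stages + themes

# Link weights might be, e.g., the number of subtopics associating a
# stage with a theme; these values are placeholders.
link = dict(
    source=[0, 0, 1, 1],  # indices into labels (stages)
    target=[2, 3, 2, 3],  # indices into labels (themes)
    value=[3, 1, 2, 4],
)

fig = go.Figure(go.Sankey(node=dict(label=labels), link=link))
fig.write_html("rdaf_sankey.html")  # open in a browser to view
```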

4.1 Community Engagement

Community engagement, typically broader for RDM practices and more focused for research data projects, is an intentional set of approaches for both listening to and communicating with stakeholders. Successful research, data management, and data curation come from strong engagement with the community of practice or discipline and the organization in which the research is conducted. Community engagement is present in all the RDaF lifecycle stages, although there is an emphasis on it within the Envision and Plan stages. Engagement with stakeholders early in the research process may result in stronger outcomes and uptake of new research. In the other four lifecycle stages, stakeholder engagement is essential for accomplishing the goals established at the beginning of a research project.

Table 7 lists the topics and subtopics that are most relevant to the overarching theme of community engagement.

Table 7. Community engagement (overarching theme)

4.2 Cost Implications and Sustainability

Cost implications and sustainability is a theme that touches every lifecycle stage and most stakeholders in the research ecosystem. From Chief Data Officers and provosts to researchers and grant administrators, cost is a constant focus of all individuals’ work in public and private organizations. Administrators and C-suite officers would typically focus their efforts on the stages of Envision and Plan, while researchers, particularly those with curation duties and service provision, have more impact on the cost implications in the Generate/Acquire, Process/Analyze, Share/Use/Reuse, and Preserve/Discard stages.

Sustainability in research and RDM means sustainable funding, staffing, and preservation models as applied to research data. It is imperative that sustainability plans for these three areas be assessed as the areas are developed and maintained, to prevent institutions and users from losing access to valuable datasets.

Table 8 lists the topics and subtopics that are most relevant to the overarching theme of cost implications and sustainability.

Table 8. Cost implications and sustainability (overarching theme)

4.3 Culture

Culture is the basis for the entirety of a given organization’s success in managing research data and in nearly every other aspect of running a collective enterprise; culture is what gives an institution or organization its character and consistency over time. Cultures are firmly embedded and stem from both informal practices and formal written policies, which can make them difficult to change. Culture shapes norms within an organization and creates glide paths towards ingrained values and behaviors as well as resistance to others. Specifically, culture dictates how research data are valued or supported in an institution.

Table 9 lists the topics and subtopics that are most relevant to the overarching theme of culture.

Table 9. Culture (overarching theme)

4.4 Curation and Stewardship

The processes and procedures to make research data shareable and reusable are typically referred to as curation and stewardship. Both curation and stewardship, and the job roles that are responsible for them, aim to collect, manage, preserve, and promote research data over their lifecycles. Curation is often performed by librarians and others outside of a laboratory or research group, while data stewards tend to work with a specific research group, lab, or department (i.e., a specific discipline), so that they are embedded in research projects from the onset of the Plan lifecycle stage. Because curators tend to work outside of labs, they are typically engaged in research projects much later, during the Share/Use/Reuse stage, which may introduce complications. The curation and stewardship theme implicitly touches each lifecycle stage.

Table 10 lists the topics and subtopics that are most relevant to the overarching theme of curation and stewardship.

Table 10. Curation and stewardship (overarching theme)

4.5 Data Quality

Data quality directly impacts a dataset’s fitness for purpose, usability, and reusability. All parties involved in every stage of a dataset’s lifecycle should be cognizant of data quality. The CODATA Research Data Management Terminology [5] definition of data quality includes the following attributes: accuracy, completeness, update status, relevance, consistency across data sources, reliability, appropriate presentation, and accessibility. Assessment of data quality is not a single process, but rather a series of actions that, over the lifetime of a dataset, collectively assure the greatest degree of quality.

Table 11 lists the topics and subtopics that are most relevant to the overarching theme of data quality.

Table 11. Data quality (overarching theme)

4.6 Data Standards

Data standards, both discipline-specific (e.g., Darwin Core [255] or NeXus [256]) and general (e.g., PREMIS [257] or schema.org [258]), are implemented by researchers to make their datasets both more FAIR and of higher quality. Researchers may use formal (e.g., ISO [259] or ANSI [260]) or de facto (e.g., DataCite [209]) standards for their research community. Use of data standards ensures consistency within a discipline and can reduce cost by decreasing the likelihood that data will have to be created again. Data standards are called out in every lifecycle stage except Envision.
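
As a small hypothetical example of what a general standard enables, the snippet below emits a minimal schema.org Dataset description; the dataset name, DOI, and other values are placeholders, and real records would carry many more fields.

```python
import json

record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example measurement dataset",             # placeholder
    "identifier": "https://doi.org/10.0000/example",   # placeholder DOI
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "keywords": ["example", "illustration"],
}
print(json.dumps(record, indent=2))
```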

Table 12 lists the topics and subtopics that are most relevant to the overarching theme of data standards.

Table 12. Data standards (overarching theme)

4.7 Diversity, Equity, Inclusion, and Accessibility

Diversity, equity, inclusion, and accessibility (DEIA) is a broad theme covering important social and cultural aspects of a research enterprise. Efforts in DEIA center on growing the sense of belonging for everyone in every laboratory, research group, department, or institution. Research data practices are not immune to biases, and historical disadvantages must often be addressed through intentional action. DEIA is important not just for members of underrepresented and marginalized groups, but for the integrity of the research process as a whole. More inclusive research tends to be more rigorous as it introduces different perspectives that enable more complete and broader interpretations of research data. Given the typical challenges associated with cultural changes within an institution, DEIA efforts must be embedded throughout the research data management lifecycle to maximize their effectiveness.

Table 13 lists the topics and subtopics that are most relevant to the overarching theme of diversity, equity, inclusion, and accessibility.

Table 13. Diversity, equity, inclusion, and accessibility (overarching theme)

4.8 Ethics, Trust, and the CARE Principles

Ethics, trust, and the CARE principles encompass the ethical generation, analysis, use, reuse, sharing, disposal, and preservation of data and are pillars of responsible research that are called out throughout the framework. The phrase “as open as possible, as closed as necessary” [261] comes to mind when working through the ethical implications of sharing data. While ethical choices are often made at the Share/Use/Reuse lifecycle stage, questions and concerns regarding the generation or collection of data are likely to be examined by an institutional or ethics review board and must be considered in the Plan stage. In the Preserve/Discard stage, it is essential to comply with preservation and disposition standards. While the subtopics in the framework are a starting point for understanding how ethics touches every aspect of the research data lifecycle, it is also important that a project be securely grounded in the practices of a given discipline; for example, the standards for historical research will differ from those for economic or healthcare research.

Trust is a factor across the framework and is the basis for relationships between data producers and users, the funding agencies that support projects, and the institutions that host research. Specific populations also raise particular ethical considerations; for example, the CARE Principles for Indigenous Data Governance are quickly becoming the standard for working with indigenous data worldwide [262].

Table 14 lists the topics and subtopics that are most relevant to the overarching theme of ethics, trust, and the CARE principles.

Table 14. Ethics, trust, and the CARE principles (overarching theme)

4.9 Legal Considerations

As much as technical capabilities structure the ways in which data can be gathered, created, published, and preserved, legal considerations constrain and channel the research data lifecycle. Laws form the background rules governing how data can be managed and shared. Legal considerations can be complex, as they are context-specific, hierarchical, and change over time. They typically vary by sector (e.g., healthcare, finance, education, and public government) and by geographic location (e.g., municipal, regional, national, and international), and are often subject to interpretation. Institutions that share data often use contracts and agreements that rely upon the legal system to order and enforce the terms therein. Laws sometimes restrict access, especially for categories of sensitive data such as personally identifiable information, certain types of healthcare information, and business identifiable information. However, laws can also enable data sharing by providing clear guidelines or directives to provide open data when it is in the public interest. Though legal considerations appear in most of the six lifecycle stages, meticulous planning and preparation make any constraints and compliance with policy requirements less onerous.

Table 15 lists the topics and subtopics that are most relevant to the overarching theme of legal considerations.

Table 15. Legal considerations (overarching theme)

4.10 Metadata and Provenance

Metadata and provenance comprise the information about a dataset that defines, describes, and links the dataset to other datasets and provides contextualization of the dataset [91]. Metadata are essential to the effective use, reuse, and preservation of research data over time. In the Envision and Plan stages, metadata support legal and regulatory compliance, and are a consideration in planning data outputs and resources.

The table below shows each topic/subtopic that mentions or covers metadata. While the final lifecycle stage (Preserve/Discard) does not explicitly relate to metadata, the existence of descriptive and other metadata is imperative to this stage. The robustness of metadata for a file or dataset determines the level of curation needed for preservation and use: richer metadata allow for better findability, interoperability, and reuse in support of the FAIR data principles, while less robust metadata make all these activities more difficult and time intensive. Poor-quality metadata can render an otherwise important dataset unusable when the creator of the dataset is no longer available.

Included in the metadata theme is provenance, the historical information concerning the data [41]. Understanding the provenance of a given dataset, including metadata on the experimental conditions used to generate the data, is essential for many disciplines. Without proper provenance documentation, it is difficult to assess the quality and reliability of the data and to publish them with correct metadata. Provenance can be used as a criterion for preservation.

Table 16 lists the topics and subtopics that are most relevant to the overarching theme of metadata and provenance.

Table 16. Metadata and provenance (overarching theme)

4.11 Reproducibility and the FAIR Data Principles

Touching many of the lifecycle stages are reproducibility and the FAIR data principles, which are findability, accessibility, interoperability, and reusability. Reproducible research yields data that can be replicated by the author or other researchers using only information provided in the original work [84]. Standards for reproducibility differ by research discipline, but typically the metadata and other contextual information needed for reproducibility are similar to those described by the FAIR data principles [33]. These community-based principles have come to define, for many disciplines, the state to which a published dataset should aspire. If researchers keep the principles of findability, accessibility, interoperability, and reusability in mind while planning a project and collecting data, the data will be ready for broader reuse when they are publicly released. Extensions of the FAIR data principles also exist, such as FAIRER, which adds Ethical and Revisable to the base principles [263].

Table 17 lists the topics and subtopics that are most relevant to the overarching theme of reproducibility and the FAIR data principles.

Table 17. Reproducibility and the FAIR data principles (overarching theme)

4.12 Security and Privacy

Digital data are designed to be easily shared, copied, and transformed, but their mobility can make privacy and security difficult to ensure. Security and privacy issues are fundamentally about trust, both in the institutions and systems that facilitate collection, storage, and transfer of data and in the individuals within those institutions. Proper protocols, rationally based on the need to protect vulnerable populations or sensitive information, or stemming from common understandings of security needs, promote trust, which can enable greater data mobility. In the European Union, organizations that collect, store, or hold personal data must comply with the General Data Protection Regulation [264]. The U.S. does not have such a universal regulation, though various federal laws govern different sectors and types of data, and some states have their own additional regulations. Security and privacy issues arise in the Envision and Plan lifecycle stages, are folded into the day-to-day procedures for handling and accessing data, and appear again in the Share/Use/Reuse lifecycle stage.

Table 18 lists the topics and subtopics that are most relevant to the overarching theme of security and privacy.

Table 18. Security and privacy (overarching theme)

Topics and subtopics associated with this theme include:

Data Governance—Strategic/Qualitative: Data management organization; Organizational values, including DEIA

Data Governance—Legal and Regulatory Compliance: Safety and security assurance

Education and Workforce Development: Workforce skills inventory

Data Architecture: Hosting and storage, cloud storage

Hardware and Software Infrastructure: Security and privacy considerations

Access Control Associated with Data Sensitivity: Identification of responsible parties for access management; Ease of maintenance and implementation of records; Regulatory compliance; Sensitive data/PII

Antecedents and Consequences of App Update: An Integrated Research Framework

  • Conference paper
  • First Online: 04 September 2018


  • Hengqi Tian
  • Jing Zhao

Part of the book series: Lecture Notes in Business Information Processing (LNBIP, volume 328)

Included in the following conference series:

  • Workshop on E-Business


E-commerce firms now compete intensively on mobile applications (apps). The transparency of the digital environment has made customers and competitors major external driving forces of app updates. However, app-related studies mainly focus on how to succeed in the hyper-competitive app market and how platform governance influences app evolution, overlooking the interaction among customers, competitors, and the focal firm that shapes continuous app updates. Moreover, extant studies on app updates have drawn inconsistent conclusions regarding the impact of update frequency on market performance. We therefore propose an integrated research framework to explore the antecedents and consequences of app updates. We test it empirically by tracking customer reviews, update notes, and ranks of 20 iOS apps in the travel category in China over 60 months. The results indicate that extreme sentiment expressed by customers urges the focal firm to update frequently, and that the focal firm incorporates useful customer feedback when releasing a major update. Interestingly, we find that the focal firm is reluctant to release superfluous updates and to perform major updates if more high-ranking competitors have updated earlier. Our findings also attest to the dual role of the total number of apps the focal firm owns: it facilitates update frequency and volume while constraining the days between two subsequent releases. Lastly, frequent updates induce a higher degree of rank volatility, while long update intervals decrease ranks. Our study has important implications for firms seeking to succeed in the fierce competition in mobile commerce.




Acknowledgments

This research has been supported by grants from the National Natural Science Foundation of China under Grants 71372174 and 71702176 and by the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan), under Grant G1323541816.

Author information

Authors and Affiliations

Research Center for Digital Business Management, School of Economics and Management, China University of Geosciences, Wuhan, 430074, People’s Republic of China

Hengqi Tian & Jing Zhao


Corresponding author

Correspondence to Jing Zhao.

Editor information

Editors and Affiliations

W. Cho, University of Seoul, Seoul, Korea (Republic of); M. Fan, University of Washington, Seattle, WA, USA; Michael J. Shaw, University of Illinois, Urbana-Champaign, IL, USA; Byungjoon Yoo, Seoul National University, Seoul, Korea (Republic of); H. Zhang, Georgia Institute of Technology, Atlanta, GA, USA


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper.

Tian, H., Zhao, J. (2018). Antecedents and Consequences of App Update: An Integrated Research Framework. In: Cho, W., Fan, M., Shaw, M., Yoo, B., Zhang, H. (eds) Digital Transformation: Challenges and Opportunities. WEB 2017. Lecture Notes in Business Information Processing, vol 328. Springer, Cham. https://doi.org/10.1007/978-3-319-99936-4_6


DOI: https://doi.org/10.1007/978-3-319-99936-4_6

Published: 04 September 2018

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-99935-7

Online ISBN: 978-3-319-99936-4


NIH Simplified Peer Review Framework for Research Project Grants (RPG): Implementation and Impact on Funding Opportunities

Wednesday, April 17, 2024, 1:00–2:00 p.m. ET. This event has concluded.

The National Institutes of Health (NIH) is simplifying the framework for the peer review of most Research Project Grant (RPG) applications, effective for due dates on or after January 25, 2025. These changes are designed to address the complexity of the peer review process and mitigate potential bias. Make plans to hear the latest updates, timelines, and how these changes will impact existing and new funding opportunities. A Q&A with NIH experts will follow the presentation to address additional questions.

PAST EVENT: Resources available below.


Virtual Event Overview

Date: Wednesday, April 17, 2024. Time: 1:00–2:00 p.m. ET.

Presentation Resources:

  • PPT: NIH Simplified Review Framework for Research Project Grants (RPGs): Implementation and Impact on Funding Opportunities (PPT)
  • Transcript: NIH Simplified Review Framework for Research Project Grants (RPGs): Implementation and Impact on Funding Opportunities Transcript (Word)
  • Accessible Video: Video includes captions and ASL interpreter (YouTube)

Related Resources

  • Simplifying Review of Research Project Grant Applications
  • Previous Webinar (11/3/2023): Online Briefing on NIH's Simplified Peer Review Framework for NIH Research Project Grant (RPG) Applications and Impact to New and Existing Funding Opportunities
  • Video Recording (YouTube)
  • Transcript (PDF)

Agenda Format

  • Introduction
  • Overview of changes
  • Live Q&A with NIH Policy Experts


Erica Brown, Ph.D.

Director, Division of Extramural Activities (DEA), National Institute of General Medical Sciences (NIGMS); NIH Simplified Review Framework Implementation Executive Committee Member

[email protected]


Mark Caprara, Ph.D.

Chief, Molecular and Cellular Sciences and Technologies (MCST), Center for Scientific Review (CSR); NIH Simplified Review Framework Implementation Executive Committee Co-Chair

[email protected]


Megan Columbus (Moderator)

Director, Division of Communication and Outreach (DCO), Office of Extramural Research (OER); NIH Simplified Review Framework Implementation Executive Committee Member

[email protected]


Contact: Requests for reasonable accommodation and/or questions related to this event should be submitted no less than three business days before the event to: [email protected] .

Erica Brown, Ph.D., serves as the director of the Division of Extramural Activities (DEA) at the National Institute of General Medical Sciences (NIGMS). In this position, she oversees grant-related activities of the Institute, including grants policies and procedures; the development of funding opportunities; and the receipt, referral, review, and fiscal management of grants.

Dr. Brown joined NIGMS in 2017 as the DEA deputy director. Prior to that, she served as director of the NIH Guide to Grants and Contracts in the NIH Office of Extramural Research (OER), providing leadership and management of the publication of notices of funding opportunities and other notices in the NIH Guide, and ensuring that all announcements complied with applicable policies, regulations, and laws. While in OER, she also served as the director of the NIH Academic Research Enhancement Award (AREA) program and the coordinator of the NIH conference grant program. Before joining OER, she served as a scientific review officer at the National Institute of Allergy and Infectious Diseases.

Dr. Brown earned her B.S. in biochemistry at Elizabethtown College in Pennsylvania and her Ph.D. in microbiology and immunology at the Wake Forest University School of Medicine in North Carolina.

Mark Caprara, Ph.D.

Dr. Mark Caprara serves as Chief of the Molecular and Cellular Sciences and Technologies Review Branch (MCST RB).

After receiving his Ph.D. in biology from Temple University, he carried out postdoctoral training in the Institute for Cellular and Molecular Biology at the University of Texas at Austin. He went on to Case Western Reserve University in Cleveland, Ohio, where he was an assistant professor carrying out research on structural/functional relationships of proteins involved in the regulation of RNA processing. In addition, his lab carried out research on mobile genetic elements.

Megan Columbus

As Communications Director for the NIH Office of Extramural Research, Ms. Megan Columbus is responsible for leading strategic planning and communication activities pertinent to the management of NIH’s extramural program. She enjoys connecting scientists and administrators to information and tools in support of their research programs, helping the broader public learn how NIH-supported research contributes to health advances, and supporting the ongoing dialog between NIH and the research community. Ms. Columbus’ office is responsible for the NIH Grants and Funding website, the NIH Guide to Grants and Contracts, the Extramural Nexus newsletter and “Open Mike” blog, eRA system communications, extramural staff training, media and legislative relations, and a host of other resources. She especially enjoys her outreach responsibilities that involve more personal engagement with the NIH extramural research community, such as live webinars and the NIH Virtual Conferences.


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Raftery J, Hanney S, Greenhalgh T, et al. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Southampton (UK): NIHR Journals Library; 2016 Oct. (Health Technology Assessment, No. 20.76.)


Chapter 3: Updated systematic review

The purpose of the current review was to update the previous review, 2 including a summary of the range of approaches used in health research impact assessment, and to collate the quantitative findings from studies assessing the impact of multiproject programmes. First, we present a summary of the literature that is reported in the large number of studies. Second, we describe 20 conceptual frameworks, or approaches that are the most commonly used and/or have the most relevance for assessing the impact of programmes such as the HTA programme. Third, we briefly compare the 20 frameworks. Fourth, we discuss the methods used in the various studies, and describe a range of techniques that are evolving. Fifth, we collate the quantitative findings from studies assessing the impact of multiproject programmes, such as the HTA programme, and analyse the findings in light of the full body of evolving literature.

  • Review findings

The number of papers identified through each source is set out in Table 1 . A total of 513 records were identified, of which 161 were eligible; databases directly identified only 40 of these 161 (see Appendix 3 , Table 14 , for a brief summary of each of the 161 references) ( Figure 1 ).

TABLE 1

Type of sources used to identify relevant literature

FIGURE 1. Flow diagram of identified studies.

  • Summary of the literature identified

From the initial searching and application of the inclusion criteria, the number of publications identified this time was approximately three times the 46 included in the ‘body of evidence’ for the 2007 review. 2 Using wider criteria, we ended up with a list of 161.

We classified 51 as conceptual/methodological papers (including reviews), 54 as application papers and 56 as both conceptual and application papers (these are classified and reported in Appendix 3 , Table 14 , under column ‘Type’). The 51 conceptual and methodological papers not only reflect an increase in the discussion about appropriate frameworks to use but also reflect the wider criteria used in the extension to the update, including some pre-2005 publications. Thus, a simple comparison between the 51 conceptual papers in the update and the five in the previous review would not be appropriate.

The papers come predominantly from four English-speaking nations (Australia, Canada, the UK and the USA), with clusters from the Netherlands and Catalonia/Spain. We also identified an increasing number of health research impact assessment studies from individual low- and middle-income countries, as well as many covering more than one country, including European Union ( EU ) programmes and international development initiatives.

Some of the studies on this topic are published in the ‘grey literature’, which probably means they are even more likely to be published in local languages than they would be if they were in the peer-reviewed literature. This exacerbates the bias towards publications from English-speaking nations that arises from including publications only if they are available in English.

Appendix 3 (see Table 14 ) lists the 161 included studies with a brief summary of each. We note basic data such as lead author, year, type of study (method, application, or both) and country. The last item has become more complicated with the increase in the range of studies conducted. We prioritised the location of the research in which impact was assessed rather than the location of the team conducting the impact assessment. Similarly, for reviews or other studies intended to inform the approach taken in a particular country, it is important to identify the location of the commissioner of the review, if different from the team conducting the study. We also recorded the programme/specialism of the research in which impact was assessed, and the conceptual frameworks and methods used to conduct the assessment. A further column covers the impacts examined and a brief account of the findings. The final column offers comments, and quotes, where appropriate, on the strengths and weaknesses of the impact assessment and factors associated with achieving impact.

We also identified a range of papers that were of some interest for the review, but the papers did not sufficiently meet the inclusion criteria (see Appendix 4 for further details of these papers).

The included studies demonstrate that the diversity and complexity of the field have intensified. It has long been recognised that research might be used in many ways, even in relation to just one impact category, such as informing policy-making. 34 , 35 Within any one impact assessment, there can be many different ways and circumstances in which research from a single programme might be used. Furthermore, as a detailed analysis of one of the case studies described in Wooding et al. 36 illustrated, even a single project or stream of research might make an impact in various ways, some relying on interaction between the research team and potential users and some through other routes.

The diversity in the approaches is also linked to the different types of research (basic, clinical, health services research, etc.) and fields, the various modes of research funding (responsive, commissioned, core funding, research training), and the diverse purposes and audiences for impact assessments. These are considered at various points in this review.

The 51 conceptual/methodological papers in Table 14 (see Appendix 3 ) illustrate the diversity. Some of these 51 papers developed new conceptual frameworks and some reviewed empirical studies and used the review to propose new approaches. Others analysed existing frameworks trying to identify the most appropriate frameworks for particular purposes. RAND Europe conducted one of the major streams of such review work. These reviews include background material informing the framework for the Canadian Academy of Health Sciences ( CAHS ), 37 an analysis commissioned by the HEFCE to inform the REF , 38 and a review commissioned by the Association of American Medical Colleges. 9

Such reviews represent major advances in the analysis of methods and conceptual frameworks, and each compares a range of approaches. They often focus on a relatively small number of major approaches. Although Guthrie et al. 9 identified 21 frameworks, many are not health specific and they vary in how far the assessment of impact features in the broader research evaluation frameworks.

Our starting position was different, and aimed to complement this stream of review work. We collated and reviewed a much wider range of empirical studies, in addition to the methodological papers. We not only identified the impacts assessed, but also considered the findings from empirical studies, both to learn what they might tell us about approaches to assessing research impact in practice and also to provide a context for the assessment of the second decade of the HTA programme.

In selecting the conceptual frameworks and methods on which to focus, we thought it was important to reflect the diversity in the field as far as possible, but at the same time focus on analysis of approaches likely to be of greatest relevance for assessing the impact of programmes such as the HTA programme.

  • Conceptual frameworks developed and/or used

We identified a wider range of conceptual frameworks than in the previous review. How the 20 frameworks were used can be seen later (see Table 2 ). We have grouped the discussion of conceptual frameworks into three main sections. The data are presented in ways that allow analysis from several perspectives. First, we present a historical analysis that helps to identify which frameworks have developed from those included in the 2007 review. Second, we order the frameworks by the level of aggregation at which they can be applied. Having briefly introduced each of the frameworks we then present them in tabular form under headings, such as the methods used, impacts assessed, strengths and weaknesses. Finally, in our analysis comparing the frameworks we locate each one on a figure with two dimensions: categories of impacts assessed and focus/level of aggregation at which the framework has primarily been applied.

TABLE 2

Empirical studies using the 20 selected frameworks/approaches

The three main groups of frameworks are:

  • Post-2005 application, and further development, of frameworks described in the 2007 review, and reported in the order first reported in 2007 (five frameworks).
  • Additional frameworks or approaches applied to assess the impact of programmes of health research, and mostly developed since 2005 (13 frameworks). (These are broadly ordered according to the focus of the assessment, starting with frameworks that are primarily used to assess the impact from the programmes of research of specific funders, then frameworks that are more relevant for the work of individual researchers and, finally, approaches for the work of centres or research groups.)
  • Recent generic approaches to research impact developed and applied in the UK at a high level of aggregation, namely regular monitoring of impacts [e.g. via researchfish ® (researchfish Ltd, Cambridge, UK)] and the REF (two frameworks or approaches).

Post-2005 applications of frameworks described in the 2007 review

Five are listed as follows:

  • the Payback Framework 39
  • monetary value approaches to estimating returns from research (i.e. return on investment, cost–benefit analysis, or estimated cost savings)
  • the approach of the Royal Netherlands Academy of Arts and Sciences (2002) 40
  • a combination of the frameworks originally developed in the project funded by the UK’s Economic and Social Research Council ( ESRC ) on the non-academic impact of socioeconomic research 41 and in the Netherlands in 1994 42 [this became the Social Impact Assessment Methods through the study of Productive Interactions ( SIAMPI )]
  • detailed case studies and follow-up analysis on HTA policy impacts and cost savings: Quebec Council of Health Care Technology assessments ( CETS ). 43 , 44

The Payback Framework

The Payback Framework consists of two main elements: a multidimensional categorisation of benefits and a model to organise the assessment of impacts. The five main payback categories reflect the range of benefits from health research, from knowledge production through to the wider social benefits of informing policy development, and improved health and economy. This categorisation, which has evolved, is shown in Box 1 .

Example of the multidimensional categorisation of paybacks of the Payback Framework

Although a detailed account of the various impact categories is available elsewhere, 2 key recent aspects of the framework’s evolution relate to headings number 2 and 5 in Box 1 .

In the ‘Benefits to future research and research use’ category, the subcategory termed ‘A critical capacity to absorb and appropriately utilise existing research, including that from overseas’ had proven difficult to operationalise in applications of the Payback Framework. However, a more recent evidence synthesis 46 incorporated this concept into a wider analysis of the benefits to health-care performance that might arise when clinicians and organisations engage in research. Although the evidence base is disparate, a range of studies was identified suggesting that, when clinicians and health-care organisations engaged in research, there was a likelihood of improved health-care performance. Identification of the mechanisms through which this occurs contributes to the understanding of how impacts might arise, and increases the validity of some of the findings from payback studies in which researchers claim that research is making an impact on clinical behaviour in their local health-care systems.

In the ‘Broader economic benefits’ category, recent developments emphasise approaches that monetise the health gains per se from research, rather than assessing the economic benefits from research in terms of valuing the gains from a healthy workforce. 26 Nason et al. 47 applied the Payback Framework in a way that highlighted the economic benefits category and identified various subcategories.

The payback model is intended to assist the assessment of impact and is not intended necessarily to be a model of how impact arises. It consists of seven stages and two interfaces between the research system and the wider environment, with feedback and also the level of permeability at the interfaces being key issues: developments do not necessarily flow smoothly, or even at all, from one stage to the next ( Figure 2 ).

The Payback Framework: model for organising the assessment of the outcomes of health research. Reproduced with permission.
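Figure 2 itself is not rendered in this text. As an orientation aid, the sketch below enumerates the seven stages and two interfaces as they are commonly presented in published versions of the Payback Framework; the exact stage labels are an assumption based on that wider literature, not taken from this document.

    # Stages and interfaces of the Payback model, as commonly presented in
    # published versions of the framework (labels assumed; the figure is
    # not rendered here): seven stages and two interfaces, with feedback.
    PAYBACK_MODEL = (
        "Stage 0: Topic/issue identification",
        "Interface A: Project specification and selection",
        "Stage 1: Inputs to research",
        "Stage 2: Research processes",
        "Stage 3: Primary outputs from research",
        "Interface B: Dissemination",
        "Stage 4: Secondary outputs (policy-making; product development)",
        "Stage 5: Adoption (by practitioners and public)",
        "Stage 6: Final outcomes",
    )
    for step in PAYBACK_MODEL:
        print(step)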

As noted in the 2007 review, 2 although the framework is presented as an ‘input–output model’, it ‘also captures many of the characteristics of earlier models of research utilisation’ such as those of Weiss 34 and Kogan and Henkel. 49 The framework recognises that research might be utilised in various ways. It was devised to assess the impact of the Department of Health/NHS programme of research, a programme whose development was informed by Kogan and Henkel’s earlier analysis of the department’s research and development. 49 That analysis had promoted the idea that collaboration between potential users and researchers was important in encouraging the commissioning of research that was more likely to make an impact. The development of the Payback Framework was partly a joint enterprise between the Department of Health and the Health Economics Research Group. 50 The inclusion in the updated review of the findings from applying the framework to the assessment of the first decade of the HTA programme illustrates the context to which the framework seems best suited.

The conceptual framework informs the methods used in an application; hence, documentary analysis, surveys and case study interview schedules are all structured according to the framework, which is also used to organise the data analysis and present case studies in a consistent format. The various elements were devised both to reflect and capture the realities of the diverse ways in which impact arises, including as a product of interaction between researchers and potential users at agenda-setting and other stages. The emphasis on examining the state of the knowledge reservoir at the time of research commissioning enables some evidence to be gathered that might help explore issues of attribution, and possibly the counterfactual, because it forces consideration of whatever other work might have been going on in the relevant field.

One of the limitations of the Payback Framework, and various other frameworks, arises because of the focus on single projects as the unit of analysis, when it is often argued that many advances in health care should be attributed to a body of work. This ‘project fallacy’ is widely noted, including by many who apply the framework. In some studies applying the framework, for example to the research funded by Asthma UK, 51 the problem was acknowledged in the way in which case studies that started with a focus on a single project were expanded to cover streams of work. Although some studies have been able to apply a version of the framework to demonstrate considerable impact from single studies, 52 this has tended to be in particular types of research – in this case, intervention studies.

Some studies applied the framework in new ways, as noted in Table 14 (see Appendix 3 ). This might lead to welcome innovation, but also to applications that do not recognise the importance of features such as the interfaces between the research system and the wider environment and the desirability of capturing aspects such as the level of interaction prior to research commissioning.

Despite the challenges in application, 27 of our 110 empirical studies published since 2005 2 , 36 , 47 , 51 – 74 claim their framework is based either substantially or partly on the Payback Framework ( Table 2 ).

In addition, the Payback Framework also informed the development of several other frameworks, especially the framework from the CAHS . 7 Furthermore, the framework based on the review by Banzi et al. 4 built on both the Payback Framework and the CAHS’s Payback Framework. The Payback Framework also contributed to the development, by Engel-Cox et al. , 63 of the National Institute of Environmental Health Sciences ( NIEHS ) framework.

Monetary value approaches to estimating returns from research (i.e. return on investment, cost–benefit analysis or estimated cost savings)

These approaches differ in the scope of the impacts that are valued and the valuation method adopted. In particular, since 2007 further methods have been developed that apply a value to, or monetise, the health gain resulting from research. Much of this work assesses the impacts of national portfolios of research, and is thus at a higher level of aggregation than that of a programme of research. Most of the studies of this are, therefore, not included here in Chapter 3 , but are described in Chapter 5 , which looks specifically at such developments. Nevertheless, three studies 25 , 27 , 75 from this stream do assess the value of a programme of work and so are included in the update. Of the three, Guthrie et al. 27 and Johnston et al. 75 are the clearest applications of this approach to specific research programmes.
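Whatever the scope, these monetary approaches reduce to comparing a discounted stream of valued health gains against research costs. A minimal sketch follows; the discount rate, the value placed on a QALY, and the yearly figures are hypothetical assumptions, not numbers from the cited studies.

    # Illustrative net-present-value calculation for a research programme:
    # monetise health gains (QALYs x an assumed value per QALY) and discount
    # net benefits back to year 0. All numbers are hypothetical.
    def npv(cashflows, rate):
        """Discounted sum of net cashflows indexed by year."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    VALUE_PER_QALY = 25_000                        # assumed valuation
    qalys_by_year = [0, 0, 40, 120, 200]           # hypothetical health gains
    costs_by_year = [1_500_000, 500_000, 0, 0, 0]  # hypothetical research spend

    net = [q * VALUE_PER_QALY - c for q, c in zip(qalys_by_year, costs_by_year)]
    print(f"NPV at 3.5%: {npv(net, 0.035):,.0f}")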

Furthermore, many econometric approaches to assessing research impact do not relate to the impact of specific programmes of research. However, an increasing number of frameworks have been developed that propose ways of collecting data from specific projects or programmes that can be built up to provide a broader picture of economic impacts. For example, Muir et al. 107 developed an approach for measuring the economic benefits from programmes of public research in Australia. Other work includes the development of frameworks by the UK department responsible for science: the Department for Business, Innovation and Skills ( BIS ) and, earlier, the Department for Innovation, Universities and Science 108 developed frameworks under which the department collects data on economic benefits from each research council’s programmes of research, including the MRC . 76 The impacts include patents, spin-offs and intellectual property income; data collection overlaps with the approach of regular collection of data from the MRC described below (see Regular monitoring or data collection ). 76

A further category in the BIS framework is data on the employment of research staff. The classification of such data as a category of impact is part of a wider trend, but is controversial. However, in political jurisdictions, such as Ireland 47 or Northern Ireland, 6 it might be appropriate to consider the increased employment that comes as a result of local expenditure of public funds leveraging additional research funds from other sources.

To varying degrees the assessment of economic impacts can form part of wider frameworks, including the Payback Framework, as in the two Irish examples above, and the VINNOVA approach described by Eriksen and Hervik 94 (see VINNOVA ).

The approach of the Royal Netherlands Academy of Arts and Sciences

The report from the Royal Netherlands Academy of Arts and Sciences 79 updated the evaluation framework previously used by the academy to assess research, not just impact, at the level of research organisations and groups or programmes. The approach combines self-evaluation and external peer review, including a site visit every 6 years. The report listed a range of specific measures, indicators or more qualitative approaches that might be used in self-evaluation. They included the long-term focus on the societal relevance of research, defined as ‘how research affects specific stakeholders or specific procedures in society (for example, protocols, laws and regulations)’. 79 The report proceeds to give the website for the Evaluating Research in Context ( ERiC ) project, which is described in Spaapen et al. 109 as being driven partly by the need, and/or opportunity, to develop methods to assist faculty in conducting the self-evaluation required under the assessment system for academic research in the Netherlands.

A combination of the frameworks originally developed in 2000 in the project funded by the UK’s Economic and Social Research Council on the non-academic impact of socioeconomic research and in the Netherlands in 1994 (this became the Social Impact Assessment Methods through the study of Productive Interactions)

In 2000, a team led by Molas-Gallart, 41 working on the project funded by the UK’s ESRC on the non-academic impact of socioeconomic research, developed an approach based on the interconnections of three major elements: the types of output expected from research; the channels through which their diffusion to non-academic actors occurs; and the forms of impact. Later the team combined forces with Spaapen, whose early work with Sylvain 42 on the societal quality of research had long been influential in the Netherlands, and, collectively, they led the SIAMPI approach. 110 This overlaps also with the development of the SciQuest method by Spaapen et al. 109 that came from the ERiC project described in The approach of the Royal Netherlands Academy of Arts and Sciences .

Its authors described SciQuest as a ‘fourth-generation’ approach to impact assessment. The previous three generations were characterised, they suggested, by measurement (e.g. an unenhanced logic model), description (e.g. the narrative accompanying a logic model) and judgement (e.g. an assessment of whether the impact was socially useful or not). The authors suggested that fourth-generation impact assessment is fundamentally a social, political and value-oriented activity and involves reflexivity on the part of researchers to identify and evaluate their own research goals and key relationships.

SciQuest methodology requires a detailed assessment of the research programme in context and the development of bespoke metrics (both qualitative and quantitative) to assess its interactions, outputs and outcomes. These are then presented in a unique research embedment and performance profile, visualised in a radar chart.

In addition to these two papers, 109 , 110 the study by Meijer 80 was partly informed by SIAMPI (see Appendix 3 ).

Detailed case studies and follow-up analysis on Health Technology Assessment policy impacts and cost savings: Quebec Council of Health Care Technology assessments

In the 2007 review, 2 we described a series of studies of the benefits from HTAs conducted by the CETS . 43 , 44 They conducted case studies based on documentary analysis and interviews, and developed a scoring system for an overall assessment of the impact on policy that went from 0 (no impact) to +++ (major impact). They also assessed the impact on costs. Bodeau-Livinec et al. 82 assessed the impact on policy of 13 HTAs conducted by the French Committee for the Assessment and Dissemination of Technological Innovations. Although they did not explicitly state that they were using a particular conceptual framework, their approach to scoring impact appears to follow the earlier studies of CETS in Quebec.

Zechmeister and Schumacher 83 assessed the impact of all HTA reports produced in Austria at the Institute for Technology Assessment and Ludwig Boltzmann Institute for HTA aimed at use before reimbursement decisions were made or decisions for disinvestment. Again, they developed their own methods, but the impact of these HTA reports was analysed partly by descriptive quantitative analysis of administrative data informed by the Quebec studies. 43 , 44

Additional frameworks or approaches applied to assess the impact of programmes of health research and mostly developed since 2005

Many other conceptual frameworks have been developed to assess the impacts from programmes of health research, mostly since 2005. Some studies have combined several approaches. Below we list 13 frameworks that have also been applied at least once. Some frameworks combine elements of existing frameworks, an approach recommended by Hansen et al. 111 This means that in the list of studies that have applied different conceptual frameworks (see Table 2 ), there are some inevitable overlaps. Scope exists for different interpretations of exactly how far a specific study does draw on a certain framework. An important consideration in deciding how much detail to give on each framework has been its perceived relevance for a programme such as the HTA programme.

The 13 conceptual frameworks are presented as follows: first, frameworks applicable to programmes that have funded multiple projects; second, frameworks devised for application by individual researchers; third, frameworks devised for application to groups of researchers or departments within an institution; and, finally, a generic evaluation approach that has been applied to assess the impact of a new type of funded programme. Inevitably, it is not this clear-cut and there are some hybrids.

Canadian Academy of Health Sciences

The CAHS established an international panel of experts, chaired by Cyril Frank, to make recommendations on the best way to assess the impact of health research. Its report, Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Research , 7 contained a main analysis, supported by a series of appendices by independent experts. The appendices discuss the most appropriate framework for different types of research and are analysed in Table 14 (see Appendix 3 ). 37 , 112 – 114

The CAHS framework was designed to track impacts from research through translation to end use. It also demonstrates how research influences feedback upstream and the potential effect on future research. It aims to capture specific impacts in multiple domains, at multiple levels and for a wide range of audiences. As noted in several of the appendices, it is based on the Buxton and Hanney Payback Framework (see Figure 2 ). 39 The framework tracks impacts under the following categories, which draw extensively on the Payback Framework: advancing knowledge; capacity building; informing decision-making; health impacts; and broader economic and social impacts. 7 , 115 The categories from the Payback Framework had already been adopted in Canada by the country’s main public funder of health research, the Canadian Institutes of Health Research, for use in assessing the payback from its research.

The main difference in the categorisation from that in the original Payback Framework is the substitution of ‘informing decision-making’ for ‘informing policy and product development’. The CAHS return on investment version 7 allows the categorisation to include decisions by both policy-makers and individual clinicians in the same category, whereas the Payback Framework distinguishes between policy changes and behavioural changes, and does not specifically include decisions by individual clinicians in the policy category. Therefore, the CAHS framework explicitly includes the collection of data about changes in clinical behaviour as a key impact category, but in studies applying the Payback Framework any assessments that can be made of behavioural changes by clinicians and/or the public in the adoption stage of the model help form the basis for an attempt to assess any health gain.

The CAHS ’s logic model framework also builds on a Payback logic model, and combines the five impact categories into the model showing specific areas and target audiences where health research impacts can be found, including the health industry, other industries, government and public information groups. It also recognises that the impacts, such as improvements in health and well-being, can arise in many ways, including through health-care access, prevention, treatment and the determinants of health.

The Canadian Institutes of Health Research divided its research portfolio into four pillars. Pillars I–IV cover the following areas: biomedical; clinical; health services; and social, cultural, environmental and population health. The CAHS team conducted detailed work to identify the impact from the different outputs arising in each of these areas.

The team also developed a menu of 66 indicators that could be collected. It was intended for use across Canada, and has been adopted by the Canadian Institutes of Health Research and in some of the provinces, for example by Alberta Innovates: Health Solutions ( AIHS ), the main Albertan public funder of health research. AIHS also further developed the framework into a specific version for their organisation and explored how it would be implemented and developed. Implementation had to do with standardising indicators across programmes to track progress to impact. It was developed to improve the organisation’s ability to assess its contributions to health systems impacts, in addition to the contributions of its grantees. 85 The CAHS framework has also been applied in Catalonia by the Catalan Agency for Health Information and Quality. 84

Banzi’s research impact model

Banzi et al. , 4 in a review of the literature on research impact assessment, identified the Payback Framework as the most frequently used approach. They presented the CAHS ’s payback approach in detail, including the five payback categories as listed above. Building on the CAHS report, Banzi et al. 4 set out a list of indicators for each domain of impact and a range of methods that could be used in impact assessment. The Banzi research impact model has been used as the organising framework for several detailed studies of programmes of research in Australia.

A number of the applications have suggested ways of trying to address some of the limitations noted in the earlier account of the Payback Framework. For example, the study by Laws et al. 88 applied the Banzi framework to assess the impact of a schools physical activity and nutrition survey in Australia. They found it difficult to attribute impacts to a single piece of research, particularly the longer-term impacts, and wondered whether or not the use of contribution mapping, as proposed by Kok and Schuit, may provide an alternative way forward (see Chapter 4 for a description of Kok and Schuit 116 ).

National Institute of Environmental Health Sciences’s logic model

The US NIEHS developed and applied a framework to assess the impact from the research and the researchers it funded. Engel-Cox et al. 63 developed the NIEHS logic framework and identified a range of outcomes by drawing on the Payback Framework and Bozeman’s public value mapping. 117 These outcomes included translation into policy, guidelines, improved allocation of resources and commercial development; new and improved products and processes; the incidence, magnitude and duration of social change; health and social welfare gain and national economic benefit from commercial exploitation and a healthy workforce; and environmental quality and sustainability. They added metrics for logic model components. The logic model is complex; in addition to the standard logic model components of inputs, activities, outputs and outcomes (short term, intermediate, long term), there are also four pathways: NIEHS and other government pathways, grantee institutions, business and industry, and community. The model also included the knowledge reservoir and contextual factors ( Figure 3 ).

The NIEHS’s logic model. Reproduced with permission from Environmental Health Perspectives .
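The components named above translate naturally into a simple record structure. The sketch below mirrors only what the text lists (the standard logic-model stages, the four pathways, the knowledge reservoir and contextual factors); the field names and types are illustrative, not NIEHS's own schema.

    # Skeleton record mirroring the NIEHS logic-model components named in
    # the text; field names are illustrative, not an official NIEHS schema.
    from dataclasses import dataclass, field

    PATHWAYS = (
        "NIEHS and other government",
        "grantee institutions",
        "business and industry",
        "community",
    )

    @dataclass
    class NIEHSLogicModelRecord:
        pathway: str
        inputs: list = field(default_factory=list)
        activities: list = field(default_factory=list)
        outputs: list = field(default_factory=list)
        outcomes: dict = field(default_factory=lambda: {
            "short term": [], "intermediate": [], "long term": []})
        knowledge_reservoir: list = field(default_factory=list)
        contextual_factors: list = field(default_factory=list)

    record = NIEHSLogicModelRecord(pathway=PATHWAYS[1])
    print(record.pathway)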

The various pathways allow a broader perspective to be developed than that of individual projects, for example by the grantee institution pathway, and by focusing on streams of research from multiple funders. Challenges identified in the initial case studies included ‘the lack of direct attribution of NIEHS -supported work to many of the outcome measures’. 63 The NIEHS put considerable effort into developing, testing and using the framework. Orians et al. 17 used it as an organising framework for a web-based survey of 1151 asthma researchers who received funding from NIEHS or comparison federal agencies from 1975 to 2005. Although considerable data were gathered, the authors noted that ‘this method does not support attribution of these outcomes to specific research activities nor to specific funding sources’. 17

Furthermore, Liebow et al. 91 were funded to tailor the logic model of the NIEHS ’s framework to inputs, outputs and outcomes of the NIEHS asthma portfolio. Data from existing National Institutes of Health databases were used and, in some cases, matched with public data from, for example, the US Food and Drug Administration website for the references in new drug applications, plus available bibliometric data and structured review of expert opinion stated in legislative hearings. Considerable progress was made that did not require any direct input from researchers. However, not all the pathways could be used, and they found that their aim of obtaining readily accessible, consistently organised indicator data could not, in general, be realised.

A further attempt was made to gather data from databases. Drew et al. 90 developed a high-impacts tracking system: ‘an innovative, Web-based application intended to capture and track short- and long-term research outputs and impacts’. It was informed by the stream of work from NIEHS , 17 , 63 but also by the Becker Library approach 118 and by the development in the UK of researchfish. The high-impacts tracking system imports much data from existing National Institutes of Health databases of grant information, in addition to the text of progress reports and notes of programme officers/managers.

This series of studies demonstrates both a substantial effort to develop an approach to assessing research impacts, and the difficulties encountered. The various attempts at application clearly suggest that the full logic model is difficult and too complex to apply as a whole. Although the stream of work has, nevertheless, had some influence on thinking beyond the NIEHS , apart from the in-house stream of work no further empirical studies were identified as claiming that their framework was based on the NIEHS’s logic model approach.

Medical research logic model (Weiss)

Anthony Weiss analysed ways of assessing health research impact, but, unlike many of the other approaches identified, his analysis was not undertaken in the context of aiming to develop an approach for any specific funding or research-conducting organisation. He drew on the United Way model 119 for measuring programme outcomes to develop a medical research logic model. As with standard logic models, it moves from inputs to activities, outputs and outcomes: initial, intermediate and long term. He also discussed various approaches that could be used, for example surveys of practitioners to track awareness of research findings; changes in guidelines, and education and training; and use of disability-adjusted life-years (DALYs) or quality-adjusted life-years (QALYs) to assess patient benefit. He also analysed a range of dimensions from the outputs, such as publications, through to clinician awareness, guidelines, implementation and overall patient well-being. 120
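Where QALYs are used to express patient benefit, the underlying arithmetic is simply life-years weighted by health-state utility and summed. A worked example follows; all durations and utility weights are hypothetical.

    # Minimal QALY-gain calculation: sum of (years lived x utility weight),
    # compared with and without an intervention. Numbers are hypothetical.
    def qalys(profile):
        """profile: iterable of (years, utility in [0, 1]) pairs."""
        return sum(years * utility for years, utility in profile)

    without_treatment = [(5, 0.6)]          # 5 years at utility 0.6 -> 3.0
    with_treatment = [(5, 0.8), (2, 0.7)]   # 4.0 + 1.4 -> 5.4
    print(round(qalys(with_treatment) - qalys(without_treatment), 2))  # 2.4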

Although this model was not developed for a specific organisation, it does overlap with the emphasis given to logic models in various frameworks and studies, including the W.K. Kellogg logic model. 121 Weiss’s account is included here because it has become quite high profile and is widely cited. It has informed a range of studies rather than being directly applied in empirical studies.

National Institute for Occupational Safety and Health’s logic model

Williams et al., 92 from the RAND Corporation in the USA, with advice from colleagues in RAND Europe, developed a logic model to assess the impact from the research funded by the National Institute for Occupational Safety and Health ( NIOSH ). At one level the basic structure of the logic model was a standard approach, as described by Weiss 120 and as in the logic model from W.K. Kellogg. 121 Its stages include inputs, activities, outputs, transfer, intermediate customers, intermediate outcomes, final customers and end outcomes.

A novel feature of the NIOSH model was outcome worksheets based on the historical tracing approach, 122 which reversed the order ‘articulated in the logic model and essentially places the burden on research programs to trace backward how specific outcomes were generated from research activities’. 92

The outcome worksheet was primarily designed as a practical tool to help NIOSH researchers think through the causal linkages between specific outcomes and research activities, determine the data needed to provide evidence of impact, and provide an organisational structure for the evidence. Williams et al. 92

The combination of historical tracing with a logic model is interesting because previously historical tracing has been more associated with identifying the impact made by different types of research (i.e. basic vs. clinical), irrespective of how they were funded, rather than contributing to the analysis of the impact from specific programmes of research.

The Wellcome Trust’s assessment framework

The Wellcome Trust’s assessment framework has six outcome measures and 12 indicators of success. 93 A range of qualitative and quantitative measures are linked to the indicators and are collected annually. A wide range of internal and external sources is drawn on, including end-of-grant forms. The evaluation team leads the information gathering and production of the report with contributions from many staff from across the trust.

‘The Assessment Framework Report predominantly describes outputs and achievements associated with trust activities though, where appropriate, inputs are also included where considered a major Indicator of Progress.’ 93 To complement the more quantitative and metric-based information contained in volume 1 of the Assessment Framework Report, volume 2 contains a series of research profiles that describe the story of a particular outcome or impact associated with Wellcome Trust funding. The Wellcome Trust research profiles are agreed with the researchers involved and validated by senior trust staff.

Although there is no specific overall framework, it is a comprehensive approach. This is another example of a major funder including impact in the annual collection of data about the work funded. On the one hand, the importance of case studies is highlighted: ‘Case studies and stories have gained increasing currency as tools to support impact evaluation’, 93 but, on the other hand, the report described an interest in also moving towards more regular data collection during the life of a project: ‘In future years, as the Trust further integrates its online grant progress reporting system throughout its funding activities . . . it will be easier to provide access to, and updates on grant-associated outputs throughout their lifecycle’. 93

VINNOVA

VINNOVA, the Swedish innovation agency, has been assessing the impact of its research funding for some time. The VINNOVA framework consists of two parts: an ongoing evaluation process and an impact analysis, as described in the review for CAHS by Brutscher et al. 37 The former defines the results and impact of a programme against which it can be evaluated. It allows the collection of data on various indicators. The impact analyses, the main element in the framework, are conducted to study the long-term impact of programmes or portfolios of research. There are various channels through which impacts arise, but each specific impact analysis can take a particular form.

The aim has been, as far as possible to quantify the effects in financial terms, or in terms of other physically measurable effects, and to highlight the contribution made by the research from the point of view of the innovation system. Eriksen and Hervik 94

This approach is a hybrid in that it does relate to a stream of research funded by a specific funder, but it is applied at the level of a single unit.

Flows of knowledge, expertise and influence

Meagher et al. 95 developed the ‘flows of knowledge, expertise and influence’ approach to assess the impact of ESRC -funded projects in the field of psychology research. As part of a major analysis of the ways in which research might make an impact, the authors pointed out that one limitation was that their study was on a collection of responsive-mode projects that, while sharing a common funder (i.e. the ESRC), had not been commissioned as a ‘programme’. This again makes the example more of a hybrid, and the study is described in more detail in Chapter 4 , but this is the only application of the approach that we identified in our search.

Research impact framework

The research impact framework ( RIF ) was developed at the London School of Hygiene and Tropical Medicine by Kuruvilla et al. , 123 who noted that researchers were increasingly required to describe the impact of their work, for example in grant proposals, project reports, press releases and research assessment exercises for which the researchers would be grouped into a department or unit within an organisation. They also thought that specialised impact assessment studies could be difficult to replicate and may require resources and skills not available to individual researchers. Researchers, they felt, were often hard-pressed to identify and describe research impacts, but ad hoc accounts do not facilitate comparison across time or projects.

A prototype of the framework was used to guide an analysis of the impact of selected research projects at the London School of Hygiene and Tropical Medicine. Additional areas of impact were identified in the process and researchers also provided feedback on which descriptive categories they thought were useful and valid vis-à-vis the nature and impact of their work.

The RIF has four main areas of impact: research-related, policy, service and societal. Within each of these areas, further descriptive categories were identified, as set out in Table 3 . According to Kuruvilla et al. , 123 ‘Researchers, while initially sceptical, found that the RIF provided prompts and descriptive categories that helped them systematically identify a range of specific and verifiable impacts related to their work (compared to ad hoc approaches they had previously used).’ 123

TABLE 3

Although it is multidimensional in similar ways to the Payback Framework, the categories were broadened to cover health literacy, social capital and empowerment, and sustainable development.

Another major feature of the RIF is the intention that it could become a tool that researchers themselves could use to assess the impact of their research. This addresses one of the major concerns about other research impact assessment approaches. However, while the broader categorisation has been used, on its own or in combination, in an increasing number of studies, 124 we are not aware of any studies that have used it by adopting the self-assessment approach envisaged. Nevertheless, it could be useful to researchers having to prepare for exercises such as the REF in the UK.

The Becker Medical Library’s model/the translational research impact scale

Sarli et al. 118 developed a new approach called the Becker Medical Library model for assessment of research. Its starting point is the logic model of the W.K. Kellogg Foundation, 121 ‘which emphasises inputs, activities, outputs, outcomes, and impact measures as a means of evaluating a programme’. 118

For each of a series of main headings, it lists the range of indicators and the evidence for each indicator. The main headings are research outputs; knowledge transfer; clinical implementation; and community benefit. The main emphasis is on the indicators for which the data are to be collected, and, referring to the website on which the indicators are made available, the authors state: ‘Specific databases and resources for each indicator are identified and search tips are provided’. 118 The authors found during the pilot case study that some supporting documentation was not available. In such instances, the authors contacted the policy-makers or relevant others to retrieve the required information.

The Sarli et al. 118 article includes the case study in which the Becker team applied the model, but the Becker model is mainly seen as a tool for self-evaluation, with the suggestion that it ‘may provide a tool for research investigators not only for documenting and quantifying research impact, but also . . . noting potential areas of anticipated impact for funding agencies’. 118 It is generating some interest in the USA, including partially informing the Drew et al. 90 implementation of the NIEHS framework described above, and has also informed a UK application by Sainty. 99

More recently, Dembe et al. 124 proposed the translational research impact scale, which is informed not only by a logic model from the W.K. Kellogg Foundation and by the RIF, 123 but also by the Becker Medical Library model. 118

The authors identified 79 possible indicators, used in 25 previous articles, and reduced them to 72 by consulting a panel of experts; further work was being undertaken to develop the requisite measurement processes: ‘Our eventual goal is to develop an aggregate composite score for measuring impact attainment across sites’. 124 However, no indication is provided of how a valid composite score could be devised. Although, as far as we are aware, an application of the scale has yet to be reported, from the perspective of our review it usefully illustrates how new models are being built on a combination of existing ones.

Societal quality score

Mostert et al. 100 developed the societal quality score using the theory of communication from Van Ark and Klasen. 125 Audiences are segmented into different target groups that need different approaches. Scientific quality depends on communication with the academic sector, and societal quality depends on communication with three groups in society: the lay public, health-care professionals and the private sector.

Three types of communication are identified: knowledge production, for example papers, briefings, radio/television services and products; knowledge exchange, for example running courses, giving lectures, participating in guideline development and responding to invitations to advise or give invited lectures (these can be divided into ‘sender to receiver’, ‘mutual exchange’ and ‘receiver to sender’); and knowledge use, for example citation of papers, purchase of products, and earning capacity (i.e. the ability of the research group to attract external funding). Four steps are then listed (a computational sketch follows the list):

  • Step 1: count the relative occurrences of each indicator for each department.
  • Step 2: allocate weightings to each indicator (e.g. a television appearance is worth x, a paper is worth y).
  • Step 3: multiply the counts from step 1 by the weights from step 2 to give the ‘societal quality’ for each indicator.
  • Step 4: average the societal quality across the indicators for each target group, and use the group averages to derive the total societal quality score for each department.
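The aggregation details above leave some room for interpretation. A minimal sketch in Python, assuming invented indicator names, weights and counts (none of which come from the published study), might look like this:

```python
# Sketch of the four-step societal quality calculation (one department).
# Indicator names, weights and counts are invented for illustration, and
# the within-group and across-group averaging is our reading of the steps.

# Step 2: weight per indicator (e.g. a television appearance is worth x,
# a paper is worth y) -- hypothetical values.
WEIGHTS = {"paper": 1.0, "tv_appearance": 3.0, "course": 2.0}

# Step 1: counts of each indicator per target group -- hypothetical data.
counts_by_group = {
    "lay_public": {"tv_appearance": 4},
    "health_care_professionals": {"paper": 12, "course": 3},
    "private_sector": {"paper": 2},
}

def societal_quality_score(counts_by_group, weights):
    """Step 3: weight each indicator count; step 4: average within each
    target group, then average the group scores for the department."""
    group_scores = []
    for counts in counts_by_group.values():
        indicator_scores = [n * weights[ind] for ind, n in counts.items()]
        group_scores.append(sum(indicator_scores) / len(indicator_scores))
    return sum(group_scores) / len(group_scores)

print(f"Societal quality score: {societal_quality_score(counts_by_group, WEIGHTS):.2f}")
```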

It is a heavily quantitative approach and looks only at process, as the authors say that ultimate societal impact takes a long time to emerge and is hard to attribute to a single research group. The approach does not appear to control for the size of the group, and seems more applicable to research at an institutional rather than project level.

Research performance evaluation framework

Schapper et al. 72 describe the research performance evaluation framework used at Murdoch Children’s Research Institute in Australia. It is ‘based on eight key research payback categories’ from the Payback Framework and also draws on the approach described in the RIF. 123

The centre has an annual evaluation overseen by the Performance Evaluation Committee, with a nominee from each of six themes plus an external member and chairperson. The evaluation ‘seeks to assess quantitatively the direct benefits from research, such as gains in knowledge, health sector benefits, and economic benefits’. 72 Data for the research performance evaluation are gathered centrally by the research strategy office and verified by the relevant theme. The theme with the highest score on a particular measure is awarded maximum points; others are ranked relative to this. Each theme nominates its best three research outcomes over 5 years, and is then interviewed by the research strategy team using detailed questionnaires to gain evidence and verify outcomes. Research outcomes are assessed using a questionnaire based on the RIF. There are three broad categories: knowledge creation; inputs to research; and commercial, clinical and health outcomes. The six major areas of outcomes are development of an intervention; development of new research methods or applications; communication to a broad audience; adoption into practice and development of guidelines and policy; translation into practice; and the impact of translation on health.
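The relative scoring step can be illustrated briefly. The sketch below assumes proportional scaling and invented theme values; Schapper et al. do not specify the exact scaling rule, so this is one plausible reading rather than their method:

```python
# Sketch of relative scoring: the theme with the highest value on a measure
# is awarded maximum points and the others are scaled relative to it.
# Proportional scaling and all values here are assumptions.
MAX_POINTS = 10

theme_values = {"theme_a": 120, "theme_b": 90, "theme_c": 30}  # invented counts

def relative_scores(values, max_points=MAX_POINTS):
    """Award the top theme maximum points; scale the others relative to it."""
    best = max(values.values())
    return {theme: round(max_points * v / best, 1) for theme, v in values.items()}

print(relative_scores(theme_values))
# {'theme_a': 10.0, 'theme_b': 7.5, 'theme_c': 2.5}
```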

Realist evaluation

The final approach described in this subsection, realist evaluation, is a relatively new generic evaluation approach originally developed in the field of social policy. It has been applied to evaluating the impact of the NIHR-funded Collaborations for Leadership in Applied Health Research and Care (CLAHRCs). This evaluation by Rycroft-Malone et al. 102 is described in Chapter 4 [see Co-production models (e.g. multistakeholder research partnerships)]. Realist evaluation may be more widely applicable to other programmes in the NIHR. The realist evaluation approach was also used in the evaluation of public involvement in health research in England. 101

Generic approaches to research impact assessment developed and applied in the UK, and parallel developments in other countries

In this final section considering conceptual frameworks we focus on two generic approaches that have recently been introduced in the UK, namely researchfish and the REF, in which data collected from individual projects or research groups, respectively, are brought together at a high level of aggregation. Here we consider some of the accounts of them that we gathered from reports and articles included in our review.

Regular monitoring or data collection

Research funders became increasingly interested in moving beyond one-off impact assessments of the type conducted through the Payback Framework and similar approaches. Of the various streams of work to develop such approaches, one emerged from the application of the Payback Framework to assess the impact of the research funded by the Arthritis Research Campaign. 104 Developed in consultation with members of the research community, the RAND/Arthritis Research Campaign impact scoring system was loosely based on the questions asked in previous payback surveys, but evolved thereafter, simplifying the questions and increasing their number. According to Morgan Jones and Grant, 126 this informed the development of researchfish.

Researchfish (formerly the MRC’s e-Val) is the system used to collect information on the outputs, outcomes and impacts that have arisen from MRC-funded research. MRC’s e-Val was first launched in November 2009 and was used in three rounds of data collection. In 2011/12, the MRC worked with a group of approximately 10 other funders on a ‘federated’ version of e-Val that works across funders, so that researchers can enter an output just once and then associate it with the relevant funder or funders.

Launched in 2012 as researchfish, the system was by March 2014 being used by more than 80 research organisations and funders, including more than 50 medical research charities and 10 universities. The fourth data-gathering period in 2012 – the first using researchfish – saw a 98% response rate.

The MRC plans to continue to co-ordinate use of researchfish closely with university support offices and/or research units. It sees the data being used in a variety of ways, from funders returning them to universities so that they can be used for REF submissions, to informing funders’ strategic plans and providing evidence for the Government’s spending reviews. 127

Researchfish is considered in the MRC’s report, Outputs, Outcomes and Impact of MRC Research. 103 Although it could have been included in the list above, it might be seen more appropriately as a tool. The researchfish web-based survey asks project principal investigators a series of questions under 11 major headings, ranging from publications through to impact on the private sector.

These headings have some parallels with some of the models considered above, although no conceptual framework is made explicit. Given the nature of the requirements to complete the annual survey, this approach results in a high level of compliance, at least in terms of principal investigators supplying some response.

A range of health research funders, including the NIHR and the MRC, use researchfish. In addition to the description in MRC reports, 103 the results are also included as some of the data required in the reporting for the BIS framework on economic impacts. 76

Research Excellence Framework impact assessment (Higher Education Funding Council for England) and the Research Quality Framework

The Research Quality Framework (RQF) was developed for the assessment of university research in Australia. 128 Owing mainly to a change of government, this framework was not actually used in Australia, but it affected developments in research impact assessment in the higher education sector in the UK. The Australian model proposed the use of narrative case studies written by higher education institutes as the basis of expert peer review in national assessments of university research performance. 128 The key impacts to be assessed were the wider economic, social, environmental and cultural benefits of research. The study by Kalucy et al. 65 piloted the expected introduction of the RQF and found that the Payback Framework was likely to be a suitable framework for gathering the data to submit to the assessment.

In preparing for the REF in the UK, the HEFCE commissioned RAND Europe to review possible frameworks that might be adopted. 38 RAND Europe reviewed four methods for evaluating the impact of university research against HEFCE criteria and recommended the adoption of a case study approach, drawing on the RQF from Australia. 128

In the 2014 REF, 33 the HEFCE required universities to submit impact case studies in the form of a four-page description of a research project/programme and its ensuing impact, with references and corroborating sources. In relation to medicine and life sciences, the report identified the kinds of impacts that were sought:

. . . benefits to one or more areas of the economy, society, culture, public policy and services, health, production, environment, international development or quality of life, whether locally, regionally, nationally or internationally.
. . . manifested in a wide variety of ways including . . . the many types of beneficiary (individuals, organisations, communities, regions and other entities). (p. 26) 33

The final report on the application of the REF to biomedical and health research from the REF 2014 Main Panel A, which had overseen the assessment of some 1600 case studies, concluded that the case study approach had been broadly successful. 106 The report noted, ‘International MPA [Main Panel A] members cautioned against attempts to “metricise” the evaluation of the many superb and well-told narrations describing the evolution of basic discovery to health, economic and societal impact’. 106 International members of the panel also produced a separate section for the report and described the REF as:

To our knowledge, the first systematic and extensive evaluation of research impact on a national level. We applaud this initiative by which impact, with its various elements, has received considerable emphasis. (p. 21) 106

The REF approach of assessing research impact through case studies prepared in institutions by groups of researchers, and assessed and graded by peer reviewers in accordance with the criteria of reach and significance, was adopted in Australia in a trial exercise by the Group of Eight and the Australian Technology Network of Universities. 105 Called Excellence in Innovation for Australia (EIA), this ‘replication’ of the REF approach was a small-scale trial, with 162 case studies, and was conducted much more rapidly, reporting in 2012. This study also reported that the case study methodology ‘to assess research impact is applicable as a way forward to a national assessment of impact’. 105

  • Comparing frameworks

The various analyses of research impact assessment frameworks conducted by RAND Europe involved making a series of detailed comparisons. 9 , 37 , 38 These included the scoring of 21 frameworks (e.g. SIAMPI, REF, CAHS/Payback) against 19 characteristics (e.g. formative, comprehensive, quantitative, transparent). 9

Over half of the 20 frameworks we described above were included in one or more of the three comparisons of frameworks noted here. Appendix 5 lists all the frameworks appearing at least once in the main analyses in these reviews, and identifies those we have included in our list of 20 frameworks, those for which we have included a later or alternative version, and those not included, with reasons; some of the last group are described in Table 14 (see Appendix 3). The additional frameworks we have included that were not in the three reviews are generally more recent and have been applied specifically to assess the impact of programmes of health research.

In Table 4 we provide a brief analysis of the 20 frameworks described above. Much of the discussion of strengths and weaknesses focuses on specific aspects of particular frameworks, with more generic analysis in Chapter 4. The table of comparisons is intended to inform our assessment of options in Chapter 8.

TABLE 4

Comparison of 20 selected frameworks/approaches

Figure 4 locates the various frameworks on two dimensions in an attempt to identify clusters of frameworks that might be attempting to do similar things. One dimension is the type of impact categories assessed. We have abstracted the key impact categories described in the frameworks: multidimensional (i.e. covering a range that can include health gains, economic impacts and policy impacts); economic impacts (value of improved health and GDP); policy impacts (including clinical policies); and communication/interactive processes. The other dimension is the level of aggregation at which the framework has primarily been applied, and whether the focus is on programmes of work from funders or on the portfolio of work of individual researchers, groups of researchers or institutions. (We classed the REF as being in the producers of research category because the work assessed was funded by multiple organisations and conducted by institutions and their units, even though the assessment results will then be used to allocate future funds from the specific funding organisation conducting the assessment, i.e. the HEFCE.) Where the focus is on programmes of funded research, the impact assessment is most likely to gather data from individual studies, but these are then pulled together and reported on at an aggregate programme level. Furthermore, there can be some data gathering about the whole programme.

FIGURE 4

Twenty key frameworks: prime assessment focus/level and impact categories assessed.
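To make the two dimensions concrete, the sketch below encodes the classification as a pair of enumerations. The category descriptions paraphrase the text; of the three example placements, only that of the REF is stated explicitly above, so the other two are illustrative readings of Figure 4 rather than the authors’ definitive assignments:

```python
# Sketch of the two-dimensional classification underlying Figure 4.
from enum import Enum

class ImpactCategories(Enum):
    MULTIDIMENSIONAL = "health gains, economic impacts and policy impacts"
    ECONOMIC = "value of improved health and GDP"
    POLICY = "policy impacts, including clinical policies"
    COMMUNICATION = "communication/interactive processes"

class AssessmentFocus(Enum):
    FUNDER_PROGRAMME = "programmes of work from funders"
    RESEARCH_PRODUCERS = "individual researchers, groups or institutions"

# Example placements: the REF placement is stated in the text; the other
# two are illustrative readings, not definitive assignments.
placements = {
    "REF": (ImpactCategories.MULTIDIMENSIONAL, AssessmentFocus.RESEARCH_PRODUCERS),
    "Payback Framework": (ImpactCategories.MULTIDIMENSIONAL, AssessmentFocus.FUNDER_PROGRAMME),
    "Societal quality score": (ImpactCategories.COMMUNICATION, AssessmentFocus.RESEARCH_PRODUCERS),
}

for framework, (impact, focus) in placements.items():
    print(f"{framework}: {impact.value}; focus: {focus.value}")
```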

Finally, in this section we draw attention to a very different approach: the balanced scorecard (BSC), which is analysed in the CAHS report. 7 Some studies describe health-care systems that include research as part of a BSC approach to assessing the performance of their system, 130 , 131 but it is argued that the approach is not a comprehensive impact assessment of research. 7 If, however, a BSC approach is used to assess health-care organisations, and includes research impact as one of the criteria, this could be a mechanism for encouraging health-care organisations to foster research activity in their facilities.

  • Methods used in empirical impact assessment studies

Our updated review identified several studies that undertook important analysis of the methods used in research impact evaluation. These include the UK Evaluation Forum, 19 the CAHS report 7 and the report from RAND for the Association of American Medical Colleges. 9 The last analysed 11 methods or tools used in a range of six major research evaluation frameworks; most relate to the collection of data and others to how data are presented. The authors provided a brief description of each with a suggestion of when and how it is used. The 11 methods/tools were set out in alphabetical order: bibliometrics, case studies, data mining, data visualisation, document review, economic analysis, interviews, logic models, peer review, site visits and surveys. The review by Boaz et al. 5 of studies assessing the impact of research on policy-making identified 16 methods as having been used, with semistructured interviews, case study analysis and documentary analysis the three most commonly adopted. Milat et al. 129 reported that mixed methods were typically used, which could include publication and citation analysis, interviews with principal investigators, peer assessment, case studies and documentary analysis.

Our review of 110 empirical studies also found that a wide range of methods were adopted, in various combinations. Frequently used methods included desk analysis, surveys, interviews and case studies. The full range of methods used in the studies listed can be found in Table 14 (see Appendix 3); below we note some interesting trends and show how our review provides further evidence on long-standing issues about the range of methods available for impact assessments. In relation to surveys, for example, there are concerns about the burden on researchers of completing them and about the accuracy of the data. The burden is widely viewed as having increased with the introduction of the annual surveys described above, notwithstanding the attempts to reduce it by enabling the data entered to be attached to a range of grants. This increased burden might result in incomplete data in the responses to specific questions within the overall survey, and might also have implications for the willingness of researchers to complete voluntary but bespoke surveys that specific funders might consider commissioning. The survey response rates in the included studies varied enormously. The compliance requirements in a survey such as researchfish result in a very high formal response rate, but the rate has also been high in other surveys; for example, it was 87% in a study in Hong Kong. 66 The rate, however, was only 22% in a pilot study assessing the impact of the EU’s international development public health programme, 27 although a range of other methods was also used in that study.

In terms of the accuracy of the data from surveys of researchers, several studies report that, in general, the findings from users were similar to those from researchers, for example Guinea et al. 64 and Cohen et al. 52 When comparisons have been made between the responses to surveys and the data gathered in subsequent case studies on the same project, researchers have been found not to routinely exaggerate. 2 Indeed, Gutman et al. 132 found that researchers interviewed claimed a higher level of impact on policy than was reported by researchers in a web survey, although the questions were slightly different. Meagher et al. 95 also reported that, while ‘case studies were crucial in illuminating the nature of policy and practice impacts . . . there were no evident contradictions between results obtained by different methods.’

Doubts have also been expressed as to how much researchers actually know about the impact their research might have made. One trend that might provide some reassurance is that some of the studies in Table 14 (see Appendix 3) report relatively small-scale research funding schemes in which much of the claimed impact arises from the adoption of the findings in the researcher’s own health-care unit, where researchers are well placed to know the impact made. Some examples of this were reported by Caddell et al. 96

A balance must be found between coverage and resources. Several of the reported assessments relied on the programme office and/or impact evaluators gathering the data from databases, for example in the evaluation of the impact from the EU’s public health programmes 53 and in one of the NIEHS’s studies. 91 In both these cases and others there were some doubts about whether or not sufficient data could be collected in this way, but one of the advantages was that the approach did not place the burden on researchers. Other attempts to increase practicality take different directions. Individual researchers might be encouraged to construct accounts of the impact from their own work. In particular, Kuruvilla et al. 123 designed the RIF as a do-it-yourself approach, which prompts researchers to think systematically about the impact of their work using descriptive categories. The Becker Medical Library model was also primarily seen as a tool for self-evaluation. 118

Case studies tend to provide a wider and more rounded perspective on how the impact might have arisen and can address attribution. They tend to be resource intensive, however, and are usually conducted only selectively. One dilemma is case study selection, for which a purposive approach is often adopted. However, a stratified random selection has been used when applying the Payback Framework, 2 , 36 and a recent study in Australia conducted case studies on all the projects in which the respondents had completed two surveys and an interview, thus avoiding any selection bias. 52 Case studies can, however, be conducted through self-assessment, perhaps based on desk analysis. They can then be evaluated by peers in an approach that seems to be becoming increasingly important and broadly successful. 33 , 105 , 106

There is also an increasing number of studies reporting attempts to score case studies. In addition to the examples of scoring of self-assessments described above, this includes scoring case studies produced by impact assessors, 36 , 52 , 89 or produced initially by central teams in the institution, including the cases produced for the research performance evaluation framework used at Murdoch Children’s Research Institute in Australia. 72

Whatever the method of data collection, attention has been given in several studies to expected benefits. We excluded studies that solely considered potential impact before research was commissioned, but some studies consider aspects of ‘expected’ impacts in several ways. Some make a comparison between what was expected from a project and what had been achieved. Examples include studies from the EU, 53 , 133 from Catalonia/Spain 56 , 58 and from Australia. 70 Studies can also emphasise what impacts are expected from research that has already been completed, but which had not yet arisen at the time of the impact study: such questions are, for example, often a feature of surveys in studies applying the Payback Framework. This includes the application of the framework to assess the impact of the funding provided for biomedical research by the annual TV3 Telethon in Catalonia. 58

Attempts are also being made to develop ways to consider the impact of programmes as a whole, in addition to the impact that might come from the collation of data on individual projects. This overlaps with the consideration of conceptual frameworks, where, for example, we discussed the role of realist evaluation in assessing one of the CLAHRCs, 102 but it can also relate to the methods used in other studies. For example, in their assessment of the Austrian HTA programme, Schumacher and Zechmeister 134 set out the methods they had used and the issues that could be addressed by each one, including attempts to identify the development of an HTA culture. Rispel and Doherty 135 claimed that, in their assessment of the impact of the Centre for Health Policy in South Africa, their own experiences gave them an ‘insider–outsider’ perspective, and that a rounded view of the Centre was provided by interviewing people with a predominantly ‘insider’ perspective and others with an ‘outsider’ perspective.

Finally, in the 2007 report there was speculation regarding whether a conceptual framework was really needed or whether it might be possible just to apply some of the methods. It was claimed, however, that a conceptual framework could be most useful in informing the structure of a range of methods, such as documentary analysis, surveys and case study interviews. This was seen to be the case with the Payback Framework, and has remained so, as illustrated by both the survey and the semistructured interview schedule included in the article describing the assessment of the impacts from Asthma UK funding. 51 This is also the case for newer frameworks such as the RIF.

Timing of assessments

Points about timing have sometimes been noted in the strengths and weaknesses column of Table 14 (see Appendix 3). As much of the impact from research is likely to arise some time after the completion of the research, any early one-off assessment is likely to capture less than regular monitoring that continues for some time after the completion of the project. Some impact assessments, for example that by Oortwijn, 69 explicitly stated that the early timing of the assessment had limited the level of impact that could have arisen and thus been recorded.

However, even this issue is not clear-cut and partly overlaps with the nature of the research approach. In the evaluation of the Africa Health Systems Initiative Support to African Research Partnerships, Hera 136 reported that, because the evaluation took place before the end of the programme, it was possible to observe the final workshop and present preliminary findings. It may have been too early for some of the expected impact to arise, but the interactive approach of the whole programme had led to some policy impact during the project, and there were some advantages in analysing it while project meetings were still occurring. Nevertheless, the recent results from the UK’s REF clearly show that allowing up to 20 years for the impact to occur can contribute to assessments showing that considerable impacts have been achieved by a range of research groups. 106

In the future, regular monitoring of research outcomes and continuous monitoring of uptake/coverage might provide ways of reducing at least some of the variations between studies in terms of the timing of assessments.

  • Summary findings from multiproject programmes

The findings from the analysis of multiproject programmes reported in the 2007 review provide a context for the current analysis. That review found that, in the six impact assessment studies focused on HTA programmes, the proportion of individual projects making an impact on policy ranged between 70% and 100%. In the 10 impact assessment studies focused on ‘other health research programmes’, the proportion of individual projects claimed to make an impact on policy ranged between < 10% and 53%, and the proportion making an impact on practice ranged between < 10% and 69%. These findings reflected the different roles of the two identified groups of programmes, but there was also considerable diversity in the nature of the programmes within each group.

The study of the impact of the first decade of the NHS HTA programme was reported as the main part of the 2007 report. However, the study was not included in the literature review chapter of that report because the review included studies published up to a cut-off point of mid-2005 and had been conducted in order to inform the assessment of the NHS HTA programme that was then undertaken. Therefore, the findings below, from the survey of the lead researchers conducted as part of the assessment of the HTA programme, were not referred to in the review chapter. They show a pattern similar to that identified in the 2007 review, that is, an even higher level of impact being claimed for the Technology Assessment Reports (TARs) than for the other types of HTA-funded research, which, in the case of trials, are nearer to the research in the ‘other research programmes’ category than to the appraisals that constitute the work of most HTA programmes (Table 5).

TABLE 5

Opinion of lead researchers in the first decade of the NHS HTA programme about existing and potential impact on policy and behaviour

In our current review, a collation of the quantitative findings from studies assessing the impact from multiproject programmes (such as the HTA programme), published since the previous review was conducted in 2005, should provide a context for the results from the parallel study being conducted of the impact from the second decade of the HTA programme.

The diversity of circumstances makes it difficult to be certain about which studies to include, but we classified 26 studies as empirical studies of the impact from multiproject programmes, and a further two studies of the impact from research training have been included because the impact assessment covered the wider impact made by the research conducted in each training award, as well as the impact on the trainees’ subsequent careers (see Table 6 for the included studies). Even for these 28 studies there is considerable diversity in a range of aspects, including:

TABLE 6

Studies assessing the impact from programmes with multiple projects and training fellowships

  • types of research and modes of funding of the programmes of research assessed
  • timing of impact assessment (some while the programme was still continuing, some conducted years afterwards)
  • conceptual frameworks used for assessment (e.g. some ask about impact on policy, including guidelines, and separately about impact on practice, whereas others ask about a combined ‘decision-making’ impact category)
  • methods used for collecting and presenting data in impact evaluations (e.g. some present percentage of projects claiming each type of impact and some present the total number of examples of each type of impact, making it impossible to tell how many projects are represented by the total number because some projects might have generated more than one example of a particular type of impact).

It is likely that different levels of impact on policy will be achieved by, for example, a programme of responsive-mode basic research and a programme of commissioned HTA research. However, studies assessing impact from research do not necessarily fall into such neat categories, because different funders will have a different mix of research in their programmes and portfolios. Therefore, we have listed all 28 studies (Table 6), but do not include the figures for each study for the percentage of project principal investigators claiming to have made various impacts.

All the data for the individual studies are available from Table 14 (see Appendix 3), but here in Table 7 we show the average figures for the 23 of the 26 multiproject programmes in which the data were presented in terms of the number, or percentage, of individual projects claiming to have made an impact in the categories being assessed. Presenting the data in this way allows the overall picture from the quantitative analysis of multiproject programmes to be seen, but also allows a commentary to include some data from individual projects, while at the same time describing key features of a particular research programme, including sometimes the context in which it had been conducted. Table 7 presents the averages and the range on each of the following criteria: impact on policy; impact on practice; a combined category, for example policy and clinician impact, or impact on decision-making; and impact in terms of improved care/health gain/patient benefit (a minimal sketch of this collation follows Table 7).

TABLE 7

Analysis of quantitative data from the 23 studies assessing the impact from multiproject programmes that reported findings from each project
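As flagged above, a minimal sketch of this kind of collation, assuming invented per-programme percentages (the real figures are in Table 14, see Appendix 3), is:

```python
# Sketch of collating per-programme findings into the averages and ranges
# reported in Table 7. All percentages below are invented for illustration;
# not every programme reports every impact category.
programme_impacts = [
    {"policy": 97, "practice": 40},
    {"policy": 29, "practice": 37},
    {"policy": 58},
]

def summarise(programmes, category):
    """Average and range of the percentages for one impact category,
    across the programmes that reported it."""
    values = [p[category] for p in programmes if category in p]
    return sum(values) / len(values), min(values), max(values)

for category in ("policy", "practice"):
    mean, low, high = summarise(programme_impacts, category)
    print(f"Impact on {category}: average {mean:.0f}%, range {low}-{high}%")
```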

These are considered in turn.

Policy impacts

As in the 2007 review, the HTA programmes analysed generally showed the highest percentage achieving or claiming an impact on policy, but various examples illustrate a range of issues. Although 97% of the assessments from the Austrian HTA programme were classified by Zechmeister and Schumacher 83 as making some impact on coverage policies, other factors also played a role, and in only 45% of reports ‘the recommendation and decision were totally consistent’. 83 There is some uncertainty about whether or not Bodeau-Livinec et al. 82 included all the studies available but, assuming that they did, 10 out of 13 recommendations from the French HTA body explored ‘had an impact on the introduction of technology in health establishments’; 82 in seven cases the impact was considerable and in three it was moderate.

In the case of the more mixed HTA programmes, we noted above the considerable impact made by the NHS HTA programme, with the TARs having a higher figure than the primary studies. For the Belgian Health Care Knowledge Centre programme, Poortvliet et al. 140 reported that, within the overall figure of 58% of project co-ordinators claiming the projects had made an impact, the figure for HTAs was higher than for the other two programmes. Finally, the Health Care Efficiency Research programme from the Netherlands was classified as an HTA programme, but included a large responsive-mode element and most studies were prospective clinical trials. Furthermore, Oortwijn 69 reported that the impact assessment was conducted soon after many of the projects had been completed. These various factors are likely to have contributed to the proportion claiming an impact on policy (in these cases mostly citation in a guideline) being, at 29%, lower than in other HTA programmes.

In four non-HTA studies, 66 , 70 , 74 , 136 more than one-third of the projects appeared to make an impact on policy, and generally interaction with potential users was highlighted as a factor in the impact being achieved. In four studies, ≤ 10% of the principal investigators reported that their research had made an impact on policy, but three of these studies 62 , 104 , 139 assessed the impact of wide-ranging research programmes that, in addition to clinical and other types of research, covered basic research, from which policy impact would be much less likely to occur. However, some of these programmes also made an impact in areas not reported in the table. For example, Donovan et al. 62 reported that 11% of principal investigators from the research funded by the National Breast Cancer Foundation in Australia claimed to have made an impact on product development.

Informed practice

Of the 10 studies reporting on impact on clinical practice, 2 , 53 , 62 , 66 , 69 , 84 , 96 , 99 , 139 , 140 the five highest fell in a narrow band of 37–43% of the principal investigators claiming such impact. 2 , 66 , 84 , 96 , 99 The projects in these programmes generally incorporated factors associated with achieving impact, including being funded to meet the needs of the local health-care system and interaction with potential users. Two of the studies 96 , 99 looked at small-scale funding initiatives and found that the impact was often at the location where the research was conducted.

Combined category

The three studies 89 , 137 , 138 in which the impact seemed best reported at a combined level covering policy and practice impact all suggested considerable levels of impact from projects where partnerships with potential users were a key feature.

Health gain/patient benefit/improved care

Only eight studies went as far as attempting to assess impact in terms of health gain or improved care, 51 , 53 , 66 , 70 , 71 , 74 , 75 , 96 and none of them reported a figure > 50%. Three studies 66 , 74 , 96 were the only ones in which more than one-third of principal investigators claimed an impact on health care and, as noted, all three had features associated with impact being achieved. Also of note is Johnston et al., 75 because although only eight RCTs out of a programme of 28 (29%) were identified as having a measurable use, with six (21%) leading to a health gain, these health gains were monetised and provide a major example of valuing the benefits from a programme of health research. The study is fully reviewed and critiqued in Chapter 5.

Finally, both of the studies assessing the impact of research training schemes 54 , 141 indicate that between one-third and three-quarters of the former trainees claimed that a wider impact had arisen from the research conducted in each training award. Here, however, even more than with project funding, it can be difficult to distinguish the impact of the specific research conducted from that of subsequent research that built on it.

Analysis of the findings from multiproject programmes

The picture emerging from Tables 6 and 7, plus the equivalent analysis in the 2007 review, is that many multiproject programmes are being identified as resulting in a range of impacts, but the levels are highly variable.

An analysis of the findings from quantitative studies contributes to the overall review in various ways.

  • It is recognised that there are many limitations in reducing issues of influence on policy and the other areas to a tick-box survey, and that case studies (externally conducted based on interviews and documentary review, or self-assessment through desk analysis, etc.) are likely to provide a richer and more nuanced analysis. However, we also noted above that a variety of studies that have used another method in addition to surveying researchers suggest that, on average, researchers do not seem to be making exaggerated claims in their survey responses. Therefore, surveys of researchers can play some role in research impact assessment, and do allow wider coverage than is usually possible through more resource-intensive methods such as case studies.
  • There is an undoubted desire among some to improve survey methods, for example by using computer-assisted telephone interviews. Nevertheless, this portfolio of studies suggests impact assessment can be done to some degree across multiproject programmes.
  • The findings indicate that different types of research programmes are likely to lead to different levels and ranges of impact. With better understanding of the expectations of what might arise from different programmes, it might be possible to tailor impact assessments to focus on appropriate areas for the different types of research. Various studies of small-scale initiatives 54 , 96 , 99 illustrate that there is now wide interest in assessing the impact of health research funding, but also illustrate that conducting research in a health-care setting can lead to impacts in that health-care setting.
  • Impact assessments are partly conducted to inform the approach to organising and managing research. Therefore, collating these studies can add weight to the comments made in individual studies. Quite frequent comments are made about impact being more likely when the research is focused on the needs of the health-care system and/or there is interaction or partnership with potential users. 2 , 66 , 84 , 89 , 132 , 136 – 138 , 141 The particular circumstances in which HTAs are conducted to meet very specific needs of organisations that are arranged to receive and use the findings as ‘receptor bodies’ are also associated with high levels of impact. 82 , 83 , 140 The qualitative study by Williams et al., 78 which included observation of meetings, provides some verification of the finding in the assessment of the HTA programme that the TARs do inform decision-making. Looking specifically at the economic evaluations included in TARs, they reported that ‘economic analysis is highly integrated into the decision-making process of NICE’s technology appraisal programme’. 78

We looked for suitable comparators against which to consider these findings from assessments of multiproject programmes. Potentially this could have come from a large-scale regular assessment that could provide data about the proportion of projects claiming impacts in certain categories across a whole research system. However, this is not the way researchfish operates and we could find no other equivalent comparator.

Instead, the 2014 REF 33 and the EIA 105 offer illuminating comparators in that they show that high levels of impact were achieved from the small percentage of the total research that was described in the case studies submitted by institutions for consideration through the REF and EIA. So, while the REF was based on the research conducted by groups of researchers, rather than, in most cases, on the work of a single funded programme, it is also of value as a comparator because of the amount of evidence gathered in support of the exercise. The findings from our collection of studies in some ways reflect aspects of the REF, for example in that the REF assumed that only a minority of the research from groups over a 20-year period (in practice, 1993–2013) would be suitable for entry to demonstrate that impact had been achieved. As described, some of the studies of the whole portfolios of research funders included in our review covered a wide range of projects, and usually, in such cases, the percentage of principal investigators reporting impacts on policy and practice was lower than in other studies. However, such studies often identified examples of research within the portfolio that had made major impacts, although these were best explored in depth through case studies. This reinforces the point that in most research programmes only a minority of the research should be expected to make much impact, but the impact from that minority can sometimes be considerable.

Furthermore, the nature of some of the major impacts claimed in the impact assessments from around the globe is similar to that reported in REF cases, even if the impacts in the REF are generally the more substantial examples. For instance, the report on the impacts from Main Panel A suggests that in the REF many cases reported citations in clinical guidelines as an impact, and this is frequently a focus of the impacts reported in the assessments of multiproject programmes.

Overall, therefore, the quantitative analysis of studies assessing multiproject programmes can contribute to understanding the role impact assessments might play, and the strengths and weaknesses of the methods available.

The considerable growth of interest in assessing the impact from health research was captured in our review. We identified an increasing number and range of conceptual frameworks being developed and applied, and included 110 new empirical applications (see Appendix 3), compared with the 41 reported in the review published in 2007. 2 In particular, we described and compared 20 frameworks or approaches that had been applied since 2005, some of them having also been described in the previous review. Quite a few of the 20 frameworks, and others, built on earlier frameworks, and sometimes combined elements from several. This partly reflects the need to address the various challenges identified as facing attempts to assess the impact from research.

The Payback Framework 39 remains the most widely used approach for evaluating the impact of funded research programmes. It has been widely applied, and sometimes adapted and refined, including in the CAHS framework 115 and Banzi’s research impact model. 4 Other robust models that show promise in capturing the diverse forms of health and non-health impacts from research include the RIF 123 and various approaches to considering the economic impacts of health research. A comparison of the 20 frameworks indicates that, while most, if not all, could contribute something to the thinking about options for future assessment of impact by the NIHR, some are more likely than others to be relevant for assessing the impact of the bulk of the portfolio.

There is considerable diversity in terms of the impacts measured in the studies examined. Some of them make no attempt to move beyond the assessment of impact on policy to consider whether or not there has been any health gain. Others that adopt a multidimensional categorisation often recognise the desirability of identifying health gains, but, in practice, lack the resources to make much progress in measuring the health gain even in those cases (usually a small minority) where some links can be established between the research being assessed and the eventual health gains. Finally, some studies, at least in a few of the case studies included in an overall assessment, do go on to attempt to assess the health gains that might be at least partially associated with particular research. The variations depend on combinations of (1) the type of research portfolio that is being assessed, for example if it is a commissioned programme; (2) the type of framework being used for the assessment; (3) the resources available; and (4) the actual outcomes from the particular examples of research assessed. The multidimensional categorisation of impacts, and the way it is applied in approaches such as the Payback Framework and CAHS framework, allows considerable flexibility. In each case study, for example, it might be appropriate to take the analysis as far along the categorisation as it is practical to go. So, for some it might be possible to show an impact on clinical policies, such as guidelines or screening policies, and then for a minority of those there might be opportunities to take the analysis further and explore whether or not there is evidence from databases of practice change, screening uptake rates, etc. that could feed into an estimate of possible health gain.

Although interviews, surveys, documentary analysis and case studies remained the most frequently used methods to apply the models, the range of methods and the ways in which they were combined also increased. The purpose behind a particular study often influenced the frameworks and methods adopted. We identified 28 studies that had reported the findings from an assessment of the impact from all the projects in multiproject programmes. We were able to compare the findings from 25 of these studies and, as in the previous review, they varied markedly in the percentage of projects within each programme that seemed to make an impact on health policy and practice. Generally, the programmes with the highest levels of impact were HTA-type programmes in which the projects were primarily reviews or appraisals that fed directly into policy-making processes. Other programmes in which quite high proportions of projects were seen to be making some impact were ones in which there had been one or more of the following: thorough needs assessments conducted beforehand; frequent interactions with potential users; and the existence of ‘receptor’ bodies that would receive and potentially use the findings. A key conclusion from this is that impacts from such programmes were best assessed by frameworks devised to capture data about the context and interactions related to research programmes.

The consideration of the findings from studies and the role of the different possible frameworks and methods have to take account of the major recent developments in impact assessment described in the chapter, namely the introduction of regular monitoring of impact, for example through researchfish, 76 and the major, and largely successful, REF exercise in the UK. 33 Both of these developments mean that any future additional assessment of impact by NIHR will take place in an environment in which there is already considerably more data available about impacts than was ever previously the case. Both developments also demonstrate that impact assessment can be conducted in ways that identify that a wide range of impacts come from health research and, therefore, provide a degree of endorsement of the previous smaller exercises. However, many challenges remain in assessing research impact and further consideration of the most appropriate approaches is highly desirable.





https://www.nist.gov/cyberframework/updates-archive

Cybersecurity Framework

Updates archive.


  • NIST was awarded the ‘Ecosystem Champion’ Cyber Policy Award for its CSF 2.0 efforts on April 24, 2024.
  • A CSF 2.0 Community Profiles NCCoE Webinar took place on April 23, 2024 and focused on opportunities to help organizations develop community profiles based on the CSF 2.0.
  • On March 20, 2024, NIST hosted a webinar titled “Overview of the NIST Cybersecurity Framework 2.0 Small Business Quick Start Guide.” The video recording and slides are available here.
  • The Aspen Institute hosted a discussion on CSF 2.0, including the Under Secretary for Standards and Technology and NIST Director Laurie Locascio. The video recording is available as a resource.
  • The NCCoE has released Draft NIST IR 8467, Cybersecurity Framework (CSF) Profile for Genomic Data. This CSF Profile provides voluntary, actionable guidance to help organizations manage, reduce, and communicate cybersecurity risks for systems, networks, and assets that process any type of genomic data.
  • The NCCoE published Final NIST IR 8432, Cybersecurity of Genomic Data. This report summarizes the current practices, challenges, and proposed solutions for securing genomic data, as identified by genomic data stakeholders from industry, government, and academia.
  • The NIST NCCoE has published the final version of NIST Internal Report (NIST IR) 8473, Cybersecurity Framework Profile for Electric Vehicle Extreme Fast Charging Infrastructure.
  • Journey to the NIST CSF 2.0 Workshop #3 | September 19-20, 2023
  • Now available for public comment — Draft NIST IR 8473, Cybersecurity Framework Profile for Electric Vehicle Extreme Fast Charging Infrastructure.
  • The NCCoE has published the final version of NIST IR 8406, Cybersecurity Framework Profile for Liquefied Natural Gas.
  • The NCCoE has published for comment Draft NIST IR 8441, Cybersecurity Framework Profile for Hybrid Satellite Networks (HSN). The public comment period for this draft is open until 11:59 p.m. ET on July 5, 2023.
  • Just released: Discussion Draft of the NIST CSF 2.0 Core - feedback on this discussion draft may be submitted at any time. However, for comments to inform the upcoming complete NIST CSF 2.0 draft, they must be submitted by May 31st. The comment deadline for the Cybersecurity Framework 2.0 Concept Paper ended on March 17th, 2023.
  • NIST has released NIST IR 8323 Revision 1 | Foundational PNT Profile: Applying the Cybersecurity Framework for the Responsible Use of PNT Services.
  • NIST has released the “Cybersecurity Framework 2.0 Concept Paper: Potential Significant Updates to the Cybersecurity Framework,” outlining potential significant changes to the Cybersecurity Framework for public review and comment. Please provide feedback by March 3, 2023. The Paper will be discussed at the upcoming CSF 2.0 Workshop #2 on February 15, 2023 and the CSF 2.0 Working Sessions on February 22-23, 2023.
  • IN-PERSON CSF 2.0 WORKING SESSIONS | February 22 or 23, 2023 (half-day events). Attendees should only register for ONE session.

  • VIRTUAL WORKSHOP #2 | February 15, 2023 (9:00 AM – 5:30 PM EST). Join us to discuss potential significant updates to the CSF as outlined in the soon-to-be-released CSF Concept Paper.

  • A  recording  of a Framework Version 2.0 informal discussion, hosted by NIST and the Depart. of Treasury OCCIP on September 12, 2022 is now available.
  • Draft NIST IR 8406,  Cybersecurity Framework Profile for Liquefied Natural Gas  - is now open for public comment through November 17th.
  • NISTIR 8286C,  Staging Cybersecurity Risks for Enterprise Risk Management and Governance Oversight , has now been released as final. This report continues an in-depth discussion of the concepts introduced in NISTIR 8286,  Integrating Cybersecurity and Enterprise Risk Management , and provides additional detail regarding the enterprise application of cybersecurity risk information.
  • Responding to suggestions from participants during the recent CSF 2.0 workshop, NIST has improved its CSF web page by elevating attention to Examples of Framework Profiles The page, which now is easier to find, features links to more than a dozen profiles produced by NIST or others.
  • The first workshop on the NIST Cybersecurity Framework update, “ Beginning our Journey to the NIST Cybersecurity Framework 2.0”, was held virtually on August 17, 2022 with 3900+ attendees from 100 countries in attendance. Details can be found  here  ( the full  event recording  is NOW AVAILABLE ). 
  • A CSF Draft Profile, “Draft Foundational PNT Profile: Applying the Cybersecurity Framework for the Responsible Use of Positioning, Navigation, and Timing (PNT) Services” ( Draft NISTIR 8323 Revision 1 ), is available for public comment through August 12, 2022. This Revision includes five new Cybersecurity Framework subcategories, and two new appendices.
  • A CSF Draft Profile, Cybersecurity Profile for Hybrid Satellite Networks (HSN) Draft Annotated Outline ( Draft White Paper NIST CSWP 27 ) is available for public comment through August 9, 2022. This Profile will consider the cybersecurity of all the interacting systems that form the HSN rather than the traditional approach of the government acquiring the entire satellite system that includes the satellite bus, payloads, and ground system.
  • On June 3, 2022, NIST announced it would proceed with an update the Cybersecurity Framework, toward CSF 2.0.  A blog post by NIST staff Cherilyn Pascoe outlines what stakeholders can expect with the update. You can also track the update process on the CSF 2.0 webpage .  As part of this announcement, NIST posted a summary analysis of the comments received in response to the cybersecurity Request for Information issued February 2022. All RFI comments received are also available on the website .
  • Draft NISTIR 8286D,  Using Business Impact Analysis to Inform Risk Prioritization and Response , is available for public comment through July 18, 2022. This report continues an in-depth discussion of the concepts introduced in NISTIR 8286,  Integrating Cybersecurity and Enterprise Risk Management , and provides additional detail regarding the enterprise application of cybersecurity risk information.
  • Check out the  Speaker Series , hosted by the NCCoE, focusing on the development of a Framework Profile for the Liquefied Natural Gas Industry on May 24, 2022.
  • The Ransomware Risk Management Profile:  Ransomware Risk Management: A Cybersecurity Framework Profile  is now final and a  quick start guide  is available.
  • We are excited to announce that the Framework has been translated into  Ukrainian !
  • NIST Seeks Input to Update Cybersecurity Framework, Supply Chain Guidance
  • NIST has issued an RFI for Evaluating and Improving NIST Cybersecurity Resources - responses are due by April 25, 2022.
  • We are excited to announce that the Framework has been translated into  French !
  • Draft NISTIR 8286C,  Staging Cybersecurity Risks for Enterprise Risk Management and Governance Oversight , is now available for public comment! This report continues an in-depth discussion of the concepts introduced in NISTIR 8286,  Integrating Cybersecurity and Enterprise Risk Management , and provides additional detail regarding the enterprise application of cybersecurity risk information.
  • See our latest Success Story featuring how the Lower Colorado River Authority (LCRA) implemented a risk-based approach to the CSF and tailored it to meet their unique needs.
  • NIST has released a Cybersecurity White Paper,  Benefits of an Updated Mapping Between the NIST Cybersecurity Framework and the NERC Critical Infrastructure Protection Standards,  which describes a recent mapping initiative between the NERC CIP standards and the NIST Cybersecurity Framework. In addition,  a mapping  is available to show which Cybersecurity Framework Subcategories can help organizations achieve a more mature CIP requirement compliance program.
  • NIST has released a draft ransomware risk management profile,  The Cybersecurity Framework Profile for Ransomware Risk Management, Draft NISTIR 8374 , which is now open for comment through October 8, 2021.
  • Draft NISTIR 8286B,  Prioritizing Cybersecurity Risk for Enterprise Risk Management , is now available for public comment! This report continues an in-depth discussion of the concepts introduced in NISTIR 8286,  Integrating Cybersecurity and Enterprise Risk Management , with a focus on the use of enterprise objectives to prioritize, optimize, and respond to cybersecurity risks.
  • NIST just released  Security Measures for “EO-Critical Software” Use Under Executive Order (EO) 14028   to outline security measures intended to better protect the use of deployed EO-critical software in agencies’ operational environments.
  • A second public draft of  NISTIR 8286A  is available: "Identifying and Estimating Cybersecurity Risk for Enterprise Risk Management." The comment period is open through August 6, 2021.
  • NIST has released a draft version of NISTIR 8374 - Cybersecurity Framework Profile for Ransomware Risk Management . This profile can be used as a guide to managing the risk of ransomware events. Please submit your comments by July 9th.
  • To highlight our ongoing international engagement, we’ve collected a series of videos that show how our partners across the world are looking at various cybersecurity and privacy issues that we at NIST are also tracking. Check these videos out  HERE !
  • Getting started using the Cybersecurity Framework just got easier with this new  Quick Start Guide !
  • RSA Conference 2021 was unique this year as it was a virtual experience, but it still successfully brought together the cybersecurity community with well-attended sessions led by NIST experts—session topics included: AI-enabled technology, data breaches, telehealth cybersecurity, PNT services, and IoT. For a full list of our 2021 RSAC sessions, see: https://www.nccoe.nist.gov/events/rsa-conference-2021.
  • The International Organization for Standardization (ISO), in conjunction with the International Electrotechnical Commission (IEC), has published  ISO/IEC 27110:  Information technology, cybersecurity and privacy protection — Cybersecurity framework development guidelines . This document specifies guidelines for developing a cybersecurity framework. The guidelines specify that all cybersecurity frameworks should have the following concepts: Identify, Protect, Detect, Respond, Recover. 
  • NIST is pleased to announce the release of  NISTIR 8323 Foundational PNT Profile: Applying the Cybersecurity Framework for the Responsible Use of Positioning, Navigation, and Timing (PNT) Services . The PNT Profile was created by using the NIST Cybersecurity Framework and can be used as part of a risk management program to help organizations manage risks to systems, networks, and assets that use PNT services.
  • Check out Kevin Stine’s latest blog ( 2021: What’s Ahead from NIST in Cybersecurity and Privacy? ) which highlights NIST's decision to focus on nine priority areas over the next several years.
  • Check out  NISTIR 8286A (Draft) - Identifying and Estimating Cybersecurity Risk for Enterprise Risk Management (ERM) , which provides a more in-depth discussion of the concepts introduced in the  NISTIR 8286  and highlights that cybersecurity risk management (CSRM) is an integral part of ERM.
  • NIST is pleased to announce the release of NISTIRs  8278  &  8278A  for the  Online Informative References Program . These reports focus on 1) OLIR program overview and uses (NISTIR 8278), and 2) submission guidance for OLIR developers (NISTIR 8278A).
  • NIST is pleased to announce the release of  NISTIR 8323 (Draft) Cybersecurity Profile for the Responsible Use of Positioning, Navigation, and Timing (PNT) Services . The comment period is open through November 23, 2020 with instructions for submitting comments available  HERE .
  • NIST just published  NISTIR 8286, Integrating Cybersecurity and Enterprise Risk Management (ERM) . This report promotes greater understanding of the relationship between cybersecurity risk management and ERM, and the benefits of integrating those approaches. 
  • Check out NIST’s new  Cybersecurity Measurements for Information Security  page!
  • Check out the Cybersecurity Framework’s  Critical Infrastructure Resource  page, where we added the new  Version 1.1 Manufacturing Profile .
  • On September 22-24, 2020, the IAPP will host a  virtual workshop  on the development of a workforce capable of managing privacy risk. NIST will join the IAPP to lead working sessions where stakeholders can share feedback on the roles, tasks, knowledge, and skills that are necessary to achieve the Privacy Framework’s outcomes and activities.
  • NIST hosted the NIST Profile on Responsible Use of Positioning, Navigation, and Timing (PNT) Services virtual workshop on September 15-16, 2020. To learn more about this event, please visit the event homepage  HERE .
  • Check out the latest two draft NISTIRs  8278  &  8278A  for the  Online Informative References Program . The draft reports focus on 1) OLIR program overview and uses (NISTIR 8278), and 2) submission guidance for OLIR developers (NISTIR 8278A).
  • Thank you to those who submitted comments on the  2nd Draft of NISTIR 8286, Integrating Cybersecurity and Enterprise Risk Management (ERM) . 
  • The latest blog,  Keeping the Lights On , by Ron Ross has now been posted!
  • Check out the latest webinar -  The Missing Link: Integrating Cybersecurity and ERM  - to learn how a panel of experts has used ERM principles in leading cybersecurity frameworks and methods to bring cybersecurity risks into context at the enterprise level.
  • Check out the Cybersecurity Framework Critical Infrastructure Resources newest addition, Federal Energy Regulatory Commission’s Cybersecurity Incentives Policy White Paper (DRAFT) , a white paper on potential incentives to encourage utilities to go above and beyond mandated cybersecurity measures.
  • New Success Stories demonstrate how several diverse organizations all leverage the Cybersecurity Framework differently to improve their cybersecurity risk management.
  • We are excited to announce that the Framework has been translated into  Bulgarian !
  • Check out the  blog  by NIST’s Amy Mahn on engaging internationally to support the Framework!
  • Check out the Cybersecurity Framework International Resources page, where we added a new resource category (Additional Guidance) and another resource (The Coalition to Reduce Cyber Risk's Seamless Security: Elevating Global Cyber Risk Management Through Interoperable Frameworks).
  • NIST has released  Draft NISTIR 8286,  Integrating Cybersecurity and Enterprise Risk Management (ERM) , for public comment. This report promotes greater understanding of the relationship between cybersecurity risk management and ERM, and the benefits of integrating those approaches. The public comment period closes on April 20, 2020. See the  publication details  for a copy of the draft and instructions for submitting comments.
  • NIST has published  NISTIR 8170,  Approaches for Federal Agencies to Use the Cybersecurity Framework . It provides guidance on how the  Cybersecurity Framework  can be used in the U.S. Federal Government in conjunction with the current and planned suite of NIST security and privacy risk management publications.  
  • Given the growing global concern over the spread of the coronavirus (COVID-19), it is in the best interest of the attendees, speakers, and staff to cancel this year’s NIST Advancing Cybersecurity Risk Management Conference. Please stay tuned for future opportunities to engage, including potential virtual events.  
  • A  draft revision of NISTIR 8183 , the Cybersecurity Framework (CSF) Manufacturing Profile, has been developed that includes the subcategory enhancements established in NIST's  Framework Version 1.1 .  The public comment period for this document ends May 4, 2020.
  • Thank you to all who attended #RSAC2020 and had a chance to chat/interact with our team #NISTatRSAC! If you were unable to attend, be sure to check out the NCCoE session recaps:  https://www.nccoe.nist.gov/events/rsa-conference-2020   
  • In case you missed it, check out the recording of the " Promoting Cyber Interoperability: The Path Forward " event hosted by CSIS
  • Version 1.0 of the voluntary @NIST #PrivacyFramework was just released! Check it out and consider adopting today.
  • Consider registering for the Privacy Framework Webinar on January 29th, which will discuss its relationship with the Cybersecurity Framework. Also consider the upcoming NICE Webinar, also on January 29th, which will discuss learning principles for cybersecurity practice.
  • Thank you to those who participated in the December 10th  SMB Webinar . For those who missed it, the recording is now available!  
  • Check out the latest blog on Framework engagement with the international community  HERE !
  • Our newest Success Story comes from the Israel National Cyber Directorate; check it out HERE!
  • Save the Date: NIST plans to host a workshop on Cybersecurity Online Informative References at the National Cybersecurity Center of Excellence (NCCoE), 9700 Great Seneca Highway, Rockville, Maryland on December 3rd, 2019. Click here for the conference notice!
  • National Cybersecurity Awareness Month (NCSAM) 2019 has now come to a close. At NIST, we worked throughout the month of October to celebrate cybersecurity through awareness of our publications and work, news, and special events. Thank you for celebrating right along with us!
  • OAS and AWS recently released a  White Paper  to Strengthen Cybersecurity Capacity in the Americas through the NIST Cybersecurity Framework
  • On August 16-17, Amy Mahn from the Applied Cybersecurity Division participated in a workshop organized by the International Trade Administration (ITA) on “Facilitating Trade through Adherence to Globally-Recognized Cybersecurity Standards and Best Practices” as part of the Asia-Pacific Economic Cooperation (APEC) Senior Officials Meeting in Puerto Varas, Chile.
  • Amy Mahn, International Policy Specialist at NIST, stresses the importance of international collaboration and alignment for the Cybersecurity Framework effort in the new article, “Picking up the Cybersecurity Framework’s Pace Internationally.” See:  https://www.nist.gov/cyberframework/picking-frameworks-pace-internationally . 
  • At the  U.S. Chamber's Cybersecurity Series  in Seattle on June 19th, NIST's Adam Sedgewick discussed how small businesses can put the Framework to use in  managing cybersecurity risks. 
  • A draft implementation guide ( NISTIR 8183A ) for the Cybersecurity Framework  Manufacturing Profile  Low Security Level has been developed for managing cybersecurity risk for manufacturers.
  • We are excited to announce that the Framework has been translated into  Portuguese !
  • Roadmap for Cybersecurity Framework Version 1.1 has just been released, check it out  HERE !
  • NISTIR 8204 has now been released, check it out HERE!
  • The recording of our April 26th webinar:  " Next Up! Cybersecurity Framework Webcast: A Look Back, A Look Ahead " is now available  HERE .
  • Version 1.1 of the Baldrige Cybersecurity Excellence Builder has just been released, check it out  HERE !
  • The NIST director's  remarks on Cybersecurity and Privacy updates  at RSA are now available
  • Come check us out at RSA !
  • Check out our new infographic which highlights the impact the Framework has had across industry.
  • Happy Anniversary!  It has been five years since the release of the Framework for Improving Critical Infrastructure Cybersecurity and organizations across all sectors of the economy are creatively deploying this voluntary approach to better management of cybersecurity-related risks.
  • The  Framework  has now been downloaded more than half a million times, with Version 1.1 eclipsing over a quarter million downloads in just over nine months!
  • New  Success Stories  demonstrate how several diverse organizations all leverage the Cybersecurity Framework differently to improve their cybersecurity risk management.
  • With over 900 registrants and a packed  agenda , the  Cybersecurity Risk Management Conference  in Baltimore, MD was a great success! If you haven't already, please let us know what you think about the conference through the participant survey and Guidebook ratings. Presentation slides will be made available in the coming weeks, stay tuned. 
  • The video recording of the  "Next Up!" Webcast  which focused on recent multi-sector work-products that exemplify best practices for cybersecurity risk management incorporating the Framework is now available. 
  • In just six months since its April 2018 release, V1.1 of the Cybersecurity Framework already has been downloaded over 205,000 times. That compares with approximately 262,000 total downloads of V1.0 over four years!
  • We are getting close to the  Cybersecurity Risk Management Conference ! 
  • Registration for the 2018  NIST Cybersecurity Risk Management Conference  -- to be held November 7-9, 2018, at the Renaissance Baltimore Harborplace Hotel, in Baltimore, Maryland -- is now open. Sponsored by NIST, the three-day conference is expected to attract leaders from industry, academia, and government at all levels, including international attendees.   
  • A recording of the July 9th webcast: 'Lessons Learned in Using the Baldrige Cybersecurity Excellence Builder with the Cybersecurity Framework' is now available. It can be found  HERE .
  • Save the Date: NIST plans to host the Cybersecurity Risk Management Conference -- likely in Baltimore, MD -- during the week of November 4th. This event will expand on previous Framework workshops and incorporate other elements of cybersecurity risk management. Stay tuned! 
  • Version 1.1 of the Framework  was published on April 16, 2018. The document has evolved to be even more informative, useful, and inclusive for all kinds of organizations.  Version 1.1  is fully compatible with Version 1.0 and remains flexible, voluntary, and cost-effective. Among other refinements and enhancements, the document provides a more comprehensive treatment of identity management and additional description of how to manage supply chain cybersecurity.  
  • The recorded version of the April 27th webcast is available.
  • Success Stories regarding Framework use / Implementation have been added to the website! Our first Success Story comes from the University of Chicago, check it out  HERE !
  • Start Using the Baldrige Cybersecurity Tool: Here's Help. First, the Information Security Team of the University of Kansas Medical Center (KUMC) began using the Baldrige Cybersecurity Excellence Builder (BCEB), which is a voluntary self-assessment tool based on the Cybersecurity Framework. Learn about their experience at: https://www.nist.gov/blogs/blogrige/start-using-baldrige-cybersecurity-tool-heres-help  Also, the next Baldrige Cybersecurity Excellence Builder Workshop, a practical, interactive workshop on using the BCEB, will be held April 8, 8:30 am-3:30 pm, in Baltimore, MD. Details at: https://www.nist.gov/baldrige/qe/baldrige-cybersecurity-excellence-builder-workshop
  • RFC comments received on Draft 2 of Framework Version 1.1 and the Roadmap are now being reviewed. All responses will be published publicly in the coming weeks. NIST appreciates your feedback and, as always, any additional comments can be directed to cyberframework [at] nist [dot] gov.
  • Two December 2017  webcasts  about Framework basics and the proposed updates to Framework and Roadmap are now available for playback.
  • A mapping of the Framework Core to NIST SP 800-171 Revision 1 has recently been published. This can be found in Appendix D of the publication.
  • A blog entry on protecting critical infrastructure has been posted.  A Framework for Protecting our Critical Infrastructure .
  • Update on the Cybersecurity Framework  July 1, 2015
  • Update on the Cybersecurity Framework  December 5, 2014
  • Update on the Cybersecurity Framework  July 31, 2014
  • Update on Development of the Cybersecurity Framework  January 15, 2014
  • Update on Development of the Cybersecurity Framework  December 4, 2013
  • Update on Development of the Cybersecurity Framework July 24, 2013


Introduction to the North West

The updated Research Framework: project methodology

  • Appendix 1: Table of steering group ...
  • Appendix 2: May 2017 Conference ...
  • Appendix 3: April 2018 Conference ...

This resource was developed as part of a national strategy to create a series of self-sustaining regional historic environment research frameworks for England. It builds on the original North West Region Archaeological Research Framework which was published in 2006 and 2007 in two volumes: Resource Assessment and Objectives. Since then a large number of projects have taken place which have a bearing on the framework. These, together with changes in the way the resource is managed and the advancement of new analytical techniques, led to Historic England funding an update of the Research Framework. But there are key differences for this updated version: it is called the North West Regional Research Framework for the Historic Environment to reflect a greater engagement with the historic built environment, and has been transformed into an interactive, updatable and sustainable web-based resource.

The updated Research Framework has been prepared in collaboration with stakeholders from across the historic environment spectrum under the guidance of a Steering Group comprising regional, period and subject specialists (see Appendix 1). In its new form, the Research Framework comprises:

  • updated period-based resource assessments representing the current state of knowledge of the historic environment in the North West of England
  • research questions organised by period and themes
  • measures for advancing knowledge and understanding set out as supporting statements and research strategies
  • a comprehensive bibliography and list of online sources.

The Research Framework can be used to:

  • help define research questions and strategies in project designs submitted to research funding bodies
  • prepare Written Schemes of Investigation in support of the planning process
  • provide background information for use during research and analysis

The first stage of the project was to compile an updated resource assessment that aimed to supplement the dataset collated in 2006, rather than supersede the earlier work. This consisted of a review by period specialists of key projects and research findings from the last eleven years, covering the period up to 2018-19, together with an overview of historic buildings analysis and research. To start the process, local government archaeological advisers provided an overview of key projects and resources in their geographic area, broken down into periods. Finds Liaison Officers with the Portable Antiquities Scheme also offered relevant data to the period specialists to inform their overviews. This information was then collated and provided to period specialists to prepare an updated summary based on the period chapter sub-headings used in the original publication. The period chapters and specialists are: Prehistory (Andrew Myers and Sue Stallibrass), Late Prehistory (Mike Nevell), Roman (Rob Philpott), Early Medieval (Rachel Newman), Later Medieval (Carolanne King), Post-Medieval (Ian Miller), and Industrial & Modern (Mike Nevell), with an additional chapter on built heritage (Marion Barter).

The challenge of presenting a summary for the historic built environment is considerable. This is a vast subject area which has not been tackled before on a regional basis, although there have been a number of key overviews by county, local authority or building type. Additionally, the span of time needs to go back beyond 2006, as this subject was not specifically covered in the previous publication. It was felt that, within the constraints of the project budget and timescale, it would be best to put forward a broad overview, with an opportunity for the historic environment community to offer information to fill in the gaps and provide more detail.

Initial results of these studies were presented at a Resource Assessment conference in Lancaster on 5th May 2017 (Appendix 2). Prior to this, the North West branch of the Council for British Archaeology (CBA NW) set up a blog site to bring the project to the attention of the community practising in, or interested in, the region's historic environment: https://archaeologynw.wordpress.com/standards-and-guidance/. The period specialist summaries were uploaded in draft form, with the blog being used to make them available for consultation and feedback, giving the historic environment community a chance to submit comments and relevant additional information. In addition, a wide-ranging bibliography was produced of publications relating to research and investigations on the historic environment in the North West. This was divided into the period/historic buildings chapters but is also presented as a whole. The bibliography was compiled by Dr Michael Nevell and individual period specialists, and was a work in progress throughout the life of the project, with additional entries being submitted through the CBA NW blog site. The importance of the Portable Antiquities Scheme was recognised by having a dedicated presentation at the Resource Assessment conference.

The second stage of the project saw thematic and period workshops delivered in autumn 2017 and early 2018 to re-evaluate and update the Research Framework. This took account of changes in our understanding of the historic environment and the way it is managed. Each period had a dedicated workshop, along with thematic sessions on Built Heritage, Community Engagement, and Strategy.

The workshops took place across the North West, as follows:

  • Early Medieval and Late Medieval – Quakers' Meeting House, Penrith – 12th September 2017
  • Post-Medieval and Industrial & Modern – Liverpool Museum – 25th September 2017
  • Prehistory and Roman – Chester Cathedral – 19th October 2017
  • Built Heritage – Old Fire Station, Salford University – 2nd November 2017
  • Strategy – Masonic Hall, Preston – 16th February 2018
  • Community – Quakers' Meeting House, Penrith – 16th March 2018

They were organised by Penny Dargan-Makin, Rachael Reader, and Kirsty Lloyd from the Centre for Applied Archaeology, with support from the Period Co-ordinators and the NWRRF Steering Group.

The workshops took the following format:

  • The room was set up with the 2007 agenda displayed on A1 sheets to facilitate post-it notes and written comments
  • A short introduction to the NWRRF project
  • An overview of the period and key pointers by the period specialist, informed by the resource assessment update
  • A post-it note session in which participants were given the chance to comment on the suitability of the original agenda items and sub-headings, research questions, strategic aims and priorities
  • Attendees split into two groups, each assigned half the A1 agenda sheets, to look through and comment on the previous agenda and post-it notes
  • Group leaders' round-up of key comments

The consensus was that the format worked well and provided everyone with an opportunity to contribute. The sessions were generally well attended, and whilst the process was intensive, it was thought-provoking and produced good results. Several key changes arose from the workshops:

  • the existing Prehistoric research agenda was found wanting and it was agreed that it should be split into Early Prehistory and Late Prehistory (with a middle Bronze Age split);
  • Built Heritage had its own dedicated workshop, where it was agreed that research questions should be integrated into the period chapters;
  • some themes needed adding or re-wording to properly reflect the subject material, e.g. Leisure and Recreation was identified as a theme for the Industrial & Modern period;
  • the original research agenda has been reformatted into questions and strategies/supporting statements, allowing for review and refinement;
  • the community workshop demonstrated the importance of research by volunteer groups and individuals, many of them with specialist knowledge, but also highlighted the need for support networks and training;
  • it became evident at the Strategy workshop that there were many research questions that cut across several or all periods, and these needed to be separated out into a general research theme;
  • the Strategy workshop also identified key themes or points for each period, and these are reviewed in the concluding chapter.

A final conference in spring 2018 (Appendix 3) presented the changes to the Research Strategy, with the new format of questions and supporting statements. Speakers illustrated this by linking case studies of recent research projects to updated research questions. To complete the project, all feedback comments on the resource chapters and revised questions derived from the workshops and the CBA blog site were collated by Norman Redhead and Kirsty Lloyd, and stakeholders were re-consulted, before Dr Sam Rowe uploaded them on to the interactive Research Framework wiki platform prepared under the auspices of Historic England. Built heritage questions arising from the workshop were allocated to the relevant period. It was agreed that there should be a point-in-time publication, and CBA NW kindly offered to take this forward. Historic England provides maintenance and support for the wiki platform, which is designed to host regional research frameworks across the country. Moderation and promotion of the content is the responsibility of individual regions and their steering groups. The managed website has the facility for new projects and data to be uploaded, so that in future the Research Framework can be kept up to date.

Key points made in discussion at the workshops have been set out at the beginning of each resource assessment chapter. Effectively, these form strategic observations or objectives for each period. The concluding chapter summarises these and presents an overview of the thematic workshops (built heritage, community and strategy) as well as setting out recommendations arising from the project.

Appendix 1: Table of steering group members

Appendix 2: May 2017 Conference Programme

Friday 5th May – The Storey Institute, Lancaster

MORNING SESSION

9.30am    Registration with tea and coffee
10.00am   Welcome from the Chair – Mike Nevell
10.05am   The national context – Dan Miles
10.20am   The North West Regional Research Framework: Setting the Scene – Norman Redhead
10.40am   Prehistoric Overview – Sue Stallibrass
11.10am   Comfort break
11.30am   Roman Overview – Rob Philpott
12.00pm   Portable Antiquities Scheme key finds – Vanessa Oakden
12.30pm   Early Medieval Overview – Rachel Newman
1.00pm    Lunch

AFTERNOON SESSION

2.00pm    Later Medieval Overview – Carolanne King
2.30pm    Post-Medieval Overview – Ian Miller
3.00pm    Tea break
3.20pm    Industrial & Modern Overview – Mike Nevell
3.50pm    Historic Buildings – Marion Barter
4.20pm    Questions, closing remarks, next steps – Mike Nevell
4.30pm    Finish

Appendix 3: April 2018 Conference Programme

CBA North West Spring Conference

North West Regional Research Framework for the Historic Environment:

An updated research strategy

Saturday 28th April – The Old Fire Station, University of Salford

9.30am    Arrival and refreshments
10.00am   Welcome
10.05am   Introduction to Research Frameworks – Mike Nevell
10.25am   The North West project – transforming the research agenda: Questions and Strategies – Norman Redhead

Applied case studies:

10.50am   Early Medieval: Early Medieval Burials – Rachel Newman
11.20am   Comfort break
11.30am   Early Prehistory: Stainton West, a persistent place on the River Eden: new insights into northern hunter-gatherer landscapes and the Neolithic transition – Fraser Brown
12.00pm   Later Prehistory: Hillforts and Husbandry: West Cheshire in the first millennium BC – Dan Garner
12.30pm   Roman: Finds from the North West – Rob Philpott
1.00pm    Lunch
2.00pm    Medieval: Paget's Disease of Bone at Norton Priory: using an archaeological collection to help modern medical research – Lynn Smith
2.30pm    Post-Medieval: North West England in the Post-Medieval Period: Archaeological Research since 2006 – Ian Miller
3.00pm    Tea break
3.20pm    Industrial & Modern: Workers' Housing Investigations – Chris Wild
3.50pm    Questions, summing up and next steps – Mike Nevell
4.45pm    Close


Database 23ai: Feature Highlights

Learn how Oracle Database 23ai brings AI to your data, making it simple to power app development and mission critical workloads with AI. Each week, we'll share a new feature of Oracle Database 23ai with examples so you can get up and running quickly. Save this page and check back each week to see new highlighted features.


Larry Ellison and Juan Loaiza discuss the GenAI strategy behind Oracle Database 23ai.


Oracle Database 23ai Feature highlights for developers

Check out some of the features we’ve built with developers in mind:

AI Vector Search brings AI to your data by letting you build generative AI pipelines using your business data, directly within the database. Easy-to-use native vector capabilities let your developers build next-gen AI applications that combine relational database processing with similarity search and retrieval augmented generation. Running vector search directly on your business data eliminates data movement as well as the complexity, cost, and data consistency headaches of managing and integrating multiple databases.
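
As a minimal sketch of the idea, assuming a hypothetical docs table with a 768-dimension embedding column (VECTOR, VECTOR_DISTANCE, and TO_VECTOR are 23ai SQL features):

```sql
-- Store embeddings alongside business data (table/column names are examples).
CREATE TABLE docs (
  id        NUMBER PRIMARY KEY,
  body      VARCHAR2(4000),
  embedding VECTOR(768, FLOAT32)
);

-- Similarity search: the five documents closest to a query embedding,
-- bound here as a textual vector literal.
SELECT id, body
FROM   docs
ORDER  BY VECTOR_DISTANCE(embedding, TO_VECTOR(:query_vec), COSINE)
FETCH  FIRST 5 ROWS ONLY;
```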

Other features developers should get to know include:

  • JSON Relational Duality
  • Property Graph

Previously highlighted features

  • Administration/Performance
  • Languages/Drivers
  • SQL/Data Types
  • Transactions/Microservices


Application availability—zero downtime for database clients

Transparent Application Continuity shields C/C++, Java, .NET, Python, and Node.js applications from the outages of underlying software, hardware, communications, and storage layers...


Automatic Transaction Rollback

If a transaction does not commit or rollback for a long time while holding row locks, it can potentially block other high-priority transactions...


DBMS_Search

DBMS_SEARCH implements Oracle Text ubiquitous search. DBMS_SEARCH makes it very easy to create a single index over multiple tables and views...


Fast Ingest enhancements

We've added enhancements to Memoptimized Rowstore Fast Ingest with support for partitioning, compressed tables, fast flush using direct writes, and direct in-memory column store population support...


Raft-based replication in Globally Distributed Database

Oracle Globally Distributed Database introduced the Raft replication feature in Oracle Database 23c. This allows us to achieve very fast (sub 3 seconds) failover with zero data loss in case of a node or a data center outage...


  • SQL Analysis Report

This week we’re turning the spotlight on SQL Analysis Report, an easy-to-use feature that helps developers write better SQL statements...


Transparent Application Continuity shields C/C++, Java, .NET, Python, and Node.js applications from the outages of underlying software, hardware, communications, and storage layers. With Oracle Real Application Clusters (RAC), Active Data Guard (ADG), and Autonomous Database (Shared and Dedicated), Oracle Database remains accessible even when a node or a subset of the RAC cluster fails or is taken offline for maintenance.

Oracle Database 23c brings many new enhancements to Application Continuity, including support for batch applications, for example open cursors (also called session state stable cursors).

  • HikariCP Best Practices for Oracle Database and Spring Boot
  • Auditing Enhancements in Oracle Database 23c
  • How to Make Application Continuity Most Effective in Oracle Database 23c
  • Oracle .NET Application Continuity — Getting Started

Documentation

  • ODP.NET and Application Continuity
  • Application Continuity for Java
  • OCI and Application Continuity


If a transaction does not commit or rollback for a long time while holding row locks, it can potentially block other high-priority transactions. This feature allows applications to assign priorities to transactions, and administrators to set timeouts for each priority. The database will automatically rollback a lower-priority transaction and release the row locks held if it blocks a higher-priority transaction beyond the set timeout, allowing the higher-priority transaction to proceed.

Automatic Transaction Rollback reduces the administrative burden while also helping to maintain transaction latencies/SLAs on higher-priority transactions.
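
A hedged sketch of the configuration surface, using the 23c parameters described in the documentation (the timeout values and the session priority are illustrative):

```sql
-- Administrator: automatically roll back lower-priority blockers after
-- these waits (in seconds) for high- and medium-priority transactions.
ALTER SYSTEM SET priority_txns_high_wait_target = 10;
ALTER SYSTEM SET priority_txns_medium_wait_target = 60;

-- Application session: mark its transactions as low priority.
ALTER SESSION SET txn_priority = 'LOW';
```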

  • Automatic Transaction Rollback in Database 23c with high-, medium-, and low-priority transactions
  • Automatic Transaction Rollback in Oracle Database 23c—Is this the end of Row Lock Contention in Oracle Database?
  • Managing Transactions


DBMS_SEARCH implements Oracle Text ubiquitous search. DBMS_SEARCH makes it very easy to create a single index over multiple tables and views. Just create a DBMS_SEARCH index and add tables and views. All searchable values, including VARCHAR, CLOB, JSON, and numeric columns will be included in the index, which is automatically maintained as the table or view contents change.
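
A minimal sketch of the flow (index and table names are examples; see the DBMS_SEARCH package documentation for full signatures and querying options):

```sql
-- Create one ubiquitous search index and add several sources to it.
BEGIN
  DBMS_SEARCH.CREATE_INDEX('APP_SEARCH');
  DBMS_SEARCH.ADD_SOURCE('APP_SEARCH', 'CUSTOMERS');
  DBMS_SEARCH.ADD_SOURCE('APP_SEARCH', 'ORDERS');
END;
/

-- The index is exposed as a queryable object over the indexed documents.
SELECT metadata
FROM   app_search
WHERE  CONTAINS(data, 'acme') > 0;
```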

  • Oracle 23c DBMS_SEARCH—Ubiquitous Search
  • Easy Text Search over Multiple Tables and Views with DBMS_SEARCH in Oracle Database 23c
  • DBMS_SEARCH Package
  • Performing Ubiquitous Database Search with the DBMS_SEARCH APIs


We've added enhancements to Memoptimized Rowstore Fast Ingest with support for partitioning, compressed tables, fast flush using direct writes, and direct in-memory column store population support. These enhancements make the Fast Ingest feature easier to incorporate in more situations where fast data ingest is required. Now Oracle Database provides better support for applications requiring fast data ingest capabilities. Data can be ingested and then processed all in the same database. This reduces the need for special loading environments and thus reduces complexity and data redundancy.
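
A brief sketch, assuming a hypothetical sensor_readings table (MEMOPTIMIZE FOR WRITE and the MEMOPTIMIZE_WRITE hint are the documented Fast Ingest mechanisms):

```sql
-- A table enabled for Memoptimized Rowstore Fast Ingest.
CREATE TABLE sensor_readings (
  sensor_id NUMBER,
  reading   NUMBER,
  ts        TIMESTAMP
) MEMOPTIMIZE FOR WRITE;

-- Buffered, deferred inserts for high-throughput ingest.
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_readings
VALUES (42, 98.6, SYSTIMESTAMP);
```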

  • Oracle Database 23c Fast Ingest Enhancements
  • Memoptimized Rowstore—Fast Ingest Updates
  • Enabling High Performance Data Streaming with the Memoptimized Rowstore


Oracle Globally Distributed Database introduced the Raft replication feature in Oracle Database 23c. This allows us to achieve very fast (sub 3 seconds) failover with zero data loss in case of a node or a data center outage. Raft replication uses a consensus-based commit protocol and is configured declaratively by specifying the replication factor. All shards in a Distributed Database act as leaders and followers for a subset of data. This enables an active/active/active symmetric distributed database architecture where all shards serve application traffic.

This helps improve availability with zero data loss, simplify management, and optimize hardware utilization for Globally Distributed Database environments.

  • Oracle Globally Distributed Database supports Raft replication in Oracle Database 23c
  • Using Raft replication in Oracle Globally Distributed Database


This week we’re turning the spotlight on SQL Analysis Report, an easy-to-use feature that helps developers write better SQL statements. SQL Analysis Report reports common issues with SQL statements, particularly those that can lead to poor SQL performance. It’s available in DBMS_XPLAN and SQL Monitor.

  • SQL Analysis Report in Oracle Database 23c


Blockchain tables

Blockchain and immutable tables, available since the release of Oracle Database 19c, use crypto-secure methods to help protect data from tampering or deletion by external hackers and rogue or compromised insiders...


Schema privileges

Oracle Database now supports schema privileges in addition to existing object, system, and administrative privileges...


SQL Firewall

Use SQL Firewall to detect anomalies and prevent SQL injection attacks. SQL Firewall examines all SQL, including session context information such as IP address and OS user...


DB_DEVELOPER_ROLE

Oracle Database 23c includes the new role DB_DEVELOPER_ROLE, which provides an application developer with all the necessary privileges to design, implement, debug, and deploy applications on Oracle Databases...


Blockchain and immutable tables, available since the release of Oracle Database 19c, use crypto-secure methods to help protect data from tampering or deletion by external hackers and rogue or compromised insiders. This includes insert-only restrictions that prevent updates or deletions (even by DBAs), cryptographic hash chains to enable verification, signed table digests to detect any large-scale rollbacks, and end user signing of inserted rows using their private keys. Oracle Database 23c introduces many enhancements, including support for logical replication via Oracle GoldenGate and rolling upgrades using Active Data Guard, support for distributed transactions that involve blockchain tables, efficient partition-based bulk dropping for expired rows, and performance optimizations for inserts/commits.

This release also introduces the ability to add/drop columns without impacting cryptographic hash chaining, user-specific chains and table digests for filtered rows, delegate-signing capability, and database countersigning. It also expands crypto-secure data management to regular tables by enabling an audit of historical changes to a non-blockchain table via Flashback archive defined to use a blockchain history table.

Great for built-in audit trail or journaling use cases, these capabilities can be used for financial ledgers, payments history, regulated compliance tracking, legal logs, and any data representing assets where tampering or deletions could lead to significant legal, reputation, or financial consequences.
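
As an illustrative sketch (the table name, columns, and retention periods are examples, not recommendations):

```sql
-- An append-only ledger whose rows are hash-chained and tamper-resistant.
CREATE BLOCKCHAIN TABLE payments_ledger (
  payment_id NUMBER,
  amount     NUMBER,
  paid_on    DATE
)
NO DROP UNTIL 31 DAYS IDLE            -- table cannot be dropped while active
NO DELETE UNTIL 365 DAYS AFTER INSERT -- rows locked against deletion
HASHING USING "SHA2_512" VERSION "v2";
```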

  • Blockchain Tables in Oracle Database 21c (4:15)
  • Database In-Memory and Blockchain tables (55:42)
  • Reclaiming unused space in Oracle Database 23c with 'tablespace_shrink'
  • Blockchain Table Enhancements in Oracle Database 23c
  • Immutable Table Enhancements in Oracle Database 23c
  • Why Oracle implemented blockchain in Oracle Database 23c
  • Prevent and Detect Fraud Using Blockchain Tables on Oracle Autonomous Database
  • Managing Blockchain Tables
  • Managing Immutable Tables


Oracle Database now supports schema privileges in addition to existing object, system, and administrative privileges. This feature improves security by simplifying authorization for database objects to better implement the principle of least privilege and keep the guesswork out of who should have access to what.
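
For example (the principal and schema names are illustrative):

```sql
-- One grant covers all current and future tables in the HR schema,
-- instead of object-by-object grants.
GRANT SELECT ANY TABLE ON SCHEMA hr TO app_reader;
GRANT EXECUTE ANY PROCEDURE ON SCHEMA hr TO app_reader;
```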

  • Security made so much SIMPLER in 23c! (3:55)
  • So much simpler security management in 23c (1:18)
  • ACE Tim Hall: Schema privileges in Oracle Database 23c
  • ACE Peter Finnigan: Oracle 23c schema-level grants
  • ACE Gavin Soorma: Oracle 23c schema-level privileges and schema-only users
  • Schema-level privilege grants with Database 23c

Sample code

  • Tutorial on Database 23c schema privilege grants
  • Configuring Privilege and Role Authorization


Use SQL Firewall to detect anomalies and prevent SQL injection attacks. SQL Firewall examines all SQL, including session context information such as IP address and OS user. Embedded into the database kernel, SQL Firewall logs and (if enabled) blocks unauthorized SQL, ensuring that it can’t be bypassed. By enforcing an allow-list of SQL and approved session contexts, SQL Firewall can prevent many zero-day attacks and reduce the risk of credential theft or abuse.
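
A hedged sketch of the capture-then-enforce workflow using the DBMS_SQL_FIREWALL package (the user name is an example):

```sql
-- 1) Learn the application's normal SQL.
BEGIN
  DBMS_SQL_FIREWALL.ENABLE;
  DBMS_SQL_FIREWALL.CREATE_CAPTURE(username => 'APP_USER');
END;
/

-- 2) After a representative workload has run, switch to enforcement.
BEGIN
  DBMS_SQL_FIREWALL.STOP_CAPTURE(username => 'APP_USER');
  DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST(username => 'APP_USER');
  DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST(
    username => 'APP_USER',
    enforce  => DBMS_SQL_FIREWALL.ENFORCE_SQL,
    block    => TRUE);   -- log and block SQL outside the allow-list
END;
/
```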

  • SQL Firewall now built into Oracle Database 23c
  • Oracle Database 23c new feature—SQL Firewall by ACE Director Gavin Soorma
  • The three new PL/SQL packages in Oracle Database 23c by ACE Director Julian Dontcheff
  • SQL Firewall in Oracle Database 23c by ACE Director Tim Hall
  • SQL Firewall, Oracle Database 23c by database security expert Pete Finnigan: Part 1 , Part 2 , Part 3

Hands-on tutorials

  • Oracle SQL Firewall sample demo scripts
  • Using SQL Firewall


Oracle Database 23c includes the new role DB_DEVELOPER_ROLE, which provides an application developer with all the necessary privileges to design, implement, debug, and deploy applications on Oracle Databases. By using this role, administrators no longer have to guess which privileges may be necessary for application development.
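
For example (the account details are illustrative):

```sql
CREATE USER dev_anna IDENTIFIED BY "ExamplePwd_23c" QUOTA UNLIMITED ON users;
GRANT DB_DEVELOPER_ROLE TO dev_anna;   -- one grant instead of guesswork
```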

  • DB_DEVELOPER_ROLE in Oracle Database 23c
  • Comparing the RESOURCE, CONNECT, and DEVELOPER roles
  • Use of the DB_DEVELOPER_ROLE Role for Application Developers


Boolean data type

Oracle Database now supports the ISO SQL standard-compliant Boolean data type. This enables you to store True and False values in tables and use Boolean expressions in SQL statements...


  • Direct Joins for UPDATE and DELETE Statements

Oracle Database now allows you to join the target table in UPDATE and DELETE statements to other tables using the FROM clause. These other tables can limit the rows that are changed or be the source of new values...


GROUP BY column alias

You can now use column alias or SELECT item position in GROUP BY, GROUP BY CUBE, GROUP BY ROLLUP, and GROUP BY GROUPING SETS clauses. Additionally, the HAVING clause supports column aliases...


IF [NOT] EXISTS

DDL object creation, modification, and deletion in Oracle Database now supports the IF EXISTS and IF NOT EXISTS syntax modifiers...


INTERVAL data type aggregations

Oracle Database 23c makes it easier for developers to calculate totals and averages over INTERVAL values...


RETURNING INTO clause

The RETURNING INTO clause for INSERT, UPDATE, and DELETE statements has been enhanced to report old and new values affected by the respective statement...


SELECT without FROM clause

You can now run SELECT expression-only queries without a FROM clause. This new feature improves SQL code portability and ease of use for developers.


Create SQL macros to factor out common SQL expressions and statements into reusable, parameterized constructs that can be used in other SQL statements...


  • SQL Transpiler

PL/SQL functions within SQL statements are automatically converted (transpiled) into SQL expressions whenever possible...


Table Value Constructor

The Oracle Database SQL engine now supports a VALUES clause for many types of statements...


Usage Annotations

Annotations enable you to store and retrieve metadata about database objects. They are free-form text fields applications can use to customize business logic or user interfaces...


Usage Domains

Usage Domains (sometimes called SQL domains or Application Usage Domains) are high-level dictionary objects that act as lightweight type modifiers and centrally document intended data usage for applications...


Wide tables—now 4,096 columns max

Now you can store a larger number of attributes in a single row, which may simplify application design and implementation for some applications...


Oracle Database now supports the ISO SQL standard-compliant Boolean data type. This enables you to store True and False values in tables and use Boolean expressions in SQL statements. The Boolean data type standardizes the storage of Yes and No values and makes it easier to migrate to Oracle Database.
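
A quick example (the table and data are illustrative):

```sql
CREATE TABLE feature_flags (
  name    VARCHAR2(50) PRIMARY KEY,
  enabled BOOLEAN NOT NULL
);

INSERT INTO feature_flags VALUES ('dark_mode', TRUE);
INSERT INTO feature_flags VALUES ('beta_api', FALSE);

-- A Boolean column can stand alone as a predicate.
SELECT name FROM feature_flags WHERE enabled;
```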

  • Boom! Boolean is here in 23c, and it's easy to use (1:36)
  • Oracle 23c - Unlock the Power of Boolean Data Types (0:59)
  • Boolean data type in Oracle Database 23c (Oracle-Base)
  • Oracle 23c - Tipo de Datos BOOLEAN en SQL (Spanish language)
  • Oracle 23c Boolean support in SQL
  • More Boolean features in 23c
  • Boolean data type in Oracle Database 23c (Medium)
  • SQL Boolean Data Type


Oracle Database now allows you to join the target table in UPDATE and DELETE statements to other tables using the FROM clause. These other tables can limit the rows that are changed or be the source of new values. Direct joins make it easier to write SQL to change and delete data.
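
A short sketch of both statement forms (the table names are examples):

```sql
-- Update order rows from a staging table via a direct join.
UPDATE orders o
SET    o.status = s.status
FROM   order_updates s
WHERE  o.order_id = s.order_id;

-- Delete rows that have a match in another table.
DELETE orders o
FROM   order_cancellations c
WHERE  o.order_id = c.order_id;
```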

  • UPDATE and DELETE Statements via Direct Joins in Oracle Database 23c
  • ACE Lisandro Fernigrini: Oracle Database 23c—Joins en DELETE y UPDATE
  • ACE Timothy Hall: Direct Joins for UPDATE and DELETE Statements in Oracle Database 23c


You can now use column alias or SELECT item position in GROUP BY, GROUP BY CUBE, GROUP BY ROLLUP, and GROUP BY GROUPING SETS clauses. Additionally, the HAVING clause supports column aliases. These new Database 23c enhancements make it easier to write GROUP BY and HAVING clauses, making SQL queries much more readable and maintainable while providing better SQL code portability.
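
For instance (the table and columns are examples):

```sql
SELECT EXTRACT(YEAR FROM order_date) AS order_year,
       COUNT(*)                      AS order_count
FROM   orders
GROUP  BY order_year          -- alias instead of repeating the expression
HAVING order_year >= 2020     -- aliases are also valid in HAVING
ORDER  BY order_year;
```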

  • SQL tips DBAs should know | Aliases in GROUP BY (0:59)
  • Oracle Database 23c: Simplifying Query Development with Improved GROUP BY and HAVING Clauses
  • GROUP BY Column Alias or Position


DDL object creation, modification, and deletion in Oracle Database now supports the IF EXISTS and IF NOT EXISTS syntax modifiers. This enables you to control whether an error should be raised if a given object exists or does not exist, simplifying error handling in scripts and by applications.
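
For example:

```sql
-- No error if the table already exists...
CREATE TABLE IF NOT EXISTS app_settings (
  setting_key   VARCHAR2(50) PRIMARY KEY,
  setting_value VARCHAR2(200)
);

-- ...and no error if these objects are already gone.
DROP INDEX IF EXISTS app_settings_ix;
DROP TABLE IF EXISTS obsolete_table;
```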

  • Coding Tips Developers Need to Know | Unleash the power of IF [NOT] EXISTS clause with Oracle Database 23c (1:00)
  • Improved table management in Oracle Database 23c: Introducing the “IF [NOT] EXISTS” clause
  • ACE Timothy Hall: IF [NOT] EXISTS DDL Clause in Oracle Database 23c
  • ACE Lisandro Fernigrini: Oracle Database 23c—IF [NOT] EXISTS en Sentencias DDL (Spanish language)
  • Using IF EXISTS and IF NOT EXISTS


Oracle Database 23c makes it easier for developers to calculate totals and averages over INTERVAL values. With this enhancement, you now can pass INTERVAL data types to the SUM and AVG aggregate and analytic functions.
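
For example, with an INTERVAL DAY TO SECOND column (names are illustrative):

```sql
CREATE TABLE task_runs (
  task_id  NUMBER,
  duration INTERVAL DAY TO SECOND
);

-- SUM and AVG now accept INTERVAL values directly.
SELECT task_id,
       SUM(duration) AS total_time,
       AVG(duration) AS mean_time
FROM   task_runs
GROUP  BY task_id;
```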

  • Aggregation over INTERVAL data types
  • Aggregation over INTERVAL data types in Oracle Database 23c
  • Oracle Database 23c INTERVAL data type aggregations


The RETURNING INTO clause for INSERT, UPDATE, and DELETE statements has been enhanced to report old and new values affected by the respective statement. This allows developers to use the same logic for each of these DML types to obtain values pre- and post-statement execution. Note that only UPDATE statements report both: INSERT statements don't report old values, and DELETE statements don't report new values.

The ability to obtain old and new values affected by INSERT, UPDATE, and DELETE statements as part of the SQL command’s execution offers developers a uniform approach to reading these values and reduces the amount of work the database must perform.
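
A minimal PL/SQL sketch (the table, column, and key are examples):

```sql
DECLARE
  l_old_salary NUMBER;
  l_new_salary NUMBER;
BEGIN
  UPDATE employees
  SET    salary = salary * 1.10
  WHERE  employee_id = 100
  RETURNING OLD salary, NEW salary INTO l_old_salary, l_new_salary;

  DBMS_OUTPUT.PUT_LINE(l_old_salary || ' -> ' || l_new_salary);
END;
/
```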

  • YouTube: Shorts: Check out Oracle Database 23’s new enhanced returning clause (0:55)
  • Enhancements in Oracle 23c: Introducing the New/Old Returning Clause
  • SQL UPDATE RETURN Clause Enhancements


  • Game-Changing Developer Feature (0:59)
  • SELECT without FROM Clause in Oracle Database 23c
  • Oracle Database 23c Enhanced Querying: Eliminating the “FROM DUAL” Clause
  • SELECT Without FROM Clause
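
As the resources above describe, expression-only queries no longer need FROM DUAL; for example:

```sql
SELECT SYSDATE;
SELECT 2 + 2 AS four;
```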


Create SQL macros to factor out common SQL expressions and statements into reusable, parameterized constructs that can be used in other SQL statements. SQL macros can be scalar expressions that are typically used in SELECT lists as well as WHERE, GROUP BY, and HAVING clauses. SQL macros can also be used to encapsulate calculations and business logic or can be table expressions, typically used in a FROM clause. Compared to PL/SQL constructs, SQL macros can improve performance. SQL macros increase developer productivity, simplify collaborative development, and improve code quality.
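
A hedged sketch of a table macro used as a parameterized view (the table and column names are examples):

```sql
CREATE OR REPLACE FUNCTION orders_since(p_when DATE)
  RETURN VARCHAR2 SQL_MACRO(TABLE) IS
BEGIN
  -- The returned text is expanded into the calling query at parse time.
  RETURN q'[SELECT order_id, customer_id, order_date
            FROM   orders
            WHERE  order_date >= p_when]';
END;
/

-- Used like a view, but with arguments.
SELECT * FROM orders_since(DATE '2024-01-01');
```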

  • Create reusable SQL expressions with SQL macros (1:01:29)
  • Pattern Matching + SQL Macros = Pure SQL Awesomeness! (58:03)
  • Using SQL Macros Scalar and Table Expressions
  • How to Make Reusable SQL Pattern Matching Clauses with SQL Macros
  • SQL Macros: Creating Parameterized Views
  • How to create a parameterized view in Oracle
  • SQL macros have arrived in Autonomous Database
  • How to Make SQL Easier to Understand, Test, and Maintain
  • SQL_MACRO Clause


PL/SQL functions within SQL statements are automatically converted (transpiled) into SQL expressions whenever possible. Transpiling PL/SQL functions into SQL statements can speed up overall execution time.
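
An illustrative setup (the function and table names are examples; SQL_TRANSPILER is the 23c session parameter controlling the feature):

```sql
CREATE OR REPLACE FUNCTION net_price(p_price NUMBER, p_rate NUMBER)
  RETURN NUMBER IS
BEGIN
  RETURN p_price * (1 + p_rate);
END;
/

ALTER SESSION SET sql_transpiler = 'ON';

-- Eligible calls like this can be rewritten as plain SQL expressions,
-- avoiding per-row PL/SQL context switches.
SELECT product_id, net_price(price, 0.2) FROM products;
```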

  • Automatic PL/SQL to SQL Transpiler in Oracle Database 23c
  • Automatic PL/SQL to SQL Transpiler


The Oracle Database SQL engine now supports a VALUES clause for many types of statements. This enables you to materialize rows of data on the fly by specifying them using the new syntax without relying on existing tables. Oracle Database 23c supports the VALUES clause for the SELECT, INSERT, and MERGE statements. The introduction of the new VALUES clause allows developers to write less code for ad-hoc SQL commands, leading to better readability with less effort.
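
For example (the aliases and the priorities table are illustrative):

```sql
-- Materialize rows on the fly, no table required.
SELECT *
FROM   (VALUES (1, 'Low'), (2, 'Medium'), (3, 'High')) t (id, label);

-- The same constructor also works for multi-row INSERT.
INSERT INTO priorities (id, label)
VALUES (1, 'Low'), (2, 'Medium'), (3, 'High');
```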

  • Using the table value constructor (0:59)
  • New value constructor in Oracle Database 23c
  • Oracle 23c SQL Syntax for Efficient Data Manipulation: Table Value Constructor
  • Table Value Constructor in Oracle Database 23c


Annotations enable you to store and retrieve metadata about database objects. They are free-form text fields applications can use to customize business logic or user interfaces. Annotations are name-value pairs or simply a name. They help you use database objects in the same way across all applications, simplifying development and improving data quality.
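
A small sketch (annotation names and values are free-form; the table is an example):

```sql
CREATE TABLE customers (
  customer_id NUMBER        ANNOTATIONS (display 'Customer ID'),
  name        VARCHAR2(100)
) ANNOTATIONS (description 'Core customer master data');

-- Annotations are queryable from the data dictionary.
SELECT object_name, column_name, annotation_name, annotation_value
FROM   user_annotations_usage;
```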

  • Annotations: The new metadata in Database 23c
  • Annotations in Oracle Database 23c
  • Application Usage Annotations


Usage Domains (sometimes called SQL domains or Application Usage Domains) are high-level dictionary objects that act as lightweight type modifiers and centrally document intended data usage for applications. Usage Domains can be used to define data usage and standardize operations to encapsulate a set of check constraints, display properties, ordering rules, and other usage properties—without requiring application-level meta data.

Usage Domains for one or more columns in a table do not modify the underlying data type and can, therefore, also be added to existing data without breaking applications or creating portability issues.
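
A hedged sketch of a domain that centralizes a format rule (the names and the regular expression are examples):

```sql
CREATE DOMAIN email_dom AS VARCHAR2(320)
  CONSTRAINT email_chk CHECK (REGEXP_LIKE(email_dom, '^[^@]+@[^@]+$'));

-- The column keeps its underlying VARCHAR2 type; the domain adds the rules.
CREATE TABLE contacts (
  id    NUMBER PRIMARY KEY,
  email VARCHAR2(320) DOMAIN email_dom
);
```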

  • Less coding with SQL domains in Oracle Database 23c
  • Application Usage Domains


Now you can store a larger number of attributes in a single row, which may simplify application design and implementation for some applications.

The maximum number of columns allowed in a database table or view has been increased to 4,096. This feature goes beyond the previous 1,000-column limit, allowing you to build applications that store thousands of attributes in a single table. Some workloads, such as machine learning and streaming Internet of Things (IoT) applications, may require de-normalized tables with more than 1,000 columns.
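
Raising the limit is controlled by an initialization parameter; a hedged sketch (this assumes COMPATIBLE is already at the 23c level and that the change takes effect on restart):

```sql
-- Switch the instance from the default 1,000-column limit to 4,096.
ALTER SYSTEM SET max_columns = EXTENDED SCOPE = SPFILE;
-- After the change takes effect, wide tables can be created as usual.
```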

  • Oracle Database In-Memory blog: Oracle Database 23c Free—Wide Tables
  • Oracle-Base: MAX_COLUMNS: Increase the Maximum Number of Columns for a Table (Wide Tables) in Oracle Database 23c
  • Wide Tables documentation


Connection management for extreme scalability

Oracle Database 23c and CMAN-TDM now bring best-in-class connection management and monitoring capabilities with implicit connection pooling, multi-pool DRCP, per-PDB PRCP, and much more...


Database driver asynchronous programming and pipelining

With Oracle Database 23c, the Pipelining feature enables .NET, Java, and C/C++ applications to send multiple requests to the Database without waiting for the response from the server...


JavaScript stored procedures

Multilingual engine (MLE) module calls allow developers to invoke JavaScript functions stored in modules from SQL and PL/SQL. Call specifications written in PL/SQL link JavaScript to PL/SQL code units...


Multicloud configuration and security integration

A new feature of Oracle Database 23c is the client capability to store Oracle configuration information, such as connection strings, in Microsoft Azure App Configuration or Oracle Cloud Infrastructure Object Storage...


Observability, OpenTelemetry, and diagnosability for Java and .NET applications

The three pillars of observability are metrics, logging, and distributed tracing. This release brings enhanced logging, new debugging (diagnose on first failure), and new tracing capabilities...


Transportable Binary XML

Oracle Database 23c introduces Transportable Binary XML (TBX), a new self-contained XMLType storage method. TBX supports sharding, XML search index, and Exadata pushdown operations, providing better performance and scalability than other XML storage options...


Oracle Database 23c and CMAN-TDM now bring best-in-class connection management and monitoring capabilities with implicit connection pooling, multi-pool DRCP, per-PDB PRCP, and much more. Enhance the scalability and power of your C, Java, Python, Node.js, and ODP.NET applications with the latest DRCP and PRCP features. Monitor PRCP pool usage effectively with statistics from the new V$TDM_STATS dynamic view in Oracle Database 23c.

  • Per-PDB Proxy Resident Connection Pooling
  • Medium: Multi-pool DRCP in Oracle Database 23c
  • Implicit Connection Pooling
  • Using Multi-pool DRCP
  • Per-PDB PRCP
  • TDM_PERPDB_PRCP_CONNFACTOR—Per-PDB PRCP parameter
  • CMAN-TDM and PRCP Monitoring—V$TDM_STATS
  • JDBC Support for DRCP

Database driver asynchronous programming and pipelining

With Oracle Database 23c, the Pipelining feature enables .NET, Java, and C/C++ applications to send multiple requests to the database without waiting for responses from the server. Oracle Database queues and processes those requests one by one, allowing client applications to continue working until they are notified that the requests have completed. These enhancements provide a better end-user experience, more responsive data-driven applications, end-to-end scalability, fewer performance bottlenecks, and more efficient resource utilization on both the server and the client.

For the client request to return immediately, Oracle Database Pipelining requires an asynchronous or reactive API in .NET, Java, and C/C++ drivers. These mechanisms can be used against Oracle Database, with or without Database Pipelining.

For Java, Oracle Database 23c furnishes the Reactive Extensions in Java Database Connectivity (JDBC), Universal Connection Pool (UCP), and the Oracle R2DBC Driver. It also supports Java virtual threads (Project Loom) in the driver, as well as Reactive Streams libraries such as Reactor, RxJava, Akka Streams, and Vert.x.

  • Oracle 23c .NET development features
  • What's in Oracle Database 23c for Java Developers? (PDF)
  • ODP.NET async code sample
  • ODP.NET Asynchronous Programming and Pipelining
  • JDBC Support for Pipelined Database Operations

JavaScript stored procedures

Multilingual engine (MLE) module calls allow developers to invoke JavaScript functions stored in modules from SQL and PL/SQL. Call specifications written in PL/SQL link JavaScript to PL/SQL code units. This feature enables developers to use JavaScript functions anywhere PL/SQL functions are called.
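
A minimal sketch of the pattern (module name, function, and signature are illustrative):

    -- Sketch: store a JavaScript function in an MLE module...
    CREATE OR REPLACE MLE MODULE greet_mod LANGUAGE JAVASCRIPT AS
    export function greet(name) {
      return `Hello, ${name}!`;
    }
    /

    -- ...then expose it to SQL and PL/SQL through a PL/SQL call specification.
    CREATE OR REPLACE FUNCTION greet(name VARCHAR2) RETURN VARCHAR2
      AS MLE MODULE greet_mod SIGNATURE 'greet(string)';
    /

    SELECT greet('database') FROM dual;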

  • Introduction to JavaScript in Oracle Database 23c Free—Developer Release
  • Using JavaScript community modules in Oracle Database 23c Free—Developer Release
  • How to import JavaScript ES modules in Oracle Database 23c Free and use them in SQL queries
  • APEX + Server Side JavaScript (MLE)
  • Simple Data Driven applications using JavaScript in Oracle Database 23c Free-Developer Release
  • Overview of JavaScript in Oracle Database

Multicloud configuration and security integration

A new feature of Oracle Database 23c is the client capability to store Oracle configuration information, such as connection strings, in Microsoft Azure App Configuration or Oracle Cloud Infrastructure Object Storage. This capability simplifies application cloud configuration, deployment, and connectivity with the Oracle JDBC, .NET, Python, Node.js, and Oracle Call Interface data access drivers. The information is stored in configuration providers, which separate application code from configuration.

Use it with OAuth 2.0 single sign-on to the cloud and database to further ease administration. Oracle Database 23c clients can use Microsoft Entra ID (formerly Azure Active Directory) or Oracle Cloud Infrastructure access tokens for database sign-on.

  • Database 23c JDBC Seamless Authentication with OCI Identity and Access Management and Azure Active Directory
  • JDBC Configuration Via App Config Providers and Vaults
  • ODP.NET Centralized Configuration Providers
  • ODP.NET and Azure Active Directory
  • ODP.NET and OCI Identity and Access Management

Observability, OpenTelemetry, and diagnosability for Java and .NET applications

The three pillars of observability are metrics, logging, and distributed tracing. This release brings enhanced logging, new debugging (diagnose on first failure), and new tracing capabilities. The JDBC and ODP.NET drivers have also been instrumented with a hook for tracing database calls; this hook enables distributed tracing using OpenTelemetry.

  • Java and .NET Application Observability with OpenTelemetry and Oracle Database
  • ODP.NET OpenTelemetry documentation
  • JDBC Trace Event Listener documentation
  • Oracle JDBC Trace Event Listener Javadoc
  • Oracle JDBC OpenTelemetry Provider

Transportable Binary XML

Oracle Database 23c introduces Transportable Binary XML (TBX), a new self-contained XMLType storage method. TBX supports sharding, XML search index, and Exadata pushdown operations, providing better performance and scalability than other XML storage options.

Because TBX supports more database architectures, such as sharding and Exadata, and can easily migrate and exchange XML data among different servers, containers, and PDBs, applications can take full advantage of this new XML storage format on more platforms and architectures.

You can migrate existing XMLType storage of a different format to TBX in any of the following ways (a short sketch follows this list):

  • Insert-as-select or create-as-select
  • Online redefinition
  • Oracle Data Pump
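
For instance, a hedged sketch of creating a TBX-stored column and migrating into it with insert-as-select (table and column names are invented):

    -- Sketch: an XMLType column stored as Transportable Binary XML.
    CREATE TABLE purchase_orders (
      id  NUMBER PRIMARY KEY,
      doc XMLTYPE
    ) XMLTYPE COLUMN doc STORE AS TRANSPORTABLE BINARY XML;

    -- Migrate existing documents from an older-format table (name assumed):
    INSERT INTO purchase_orders (id, doc)
      SELECT id, doc FROM old_purchase_orders;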

  • Database 23c new features for XML: Sharding of XML and XML Search Index (1:14:37)
  • Transportable Binary XML—Modern XML document storage in Oracle Database 23c
  • Introduction to Choosing an XMLType Storage Model and Indexing Approaches

JSON binary data type

The JSON data type is an Oracle-optimized binary JSON format called OSON. It is designed for faster query and DML performance in the database and in database clients from release 21c onward.
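
A minimal sketch of the native type in use (table and data are illustrative):

    -- Sketch: a native JSON column, stored as binary OSON under the hood.
    CREATE TABLE orders (
      id   NUMBER PRIMARY KEY,
      data JSON
    );

    INSERT INTO orders VALUES (1, JSON('{"customer":"Ana","total":42.5}'));

    -- Simple dot-notation query over the JSON column:
    SELECT o.data.customer FROM orders o WHERE o.data.total.number() > 10;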

  • JSON data type support in Oracle 21c
  • Native JSON Data Type Support: Maturing SQL and NoSQL Convergence in Oracle Database (PDF)
  • JSON Data Type

JSON Relational Duality views

JSON Relational Duality, an innovation introduced in Oracle Database 23c, unifies the relational and document data models to provide the best of both worlds. Developers can build applications in either relational or JSON paradigms with a single source of truth and benefit from the strengths of both models. Data is held once but can be accessed, written, and modified with either approach. Developers get ACID-compliant transactions and concurrency controls, so they no longer have to choose between complex object-relational mappings and data-consistency risks.
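
A hedged sketch using the SQL syntax flavor (the underlying table and the view name are assumptions for illustration):

    -- Sketch: expose relational rows as updatable JSON documents.
    CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW departments_dv AS
      SELECT JSON {
               '_id'  : d.department_id,
               'name' : d.department_name
             }
      FROM departments d WITH INSERT UPDATE DELETE;

    -- The same rows can now be read (or written) as documents via the view:
    SELECT data FROM departments_dv WHERE json_value(data, '$._id') = 10;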

  • Medium: ODP.NET and JSON Relational Duality and Oracle Database 23c Free
  • Key benefits of JSON Relational Duality
  • Use JSON Relational Duality with Oracle Database API for MongoDB
  • REST with JSON Relational Duality
  • JSON Relational Duality: The Revolutionary Convergence of Document, Object, and Relational Models
  • JSON Relational Duality Views Overview

JSON Schema

Oracle Database supports JSON to store and process schema-flexible data. With Oracle Database 23c, Oracle Database now supports JSON Schema to validate structure and values of JSON data. The SQL operator IS JSON was enhanced to accept a JSON Schema, and various PL/SQL functions were added to validate JSON and to describe database objects such as tables, views, and types as JSON Schema documents.

By default, JSON data is schemaless, providing flexibility. However, you may want to ensure that JSON data has a particular structure and typing, which can be done via industry-standard JSON Schema validation.
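
As a hedged sketch of schema validation in a check constraint (table, column, and schema are illustrative):

    -- Sketch: require each document to be an object with a numeric "port".
    CREATE TABLE config (
      doc JSON,
      CONSTRAINT doc_schema_chk CHECK (
        doc IS JSON VALIDATE '{
          "type"       : "object",
          "properties" : { "port" : { "type" : "number" } },
          "required"   : ["port"]
        }'
      )
    );

    INSERT INTO config VALUES (JSON('{"port": 8080}'));   -- passes validation
    -- INSERT INTO config VALUES (JSON('{"port": "x"}')); -- would be rejected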

Contribute to JSON Schema: Oracle actively contributes to JSON Schema, an open source effort to standardize a JSON-based declarative language for annotating and validating JSON documents. The specification is currently at the Request for Comments (RFC) stage.

  • Review Oracle's contributions to JSON Schema and comment
  • Or you can contribute via GitHub
  • JSON/JSON_VALUE will Convert PL/SQL Aggregate Type to/from JSON (12:36)
  • Mastering Oracle Database 23c Free: SQL Domains and JSON Schema

PL/SQL JSON constructor support for aggregate types

The PL/SQL JSON constructor is enhanced to accept an instance of a corresponding PL/SQL aggregate type, returning a JSON object or array populated with the aggregate type's data. Conversely, the PL/SQL JSON_VALUE operator's returning clause can now accept a type name that defines the type of instance the operator returns. Together, these enhancements streamline data interchange between PL/SQL applications and languages that support JSON.
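
A minimal sketch of the round trip between a PL/SQL record and JSON (the record type and values are invented):

    -- Sketch: map JSON onto a PL/SQL record and back (names illustrative).
    DECLARE
      TYPE colour_t IS RECORD (name VARCHAR2(10), rgb VARCHAR2(7));
      l_colour colour_t;
      l_doc    JSON;
    BEGIN
      -- JSON_VALUE's RETURNING clause maps the document onto the record:
      l_colour := JSON_VALUE('{"name":"red","rgb":"#ff0000"}',
                             '$' RETURNING colour_t);
      -- The JSON constructor accepts the aggregate and builds a JSON object:
      l_doc := JSON(l_colour);
    END;
    /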

  • JSON_VALUE Function Enhancements in Oracle Database 23c
  • JSON Data Type Constructor Enhancements in Oracle Database 23c
  • Application development documentation

MongoDB-compatible API

With the Oracle Database API for MongoDB, developers can continue to use MongoDB's tools and drivers connected to an Oracle Database while gaining access to Oracle's multimodel capabilities and self-driving database. Customers can run MongoDB workloads on Oracle Cloud Infrastructure (OCI). Often, little or no change is required to existing MongoDB applications: you simply change the connection string.

The Oracle Database API for MongoDB is part of standard Oracle REST Data Services. It is preconfigured and fully managed as part of the Oracle Autonomous Database.
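
For example, the connection string typically takes a shape like the following (host, port, and credentials are placeholders; consult the linked documentation for the exact form in your environment):

    mongodb://user:password@host:27017/user?authMechanism=PLAIN&authSource=$external&ssl=true&retryWrites=false&loadBalanced=true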

  • Demos and QA: Oracle Database API for MongoDB (55:01)
  • Demonstration of Oracle Database API for Mongo DB (6:18)
  • Oracle Database API for MongoDB
  • Installing Database API for MongoDB for any Oracle Database
  • Oracle Database API for MongoDB—Best Practices
  • SQL, JSON, and MongoDB API: Unify worlds with Oracle Database 23c Free
  • Use the Oracle Database API for MongoDB
  • Overview of Oracle Database API for MongoDB

Operational property graphs

Oracle Database offers native support for property graph data structures and graph queries. If you're looking for the flexibility to build graphs in conjunction with transactional data, JSON, Spatial, and other data types, we've got you covered. Developers can now easily build graph applications with SQL using existing SQL development tools and frameworks.
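
A hedged sketch of the SQL/PGQ style introduced in 23c (the bank-style tables and columns are assumptions for illustration):

    -- Sketch: define a property graph over existing tables...
    CREATE PROPERTY GRAPH bank_graph
      VERTEX TABLES (
        accounts KEY (id) PROPERTIES (id, name)
      )
      EDGE TABLES (
        transfers KEY (txn_id)
          SOURCE KEY (src_acct) REFERENCES accounts (id)
          DESTINATION KEY (dst_acct) REFERENCES accounts (id)
          PROPERTIES (amount)
      );

    -- ...and query it with GRAPH_TABLE pattern matching.
    SELECT *
    FROM GRAPH_TABLE (bank_graph
      MATCH (a IS accounts) -[t IS transfers]-> (b IS accounts)
      WHERE t.amount > 1000
      COLUMNS (a.name AS sender, b.name AS receiver, t.amount AS amount)
    );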

  • Create, Query, and Visualize a Property Graph with SQL Oracle Database 23c Free—Developer Release (3:53)
  • When property graphs join SQL—Oracle CloudWorld 2022 (30:29)
  • Operational property graphs in Oracle Database 23c Free—Developer Release
  • Property graphs in SQL Developer Release 23.1
  • Get started with property graphs in Oracle Database 23c Free—Developer Release
  • Lucas Jellema: SQL Property Graph for Network-Style Querying
  • Lucas Jellema: Graph Database Style Explorations of Relational Database with Formula One Data (GitHub content here)
  • ACE Timothy Hall: SQL Property Graphs and SQL/PGQ in Oracle Database 23c
  • Exploring Operational Property Graphs in Oracle Database 23c Free
  • SQL Property Graphs


Happy Holidays!


As we wrap up 2023, here's a recap of the new features in Oracle Database 23c that we highlighted throughout the year. If you haven't had a chance to try our latest Oracle Database release yet, especially if you're a developer, check out the different options here or at oracle.com/database/free.

  • Oracle Database 23c: The next long-term support release
  • Oracle Database 23c blog posts from SQLMaria
  • How to set up Oracle Database 23c Free—Developer Release and ORDS on OCI
  • Oracle Database 23c Free—Developer Release: getting started…
  • Deploying Oracle Database 23c Free—Developer Release on Kubernetes with Helm
  • Exploring JSON-relational duality views in Oracle Database 23c Free—Developer Release
  • Getting Started with Oracle Database 23c Free—Developer Release

Hands-On Labs/Downloads

  • Oracle Database Free Get Started
  • Oracle Database Software Downloads
  • Oracle Database 23c

AQ to TxEventQ Online Migration Tool

Oracle Database 23c introduces an online migration tool that simplifies migration from Oracle Advanced Queuing (AQ) to Transactional Event Queues (TxEventQ) with orchestration automation, source and target compatibility diagnostics and remediation, and a unified user experience. Migration scenarios can be short- or long-lived and can be performed with or without AQ downtime, eliminating operational disruption.

Existing AQ customers who want higher-throughput queues and Kafka compatibility, via a Kafka Java client and Confluent-like REST APIs, can easily migrate from AQ to TxEventQ. TxEventQ offers scalability, performance, key-based partitioning, and native JSON payload support, which makes it easier to write event-driven microservices and applications in multiple languages, including Java, JavaScript, PL/SQL, and Python.

  • Streamlining Oracle Advanced Queue to Transactional Event Queues Migration
  • Navigating DBMS_AQMIGTOOL Package in Oracle Database 23c: A Starter’s Guide
  • DBMS_AQMIGTOOL package documentation
  • Sample steps to migrate from AQ to TxEventQ
  • Example walkthrough

Kafka interoperability

Oracle Database 23c provides even more refined compatibility for Apache Kafka applications with Oracle Database. This feature eases migration of Kafka Java applications to Transactional Event Queues (TxEventQ): Kafka Java APIs can now connect to an Oracle Database server and use TxEventQ as the messaging platform.

Developers can easily migrate an existing Java application that uses Kafka to Oracle Database using the JDBC thin driver. And with the Oracle Database 23c client-side library feature, Kafka applications can now connect to Oracle Database instead of a Kafka cluster and use TxEventQ's messaging platform transparently.

  • Simplify Event-driven Apps with TxEventQ in Oracle Database (with Kafka interoperability)
  • Kafka interoperability in Oracle Database 23c
  • New 23c version of Kafka-compatible Java APIs for Transactional Event Queues published
  • Playing with Kafka Java Client for TxEventQ – creating the simplest of producers and consumers
  • Oracle REST Data Services 22.3 brings new REST APIs for Transactional Event Queueing
  • Interoperability of Transactional Event Queue with Apache Kafka (Java APIs)
  • Kafka Java Client Interface for Oracle Transactional Event Queues (Java APIs)
  • Kafka Java Client for Oracle Transactional Event Queues (Java APIs)
  • Kafka Connectors for TxEventQ (Connectors)
  • Oracle Transactional Event Queues REST Endpoints (REST APIs)

Lock-free column value reservations

Lock-Free Reservations enable concurrent transactions to proceed without being blocked by updates to heavily updated rows. Instead of locking such a row, the database holds a reservation on it, verifies that the update can succeed, and defers the actual update until the transaction commits. Lock-Free Reservations improve the user experience and transaction concurrency.
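
A minimal sketch of the feature (the inventory table is an assumption for illustration):

    -- Sketch: mark a numeric column RESERVABLE so concurrent, amount-based
    -- updates don't block one another.
    CREATE TABLE inventory (
      item_id NUMBER PRIMARY KEY,
      qty     NUMBER RESERVABLE CONSTRAINT qty_nonneg CHECK (qty >= 0)
    );

    -- Sessions issue delta-style updates; the reservation is verified and
    -- the update applied at COMMIT instead of holding a row lock until then.
    UPDATE inventory SET qty = qty - 1 WHERE item_id = 42;
    COMMIT;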

  • TikTok: Rethink everything you think you know about row locking in relational databases (0:29)
  • ACE Lucas Jellema: Oracle Database 23c—Fine-grained locking—Lock-Free Reservations
  • ACE Tim Hall: Lock-Free Reservations to prevent blocking sessions in Oracle Database 23c
  • Oracle Schema-Level Privileges and Lock-Free Column Reservations
  • Using Lock-Free Reservations

Grafana observability

Oracle continues to expand its cloud native and Kubernetes support with our new Observability Exporter for Oracle Database, which allows customers to easily export database and application metrics in industry-standard Prometheus format, and to easily create Grafana dashboards to monitor the performance of their Oracle Databases and applications.

  • DevOps meets DataOps (50:10)
  • Introducing Oracle Database Observability Exporter
  • Unified Observability for Oracle Database
  • Unified Observability in Grafana with converged Oracle Database

Saga APIs in Oracle Database 23c

Oracle Database 23c introduces a unified Saga framework for building asynchronous Saga applications in the database. Sagas make modern, high-performance microservices application development easier and more reliable.

A Saga is a business transaction spanning multiple databases, implemented as a series of independent local transactions. Sagas avoid the global transaction duration locking found with synchronous distributed transactions and simplify consistency requirements for maintaining a global application state. The Saga framework integrates with Lock-Free reservable columns in Oracle Database 23c to provide automatic Saga compensation, simplifying application development.

The Saga framework emulates the MicroProfile LRA specification.

  • Developing Event-Driven, Auto-Compensating Transactions With Oracle Database Sagas and Lock-Free Reservation
  • Oracle Saga documentation
  • Oracle Saga CloudBank demo
