
Research risk assessment

It's the responsibility of principal investigators (PIs) and researchers to identify reasonably foreseeable risks associated with their research and to control those risks so far as is reasonably practicable.

All participants and research assistants have the right to expect protection from physical, psychological, social, legal and economic harm at all times during an investigation. Certain research may also present reputational, legal and / or economic risks to the University.

As part of the ethical approval process for research involving human participants you are required to identify potential risks associated with your research and the action you will take to mitigate risk. You may be asked to submit your risk assessment.

The risk assessment process is a careful examination of what could cause harm, who/what could be harmed and how. It will help you to determine what risk control measures are needed and whether you are doing enough. 

Risk assessment responsibility

The PI and researchers need to take responsibility for all assessments associated with their projects. Occasionally you may need research workers or students to risk assess an aspect of the work; you will need to check that those assessments are adequate and sign them off.

Risk assessors need to be competent, and you’ll need to ensure they have adequate training and resources to do the assessments. Risk assessment training is available, and you can get help and advice from your Health and Safety adviser and safety specialists (for health and safety risks), or from the REO Research Governance team for other risks. In some cases, the hazards are so unique to the research that the PI and their team might be the only people who know the work well enough to make valid judgements about the risk and justify their conclusions.

Risk assessment process


To simplify the process, you can use the health and safety risk assessment templates, risk estimation tool and guidance for all risks associated with your research project. The research risk estimation guidance under "How to carry out a risk assessment" below will assist you.
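As a rough illustration of how a risk estimation tool combines likelihood and severity, the sketch below multiplies the two on five-point scales. The scale labels, scores and action bands are assumptions for illustration only, not the University's actual estimation tool.

```python
# Illustrative sketch of likelihood x severity risk estimation.
# The scales and action bands below are assumed, not official.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3,
              "likely": 4, "almost certain": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3,
            "major": 4, "severe": 5}

def risk_rating(likelihood: str, severity: str) -> tuple[int, str]:
    """Return a numeric rating and an indicative action band."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score <= 4:
        band = "low: manage by routine procedures"
    elif score <= 12:
        band = "medium: plan specific control measures"
    else:
        band = "high: do not proceed until risk is reduced"
    return score, band

print(risk_rating("possible", "major"))
# (12, 'medium: plan specific control measures')
```

The point of such a tool is not the arithmetic but the conversation it forces: assessors must justify each likelihood and severity judgement before a control measure is chosen.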

Research risks

Typical risks that need to be considered as part of research ethics are:

  • Social risks: disclosures that could affect participants’ standing in the community, in their family, or in their job.
  • Legal risks: activities that could result in the participant, researchers and / or University committing an offence; activities that might lead to a participant disclosing criminal activity to a researcher which would necessitate reporting to enforcement authorities; activities that could result in a civil claim for compensation.
  • Economic harm: financial harm to participant, researcher and / or University through disclosure or other event.
  • Reputational risk: damage to public perception of University or the University/researchers’ reputation in the eyes of funders, the research community and / or the general public. 
  • Safeguarding risks: risk to young people, vulnerable adults and / or the researcher from improper behaviour, abuse or exploitation; risk to the researcher of being in a compromising situation, in which there might be accusations of improper behaviour.
  • Health and safety risks: risks of harm to health, physical injury or psychological harm to participants or the researcher. Further information on health and safety risks is given below.

Health and safety risks

The potential hazards and risks in research can be many and varied. You will need to be competent and familiar with the work or know where to obtain expert advice to ensure you have identified reasonably foreseeable risks. Here are some common research hazards and risks:

  • Location hazards and risks are associated with where the research is carried out. For example: fire; visiting or working in participants’ homes; working in remote locations and in high crime areas; overseas travel; hot, cold or extreme weather conditions; working on or by water; and hazardous work locations, such as construction sites, confined spaces, roofs or laboratories. For overseas travel, you will need to check country / city specific information and travel health requirements, and consider emergency arrangements as part of your research planning, by following the University’s overseas travel health and safety standard.
  • Activity hazards and risks are associated with the tasks carried out. For example: potentially mentally harmful activities; distressing and stressful work and content; driving; tripping or slipping; falling from height; physically demanding work; lifting, carrying, pushing and pulling loads; night time and weekend working.
  • Machinery and equipment hazards. For example: ergonomic hazards, including computer workstations and equipment; contact with electricity; contact with moving, rotating, ejecting or cutting parts in machinery and instruments; accidental release of energy from machines and instruments.
  • Chemicals and other hazardous substances. The use, production, storage, waste, transportation and accidental release of chemicals and hazardous substances; flammable, dangerous and explosive substances; asphyxiating gases; allergens; biological agents, blood and blood products. You’ll need to gather information about the amount, frequency and duration of exposure and carry out a COSHH or DSEAR assessment, which will inform whether you may need health surveillance for yourself and / or your research participants.
  • Physical agents. For example: excessive noise exposure, hand-arm vibration and whole body vibration; ionising radiation; lasers; artificial optical radiation and electromagnetic fields. You’ll need to gather information about the amount, frequency and duration of exposure to inform whether you may need health surveillance for yourself and / or your research participants.

When to carry out a risk assessment

Carrying out initial risk assessments as part of the planning process will help you identify whether existing resources and facilities are adequate to ensure risk control, or if the project needs to be altered accordingly. It will also help you to identify potential costs that need to be considered as part of the funding bid.

Once the project is approved, research specific risk assessments need to be carried out before work starts.

The research may need ethical approval if there is significant risk to participants, researchers or the University.

How to carry out a risk assessment

The University standard on risk assessments must be used; it provides guidance, tips on getting it right, and the resources and forms to help you produce suitable and sufficient risk assessments.

  • Risk assessment template (.dotx)
  • Flow chart to research risk assessment (.pdf)
  • Research risk assessment: Risk estimation tool (.pdf)
  • Example of a Social Science research risk assessment (.pdf)

Refer to carrying out a risk assessment for step-by-step guidance.

Risk assessments must relate to the actual work and must be monitored by the PI. If there are significant changes to the activities, locations, equipment or substances used, the risk assessment will need to be reviewed and updated, and the old version archived. Risk assessments should also cover the end of projects: arrangements for waste disposal, equipment, decommissioning of controlled areas, and emergencies.

Things to consider:

  • The risks may be specialist in nature or general. Information can be found in legislation, sector guidance, safety data sheets, manufacturers’ equipment information, research documents, forums, and from health and safety professionals.
  • Practical research might involve less well-known hazards. Do you or your team have the expertise to assess the risk adequately? Do you know who to go to for expert advice?
  • The capabilities, training, knowledge, skills and experience of the project team members. Are they competent or are there gaps?
  • In fast changing research environments, is there a need to carry out dynamic risk assessments? Are they understood and recorded?
  • The right personal protective equipment for the hazards identified and training in how to use it.
  • Specific Occupational Health vaccinations, health surveillance and screening requirements identified and undertaken. With physical agents and substances you’ll need to make an informed decision about the amount, frequency and duration of exposure. If you need help with this contact Health and Safety.
  • Associated activities: storage, transport/travel, cleaning, maintenance, foreseeable emergencies (eg spillages), decommissioning and disposal.
  • The safe design, testing and maintenance of the facilities and equipment.
  • Planned and preventative maintenance of general plant and specialist equipment.


Training 

If you would like training on completing a risk assessment, please book onto our Risk Assessment Essentials course via HR Organiser. If you are unable to access this, please email [email protected] 

  • Carrying out a risk assessment
  • People especially at risk
  • IOSH/USHA/UCEA guidance on managing health and safety in research (.pdf) 
  • Research governance: Ethical approval


Data Science Journal


Research Papers

Risk Assessment for Scientific Data

  • Matthew S. Mayernik
  • Kelsey Breseman
  • Robert R. Downs
  • Alexis Garretson
  • Chung-Yi (Sophie) Hou
  • Environmental Data Governance Initiative (EDGI) and Earth Science Information Partners (ESIP) Data Stewardship Committee

Keywords: risk assessment; data preservation; data stewardship

Introduction

At the “Rescue of Data At Risk” workshop held in Boulder, Colorado, on September 8th and 9th, 2016, participants were asked the following question: “How would you define ‘at risk’ data?” Discussions on this point ranged widely, and touched on several challenges, including lack of funding or personnel support for data management, natural and political disasters, and metadata loss. One participant’s organization’s definition of risk, however, stood out: “data were considered to be at risk unless they had a dedicated plan to not be at risk.” This simple statement vividly depicts how being at risk is data’s default state. Ongoing stewardship is thus required to keep data collections and archives in existence.

The risk factors that a given data collection or archive may face vary depending on the data’s characteristics, the data’s current environment, and the priorities and resources available at the time. Many risks can be reduced or eliminated by following best practices codified as certifications and guidelines, such as the CoreTrustSeal Data Repository Certification (2018) and ISO 16363:2012, which defines audit and certification procedures for trustworthy digital repositories (ISO 2012b). Both the CoreTrustSeal certification and ISO 16363:2012 are based on the ISO 14721:2012 standard that defines the Open Archival Information System (OAIS) Reference Model (ISO 2012a). But these certifications can be large and complex. Additionally, many of the organizations that hold valuable scientific data collections may not be aware of these standards, even if the organizations are potentially resourced to tackle the challenge (Maemura, Moles & Becker 2017). Further, the attainment of such certifications does not necessarily reduce the risks to data that are outside of the scope of a particular certification instrument.

This paper presents an analysis of data risk factors that scientific data collections and archives may face, and a matrix to support data risk assessments to help ameliorate those risks. The three driving questions for this analysis are:

  • How to assess what data are at risk?
  • How to characterize what risk factors data collections and/or archives face?
  • How to make risks more transparent, internally and/or externally?

The goals of this work are to inform and enable effective data risk assessment by: a) individuals and organizations who manage data collections, and b) individuals and organizations who want to help to reduce the risks associated with data preservation and stewardship. Stakeholders for these two activities include producers, stewards, sponsors, and users of data, as well as the management and staff of the institutions that employ them.

This project has been coordinated through the Data Stewardship Committee within the Earth Science Information Partners (ESIP), a non-profit organization that exists to support collection, stewardship, and use of Earth science data, information, and knowledge. The immediate motivation for the project stemmed from the Data Stewardship Committee members engaging with groups who were undertaking grass-roots “data rescue” initiatives after the 2016 US presidential election. At that time, a number of loosely organized and coordinated efforts were initiated to duplicate data from US government organizations to prevent potential politically motivated data deletion or obfuscation (see for example Dennis 2016; Varinsky 2017). In many cases, these initiatives specifically focused on duplicating government-hosted Earth science data.

ESIP Data Stewardship Committee members wrote a white paper to provide the Earth science data centers’ perspective on these grass-roots “data rescue” activities ( Mayernik et al. 2017 ). That document described essential considerations within day-to-day work of existing federal and federally-funded Earth science data archiving organizations, including data centers’ constant focus on documentation, traceability, and persistence of scientific data. The white paper also provided suggestions for how the grass-roots efforts might productively engage with the data centers themselves.

One point that was emphasized in the white paper was that the actual risks faced by the data collections may not be transparent from the outside. In other words, “data rescue” activities may have in fact been duplicating data that were at minimal risk of being lost ( Lamdan 2018 ). This point, and the white paper in general, was well received by people inside and outside of these grass-roots initiatives ( Cornelius & Pasquetto 2017 ; McGovern 2017 ). Questions then came back to the ESIP Data Stewardship Committee about how to understand what data held by government agencies were actually at risk.

The analysis presented in this paper was initiated in response to these questions. Since then, these grass-roots “data rescue” initiatives have had mixed success in sustaining and formalizing their efforts ( Allen, Stewart & Wright 2017 ; Chodacki 2018 ; Janz 2018 ). The intention of our paper is to enable more effective data risk assessment broadly. Rescuing data after they have been corrupted, deleted, or lost can be time and effort intensive, and may be impossible ( Pienta & Lyle 2018 ). Thus, we aim to provide guidelines to any individual or organization that manages and provides access to scientific data. In turn, these individuals and organizations can better assess the risks that their data face, and characterize those risks.

When discussing risk and, in particular, data risk, it is useful to ask the question: what is the objective that is being challenged by the possible risk factors? With regard to data, in general, discussions of risk might presume that “risks” threaten the current or future access to data by the potential data users. Currently, continuing public access to and use of scientific data is particularly relevant in light of recent open data and open science initiatives. In this regard, risks for scientific data include factors that could hinder, constrain, or limit current or future data use. Identifying such risk factors to data use offers further analysis opportunities to prevent, mitigate, or eliminate the risks.

Data Risk Assessment

Risk assessment is a regular activity within many organizations. In a general sense, risk management plans are complementary to project management plans ( Cervone 2006 ). Organizational assessment of digital data and information collections is likewise not new ( Maemura, Moles & Becker 2017 ). The analysis presented in this paper builds on prior work in a number of areas: 1) research on data risks, 2) data rescue initiatives within government agencies & specific disciplines, 3) CODATA and RDA working groups & meetings, 4) trusted repository certifications, and 5) knowledge and experience of the ESIP Data Stewardship Committee members. Table 1 summarizes data risk factors that emerge from these knowledge bases. The list of risk factors shown in Table 1 is not meant to be exhaustive. Rather, it provides a useful illustration of the diverse ways in which data sets, collections, and archives might encounter risks to data usability and accessibility. The rest of this section details further key insights from the five areas of prior work noted above.

Table 1. Risk factors for scientific data collections.

1. Lack of use: Data are rarely accessed and dubbed ‘unwanted’, and are thus thrown away
2. Loss of funding for archive: The whole archive loses its funding source
3. Loss of funding for specific datasets: Lack of funding to monitor, maintain, and otherwise work with specific data
4. Loss of knowledge around context or access: Loss of the individuals who know how to access the data, or who know the metadata that make the data usable to others, e.g. due to retirement or death
5. Lack of documentation & metadata: Data cannot be interpreted due to lack of contextual knowledge
6. Data mislabeling: Data are lost because they are poorly identified (either physically or digitally)
7. Catastrophes: Fires, floods, wars/human conflicts, etc.
8. Poor data governance: Uncertain or unknown decision-making processes impede effective data management
9. Legal status for ownership and use: Uncertain, unknown, or restrictive legal status limits the possible uses of data
10. Media deterioration: Physical media deterioration (paper, tape, or digital media) prevents data from being accessed
11. Missing files: Data files are lost without any known reason
12. Dependence on service provider: Potential single-point-of-failure problems if a particular service provider goes out of business
13. Accidental deletion: Data are accidentally deleted by a staff error
14. Lack of planning: Lack of planning leaves data collections susceptible to unexpected events
15. Cybersecurity breach: Data are intentionally deleted or corrupted via a security breach, e.g. malware
16. Over-abundance: Difficulty dealing with too much data reduces the value or quality of whole collections
17. Political interference: Data are deleted or made inaccessible due to political decisions
18. Lack of provenance information: Data cannot be trusted or understood because of missing information about data processing steps or data stewardship chains of trust
19. File format obsolescence: Data cannot be accessed due to lack of knowledge, equipment, or software for reading a specific file format
20. Storage hardware breakdown: Sudden & catastrophic malfunction of storage hardware
21. Bit rot and data corruption: Gradual corruption of digital data through an accumulation of non-critical failures (bit flips) in a data storage device
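Several of these factors, notably 20 and 21 (storage hardware breakdown, bit rot and data corruption), are commonly mitigated by routine fixity checking: checksums are recorded once and re-verified on a schedule, so corrupted files can be flagged and restored from a replica. The sketch below is a minimal illustration using only the Python standard library; the manifest layout is an assumption, not a standard.

```python
# Hedged sketch: fixity checking to detect silent data corruption.
# Checksums recorded at ingest are compared against the collection's
# current state; mismatches flag files for restoration from a replica.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def make_manifest(root: Path) -> dict[str, str]:
    """Record a checksum for every file under root."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the files whose current checksum no longer matches."""
    current = make_manifest(root)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]
```

Note that fixity checking only detects corruption; recovery still depends on an independent replica, which is why factors such as "Dependence on service provider" interact with the technical ones.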

Research on data risks

A range of studies have explored the kinds of risks that scientific data may face, and potential ways to mitigate specific risk factors. Many of these studies touch on practices that are typical of scientific data archives. Metadata, for example, can be considered both a risk factor and a mitigation strategy. Insufficient metadata is itself a potential factor that can reduce the discoverability, usability, and preservability of data, particularly in situations where direct human knowledge of the data is absent ( Michener et al. 1997 ). In fact, many data rescue projects find that the “rescue” efforts must be targeted much more toward metadata than data (see Knapp, Bates & Barkstrom 2007 ; Hsu et al. 2015 ). This might be the case for a couple of reasons. First, insufficient or missing metadata might prevent data from being usable regardless of the condition of the data themselves. Examples include missing column headers in tabular data that prevent a user from knowing what the data are representing, and insufficient provenance metadata that prevent users from trusting the data due to lack of context about data collection and quality control. Second, metadata are also central to documenting and mitigating risks as they manifest while preventing risks from becoming problematic in the future ( Anderson et al. 2011 ). For example, documenting data ownership and usage rights is an essential step in mitigating the risk factor “Legal status for ownership and use” from Table 1 .

Different kinds of metadata might be necessary to reduce specific data risks. For example, specifications of file format structures are a critical type of metadata for mitigating risks associated with digital file format obsolescence. Open specifications complement other critical mitigation practices and tools related to file format obsolescence. As one example, keeping rendering software available is an important way to retain access to particular file formats, but this typically also requires maintaining documentation of how the rendering software works ( Ryan 2014 ).

Other risk factors (listed in Table 1 ) relate to the sustainability and transparency of the archiving organization. These factors are important in ensuring the accessibility of the data and the trustworthiness of the archive. As Yakel et al. ( 2013 ) note, “[t]rust in the repository is a separate and distinct factor from trust in the data” (pg. 154). For people outside of the repository, “institutional reputation appears to be the strongest structural assurance indicator of trust” (pg. 154). Effective communication about data risks and steps taken to eliminate problems is helpful in ensuring users that the archive is trustworthy ( Yoon 2017 ).

Data that face extreme or unusual risks, however, may not be manageable via typical data curation workflows. Downs and Chen ( 2017 ) note that dealing with some data risk factors “may well require divergence from regular data curation procedures as tradeoffs may be necessary” (pg. 273). For example, Gallaher et al. ( 2015 ) undertook an extensive project to recover, reconstruct, and reprocess data from early satellite missions into modern formats that are usable by modern scientists. This project involved dealing with degrading and fragile magnetic tapes, extracting data from the tapes’ unusual format, and recreating documentation for the data. Natural disasters, fires, and floods also present unpredictable risk factors to data collections of all kinds. While these kinds of events can be planned for and steps can be taken to prevent the occurrence of some of them (e.g. fires), they can still cause major data loss and/or require significant recovery effort.

Mitigating risks, of whatever kind, takes effort and resources. The time required to create metadata, re-format files, create contingency plans, and communicate these efforts to user communities can be considerable. This time investment can be the greatest barrier to performing risk assessment and mitigation activities ( Thompson, Robertson & Greenberg, 2014 ). Putting focus on assessment of data risk factors may mean that “certain priorities need to be re-ordered, new skills acquired and taught, resources redirected, and new networks constructed” ( Griffin 2015, pg. 93 ). It can be possible to automate some components of risk assessment ( Graf et al. 2017 ), but most of the steps require human effort. This intensive effort is vividly illustrated by the many data rescue initiatives that have taken place within government agencies and other kinds of organizations over the past few decades.

Data rescue initiatives within government agencies & specific disciplines

Legacy data are data collected in the past with technologies and data formats different from those in use today. These data often face the largest number of risk factors that could lead to data loss. A wide range of government agencies and other organizations have conducted legacy data rescue initiatives to modernize data and make them more accessible and usable for today’s science. Each data rescue project typically faces many different kinds of data risks. For example, a recent satellite data rescue effort had to address the “loss of datasets, reconciliation of actual media contents with metadata available, deviation of the actual data format from expectations or documentation, and retiring expertise” (Poli et al. 2017, pg. 1481). Data rescue projects typically involve work to prevent future risk factors from manifesting, in addition to modernizing data for accessibility and usability. For example, data rescue projects migrate data to less endangered data formats, and create new metadata and quality control documentation (Levitus 2012).

CODATA/RDA working groups & meetings

Relevant professional organizations, including the International Council for Science (ICSU) Committee on Data for Science and Technology (CODATA) and the Research Data Alliance (RDA), also have been actively identifying improvements for data stewardship practices that can reduce potential risks to data. For example, the former Data At Risk Task Group (DAR-TG), of CODATA, raised awareness about the value of heritage data and described the benefits obtained from several data rescue projects ( Griffin 2015 ). This group also organized the 2016 “Rescue of Data At Risk” workshop mentioned in the introduction of this paper. That workshop led to a document titled, “Guidelines to the Rescue of Data At Risk” ( 2017 ). Subsequently, the Data Rescue Interest Group ( 2018 ) of the Research Data Alliance (RDA), spawned from the CODATA DAR-TG, also focuses on efforts to increase awareness of data rescue projects.

Repository certifications and maturity assessment

Many data repositories have conducted self-assessments and external assessments to document their compliance with the standards for trusted repositories and attain certification of their capabilities and practices for managing data. In addition to emphasizing organizational issues, repository certification instruments, such as ISO 16363 ( 2012b ) and CoreTrustSeal (2018) certification, also focus on digital object management and infrastructure capabilities. Engaging in such assessments offers benefits to repositories and their stakeholders. A key benefit is the identification of areas where improvements have been completed or need to be completed to reduce risks to data (CoreTrustSeal 2018). In an examination of perceptions of repository certification, Donaldson et al. ( 2017 ) found that process improvement was often reported by repository staff as a benefit of repository certification.

In addition to (or complementary to) formal certifications, data repositories may conduct data stewardship maturity assessment exercises to help in identifying data risks and informing data risk mitigation strategies (Faundeen 2017). “Maturity” is used in the sense presented by Peng et al. (2015), and refers to the level of performance attained to ensure preservability, accessibility, usability, transparency/traceability, and sustainability of data, along with the level of performance in data quality assurance, data quality control/monitoring, data quality assessment, and data integrity checks. Maturity at the institutional (or archive) level in areas such as policy, funding, and infrastructure does not necessarily translate to comprehensive maturity at the dataset level (Peng 2018). Data stewardship maturity assessment should therefore be performed both at the institutional level and at the dataset level. Performing stewardship maturity assessments can be time consuming and resource intensive; however, stewardship organizations are encouraged to perform self-assessment using a “stage by stage” or “a la carte” approach (see the example in Peng et al. 2019). Ultimately, both formal certifications and informal maturity assessments help organizations not only gain self-awareness, but also identify better solutions for their data that might be at risk of being lost or rendered unusable.
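To make the "a la carte" idea concrete, the sketch below records per-dimension maturity levels for a single dataset and reports the weakest assessed dimension as the first target for improvement. The dimension names follow Peng et al. (2015), but the 1-5 scale and the example scores are illustrative assumptions, not the published maturity matrix.

```python
# Hedged sketch: partial ("a la carte") stewardship maturity
# self-assessment for one dataset. The 1-5 levels are an assumed
# scale; dimensions not yet assessed are simply left out.

DIMENSIONS = (
    "preservability", "accessibility", "usability",
    "transparency/traceability", "sustainability",
)

def assess(levels: dict[str, int]) -> dict:
    """Summarize assessed dimensions and flag the weakest one."""
    rated = {d: levels[d] for d in DIMENSIONS if d in levels}
    weakest = min(rated, key=rated.get) if rated else None
    return {
        "assessed": len(rated),      # how many of the 5 were scored
        "unassessed": [d for d in DIMENSIONS if d not in rated],
        "weakest": weakest,          # first target for improvement
    }

summary = assess({"preservability": 4, "accessibility": 3, "usability": 2})
print(summary["weakest"])  # usability
```

Repeating the exercise at the archive level and the dataset level, as the paragraph above recommends, would simply mean running one such summary per object being assessed.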

Developing a Data Risk Assessment Matrix

Risk assessment is a well-established field, with 30–40 years of history ( National Research Council 1983 ; Aven 2016 ). However, the practice of applying risk assessment methodologies to scientific data collections is less formally established, though regular audits and reviews of data management systems are common in some organizations ( Ramapriyan 2017 ).

The starting point for this project was to establish a process for categorizing the data risk factors shown in Table 1 . The initial idea of our effort was that if data risk factors could be categorized into a logical structure, it would allow data managers to assess the risks to their data collections via a set of predefined and consistent categories. To develop a logical categorization, we held a session to conduct a “card sorting” exercise at the 2018 ESIP Summer Meeting, which took place in July 2018 in Tucson, Arizona. “Card sorting” is an established method for developing categorizations of concepts, vocabulary terms, or web sites ( Zimmerman & Akerelrea 2002 ; Usability.gov   2019 ). Following the card sorting methodology, participants in the 2018 ESIP meeting session were provided the list of data risks in Table 1 , and asked to complete the following task: “Looking at the list of data risk factors, how would you group these factors, based on the categories you would define?”

Approximately 15 attendees engaged in the exercise. We used a combination of an online card sorting tool and hand-written recommendations to collect the completed card sorting categorizations. Following the completion of the exercise, the results were displayed in front of the session participants and a group discussion took place. The outcome of the card sorting exercise and subsequent discussion was a clear recognition that there could be many valid and useful ways of categorizing data risks. No single method for categorizing the risk factors would be sufficient to cover the diverse organizations and situations within which data collections exist. Depending on the situation(s) a data curation organization or individual is facing, they may need to categorize data risks in different ways. This characteristic is common in risk assessments generally, as risk prioritization and categorizations are dependent on the phenomena being assessed, the characteristics of the situation, and the goals of the organizations or people performing the assessment (Slovic 1999).

Through subsequent discussion and analysis of the data risk assessment literature noted above, we identified at least ten different ways that data risk factors could be assessed. Many of these categorization methods are applicable to risk assessments of any kind (Cervone 2006). The list below is not meant to be exhaustive, and some methods are likely related. Data risk factors could be categorized or prioritized according to the methods listed in Table 2.

Table 2. Methods for Categorizing Data Risks.

Categorization Method | Description
Severity of risk | How much impact could this risk factor have on the data itself, regardless of the current importance of data to the user?
Likelihood of occurrence | How likely a risk factor is to occur
Length of recovery time | How long it would take to recover data or re-establish data accessibility
Impact on user | How significantly data users are impacted by data loss or loss of data accessibility
Who is responsible for addressing the problem | Who has the expertise and responsibility to mitigate or respond to particular risk factors
Cause of problem | What caused a data risk factor to occur
Degree of control | How much control an organization or individual has over whether a risk factor is present or will occur
Proactive vs reactive response | Whether risk factors can be mitigated via preventative measures, or whether they must be responded to upon occurrence
Nature of mitigation | What steps must be taken or processes put in place to prevent a risk, or mitigate a risk after it has occurred
Resources required for mitigation | What time, money, or personnel resources will be necessary to mitigate risk factors

The lists shown in Tables 1 and 2 offer characteristics on which data risk assessments can be built. Combining the categorization methods from Table 2 with the selected risk factors from Table 1 leads to a risk assessment matrix, as shown in Table 3. This table shows an example with a selection of specific data risk factors and categorization methods. Depending on the situation or data collection being assessed, different risk factors and/or categorization methods may be more applicable than the ones shown in Table 3. Those conducting a data risk assessment can then use the matrix as a way to organize, prioritize, or potentially quantify the selected risks according to the categorization methods that are most relevant for the specific case at hand. The next section provides more detailed illustrations of the use of the data risk assessment matrix. Appendix I shows the full data risk assessment template, with all risks and categorization methods from Tables 1 and 2.

Table 3. Example of a blank data risk assessment matrix, after selection of specific risk factors and categorization methods of interest.

Risk Factors
Lack of use
Loss of knowledge
Lack of docs & metadata
Catastrophes
Poor data governance
Media deterioration
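In practice, a matrix like the one in Table 3 can be kept in a spreadsheet or a small script. As a minimal sketch (not part of the paper's method), the blank matrix can be represented as a nested dictionary in Python, with the row and column labels drawn from Tables 2 and 3; the filled-in cell at the end is a hypothetical example:

```python
# Minimal sketch of a blank data risk assessment matrix: rows are the
# selected risk factors, columns are the chosen categorization methods.

risk_factors = [
    "Lack of use",
    "Loss of knowledge",
    "Lack of docs & metadata",
    "Catastrophes",
    "Poor data governance",
    "Media deterioration",
]

categorization_methods = [
    "Severity of risk",
    "Likelihood of occurrence",
    "Length of recovery time",
    "Resources required for mitigation",
]

# Each cell starts as None until the assessor fills it in with a
# score or a text note.
matrix = {rf: {cm: None for cm in categorization_methods}
          for rf in risk_factors}

# Example: record a qualitative note for one cell.
matrix["Lack of docs & metadata"]["Resources required for mitigation"] = (
    "Create new metadata for the library catalog"
)
```

Representing the matrix this way makes it straightforward to later score, sort, or export the cells to a spreadsheet.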

Application of the Data Risk Assessment Matrix

Three case studies are described below in which the data risk assessment matrix was used to develop a better understanding of data risks for particular resources. These cases enable evaluation of the data risk assessment framework presented in this paper, clarifying its strengths and weaknesses, and pinpointing the situations in which it can be most useful (Becker, Maemura & Moles 2020).

Case 1 – NCAR Library Analog Data Collection

The National Center for Atmospheric Research (NCAR) Library maintains an analog data collection that consists of about 300 data sets in support of atmospheric and meteorological research conducted by NCAR scientists. These assets are largely compilations of measurements and statistics published by national and international meteorological services and other kinds of government entities. Many of these assets have been in the NCAR Library’s collections for decades, and most were minimally cataloged when they were first brought into the collection. As such, the current usage of the collection is minimal. A prior assessment done by the NCAR Library and a student assistant sought to identify individual assets that were of higher potential value and interest for current science. This assessment effort resulted in a modernization prioritization based on a geographic and temporal framework, and improved metadata records for about 5% of the collection (Mayernik et al. 2018). This effort did not, however, include any kind of risk assessment related to the physical assets themselves.

The data risk assessment matrix was therefore helpful in doing a second-level priority analysis for these NCAR Library analog data assets. We used the matrix as a way to identify which risk factors were most important for these materials, and to characterize the mitigation efforts that were needed for each risk factor. In particular, we focused the risk assessment on the data assets that were previously identified as having high geospatial and temporal interest. The NCAR Library use of the matrix involved a series of steps:

  • Step 1 – A number of risk factors listed in the matrix were identified as being of most importance, with the focus being on factors that prevented or impeded the use of these data within current scientific studies. The most immediate risk factors were identified to be the “lack of use” and the “lack of documentation/metadata” for these assets. Other risks that were secondary in immediacy, but still potentially important, were: data mislabeling, questionable legal status for ownership and use, media deterioration, lack of planning, and poor data governance.
  • Step 2 – The second step was to identify which categorization methods shown in the matrix were most applicable/appropriate for the NCAR Library’s management and maintenance of this collection. The methods selected were: a) Length of recovery time, b) Who is responsible for addressing the problem, c) Nature of mitigation, and d) Resources required for mitigation.
  • Step 3 – The third step was to fill in the boxes in the matrix for the risk factors and categorization methods. For example, for the “Length of recovery time” question, we used a simple 1–3 scale to indicate relative differences in how long it would take to mitigate the two most important risk factors: “lack of use” and “lack of documentation/metadata”. As one example, some data assets were published by international agencies and therefore have title pages and documentation that are not in English. Consequently, given the lack of relevant foreign-language expertise among NCAR Library staff, developing new metadata for these resources will take more effort than for those assets that were published by English-speaking countries. For the “Resources required for mitigation” categorization method, a numerical scale was not as appropriate. Instead, we filled in the matrix with text descriptions of the resources required to mitigate the risk factors. An example entry under the “lack of documentation & metadata” risk factor was: “We would need to create new metadata for the library catalog, then transform to ISO for inclusion in NCAR DASH Search, with added challenge of needing to look at microfilm files (no current working reader in Library).”
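The scoring in Step 3 can be sketched in a few lines of code. This is a hypothetical illustration, not the NCAR Library's actual tooling; the scores and the text in `resources_required` are examples drawn from the description above:

```python
# Hypothetical sketch of Step 3: score the two most important risk
# factors on a 1-3 scale for "Length of recovery time", and use free
# text where a numeric scale is not appropriate.
recovery_time_scores = {
    "Lack of use": 1,
    "Lack of docs & metadata": 3,  # non-English assets take more effort
}

resources_required = {
    "Lack of docs & metadata": (
        "Create new metadata for the library catalog, "
        "then transform to ISO for inclusion in NCAR DASH Search"
    ),
}

# Rank the scored factors to see which mitigation would take longest.
ranked = sorted(recovery_time_scores, key=recovery_time_scores.get,
                reverse=True)
```

Ranking the scored cells in this way turns the matrix into a simple prioritization rubric, as discussed below.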

In summary, the matrix was very useful as “something to think with.” In other words, it jump-started the process for doing the risk assessment because the NCAR Library staff did not need to spend time developing a comprehensive list of risk factors that may apply for these data, or brainstorm about how to categorize those risks. The risk factor matrix provided a ready-made starting point for the assessment. Because the matrix does not dictate how the cells should be filled in, the NCAR Library staff made decisions about how to apply the matrix for each categorization method that was chosen. The matrix structure could potentially be applied or customized to create a prioritization rubric, by supporting the creation of a numeric scoring process for categories where that is appropriate.

Case 2 – Mohonk Preserve Daniel Smiley Research Library

Mohonk Preserve is a land trust and nature preserve in New Paltz, New York covering more than 8,000 acres of a northern section of the Appalachian Mountains known as the Shawangunk Mountains. Mohonk Preserve’s conservation science division, the Daniel Smiley Research Center (DSRC), is affiliated with the Organization of Biological Field Stations (OBFS) and acts as a NOAA Climate Observation Center. DSRC staff and citizen scientists carry out a variety of long-term monitoring projects and manage an extensive archive of historical observations. The archive houses 60,000 physical items, 9,000 photographs, 86 years of natural history observations, 123 years of daily weather data, and a research library of legacy titles. The physical items include more than 3,000 herbarium specimens, 107 bird specimens, 140 butterfly specimens, 139 mammal specimens, 400 arthropod specimens, and over 14,000 index cards with handwritten and typed observations. Digitization of the archive holdings is ongoing, and the packaging and publishing of datasets in the Environmental Data Initiative is a priority (Mohonk Preserve et al. 2018a, 2018b, 2019). These data and natural history collections underpin the Mohonk Preserve’s land management and stewardship and have been crucial to an increasing number of scientific publications (e.g., Cook et al. 2009; Cook et al. 2008; Charifson et al. 2015; Richardson et al. 2016), but the collections remain underutilized.

The data rescue effort for the archives has largely consisted of digitization and cataloging. Hence, the data risk assessment matrix was used to guide the prioritization of datasets for publication and to assess other data rescue needs and considerations for the archives. The most critical risk factors identified through the process were ‘lack of documentation & metadata’, ‘loss of knowledge’, and ‘lack of use.’ To address the lack of use, we collaboratively developed a prioritization of the data holdings for publication in a repository, based on the value of the data for scientific investigations, the temporal coverage of each dataset, and an assessment of the resources required for the digitization, packaging, and publishing of the relevant dataset.

We also realized through the data risk matrix process that many of our risk factors are interdependent. For example, the lack of documentation may not be because the documentation does not exist in the library, but rather that it may not be discoverable in the archives due to incomplete digitization or cataloging of the relevant records or field notes. During the assessment process for our vernal pool monitoring dataset (Mohonk Preserve et al. 2019), we discovered previously unknown environmental quality notes in narrative sections of an undigitized collection of field notes. This supported the current emphasis on the digitization and cataloging of the holdings and suggested areas of high importance, particularly the narrative sections of field notebooks. Additionally, the lack of documentation and metadata is directly related to the loss of knowledge through leadership transitions. Like many long-term ongoing collections projects, metadata and documentation, particularly related to data collection protocols, are held as tacit knowledge by key stakeholders who have been involved with the project for an extended period of time. The loss of those stakeholders or their knowledge, through retirement or employment changes, poses a significant risk to the long-term value of the associated data (Michener et al. 1997).

Because the holdings largely consist of physical items, a subset of the risk factors in the matrix were not directly applicable to the collections but had corollaries in physical collections management. For example, bit rot and data corruption are not a concern for the physical items, but pests present a similar concern that needs mitigation in a physical archive setting. Additionally, storage hardware breakdown is not directly applicable to herbarium collections, but ensuring that the mounting sheets are acid-free is key to protecting the specimens and preventing deterioration over time. Considering physical risks to the collection media remains a crucial aspect of managing and planning for the future of physical specimen holdings. One of the key risk areas identified through the assessment was the loss of knowledge and documentation due to retirement, so planning to mitigate this risk is ongoing. Overall, the matrix provided a helpful starting point for guiding conversations relating to the stewardship of the archives and proactively planning and allocating resources to make the data more accessible to scientists and researchers.

Case 3 – EDGI Response to the Deer Park Chemical Fire

On March 17, 2019, tanks of chemicals at the Intercontinental Terminals Company (ITC) in Deer Park, Texas, caught fire and began a blaze that would last several days, emitting a chemical-laden plume of smoke over surrounding communities. The Environmental Data & Governance Initiative (EDGI) was approached for assistance in rapid-response archival backup of digital environmental data relevant to the fire in case of future tampering or loss of availability.

There were two major causes for concern: (1) evident tampering, as the closest air quality monitor was taken down during the fire, and (2) potential conflict of interest, as the entity furnishing the data might have some culpability in a future legal case using the data. The approaching organization hoped to use the saved data as evidence in legal cases that may take several years to develop (potentially due to the long timespan for benzene-related illnesses to surface in then-students at the local school and workers in nearby factories).

On a limited timeline and with little capacity, EDGI needed to downselect from hundreds to thousands of possibly relevant data sources (including air and water quality monitors affected by the fire’s plume, and plans and response documents surrounding the handling of the fire). The primary mission was to ensure that the data of greatest concern was backed up in a legally legitimate (traceable) format that would remain usable for a decade or more.

The data risk matrix from this paper was not used at the time of the rescue; it was applied retrospectively, as described below.

Prioritization.

The approaching organization suggested a few directories of static data to archive. With additional investigation, EDGI also found some API-accessible structured data from the air monitors that was updated daily.

The information proposed for potential rescue included:

  • Data from the Deer Park air quality monitor that was taken down
  • Data from other nearby air quality monitors
  • Air quality monitors downstream of the plume (potentially very many of them, as the plume traveled more than 20 miles)
  • Three years of back-data from any air quality monitors, to establish a baseline
  • Water quality monitors (local, downstream, and down-plume), in case relevant (no evidence of contamination yet, but the situation was still developing), and three years of back data to establish a baseline
  • Future data from any monitors, to track the still-developing situation and archive it in case of any present risk
  • Contextual information: air sampling plans, disaster response plans, air and water quality sampling maps, and PDFs of additional air and water quality sampling from entities other than those providing the API-callable data

There was no formal review process for deciding what to save. There was some brief internal discussion of technical feasibility and potential environmental justice-focused mapping efforts, but the major anticipated use for the saved data was the legal case. The whole process from request to data backup took just a few days. Ultimately, EDGI’s choices of data to save depended primarily on the abilities and assessment of the two volunteers available. The volunteers used the skills they had and their best intuitions, lacking a clear prioritization among the different data that could be saved.

Applying the data risk matrix to this situation, the two major risk factors can be immediately identified as “catastrophe” and “political interference”. Both risk factors are relevant, likely, and potentially catastrophic in effect. This highlights the urgency and source of the risk.

The risk matrix is less helpful for prioritizing which data to save under capacity and urgency constraints. The matrix identifies the type and intensity of risk, but since all of the data was equally high-risk in this use case, the context of the data and its use case (evidence in a far-future legal case) were necessary for the tasks of identifying, locating, and prioritizing data to save. This was done based on the best assessment and abilities of the available volunteers.

Ultimately, EDGI saved:

  • Structured data from the Deer Park and nearby Lynchburg Ferry air quality monitors: saved with metadata to IPFS via Qri (qri.io), with a script to keep pulling updates
  • All of the PDF data (primarily directories of 20–100 links, typically to PDFs, including maps, images, narratives, and tables of data): saved to the Internet Archive as a full site snapshot

Assessing risks to rescued data

Following the data rescue operation, this risk matrix was used to assess ongoing risks to the repositories of rescued data: (1) the PDF data saved to the Internet Archive and (2) the structured data from air quality monitors saved to IPFS. The risk matrix was very effective for identification of vulnerabilities and potential next steps to better secure the data.

The full matrix (all of the categories and all of the risk factors) was applied twice: to the PDF data saved to the Internet Archive, and to the structured data saved on the decentralized web (IPFS). A numeric scale, from 1 (low) to 3 (high), was used to rate each risk factor against each categorization method. For example, the risk factor of Media Deterioration was rated 3 (high) for Severity of Risk, but 1 (low) for Likelihood of Occurrence. This numeric rating was important to the use of the full matrix: instead of removing columns as irrelevant, they could be down-rated where the risk was low.
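As a hypothetical illustration of this rating scheme, the cells for a couple of risk factors might be recorded as follows. The Media Deterioration scores mirror the example above, while the File Format Obsolescence scores are invented placeholders:

```python
# Illustrative sketch of the 1 (low) to 3 (high) rating scheme: every
# cell of the full matrix gets a score, so less-relevant columns are
# down-rated rather than removed from the matrix.
pdf_archive_ratings = {
    "Media deterioration": {
        "Severity of risk": 3,          # high impact if media fail
        "Likelihood of occurrence": 1,  # low for the Internet Archive copy
    },
    "File format obsolescence": {
        "Severity of risk": 2,          # placeholder value
        "Likelihood of occurrence": 2,  # placeholder value
    },
}
```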

Use of the matrix immediately highlighted the difference in risks important to the data stored on IPFS versus the Internet Archive. For example, data on the Internet Archive is well-governed and reasonably easy to find, but much more susceptible to natural disaster and hardware deterioration than the data on IPFS. IPFS is a new technology designed to store data across many physical locations– so it’s very resilient to location-based risks, but its format may become obsolete as the technology develops.

The risk matrix is particularly useful when combined with spreadsheet tools. For example, a quick to-do list for EDGI as a data manager can be produced using a formula such as:

  • Likelihood of occurrence > 1
  • Resources for mitigation < 3
  • Type of action: proactive
  • Responsible party: EDGI
  • Print mitigation action for rows where all of the above are true
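The same filter can be sketched outside a spreadsheet. The sketch below is a hypothetical Python version of the formula above; the row contents, field names, and mitigation actions are invented for illustration:

```python
# Hypothetical rows of a filled-in risk matrix, one dictionary per
# risk factor. Field names and values are invented for illustration.
rows = [
    {"risk": "Media deterioration", "likelihood": 1, "resources": 2,
     "action_type": "proactive", "responsible": "EDGI",
     "mitigation": "Refresh storage media"},
    {"risk": "Political interference", "likelihood": 3, "resources": 2,
     "action_type": "proactive", "responsible": "EDGI",
     "mitigation": "Mirror data to an independent archive"},
]

# Apply the to-do list formula: likely, affordable, proactive actions
# for which EDGI is the responsible party.
todo = [r["mitigation"] for r in rows
        if r["likelihood"] > 1
        and r["resources"] < 3
        and r["action_type"] == "proactive"
        and r["responsible"] == "EDGI"]
```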

Overall, the risk matrix outlined in this paper is a very useful tool for identifying risks to data and prioritizing next steps for mitigation, as long as the user has or can assume control over the data. However, in a data rescue use case, this risk matrix must be supplemented by additional context in order to prioritize which at-risk data should be saved when capacity is limited.

Conclusions and Lessons Learned

Risk assessments are instrumental for ensuring that existing data collections continue to be useful for scientific research and societal applications. Risk assessments are also an essential component of data rescue efforts in which interventions take place to prevent or minimize data loss. The data risk assessment framework presented in this paper provides a platform from which risk assessments can quickly begin.

To close out this paper, we discuss some observations and lessons learned in developing and applying the data risk assessment matrix. Data risk assessments can get significantly more in-depth and detailed than the basic template presented in Table 3 and Appendix I. As one example, the US Geological Survey (USGS) has undertaken a substantive project to create risk calculations for USGS-held data collections based on a number of criteria (USGS 2019). The USGS process has involved the development of detailed formulas and weighting schemes to produce quantified assessments of data risk. The risk assessment matrix presented in this paper does not provide “out of the box” quantification measures or data risk prioritizations at the level of detail of the USGS project. The data risk matrix does, however, provide the foundations for an individual or organization to develop a more customized risk assessment rubric. The specifics of how risks were quantified or qualified, and how they were prioritized, varied across the different uses of the matrix presented in the three case studies.

The three cases did demonstrate a common use pattern for the data risk matrix. The first step in each case was to review Tables 1 and 2 to determine which risk factors and categorization methods were most relevant. Clearly not all of the risk factors are applicable to all cases, and some of the risk factors are closely related, such as the “lack of documentation & metadata” and “lack of provenance information.” Once the risk factors and categorization methods have been filtered down into a smaller matrix, the next step is to determine how to fill in the matrix cells for particular datasets or collections. It may not be obvious how this would work for some data collections. Our cases involved using a mix of quantitative, qualitative, and ordinal rankings (such as using “high, medium, and low” designations for particular cells). This step may take some trial and error by the matrix user(s) to determine ranking approaches that are the most useful.

The third step is then to use the cell values in the matrix to guide conversations and decisions about risk mitigation priorities. In this sense, the matrix exercise can provide a high-level overview of data collections, the risks they may face, and the relative urgency and challenges that those risks present to the data stewards. The matrix can serve as a common reference point for discussions of resource allocations and stewardship priorities. However, as exemplified in the EDGI use case, prioritization in real time, as would be required during catastrophic events such as disasters or wars, particularly where there may be political interference, is difficult if not impossible. As such, preventing or minimizing data loss requires pre-planning at a scale rarely available.

The goal in creating this data risk assessment matrix has been to provide a light-weight way for data collections to be reviewed, documented, and evaluated against a set of known data risk factors. As the understanding of the value that scientific data have for research and societal uses increases, many initiatives recognize that “old data is the new data” (NIWA 2019). Risk assessments are critical to ensure that “old data” can become “new data,” and are also critical to ensure that new data can continue to be newly useful into the future.

We list EDGI and the ESIP Data Stewardship Committee as authors due to the contributions of many individuals from both organizations to the work described in this paper. The named authors are the individuals involved in each organization who contributed directly to the paper’s text.  

The workshop was organized under the auspices of the Research Data Alliance (RDA) and the Committee on Data (CODATA) within the International Science Council, http://www.codata.org/task-groups/data-at-risk/dar-workshops .  

http://wiki.esipfed.org/index.php/Preservation_and_Stewardship .  

Appendix I – Data Risk Assessment Template

RISK FACTORS
Lack of use
Loss of funding for archive
Loss of funding for specific datasets
Loss of knowledge
Lack of docs & metadata
Data mislabeling
Catastrophes
Poor data governance
Legal status for ownership and use
Media deterioration
Missing files
Dependence on service provider
Accidental deletion
Lack of planning
Cybersecurity breach
Over-abundance
Political interference
Lack of provenance information
File format obsolescence
Storage hardware breakdown
Bit rot and data corruption

Acknowledgements

This project was organized and supported by the Data Stewardship Committee within the Earth Science Information Partners (ESIP). We thank ESIP and the committee participants for feedback on the project at numerous points in the past few years.

The work of Robert R. Downs was supported by the National Aeronautics and Space Administration under Contract 80GSFC18C0111 for the Socioeconomic Data and Applications Distributed Active Archive Center (DAAC).

Alexis Garretson acknowledges the support of the Environmental Data Initiative Summer Fellowship program and the Earth Science Information Partners Community Fellows Program. Alexis also acknowledges Mohonk Preserve staff, particularly the staff of the Daniel Smiley Research Center: Elizabeth C. Long, Megan Napoli, and Natalie Feldsine. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1842191. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

The work of Chung-Yi (Sophie) Hou was supported by the National Center for Atmospheric Research.

The National Center for Atmospheric Research is sponsored by the U.S. National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of NCAR or the NSF.

Competing Interests

The authors have no competing interests to declare.

Allen, L, Stewart, C and Wright, S. 2017. Strategic open data preservation. College & Research Libraries News, 78(9). https://crln.acrl.org/index.php/crlnews/article/view/16771/18312. DOI: https://doi.org/10.5860/crln.78.9.482

Anderson, WL, Faundeen, JL, Greenberg, J and Taylor, F. 2011. Metadata for data rescue and data at risk. In: Conference on Ensuring Long-Term Preservation in Adding Value to Scientific and Technical Data. http://hdl.handle.net/2152/20056

Aven, T. 2016. Risk assessment and risk management: Review of recent advances on their foundation. European Journal of Operational Research, 253(1): 1–13. DOI: https://doi.org/10.1016/j.ejor.2015.12.023

Becker, C, Maemura, E and Moles, N. 2020. The design and use of assessment frameworks in digital curation. Journal of the Association for Information Science and Technology (JASIST), 71(1): 55–68. DOI: https://doi.org/10.1002/asi.24209

Cervone, HF. 2006. Project risk management. OCLC Systems & Services: International Digital Library Perspectives, 22(4): 256–62. DOI: https://doi.org/10.1108/10650750610706970

Charifson, DM, Huth, PC, Thompson, JE, Angyal, RK, Flaherty, MJ and Richardson, DC. 2015. History of Fish Presence and Absence Following Lake Acidification and Recovery in Lake Minnewaska, Shawangunk Ridge, NY. Northeastern Naturalist, 22: 762–781. DOI: https://doi.org/10.1656/045.022.0411

Chodacki, J. 2018. Data Mirror: Complementing data producers. Against the Grain, 29: 35. https://escholarship.org/uc/item/2n1715ff. DOI: https://doi.org/10.7771/2380-176X.7877

CoreTrustSeal Data Repository Certification. 2018. https://www.coretrustseal.org/.

Cornelius, KB and Pasquetto, IV. 2018. “What data?” Records and data policy coordination during presidential transitions. In: Transforming Digital Worlds, 155–163. Springer International Publishing. DOI: https://doi.org/10.1007/978-3-319-78105-1_20

Cook, BI, Cook, ER, Anchukaitis, KJ, Huth, PC, Thompson, JE and Smiley, SF. 2009. A Homogeneous Record (1896–2006) of Daily Weather and Climate at Mohonk Lake, New York. Journal of Applied Meteorology and Climatology, 49: 544–555. DOI: https://doi.org/10.1175/2009JAMC2221.1

Cook, BI, Cook, ER, Huth, PC, Thompson, JE, Forster, A and Smiley, D. 2008. A cross-taxa phenological dataset from Mohonk Lake, NY and its relationship to climate. International Journal of Climatology, 28: 1369–1383. DOI: https://doi.org/10.1002/joc.1629

Data Rescue Interest Group. 2018. Research Data Alliance. https://rd-alliance.org/groups/data-rescue.html.

Dennis, B. 2016. Scientists are frantically copying U.S. climate data, fearing it might vanish under Trump. The Washington Post, Dec. 13, 2016. https://www.washingtonpost.com/news/energy-environment/wp/2016/12/13/scientists-are-frantically-copying-u-s-climate-data-fearing-it-might-vanish-under-trump/.

Donaldson, DR, Dillo, I, Downs, R and Ramdeen, S. 2017. The perceived value of acquiring Data Seals of Approval. International Journal of Digital Curation, 12(1). DOI: https://doi.org/10.2218/ijdc.v12i1.481

Downs, RR and Chen, RS. 2017. Curation of scientific data at risk of loss: Data rescue and dissemination. In: Johnston, L (ed.), Curating Research Data. Volume One, Practical Strategies for Your Digital Repository. Association of College and Research Libraries. DOI: https://doi.org/10.7916/D8W09BMQ

Faundeen, J. 2017. Developing criteria to establish trusted digital repositories. Data Science Journal, 16: 22. DOI: https://doi.org/10.5334/dsj-2017-022

Gallaher, D, Campbell, GG, Meier, W, Moses, J and Wingo, D. 2015. The process of bringing dark data to light: The rescue of the early Nimbus satellite data. GeoResJ, 6: 124–134. DOI: https://doi.org/10.1016/j.grj.2015.02.013

Graf, R, Ryan, HM, Houzanme, T and Gordea, S. 2017. A decision support system to facilitate file format selection for digital preservation. Libellarium: Journal for the Research of Writing, Books, and Cultural Heritage Institutions, 9(2). DOI: https://doi.org/10.15291/libellarium.v9i2.274

Griffin, RE. 2015. When are old data new data? GeoResJ, 6: 92–97. DOI: https://doi.org/10.1016/j.grj.2015.02.004

Guidelines to the Rescue of Data At Risk. 2017. Research Data Alliance. https://www.rd-alliance.org/guidelines-rescue-data-risk.

Hsu, L, Lehnert, KA, Goodwillie, A, Delano, JW, Gill, JB, Tivey, MA, Ferrini, VL, Carbotte, SM and Arko, RA. 2015. Rescue of long-tail data from the ocean bottom to the Moon: IEDA Data Rescue Mini-Awards. GeoResJ, 6: 108–114. DOI: https://doi.org/10.1016/j.grj.2015.02.012

[ISO] International Organization for Standardization. 2012a. ISO 14721:2012 (CCSDS 650.0-M-2): Space data and information transfer systems — Open Archival Information System (OAIS) — Reference model. https://www.iso.org/standard/57284.html.

[ISO] International Organization for Standardization. 2012b. ISO 16363:2012 (CCSDS 652.0-R-1): Space data and information transfer systems — Audit and certification of trustworthy digital repositories. https://www.iso.org/standard/56510.html.

Janz, M. 2018. Maintaining access to public data: Lessons from Data Refuge. Against the Grain, 29: 30–33. DOI: https://doi.org/10.31229/osf.io/yavzh

Knapp, KR, Bates, JJ and Barkstrom, B. 2007. Scientific data stewardship: Lessons learned from a satellite data rescue effort. Bulletin of the American Meteorological Society, 88(9): 1359–1362. DOI: https://doi.org/10.1175/BAMS-88-9-1359

Lamdan, S. 2018. Lessons from Datarescue: The limitations of grassroots climate change data preservation and the need for federal records law reform. University of Pennsylvania Law Review Online, 166(1): Article 12. https://scholarship.law.upenn.edu/penn_law_review_online/vol166/iss1/12.

Levitus, S. 2012. The UNESCO-IOC-IODE “Global Oceanographic Data Archeology and Rescue” (GODAR) Project and “World Ocean Database” Project. Data Science Journal, 11: 46–71. DOI: https://doi.org/10.2481/dsj.012-014

Maemura, E, Moles, N and Becker, C. 2017. Organizational assessment frameworks for digital preservation: A literature review and mapping. Journal of the Association for Information Science and Technology, 68(7): 1619–1637. DOI: https://doi.org/10.1002/asi.23807

Mayernik, MS, Downs, RR, Duerr, R, Hou, C-Y, Meyers, N, Ritchey, N, Thomer, A and Yarmey, L. 2017. Stronger together: The case for cross-sector collaboration in identifying and preserving at-risk data. Figshare. DOI: https://doi.org/10.6084/m9.figshare.4816474.v1

Mayernik, MS, Huddle, J, Hou, C-Y and Phillips, J. 2018. Modernizing library metadata for historical weather and climate data collections. Journal of Library Metadata, 17(3/4): 219–239. DOI: https://doi.org/10.1080/19386389.2018.1440927

McGovern, NY. 2017. Data rescue. ACM SIGCAS Computers and Society, 47(2): 19–26. DOI: https://doi.org/10.1145/3112644.3112648

Michener, WK, et al. 1997. Nongeospatial metadata for the ecological sciences. Ecological Applications, 7(1): 330–342. DOI: https://doi.org/10.1890/1051-0761(1997)007[0330:NMFTES]2.0.CO;2

Mohonk Preserve, Belardo, C, Feldsine, N, Forester, A, Huth, P, Long, E, Morgan, V, Napoli, M, Pierce, E, Richardson, D, Smiley, D, Smiley, S and Thompson, J. 2018a. History of Acid Precipitation on the Shawangunk Ridge: Mohonk Preserve Precipitation Depths and pH, 1976 to Present. Environmental Data Initiative. DOI: https://doi.org/10.6073/pasta/734ea90749e78613452eacec489f419c

Mohonk Preserve, Forester, A, Huth, P, Long, E, Morgan, V, Napoli, M, Pierce, E, Smiley, D, Smiley, S and Thompson, J. 2018b. Mohonk Preserve Ground Water Springs Data, 1991 to Present. Environmental Data Initiative. DOI: https://doi.org/10.6073/pasta/928feed7ee748509ab065de7e3791966

Mohonk Preserve, Feldsine, N, Forester, A, Garretson, A, Huth, P, Long, E, Napoli, M, Pierce, E, Smiley, D, Smiley, S and Thompson, J. 2019. Mohonk Preserve Amphibian and Water Quality Monitoring Dataset at 11 Vernal Pools from 1931–Present. Environmental Data Initiative. DOI: https://doi.org/10.6073/pasta/864aea25998b73c5d1a5b5f36cb6583e

National Research Council. 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: The National Academies Press. DOI: https://doi.org/10.17226/366  

NIWA. 2019. The week it snowed everywhere. NIWA Media Release , Nov. 21, 2019. https://niwa.co.nz/news/the-week-it-snowed-everywhere .  

Peng, G. 2018. The state of assessing data stewardship maturity – An overview. Data Science Journal , 17: 7. DOI: https://doi.org/10.5334/dsj-2018-007  

Peng, G, Milan, A, Ritchey, NA, Partee, RP, II, Zinn, S, McQuinn, E, Casey, KS, Lemieux, P, III, Ionin, R, Jones, P, Jakositz, A and Collins, D. 2019. Practical application of a data stewardship maturity matrix for the NOAA OneStop project. Data Science Journal , 18: 41. DOI: https://doi.org/10.5334/dsj-2019-041  

Peng, G, Privette, JL, Kearns, EJ, Ritchey, NA and Ansari, S. 2015. A unified framework for measuring stewardship practices applied to digital environmental datasets. Data Science Journal , 13: 231–253. DOI: https://doi.org/10.2481/dsj.14-049  

Pienta, AM and Lyle, J. 2018. Retirement in the 1950s: Rebuilding a longitudinal research database. IASSIST Quarterly , 42(1). DOI: https://doi.org/10.29173/iq19  

Poli, P, Dee, DP, Saunders, R, John, VO, Rayer, P, Schulz, J, Bojinski, S, et al. 2017. Recent advances in satellite data rescue. Bulletin of the American Meteorological Society , 98(7): 1471–1484. DOI: https://doi.org/10.1175/BAMS-D-15-00194.1  

Ramapriyan, HK. 2017. NASA’s EOSDIS: Trust and Certification. Presented at: 2017 ESIP Summer Meeting, Bloomington, IN. Figshare. DOI: https://doi.org/10.6084/m9.figshare.5258047.v1  

Richardson, DC, Charifson, DM, Stanson, VJ, Stern, EM, Thompson, JE and Townley, LA. 2016. Reconstructing a trophic cascade following unintentional introduction of golden shiner to Lake Minnewaska, New York, USA. Inland Waters , 6: 29–33. DOI: https://doi.org/10.5268/IW-6.1.915  

Ryan, H. 2014. Occam’s razor and file format endangerment factors. In: Proceedings of the 11th International Conference on Digital Preservation (iPres), October 6–10, 2014, Melbourne, Australia, 179–188. https://www.nla.gov.au/sites/default/files/ipres2014-proceedings-version_1.pdf .  

Slovic, P. 1999. Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield. Risk Analysis , 19(4): 689–701. DOI: https://doi.org/10.1023/A:1007041821623  

Thompson, CA, Robertson, WD and Greenberg, J. 2014. Where have all the scientific data gone? LIS perspective on the data-at-risk predicament. College & Research Libraries , 75(6): 842–861. DOI: https://doi.org/10.5860/crl.75.6.842  

Usability.gov . 2019. Card Sorting. US Department of Health & Human Services. https://www.usability.gov/how-to-and-tools/methods/card-sorting.html .  

USGS. 2019. USGS Data at Risk: Expanding Legacy Data Inventory and Preservation Strategies. US Geological Survey. https://www.sciencebase.gov/catalog/item/58b5ddc3e4b01ccd54fde3fa .  

Varinsky, D. 2017. Scientists across the US are scrambling to save government research in ‘Data Rescue’ events. Business Insider , Feb. 11, 2017. http://www.businessinsider.com/data-rescue-government-data-preservation-efforts-2017-2 .  

Yakel, E, Faniel, I, Kriesberg, A and Yoon, A. 2013. Trust in digital repositories. International Journal of Digital Curation , 8(1). DOI: https://doi.org/10.2218/ijdc.v8i1.251  

Yoon, A. 2017. Data reusers’ trust development. Journal of the Association for Information Science and Technology , 68(4): 946–956. DOI: https://doi.org/10.1002/asi.23730  

Zimmerman, DE and Akerelrea, C. 2002. A group card sorting methodology for developing informational web sites. In: Proceedings IEEE International Professional Communication Conference, 437–445. IEEE. DOI: https://doi.org/10.1109/IPCC.2002.1049127  


Open access. Published: 08 April 2024.

A case study on the relationship between risk assessment of scientific research projects and related factors under the Naive Bayesian algorithm

Xuying Dong & Wanlin Qiu

Scientific Reports, volume 14, Article number: 8244 (2024)

Subjects: Computer science; Mathematics and computing

This paper delves into the nuanced dynamics influencing the outcomes of risk assessment (RA) in scientific research projects (SRPs), employing the Naive Bayes algorithm. The methodology involves the selection of diverse SRP cases, gathering data encompassing project scale, budget investment, team experience, and other pertinent factors. The paper advances the application of the Naive Bayes algorithm by introducing enhancements, specifically integrating the Tree-augmented Naive Bayes (TANB) model. This augmentation serves to estimate risk probabilities for different research projects, shedding light on the intricate interplay and contributions of various factors to the RA process. The findings underscore the efficacy of the TANB algorithm, demonstrating commendable accuracy (average accuracy 89.2%) in RA for SRPs. Notably, budget investment (regression coefficient: 0.68, P < 0.05) and team experience (regression coefficient: 0.51, P < 0.05) emerge as significant determinants of RA outcomes. Conversely, the impact of project size (regression coefficient: 0.31, P < 0.05) is relatively modest. This paper furnishes a concrete reference framework for project managers, facilitating informed decision-making in SRPs. By comprehensively analyzing the influence of various factors on RA, the paper not only contributes empirical insights to project decision-making but also elucidates the intricate relationships between different factors. The research advocates for heightened attention to budget investment and team experience when formulating risk management strategies. This strategic focus is posited to enhance the precision of RAs and the scientific foundation of decision-making processes.


Introduction

Scientific research projects (SRPs) stand as pivotal drivers of technological advancement and societal progress in the contemporary landscape 1,2,3. The dynamism of SRP success hinges on a multitude of internal and external factors 4. Central to effective project management, risk assessment (RA) in SRPs plays a critical role in identifying and quantifying potential risks. This process not only aids project managers in formulating strategic decision-making approaches but also enhances the overall success rate and benefits of projects. In a recent contribution, Salahuddin 5 provides essential numerical techniques indispensable for conducting RAs in SRPs. Building on this foundation, Awais and Salahuddin 6 delve into the assessment of risk factors within SRPs, notably introducing the consideration of activation energy through an exploration of the radioactive magnetohydrodynamic model. Further expanding the scope, Awais and Salahuddin 7 undertake a study on the natural convection of coupled stress fluids. However, RA of SRPs confronts a myriad of challenges, underscoring the critical need for novel methodologies 8. Primarily, the intricate nature of SRPs renders precise RA exceptionally complex and challenging. The project’s multifaceted dimensions, encompassing technology, resources, and personnel, are intricately interwoven, posing a formidable challenge for traditional assessment methods to comprehensively capture all potential risks 9. Furthermore, the intricate and diverse interdependencies among various project factors contribute to the complexity of these relationships, thereby limiting the efficacy of conventional methods 10,11,12. Traditional approaches often focus solely on the individual impact of diverse factors, overlooking the nuanced relationships that exist between them, an inherent limitation in the realm of RA for SRPs 13,14,15.

The pursuit of a methodology capable of effectively assessing project risks while elucidating the intricate interplay of different factors has emerged as a focal point in SRPs management 16,17,18. This approach necessitates a holistic consideration of multiple factors, their quantification in contributing to project risks, and the revelation of their correlations. Such an approach enables project managers to more precisely predict and respond to risks. According to Marx-Stoelting et al. 19, current approaches for the assessment of environmental and human health risks due to exposure to chemical substances have served their purpose reasonably well. Additionally, Awais et al. 20 highlight the significance of enthalpy changes in SRPs risk considerations, while Awais et al. 21 delve into the comprehensive exploration of risk factors in Eyring-Powell fluid flow in magnetohydrodynamics, particularly addressing viscous dissipation and activation energy effects. The Naive Bayesian algorithm, recognized for its prowess in probability and statistics, has yielded substantial results in information retrieval and data mining in recent years 22. Leveraging its advantages in classification and probability estimation, the algorithm presents a novel approach for RA of SRPs 23. Integrating probability analysis into RA enables a more precise estimation of project risks by utilizing existing project data and harnessing the capabilities of the Naive Bayesian algorithm. This method facilitates a quantitative, statistical analysis of various factors, effectively navigating the intricate relationships between them, thereby enhancing the comprehensiveness and accuracy of RA for SRPs.

This paper seeks to employ the Naive Bayesian algorithm to estimate the probability of risks by carefully selecting distinct research project cases and analyzing multidimensional data, encompassing project scale, budget investment, and team experience. Concurrently, Multiple Linear Regression (MLR) analysis is applied to quantify the influence of these factors on the assessment results. The paper places particular emphasis on exploring the intricate interrelationships between different factors, aiming to provide a more specific and accurate reference framework for decision-making in SRPs management.
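The MLR step described above can be sketched in a few lines of Python. The data below are synthetic and the underlying coefficients are invented for the illustration, so this shows only the mechanics of quantifying each factor's influence, not the study's actual results.

```python
import numpy as np

# Synthetic illustration (invented data): quantify how project scale,
# budget investment, and team experience relate to a risk score via
# multiple linear regression (ordinary least squares).
rng = np.random.default_rng(0)
n = 200
scale = rng.uniform(0, 1, n)        # normalized project scale
budget = rng.uniform(0, 1, n)       # normalized budget investment
experience = rng.uniform(0, 1, n)   # normalized team experience

# Assumed data-generating process for the demo; coefficients are made up.
risk = 0.3 * scale + 0.7 * budget + 0.5 * experience + rng.normal(0, 0.05, n)

# Design matrix with an intercept column, solved by least squares.
X = np.column_stack([np.ones(n), scale, budget, experience])
coef, _, _, _ = np.linalg.lstsq(X, risk, rcond=None)

print("estimated coefficients (scale, budget, experience):",
      np.round(coef[1:], 2))
```

The fitted coefficients recover the assumed influence of each factor; in the paper's setting, the same regression is what assigns relative weight to budget, experience, and scale.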

This paper introduces several innovations and contributions to the field of RA for SRPs:

Comprehensive Consideration of Key Factors: Unlike traditional research that focuses on a single factor, this paper comprehensively considers multiple key factors, such as project size, budget investment, and team experience. This holistic analysis enhances the realism and thoroughness of RA for SRPs.

Introduction of Tree-Enhanced Naive Bayes Model: The naive Bayes algorithm is introduced and further improved through the proposal of a tree-enhanced naive Bayes model. This algorithm exhibits unique advantages in handling uncertainty and complexity, thereby enhancing its applicability and accuracy in the RA of scientific and technological projects.

Empirical Validation: The effectiveness of the proposed method is not only discussed theoretically but also validated through empirical cases. The analysis of actual cases provides practical support and verification, enhancing the credibility of the research results.

Application of MLR Analysis: The paper employs MLR analysis to delve into the impact of various factors on RA. This quantitative analysis method adds specificity and operability to the research, offering a practical decision-making basis for scientific and technological project management.

Discovery of New Connections and Interactions: The paper uncovers novel connections and interactions, such as the compensatory role of team experience for budget-related risks and the impact of the interaction between project size and budget investment on RA results. These insights provide new perspectives for decision-making in technology projects, contributing significantly to the field of RA for SRPs in terms of both importance and practical value.

The paper is structured as follows: “Introduction” briefly outlines the significance of RA for SRPs. Existing challenges within current research are addressed, and the paper’s core objectives are elucidated. A distinct emphasis is placed on the innovative aspects of this research compared to similar studies. The organizational structure of the paper is succinctly introduced, providing a brief overview of each section’s content. “Literature review” provides a comprehensive review of relevant theories and methodologies in the domain of RA for SRPs. The current research landscape is systematically examined, highlighting the existing status and potential gaps. Shortcomings in previous research are analyzed, laying the groundwork for the paper’s motivation and unique contributions. “Research methodology” delves into the detailed methodologies employed in the paper, encompassing data collection, screening criteria, preprocessing steps, and more. The tree-enhanced naive Bayes model is introduced, elucidating specific steps and the purpose behind MLR analysis. “Results and discussion” unfolds the results and discussions based on selected empirical cases. The representativeness and diversity of these cases are expounded upon. An in-depth analysis of each factor’s impact and interaction in the context of RA is presented, offering valuable insights. “Discussion” succinctly summarizes the entire research endeavor. Potential directions for further research and suggestions for improvement are proposed, providing a thoughtful conclusion to the paper.

Literature review

A review of RA for SRPs

In recent years, the advancement of SRPs management has led to the evolution of various RA methods tailored for SRPs. The escalating complexity of these projects poses a challenge for traditional methods, often falling short in comprehensively considering the intricate interplay among multiple factors and yielding incomplete assessment outcomes. Scholars, recognizing the pivotal role of factors such as project scale, budget investment, and team experience in influencing project risks, have endeavored to explore these dynamics from diverse perspectives. Siyal et al. 24 pioneered the development and testing of a model geared towards detecting SRPs risks. Chen et al. 25 underscored the significance of visual management in SRPs risk management, emphasizing its importance in understanding and mitigating project risks. Zhao et al. 26 introduced a classic approach based on cumulative prospect theory, offering an optional method to elucidate researchers’ psychological behaviors. Their study demonstrated the enhanced rationality achieved by utilizing the entropy weight method to derive attribute weight information under Pythagorean fuzzy sets. This approach was then applied to RA for SRPs, showcasing a model grounded in the proposed methodology. Suresh and Dillibabu 27 proposed an innovative hybrid fuzzy-based machine learning mechanism tailored for RA in software projects. This hybrid scheme facilitated the identification and ranking of major software project risks, thereby supporting decision-making throughout the software project lifecycle. Akhavan et al. 28 introduced a Bayesian network modeling framework adept at capturing project risks by calculating the uncertainty of project net present value. This model provided an effective means for analyzing risk scenarios and their impact on project success, particularly applicable in evaluating risks for innovative projects that had undergone feasibility studies.

A review of factors affecting SRPs

Within the realm of SRPs management, the assessment and proficient management of project risks stand as imperative components. Consequently, a range of studies has been conducted to explore diverse methods and models aimed at enhancing the comprehension and decision support associated with project risks. Guan et al. 29 introduced a new risk interdependence network model based on Monte Carlo simulation to support decision-makers in more effectively assessing project risks and planning risk management actions. They integrated interpretive structural modeling methods into the model to develop a hierarchical project risk interdependence network based on identified risks and their causal relationships. Vujović et al. 30 provided a new method for research in project management through careful analysis of risk management in SRPs. To confirm the hypothesis, the study focused on educational organizations and outlined specific project management solutions in business systems, thereby improving the business and achieving positive business outcomes. Muñoz-La Rivera et al. 31 described and classified the 100 identified factors based on the dimensions and aspects of the project, assessed their impact, and determined whether they were shaping or directly affecting the occurrence of research project accidents. These factors and their descriptions and classifications made significant contributions to improving the security creation of the system and generating training and awareness materials, fostering the development of a robust security culture within organizations. Nguyen et al. concentrated on the pivotal risk factors inherent in design-build projects within the construction industry. Effective identification and management of these factors enhanced project success and foster confidence among owners and contractors in adopting the design-build approach 32 . Their study offers valuable insights into RA in project management and the adoption of new contract forms. 
Nguyen and Le delineated risk factors influencing the quality of 20 civil engineering projects during the construction phase 33 . The top five risks identified encompass poor raw material quality, insufficient worker skills, deficient design documents and drawings, geographical challenges at construction sites, and inadequate capabilities of main contractors and subcontractors. Meanwhile, Nguyen and Phu Pham concentrated on office building projects in Ho Chi Minh City, Vietnam, to pinpoint key risk factors during the construction phase 34 . These factors were classified into five groups based on their likelihood and impact: financial, management, schedule, construction, and environmental. Findings revealed that critical factors affecting office building projects encompassed both natural elements (e.g., prolonged rainfall, storms, and climate impacts) and human factors (e.g., unstable soil, safety behavior, owner-initiated design changes), with schedule-related risks exerting the most significant influence during the construction phase of Ho Chi Minh City’s office building projects. This provides construction and project management practitioners with fresh insights into risk management, aiding in the comprehensive identification, mitigation, and management of risk factors in office building projects.

While existing research has made notable strides in RA for SRPs, certain limitations persist. These studies exhibit limitations in quantifying the degree of influence of various factors and in analyzing their interrelationships, and thereby fall short of offering specific and actionable recommendations. Traditional methods, due to their inherent limitations, struggle to precisely quantify risk degrees and often overlook the intricate interplay among multiple factors. Consequently, there is an urgent need for a comprehensive method capable of quantifying the impact of diverse factors and revealing their correlations. In response to this exigency, this paper introduces the TANB model, which exploits the algorithm's unique advantages in the RA of scientific and technological projects. Tailored to address the characteristics of uncertainty and complexity, the model represents a significant leap forward in enhancing applicability and accuracy. In comparison with traditional methods, the TANB model exhibits greater flexibility and a heightened ability to capture dependencies between features, thereby elevating the overall performance of RA. This innovative method emerges as a more potent and reliable tool in the realm of scientific and technological project management, furnishing decision-makers with more comprehensive and accurate support for RA.
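The structure-learning idea behind TANB can be illustrated with a small sketch: pairs of attributes are weighted by conditional mutual information given the class, and a maximum-weight spanning tree over the attributes selects each one's single attribute parent. The data, coupling strengths, and variable names below are invented for the illustration and are not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def cond_mutual_info(xi, xj, c):
    """Empirical conditional mutual information I(Xi; Xj | C), in nats."""
    cmi = 0.0
    for cv in np.unique(c):
        mask = c == cv
        pc = mask.mean()
        a_vals, b_vals = xi[mask], xj[mask]
        for a in np.unique(a_vals):
            for b in np.unique(b_vals):
                p_ab = np.mean((a_vals == a) & (b_vals == b))
                if p_ab > 0:
                    p_a = np.mean(a_vals == a)
                    p_b = np.mean(b_vals == b)
                    cmi += pc * p_ab * np.log(p_ab / (p_a * p_b))
    return cmi

# Synthetic binary attributes: x1 is strongly coupled to x0, x2 is noise.
rng = np.random.default_rng(1)
c = rng.integers(0, 2, 500)                       # class label (risk level)
x0 = rng.integers(0, 2, 500)
x1 = np.where(rng.random(500) < 0.9, x0, 1 - x0)  # mostly copies x0
x2 = rng.integers(0, 2, 500)                      # independent attribute
X = np.column_stack([x0, x1, x2])

# Edge weights: pairwise conditional mutual information given the class.
d = X.shape[1]
w = np.zeros((d, d))
for i, j in combinations(range(d), 2):
    w[i, j] = w[j, i] = cond_mutual_info(X[:, i], X[:, j], c)

# Prim's algorithm: grow the maximum-weight spanning tree from attribute 0.
in_tree, edges = {0}, []
while len(in_tree) < d:
    i, j = max(((i, j) for i in in_tree for j in range(d) if j not in in_tree),
               key=lambda e: w[e])
    edges.append((i, j))  # attribute j gets attribute i as its tree parent
    in_tree.add(j)
print("tree edges (parent, child):", edges)
```

Because x1 is strongly dependent on x0, the tree links them, which is exactly the kind of feature dependency the plain Naive Bayes independence assumption would miss.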

Research methodology

This paper centers on the latest iteration of ISO 31000, delving into the project risk management process and scrutinizing the RA for SRPs and their intricate interplay with associated factors. ISO 31000, an international risk management standard, endeavors to furnish businesses, organizations, and individuals with a standardized set of risk management principles and guidelines, defining best practices and establishing a common framework. The paper unfolds in distinct phases aligned with ISO 31000:

Risk Identification: Through data collection and preparation, a spectrum of factors related to project size, budget investment, team member experience, project duration, and technical difficulty is identified.

RA: Utilizing the Naive Bayes algorithm, the paper conducts RA for SRPs, estimating the probability distribution of various factors influencing RA results.

Risk Response: The application of the Naive Bayes model is positioned as a means to respond to risks, facilitating the formulation of apt risk response strategies based on calculated probabilities.

Monitoring and Control: Through meticulous data collection, model training, and verification, the paper illustrates the steps involved in monitoring and controlling both data and models. Regular monitoring of identified risks and responses allows for adjustments when necessary.

Communication and Reporting: Maintaining effective communication throughout the project lifecycle ensures that stakeholders comprehend the status of project risks. Transparent reporting on discussions and outcomes contributes to an informed project environment.

Data collection and preparation

In this paper, a meticulous approach is undertaken to select representative research project cases, adhering to stringent screening criteria. Additionally, a thorough review of existing literature is conducted and tailored to the practical requirements of SRPs management. According to Nguyen et al., these factors play a pivotal role in influencing the RA outcomes of SRPs 35 . Furthermore, research by He et al. underscored the significant impact of team members’ experience on project success 36 . Therefore, in alignment with our research objectives and supported by the literature, this paper identifies variables such as project scale, budget investment, team member experience, project duration, and technical difficulty as the focal themes. To ensure the universality and scientific rigor of our findings, the paper adheres to stringent selection criteria during the project case selection process. After preliminary screening of SRPs completed in the past 5 years, considering factors such as project diversity, implementation scales, and achieved outcomes, five representative projects spanning diverse fields, including engineering, medicine, and information technology, are ultimately selected. These project cases are chosen based on their capacity to represent various scales and types of SRPs, each possessing a typical risk management process, thereby offering robust and comprehensive data support for our study. The subsequent phase involves detailed data collection on each chosen project, encompassing diverse dimensions such as project scale, budget investment, team member experience, project cycle, and technical difficulty. The collected data undergo meticulous preprocessing to ensure data quality and reliability. The preprocessing steps comprised data cleaning, addressing missing values, handling outliers, culminating in the creation of a self-constructed dataset. 
The dataset encompasses over 500 SRPs across diverse disciplines and fields, ensuring statistically significant and universal outcomes. Particular emphasis is placed on ensuring dataset diversity, incorporating projects of varying scales, budgets, and team experience levels. This comprehensive coverage ensures the representativeness and credibility of the study on RA in SRPs. New influencing factors are introduced to expand the research scope, including project management quality (such as time management and communication efficiency), historical success rate, industry dynamics, and market demand. Detailed definitions and quantifications are provided for each new variable to facilitate comprehensive data processing and analysis. For project management quality, consideration is given to time management accuracy and communication frequency and quality among team members. Historical success rate is determined by reviewing past project records and outcomes. Industry dynamics are assessed by consulting the latest scientific literature and patent information. Market demand is gauged through market research and user demand surveys. The introduction of these variables enriches the understanding of RA in SRPs and opens up avenues for further research exploration.

At the same time, the collected data are integrated and coded in order to apply the Naive Bayes algorithm and MLR analysis. For cases involving qualitative data, this paper uses appropriate coding methods to convert them into quantitative data for processing in the model. For example, for the qualitative feature of team member experience, numerical values are used to represent the different experience levels: 0 representing beginners, 1 representing intermediate, and 2 representing advanced. Table 1 presents a specific sample of the processed, structured dataset; the values in the table represent the specific characteristics of each project.
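The coding step above can be sketched as follows. The field names and sample records are invented for this illustration; only the 0/1/2 experience coding comes from the text.

```python
# Map the qualitative team-experience levels onto the ordinal codes
# described in the text (0 = beginner, 1 = intermediate, 2 = advanced).
EXPERIENCE_CODES = {"beginner": 0, "intermediate": 1, "advanced": 2}

# Hypothetical raw project records before coding.
raw_projects = [
    {"name": "P1", "budget_musd": 1.2, "experience": "advanced"},
    {"name": "P2", "budget_musd": 0.4, "experience": "beginner"},
    {"name": "P3", "budget_musd": 0.8, "experience": "intermediate"},
]

# Replace each qualitative label with its numeric code.
encoded = [{**p, "experience": EXPERIENCE_CODES[p["experience"]]}
           for p in raw_projects]
print([p["experience"] for p in encoded])  # → [2, 0, 1]
```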

Establishment of naive Bayesian model

The Naive Bayesian algorithm, a probabilistic and statistical classification method renowned for its effectiveness in analyzing and predicting multi-dimensional data, is employed in this paper to conduct the RA for SRPs. The application of the Naive Bayesian algorithm to RA for SRPs aims to discern the influence of various factors on the outcomes of RA. The Naive Bayesian algorithm, depicted in Fig.  1 , operates on the principles of Bayesian theorem, utilizing posterior probability calculations for classification tasks. The fundamental concept of this algorithm hinges on the assumption of independence among different features, embodying the “naivety” hypothesis. In the context of RA for SRPs, the Naive Bayesian algorithm is instrumental in estimating the probability distribution of diverse factors affecting the RA results, thereby enhancing the precision of risk estimates. In the Naive Bayesian model, the initial step involves the computation of posterior probabilities for each factor, considering the given RA result conditions. Subsequently, the category with the highest posterior probability is selected as the predictive outcome.

Figure 1. Naive Bayesian algorithm process.

In Fig.  1 , the data collection process encompasses vital project details such as project scale, budget investment, team member experience, project cycle, technical difficulty, and RA results. This meticulous collection ensures the integrity and precision of the dataset. Subsequently, the gathered data undergoes integration and encoding to convert qualitative data into quantitative form, facilitating model processing and analysis. Tailored to specific requirements, relevant features are chosen for model construction, accompanied by essential preprocessing steps like standardization and normalization. The dataset is then partitioned into training and testing sets, with the model trained on the former and its performance verified on the latter. Leveraging the training data, a Naive Bayesian model is developed to estimate probability distribution parameters for various features across distinct categories. Ultimately, the trained model is employed to predict new project features, yielding RA results.

Naive Bayesian models, in this context, are deployed to forecast diverse project risk levels. Let X symbolize the feature vector, encompassing project scale, budget investment, team member experience, project cycle, and technical difficulty. The objective is to predict the project’s risk level, denoted as Y. Y assumes discrete values representing distinct risk levels. Applying the Bayesian theorem, the posterior probability P(Y|X) is computed, signifying the probability distribution of projects falling into different risk levels given the feature vector X. The fundamental equation governing the Naive Bayesian model is expressed as:

\(P(Y|X) = \frac{P(X|Y)\,P(Y)}{P(X)}\)  (1)

In Eq. (1), P(Y|X) represents the posterior probability, denoting the likelihood of the project belonging to a specific risk level. P(X|Y) signifies the class conditional probability, portraying the likelihood of the feature vector X occurring under known risk level conditions. P(Y) is the prior probability, reflecting the antecedent likelihood of the project pertaining to a particular risk level. P(X) acts as the evidence factor, encapsulating the likelihood of the feature vector X occurring.
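A toy numeric walk-through of Eq. (1) makes the roles of these four terms concrete. The probabilities below are invented for the example: two risk levels and one observed feature configuration X.

```python
# Invented prior P(Y) and class-conditional P(X|Y) for two risk levels.
p_y = {"high": 0.3, "low": 0.7}          # prior P(Y)
p_x_given_y = {"high": 0.6, "low": 0.1}  # class-conditional P(X|Y)

# Evidence P(X) via total probability, then the posterior P(Y|X).
p_x = sum(p_x_given_y[y] * p_y[y] for y in p_y)
posterior = {y: p_x_given_y[y] * p_y[y] / p_x for y in p_y}
print(posterior)  # → high ≈ 0.72, low ≈ 0.28
```

Even though the prior favors "low", the much higher likelihood of X under "high" flips the posterior, which is the mechanism the model uses to classify projects into risk levels.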

The Naive Bayes, serving as the most elementary Bayesian network classifier, operates under the assumption of attribute independence given the class label c , as expressed in Eq. ( 2 ):
\(P(x \mid c) = \prod_{i=1}^{d} P(x_{i} \mid c)\)  (2)
The classification decision formula for Naive Bayes is articulated in Eq. (3):
\(h_{nb}(x) = \mathop{\arg\max}\limits_{c} P(c) \prod_{i=1}^{d} P(x_{i} \mid c)\)  (3)
The Naive Bayes model, rooted in the assumption of conditional independence among attributes, often deviates from reality. To address this limitation, the Tree-Augmented Naive Bayes (TANB) model relaxes the independence assumption by incorporating a first-order dependency maximum-weight spanning tree. TANB introduces a tree structure that models relationships between features more comprehensively, easing the constraints of the independence assumption and concurrently mitigating issues associated with multicollinearity. This extension bolsters its efficacy in handling intricate real-world data. TANB employs the conditional mutual information \(I(X_{i} ;X_{j} |C)\) to gauge the dependency between attributes \(X_{i}\) and \(X_{j}\), thereby constructing the maximum weighted spanning tree. In TANB, any attribute variable \(X_{i}\) is permitted to have at most one other attribute variable as its parent node in addition to the class, i.e. \(\left| {Pa\left( {X_{i} } \right)} \right| \le 2\). The joint probability \(P_{con} \left( {x,c} \right)\) is factorized as in Eq. (4):
\(P_{con}(x, c) = P(c)\,P(x_{r} \mid c) \prod_{i \ne r} P(x_{i} \mid x_{pa(i)}, c)\)  (4)
In Eq. (4), \(x_{r}\) refers to the root node, which can be expressed as Eq. (5):
\(P(x_{r} \mid Pa(x_{r})) = P(x_{r} \mid c)\)  (5)
TANB classification decision equation is presented below:
\(h_{tanb}(x) = \mathop{\arg\max}\limits_{c} P(c)\,P(x_{r} \mid c) \prod_{i \ne r} P(x_{i} \mid x_{pa(i)}, c)\)  (6)
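The TAN structure-learning step described above can be sketched as follows: estimate \(I(X_i;X_j|C)\) empirically from discrete data, then grow the maximum-weight spanning tree with Prim's algorithm. The function names and the synthetic data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cond_mutual_info(xi, xj, c):
    """Empirical conditional mutual information I(Xi; Xj | C) for discrete arrays."""
    mi = 0.0
    for cv in np.unique(c):
        m = (c == cv)
        p_c = m.mean()
        for a in np.unique(xi[m]):
            p_a = np.mean(xi[m] == a)
            for b in np.unique(xj[m]):
                p_b = np.mean(xj[m] == b)
                p_ab = np.mean((xi[m] == a) & (xj[m] == b))
                if p_ab > 0:
                    mi += p_c * p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def tan_parents(X, c):
    """Prim's algorithm on the complete attribute graph weighted by I(Xi;Xj|C);
    returns each attribute's attribute-parent (None for the root, attribute 0)."""
    d = X.shape[1]
    w = {(i, j): cond_mutual_info(X[:, i], X[:, j], c)
         for i in range(d) for j in range(i + 1, d)}
    parent = {0: None}
    while len(parent) < d:
        # Pick the heaviest edge connecting the tree to a new attribute.
        _, u, v = max((w[min(u, v), max(u, v)], u, v)
                      for u in parent for v in range(d) if v not in parent)
        parent[v] = u
    return parent

# Synthetic check: X1 closely tracks X0, while X2 is independent noise.
rng = np.random.default_rng(0)
c = rng.integers(0, 2, 400)
x0 = rng.integers(0, 2, 400)
x1 = np.where(rng.random(400) < 0.1, 1 - x0, x0)  # copies x0 90% of the time
x2 = rng.integers(0, 2, 400)
parents = tan_parents(np.column_stack([x0, x1, x2]), c)
print(parents)  # X1's parent should be X0
```

The spanning tree correctly attaches the strongly dependent attribute pair, while the independent attribute is linked by a near-zero-weight edge.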
In the RA of SRPs, normal distribution parameters, namely the mean (μ) and standard deviation (σ), are estimated for each characteristic dimension (project scale, budget investment, team member experience, project cycle, and technical difficulty). This estimation allows the calculation of posterior probabilities for projects belonging to different risk levels under given feature vector conditions. For each feature dimension \({X}_{i}\), the mean \({\mu }_{i,j}\) and standard deviation \({\sigma }_{i,j}\) under each risk level are computed, where i represents the feature dimension, and j denotes the risk level. Parameter estimation employs the maximum likelihood method, and the specific calculations are as follows.
\(\mu_{i,j} = \frac{1}{N_{j}} \sum_{k=1}^{N_{j}} x_{i,k}\)  (7)

\(\sigma_{i,j} = \sqrt{\frac{1}{N_{j}} \sum_{k=1}^{N_{j}} \left( x_{i,k} - \mu_{i,j} \right)^{2}}\)  (8)
In Eqs. (7) and (8), \({N}_{j}\) represents the number of projects belonging to risk level j. \({x}_{i,k}\) denotes the value of the k-th project in feature dimension i. Finally, under a given feature vector, the posterior probability of a project belonging to risk level j is calculated as Eq. (9).
\(P(Y=j \mid X) = \frac{1}{Z}\,P(Y=j) \prod_{i=1}^{d} P(X_{i} \mid Y=j)\)  (9)
In Eq. (9), d represents the number of feature dimensions, and Z is the normalization factor. \(P(Y=j)\) represents the prior probability of category j. \(P({X}_{i}\mid Y=j)\) represents the normal probability density function of feature dimension i under category j. The risk level of a project can be predicted by calculating the posterior probabilities of the different risk levels, achieving RA for SRPs.
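The parameter estimation and posterior computation of Eqs. (7)–(9) can be sketched as follows; the two-class synthetic data and the small variance floor are illustrative assumptions.

```python
import numpy as np

def fit_params(X, y):
    """Eqs. (7)-(8): per-class maximum-likelihood mean and standard deviation,
    plus the class priors P(Y=j)."""
    params = {}
    for j in np.unique(y):
        Xj = X[y == j]
        params[j] = {"prior": len(Xj) / len(X),
                     "mu": Xj.mean(axis=0),
                     "sigma": Xj.std(axis=0) + 1e-9}  # guard against zero variance
    return params

def posterior(params, x):
    """Eq. (9): posterior over risk levels for feature vector x (Z normalizes)."""
    scores = {}
    for j, p in params.items():
        dens = np.exp(-0.5 * ((x - p["mu"]) / p["sigma"]) ** 2) \
               / (np.sqrt(2.0 * np.pi) * p["sigma"])
        scores[j] = p["prior"] * np.prod(dens)
    Z = sum(scores.values())
    return {j: s / Z for j, s in scores.items()}

# Two well-separated risk levels across five feature dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(3.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
post = posterior(fit_params(X, y), np.zeros(5))
print(post)
```

A query vector at the centre of level 0 receives almost all of the posterior mass, which is the behaviour the RA step relies on.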

This paper integrates the probability estimation of the Naive Bayes model with actual project risk response strategies, enabling a more flexible and targeted response to various risk scenarios. Such integration offers decision support to project managers, enhancing their ability to address potential challenges effectively and ultimately improving the overall success rate of the project. This underscores the notion that risk management is not solely about problem prevention but stands as a pivotal factor contributing to project success.

MLR analysis

MLR analysis is used to validate the hypotheses and to explore in depth the impact of various factors on the RA of SRPs. Based on the previous research status, the following research hypotheses are proposed.

Hypothesis 1: Project scale, budget investment, and team member experience are positively related to RA results: as project scale, budget investment, and team member experience increase, the RA results also increase.

Hypothesis 2: There is a negative relationship between the project cycle and the RA results. Projects with shorter cycles may have higher RA results.

Hypothesis 3: There is a complex relationship between technical difficulty and RA results, which may be positive, negative, or bidirectional in some cases. Based on these hypotheses, an MLR model is established to analyze the impact of factors, such as project scale, budget investment, team member experience, project cycle, and technical difficulty, on RA results. The form of an MLR model is as follows.
\(Y = \beta_{0} + \beta_{1} X_{1} + \beta_{2} X_{2} + \beta_{3} X_{3} + \beta_{4} X_{4} + \beta_{5} X_{5} + \epsilon\)  (10)
In Eq. (10), Y represents the RA result (dependent variable). \({X}_{1}\) to \({X}_{5}\) represent the factors: project scale, budget investment, team member experience, project cycle, and technical difficulty (independent variables). \({\beta }_{0}\) to \({\beta }_{5}\) are the regression coefficients, which represent the impact of the various factors on the RA results. \(\epsilon\) represents a random error term. The model structure is shown in Fig. 2.

figure 2

Schematic diagram of an MLR model.

In Fig.  2 , the MLR model is employed to scrutinize the influence of various independent variables on the outcomes of RA. In this specific context, the independent variables encompass project size, budget investment, team member experience, project cycle, and technical difficulty, all presumed to impact the project’s RA results. Each independent variable is denoted as a node in the model, with arrows depicting the relationships between these factors. In an MLR model, the arrow direction signifies causality, illustrating the influence of an independent variable on the dependent variable.

When conducting MLR analysis, it is necessary to estimate the parameter \(\upbeta\) in the regression model. These parameters determine the relationship between the independent and dependent variables. Here, the Ordinary Least Squares (OLS) method is applied to estimate these parameters. The OLS method is a commonly used parameter estimation method aimed at finding parameter values that minimize the sum of squared residuals between model predictions and actual observations. The steps are as follows. Firstly, based on the general form of an MLR model, it is assumed that there is a linear relationship between the independent and dependent variables. It can be represented by a linear equation, which includes regression coefficients β and the independent variable X. For each observation value, the difference between its predicted and actual values is calculated, which is called the residual. Residual \({e}_{i}\) can be expressed as:
\(e_{i} = Y_{i} - \widehat{Y}_{i}\)  (11)
In Eq. ( 11 ), \({Y}_{i}\) is the actual observation value, and \({\widehat{Y}}_{i}\) is the value predicted by the model. The goal of the OLS method is to adjust the regression coefficients \(\upbeta\) to minimize the sum of squared residuals of all observations. This can be achieved by solving an optimization problem, and the objective function is the sum of squared residuals.
\(\mathop{\min}\limits_{\beta} \sum_{i=1}^{n} e_{i}^{2} = \mathop{\min}\limits_{\beta} \sum_{i=1}^{n} \left( Y_{i} - \widehat{Y}_{i} \right)^{2}\)  (12)
Then, the estimated value of the regression coefficient \(\upbeta\) that minimizes the sum of squared residuals can be obtained by taking the derivative of the objective function and making the derivative zero. The estimated values of the parameters can be obtained by solving this system of equations. The final estimated regression coefficient can be expressed as:
\(\widehat{\beta } = ({X}^{T}X{)}^{-1}{X}^{T}Y\)  (13)
In Eq. (13), X represents the independent variable matrix, Y represents the dependent variable vector, \(({X}^{T}X{)}^{-1}\) is the matrix inverse, and \(\widehat{\beta }\) is the parameter estimation vector.

Specifically, solving for the estimated value of the regression coefficient \(\upbeta\) requires matrix operations and statistical analysis. The collected project data are substituted into the model to calculate the residuals, and the steps of the OLS method then yield the parameter estimates. These estimates are used to establish an MLR model to predict RA results and further analyze the influence of the different factors.
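The OLS steps above can be sketched directly from the normal equations of Eq. (13); the coefficient values, feature distributions, and noise level below are illustrative, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
# Design matrix: intercept column plus X1..X5 (scale, budget, experience,
# cycle, difficulty) -- synthetic standard-normal features.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 5))])
beta_true = np.array([0.10, 0.31, 0.68, 0.51, -0.20, 0.15])  # illustrative
Y = X @ beta_true + rng.normal(scale=0.05, size=n)           # noise term e_i

# Eq. (13): beta_hat = (X^T X)^{-1} X^T Y; solve() avoids forming the inverse.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
residuals = Y - X @ beta_hat  # Eq. (11)
print(beta_hat)
```

In practice `np.linalg.lstsq` (or a statistics package such as statsmodels, which also reports the coefficient P-values used in the hypothesis tests) is the numerically preferable route to explicitly inverting \(X^{T}X\).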

The degree of influence of different factors on the RA results can be determined by analyzing the value of the regression coefficient β. A positive \(\upbeta\) value indicates that the factor has a positive impact on the RA results, while a negative \(\upbeta\) value indicates that the factor has a negative impact on the RA results. Additionally, hypothesis testing can determine whether each factor is significant in the RA results.

The TANB model proposed in this paper extends the traditional naive Bayes model by incorporating conditional dependencies between attributes to enhance the representation of feature interactions. While the traditional naive Bayes model assumes feature independence, real-world scenarios often involve interdependencies among features. To address this, the TANB model is introduced. The TANB model introduces a tree structure atop the naive Bayes model to more accurately model feature relationships, overcoming the limitation of assuming feature independence. Specifically, the TANB model constructs a maximum weight spanning tree to uncover conditional dependencies between features, thereby enabling the model to better capture feature interactions.

Assessment indicators

To comprehensively assess the efficacy of the proposed TANB model in the RA for SRPs, a self-constructed dataset serves as the data source for this experimental evaluation, as outlined in Table 1. The dataset is split into a training set (80%) and a test set (20%). The assessment indicators cover the accuracy, precision, recall rate, F1 value, and Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of the model. The definitions of each assessment indicator are as follows. Accuracy is the proportion of correctly predicted samples among all samples. Precision is the proportion of predicted-positive samples that are actually positive. The recall rate is the proportion of actual positive samples that are correctly predicted as positive. The F1 value is the harmonic mean of precision and recall, balancing the precision and comprehensiveness of the model. The ROC curve plots the True Positive Rate against the False Positive Rate under different thresholds, and the AUC, obtained by accumulating the area under the ROC curve, measures the classification performance of the model; a larger AUC value indicates better performance. The confusion matrix displays the prediction performance of the model across categories in terms of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN).

The performance of TANB in RA for SRPs can be comprehensively assessed to understand the advantages, disadvantages, and applicability of the model more comprehensively by calculating the above assessment indicators.

Results and discussion

Accuracy analysis of the naive Bayesian algorithm

On the dataset of this paper, Fig.  3 reveals the performance of TANB algorithm under different assessment indicators.

figure 3

Performance assessment of TANB algorithm on different projects.

From Fig. 3, the TANB algorithm performs well across the various projects, with accuracy ranging from 0.870 to 0.911. This means that the overall accuracy of the model in predicting project risks is quite high. The precision also maintains a high level across projects, ranging from 0.881 to 0.923, indicating that the model performs well in classifying high-risk categories. The recall rate ranges from 0.872 to 0.908, indicating that the model can effectively capture high-risk samples. Meanwhile, the AUC values for each project are relatively high, ranging from 0.905 to 0.931, which again underscores the effectiveness of the model in risk prediction. Across multiple assessment indicators, such as accuracy, precision, recall, F1 value, and AUC, the TANB algorithm shows good risk prediction performance on representative projects. The performance assessment results of the TANB algorithm under different feature dimensions are plotted in Figs. 4, 5, 6 and 7.

figure 4

Prediction accuracy of TANB algorithm on different budget investments.

figure 5

Prediction accuracy of TANB algorithm on different team experiences.

figure 6

Prediction accuracy of TANB algorithm at different risk levels.

figure 7

Prediction accuracy of TANB algorithm on different project scales.

From Figs.  4 , 5 , 6 and 7 , as the level of budget investment increases, the accuracy of most projects also shows an increasing trend. Especially in cases of high budget investment, the accuracy of the project is generally high. This may mean that a higher budget investment helps to reduce project risks, thereby improving the prediction accuracy of the TANB algorithm. It can be observed that team experience also affects the accuracy of the model. Projects with high team experience exhibit higher accuracy in TANB algorithms. This may indicate that experienced teams can better cope with project risks to improve the performance of the model. When budget investment and team experience are low, accuracy is relatively low. This may imply that budget investment and team experience can complement each other to affect the model performance.

There are certain differences in the accuracy of projects under different risk levels. Generally speaking, the accuracy of high-risk and medium-risk projects is relatively high, while the accuracy of low-risk projects is relatively low. This may be because high-risk and medium-risk projects require more accurate predictions, resulting in higher accuracy. Similarly, project scale also affects the performance of the model. Large-scale and medium-scale projects exhibit high accuracy in TANB algorithms, while small-scale projects have relatively low accuracy. This may be because the risks of large-scale and medium-scale projects are easier to identify and predict to promote the performance of the model. In high-risk and large-scale projects, accuracy is relatively high. This may indicate that the impact of project scale is more significant in specific risk scenarios.

Figure  8 further compares the performance of the TANB algorithm proposed here with other similar algorithms.

figure 8

Performance comparison of different algorithms in RA of SRPs.

As depicted in Fig.  8 , the TANB algorithm attains an accuracy and precision of 0.912 and 0.920, respectively, surpassing other algorithms. It excels in recall rate and F1 value, registering 0.905 and 0.915, respectively, outperforming alternative algorithms. These findings underscore the proficiency of the TANB algorithm in comprehensively identifying high-risk projects while sustaining high classification accuracy. Moreover, the algorithm achieves an AUC of 0.930, indicative of its exceptional predictive prowess in sample classification. Thus, the TANB algorithm exhibits notable potential for application, particularly in scenarios demanding the recognition and comprehensiveness requisite for high-risk project identification. The evaluation results of the TANB model in predicting project risk levels are presented in Table 2 :

Table 2 demonstrates that the TANB model surpasses the traditional Naive Bayes model across multiple evaluation metrics, including accuracy, precision, and recall. This signifies that, by accounting for feature interdependence, the TANB model can more precisely forecast project risk levels. Furthermore, leveraging the model’s predictive outcomes, project managers can devise tailored risk mitigation strategies corresponding to various risk scenarios. For example, in high-risk projects, more assertive measures can be implemented to address risks, while in low-risk projects, risks can be managed more cautiously. This targeted risk management approach contributes to enhancing project success rates, thereby ensuring the seamless advancement of SRPs.

The exceptional performance of the TANB model in specific scenarios derives from its distinctive characteristics and capabilities. Firstly, compared to traditional Naive Bayes models, the TANB model can better capture the dependencies between attributes. In project RA, project features often exhibit complex interactions. The TANB model introduces first-order dependencies between attributes, allowing features to influence each other, thereby more accurately reflecting real-world situations and improving risk prediction precision. Secondly, the TANB model demonstrates strong adaptability and generalization ability in handling multidimensional data. SRPs typically involve data from multiple dimensions, such as project scale, budget investment, and team experience. The TANB model effectively processes these multidimensional data, extracts key information, and achieves accurate RA for projects. Furthermore, the paper explores the potential of using hybrid models or ensemble learning methods to further enhance model performance. By combining other machine learning algorithms, such as random forests and support vector regressors with sigmoid kernel, through ensemble learning, the shortcomings of individual models in specific scenarios can be overcome, thus improving the accuracy and robustness of RA. For example, in the study, we compared the performance of the TANB model with other algorithms in RA, as shown in Table 3 .

Table 3 illustrates that the TANB model surpasses other models in terms of accuracy, precision, recall, F1 value, and AUC value, further confirming its superiority and practicality in RA. Therefore, the TANB model holds significant application potential in SRPs, offering effective decision support for project managers to better evaluate and manage project risks, thereby enhancing the likelihood of project success.
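The ensemble idea discussed above can be sketched with scikit-learn's soft-voting combiner; the synthetic data, the chosen estimators, and all settings are illustrative assumptions rather than the paper's configuration.

```python
# Soft-voting ensemble over Gaussian Naive Bayes, a random forest, and a
# sigmoid-kernel SVM, trained on synthetic "project-like" data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=5, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(kernel="sigmoid", probability=True, random_state=0))],
    voting="soft")  # averages predicted probabilities across the members
ensemble.fit(X_tr, y_tr)
print(ensemble.score(X_te, y_te))
```

Soft voting averages the members' predicted probabilities, so a member that is weak on a given region of the feature space can be outvoted by the others, which is the robustness argument made above.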

Analysis of the degree of influence of different factors

Table 4 analyzes the degree of influence and interaction of different factors.

In Table 4, the regression analysis results reveal that budget investment and team experience exert a significantly positive impact on RA outcomes: the regression coefficient for budget investment is 0.68 and that for team experience is 0.51, both significant (P < 0.05). This suggests that increasing the budget allocation and assembling a team with extensive experience can enhance project RA outcomes. The impact of project scale is relatively small, with a coefficient of 0.31, though its P-value is also well below 0.05. The interaction terms are likewise significant: the P value of the interaction between budget investment and project scale is 0.002, the P value of the interaction between team experience and project scale is 0.003, and the P value of the interaction among budget investment, team experience, and project scale is 0.005. There are thus complex relationships and interactions among the factors: budget investment and team experience significantly affect the RA results, whereas project scale plays a comparatively minor role. Project managers should comprehensively consider these interactive effects when making decisions to assess the risks of SRPs more accurately.

The interaction between team experience and budget investment

The results of the interaction between team experience and budget investment are demonstrated in Table 5 .

From Table 5, the degree of interaction impact can be obtained. Budget investment and team experience, along with the interaction between project scale and technical difficulty, are critical factors in risk mitigation. Particularly in scenarios characterized by large project scales and high technical difficulty, adequate budget allocation and a skilled team can substantially reduce project risks. Under conditions of high team experience and sufficient budget investment, the average RA outcome is 0.895 with a standard deviation of 0.012, significantly lower than the assessment outcomes under other conditions. This highlights the synergistic effect of budget investment and team experience in mitigating risks in complex project scenarios.

The interaction between team experience and budget investment has a significant impact on RA results. Under high team experience, the differences among budget investment levels are not significant, possibly because a high level of team experience can partly compensate for the risks brought by an insufficient budget. Under medium and low team experience, the differences among budget investment levels are significant, possibly because the lack of team experience makes budget investment play a more important role in RA. Team experience and budget investment therefore interact in the RA of SRPs and need to be considered jointly in project decision-making.
An exhaustive consideration of these factors and their interplay is imperative for effectively assessing the risks inherent in SRPs. Merely focusing on budget allocation or team expertise may not yield a thorough risk evaluation. Project managers must scrutinize the project’s scale, technical complexity, and team proficiency, integrating these aspects with budget allocation and team experience. This holistic approach fosters a more precise RA and facilitates the development of tailored risk management strategies, thereby augmenting the project’s likelihood of success. In conclusion, acknowledging the synergy between budget allocation and team expertise, in conjunction with other pertinent factors, is pivotal in the RA of SRPs. Project managers should adopt a comprehensive outlook to ensure sound decision-making and successful project execution.

Risk mitigation strategies

To enhance the discourse on project risk management in this paper, a dedicated section on risk mitigation strategies has been included. Leveraging the insights gleaned from the predictive model regarding identified risk factors and their corresponding risk levels, targeted risk mitigation measures are proposed.

Primarily, given the significant influence of budget investment and team experience on project RA outcomes, project managers are advised to prioritize these factors and devise pertinent risk management strategies.

For risks stemming from budget constraints, the adoption of flexible budget allocation strategies is advocated. This may involve optimizing project expenditures, establishing financial reserves, or seeking additional funding avenues.

In addressing risks attributed to inadequate team experience, measures such as enhanced training initiatives, engagement of seasoned project advisors, or collaboration with experienced teams can be employed to mitigate the shortfall in expertise.

Furthermore, recognizing the impact of project scale, duration, and technical complexity on RA outcomes, project managers are advised to holistically consider these factors during project planning. This entails adjusting project scale as necessary, establishing realistic project timelines, and conducting thorough assessments of technical challenges prior to project commencement.

These risk mitigation strategies aim to equip project managers with a comprehensive toolkit for effectively identifying, assessing, and mitigating risks inherent in SRPs.

This paper delves into the efficacy of the TANB algorithm in project risk prediction. The findings indicate that the algorithm demonstrates commendable performance across diverse projects, boasting high precision, recall rates, and AUC values, thereby outperforming analogous algorithms. This aligns with the perspectives espoused by Asadullah et al. 37 . Particular emphasis was placed on assessing the impact of variables such as budget investment levels, team experience, and project size on algorithmic performance. Notably, heightened budget investment and extensive team experience positively influenced the results, with project size exerting a comparatively minor impact. Regression analysis elucidates the magnitude and interplay of these factors, underscoring the predominant influence of budget investment and team experience on RA outcomes, whereas project size assumes a relatively marginal role. This underscores the imperative for decision-makers in projects to meticulously consider the interrelationships between these factors for a more precise assessment of project risks, echoing the sentiments expressed by Testorelli et al. 38 .

In sum, this paper furnishes a holistic comprehension of the Naive Bayes algorithm’s application in project risk prediction, offering robust guidance for practical project management. The paper’s tangible applications are chiefly concentrated in the realm of RA and management for SRPs. Such insights empower managers in SRPs to navigate risks with scientific acumen, thereby enhancing project success rates and performance. The paper advocates several strategic measures for SRPs management: prioritizing resource adjustments and team training to elevate the professional skill set of team members in coping with the impact of team experience on risks; implementing project scale management strategies to mitigate potential risks by detailed project stage division and stringent project planning; addressing technical difficulty as a pivotal risk factor through assessment and solution development strategies; incorporating project cycle adjustment and flexibility management to accommodate fluctuations and mitigate associated risks; and ensuring the integration of data quality management strategies to bolster data reliability and enhance model accuracy. These targeted risk responses aim to improve the likelihood of project success and ensure the seamless realization of project objectives.

Achievements

In this paper, the application of Naive Bayesian algorithm in RA of SRPs is deeply explored, and the influence of various factors on RA results and their relationship is comprehensively investigated. The research results fully prove the good accuracy and applicability of Naive Bayesian algorithm in RA of science and technology projects. Through probability estimation, the risk level of the project can be estimated more accurately, which provides a new decision support tool for the project manager. It is found that budget input and team experience are the most significant factors affecting the RA results, and their regression coefficients are 0.68 and 0.51 respectively. However, the influence of project scale on the RA results is relatively small, and its regression coefficient is 0.31. Especially in the case of low team experience, the budget input has a more significant impact on the RA results. However, it should also be admitted that there are some limitations in the paper. First, the case data used is limited and the sample size is relatively small, which may affect the generalization ability of the research results. Second, the factors concerned may not be comprehensive, and other factors that may affect RA, such as market changes and policies and regulations, are not considered.

The paper makes several key contributions. Firstly, it applies the Naive Bayes algorithm to assess the risks associated with SRPs, proposing the TANB and validating its effectiveness empirically. The introduction of the TANB model broadens the application scope of the Naive Bayes algorithm in scientific research risk management, offering novel methodologies for project RA. Secondly, the study delves into the impact of various factors on RA for SRPs through MLR analysis, highlighting the significance of budget investment and team experience. The results underscore the positive influence of budget investment and team experience on RA outcomes, offering valuable insights for project decision-making. Additionally, the paper examines the interaction between team experience and budget investment, revealing a nuanced relationship between the two in RA. This finding underscores the importance of comprehensively considering factors such as team experience and budget investment in project decision-making to achieve more accurate RA. In summary, the paper provides crucial theoretical foundations and empirical analyses for SRPs risk management by investigating RA and its influencing factors in depth. The research findings offer valuable guidance for project decision-making and risk management, bolstering efforts to enhance the success rate and efficiency of SRPs.

This paper distinguishes itself from existing research by conducting an in-depth analysis of the intricate interactions among various factors, offering more nuanced and specific RA outcomes. The primary objective extends beyond problem exploration, aiming to broaden the scope of scientific evaluation and research practice through the application of statistical language. This research goal endows the paper with considerable significance in the realm of science and technology project management. In comparison to traditional methods, this paper scrutinizes project risk with greater granularity, furnishing project managers with more actionable suggestions. The empirical analysis validates the effectiveness of the proposed method, introducing a fresh perspective for decision-making in science and technology projects. Future research endeavors will involve expanding the sample size and accumulating a more extensive dataset of SRPs to enhance the stability and generalizability of results. Furthermore, additional factors such as market demand and technological changes will be incorporated to comprehensively analyze elements influencing the risks of SRPs. Through these endeavors, the aim is to provide more precise and comprehensive decision support to the field of science and technology project management, propelling both research and practice in this domain to new heights.

Limitations and prospects

This paper, while employing advanced methodologies like TANB models, acknowledges inherent limitations that warrant consideration. Firstly, like any model, TANB has its constraints, and predictions in specific scenarios may be subject to these limitations. Subsequent research endeavors should explore alternative advanced machine learning and statistical models to enhance the precision and applicability of RA. Secondly, the focus of this paper predominantly centers on the RA for SRPs. Given the unique characteristics and risk factors prevalent in projects across diverse fields and industries, the generalizability of the paper results may be limited. Future research can broaden the scope of applicability by validating the model across various fields and industries. The robustness and generalizability of the model can be further ascertained through the incorporation of extensive real project data in subsequent research. Furthermore, future studies can delve into additional data preprocessing and feature engineering methods to optimize model performance. In practical applications, the integration of research outcomes into SRPs management systems could provide more intuitive and practical support for project decision-making. These avenues represent valuable directions for refining and expanding the contributions of this research in subsequent studies.

Data availability

All data generated or analysed during this study are included in this published article [and its Supplementary Information files].


Author information

Authors and Affiliations

Institute of Policy Studies, Lingnan University, Tuen Mun, 999077, Hong Kong, China

Xuying Dong & Wanlin Qiu


Contributions

Xuying Dong and Wanlin Qiu jointly developed the research questions and the Naive Bayes-based risk assessment methodology at the outset of the project. They were responsible for data collection and preparation, ensuring the quality and accuracy of the data used in the study, and together built the Naive Bayes model, verified its effectiveness and performance, and applied it successfully in the research. In the experimental and data analysis phases, their collaboration was key to validating the model and accurately assessing the risks of the research project. Both authors drafted the paper, including the detailed descriptions of methods, experiments and results, and participated actively in the review and revision process, ensuring the accuracy and completeness of the findings.

Corresponding author

Correspondence to Wanlin Qiu .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Dong, X., Qiu, W. A case study on the relationship between risk assessment of scientific research projects and related factors under the Naive Bayesian algorithm. Sci Rep 14, 8244 (2024). https://doi.org/10.1038/s41598-024-58341-y


Received: 30 October 2023

Accepted: 27 March 2024

Published: 08 April 2024

DOI: https://doi.org/10.1038/s41598-024-58341-y


  • Naive Bayesian algorithm
  • Scientific research projects
  • Risk assessment
  • Factor analysis
  • Probability estimation
  • Decision support
  • Data-driven decision-making


ISACA Journal, volume 2, 2021

Risk Assessment and Analysis Methods: Qualitative and Quantitative

Risk Assessment

A risk assessment determines the likelihood, consequences and tolerances of possible incidents. “Risk assessment is an inherent part of a broader risk management strategy to introduce control measures to eliminate or reduce any potential risk-related consequences.” 1 The main purpose of risk assessment is to avoid negative consequences related to risk or to evaluate possible opportunities.

It is the combined effort of:

  • “…[I]dentifying and analyzing possible future events that could adversely affect individuals, assets, processes and/or the environment (i.e., risk analysis)”
  • “…[M]aking judgments about managing and tolerating risk on the basis of a risk analysis while considering influencing factors (i.e., risk evaluation)” 2

Relationships between assets, processes, threats, vulnerabilities and other factors are analyzed in the risk assessment approach. There are many methods available, but quantitative and qualitative analysis are the most widely known and used classifications. In general, the methodology chosen at the beginning of the decision-making process should be able to produce a quantitative explanation about the impact of the risk and security issues along with the identification of risk and formation of a risk register. There should also be qualitative statements that explain the importance and suitability of controls and security measures to minimize these risk areas. 3

In general, the risk management life cycle includes seven main processes that support and complement each other ( figure 1 ):

  • Determine the risk context and scope, then design the risk management strategy.
  • Choose the responsible and related partners, identify the risk and prepare the risk registers.
  • Perform qualitative risk analysis and select the risk that needs detailed analysis.
  • Perform quantitative risk analysis on the selected risk.
  • Plan the responses and determine controls for the risk that falls outside the risk appetite.
  • Implement risk responses and chosen controls.
  • Monitor risk improvements and residual risk.

Figure 1

Qualitative and Quantitative Risk Analysis Techniques

Different techniques can be used to evaluate and prioritize risk. Depending on how well the risk is known, and if it can be evaluated and prioritized in a timely manner, it may be possible to reduce the possible negative effects or increase the possible positive effects and take advantage of the opportunities. 4 “Quantitative risk analysis tries to assign objective numerical or measurable values” to the components of the risk assessment and to the assessment of potential loss. Conversely, “a qualitative risk analysis is scenario-based.” 5

Qualitative Risk

The purpose of qualitative risk analysis is to identify the risk that needs detailed analysis and the necessary controls and actions based on the risk’s effect and impact on objectives. 6 In qualitative risk analysis, two simple methods are well known and easily applied to risk: 7

  • Keep It Super Simple (KISS) —This method can be used on narrow-framed or small projects where unnecessary complexity should be avoided and the assessment can be made easily by teams that lack maturity in assessing risk. This one-dimensional technique involves rating risk on a basic scale, such as very high/high/medium/low/very low.
  • Probability/Impact —This method can be used on larger, more complex issues with multilateral teams that have experience with risk assessments. This two-dimensional technique is used to rate probability and impact. Probability is the likelihood that a risk will occur. The impact is the consequence or effect of the risk, normally associated with impact to schedule, cost, scope and quality. Rate probability and impact using a scale such as 1 to 10 or 1 to 5, where the risk score equals the probability multiplied by the impact.
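The probability/impact technique above reduces to a product of two ratings. A minimal sketch in Python (the 1-to-5 scales, the sample register entries and the review threshold are illustrative assumptions, not values from the article):

```python
# Probability/impact scoring: risk score = probability x impact.
# Scales, register entries and threshold below are assumed for illustration.

def risk_score(probability: int, impact: int) -> int:
    """Rate probability and impact on a 1-5 scale; the score is their product."""
    for value in (probability, impact):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be on the 1-5 scale")
    return probability * impact

# A small hypothetical risk register.
register = [
    {"risk": "key supplier delay", "p": 4, "i": 3},
    {"risk": "scope creep", "p": 3, "i": 2},
    {"risk": "data breach", "p": 2, "i": 5},
]

for entry in register:
    entry["score"] = risk_score(entry["p"], entry["i"])

# Prioritize: highest score first; scores at or above the assumed
# threshold are flagged for more detailed (quantitative) analysis.
for entry in sorted(register, key=lambda e: e["score"], reverse=True):
    action = "detailed analysis" if entry["score"] >= 10 else "monitor"
    print(f"{entry['risk']}: {entry['score']} -> {action}")
```

The same scoring function works unchanged whether the scale is 1 to 5 or 1 to 10; only the validation bounds and threshold would move.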

Qualitative risk analysis can generally be performed on all business risk. The qualitative approach is used to quickly identify risk areas related to normal business functions. The evaluation can assess whether peoples’ concerns about their jobs are related to these risk areas. Then, the quantitative approach assists on relevant risk scenarios, to offer more detailed information for decision-making. 8 Before making critical decisions or completing complex tasks, quantitative risk analysis provides more objective information and accurate data than qualitative analysis. Although quantitative analysis is more objective, it should be noted that there is still an estimate or inference. Wise risk managers consider other factors in the decision-making process. 9

Although a qualitative risk analysis is the first choice in terms of ease of application, a quantitative risk analysis may be necessary. After qualitative analysis, quantitative analysis can also be applied. However, if qualitative analysis results are sufficient, there is no need to do a quantitative analysis of each risk.

Quantitative Risk

A quantitative risk analysis is a further analysis of high-priority and/or high-impact risk, where a numerical or quantitative rating is given to develop a probabilistic assessment of business-related issues. In addition, quantitative risk analysis for all projects or issues/processes operated with a project management approach has a more limited use, depending on the type of project, project risk and the availability of data to be used for quantitative analysis. 10

The purpose of a quantitative risk analysis is to translate the probability and impact of a risk into a measurable quantity. 11 A quantitative analysis: 12

  • “Quantifies the possible outcomes for the business issues and assesses the probability of achieving specific business objectives”
  • “Provides a quantitative approach to making decisions when there is uncertainty”
  • “Creates realistic and achievable cost, schedule or scope targets”

Consider using quantitative risk analysis for: 13

  • “Business situations that require schedule and budget control planning”
  • “Large, complex issues/projects that require go/no go decisions”
  • “Business processes or issues where upper management wants more detail about the probability of completing on schedule and within budget”

The advantages of using quantitative risk analysis include: 14

  • Objectivity in the assessment
  • Powerful selling tool to management
  • Direct projection of cost/benefit
  • Flexibility to meet the needs of specific situations
  • Flexibility to fit the needs of specific industries
  • Much less prone to arouse disagreements during management review
  • Analysis is often derived from some irrefutable facts


To conduct a quantitative risk analysis on a business process or project, high-quality data, a definite business plan, a well-developed project model and a prioritized list of business/project risk are necessary. Quantitative risk assessment is based on realistic and measurable data to calculate the impact values that the risk will create with the probability of occurrence. This assessment focuses on mathematical and statistical bases and can “express the risk values in monetary terms, which makes its results useful outside the context of the assessment (loss of money is understandable for any business unit).” 15 The most common problem in quantitative assessment is that there is not enough data to be analyzed. There also can be challenges in expressing the subject of the evaluation with numerical values, or the number of relevant variables may be too high. This makes risk analysis technically difficult.

There are several tools and techniques that can be used in quantitative risk analysis. Those tools and techniques include: 16

  • Heuristic methods —Experience-based or expert- based techniques to estimate contingency
  • Three-point estimate —A technique that uses the optimistic, most likely and pessimistic values to determine the best estimate
  • Decision tree analysis —A diagram that shows the implications of choosing various alternatives
  • Expected monetary value (EMV) —A method used to establish the contingency reserves for a project or business process budget and schedule
  • Monte Carlo analysis —A technique that uses optimistic, most likely and pessimistic estimates to determine the business cost and project completion dates
  • Sensitivity analysis —A technique used to determine the risk that has the greatest impact on a project or business process
  • Fault tree analysis (FTA) and failure modes and effects analysis (FMEA) —The analysis of a structured diagram that identifies elements that can cause system failure
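Two of the simpler techniques in this list lend themselves to a short sketch. Assuming the standard PERT-style weighting for the three-point estimate and a basic expected-monetary-value calculation (all input numbers are invented for illustration):

```python
def three_point_estimate(optimistic: float, most_likely: float,
                         pessimistic: float) -> float:
    """PERT-style weighted average: (O + 4*M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def expected_monetary_value(probability: float, impact: float) -> float:
    """EMV of a single risk: probability of occurrence times monetary impact."""
    return probability * impact

# Illustrative schedule estimate (days): optimistic 10, most likely 14,
# pessimistic 24.
duration = three_point_estimate(10, 14, 24)

# Illustrative contingency reserve (USD): sum of the EMVs of two risks.
reserve = sum(expected_monetary_value(p, i)
              for p, i in [(0.3, 50_000), (0.1, 200_000)])

print(duration, reserve)
```

The EMV sum is one conventional way to set a contingency reserve: each risk contributes its probability-weighted cost, so rare-but-expensive and frequent-but-cheap risks are comparable on one scale.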

There are also some basic (target, estimated or calculated) values used in quantitative risk assessment. Single loss expectancy (SLE) represents the money or value expected to be lost if the incident occurs one time, and the annual rate of occurrence (ARO) is how many times the incident is expected to occur in a one-year interval. The annual loss expectancy (ALE) is the money/value expected to be lost in one year, calculated by multiplying the SLE by the ARO, and it can be used to justify the cost of applying countermeasures to protect an asset or a process. 17 For quantitative risk assessment, this is the risk value. 18


By relying on factual and measurable data, the main benefits of quantitative risk assessment are the presentation of very precise results about risk value and the maximum investment that would make risk treatment worthwhile and profitable for the organization. For quantitative cost-benefit analysis, ALE is a calculation that helps an organization to determine the expected monetary loss for an asset or investment due to the related risk over a single year.

For example, calculating the ALE for a virtualization system investment includes the following:

  • Virtualization system hardware value: US$1 million (SLE for HW)
  • Virtualization system management software value: US$250,000 (SLE for SW)
  • Vendor statistics inform that a system catastrophic failure (due to software or hardware) occurs one time every 10 years (ARO = 1/10 = 0.1)
  • ALE for HW = 1M * 0.1 = US$100,000
  • ALE for SW = 250K * 0.1 = US$25,000

In this case, the organization has an annual risk of suffering a loss of US$100,000 for hardware or US$25,000 for software individually in the event of the loss of its virtualization system. Any implemented control (e.g., backup, disaster recovery, fault tolerance system) that costs less than these values would be profitable.
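The ALE arithmetic in this example reduces to two multiplications. A minimal sketch using the figures above:

```python
def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = single loss expectancy (SLE) x annual rate of occurrence (ARO)."""
    return sle * aro

ARO = 0.1  # one catastrophic failure every 10 years (1/10)

ale_hw = annual_loss_expectancy(1_000_000, ARO)  # hardware: US$100,000/year
ale_sw = annual_loss_expectancy(250_000, ARO)    # software: US$25,000/year

# A control (backup, disaster recovery, fault tolerance) is profitable
# if its annual cost is below the ALE it mitigates.
for name, ale in [("hardware", ale_hw), ("software", ale_sw)]:
    print(f"{name}: ALE = US${ale:,.0f}")
```

Any candidate control can be screened by comparing its annualized cost against these two values before a fuller cost/benefit analysis.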

Some risk assessment requires complicated parameters. More examples can be derived according to the following “step-by-step breakdown of the quantitative risk analysis”: 19

  • Conduct a risk assessment and vulnerability study to determine the risk factors.
  • Determine the exposure factor (EF), which is the percentage of asset loss caused by the identified threat.
  • Based on the risk factors determined in the value of tangible or intangible assets under risk, determine the SLE, which equals the asset value multiplied by the exposure factor.
  • Evaluate the historical background and business culture of the institution in terms of reporting security incidents and losses (adjustment factor).
  • Estimate the ARO for each risk factor.
  • Determine the countermeasures required to overcome each risk factor.
  • Add a ranking number from one to 10 for quantifying severity (with 10 being the most severe) as a size correction factor for the risk estimate obtained from company risk profile.
  • Determine the ALE for each risk factor. Note that the ARO for the ALE after countermeasure implementation may not always be equal to zero. ALE (corrected) equals ALE (table) times adjustment factor times size correction.
  • Calculate an appropriate cost/benefit analysis by finding the differences before and after the implementation of countermeasures for ALE.
  • Determine the return on investment (ROI) based on the cost/benefit analysis using internal rate of return (IRR).
  • Present a summary of the results to management for review.
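Several of these steps can be chained numerically. A hedged sketch of the EF → SLE → corrected-ALE → cost/benefit chain, following the ALE (corrected) formula in the list; every input value (asset value, exposure factor, AROs, adjustment and size-correction factors, countermeasure cost) is invented for illustration:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = asset value x exposure factor (EF, fraction of asset lost)."""
    return asset_value * exposure_factor

def corrected_ale(sle: float, aro: float, adjustment: float,
                  size_correction: float) -> float:
    """ALE (corrected) = ALE (table) x adjustment factor x size correction."""
    return sle * aro * adjustment * size_correction

# Illustrative inputs for one risk factor.
sle = single_loss_expectancy(asset_value=500_000, exposure_factor=0.2)

# ARO drops after the countermeasure but, as the list notes, need not be zero.
ale_before = corrected_ale(sle, aro=0.5, adjustment=1.2, size_correction=1.0)
ale_after = corrected_ale(sle, aro=0.1, adjustment=1.2, size_correction=1.0)

countermeasure_cost = 30_000
benefit = ale_before - ale_after             # annual loss avoided
net_benefit = benefit - countermeasure_cost  # input to the ROI/IRR step
print(benefit, net_benefit)
```

The final `net_benefit` (or the before/after ALE difference against cost over several years) is what feeds the ROI/IRR calculation and the summary presented to management.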

Using both approaches can improve process efficiency and help achieve desired security levels. In the risk assessment process, it is relatively easy to determine whether to use a quantitative or a qualitative approach. Qualitative risk assessment is quick to implement due to the lack of mathematical dependence and measurements and can be performed easily. Organizations also benefit from the employees who are experienced in asset/processes; however, they may also bring biases in determining probability and impact. Overall, combining qualitative and quantitative approaches with good assessment planning and appropriate modeling may be the best alternative for a risk assessment process ( figure 2 ). 20

Figure 2

Qualitative risk analysis is quick but subjective. On the other hand, quantitative risk analysis is optional and objective and has more detail, contingency reserves and go/no-go decisions, but it takes more time and is more complex. Quantitative data are difficult to collect, and quality data are prohibitively expensive. Although the effect of mathematical operations on quantitative data are reliable, the accuracy of the data is not guaranteed as a result of being numerical only. Data that are difficult to collect or whose accuracy is suspect can lead to inaccurate results in terms of value. In that case, business units cannot provide successful protection or may make false-risk treatment decisions and waste resources without specifying actions to reduce or eliminate risk. In the qualitative approach, subjectivity is considered part of the process and can provide more flexibility in interpretation than an assessment based on quantitative data. 21 For a quick and easy risk assessment, qualitative assessment is what 99 percent of organizations use. However, for critical security issues, it makes sense to invest time and money into quantitative risk assessment. 22 By adopting a combined approach, considering the information and time response needed, with data and knowledge available, it is possible to enhance the effectiveness and efficiency of the risk assessment process and conform to the organization’s requirements.

1 ISACA, CRISC Review Manual, 6th Edition, USA, 2015, https://store.isaca.org/s/store#/store/browse/detail/a2S4w000004Ko8ZEAS
2 Ibid.
3 Schmittling, R.; A. Munns; “Performing a Security Risk Assessment,” ISACA Journal, vol. 1, 2010, https://www.isaca.org/resources/isaca-journal/issues
4 Bansal; “Differentiating Quantitative Risk and Qualitative Risk Analysis,” iZenBridge, 12 February 2019, https://www.izenbridge.com/blog/differentiating-quantitative-risk-analysis-and-qualitative-risk-analysis/
5 Tan, D.; Quantitative Risk Analysis Step-By-Step, SANS Institute Information Security Reading Room, December 2020, https://www.sans.org/reading-room/whitepapers/auditing/quantitative-risk-analysis-step-by-step-849
6 Op cit Bansal
7 Hall, H.; “Evaluating Risks Using Qualitative Risk Analysis,” Project Risk Coach, https://projectriskcoach.com/evaluating-risks-using-qualitative-risk-analysis/
8 Leal, R.; “Qualitative vs. Quantitative Risk Assessments in Information Security: Differences and Similarities,” 27001 Academy, 6 March 2017, https://advisera.com/27001academy/blog/2017/03/06/qualitative-vs-quantitative-risk-assessments-in-information-security/
9 Op cit Hall
10 Goodrich, B.; “Qualitative Risk Analysis vs. Quantitative Risk Analysis,” PM Learning Solutions, https://www.pmlearningsolutions.com/blog/qualitative-risk-analysis-vs-quantitative-risk-analysis-pmp-concept-1
11 Meyer, W.; “Quantifying Risk: Measuring the Invisible,” PMI Global Congress 2015 EMEA, London, England, 10 October 2015, https://www.pmi.org/learning/library/quantitative-risk-assessment-methods-9929
12 Op cit Goodrich
13 Op cit Hall
14 Op cit Tan
15 Op cit Leal
16 Op cit Hall
17 Tierney, M.; “Quantitative Risk Analysis: Annual Loss Expectancy,” Netwrix Blog, 24 July 2020, https://blog.netwrix.com/2020/07/24/annual-loss-expectancy-and-quantitative-risk-analysis
18 Op cit Leal
19 Op cit Tan
20 Op cit Leal
21 ISACA, Conducting an IT Security Risk Assessment, USA, 2020, https://store.isaca.org/s/store#/store/browse/detail/a2S4w000004KoZeEAK
22 Op cit Leal

Volkan Evrin, CISA, CRISC, COBIT 2019 Foundation, CDPSE, CEHv9, ISO 27001-22301-20000 LA

Has more than 20 years of professional experience in information and technology (I&T) focus areas including information systems and security, governance, risk, privacy, compliance, and audit. He has held executive roles on the management of teams and the implementation of projects such as information systems, enterprise applications, free software, in-house software development, network architectures, vulnerability analysis and penetration testing, informatics law, Internet services, and web technologies. He is also a part-time instructor at Bilkent University in Turkey; an APMG Accredited Trainer for CISA, CRISC and COBIT 2019 Foundation; and a trainer for other I&T-related subjects. He can be reached at [email protected] .



A case study exploring field-level risk assessments as a leading safety indicator

Lead research behavioral scientist and research behavioral scientist, respectively, National Institute for Occupational Safety and Health, Pittsburgh, PA, USA

B.P. Connor

J. Vendetti

Manager, mining operations, Solvay Soda Ash & Derivatives North America, Green River, WY, USA

CSP, Mine production superintendent, Solvay Chemicals Inc., Green River, WY, USA

Health and safety indicators help mine sites predict the likelihood of an event, advance initiatives to control risks, and track progress. Although risk assessments usefully encourage individuals within mining companies to work together to identify such indicators, executing them comes with challenges. Specifically, varying or inaccurate perceptions of risk, in addition to trust and buy-in of a risk management system, contribute to inconsistent levels of participation in risk programs. This paper focuses on one trona mine’s experience in the development and implementation of a field-level risk assessment program to help its organization understand and manage risk to an acceptable level. Through a transformational process of ongoing leadership development, support and communication, Solvay Green River fostered a culture grounded in risk assessment, safety interactions and hazard correction. The application of consistent risk assessment tools was critical to create a participatory workforce that not only talks about safety but actively identifies factors that contribute to hazards and potential incidents. In this paper, reflecting on the mine’s previous process of risk-assessment implementation provides examples of likely barriers that sites may encounter when trying to document and manage risks, as well as a variety of mini case examples that showcase how the organization worked through these barriers to facilitate the identification of leading indicators to ultimately reduce incidents.

Introduction

Work-related health and safety incidents often account for lost days on the job, contributing to organizational/financial and personal/social burdens (Blumenstein et al., 2011; Pinto, Nunes and Ribeiro, 2011). Accompanying research demonstrates that risk and ambiguity around risk contribute to almost every decision that individuals make throughout the day (Golub, 1997; Suijs, 1999). In response, understanding individual attitudes toward risk has been linked to predicting health and safety behavior (Dohmen et al., 2011). Although an obvious need exists to identify more comprehensive methods to assess and mitigate potential hazards, some argue that risk management is not given adequate attention in occupational health and safety (Haslam et al., 2016). Additionally, research suggests that a current lack of knowledge, skills and motivation are primary barriers to worker participation in mitigating workplace risks (Dohmen et al., 2011; Golub, 1997; Haslam et al., 2016; Suijs, 1999). Therefore, enhancing knowledge and awareness around risk-based decisions, including individuals’ abilities to understand, measure and assign levels of risk to determine an appropriate response, is increasingly important in hazardous environments to predict and prevent incidents.

This paper focuses on one field-level risk assessment (FLRA) program, including a matrix that anyone can use to assess site-wide risks and common barriers to participating in such activities. We use a trona mine in Green River, WY, to illustrate that a variety of methods may be needed to successfully implement a proactive risk management program. By discussing the mine’s tailored FLRA program, this paper contributes to the literature by providing (1) common barriers that may prevent proactive risk assessment programs in the workplace and (2) case examples in the areas of teamwork, front-line leadership development, and tangible and intangible communication efforts to foster a higher level of trust and empowerment among the workforce.

Risk assessment practices to reveal leading indicators

Risk assessment is a process used to gather knowledge and information around a specific health threat or safety hazard ( Smith and Harrison, 2005 ). Based on the probability of a negative incident, risk assessment also includes determining whether or not the level of risk is acceptable ( Lindhe et al., 2010 ; International Electrotechnical Commission, 1995 ; Pinto, Nunes and Ribeiro, 2011 ). Risk assessments can occur quantitatively or qualitatively. Research values both types in high-risk occupations to ensure that all possible hazards and outcomes have been identified, considered and reduced, if needed ( Boyle, 2012 ; Haas and Yorio, 2016 ; Hallenbeck, 1993 ; International Council on Mining & Metals (ICMM), 2012 ; World Health Organization (WHO), 2008 ). Quantitative methods are commonly found where the site is trying to reduce a specific health or environmental exposure, such as respirable dust or another toxic substance ( Van Ryzin, 1980 ). These methods focus on a specific part of an operation or task within a system, rather than the system as a whole ( Lindhe et al., 2010 ). Conversely, a qualitative approach is useful for potential or recently identified risks to decide where more detailed assessments may be needed and prioritize actions ( Boyle, 2012 ; ICMM, 2012 ; WHO, 2008 ).

Although mine management can use risk assessments to inform procedural decisions and policy changes, they are more often used by workers to identify, assess and respond to worksite risks. A common risk assessment practice is to formulate a matrix that prompts workers to identify and consider the likelihood of a hazardous event and the severity of the outcome to yield a risk ranking ( Pinto, Nunes and Ribeiro, 2011 ). After completing such a matrix and referring to the discretized scales, any organizational member should be able to determine and anticipate the risk of a hazard, action or situation, from low to high ( Bartram, 2009 ; Hokstad et al., 2010 ; Rosén et al., 2006 ). The combination of these two “scores” is used to determine whether the risk is acceptable, and subsequently, to identify an appropriate response. For example, a list of hazards may be developed and evaluated for future interventions, depending upon the severity and probability of the hazards. Additionally, risk assessments often reveal a prioritization of identified risks that inform where risk-reduction actions are more critical ( Lindhe et al., 2010 ), which may result in changes to a policy or protocol ( Boyle, 2012 ).
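The prioritization step described above can be sketched in a few lines. The snippet below is a hypothetical illustration (the hazard names are borrowed from later in this paper, but the likelihood and severity ratings are invented): each hazard receives a likelihood and severity score on a 1–5 scale, their product is computed, and the list is sorted so the highest-risk items surface first.

```python
# Hypothetical example: scoring and sorting a hazard list so the highest
# likelihood-by-severity products surface first. Ratings are invented
# for illustration, not taken from any actual site assessment.
hazards = [
    {"hazard": "pinch points",         "likelihood": 3, "severity": 2},
    {"hazard": "working from heights", "likelihood": 2, "severity": 4},
    {"hazard": "hand placement",       "likelihood": 4, "severity": 1},
]

# The matrix "score" is the product of the two ratings
for h in hazards:
    h["score"] = h["likelihood"] * h["severity"]

# Highest risk first: this ordering shows where risk-reduction
# actions are most critical
prioritized = sorted(hazards, key=lambda h: h["score"], reverse=True)
```

A sorted list like this is exactly the kind of output that can inform where interventions or policy changes are needed first.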

If initiated and completed consistently, risk assessments allow root causes of accidents and patterns of risky behavior to emerge — in other words, leading indicators ( Markowski, Mannan and Bigoszewska, 2009 ). Leading indicators demonstrate pre-incident trends rather than direct measures of performance, unlike lagging indicators such as incident rates, and as a result, are useful for worker knowledge and motivation ( Juglaret et al., 2011 ). Recently, high-risk industries have allocated more resources to preventative activities — not only to prevent injuries but also to avoid the financial costs associated with incidents — which has produced encouraging results ( Maniati, 2014 ; Robson et al., 2007 ). However, research has pointed to workers’ general confusion about the interpretation of hazards and assignment of probabilities as a hindrance to appropriate risk identification and response ( Apeland, Aven and Nilsen, 2002 ; Reason, 2013 ). In response, better foresight into the barriers of risk management is needed to (1) engage workers in risk identification and assessment, and (2) develop pragmatic solutions to prevent incidents.

Methods and materials

In December 2015, Haas and Connor, two U.S. National Institute for Occupational Safety and Health (NIOSH) researchers, traveled to Solvay Green River’s mine in southwest Wyoming. This trona mine produces close to 3 Mt/a of soda ash using a combination of longwall and solution mining and borer miners ( Fiscor, 2015 ). A health, safety and risk management framework had been introduced in phases during 2009 and 2010 to the mine’s workforce of more than 450 to help reduce risks to an acceptable level, and NIOSH wanted to understand all aspects of this FLRA program and how it became integrated into everyday work processes. We collected an extensive amount of qualitative data, analyzed the material and triangulated the results to inform a case study in health and safety system implementation ( Denzin and Lincoln, 2000 ; Patton, 2002 ; Yin, 2014 ). The combination of expert interviews, existing documentary materials, and observation of onsite activities provided a holistic view of both post-hoc and current data points, allowing for various contexts to be compared and contrasted to determine consistency and saturation of the data ( Wrede, 2013 ).

Participants

We collected several qualitative data points, including all-day expert interviews and discussions with mine-site senior-level management such as the mine manager, health and safety manager, and mine foremen/supervisors, some of whom were hourly workers at the time of the risk assessment program implementation ( Flick, 2009 ). Additionally, we heard presentations from the mine managers and site supervisors, received archived risk assessment documents and were able to engage in observations on the surface and in the underground mine operation during the visit, where several mineworkers engaged in conversations about the FLRA, hazard interactions, and general safety culture on site.

Retrospective data analysis of risk assessment in action

Typically, qualitative analysis and triangulation of case study data use constant comparison techniques, sometimes within a grounded theory framework ( Corbin and Strauss, 2008 ; Glaser and Strauss, 1967 ). We employed the constant comparison method within a series of iterative coding steps. First, we typed the field notes and interview notes, and scanned the various risk assessment example documents received during the visit. Each piece of data was coded for keywords and themes through an initial, focused and then constant comparison approach ( Boyatzis, 1998 ; Fram, 2013 ).

Throughout the paper, quotes and examples from employees who participated in the visit are shared to better demonstrate their process to establish the FLRA program. To address the reliability and validity of our interpretation of the data, the two primary, expert information providers during the field visit, Vendetti and Heiser, became coauthors and served as member checkers of the data to ensure all information was described in a way that is accurate and appropriate for research translation to other mine sites ( Kitchener, 2002 ).

It is important to know that in 2009 Solvay experienced a sharp increase in incidents in its more-than-450-employee operation. Although no fatalities occurred, there were three major amputations, and injury frequencies were increasing steadily. These incidents, which included torn ligaments, tendons and muscles requiring surgical repair or restricted duty; lacerations requiring sutures; and fractures ( Mine Safety and Health Administration, 2017 ), revealed that inconsistent perceptions of risk and mitigation efforts existed on site among all types of work positions, from bolters to maintenance workers. These incidents caused frustration and disappointment among the workforce.

Intervention implementation, pre- and post-FLRA program

Faced with inconsistencies in worker knowledge of risks and varying levels of risk tolerance, management could have taken a punitive, “set an example” response based on an accountability framework. Instead, they began a process in 2009 to bring new tools, methods and a new mindset to safety performance at the site. Specifically, based on previous research and experience dating back to 1998, they saw the advantages of creating a common, site-wide set of tools and metrics to guide workers in a consistent approach to risk assessment in the field. This approach trickled down to hourly workers in the form of the typical risk assessment matrix ( Table 1 ) described earlier to identify, assess and evaluate risks. Management indicated that if everyone had tools, then “It doesn’t matter what you knew or what you didn’t, you had tools to assess and manage a situation.” They hypothesized that matrices populated by workers would reveal leading indicators to proactively identify and prevent the incidents that had been occurring on site. Workers were expected to utilize this matrix daily to help identify and evaluate risks.

Risk assessment matrix used by Solvay ( Heiser and Vendetti, 2015 ).

                  Consequence
Probability     1     2     3     4     5
     1          1     2     3     4     5
     2          2     4     6     8    10
     3          3     6     9    12    15
     4          4     8    12    16    20
     5          5    10    15    20    25

To complete the matrix, workers rate the probability and consequences of a risk using the scales/key depicted in Table 2 . As shown in the color-coded matrix, multiplying the scores for these two areas yields a risk ranking of low, moderate, high or critical, thereby providing guidance on what energies or hazards to mitigate immediately. Although the matrix approach itself may not be new to the industry, the implementation and evaluation of such efforts offer value in the form of heightened engagement, leadership and, eventually, behavior change.

Evaluation matrix key ( Heiser and Vendetti, 2015 ).

Probability                          Consequence
1. RARE, practically impossible      1. Could cause first aid injury/minor damage
2. UNLIKELY, not likely to occur     2. Could cause minor injuries (recordable)
3. MODERATE, possibility to occur    3. Could cause moderate damage (LTA)
4. LIKELY, to happen at some point   4. Could cause permanent disability or fatality
5. ALMOST CERTAIN, to happen         5. Could cause multiple fatalities

Assessment
15-25: CRITICAL
 9-12: HIGH
  5-8: MODERATE
  1-4: LOW

Much can be learned about where and how impact occurred on site by examining incidents following the implementation of the FLRA intervention in 2009 and the front-line leadership efforts in 2010. Figure 1 shows Green River’s 2009 spike in non-fatal days lost (NFDL) incidents and a consistent drop thereafter, providing cursory support for the program.

Figure 1. Solvay non-fatal days lost operator injuries, 2006–2016 ( MSHA, 2017 ).

Seeing a drop in incidents provides initial support for the FLRA program that Solvay introduced. Knowing that many covariates may account for a drop in incidents, however, additional data were garnered from MSHA’s website to account for hours worked. Still, the incident rate declined consistently, as shown in Fig. 2 .

Figure 2. Non-fatal days lost operator injury incidence rate (injuries per hours worked), 2006–2016 ( MSHA, 2017 ).
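For readers reproducing the normalization behind Fig. 2, incidence rates in MSHA/OSHA reporting are conventionally expressed per 200,000 employee-hours (roughly 100 full-time workers for a year). The sketch below uses hypothetical figures, not Solvay's actual data:

```python
def nfdl_incidence_rate(injuries: int, hours_worked: float) -> float:
    """Injuries per 200,000 employee-hours, the standard MSHA/OSHA
    normalization (about 100 full-time workers for one year)."""
    return injuries * 200_000 / hours_worked

# Hypothetical numbers for illustration only:
rate = nfdl_incidence_rate(injuries=9, hours_worked=900_000)  # 2.0
```

Normalizing this way lets incident counts be compared across years even when workforce size or hours worked changes.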

From a quantitative tracking effort of these lagging indicators, it can be gleaned that the implemented program was successful. However, it is important to understand what, how and why incidents decreased over time to maintain consistency in implementation and evaluation efforts. In response, this paper focuses on the qualitative data that NIOSH collected in hopes of sharing how common barriers to risk assessment can be addressed to identify leading indicators on site.

During the iterative analysis of the data, researchers sorted the initial and ongoing barriers to continuous risk assessment. The results provide insight into promising ways to measure and document as well as support and manage a risk-based program over several years. After common barriers to risk assessment implementation are discussed, mini case examples to illustrate how the organization improved and used their FLRA process to identify leading indicators follow. Ultimately, these barriers and organizational responses show that an FLRA program can help (1) measure direct/indirect precursors to harm and provide opportunities for preventative action, (2) allow the discovery of proactive leadership risk reduction strategies, and (3) provide warning before an undesired event occurs and develop a database of response strategies ( Blumenstein et al., 2011 ; ICMM, 2012 ).

Barrier to risk assessment intervention: Varying levels of risk tolerance and documentation

An initial challenge, not uncommon in occupational health and safety, was the varying levels of risk tolerance possessed by the workforce. Research shows that individuals differ in their knowledge, awareness and tolerance, and thus in their abilities to recognize and perceive risks as unacceptable ( Brun, 1992 ; Reason, 2013 ; Ruan, Liu and Carchon, 2003 ). Managers and workers reflected that assessments of a risk were quite broad, which impaired the organization’s ability to consistently identify and categorize hazards. One employee who was an hourly worker at the time of the FLRA implementation said, “It took time to establish a sensitivity to potential hazards.” This is not particularly surprising; as individuals gain experience, they can become complacent with health and safety risks and, eventually, have a lower sense of perceived susceptibility to and severity of a negative outcome ( Zohar and Erev, 2006 ). As a result, their ability to consistently notice hazards, and to believe that a hazard poses a threat to their personal health and safety, decreases. The health and safety manager said, “It took a long time to get through to people that this isn’t the same as what they do every day. To really assess a risk you have to mentally stop what you’re doing and consider something.”

Eventually, management developed an understanding that risk tolerance differed individually and generationally onsite, acknowledging that sources of risk are always changing in some regard and tend to be more complicated for some employees to see than others. In response, discussions about the importance of encouraging conscious efforts of risk management became ongoing to support a new level of awareness on site. Additionally, the value of documenting risk assessment efforts on an individual and group level became more apparent. One area emphasized was encouraging team communication around risk assessment if it was warranted. An example of this process and outcome is detailed below to help elucidate how Solvay overcame disparate perceptions of risk through teamwork.

Case example: FLRA discussion and documentation in action

An example of the FLRA in action as a leading indicator was provided by the maintenance supervisor during the visit. This example included an installation of a horizontal support beam. Workers collectively completed an FLRA to determine if they could simply remove the gantry system without compromising the integrity of the headframe. As part of their FLRA process, workers were expected to identify energies/hazards that could exist during this job task. Hazards that they recorded for this process for consideration within the matrix as possible indicators included:

  • Working from heights/falling.
  • Striking against/being struck by objects.
  • Pinch points.
  • Traction and balance.
  • Hand placement.
  • Caught in/on/between objects.

An initial risk rank was provided for each of the identified hazards, based on the matrix ( Tables 1 and 2 ). Workers then decided which controls to implement to minimize each risk to an acceptable level. Examples of controls implemented included:

  • Review the critical lift plan.
  • Conduct a pre-job safety and risk assessment meeting.
  • Inspect all personal protective equipment (PPE) fitting and harnesses.
  • Understand structural removal sequence.
  • Communicate between crane operator and riggers.
  • Assure 100 percent of tie-off protocol is followed.
  • Watch out for coworkers.
  • Participate in housekeeping activities.

Upon determining and implementing controls, a final risk rank was rendered to make a decision for the job task: whether or not the headframe could be removed in one section. Ultimately, workers decided it could safely be done. However, management emphasized the importance of staying true to their FLRA. They said that 50 percent of their hoisting capabilities are based on wind and that if the wind is too high, they shut down the task, which happened one day during this process. So, although an FLRA was completed and provided a documented measurement and direction about what decisions to carry out, the idea of staying true to a minute-by-minute risk assessment was important and adhered to for this task.
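The steps above (identify hazards, assign an initial rank, implement controls, re-rank, then decide go/no-go) suggest a simple data model. The sketch below is a hypothetical structure for such an FLRA record: the field names and numeric ranks are illustrative assumptions, not Solvay's actual forms, though the hazards and controls are drawn from the example above.

```python
# Hypothetical data model for a documented FLRA record; field names and
# numeric ranks are illustrative assumptions, not Solvay's actual forms.
from dataclasses import dataclass, field

def band(score: int) -> str:
    """Map a probability-times-consequence product to the key's bands."""
    if score >= 15:
        return "CRITICAL"
    if score >= 9:
        return "HIGH"
    if score >= 5:
        return "MODERATE"
    return "LOW"

@dataclass
class FLRAEntry:
    hazard: str
    initial_score: int                      # probability x consequence, before controls
    controls: list[str] = field(default_factory=list)
    residual_score: int = 0                 # re-scored after controls are in place

flra = [
    FLRAEntry("working from heights/falling", 16,
              ["inspect PPE and harnesses", "assure 100 percent tie-off"], 4),
    FLRAEntry("struck by objects", 12,
              ["communicate between crane operator and riggers"], 6),
]

# Proceed with the job only if every residual rank is acceptable (here: below HIGH)
proceed = all(band(entry.residual_score) in ("LOW", "MODERATE") for entry in flra)
```

The go/no-go flag mirrors the crew's decision point: if any hazard's residual rank remains unacceptable, the task is reassessed or shut down, as with the wind limit in this example.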

In this sense, the FLRAs served as a communication platform to share a common language and ultimately, common proactive behavior. In general, vagueness of data on health and safety risks can prevent hazard recognition, impair decision-making, and disrupt risk-based decisions among workers ( Ruan, Liu and Carchon, 2003 ). This example showed that the more workers understood what constitutes an acceptable level of risk, the greater sense of shared responsibility they had to prevent hazards and make protective decisions on the job ( Reason, 1998 ) such as shutting down a procedure due to potential problems. Now, workers have the ability to implement their own check-and-balance system to determine if a response is needed and their decision is supported. Treating the FLRA as a check-and-balance system allowed workers to improve their own risk assessment knowledge, skills and motivation, a common barrier to hazard identification ( Haslam et al., 2016 ). In theory, as FLRAs are increasingly used to predetermine possible incidents and response strategies are developed and referenced, the occurrence of lagging indicators should decrease, as has been the case at Solvay in recent years.

Barrier to risk assessment intervention: Resisting formal risk assessment methods

Worksites often face challenges in determining the best ways to measure risk and in developing suitable tools to facilitate consistent risk measurement ( Boyle, 2012 ; Haas and Yorio, 2016 ; Haas, Willmer and Cecala, 2016 ). For example, research shows that assessing site risks using a series of checklists or general observations during site walkthroughs is more common ( Navon and Kolten, 2006 ). Although practical, checklists and observations require little cognitive investment and have often been insufficient for revealing potential safety problems ( Jou et al., 2009 ). Due to familiarity with “the way things were,” implementing the system of risk assessments at Solvay came with challenges, and workers initially resisted moving toward something more formal.

For example, at the outset, hourly workers said they felt, “I do this in my head all the time. I just don’t write it down.” Particularly, individuals who were hourly workers at the time of the FLRA program implementation felt that they already did some form of risk identification and that they did not need to go into more detail to assess the risk. Just as some workers did not see a difference with what they did implicitly, and so discounted the value of conducting an FLRA, others did not think they needed to take action based on their matrix risk ranking. As one worker reflected on the previous mindset, he said, “It would be okay to be in the red, so long as you knew you were in the red.” Because of the varying levels of initial acceptance, there were inconsistencies in the quality of the completed risk assessment matrices. Management noted, “Initially, people were doing them, but not to the quality they could have been.” In response, Solvay management focused on strengthening their frontline leadership skills to help facilitate hourly buy-in, as described in the following case example.

Case example: Starting with frontline leadership to facilitate buy-in, “The Club”

To facilitate wider commitment and buy-in, senior-level management took additional steps with their frontline supervisors. To train frontline leaders on how to understand rather than punish worker actions, Solvay management started a working group in 2010 called “The Club.” This group consisted of supervisory personnel within various levels of the organization. The purpose of The Club was to develop leaders and a different sort of accountability with respect to safety. One of its first actions was to, as a group, agree on qualities of a safety leader. From there, they eventually executed a quality leadership program that embraced the use of the risk assessment tools and their outcomes ( Fiscor, 2015 ; Heiser and Vendetti, 2015 ).

After receiving this leadership training and engaging in discussions about FLRA, the execution of model leadership from The Club started. Specifically, the frontline foremen that the researchers talked with indicated that they were better able to communicate about and manage safety across the site. Prior to The Club and adapting to the FLRA, one of these supervisors reflected, “No one wanted to make a safety decision.” Senior management acknowledged with their frontline leadership that the FLRA identifies steps that anyone might miss because they are interlocked components of a system. Because of the complex risks present on site, they discussed the importance of sitting down and reviewing with hourly workers if something happened or went wrong. They shared the importance of supportive language: “We say ‘let’s not do this again,’ but they don’t get in trouble.”

To further illustrate the leadership style and communicative focus, one manager shared a conversation conducted with a worker after an incident. Rather than reprimanding the worker’s error in judgement, the manager asked: “What was going through your mind before, during this task? I just want to understand you, your choices, your thought process, so we can prevent someone else from doing the same thing, making those same choices.” After the worker acknowledged he did not have the right tools but tried to improvise, the manager asked him what other risky choices he had made that turned out okay. This process engaged the worker, and he “really opened up” about his perceptions and behaviors on site. This incident is an example of site leaders establishing accountability for action but ensuring that adequate resources and site support were available to facilitate safer practice in the future ( Yorio and Willmer, 2015 ; Zohar and Luria, 2005 ). In other words, management used these conversations not only to educate the workers about hazards involved in complex systems, but also to enact their positive safety culture.

Importantly, this communication and documentation among The Club allowed insight into how employees think, serving as a leading indicator for health and safety management. A stack of FLRAs completed between 2009 and 2015 showed increasing detail as the years progressed. It was apparent that the hourly workforce continually adapted, resulting in an improved sense of organizational motivation, culture and trust. Management indicated to NIOSH that workers now have an increased sense of empowerment to identify and mitigate risks. In contrast to how workers used to document their risk assessments, a management member said: “You pull one out today, and even if it isn’t perfect, the fundamentals are all there, even if it isn’t exactly how we would do it. And more likely than not, you’d pull out one and find it to be terrific.”

Barrier to risk assessment intervention: Communicate and show tangible support for risk assessment methods

A lack of management commitment, poor communication and poor worker involvement have all been identified as features of a safety climate that inhibit workers’ willingness to proactively identify risks ( Rundmo, 2000 ; Zohar and Luria, 2005 ). Therefore, promoting these organizational factors was needed to encourage workers to identify hazards and prevent incidents ( Pinto et al., 2011 ). When first rolling out their FLRA process, Solvay management knew that if they were going to transform safety practices at the mine, there had to be open communication between hourly and salary workers about site conditions and practices ( Fiscor, 2015 ; Heiser and Vendetti, 2015 ; Neal and Griffin, 2006 ; Reason, 1998 ; Rundmo, 2000 ; Wold and Laumann, 2015 ; Zohar and Luria, 2005 ). They discussed preparing themselves to be “exposed” to such information and commit as a group to react in a way that would maintain buy-in, use and behavior.

Creating a process of open sharing meant that, especially at the outset, management was likely to hear things that they did not necessarily want to hear. Even when feedback ran against a policy in place or revealed an attitude of risk acceptance, all levels of management wanted to communicate their understanding that risks and hazards change, and that policies sometimes need to adapt to changing energies in the environment, as revealed by the FLRAs that the workers were taking time to complete. The following case example showcases the value of ongoing communication to maintain a risk assessment program and buy-in from workers.

Case example: Illustrating flexibility with site procedures

During the visit, managers and workers both discussed the conscious efforts made during group meetings and one-on-one interactions to improve their organizational leadership and communication, noting the difficulty of incorporating the FLRA as a complement to existing rules and regulations on site: “We needed to continually stress the importance of utilizing the risk assessment tool, and if something were to occur, to evaluate the level of controls implemented during a reassessment of the task.” To encourage worker accountability, the managers wanted to show their commitment to the FLRA process and that they could be flexible in changing a rule or policy if the risk assessment showed a need. As an example, they showed NIOSH a “general isolation” procedure about lock-out/tag-out that was distributed at their preshift safety meeting that morning. They handed out a piece of paper saying that, “While a visual disconnect secured with individual locks is always the preferred method of isolation, there are specific isolation procedures for tasks unique to underground operations.” The handout went on to state: “In rare circumstances, when a visual disconnect with lock is not used and circumstances other than those specifically identified are encountered, a formal documented risk assessment will be performed. All potential energies will be identified and understood, every practical barrier at the appropriate level will be identified and implemented, and the foreman in charge of the task will approve with his/her signature prior to performing the work. All personnel involved in the job or task must review and understand the energies and barriers implemented prior to any work being performed…”

This example shows the site’s commitment to risk assessment while also showing that, if leading indicators are identified, a policy can be changed to avoid a potential incident. By noting that they would change a procedure if workers identified something, the document illustrated management’s confidence in, and the value they place on, the FLRA process. Workers indicated that these behaviors are a support mechanism for them and their hazard identification efforts. Along the same lines, the managers we talked with noted the importance of not just training to procedure; as one put it: “High-level policies complement but don’t drive safety.” This example showcases their leadership and communicative commitment.

The lock-out/tag-out example is just one safety share that occurred at a preshift meeting. These shares “might be no more than five minutes, they might go a half-hour, but they’re allowed to take as long as they need,” one manager said. This continued commitment to fostering the use of leading indicators has shown that the metrics used to assess risks are only as good as the response to those metrics, and it afforded workers an opportunity to engage in improving the policies and rules on site. This consistency in communication helped create a sense of ownership among workers, which led them to recognize the need for a minute-to-minute thought process in which they foresee consequences, weigh probabilities and deliberate different response options. As one manager said, “You can have a defined plan but an actual risk assessment shows the dynamics of a situation and allows different plans to emerge.”

Limitations and conclusions

The purpose of this paper was to illustrate an example of a process in which everyone could participate to identify leading safety indicators. By everyone’s account, it took about four to five years until Solvay actually saw the change in action, meaning that the process was sustained by workers and risk assessment terminology was part of their everyday discussions. In addition to showing how leading indicators can be developed and what they look like “in action,” this paper advanced the discussion by providing insight into common barriers to risk assessment and potential responses to those barriers. As Figs. 1 and 2 show, incidents have declined at Solvay since the implementation of the FLRA program and the enhanced leadership training of frontline supervisors, showing the impact of the FLRAs as a strong leading indicator for health and safety. Additionally, hourly workers discussed how much better the culture is on site now than it was several years ago, noting their appreciation for having a common language to communicate about risks. It is rare that both sides, hourly and salary, see benefits in a written tool from an operational and behavioral standpoint. The cooperation on site speaks to the positive attributes discussed within this case study and the mini examples provided that cannot be shown in a graph.

Although the results come from a single small case study and cannot be generalized across the industry, the data support the argument that poor leadership and an overall lack of trust on site can inhibit workers’ willingness to participate in risk measurement, documentation and decision-making. The researchers could not talk with every worker and manager present on site, so not all opinions are reflected in this paper. However, the consistency in messages from both levels of the organization showed saturation of insights that reflect the impact of the FLRAs. Some of this information may already be known and utilized by mine site leadership. However, because the focus of the study was not only on the development and use of specific risk measurement tools but also on the organizational practices needed to foster such proactive behavior, the results offer several potential areas of improvement for the industry in terms of sustaining formal risk assessment over time.

In light of these limitations, mine operators should consider this information when interpreting the results in terms of (1) how to establish formal risk assessment on site, especially when trying to identify and mitigate hazards, (2) what the current mindset of frontline leadership may be and how they could support (or hinder) such a risk assessment program, and (3) methods to consistently support a participatory risk assessment program. Gaining an in-depth view of Solvay’s own health and safety journey provides expectations and a possible roadmap for encouraging worker participation in risk management at other mine sites to proactively prevent health and safety incidents.

Acknowledgments

The authors wish to thank the Solvay Green River operation for its participation and cooperation in this case study and for openly sharing its experiences.

The findings and conclusions in this paper are those of the authors and do not necessarily represent the views of NIOSH. Reference to specific brand names does not imply endorsement by NIOSH.

Contributor Information

E.J. Haas, Lead research behavioral scientist, National Institute for Occupational Safety and Health, Pittsburgh, PA, USA.

B.P. Connor, Research behavioral scientist, National Institute for Occupational Safety and Health, Pittsburgh, PA, USA.

J. Vendetti, Manager, mining operations, Solvay Soda Ash & Derivatives North America, Green River, WY, USA.

R. Heiser, CSP, Mine production superintendent, Solvay Chemicals Inc., Green River, WY, USA.



Risk Assessment Matrices for Workplace Hazards: Design for Usability


1. Introduction

1.1. Background on Risk Assessment

1.2. Diverse Options for Design

1.3. Usability Issues

  • Improbable and seldom.
  • Often, frequent, and probable.
  • Disastrous and catastrophic.

1.4. Reasons for a Second Survey

2. Materials and Methods

2.1. The Survey Instrument

  • For rating severity terms, the end points were No harm and Worst harm.
  • For likelihood and probability terms, the end points were Impossible and Certain.
  • For extent of exposure terms, the end points were No exposure and Constant exposure.

2.2. Rationale for Terms Included in the Survey

2.3. Procedures

3.1. Demographics of Respondents

3.2. Ratings of Terms in Present Survey

3.3. Parallel Wording

  • Extremely improbable and extremely unlikely: (median 6, 7|mean 15.2, 15.9).
  • Somewhat improbable and somewhat unlikely: (median 22, 25.5|mean 28.5, 24.8).
  • Moderately probable and moderately likely: (median 57.5, 55|mean 61.1, 56.9).
  • Probable and likely: (median 67, 65|mean 65.3, 67.2).
  • Highly probable and highly likely: (median 88.5, 81|mean 87.1, 84.2).
  • Improbable and unlikely: (median 10, 20|mean 14.1, 20.8).
  • Somewhat probable and somewhat likely: (median 56, 40|mean 57.2, 45.6).

3.4. Rating from Two Surveys Compared

4. Discussion

4.1. Selectively Removing Terms

4.2. Calendar-Based Terms

4.3. Limitations

5. Recommendations

6. Conclusions

  • The survey confirmed the prior recommendations for severity terms. However, the authors recommend limiting use of the set containing the word “damage” to hazards concerned with harm to equipment, facilities, products and the environment.
  • The survey confirmed the prior recommendations for likelihood terms with some suggestions. The term somewhat likely had a median in this survey of 40, but a median of 60 in the prior survey. That does not negate use of the term, but due to the inconsistent ratings, we suggest using moderately likely with a median rating of 55.
  • Based on ratings in both surveys, the ratings for the terms for the lowest likelihood category did not produce a winner. Three terms intended for naming the lowest category, with their medians, are: very unlikely (11), extremely unlikely (7), or highly unlikely (10). We express no preference.
  • The survey found concerns with some terms in the probability sets. The prior survey did not include terms with ratings in the middle range of probability, so four terms were added to this survey: fairly normal, moderately probable, somewhat probable, and somewhat improbable. Ratings for these terms provide alternatives for the word occasionally in the sets found in Table 16 . The authors recommend replacing occasionally in the upper set with fairly normal, and in the three lower sets with somewhat improbable.
  • The survey confirmed the prior recommendations for extent of exposure with small changes. An improvement incorporated into the present survey was adding the word “exposed” to four words in the prior survey to make four terms—regularly exposed, occasionally exposed, seldom exposed, and rarely exposed.
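One way to read these recommendations is that each term carries a numeric anchor (its median rating on the 0–100 scale) that can order the rows and columns of a risk matrix. The sketch below is a minimal illustration only, not part of the study: it uses median ratings reported for some recommended severity and likelihood terms, and the binning rule (product of medians with fixed cut-offs) is our assumption.

```python
# Illustrative sketch: ordering recommended terms by their reported
# median ratings (0-100 scale). The product-of-medians binning rule
# and the cut-offs are assumptions, not survey findings.

SEVERITY_MEDIANS = {
    "Catastrophic": 100.0, "Serious": 70.0, "Moderate": 50.0,
    "Marginal": 21.0, "Insignificant": 5.5,
}
LIKELIHOOD_MEDIANS = {
    "Highly likely": 81.0, "Likely": 65.0, "Moderately likely": 55.0,
    "Somewhat unlikely": 25.5, "Unlikely": 20.0,
}

def risk_level(severity: str, likelihood: str) -> str:
    """Combine two rated terms into a coarse risk cell."""
    score = SEVERITY_MEDIANS[severity] * LIKELIHOOD_MEDIANS[likelihood] / 100
    if score >= 50:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

print(risk_level("Catastrophic", "Highly likely"))   # high
print(risk_level("Moderate", "Moderately likely"))   # medium
print(risk_level("Marginal", "Unlikely"))            # low
```

Because the medians are roughly evenly spaced within each parameter, any monotone combination rule would produce the same ordering of cells; the specific thresholds here are arbitrary.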

Supplementary Materials

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

Appendix A. Rationale for Normalized, Equal-Axis Risk Matrices


[Tables: flattened in extraction. In the original article they report: the number of recommended term sets per axis parameter (severity, probability, likelihood, extent of exposure); which probability and likelihood terms matched or differed from the prior survey; wording changes to severity and extent-of-exposure terms relative to the prior survey; respondent demographics (N = 37; age, gender, ethnicity or race, field of most experience, employment sector); mean, standard deviation and median ratings on the 0–100 scale for severity, probability, likelihood and extent-of-exposure terms; comparisons of means and medians between the prior survey of undergraduates and the present survey of experienced practitioners, including the calendar-based terms daily, weekly, monthly and annually; and the recommended sets of terms with notes on suggested substitutions.]

Share and Cite

Jensen, R.C.; Bird, R.L.; Nichols, B.W. Risk Assessment Matrices for Workplace Hazards: Design for Usability. Int. J. Environ. Res. Public Health 2022 , 19 , 2763. https://doi.org/10.3390/ijerph19052763




Risk Assessment Questionnaires (With Sample Templates and Questions)

Kate Williams

Last Updated: 29 May 2024


Table Of Contents

  • Risk Assessment Questionnaires
  • An Overview
  • Importance
  • Best practices
  • Questionnaires with SurveySparrow

Risk Assessment Questionnaires are structured tools that help organizations identify and manage risks.

But what exactly are they, and how can you create one yourself?

Let’s find out.

What are Risk Assessment Questionnaires?

Risk Assessment Questionnaires, also called Third-Party Risk Assessment Questionnaires, are standardized sets of questions designed to gather information about the potential risks associated with a specific entity, such as a vendor, partner, or even a new employee.

These questionnaires typically score each question on how likely a risk is to occur and how severe its impact would be. The combined scores yield an overall risk level, such as low, medium, high, or extreme.
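To make the scoring concrete, here is a minimal sketch of a multiplicative likelihood × severity matrix mapped to risk bands. This is illustrative only, not any product's actual scoring model; the 1–5 scales and band thresholds are assumptions.

```python
# Illustrative sketch (not a real product's scoring model): combining
# assumed 1-5 likelihood and severity scores into an overall risk level.

def risk_level(likelihood: int, severity: int) -> str:
    """Map 1-5 likelihood and severity scores to a risk band."""
    score = likelihood * severity  # simple multiplicative risk matrix
    if score >= 20:
        return "extreme"
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_level(2, 2))  # -> low
print(risk_level(4, 5))  # -> extreme
```

In practice the thresholds and scales vary by organization; the point is that each answer contributes a number, and the numbers roll up into a single band.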

Studies suggest that more than 50% of data breaches involve third-party vendors. These questionnaires are important for spotting and dealing with such risks, helping organizations plan how to avoid them and decide where to focus their efforts.

They cover topics such as organizational risk culture, risk appetite, oversight mechanisms, contingency planning, financial controls, compliance issues, communication climate, staff turnover, safety measures, IT systems reliability, and impact assessment in case of identified risks.

Here’s a sample questionnaire template.

Vendor Risk Assessment Questionnaire Template


Components of a Risk Assessment Questionnaire

It comprises several key components, including:

  • Identification of potential risks: Questions aimed at identifying potential hazards or vulnerabilities within the organization.
  • Evaluation of risk severity: Inquiries assessing the potential impact and likelihood of identified risks.
  • Mitigation strategies: Sections dedicated to outlining preventive measures and mitigation strategies to address identified risks effectively.
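The three components above could be modelled as a small data structure. This is a hypothetical sketch; the field names, scales, and example risks are illustrative, not any specific tool's schema.

```python
# Hypothetical sketch of a questionnaire's risk items as data;
# field names and 1-5 scales are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    hazard: str                    # identification of the potential risk
    likelihood: int                # 1 (rare) .. 5 (almost certain)
    impact: int                    # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def severity(self) -> int:
        # combined severity score used to prioritize mitigation effort
        return self.likelihood * self.impact

items = [
    RiskItem("Vendor data breach", likelihood=3, impact=5,
             mitigations=["encryption at rest", "annual audit"]),
    RiskItem("Key staff turnover", likelihood=2, impact=3),
]
# Rank risks so mitigation effort goes to the most severe first.
ranked = sorted(items, key=lambda r: r.severity, reverse=True)
print([r.hazard for r in ranked])
```

Sorting by a combined score is one simple way to turn questionnaire answers into a prioritized action list.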

Purpose of Risk Assessment Questionnaires

The primary purpose is to help organizations identify and understand potential risks they may face.

  • Identify potential risks: These questionnaires help find possible weaknesses and threats in a company.
  • Evaluate risk severity: They also help determine the severity of those risks.
  • Inform decision-making: Their information helps you make smart choices about partnerships, vendors, and where to focus resources to manage risks better.

Importance of Risk Assessment Questionnaires

The significance of third-party risk assessment questionnaires lies in their ability to enhance organizational resilience and protect against potential threats.


Proactive Risk Management

These questionnaires act as early warning systems, helping businesses identify potential issues and vulnerabilities before they escalate into significant problems.

By conducting these surveys regularly, organizations can stay vigilant and address emerging risks promptly. This minimizes the likelihood of costly disruptions to their operations.

Regulatory Compliance

They also ensure companies follow the rules and standards set by the industry. Compliance with industry regulations and standards is essential for businesses to operate ethically and avoid legal repercussions.

These assessments help companies assess their compliance status by identifying areas where they may fall short of regulatory requirements. This enables organizations to take corrective actions and ensure their operations align with applicable laws and guidelines.

Informed Decision-Making

Informed decision-making is crucial for success. Insights from these assessments provide valuable information about potential risks and their impact on various aspects of the business.

By analyzing the data collected, companies can make strategic decisions about resource allocation, risk mitigation strategies, and long-term planning, maximizing their chances of success.

Stakeholder Confidence

Displaying a commitment to safety enhances a company’s reputation and credibility. By regularly assessing risks and taking proactive measures to address them, companies show that they prioritize the well-being of their stakeholders.

With this, you can build trust and confidence among customers, partners, investors, and regulators.

Continuous Improvement

Risk management is an ongoing process that requires continuous monitoring, evaluation, and improvement. By regularly reviewing and updating their assessments, organizations can adapt to changing circumstances and emerging threats.

This iterative approach allows companies to avoid potential risks and continually improve their resilience and preparedness.

Are we clear about the significance? Let’s move on to an interesting section:

Types of Risk Assessment Questionnaires

Risk assessment questionnaires come in different types, each serving a distinct role in keeping businesses safe and compliant.

Knowing the strengths of each form helps you pick the right one and get better results.

(I’ve also included sample templates created with SurveySparrow so you can get a feel for how they work. Feel free to give them a try!)

Oh! A few extra questions have been added to each section. You can use them in the templates, remove the pre-populated fields, or add more to personalize them.

The first one in the lot is:

1. Change Management Risk Assessment Questionnaire


This is used to evaluate the potential risks associated with implementing a change. This questionnaire helps identify, assess, and address risks that may arise during change initiatives, like system upgrades or policy revisions.

Questions will be about the nature of the change, possible risks, and strategies to mitigate them. Using it, you can proactively manage risks, ensuring smoother implementation of changes and minimizing disruptions to business operations.

Used By: Management teams, Project Managers, Human Resources Professionals

Risk Assessment Sample Questions

  • How many people will this change impact?
  • Have we done something similar before? How’d it go?
  • Can we easily fix things if there’s a problem?
  • Will people need help learning how to do things differently?
  • Do people understand why this change is happening?

2. Investment Risk Assessment Questionnaire


This is all about investment opportunities. It helps investors understand their risk tolerance and preferences, allowing them to make informed decisions about where to invest their money.

This questionnaire typically asks questions about factors such as investment goals, time horizon, and willingness to tolerate fluctuations in the value of investments.

By completing this assessment, investors gain insights into their risk profile, enabling them to make investment choices aligned with their financial goals and risk comfort.

Used By: Investors, Financial Advisors

Sample Questions

  • In how many years do you expect to need this money?
  • Which is more important to you: potentially higher returns or preserving your principal?
  • How would you react if your investment portfolio dropped by 10% in a month?
  • Do you have any upcoming significant expenses (e.g., down payment, education) that might require accessing this money?
  • Which statement best describes your overall financial situation? (e.g., Debt-free with emergency savings, Have some debt but manageable)

3. AML Risk Assessment Questionnaire


An AML (Anti-Money Laundering) Risk Assessment evaluates the risks of money laundering and terrorism financing in a business. It helps organizations identify weaknesses and take steps to reduce these risks.

The questionnaire covers customer checks, transaction monitoring, staff training, and overall AML compliance. This assessment allows businesses to see where they’re vulnerable and improve their anti-money laundering measures to follow the rules and protect their reputation.

Used By: Money Services Businesses, Cryptocurrency Exchanges, Real Estate Agents

  • Do you handle a large volume of cash transactions in your day-to-day operations?
  • Does your company serve many customers from countries considered high-risk for money laundering?
  • How often are your company’s Anti-Money Laundering (AML) policies and procedures reviewed and updated?
  • Is there a clear and accessible process for employees to report suspicious activity to the appropriate authorities?
  • To your knowledge, has the company ever faced any fines or sanctions for violations of AML regulations?

4. Cybersecurity Risk Assessment Questionnaire


You can’t play with data security. This questionnaire evaluates the level of cybersecurity risks within an organization.

It helps develop strategies to minimize the chances of data breaches and other cyber threats. It typically addresses governance and organizational structure, information security and privacy, physical and data center security, web application security, and infrastructure security.

Used By: IT Departments, Chief Information Security Officers (CISOs), Cybersecurity Professionals

  • Are you familiar with the company’s security policies on passwords, data access, and acceptable technology use?
  • Do you create strong passwords and avoid using the same password for work and personal accounts?
  • Have you participated in any cybersecurity training the company offers, such as phishing awareness or secure browsing practices?
  • Do you avoid connecting personal devices to the company network unless explicitly allowed and following security guidelines?
  • Do you feel comfortable asking questions or reporting any concerns about cybersecurity at work?

5. Health Risk Assessment Questionnaire


A Health Risk Assessment Questionnaire helps people understand their current health and spot possible problems. It asks about things like medical history, lifestyle, and family health.

By filling out this form, people can learn about health risks and decide what they can do to stay healthy.

The focus would be on lifestyle habits, medical history, family history, and demographic factors.

Used By: Doctors and Nurses, Health Insurance Companies

Questions you can add

  • Have you ever been diagnosed with high blood pressure, diabetes, or high cholesterol?
  • Do you schedule regular checkups with a doctor or other healthcare professional?
  • Have you noticed any significant changes in your weight or energy levels in the past year?
  • Do you experience high stress levels regularly?
  • Do you get at least 7 hours of sleep most nights?

6. Fall Risk Assessment Questionnaire


With this, you can check how likely someone is to fall and get hurt. It looks at overall well-being: how well they move, their balance, and whether they take any medications. By answering these questions, healthcare workers can identify people at high risk of falling and help them avoid it.

It also makes it easier to spot risks linked to certain diseases and conditions.

Used By: Hospitals and Clinics, Nursing Homes

  • Do you experience any dizziness, lightheadedness, or unsteadiness at work? (Yes/No)
  • Do you have any pain in your feet, legs, or hips that affects your balance?
  • Do you have any concerns about tripping hazards in your environment?
  • Have you recently been diagnosed with any new medical conditions?
  • Do you feel comfortable reporting any recent changes in your health that might increase your risk of falling sick?

7. Vendor Risk Assessment Questionnaire


This one is all about staying safe when working with third parties.

A Vendor Risk Assessment Questionnaire checks how safe it is to work with other companies. It asks about their finances, how they protect data, and if they follow rules. By answering these questions, businesses can see if working with a company is risky.

It helps you identify potential risks and ensure compliance with regulations like GDPR.

Used By: Businesses that work with other companies, Procurement Teams

  • Are you compliant with relevant industry regulations (e.g., HIPAA, PCI DSS)?
  • Do you outsource any critical functions to other vendors?
  • How do you collect, store, and use customer data?
  • Do you have a process for assessing the risks of your third-party vendors?
  • Do you have a written information security policy?

8. Internal Audit Risk Assessment Questionnaire


If you want to identify risks within an organization’s operations, finances, and compliance, this set of questions will help.

It looks into financial risks, compliance with regulations, and strategic plans. Companies can improve their internal controls and governance practices by pinpointing areas of vulnerability. It’s instrumental in ensuring regulatory compliance and optimizing business performance.

Used by: Companies of all sizes, Internal Audit Departments

Example Questions

  • Are there documented policies and procedures for risk identification?
  • Are these policies clearly communicated and readily accessible to employees?
  • On a scale of 1 (low) to 5 (severe), what is the potential impact of this risk on the organization?
  • Have any recent changes in regulations or industry standards impacted this department?
  • Have any internal control weaknesses been identified in this area recently?

9. Compliance Risk Assessment Questionnaire


This assessment helps organizations evaluate their adherence to regulatory requirements and industry standards. It covers compliance programs, regulatory changes, and enforcement actions.

You can implement measures to mitigate legal and regulatory exposures by identifying compliance risks. This helps in ethical business conduct and maintaining trust with stakeholders.

Used By: Legal Departments, Regulatory Agencies

Questions to Ask

  • Do you have a documented compliance program that outlines policies, procedures, and responsibilities?
  • Are there any compliance requirements that could limit your ability to innovate or compete in the market?
  • How does the company monitor compliance and identify potential violations in day-to-day operations?
  • What metrics does the company use to measure the effectiveness of its compliance program?
  • From your perspective, what are the most significant compliance risks facing the company right now?

10. Cancer Risk Assessment Questionnaire


Individuals use this to assess their risk of developing cancer. You can analyze and conclude based on family history, lifestyle choices, and environmental exposures.

By understanding their cancer risk, people can make informed decisions about preventive measures, screening tests, and lifestyle modifications to reduce their risk of developing cancer.

Used By: Cancer Centers

  • Do you have a family history of cancer?
  • What is your typical diet like? (Diet plays a role in cancer risk)
  • Have you ever undergone any radiation therapy or chemotherapy treatments?
  • Have you ever had significant sun exposure without proper protection?
  • Are you taking any medications that could potentially increase cancer risk?

Note: These are just general questions. A healthcare professional can provide a more comprehensive assessment based on your medical history and risk factors.

11. Lead Risk Assessment Questionnaire


This assessment evaluates the risk of lead exposure in various settings, such as homes, schools, and workplaces.

It examines environmental issues such as lead-based paint, water contamination, and occupational exposure. By identifying lead hazards, organizations and individuals can take measures to mitigate exposure and protect health, which is particularly important for children.

Used By: Environmental Health Agencies, Lead Abatement Programs

Did you know that lead paint was banned in the US in 1978?

  • Was your home built before 1978?
  • Do you live near a lead smelter, battery recycling plant, or other industry that may release lead into the air?
  • Do you have bare soil around your home, especially where children play?
  • Do you or anyone in your household drink water from lead pipes or soldered copper pipes?
  • Do you or your child (if applicable) frequently eat canned food?

12. Enterprise Risk Assessment Questionnaire


This helps organizations identify and manage risks across all areas of their operations. It covers strategic, financial, operational, and compliance risks. You get a comprehensive view of potential threats to the organization.

Companies can prioritize risk mitigation efforts by conducting enterprise risk assessments and strengthening their resilience to external and internal risks.

Used by: Executive Management, Risk Management Teams, Board of Directors

  • What are the key strategic objectives of the organization?
  • How could changes in the market landscape (e.g., technology, competition, regulations) impact our ability to achieve these objectives?
  • What are the major sources of revenue and cost for the organization?
  • What are the potential events or actions that could damage the organization’s reputation with customers, investors, or the public?
  • Do you have a sound financial management strategy to mitigate these risks?

13. Information Security Risk Assessment Questionnaire


This questionnaire is all about risks to information assets, such as data breaches, unauthorized access, and cyber-attacks. It assesses security controls, vulnerabilities, and threats to determine the effectiveness of information security measures.

Organizations can identify and address security gaps by conducting information security risk assessments, safeguarding sensitive information, and maintaining data integrity.

Used By: Information Security Officers, IT Departments

  • How frequently do you conduct a comprehensive Information Security Risk Assessment?
  • How does the organization monitor its network activity to detect and respond to potential cyberattacks?
  • Do you have a bring-your-own-device (BYOD) policy, and if so, what security controls are implemented for personal devices accessing the network?
  • What measures are in place to ensure the secure backup and recovery of critical data in case of a disaster?
  • Can you share any success stories or lessons learned from past security incidents? (Note: this question can be adjusted depending on the organization’s willingness to share such information)

14. Diabetes Risk Assessment Questionnaire


A diabetes risk assessment helps individuals evaluate their likelihood of developing diabetes. And nobody wants that.

It considers factors such as family history, lifestyle choices, and medical conditions to identify potential risk factors. By understanding their risk level, individuals can make lifestyle changes and seek medical advice to prevent or manage diabetes.

Used by: Healthcare Providers, Health Clinics, Health Insurance Companies

Questions You Can Ask

  • Are you physically active for at least 30 minutes most days of the week?
  • Do you typically eat a healthy diet rich in fruits, vegetables, and whole grains?
  • Do you ever experience excessive thirst, urination, or unexplained weight loss?
  • Do you smoke cigarettes or use any other tobacco products?
  • Have you ever been diagnosed with prediabetes?

Best Practices to Follow

  • Encourage Diverse Teams: Mix things up! It is important to bring people with different backgrounds together to get various ideas and perspectives on risks. The more variety, the better!
  • Use What-If Scenarios: Play out different “what if” scenarios. Like, what if a big storm hits? What would we do then? Imagine different situations to see what risks could happen and how they might affect the organization’s plans.
  • Keep Watching for Risks: Use tools to watch for risks all the time. You never know when one might sneak up on you!
  • Bring Different Departments Together: Get people from different parts of the organization to work together on identifying and dealing with risks.
  • Encourage Speaking Up About Risks: Make sure everyone feels comfortable discussing risks so problems can be fixed before they become big issues.
  • Use Technology to Help: Use computers and special software to make risk assessments faster and more accurate.

How to Create a Risk Assessment Questionnaire with SurveySparrow

Let me walk you through the process. You can start from scratch, let AI surveys build the questionnaire for you, or use the ChatGPT plugin.

Right now, I’ll walk you through how we do it with the pre-designed templates:

Step 1: Access Your SurveySparrow Account

Log in to your SurveySparrow account and find the ‘New survey’ button on your Home page.

Don’t have an account? Maybe this is the perfect time to create one!

14-day free trial • Cancel Anytime • No Credit Card Required • No Strings Attached

Step 2: Select or Build from a Template

Choose a pre-designed template. You can find them by clicking on “Browse Classic Templates.”

Step 3: Customize

Once you’ve selected a template, you’ll see pre-written questions.

You can change or delete them as needed. Personalize the welcome and thank you screens with your brand logo and style. You can also use the wing feature to edit the questions how you want them.

Step 4: Integrate

Connect your questionnaire with your favorite apps like HubSpot or Mailchimp for better management. SurveySparrow supports many popular tools for seamless integration.

Step 5: Share

Voila! Your questionnaire is now ready to be shared.

You can send it through email or WhatsApp or embed it on your website.

And don’t worry; SurveySparrow saves your changes automatically.

In conclusion, crafting risk assessment questionnaires doesn’t have to be complicated. You can create effective surveys tailored to your needs by following the above steps.

Remember to customize your questions, integrate with relevant tools for better management, and share your surveys through various channels. With these strategies, you’ll be well-equipped to collect valuable data and make informed decisions for your organization’s success.

If you have any queries regarding SurveySparrow, feel free to reach out!

Happy Exploring!



  • Open access
  • Published: 07 September 2024

A case study of the informative value of risk of bias and reporting quality assessments for systematic reviews

  • Cathalijn H. C. Leenaars (ORCID: 0000-0002-8212-7632) 1,
  • Frans R. Stafleu 2,
  • Christine Häger 1 &
  • André Bleich 1

Systematic Reviews, volume 13, Article number: 230 (2024)


While undisputedly important, and part of any systematic review (SR) by definition, evaluation of the risk of bias within the included studies is one of the most time-consuming parts of performing an SR. In this paper, we describe a case study comprising an extensive analysis of risk of bias (RoB) and reporting quality (RQ) assessment from a previously published review (CRD42021236047). It included both animal and human studies, and the included studies compared baseline diseased subjects with controls, assessed the effects of investigational treatments, or both. We compared RoB and RQ between the different types of included primary studies. We also assessed the “informative value” of each of the separate elements for meta-researchers, based on the notion that variation in reporting may be more interesting for the meta-researcher than consistently high/low or reported/non-reported scores. In general, reporting of experimental details was low. This resulted in frequent unclear risk-of-bias scores. We observed this both for animal and for human studies and both for disease-control comparisons and investigations of experimental treatments. Plots and explorative chi-square tests showed that reporting was slightly better for human studies of investigational treatments than for the other study types. With the evidence reported as is, risk-of-bias assessments for systematic reviews have low informative value other than repeatedly showing that reporting of experimental details needs to improve in all kinds of in vivo research. Particularly for reviews that do not directly inform treatment decisions, it could be efficient to perform a thorough but partial assessment of the quality of the included studies, either of a random subset of the included publications or of a subset of relatively informative elements, comprising, e.g. ethics evaluation, conflicts of interest statements, study limitations, baseline characteristics, and the unit of analysis. 
This publication suggests several potential procedures.


Introduction

Researchers performing systematic reviews (SRs) face bias at two potential levels: first, at the level of the SR methods themselves, and second, at the level of the included primary studies [ 1 ]. To safeguard correct interpretation of the review’s results, transparency is required at both levels. For bias at the level of the SR methods, this is ensured by transparent reporting of the full SR methods, at least to the level of detail as required by the PRISMA statement [ 2 ]. For bias at the level of the included studies, study reporting quality (RQ) and/or risk of bias (RoB) are evaluated at the level of the individual included study. Specific tools are available to evaluate RoB in different study types [ 3 ]. Also, for reporting of primary studies, multiple guidelines and checklists are available to prevent missing important experimental details and more become available for different types of studies over time [ 4 , 5 ]. Journal endorsement of these types of guidelines has been shown to improve study reporting quality [ 6 ].

While undisputedly important, evaluation of the RoB and/or RQ of the included studies is one of the most time-consuming parts of an SR. Experienced reviewers need 10 min to an hour to complete an individual RoB assessment [ 7 ], and every included study needs to be evaluated by two reviewers. Besides spending substantial amounts of time on RoB or RQ assessments, reviewers tend to become frustrated because of the scores frequently being unclear or not reported (personal experience from the authors, colleagues and students). While automation of RoB seems to be possible without loss of accuracy [ 8 , 9 ], so far, this automation has not had significant impact on the speed; in a noninferiority randomised controlled trial of the effect of automation on person-time spent on RoB assessment, the confidence interval for the time saved ranged from − 5.20 to + 2.41 min [ 8 ].

In any scientific endeavour, there is a balance between reliability and speed; to guarantee reliability of a study, time investments are necessary. RoB or RQ assessment is generally considered to be an essential part of the systematic review process to warrant correct interpretation of the findings, but with so many studies scoring “unclear” or “not reported”, we wondered if all this time spent on RoB assessments is resulting in increased reliability of reviews.

Overall unclear risk of bias in the included primary studies is a conclusion of multiple reviews, and these assessments are useful in pinpointing problems in reporting, thereby potentially improving the quality of future publications of primary studies. However, the direct goal of most SRs is to answer a specific review question, and in that respect, unclear RoB/not reported RQ scores contribute little to the validity of the review’s results. If all included studies score “unclear” or “high” RoB on at least one of the analysed elements, the overall effect should be interpreted as inconclusive.

While it is challenging to properly evaluate the added validity value of a methodological step, we had data available allowing for an explorative case study to assess the informative value of various RoB and RQ elements in different types of studies. We previously performed an SR of the nasal potential difference (nPD) for cystic fibrosis (CF) in animals and humans, aiming to quantify the predictive value of animal models for people with CF [ 10 , 11 ]. That review comprised between-subject comparisons of both baseline versus disease-control and treatment versus treatment control. For that review, we performed full RoB and RQ analyses. This resulted in data allowing for comparisons of RoB and RQ between animal and human studies, but also between baseline and treatment studies, which are both presented in this manuscript. RoB evaluations were based on the Cochrane collaboration’s tool [ 12 ] for human studies and SYRCLE’s tool [ 13 ] for animal studies. RQ was tested based on the ARRIVE guidelines [ 14 ] for animal studies and the 2010 CONSORT guidelines [ 15 ] for human studies. Brief descriptions of these tools are provided in Table  1 .

All these tools are focussed on interventional studies. Lacking more specific tools for baseline disease-control comparisons, we applied them as far as relevant for the baseline comparisons. We performed additional analyses on our RQ and RoB assessments to assess the amount of distinctive information gained from them.

The analyses described in this manuscript are based on a case study SR of the nPD related to cystic fibrosis (CF). That review was preregistered on PROSPERO (CRD42021236047) on 5 March 2021 [ 16 ]. Part of the results were published previously [ 10 ]. The main review questions are answered in a manuscript that has more recently been published [ 11 ]. Both publications show a simple RoB plot corresponding to the publication-specific results.

For the ease of the reader, we provide a brief summary of the overall review methods. The full methods have been described in our posted protocol [ 16 ] and the earlier publications [ 10 , 11 ]. Comprehensive searches were performed in PubMed and Embase, unrestricted for publication date or language, on 23 March 2021. Title-abstract screening and full-text screening were performed by two independent reviewers blinded to the other’s decision (FS and CL) using Rayyan [ 17 ]. We included animal and/or human studies describing nPD in CF patients and/or CF animal models. We restricted to between-subject comparisons, either CF versus healthy controls or experimental CF treatments versus CF controls. Reference lists of relevant reviews and included studies were screened (single level) for snowballing. Discrepancies were all resolved by discussions between the reviewers.

Data were extracted by two independent reviewers per reference in several distinct phases. Relevant to this manuscript, FS and CL extracted RoB and RQ data in Covidence [ 18 ], in two separate projects using the same list of 48 questions for studies assessing treatment effects and studies assessing CF-control differences. The k  = 11 studies that were included in both parts of the overarching SR were included twice in the current data set, as RoB was separately scored for each comparison. Discrepancies were all resolved by discussions between the reviewers. In violation of the protocol, no third reviewer was involved.

RoB and SQ data extraction followed our review protocol, which states the following: “For human studies, risk of bias will be assessed with the Cochrane Collaboration’s tool for assessing risk of bias. For animal studies, risk of bias will be assessed with SYRCLE’s RoB tool. Besides, we will check compliance with the ARRIVE and CONSORT guidelines for reporting quality”. The four tools contain overlapping questions. To prevent unnecessary repetition of our own work, we created a single list of 48 items, which were ordered by topic for ease of extraction. For RoB, this list contains the same elements as the original tools, with the same response options (high/unclear/low RoB). For RQ, we created checklists with all elements as listed in the original tools, with the response options reported yes/no. For (RQ and RoB) elements specific to some of the included studies, the response option “irrelevant” was added. We combined these lists, only changing the order and merging duplicate elements. We do not intend this list to replace the individual tools; it was created for this specific study only.

In our list, each question was preceded by a short code indicating the tool it was derived from (A for ARRIVE, C for CONSORT, and S for SYRCLE's) to aid later analyses. When setting up, we started with the animal-specific tools, with which the authors are more familiar. After preparing data extraction for those, we observed that all elements from the Cochrane tool had already been addressed; therefore, the Cochrane tool is not explicitly represented in our short codes. The extraction form always allowed free text to support the response. Our extraction list is provided with our supplementary data.

For RoB, the tools provide relatively clear suggestions for which level to score and when, with signalling questions and examples [12, 13]. However, this still leaves some room for interpretation, and while the signalling questions are instructive, there are situations where the response would, in our opinion, not correspond to the actual bias. The RQ tools were developed as guidelines on what to report when writing a manuscript, not as tools to assess RQ [14, 15]. This means we had to operationalise upfront which level of detail we would consider sufficient to score "reported". Our operationalisations and corrections of the tools are detailed in Table 2.

Data were exported from Covidence into Microsoft Excel, where the two projects were merged and spelling and capitalisation were harmonised. Subsequent analyses were performed in R [21] version 4.3.1 ("Beagle Scouts") via RStudio [22], using the following packages: readxl [23], dplyr [24], tidyr [25], ggplot2 [26], and crosstable [27].
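The merge-and-harmonisation step can be sketched as follows. This is our illustration, not the authors' code (the actual step was done in Excel and R); the example records and the variant map are hypothetical.

```python
# Sketch of the harmonisation step described above (ours, not the authors' code):
# merge the records of two extraction projects and normalise spelling and
# capitalisation so that e.g. "Unclear" and "unclar" count as one category.

def harmonise(records):
    """Lower-case and strip each score; map known spelling variants."""
    variant_map = {"unclar": "unclear"}  # hypothetical example variant
    cleaned = []
    for study, element, score in records:
        s = score.strip().lower()
        cleaned.append((study, element, variant_map.get(s, s)))
    return cleaned

# Hypothetical exports from the two Covidence projects:
project_a = [("study1", "S1_random", "Unclear"), ("study2", "S1_random", "LOW ")]
project_b = [("study3", "S1_random", "unclar")]
merged = harmonise(project_a + project_b)
```

With the scores normalised this way, the two projects can be analysed as a single data set.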

Separate analyses were performed for RQ (with two levels per element) and RoB (with three levels per element). For both RoB and RQ, we first counted the numbers of irrelevant scores overall and per item. Next, irrelevant scores were deleted from further analyses. We then ranked the items by percentages for reported/not reported, or for high/unclear/low scores, and reported the top and bottom 3 (RoB) or 5 (RQ) elements.
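The counting and ranking steps above can be sketched as follows (our Python illustration; the actual analyses were run in R with dplyr/tidyr, and the element names and scores here are made up):

```python
# Sketch of the ranking described above (ours; the paper used R):
# per element, drop "irrelevant" scores, then rank by percentage reported.

def percent_reported(values):
    kept = [v for v in values if v != "irrelevant"]  # delete irrelevant scores
    return 100 * kept.count("reported") / len(kept)

# Hypothetical per-element score lists:
scores = {
    "background": ["reported"] * 10,
    "protocol violations": ["not reported"] * 10,
    "ethics evaluation": ["reported"] * 6 + ["not reported"] * 3 + ["irrelevant"],
}

n_irrelevant = sum(v.count("irrelevant") for v in scores.values())
ranking = sorted(scores, key=lambda e: percent_reported(scores[e]), reverse=True)
```

The top and bottom entries of such a ranking correspond to the most- and least-reported elements reported in the Results.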

While 100% reported is most informative for understanding what actually happened in the included studies, if all authors consistently report a specific element, scoring that element in an SR is not very informative for meta-researchers. If an element is not reported at all, this is bad news for the overall level of confidence in an SR, but evaluating it per included study is also inefficient, except for highlighting problems in reporting, which may help to improve the quality of future (publications of) primary studies. For meta-researchers, elements with variation in reporting may be considered most interesting, because these elements highlight differences between the included studies. Subgroup analyses based on specific RQ/RoB scores can help to estimate the effects of specific types of bias on the overall effect size observed in meta-analyses, as has been done, for example, for randomisation and blinding [28]. However, these types of subgroup analyses are only possible if there is some variation in the reporting. Based on this idea, we defined a "distinctive informative value" (DIV) for RQ elements, taking the optimal variation to be 50% reported and either 0% or 100% reporting to be minimally informative. Thus, this "DIV" was calculated as follows:

DIV = 50 − |percentage reported − 50|

Thus, the DIV could range from 0 (no informative value) to 50 (maximally informative), visualised in Fig. 1.
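The DIV computation and ranking can be sketched as follows (our Python illustration of the definition above; the paper's analyses were done in R, and the percentages below are taken from the Results purely for illustration):

```python
def div(percent_reported):
    """Distinctive informative value: maximal (50) at 50% reported,
    minimal (0) at 0% or 100% reported: DIV = 50 - |percent - 50|."""
    return 50 - abs(percent_reported - 50)

# Ranking RQ elements by DIV (percentages from the Results section):
elements = {
    "ethics evaluation": 64.6,
    "background of the research question": 100.0,
    "protocol violations": 0.0,
}
ranked = sorted(elements, key=lambda e: div(elements[e]), reverse=True)
```

Under this definition, always-reported and never-reported elements both score 0, and elements reported in roughly half the studies score highest.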

Fig. 1 Visual explanation of the DIV value

The DIV value was only used for ranking. The results were visualised in a heatmap, in which the intermediate shades correspond to high DIV values.

For RoB, no comparable measure was calculated. With only 10 elements but at 3 distinct levels, we thought a comparable measure would sooner hinder interpretation of informative value than help it. Instead, we show the results in an RoB plot split by population and study design type.

Because we are interested in quantifying the predictive value of animal models for human patients, we commonly perform SRs including both animal and human data (e.g. [29, 30]). The dataset described in the current manuscript contained baseline and intervention studies in both animals and humans. Because animal studies are often held responsible for the reproducibility crisis, and also to increase the external validity of this work, explorative chi-square tests (the standard statistical test for comparing percentages of categorical variables) were performed to compare RQ and RoB between animal and human studies, and between studies comparing baselines and treatment effects. They were performed with the base R "chisq.test" function. No power calculations were performed, as these analyses were not planned.
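As a sketch of such a test: the analyses used R's chisq.test, which for 2×2 tables also applies Yates' continuity correction by default; the pure-Python version below computes the uncorrected statistic for a 2×2 table of made-up counts, with the df = 1 p-value obtained via the χ²(1) survival function erfc(√(x/2)).

```python
import math

# Uncorrected chi-square test for a 2x2 table [[a, b], [c, d]] (our sketch;
# the paper used R's chisq.test, which adds Yates' correction for 2x2 tables).
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))  # chi-square survival function, df = 1
    return stat, p

# Hypothetical counts: reported vs not reported, animal vs human studies.
stat, p = chi_square_2x2(30, 70, 60, 40)
```

For the three-level RoB comparisons (high/unclear/low), the same test runs on a 2×3 table with df = 2, which is why those results are reported with two degrees of freedom.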

Literature sample

We extracted RoB and RQ data from 164 studies described in 151 manuscripts, published from 1981 through 2020. The 164 studies comprised 78 animal studies and 86 human studies, and 130 comparisons of CF versus non-CF controls plus 34 studies assessing experimental treatments. These numbers are detailed in a cross-table (Table 3).

The 48 elements in our template were completed for these 164 studies, which results in 7872 assessed elements. In total, 954 elements (12.1%) were irrelevant for various reasons (mainly for noninterventional studies and for human studies). The 7872 individual scores per study are available from the data file on OSF.

Of the 48 questions in our extraction template, 38 addressed RQ, and 10 addressed RoB.

Overall reporting quality

Of the 6232 elements related to RQ, 611 (9.8%) were deemed irrelevant. Of the remainder, 1493 (26.6% of 5621) were reported. The most-reported elements were the background of the research question (100% reported), objectives (98.8%), interpretation of the results (98.2%), generalisability (86.0%), and the experimental groups (83.5%). The least-reported elements were protocol violations, interim analyses and stopping rules, and when the experiments were performed (all 0% reported), where the experiments were performed (0.6%), and all assessed outcome measures (1.2%).

The elements with the most distinctive variation in reporting (highest DIV; refer to the "Methods" section for details) were as follows: ethics evaluation (64.6% reported), conflicts of interest (34.8%), study limitations (29.3%), baseline characteristics (26.2%), and the unit of analysis (26.2%). RQ elements with DIV values over 10 are shown in Table 4.

Overall risk of bias

Of the 1640 elements related to RoB, 343 (20.9%) were deemed irrelevant. Of the remainder, 219 (16.9%) scored high RoB and 68 (5.2%) scored low RoB. The overall RoB scores were highest for selective outcome reporting (97.6% high), baseline group differences (19.5% high), and other biases (9.8% high); lowest for blinding of participants, caregivers, and investigators (13.4% low), blinding of outcome assessors (11.6% low), and baseline group differences (8.5% low); and most unclear for bias due to animal housing (100% unclear), detection bias due to the order of outcome measurements (99.4% unclear), and selection bias in sequence generation (97.1% unclear). That baseline group differences appear among both the highest and the lowest scores is explained by baseline values being reported better than the other measures, resulting in fewer unclear scores.

Variation in reporting is relatively high for most of the elements scoring high or low. Overall distinctive value of the RoB elements is low, with most scores being unclear (or, for selective outcome reporting, most scores being high).

Animal versus human studies

For RQ, the explorative chi-square tests indicated differences in reporting between animal and human studies for baseline values (χ²(1) = 50.3, p < 0.001), ethical review (χ²(1) = 5.1, p = 0.02), type of study (χ²(1) = 11.2, p < 0.001), experimental groups (χ²(1) = 3.9, p = 0.050), inclusion criteria (χ²(1) = 24.6, p < 0.001), the exact n per group and in total (χ²(1) = 26.0, p < 0.001), (absence of) excluded data points (χ²(1) = 4.5, p = 0.03), adverse events (χ²(1) = 5.5, p = 0.02), and study limitations (χ²(1) = 8.2, p = 0.004). These explorative findings are visualised in a heatmap (Fig. 2).

Fig. 2 Heatmap of reporting by type of study. Refer to Table 3 for absolute numbers of studies per category

For RoB, the explorative chi-square tests indicated differences in risk of bias between animal and human studies for baseline differences between the groups (χ²(2) = 34.6, p < 0.001) and incomplete outcome data (χ²(2) = 7.6, p = 0.02). These explorative findings are visualised in Fig. 3.

Fig. 3 Risk of bias by type of study. Refer to Table 3 for absolute numbers of studies per category. Note that the data shown in these plots overlap with those in the two preceding publications [10, 11]

Studies assessing treatment effects versus studies assessing baseline differences

For RQ, the explorative chi-square tests indicated differences in reporting between comparisons of disease with control versus comparisons of treatment effects for the title listing the type of study (χ²(1) = 5.0, p = 0.03), the full paper explicitly mentioning the type of study (χ²(1) = 14.0, p < 0.001), explicit reporting of the primary outcome (χ²(1) = 11.7, p < 0.001), and reporting of adverse events (χ²(1) = 25.4, p < 0.001). These explorative findings are visualised in Fig. 2.

For RoB, the explorative chi-square tests indicated differences in risk of bias between comparisons of disease with control versus comparisons of treatment effects for baseline differences between the groups (χ²(2) = 11.4, p = 0.003), blinding of investigators and caretakers (χ²(2) = 29.1, p < 0.001), blinding of outcome assessors (χ²(2) = 6.2, p = 0.046), and selective outcome reporting (χ²(2) = 8.9, p = 0.01). These explorative findings are visualised in Fig. 3.

Overall, our results suggest lower RoB and higher RQ for human treatment studies compared to the other study types.

This literature study shows that reporting of experimental details is poor, frequently resulting in unclear risk-of-bias assessments. We observed this both for animal and for human studies, for two main study designs: disease-control comparisons and, in a smaller sample, investigations of experimental treatments. Overall reporting is somewhat better for elements that contribute to the "story" of a publication, such as the background of the research question, interpretation of the results, and generalisability, and worst for experimental details that relate to differences between what was planned and what was actually done, such as protocol violations, interim analyses, and assessed outcome measures. The latter also results in overall high RoB scores for selective outcome reporting.

Of note, we scored this more stringently than SYRCLE's RoB tool [13] suggests and always scored a high RoB if no protocol was posted, because only comparing the "Methods" and "Results" sections within a publication would, in our opinion, result in an overly optimistic view. Within this sample, only human treatment studies reported posting protocols upfront [31, 32]. In contrast to selective outcome reporting, we would have scored selection, performance, and detection bias due to sequence generation more liberally for counterbalanced designs (Table 2), because randomisation is not the only appropriate method for preventing these types of bias. Particularly when blinding is not possible, counterbalancing [33, 34] and Latin-square-like designs [35] can decrease these biases, while randomisation would risk imbalance between groups due to "randomisation failure" [36, 37]. We would have scored a high risk of bias for blinding in these types of designs, because of the increased sequence predictability. In practice, however, we did not include any studies reporting Latin-square-like or other counterbalanced designs.

One of the "non-story" elements that is reported relatively well, particularly for human treatment studies, is the blinding of participants, investigators, and caretakers. This might relate to scientists being more aware of potential bias in participants: they may consider themselves more objective than the general population, so the risk of influencing patients is considered more relevant.

The main strength of this work is that it is a full formal analysis of RoB and RQ across different study types: animal and human, baseline comparisons, and treatment studies. The main limitation is that it is a single case study from a specific topic: the nPD test in CF. The results shown in this paper are not necessarily valid for other fields, particularly as we hypothesise that differences in scientific practice between medical fields relate to differences in translational success [38]. Thus, it is worth investigating field-specific informative values before selecting which elements to score and analyse in detail.

Our comparisons of different study and population types show lower RoB and higher RQ for human treatment studies compared to the other study types for certain elements. Concerning RQ, the effects were most pronounced for the type of experimental design being explicitly mentioned and for the reporting of adverse events. Concerning RoB, the effects were most pronounced for baseline differences between the groups, blinding of investigators and caretakers, and selective outcome reporting. Note, however, that the number of included treatment studies is much lower than the number of included baseline studies, and that the comparisons were based on only k = 12 human treatment studies; refer to Table 3 for absolute numbers of studies per category. In addition, our comparisons may be confounded to some extent by publication date. The nPD was originally developed for human diagnostics [39, 40]; animal studies only started to be reported later [41], and the use of the nPD as an outcome in (pre)clinical trials of investigational treatments also originated later [42, 43].

Because we did not collect our data to assess time effects, we did not formally analyse them. However, we informally inspected the publication dates by RoB score for blinding of the investigators and caretakers, and by RQ score for ethics evaluation (in box plots with dot overlay); these showed more reported and fewer unclear scores in the more recent publications (data not shown). While we thus cannot rule out confounding of our results by publication date, the results suggest mildly improved reporting of experimental details over time.

This study is a formal comparison of RoB and RQ scoring for two main study types (baseline comparisons and investigational treatment studies), in both animals and humans. Performing these comparisons within the context of a single SR [16] resulted in a small but relatively homogeneous sample of primary studies on the nPD in relation to CF. At conferences and from colleagues in the animal SR field, we had heard that reporting would be worse for animal than for human studies. Our comparisons allowed us to show that, particularly for baseline comparisons of the nPD in CF versus control, this is not the case.

The analysed tools [12, 13, 15] were developed for experimental interventional studies. While some of the elements are less appropriate for other types of studies, such as animal model comparisons, our results show that many of the elements can be used and could still be useful, particularly if the reporting quality of the included studies were better.

Implications

To correctly interpret the findings of a meta-analysis, awareness of the RoB in the included studies is more relevant than the RQ on its own. However, it is impossible to evaluate the RoB if the experimental details have not been reported, resulting in many unclear scores. With at least one unclear or high RoB score per included study, the overall conclusions of the review become inconclusive. For SRs of overall treatment effects that are performed to inform evidence-based treatment guidelines, RoB analyses remain crucial, even though the scores will often be unclear. Ideally, especially for SRs that will be used to plan future experiments or develop treatment guidelines, analyses should only include studies consistently showing low risk of bias (i.e. low risk on all elements). In practice, however, consistently low-RoB studies in our included literature samples (> 20 SRs to date) are too scarce for meaningful analyses. For other types of reviews, we think it is time to consider whether complete RoB assessment is the most efficient use of limited resources. While these assessments regularly reveal problems in reporting, which may help to improve the quality of future primary studies, the unclear scores do not contribute much to understanding the effects observed in meta-analyses.

With PubMed already indexing nearly 300,000 records mentioning the term "systematic review" in the title, abstract, or keywords, we can assume that many scientists are spending substantial amounts of time and resources on RoB and RQ assessments. Particularly for larger reviews, it could be worthwhile to restrict RoB assessment to either a random subset of the included publications or a subset of relatively informative elements. Even a combination of these two strategies may be sufficiently informative if the results of the review are not directly used to guide treatment decisions. The subset could give a reasonable indication of the overall level of evidence of the SR while saving resources. Suggested procedures are provided in Table 5. The authors of this work would probably have switched to such a strategy during the early data extraction phase, had the funder not stipulated full RoB assessment in the funding conditions.
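The "random subset of the included publications" strategy could be implemented very simply; the sketch below is our illustration (study names, fraction, and seed are arbitrary), not a procedure taken from Table 5.

```python
import random

# Sketch: assess RoB on a reproducible random subset of included studies
# instead of the full set (our illustration, not a prescribed procedure).
def rob_subset(included_studies, fraction=0.25, seed=42):
    k = max(1, round(len(included_studies) * fraction))
    rng = random.Random(seed)  # fixed seed keeps the subset auditable
    return sorted(rng.sample(included_studies, k))

studies = [f"study_{i:03d}" for i in range(1, 165)]  # e.g. 164 included studies
subset = rob_subset(studies)
```

Fixing the seed makes the sampling step reportable and reproducible, which matters if the subset assessment is to be audited alongside the review.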

We previously created a brief and simple taxonomy of systematised review types [44], in which we advocate that RoB assessment be a mandatory part of any SR. We would still urge anyone calling their review "systematic" to stick to this definition and perform some kind of RoB and/or RQ assessment. However, two independent scientists following a lengthy and complex tool for all included publications, resulting in 74.6% of the assessed elements not being reported, or 77.9% unclear RoB, can, in our opinion, in most cases be considered inefficient and unnecessary.

Our results show that there is plenty of room for improvement in the reporting of experimental details in medical scientific literature, both for animal and for human studies. With the current status of the primary literature as it is, full RoB assessment may not be the most efficient use of limited resources, particularly for SRs that are not directly used as the basis for treatment guidelines or future experiments.

Availability of data and materials

The data described in this study are available from the Open Science Framework (https://osf.io/fmhcq/) in the form of a spreadsheet file. In the data file, the first tab lists the questions that were used for data extraction with their respective short codes. The second tab shows the full individual study-level scores, with one row per study and one column per short code.

Abbreviations

  • CF: Cystic fibrosis
  • nPD: Nasal potential difference
  • RoB: Risk of bias
  • RQ: Reporting quality
  • SR: Systematic review
  • High risk of bias / Low risk of bias / Unclear risk of bias (RoB response options)
  • Yes, reported / No, not reported (RQ response options)

Drucker AM, Fleming P, Chan AW. Research techniques made simple: assessing risk of bias in systematic reviews. J Invest Dermatol. 2016;136(11):e109–14.


Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.


Page MJ, McKenzie JE, Higgins JPT. Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review. BMJ Open. 2018;8(3):e019703.

Wang X, Chen Y, Yang N, Deng W, Wang Q, Li N, et al. Methodology and reporting quality of reporting guidelines: systematic review. BMC Med Res Methodol. 2015;15:74.

Zeng X, Zhang Y, Kwong JS, Zhang C, Li S, Sun F, et al. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid Based Med. 2015;8(1):2–10.


Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012;1:60.

Savovic J, Weeks L, Sterne JA, Turner L, Altman DG, Moher D, et al. Evaluation of the Cochrane collaboration’s tool for assessing the risk of bias in randomized trials: focus groups, online survey, proposed recommendations and their implementation. Syst Rev. 2014;3:37.

Arno A, Thomas J, Wallace B, Marshall IJ, McKenzie JE, Elliott JH. Accuracy and efficiency of machine learning-assisted risk-of-bias assessments in “real-world” systematic reviews : a noninferiority randomized controlled trial. Ann Intern Med. 2022;175(7):1001–9.

Jardim PSJ, Rose CJ, Ames HM, Echavez JFM, Van de Velde S, Muller AE. Automating risk of bias assessment in systematic reviews: a real-time mixed methods comparison of human researchers to a machine learning system. BMC Med Res Methodol. 2022;22(1):167.

Leenaars C, Hager C, Stafleu F, Nieraad H, Bleich A. A systematic review of the effect of cystic fibrosis treatments on the nasal potential difference test in animals and humans. Diagnostics (Basel). 2023;13(19):3098.


Leenaars CHC, Stafleu FR, Hager C, Nieraad H, Bleich A. A systematic review of animal and human data comparing the nasal potential difference test between cystic fibrosis and control. Sci Rep. 2024;14(1):9664.

Higgins JPT, Savović J, Page MJ, Elbers RG, Sterne JAC. Chapter 8: Assessing risk of bias in a randomized trial. Cochrane Handbook for Systematic Reviews of Interventions. 2022.


Hooijmans CR, Rovers MM, de Vries RB, Leenaars M, Ritskes-Hoitinga M, Langendam MW. SYRCLE’s risk of bias tool for animal studies. BMC Med Res Methodol. 2014;14:43.

Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8(6):e1000412.

Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA. 1996;276(8):637–9.


Leenaars C, Stafleu F, Bleich A. The nasal potential difference test for diagnosing cystic fibrosis and assessing disease severity: a systematic review. 2021.

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.

Covidence systematic review software. Melbourne, Australia: Veritas Health Innovation. Available from: www.covidence.org.

Percie du Sert N, Hurst V, Ahluwalia A, Alam S, Avey MT, Baker M, et al. The ARRIVE guidelines 2.0: updated guidelines for reporting animal research. J Cereb Blood Flow Metab. 2020;40(9):1769–77.

Knowles MR, Gatzy JT, Boucher RC. Aldosterone metabolism and transepithelial potential difference in normal and cystic fibrosis subjects. Pediatr Res. 1985;19(7):676–9.

R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2021.

RStudio Team. RStudio: integrated development for R. Boston, MA: RStudio, Inc.; 2019. Available from: http://www.rstudio.com/.

Wickham H, Bryan J. readxl: read Excel files. R package version 1.3.1. 2019.

Wickham H, François R, Henry L, Müller K. dplyr: a grammar of data manipulation. R package version 1.0.3. 2021.

Wickham H, Girlich M. tidyr: tidy messy data. R package version 1.2.0. 2022.

Wickham H. ggplot2: elegant graphics for data analysis. New York: Springer-Verlag; 2016.


Chaltiel D. Crosstable: crosstables for descriptive analyses. R package version 0.5.0. 2022.

Macleod MR, van der Worp HB, Sena ES, Howells DW, Dirnagl U, Donnan GA. Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality. Stroke. 2008;39(10):2824–9.

Leenaars C, Stafleu F, de Jong D, van Berlo M, Geurts T, Coenen-de Roo T, et al. A systematic review comparing experimental design of animal and human methotrexate efficacy studies for rheumatoid arthritis: lessons for the translational value of animal studies. Animals (Basel). 2020;10(6):1047.

Leenaars CHC, Kouwenaar C, Stafleu FR, Bleich A, Ritskes-Hoitinga M, De Vries RBM, et al. Animal to human translation: a systematic scoping review of reported concordance rates. J Transl Med. 2019;17(1):223.

Kerem E, Konstan MW, De Boeck K, Accurso FJ, Sermet-Gaudelus I, Wilschanski M, et al. Ataluren for the treatment of nonsense-mutation cystic fibrosis: a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Respir Med. 2014;2(7):539–47.

Rowe SM, Liu B, Hill A, Hathorne H, Cohen M, Beamer JR, et al. Optimizing nasal potential difference analysis for CFTR modulator development: assessment of ivacaftor in CF subjects with the G551D-CFTR mutation. PLoS ONE. 2013;8(7): e66955.

Reese HW. Counterbalancing and other uses of repeated-measures Latin-square designs: analyses and interpretations. J Exp Child Psychol. 1997;64(1):137–58.


Zeelenberg R, Pecher D. A method for simultaneously counterbalancing condition order and assignment of stimulus materials to conditions. Behav Res Methods. 2015;47(1):127–33.

Richardson JTE. The use of Latin-square designs in educational and psychological research. Educ Res Rev. 2018;24:84–97.

King G, Nielsen R, Coberley C, Pope JE, Wells A. Avoiding randomization failure in program evaluation, with application to the Medicare Health Support program. Popul Health Manag. 2011;14(Suppl 1):S11-22.

Meier B, Nietlispach F. Fallacies of evidence-based medicine in cardiovascular medicine. Am J Cardiol. 2019;123(4):690–4.

Van de Wall G, Van Hattem A, Timmermans J, Ritskes-Hoitinga M, Bleich A, Leenaars C. Comparing translational success rates across medical research fields - a combined analysis of literature and clinical trial data. Altex. 2023;40(4):584–94.


Knowles MR, Gatzy JT, Boucher RC. Increased bioelectric potential differences across respiratory epithelia in cystic fibrosis. N Engl J Med. 1981;305:1489–95.


Unal-Maelger OH, Urbanek R. Status of determining the transepithelial potential difference (PD) of the respiratory epithelium in the diagnosis of mucoviscidosis. Monatsschr Kinderheilkd. 1988;136(2):76–80.


Dorin JR, Dickinson P, Alton EW, Smith SN, Geddes DM, Stevenson BJ, et al. Cystic fibrosis in the mouse by targeted insertional mutagenesis. Nature. 1992;359(6392):211–5.

Alton EW, Middleton PG, Caplen NJ, Smith SN, Steel DM, Munkonge FM, et al. Non-invasive liposome-mediated gene delivery can correct the ion transport defect in cystic fibrosis mutant mice. Nat Genet. 1993;5(2):135–42.

Caplen NJ, Alton EW, Middleton PG, Dorin JR, Stevenson BJ, Gao X, et al. Liposome-mediated CFTR gene transfer to the nasal epithelium of patients with cystic fibrosis. Nat Med. 1995;1(1):39–46.

Leenaars C, Tsaioun K, Stafleu F, Rooney K, Meijboom F, Ritskes-Hoitinga M, et al. Reviewing the animal literature: how to describe and choose between different types of literature reviews. Lab Anim. 2021;55(2):129–41.


Acknowledgements

The authors kindly acknowledge Dr. Hendrik Nieraad for his help in study classification.

Open Access funding enabled and organized by Projekt DEAL. This research was funded by the BMBF, grant number 01KC1904. During grant review, the BMBF asked for changes in the review design which we incorporated. Publication of the review results was a condition of the call. Otherwise, the BMBF had no role in the collection, analysis and interpretation of data, or in writing the manuscript.

Author information

Authors and affiliations.

Institute for Laboratory Animal Science, Hannover Medical School, Carl Neubergstrasse 1, 30625, Hannover, Germany

Cathalijn H. C. Leenaars, Christine Häger & André Bleich

Department of Animals in Science and Society, Utrecht University, Yalelaan 2, Utrecht, 3584 CM, the Netherlands

Frans R. Stafleu


Contributions

CL and AB acquired the grant for this work and designed the study. CL performed the searches. FS and CL extracted the data. CL performed the analyses. CH performed quality control of the data and analyses. CL drafted the manuscript. All authors revised the manuscript and approved the final version.

Corresponding author

Correspondence to Cathalijn H. C. Leenaars .

Ethics declarations

Declarations.

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Leenaars, C.H.C., Stafleu, F.R., Häger, C. et al. A case study of the informative value of risk of bias and reporting quality assessments for systematic reviews. Syst Rev 13 , 230 (2024). https://doi.org/10.1186/s13643-024-02650-w


Received : 10 April 2024

Accepted : 28 August 2024

Published : 07 September 2024

DOI : https://doi.org/10.1186/s13643-024-02650-w


  • Systematic reviews
  • Informative value

Systematic Reviews

ISSN: 2046-4053


risk assessment research paper sample

Managing risks and risk assessment at work

Subscribe for the latest health and safety news and updates.

3. Risk assessment template and examples

You can use a risk assessment template to help you keep a simple record of:

  • who might be harmed and how
  • what you're already doing to control the risks
  • what further action you need to take to control the risks
  • who needs to carry out the action
  • when the action is needed by
  • Risk assessment template (Word Document Format)
  • Risk assessment template (Open Document Format) (.odt)

Example risk assessments

These typical examples show how other businesses have managed risks. You can use them as a guide to think about:

  • some of the hazards in your business
  • the steps you need to take to manage the risks

Do not just copy an example and put your company name on it: that would not satisfy the law and would not protect your employees. You must think about the specific hazards and controls your business needs.

  • Office-based business
  • Local shop/newsagent
  • Food preparation and service
  • Motor vehicle repair shop  
  • Factory maintenance work


  • Volume 58, Issue 17
  • Where is the research on sport-related concussion in Olympic athletes? A descriptive report and assessment of the impact of access to multidisciplinary care on recovery

  • Thomas Romeas (ORCID: 0000-0002-3298-5719) 1, 2, 3
  • Félix Croteau (ORCID: 0000-0003-1748-7241) 3, 4, 5
  • Suzanne Leclerc 3, 4
  • 1 Sport Sciences, Institut national du sport du Québec, Montreal, Quebec, Canada
  • 2 School of Optometry, Université de Montréal, Montreal, Quebec, Canada
  • 3 IOC Research Centre for Injury Prevention and Protection of Athlete Health, Réseau Francophone Olympique de la Recherche en Médecine du Sport, Montreal, Quebec, Canada
  • 4 Sport Medicine, Institut national du sport du Québec, Montreal, Quebec, Canada
  • 5 School of Physical and Occupational Therapy, McGill University, Montreal, Quebec, Canada
  • Correspondence to Dr Thomas Romeas; thomas.romeas{at}umontreal.ca

Objectives This cohort study reported descriptive statistics in athletes engaged in Summer and Winter Olympic sports who sustained a sport-related concussion (SRC) and assessed the impact of access to multidisciplinary care and injury modifiers on recovery.

Methods 133 athletes formed two subgroups treated in a Canadian sport institute medical clinic: early (≤7 days) and late (≥8 days) access. Descriptive sample characteristics were reported and unrestricted return to sport (RTS) was evaluated based on access groups as well as injury modifiers. Correlations were assessed between time to RTS, history of concussions, the number of specialist consults and initial symptoms.

Results 160 SRC (median age 19.1 years; female=86 (54%); male=74 (46%)) were observed with a median (IQR) RTS duration of 34.0 (21.0–63.0) days. Median days to care access was different in the early (1; n SRC =77) and late (20; n SRC =83) groups, resulting in median (IQR) RTS duration of 26.0 (17.0–38.5) and 45.0 (27.5–84.5) days, respectively (p<0.001). Initial symptoms displayed a meaningful correlation with prognosis in this study (p<0.05), and female athletes (52 days (95% CI 42 to 101)) had longer recovery trajectories than male athletes (39 days (95% CI 31 to 65)) in the late access group (p<0.05).

Conclusions Olympic athletes in this cohort experienced an RTS time frame of about a month, partly due to limited access to multidisciplinary care and resources. Earlier access to care shortened the RTS delay. Greater initial symptoms and female sex in the late access group were meaningful modifiers of a longer RTS.

  • Brain Concussion
  • Cohort Studies
  • Retrospective Studies

Data availability statement

Data are available on reasonable request. Due to the confidential nature of the dataset, it will be shared through a controlled access repository and made available on specific and reasonable requests.

https://doi.org/10.1136/bjsports-2024-108211


WHAT IS ALREADY KNOWN ON THIS TOPIC

Most data regarding the impact of sport-related concussion (SRC) guidelines on return to sport (RTS) are derived from collegiate or recreational athletes. In these groups, time to RTS has steadily increased in the literature since 2005, coinciding with the evolution of RTS guidelines. However, current evidence suggests that earlier access to care may accelerate recovery and RTS time frames.

WHAT THIS STUDY ADDS

This study reports epidemiological data on the occurrence of SRC in athletes from several Summer and Winter Olympic sports with either early or late access to multidisciplinary care. We found the median time to RTS for Olympic athletes with an SRC was 34.0 days, which is longer than that reported in other athletic groups such as professional or collegiate athletes. Time to RTS was reduced by prompt access to multidisciplinary care following SRC, and sex influenced recovery in the late access group, with female athletes having a longer RTS timeline. Greater initial symptoms, but not prior concussion history, were also associated with a longer time to RTS.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

Considerable differences exist in access to care for athletes engaged in Olympic sports, which impact their recovery. In this cohort, several concussions occurred during international competitions where athletes are confronted with poor access to organised healthcare. Pathways for prompt access to multidisciplinary care should be considered by healthcare authorities, especially for athletes who travel internationally and may not have the guidance or financial resources to access recommended care.

Introduction

After two decades of consensus statements, sport-related concussion (SRC) remains a major focus of research, with incidence ranging from 0.1 to 21.5 SRC per 1000 athlete exposures, varying according to age, sex, sport and level of competition. 1 2 Evidence-based guidelines have been proposed by experts to improve its identification and management, such as those from the Concussion in Sport Group. 3 Notably, they recommend specific strategies to improve SRC detection and monitoring such as immediate removal, 4 prompt access to healthcare providers, 5 evidence-based interventions 6 and multidisciplinary team approaches. 7 It is believed that these guidelines contribute to improving the early identification and management of athletes with an SRC, thereby potentially mitigating its long-term consequences.

Nevertheless, evidence regarding the impact of SRC guidelines implementation remains remarkably limited, especially within high-performance sport domains. In fact, most reported SRC data focus on adolescent student-athletes, collegiate and sometimes professional athletes in the USA but often neglect Olympians. 1 2 8–11 Athletes engaged in Olympic sports, often referred to as elite amateurs, are typically classified among the highest performers in elite sport, alongside professional athletes. 12 13 They train year-round and uniquely compete regularly on the international stage in sports that often lack professional leagues and rely on highly variable resources and facilities, mostly dependent on winning medals. 14 Unlike professional athletes, Olympians do not have access to large financial rewards. Although some Olympians work or study in addition to their intensive sports practice, they can devote more time to full-time sports practice compared with collegiate athletes. Competition calendars in Olympians differ from collegiate athletes, with periodic international competitions (eg, World Cups, World Championships) throughout the whole year rather than regular domestic competitions within a shorter season (eg, semester). Olympians outclass most collegiate athletes, and only the best collegiate athletes will have the chance to become Olympians and/or professionals. 12 13 15 In Canada, a primary reason for limited SRC data in Olympic sports is that the Canadian Olympic and Paralympic Sports Institute (COPSI) network only adopted official guidelines in 2018 to standardise care for athletes’ SRC nationwide. 16 17 The second reason could be the absence of a centralised medical structure and surveillance systems, identified as key factors contributing to the under-reporting and underdiagnosis of athletes with an SRC. 18

Among the available evidence on the evolution of SRC management, a 2023 systematic review and meta-analysis in athletic populations including children, adolescents and adults indicated that a full return to sport (RTS) could take up to a month but is estimated to require 19.8 days on average (15.4 days in adults), as opposed to the initial expectation of approximately 10.0 days based on studies published prior to 2005. 19 In comparison, studies focusing strictly on American collegiate athletes report median times to RTS of 16 days. 9 20 21 Notably, a recent study of military cadets reported even longer return-to-duty times of 29.4 days on average, attributed to poorer access to care and fewer incentives to return to play compared with elite sports. 22 In addition, several modifiers have also been identified as influencing the time to RTS, such as the history of concussions, type of sport, sex, past medical problems (eg, preinjury modifiers), as well as the initial number of symptoms and their severity (eg, postinjury modifiers). 20 22 The evidence regarding the potential influence of sex on the time to RTS has yielded mixed findings. 23–25 In fact, females are typically under-represented in SRC research, highlighting the need for additional studies that incorporate more balanced sample representation across sexes and control for known sources of bias. 26 Interestingly, a recent Concussion Assessment, Research and Education Consortium study, which included a high representation of concussed female athletes (615 out of 1071 patients), revealed no meaningful differences in RTS between females and males (13.5 and 11.8 days, respectively). 27 Importantly, findings in the sporting population suggested that earlier initiation of clinical care is linked to shorter recovery after concussion. 5 28 However, these factors affecting the time to RTS require a more thorough investigation, especially among athletes engaged in Olympic sports who may or may not have equal access to prompt, high-quality care.

Therefore, the primary objective of this study was to provide descriptive statistics among athletes with SRC engaged in both Summer and Winter Olympic sport programmes over a quadrennial, and to assess the influence of recommended guidelines of the COPSI network and the fifth International Consensus Conference on Concussion in Sport on the duration of RTS performance. 16 17 Building on available evidence, the international schedule constraints, variability in resources 14 and high-performance expectation among this elite population, 22 prolonged durations for RTS, compared with what is typically reported (eg, 16.0 or 15.4 days), were hypothesised in Olympians. 3 19 The secondary objective was to more specifically evaluate the impact of access to multidisciplinary care and injury modifiers on the time to RTS. Based on current evidence, 5 7 29 30 the hypothesis was formulated that athletes with earlier multidisciplinary access would experience a faster RTS. Regarding injury modifiers, it was expected that female and male athletes would show similar time to RTS despite presenting sex-specific characteristics of SRC. 31 The history of concussions, the severity of initial symptoms and the number of specialist consults were expected to be positively correlated to the time to RTS. 20 32

Participants

A total of 133 athletes (F=72; M=61; mean age±SD: 20.7±4.9 years old) who received medical care at the Institut national du sport du Québec, a COPSI training centre set up with a medical clinic, were included in this cohort study with retrospective analysis. They participated in 23 different Summer and Winter Olympic sports which were classified into six categories: team (soccer, water polo), middle distance/power (rowing, swimming), speed/strength (alpine skiing, para alpine skiing, short and long track speed skating), precision/skill-dependent (artistic swimming, diving, equestrian, figure skating, gymnastics, skateboard, synchronised skating, trampoline) and combat/weight-making (boxing, fencing, judo, para judo, karate, para taekwondo, wrestling) sports. 13 This sample consists of two distinct groups: (1) early access group in which athletes had access to a medical integrated support team of multidisciplinary experts within 7 days following their SRC and (2) late access group composed of athletes who had access to a medical integrated support team of multidisciplinary experts eight or more days following their SRC. 5 30 Inclusion criteria for the study were participation in a national or international-level sports programme 13 and having sustained at least one SRC diagnosed by an authorised healthcare practitioner (eg, physician and/or physiotherapist).

Clinical context

The institute clinic provides multidisciplinary services for care of patients with SRC including a broad range of recommended tests for concussion monitoring ( table 1 ). The typical pathway for the athletes consisted of an initial visit to either a sports medicine physician or their team sports therapist. A clinical diagnosis of SRC was then confirmed by a sports medicine physician, and referral for the required multidisciplinary assessments ensued based on the patient’s signs and symptoms. Rehabilitation progression was based on the evaluation of exercise tolerance, 33 priority to return to cognitive tasks and additional targeted support based on clinical findings of a cervical, visual or vestibular nature. 17 The expert team worked in an integrated manner with the athlete and their coaching staff for the rehabilitation phase, including regular round tables and ongoing communication. 34 For some athletes, access to recommended care was fee based, without a priori agreements with a third party payer (eg, National Sports Federation).

  • View inline

Main evaluations performed to guide the return to sport following sport-related concussion

Data collection

Data were collected at the medical clinic using a standardised injury surveillance form based on International Olympic Committee guidelines. 35 All injury characteristics were extracted from the central injury database between 1 July 2018 and 31 July 2022. This period corresponds to a Winter Olympic sports quadrennial but also covers 3 years for Summer Olympic sports due to the postponing of the Tokyo 2020 Olympic Games. Therefore, the observation period includes a typical volume of competitions across sports and minimises differences in exposure based on major sports competition schedules. The information extracted from the database included: participant ID, sex, date of birth, sport, date of injury, type of injury, date of their visit at the clinic, clearance date of unrestricted RTS (eg, defined as step 6 of the RTS strategy with a return to normal gameplay including competitions), the number and type of specialist consults, mechanism of injury (eg, fall, hit), environment where the injury took place (eg, training, competition), history of concussions, history of modifiers (eg, previous head injury, migraines, learning disability, attention deficit disorder or attention deficit/hyperactivity disorder, depression, anxiety, psychotic disorder), as well as the number of symptoms and the total severity score from the first Sport Concussion Assessment Tool 5 (SCAT5) assessment following SRC. 17

Following a Shapiro-Wilk test, medians, IQRs and non-parametric tests were used for the analyses because none of the variables in the dataset were normally distributed (all p<0.001). The skewness was introduced by individuals who required lengthy recovery periods. One participant was removed from the analysis because their time to consult with the multidisciplinary team was extremely delayed (>1 year).

Descriptive statistics were used to describe the participant’s demographics, SRC characteristics and risk factors in the total sample. Estimated incidences of SRC were also reported for seven resident sports at the institute for which it was possible to quantify a detailed estimate of training volume based on the annual number of training and competition hours as well as the number of athletes in each sport.
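Incidence estimates like those described above are events per 1000 athlete-hours of exposure. The CI method is not stated in the text, so this sketch assumes an exact Poisson (Garwood) interval, approximating the chi-square quantile with the Wilson-Hilferty formula; the counts and exposure are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def _chi2_ppf(p: float, df: int) -> float:
    """Wilson-Hilferty approximation to the chi-square quantile function."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * sqrt(2 / (9 * df))) ** 3

def incidence_per_1000h(events: int, exposure_hours: float, alpha: float = 0.05):
    """Rate per 1000 athlete-hours with an exact Poisson (Garwood) CI."""
    rate = 1000 * events / exposure_hours
    # Exact Poisson bounds on the event count, then scaled to the exposure
    lo = _chi2_ppf(alpha / 2, 2 * events) / 2 if events else 0.0
    hi = _chi2_ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return rate, 1000 * lo / exposure_hours, 1000 * hi / exposure_hours

# Hypothetical: 29 SRC over 62,000 athlete-hours in one sport
rate, lo, hi = incidence_per_1000h(29, 62_000)
print(f"{rate:.2f}/1000 h (95% CI {lo:.2f} to {hi:.2f})")
```

Exposure here would be estimated from the annual number of training and competition hours and the number of athletes per sport, as described in the text.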

To assess if access to multidisciplinary care modified the time to RTS, we compared time to RTS between early and late access groups using a method based on median differences described elsewhere. 36 Wilcoxon rank sum tests were also performed to make between-group comparisons on single variables of age, time to first consult, the number of specialists consulted and medical visits. Fisher's exact tests were used to compare count data between groups on variables of sex, history of concussion, time since the previous concussion, presence of injury modifiers, environment and mechanism of injury. Bonferroni corrections were applied for multiple comparisons where meaningful differences were found.
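The count-data comparisons above can be sketched as follows; Fisher's exact test is implemented directly for a 2×2 table, and the counts and the number of corrected comparisons are hypothetical:

```python
from math import comb

def fisher_exact_2x2(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sums hypergeometric probabilities of all tables as or less likely
    than the observed one, holding the margins fixed."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    def prob(k: int) -> float:  # P(top-left cell = k) under fixed margins
        return comb(r1, k) * comb(n - r1, c1 - k) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs * (1 + 1e-9))

# Hypothetical counts: SRC in training vs competition, by access group
p = fisher_exact_2x2(52, 20, 38, 33)
p_bonferroni = min(1.0, p * 3)  # Bonferroni correction across three hypothetical comparisons
print(round(p, 4), round(p_bonferroni, 4))
```

The Bonferroni step simply multiplies each p value by the number of comparisons (capped at 1), which is the most conservative of the common corrections.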

To assess if injury modifiers modified time to RTS in the total sample, we compared time to RTS between sexes, history of concussions, time since previous concussion or other injury modifiers using a method based on median differences described elsewhere. 36 Kaplan-Meier curves were drawn to illustrate time to RTS differences between sexes (origin and start time: date of injury; end time: clearance date of unrestricted RTS). Trajectories were then assessed for statistical differences using a Cox proportional hazards model. Wilcoxon rank sum tests were employed for comparing the total number of symptoms and severity scores on the SCAT5. The association of multilevel variables with return to play duration was evaluated in the total sample with Kruskal-Wallis rank tests for environment, mechanism of injury, history of concussions and time since previous concussion. For all subsequent analyses of correlations between SCAT5 results and secondary variables, only data obtained from SCAT5 assessments within the acute phase of injury (≤72 hours) were considered (n=65 SRC episodes in the early access group). 37 Spearman rank correlations were estimated between RTS duration, history of concussions, number of specialist consults and total number of SCAT5 symptoms or total symptom severity. All statistical tests were performed using RStudio (R V.4.1.0, The R Foundation for Statistical Computing). The significance level was set to p<0.05.
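The cited median-difference method (ref 36) is not reproduced in the text; a percentile bootstrap of the difference in medians is one common realization, sketched here with hypothetical RTS durations:

```python
import random
import statistics

def median_diff_ci(x, y, n_boot=5000, alpha=0.05, seed=1):
    """Difference in medians (y - x) with a percentile-bootstrap CI.
    (One common approach; not necessarily the method cited in the text.)"""
    rng = random.Random(seed)
    diff = statistics.median(y) - statistics.median(x)
    # Resample each group independently and collect bootstrap differences
    boots = sorted(
        statistics.median(rng.choices(y, k=len(y)))
        - statistics.median(rng.choices(x, k=len(x)))
        for _ in range(n_boot)
    )
    lo = boots[int(n_boot * alpha / 2)]
    hi = boots[int(n_boot * (1 - alpha / 2)) - 1]
    return diff, lo, hi

# Hypothetical unrestricted RTS durations (days): early vs late access groups
early = [17, 21, 26, 26, 30, 34, 38, 39, 45, 52]
late = [27, 30, 41, 45, 45, 60, 64, 84, 90, 120]
diff, lo, hi = median_diff_ci(early, late)
print(f"median difference = {diff:.1f} days (95% CI {lo:.1f} to {hi:.1f})")
```

A percentile bootstrap avoids distributional assumptions, which suits the skewed recovery times noted above.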

Equity, diversity and inclusion statement

The study population is representative of the Canadian athletic population in terms of age, gender and demographics, and includes a balanced representation of female and male athletes. The study team consists of investigators from different disciplines and countries, but with a predominantly white composition and under-representation of other ethnic groups. Our study population encompasses data from the Institut national du sport du Québec, covering individuals of all genders, ethnicities and geographical regions across Canada.

Patient and public involvement

The patients or the public were not involved in the design, conduct, reporting or dissemination plans of our research.

Sample characteristics

During the 4-year period covered by this retrospective chart review, a total of 160 SRC episodes were recorded in 132 athletes with a median (IQR) age of 19.1 (17.8–22.2) years old ( table 2 ). 13 female and 10 male athletes had multiple SRC episodes during this time. The sample included a relatively balanced number of females (53.8%) and males (46.2%). 60% of the sample reported a history of concussion, with 35.0% reporting having experienced more than two episodes. However, most of these concussions had occurred more than 1 year before the SRC for which they were being treated. Within this sample, 33.1% of participants reported a history of injury modifiers. Importantly, the median (IQR) time to first clinic consult was 10.0 (1.0–20.0) days and the median (IQR) time to RTS was 34.0 (21.0–63.0) days in this sample ( table 3 ). The majority of SRCs occurred during training (56.3%) rather than competition (33.1%) and were mainly due to a fall (63.7%) or a hit (31.3%). The median (IQR) number of follow-up consultations and specialists consulted after the SRC were, respectively, 9 (5.0–14.3) and 3 (2.0–4.0).

Participants demographics

Sport-related concussion characteristics

Among seven sports of the total sample (n=89 SRC), the estimated incidence of athletes with SRC was highest in short-track speed skating (0.47/1000 hours; 95% CI 0.3 to 0.6) and lower in boxing, trampoline, water polo, judo, artistic swimming and diving (0.24 (95% CI 0.0 to 0.5), 0.16 (95% CI 0.0 to 0.5), 0.13 (95% CI 0.1 to 0.2), 0.11 (95% CI 0.1 to 0.2), 0.09 (95% CI 0.0 to 0.2) and 0.06 (95% CI 0.0 to 0.1)/1000 hours, respectively) ( online supplemental material ). Furthermore, most athletes sustained an SRC in training (66.5%; 95% CI 41.0 to 92.0) rather than competition (26.0%; 95% CI 0.0 to 55.0) except for judo athletes (20.0% (95% CI 4.1 to 62.0) and 80.0% (95% CI 38.0 to 96.0), respectively). Falls were the most common injury mechanism in speed skating, trampoline and judo while hits were the most common injury mechanism in boxing, water polo, artistic swimming and diving.

Supplemental material

Access to care

The median difference in time to RTS was 19 days (95% CI 9.3 to 28.7; p<0.001) between the early (26 (IQR 17.0–38.5) days) and late (45 (IQR 27.5–84.5) days) access groups ( table 3 ; figure 1 ). Importantly, the distribution of SRC environments was different between both groups (p=0.008). The post hoc analysis demonstrated a meaningful difference in the distribution of SRC in training and competition environments between groups (p=0.029) but not for the other comparisons. There was a meaningful difference between the groups in time to first consult (p<0.001; 95% CI −23.0 to −15.0), but no meaningful differences between groups in median age (p=0.176; 95% CI −0.3 to 1.6), sex distribution (p=0.341; 95% CI 0.7 to 2.8), concussion history (p=0.210), time since last concussion (p=0.866), mechanisms of SRC (p=0.412), the presence of modifiers (p=0.313; 95% CI 0.3 to 1.4) and the number of consulted specialists (p=0.368; 95% CI −5.4 to 1.0) or medical visits (p=0.162; 95% CI −1.0 to 3.0).

  • Download figure
  • Open in new tab
  • Download powerpoint

Time to return to sport following sport-related concussion as a function of group’s access to care and sex. Outliers: below=Q1−1.5×IQR; above=Q3+1.5×IQR.

The median difference in time to RTS was 6.5 days (95% CI −19.3 to 5.3; p=0.263; figure 1 ) between female (37.5 (IQR 22.0–65.3) days) and male (31.0 (IQR 20.0–48.0) days) athletes. Survival analyses highlighted an increased hazard of a longer recovery trajectory in female compared with male athletes (HR 1.4; 95% CI 1.4 to 0.7; p=0.052; figure 2A ), which was mainly driven by the late (HR 1.8; 95% CI 1.8 to 0.6; p=0.019; figure 2C ) rather than the early (HR 1.1; 95% CI 1.1 to 0.9; p=0.700; figure 2B ) access group. Interestingly, a greater number of female athletes (n=15) required longer than 100 days for RTS as opposed to male athletes (n=6). There were no meaningful differences between sexes for the total number of symptoms recorded on the SCAT5 (p=0.539; 95% CI −1.0 to 2.0) nor the total symptom severity score (p=0.989; 95% CI −5.0 to 5.0).
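As an illustration of the survival analyses above, a minimal Kaplan-Meier estimator can be sketched; with 0% censoring (as reported for this cohort) it reduces to the empirical survival function. The times below are hypothetical, and the study's Cox model is not reproduced:

```python
def kaplan_meier(times):
    """Kaplan-Meier survival estimate without censoring (censoring was 0%
    in this cohort), i.e. the empirical survival function S(t)."""
    surv, s, at_risk = [], 1.0, len(times)
    for t in sorted(set(times)):
        d = times.count(t)          # athletes cleared for RTS at day t
        s *= (at_risk - d) / at_risk
        surv.append((t, s))         # probability of still being unrecovered after day t
        at_risk -= d
    return surv

# Hypothetical days to unrestricted RTS
female = [22, 30, 37, 38, 45, 66, 101]
male = [20, 25, 31, 31, 40, 48, 65]
km_female = kaplan_meier(female)
km_male = kaplan_meier(male)
print(km_female[0], km_male[-1])
```

Plotting the two step functions against time would reproduce the shape of the survival curves referenced in figure 2; a Cox model then formalises the comparison as a hazard ratio.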

Time analysis of sex differences in the time to return to sport following sport-related concussion in the (A) total sample, as well as (B) early, and (C) late groups using survival curves with 95% confidence bands and tables of time-specific number of patients at risk (censoring proportion: 0%).

History of modifiers

SRC modifiers are presented in table 2 , and their influence on RTP is shown in table 4 . The median difference in time to RTS was 1.5 days (95% CI −10.6 to 13.6; p=0.807) between athletes with none and one episode of previous concussion, was 3.5 days (95% CI −13.9 to 19.9; p=0.728) between athletes with none and two or more episodes of previous concussion, and was 2 days (95% CI −12.4 to 15.4; p=0.832) between athletes with one and two or more episodes of previous concussion. The history of concussions (none, one, two or more) had no meaningful impact on the time to RTS (p=0.471). The median difference in time to RTS was 4.5 days (95% CI −21.0 to 30.0; p=0.729) between athletes with none and one episode of concussion in the previous year, was 2 days (95% CI −10.0 to 14.0; p=0.744) between athletes with none and one episode of concussion more than 1 year ago, and was 2.5 days (95% CI −27.7 to 22.7; p=0.846) between athletes with an episode of concussion in the previous year and more than 1 year ago. Time since the most recent concussion did not change the time to RTS (p=0.740). The longest time to RTS was observed in the late access group in which athletes had a concussion in the previous year, with a very large spread of durations (65.0 (IQR 33.0–116.5) days). The median difference in time to RTS was 3 days (95% CI −13.1 to 7.1; p=0.561) between athletes with and without other injury modifiers. The history of other injury modifiers had no meaningful influence on the time to RTS (95% CI −6.0 to 11.0; p=0.579).

Preinjury modifiers of time to return to sport following SRC

SCAT5 symptoms and severity scores

Positive associations were observed between the time to RTS and the number of initial symptoms (r=0.3; p=0.010; 95% CI 0.1 to 0.5) or initial severity score (r=0.3; p=0.008; 95% CI 0.1 to 0.5) from the SCAT5. The associations were not meaningful between the number of specialist consultations and the initial number of symptoms (r=−0.1; p=0.633; 95% CI −0.3 to 0.2) or initial severity score (r=−0.1; p=0.432; 95% CI −0.3 to 0.2). Anecdotally, the most frequently reported symptoms following SRC were ‘headache’ (86.2%) and ‘pressure in the head’ (80.0%), followed by ‘fatigue’ (72.3%), ‘neck pain’ (70.8%) and ‘not feeling right’ (67.7%; online supplemental material ).
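Spearman rank correlation, used for the associations above, is the Pearson correlation computed on ranks (with average ranks for ties); a self-contained sketch with hypothetical symptom counts and RTS durations:

```python
from statistics import mean

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    assigning average ranks to tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                  # extend over the tie group
            avg_rank = (i + j) / 2 + 1  # average rank for the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical: initial SCAT5 symptom counts vs days to unrestricted RTS
symptoms = [5, 12, 3, 20, 9, 15, 7, 18]
rts_days = [21, 40, 18, 75, 30, 52, 26, 44]
print(round(spearman_rho(symptoms, rts_days), 2))
```

Because it works on ranks, the coefficient captures monotonic (not just linear) association, which suits skewed recovery-time data.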

This study is the first to report descriptive data on athletes with SRC collected across several sports during an Olympic quadrennial, including athletes who received the most recent evidence-based care at the time of data collection. Primarily, results indicate that the time to RTS in athletes engaged in Summer and Winter Olympic sports may require a median (IQR) of 34.0 (21.0–63.0) days. Importantly, findings demonstrated that athletes with earlier (≤7 days) access to multidisciplinary concussion care showed faster RTS compared with those with late access. Time to RTS exhibited large variability, and sex had a meaningful influence on the recovery pathway in the late access group. Initial symptoms, but not history of concussion, were correlated with prognosis in this sample. The main reported symptoms were consistent with previous studies. 38 39

Time to RTS in Olympic sports

This study provides descriptive data on the impact of SRC monitoring programmes on recovery in elite athletes engaged in Olympic sports. As hypothesised, the median time to RTS found in this study (eg, 34.0 days) was about three times longer than those found in reports from before 2005, and 2 weeks longer than the typical median values (eg, 19.8 days) recently reported in athletic levels including youth (high heterogeneity, I 2 =99.3%). 19 These durations were also twice as long as the median unrestricted time to RTS observed among American collegiate athletes, which averages around 16 days. 9 20 21 However, they were more closely aligned with findings from collegiate athletes with slow recovery (eg, 34.7 days) and evidence from military cadets with poor access where return to duty duration was 29.4 days. 8 22 Several reasons could explain such extended time to RTS, but the most likely seems to be related to the diversity in access among these sports to multidisciplinary services (eg, median 10.0 days (IQR 1–20)), well beyond the delays experienced by collegiate athletes, for example (eg, median 0.0 days (IQR 0–2)). 40 In the total sample, the delays to first consult with the multidisciplinary clinic were notably mediated by the group with late access, whose athletes had more SRC during international competition. One of the issues for athletes engaged in Olympic sports is that they travel abroad year-round for competitions, in contrast with collegiate athletes who compete domestically. These circumstances likely make access to quality care very variable and make the follow-up of care less centralised. Also, access to resources among these sports is highly variable (eg, medal-dependent) 14 and at the discretion of the sport’s leadership (eg, sport federation), who may decide to prioritise more or fewer resources to concussion management considering the relatively low incidence of this injury. Another explanation for the longer recovery times in these athletes could be the lack of financial incentives to return to play faster, which are less prevalent among Olympic sports compared with professionals. However, the stakes of performance and return to play are still very high among these athletes.

Additionally, it is plausible that studies vary in their operational definitions of recovery, such as resolution of symptoms, return to activities, graduated return to play or unrestricted RTS. 19 40 It is understood that resolution of symptoms may occur much earlier than return to preinjury performance levels. Finally, an aspect that has been little studied to date is the influence of the sport’s demands on the RTS. For example, acrobatic sports requiring precision/technical skills such as figure skating, trampoline and diving, which involve high visuospatial and vestibular demands, 41 might require more time to recover or elicit symptoms for longer times. Anecdotally, athletes who experienced a long time to RTS (>100 days) were mostly from precision/skill-dependent sports in this sample. Sport demands should be further considered as an injury modifier. More epidemiological reports that consider the latest guidelines are therefore necessary to gain a better understanding of the true time to RTS and impact following SRC in Olympians.

Supporting early multidisciplinary access to care

In this study, athletes who obtained early access to multidisciplinary care after SRC recovered faster than those with late access to multidisciplinary care. This result aligns with findings showing that delayed access to a healthcare practitioner delays recovery, 19 including previous evidence in a sample of patients from a sports medicine clinic (ages 12–22), indicating that a delayed first clinical visit (eg, 8–20 days) was associated with a 5.8 times increased likelihood of a recovery longer than 30 days. 5 A prompt multidisciplinary approach for patients with SRC is suggested to yield greater effectiveness than usual care, 3 6 17 and is currently being evaluated in a randomised controlled trial. 42 Notably, early physical exercise and prescribed exercise (eg, 48 hours postinjury) are effective in improving recovery compared with strict rest or stretching. 43 44 In fact, preclinical and clinical studies have shown that exercise has the potential to improve neurotransmission, neuroplasticity and cerebral blood flow, supporting the idea that a physically trained brain enhances recovery. 45 46 Prompt access to specialised healthcare professionals can be challenging in some contexts (eg, during international travel), and the cost of accessing medical care privately may prove further prohibitive. This barrier to recovery should be a priority for stakeholders in Olympic sports and given more consideration by health authorities.

Estimated incidences and implications

The estimated incidences of SRC were in the lower range of what is reported in other elite sport populations. 1 2 However, the burden of injury remained high for these sports, and the financial resources and expertise required to facilitate athletes' rehabilitation were considerable (median number of consultations: 9.0). Notably, the current standard of public healthcare in Canada does not subsidise the level of support recommended following SRC as first-line care, and the financial subsidisation of this recommended care within each federation is highly dependent on the available funding, varying significantly between sports. 14 Therefore, ongoing efforts to improve education, prevention and early recognition, to modify rules to make environments safer and to widen athletes' access to multidisciplinary care remain crucial. 7
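Incidence estimates of the kind compared above are commonly expressed per 1000 athlete-exposures with a Poisson-based confidence interval. A minimal sketch under that assumption, with invented counts that are not the study's:

```python
# Hedged sketch (hypothetical numbers): SRC incidence rate per 1000
# athlete-exposures (AE) with a Poisson-based 95% CI, a common approach
# in injury surveillance. Counts are illustrative, not the study's.
import math

concussions = 40        # hypothetical SRC count
exposures = 50_000      # hypothetical athlete-exposures

rate = concussions / exposures * 1000          # rate per 1000 AE
se_log = 1 / math.sqrt(concussions)            # SE of log(rate) under Poisson
ci_low = rate * math.exp(-1.96 * se_log)
ci_high = rate * math.exp(1.96 * se_log)
print(f"{rate:.2f} per 1000 AE (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

The log-scale interval keeps the lower bound positive, which matters for the small injury counts typical of single-institute surveillance.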

Strengths and limitations

This unique study provides multisport characteristics following the evolution of concussion guidelines in Summer and Winter Olympic sports in North America. Notably, it features a balance between the number of female and male athletes, allowing the analysis of sex differences. 23 26 In a previous review of 171 studies informing consensus statements, samples were mostly composed of more than 80% male participants, and more than 40% of these studies did not include female participants at all. 26 This study also included multiple non-traditional sports typically not encompassed in SRC research, a feature previously identified as a key requirement of future epidemiological research. 47

However, it must be acknowledged that potential confounding factors could influence the results. For example, the number of SRCs detected during the study period does not account for potentially unreported concussions, although this number should be minimal because these athletes are supervised in both training and competition by medical staff. Next, the sport types were heterogeneous, with inconsistent risk of head impacts and inconsistent sport demands, which might influence recovery. Furthermore, the number of participants and the sex distribution in each sport were uneven, with short-track speed skaters, for example, representing a large portion of the overall sample (32.5%). Additionally, the number of participants with specific modifiers was too small in the current sample to conclude whether precise characteristics (eg, history of concussion) affected the time to RTS. Also, the late-access group was more likely to consist of athletes who sought specialised care for persistent symptoms; these complex cases are often expected to require additional time to recover. 48 Furthermore, athletes in the late-access group may have sought support outside of the institute medical clinic, without a coordinated multidisciplinary approach. Therefore, the estimate of clinical consultations was tentative for this group and may represent a potential confounder in this study.

Conclusion

This is the first study to provide evidence of the prevalence of athletes with SRC and modifiers of recovery in both female and male elite-level athletes across a variety of Summer and Winter Olympic sports. There was high variability in access to care in this group, and the median (IQR) time to RTS following SRC was 34.0 (21.0–63.0) days. Athletes with earlier access to multidisciplinary care took nearly half the time to RTS compared with those with late access. Sex had a meaningful influence on the recovery pathway in the late-access group. Initial symptom number and severity score, but not history of concussion, were meaningful modifiers of recovery. Injury surveillance programmes targeting national sport organisations should be prioritised to help evaluate the efficacy of recommended injury monitoring programmes and to give athletes engaged in Olympic sports, who travel internationally frequently, better access to care. 35 49
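Because time-to-RTS distributions are right-skewed, they are summarised above as a median with interquartile range rather than a mean. A minimal sketch of that summary, using invented durations rather than the study's data:

```python
# Hedged sketch: summarising time to RTS as median (IQR), the statistic
# reported in this study (34.0 days, IQR 21.0-63.0). The durations below
# are invented for illustration only.
import statistics

rts_days = [14, 21, 25, 30, 34, 40, 55, 63, 80, 120]  # hypothetical days to RTS

median = statistics.median(rts_days)
q1, _, q3 = statistics.quantiles(rts_days, n=4)  # quartiles (exclusive method)
print(f"median {median} days (IQR {q1}-{q3})")
```

Note that `statistics.quantiles` defaults to the exclusive method; different software (or `method="inclusive"`) can give slightly different quartiles on small samples.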

Ethics statements

Patient consent for publication Not applicable.

Ethics approval

This study involves human participants and was approved by the ethics board of Université de Montréal (certificate #2023-4052). Participants gave informed consent to participate in the study before taking part.

Acknowledgments

The authors would like to thank the members of the concussion interdisciplinary clinic of the Institut national du sport du Québec for collecting the data and for their unconditional support to the athletes.

References

  • Glover KL, Chandran A, Morris SN, et al
  • Patricios JS, Schneider KJ, Dvorak J, et al
  • Guskiewicz KM, et al
  • Kontos AP, Jorgensen-Wagers K, Trbovich AM, et al
  • Critchley ML, Anderson V, et al
  • Eliason PH, Galarneau J-M, Kolstad AT, et al
  • McAllister TW, Broglio SP, Katz BP, et al
  • Liebel SW, Van Pelt KL, Pasquina PF, et al
  • Pellman EJ, Lovell MR, Viano DC, et al
  • Casson IR, et al
  • McKinney J, Fee J, et al
  • McKay AKA, Stellingwerff T, Smith ES, et al
  • Government of Canada
  • Pereira LA, Cal Abad CC, Kobal R, et al
  • COPSI - sport related concussion guidelines. Available: https://www.ownthepodium.org/en-CA/Initiatives/Sport-Science-Innovation/2018-COPSI-Network-Concussion-Guidelines [Accessed 25 May 2023].
  • McCrory P, Meeuwisse W, Dvořák J, et al
  • Gardner AJ, Quarrie KL
  • Putukian M, Purcell L, Schneider KJ, et al
  • Nguyen JN, et al
  • Lempke LB, Caccese JB, Syrydiuk RA, et al
  • D'Lauro C, Johnson BR, McGinty G, et al
  • Crossley KM, Bo K, et al
  • Covassin T, Harris W, et al
  • Swanik CB, Swope LM, et al
  • Master CL, Arbogast KB, et al
  • Walton SR, Kelshaw PM, Munce TA, et al
  • Barron TF, et al
  • Tsushima WT, Riegler K, Amalfe S, et al
  • Monteiro D, Silva F, et al
  • Dijkstra HP, Pollock N, Chakraverty R, et al
  • Clarsen B, Derman W, et al
  • Matthews JN
  • Echemendia RJ, Bruce JM, et al
  • Yeates KO, Räisänen AM, Premji Z, et al
  • Breedlove K, McAllister TW, et al
  • Hennig L, et al
  • Register-Mihalik JK, Guskiewicz KM, Marshall SW, et al
  • Toomey CM, et al
  • Mannix R, et al
  • Barkhoudarian G
  • Haider MN, Ellis M, et al
  • Harmon KG, Clugston JR, Dec K, et al
  • Carson JD, Lawrence DW, Kraft SA, et al
  • Martens G, Edouard P, Tscholl P, et al

Supplementary materials

Supplementary data.

This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Data supplement 1

X @ThomasRomeas

Correction notice This article has been corrected since it published Online First. The ORCID details have been added for Dr Croteau.

Contributors TR, FC and SL were involved in planning, conducting and reporting the work. François Bieuzen and Magdalena Wojtowicz critically reviewed the manuscript. TR is guarantor.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
