
Module 1: Case Studies & Examples

In this section, we will review some examples of how to generate an initial estimate using two very basic methods. Then, we are going to walk through some case studies so that you can put what you’ve learned into the context of a cyber risk scenario.

The Value of the Initial Analysis

In any organization, decision-making is a crucial process that can significantly impact the success or failure of the organization. Making informed decisions requires access to accurate and relevant information. It does not, however, require in-depth, time-consuming, and expensive research and analysis. The initial analysis provides a quick, cost-effective analysis of risk. It allows decision-makers to have a timely analysis based on readily available data. If decision-makers determine that a more in-depth analysis is warranted, this gives them the opportunity to clearly scope the effort and provide their authorization for the expenditure of additional funds and resources.

What is an Initial Analysis?

An initial analysis is a preliminary assessment of a situation or problem. It involves gathering and analyzing information to understand the situation comprehensively. An initial analysis is typically conducted before making any significant decisions or taking any action. Its purpose is to provide decision-makers with the information they need to make informed decisions. In the case of quantifying risk, you are making estimates with fairly broad ranges (such as 20% or more). This provides an accurate, if broad, estimate. With more detail, the estimate becomes more precise.

Benefits of an Initial Analysis for Decision Support

An initial analysis is valuable for decision support because it gives decision-makers a comprehensive overview of the situation. It allows decision-makers to make informed decisions based on accurate and relevant information. There are several benefits of conducting an initial analysis.

Benefits of Conducting an Initial Analysis

  • Provides a Comprehensive Overview: An initial analysis gives decision-makers a comprehensive overview of the situation. It helps them understand the challenges, risks, and opportunities so they can make informed decisions based on accurate and relevant information.
  • Identifies Risks and Opportunities: An initial analysis helps to identify the risks and opportunities associated with the situation and allows decision-makers to assess their potential impact on the organization. This information is critical to making informed decisions that account for those risks and opportunities.
  • Helps to Identify and Prioritize Options: An initial analysis helps to identify and prioritize options for addressing the situation. It provides decision-makers with a range of options and the potential benefits and risks associated with each. This information is critical to making informed decisions that consider all available options.
  • Facilitates Consensus-Building: An initial analysis gives decision-makers a shared understanding of the situation, which can help build consensus around the best course of action. This consensus-building is critical to ensuring that decisions are made with the support of all decision-makers.
  • Reduces the Risk of Making Poor Decisions: An initial analysis provides decision-makers with accurate and relevant information, reducing the risk of decisions based on incomplete or inaccurate information. This helps avoid costly mistakes and ensures that decisions are made in the organization's best interests.
  • Approval for Additional Time and Resources: An initial analysis is typically conducted before any significant decision or action. In some cases, decision-makers may require additional information before deciding. The initial analysis can then serve as the basis for approving the additional time and resources needed for a more in-depth analysis. Using the initial analysis to scope that follow-on work helps ensure it focuses on the most critical issues and provides the information decision-makers need.


Figure 6 NOTE: Always begin with an initial analysis

General Guidelines for Developing Estimates

  • Internet-facing assets generally represent a very high likelihood of compromise if there is an exploitable vulnerability. Any asset with a directly accessible interface to the internet could be considered to meet this criterion if it has an exploitable vulnerability.
  • Vulnerabilities in perimeter defenses generally represent a very high likelihood of compromise.
  • Vulnerabilities in high-value assets generally represent a very high risk.
  • Vulnerabilities on web-based servers and applications represent a very high likelihood of compromise.
  • Vulnerabilities on workstations generally represent a high likelihood of compromise.
  • Vulnerabilities in databases represent a high likelihood of compromise.
  • Vulnerabilities on unsupported systems or products may be considered a higher likelihood of compromise.
  • Vulnerabilities that could cause extreme outages generally represent a very high risk.
  • Vulnerabilities that could lead to initial access or privilege escalation generally represent a very high risk.
  • Vulnerabilities that could lead to system compromise generally represent a higher risk.
  • If you know what percentage of systems have a particular vulnerability, you can use this as the basis for a threat estimate.
  • Zero-day vulnerabilities generally represent a very high risk.
  • Perimeter defense Zero-Day vulnerabilities generally represent a very high risk.
  • Web servers with Zero-Day vulnerabilities generally represent a very high risk.
  • Web server and application exploits such as SQL injection and cross-site scripting vulnerabilities generally represent a very high risk.
  • Unsupported operating systems and applications generally represent a very high risk as these are frequently targets of attack.
  • Remote code execution vulnerabilities generally represent a higher risk.
  • Named exploits such as man-in-the-middle type attacks generally represent a higher risk.
  • Vulnerabilities with known or ongoing exploits generally represent a higher risk.
  • Vulnerabilities with a public proof-of-concept generally represent a higher risk.
  • Internal exploitable vulnerabilities generally represent an elevated risk.
  • Strong perimeter defense can be a mitigating factor.
  • Security by obscurity is not considered a mitigating factor.
  • Policies or procedures may be considered a mitigating factor.
  • Mitigating factors generally can reduce an estimate by a single 20% range. A very strong mitigation generally can reduce an estimate by two 20% ranges.
  • Financially motivated cyber-criminals are generally very successful. You may want to specify the targeted system or data to refine the scope of your estimate.
  • Insider threats are generally very successful.
  • APTs or nation-states are generally very successful. You may want to specify a particular APT or nation-state to refine your estimate.
  • An accidental misconfiguration is as dangerous as an intentional act.
  • Poor processes and procedures can represent a risk, especially if they are undocumented and not consistently applied.
  • It is useful to stipulate the time period for your estimate and whether it is a factor in the likelihood of compromise. In some cases, this may be the time period until a patch or remediation is in place. In some cases, the longer the time period, the higher the likelihood of compromise. Similarly, in some cases, a shorter period of exposure may indicate a slightly lower likelihood of compromise.

Using a 1-5 Scale

Risk is an inherent part of any business or organizational activity. It is the possibility of an event occurring that could adversely impact the organization's objectives. Risk can be expressed in various ways, including verbally, numerically, or graphically. One commonly used method of expressing risk verbally is a 1-5 scale with the labels very low, low, moderate, high, and very high.

The Five-Point Scale

The five-point scale is a simple and effective way to express risk verbally. It uses five categories to describe the level of risk associated with an event or activity. The categories are very low, low, moderate, high, and very high. Each category represents a different level of risk, with very low representing the lowest level of risk and very high representing the highest level of risk.


Figure 7 The 5-Point Scale Labels

This scale is beneficial because it allows for quick and easy understanding and consensus-building among different organizational groups. It is a simple and intuitive way to express risk that people with different levels of expertise in risk management can easily understand.

Converting the Scale to 20% Ranges

While the five-point scale is a useful way to express risk qualitatively, it can also be adapted into numerical form, represented by 20% ranges, to quantify the risk. This allows for a more precise and objective assessment of risk that can be used to make informed decisions about risk management.

To convert the five-point scale to 20% ranges, each category is assigned a range of probabilities. The ranges are as follows:

  • Very Low: 0% – 20%
  • Low: 21% – 40%
  • Moderate: 41% – 60%
  • High: 61% – 80%
  • Very High: 81% – 100%


Figure 8 The 5-Point Scale Range Values

By assigning each category a range of probabilities, the level of risk associated with an event or activity can be quantified. When communicating this, you should note that this estimate is based on an initial range of 20% for each.
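
If you track these estimates in a script or spreadsheet, the label-to-range mapping is easy to encode. The following Python sketch is illustrative only; it stores the five labels with their 20% ranges and returns the label for a given probability.

```python
# Five-point scale mapped to 20% probability ranges (illustrative sketch).
SCALE = [
    ("Very Low",  0.00, 0.20),
    ("Low",       0.21, 0.40),
    ("Moderate",  0.41, 0.60),
    ("High",      0.61, 0.80),
    ("Very High", 0.81, 1.00),
]

def label_for(probability: float) -> str:
    """Return the five-point label whose 20% range contains the given probability."""
    for label, low, high in SCALE:
        if low <= probability <= high:
            return label
    raise ValueError("probability must be between 0 and 1")

print(label_for(0.28))  # Low
print(label_for(0.90))  # Very High
```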

Benefits of Using the Scale

Using the five-point scale with values of very low, low, moderate, high, and very high is a good way to begin thinking, speaking, and quantifying risk. It provides a simple and intuitive way to express risk that people with different levels of expertise in risk management can easily understand. It also allows for quick and easy consensus-building among different organizational groups.

One of the benefits of using the 1-5 scale is the one found by L. Hoffman and D. Clement (1970): the value of using "intuitive linguistic variables" for range variables. Another benefit is that a five-point scale avoids the issues found in a three-point scale by allowing wider dispersion across the mid-range values. A simple three-point scale is susceptible to bias: most people are averse to using either the lowest or highest extremes and tend to default to mid-range values.

The conversion of the scale to 20% ranges provides a more precise and objective assessment of risk that can be used to make informed decisions about risk management. This allows for a more systematic and consistent approach to risk management that can help organizations identify, assess, and manage risk.

In addition, using the five-point scale can help promote a risk management culture within an organization. Providing a simple and intuitive way to express risk can encourage employees to think more proactively about risk and take appropriate steps to manage risk in their daily activities.

A five-point scale provides a simple and intuitive way to express risk that people with different levels of expertise in risk management can easily understand. Translating the qualitative descriptors of the five-point scale into corresponding 20% probability ranges enhances the precision of risk evaluations, allowing for a more quantifiable and objective approach to risk assessment. Using this scale can help promote a risk management culture within an organization and aid in consensus-building among different organizational groups.

Back-of-the-Napkin Math

This method is an easy way to quantify risk without advanced tools or models. It approximates a more advanced method, Monte Carlo simulation, using the ranges described in the five-point scale method. It produces a usable approximation but lacks the level of detail and the meaningful probability distribution charts available with Monte Carlo simulation. You only need a sheet of paper and a pen or pencil to use this method, which is why I call it the "back-of-the-napkin" method.

The Three-Point Range Values

Using three-point values is a simple and effective way to express a range, such as the level of threat and likelihood associated with an event or activity. The three values are the minimum, the most likely, and the maximum.

When we quantify risk, we use the formula Threat x Likelihood = Risk . Each of these (threat, likelihood, and risk) is expressed as a range.

To this equation, we can add the impact as a way to rate the risk. Risk x Impact = Rating

The impact can be financial or operational, and the thresholds for what counts as Very High or Very Low impact are always established by the organization. If the impact is financial, it is expressed as a dollar value.

Let’s look at how the three-point values are used to quantify risk.

Assume the threat values are .10, .20, and .30. Then assume the likelihood values are .20, .60, and .80. How do we multiply ranges?

Follow these steps to multiply two 3-value ranges:

  • Multiply the first value of the first range by the first value of the second range.
  • Multiply the second value of the first range by the second value of the second range.
  • Multiply the third value of the first range by the third value of the second range.

[.10 .20 .30] x [.20 .60 .80] = [.10 x .20] [.20 x .60] [.30 x .80]

Now, just give the final three values.

.10 x .20 = .02

.20 x .60 = .12

.30 x .80 = .24

You get the following range [.02 .12 .24].

Now, let's estimate the range for impact. Assume $10K, $20K, and $50K as the values. To get the rating, we multiply the risk range we just calculated by the impact range.

[.02 .12 .24] x [$10K $20K $50K] = [$200 $2,400 $12,000]

.02 x $10,000 = $200

.12 x $20,000 = $2,400

.24 x $50,000 = $12,000
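
The same arithmetic is easy to script. The minimal Python sketch below (the multiply_ranges helper name is illustrative) reproduces the example above: threat times likelihood gives the risk range, and risk times impact gives the rating.

```python
def multiply_ranges(a, b):
    """Multiply two [minimum, most likely, maximum] ranges element by element."""
    return [round(x * y, 4) for x, y in zip(a, b)]

threat     = [0.10, 0.20, 0.30]
likelihood = [0.20, 0.60, 0.80]
impact     = [10_000, 20_000, 50_000]   # dollars

risk   = multiply_ranges(threat, likelihood)   # [0.02, 0.12, 0.24]
rating = multiply_ranges(risk, impact)         # [200.0, 2400.0, 12000.0]

print("Risk:  ", risk)
print("Rating:", [f"${v:,.0f}" for v in rating])
```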

Developing a Range Estimate from a Single Point Value

In many instances, you will only have a single-point value, such as the percentage of assets missing a patch. In this case, you can use the single point value as your most likely value and add +/- 10% to get a 20% range.

Example: If 20% of workstations are missing a patch, you could use the +/- 10% to produce the range .10-.20-.30. When using this method, you should note in your communications that this is a +/- 10% estimate based on the initial value of the weakness finding (20% of workstations with a missing patch).
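
A minimal helper for this conversion might look like the sketch below; clamping the results to the 0-1 interval is an added assumption, not part of the method as stated.

```python
def range_from_point(most_likely: float, spread: float = 0.10):
    """Turn a single-point estimate into a [minimum, most likely, maximum] range."""
    low  = round(max(0.0, most_likely - spread), 2)   # clamp at 0 (assumption)
    high = round(min(1.0, most_likely + spread), 2)   # clamp at 1 (assumption)
    return [low, most_likely, high]

print(range_from_point(0.20))  # [0.1, 0.2, 0.3]
```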

Developing a Range from Multiple Variables

When you have multiple variables, one approach to establishing your range is to take the highest and lowest values in the set as your maximum and minimum, then establish your mid-point value by subtracting the lowest value from the highest, dividing the result by 2, and adding that amount to the lowest value. BYJUS.com, a global EdTech firm, has a basic explainer for ranges available at https://byjus.com/maths/range/.

Example: 20% of servers are missing a patch and 45% of servers have a weak configuration that leaves them open to compromise. We can use 20% as the low value and 45% as the high value. To calculate the mid-range value, we subtract the lower value from the higher value (45-20=25), divide that by 2 (25/2=12.5), then add the result to the lower value (20+12.5=32.5). That gives us .20-.325-.45.
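
The same high/low/midpoint steps can be written as a short sketch that simply automates the arithmetic above.

```python
def range_from_values(values):
    """Build a [minimum, midpoint, maximum] range from several observed values."""
    low, high = min(values), max(values)
    mid = round(low + (high - low) / 2, 4)
    return [low, mid, high]

print(range_from_values([0.20, 0.45]))  # [0.2, 0.325, 0.45]
```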


Figure 9 Back-of-the-Napkin Worksheet

Case Studies

For each of the scenarios provided, use the five-point scale to develop estimates of threat (the weakness), likelihood (the likelihood that the weakness will be leveraged against the organization), risk, impact (a range of financial cost), and score. Reading and understanding the examples will guide your evaluation process and prepare you for the module quiz and final project.

The Branch Manager

As the branch manager sat in her office, she received an urgent message from the corporate security team about a newly released patch that addressed a critical vulnerability in the company’s network. Concerned about the potential risk to her branch, she immediately contacted the network operations group to inquire about the patch.

The network administrator reviewed the vulnerability data and determined that 28% of their web servers required the patch. She knew this involved a significant number of web servers. She also knew that a critical vulnerability on web-facing servers posed a high risk to the organization.

However, the operations group could not apply the patch for a week due to other scheduled maintenance. The network administrator explained to the branch manager that the patch required significant testing and validation before being deployed to the production environment. She assured the branch manager that the operations group was working diligently to ensure the patch would be deployed as soon as possible.

  • Assign a range to weakness. In this example, we have a percentage of the threat landscape that is missing a required patch. We can use this as the basis for our initial range for threat. 28% falls within the low range, so we can use this to justify a low rating for weakness. With 28% as the midpoint, we add +/- 10%, giving us a range of .18-.28-.38 for threat.
  • Assign a range to likelihood. In this example, we are told the missing patch has a critical severity and that it is on web servers. Reviewing our guidance for establishing an initial estimate and considering the criticality of the vulnerability and its location (web servers), we can justify a very high likelihood range of .80-.90-1.0.
  • Set the time period for the estimate. We will use the time period of "until patches are applied". We could note that the longer this takes, the higher the likelihood of compromise.
  • Calculate the initial estimate. A worked sketch follows below.

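A worked sketch of this initial estimate, reusing the illustrative multiply_ranges helper from the back-of-the-napkin section and the ranges assigned above:

```python
def multiply_ranges(a, b):
    """Multiply two [minimum, most likely, maximum] ranges element by element."""
    return [round(x * y, 4) for x, y in zip(a, b)]

threat     = [0.18, 0.28, 0.38]   # 28% of web servers missing the patch, +/- 10%
likelihood = [0.80, 0.90, 1.00]   # critical vulnerability on web-facing servers

risk = multiply_ranges(threat, likelihood)
print("Risk (until patches are applied):", risk)  # [0.144, 0.252, 0.38]
```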

University Case Study

The college has always prided itself on its commitment to technology and innovation. With a sprawling campus and a diverse student population, the college relies heavily on its network infrastructure to provide critical services to its students, faculty, and staff.

However, in recent months, the college has experienced several issues with its network infrastructure. Users across the campus had reported slow performance, intermittent outages, and other issues. Concerned about the potential impact of these issues, the college decided to perform an internal audit of its network infrastructure.

The audit revealed a number of significant issues with the college’s network infrastructure. The most pressing issue was that 70% of the college’s workstations required system upgrades due to recent end-of-life notices that hadn’t been tracked. The previous network administrator had recently left, and it had taken some time for the new administrator to come up to speed. As a result, critical updates and patches had been missed, leaving the college’s network vulnerable to potential cyber-attacks.

The new administrator found that there was little network documentation and, in fact, little segmentation across the campus. This meant that if a cyber-attacker were to gain access to one part of the network, they would have access to the entire network.

The new administrator was alarmed by the audit’s findings. She knew that the college’s network was vulnerable to potential cyber-attacks and that urgent action was needed to address the issues.

As she continued to review the network infrastructure, the new administrator read about a recent cyber-attack at another university. In that attack, the threat actor had moved laterally across the network and was able to compromise and exfiltrate sensitive data from the administration office. The attack had caused significant damage to the university's reputation and resulted in a loss of trust among students, faculty, and staff.

  • Assign a range to weakness. In this example, we are given the statistic that 70% of workstations are on an unsupported operating system version. We can use this percentage of the threat landscape (workstations) as the basis for an initial estimate. Using 70% as our mid-range value, we get .60-.70-.80, which is moderate to high.
  • Assign a range to likelihood. For likelihood, we consider the network's lack of segmentation and documentation and the recent attack on another university in which this weakness was leveraged, resulting in the exfiltration of sensitive data. This activity raises the likelihood that the university would be a target. We can use a range of very high, giving us .80-.90-1.0.


  • Assign a range to impact. We can consider the impact experienced by the recent attack at another university as a potential impact on this university, given the lack of segmentation and documentation. We also know that 70% of workstations (including administrative) use an unsupported operating system. Combined, we can justify a very high impact range of .80-.90-1.0.


  • Indicate the applicable time period. We considered two key variables: vulnerable workstations and lack of network segmentation. Both would need to be addressed to change the risk, impact, or rating. When we indicate our applicable time period, we need to note this and state that the estimate applies until these weaknesses are sufficiently addressed. A worked sketch of the combined calculation follows below.
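
The sketch below reuses the same illustrative helper and the ranges assigned above (weakness, likelihood, and impact).

```python
def multiply_ranges(a, b):
    """Multiply two [minimum, most likely, maximum] ranges element by element."""
    return [round(x * y, 4) for x, y in zip(a, b)]

weakness   = [0.60, 0.70, 0.80]   # 70% of workstations on unsupported systems, +/- 10%
likelihood = [0.80, 0.90, 1.00]   # very high: no segmentation, recent attacks elsewhere
impact     = [0.80, 0.90, 1.00]   # very high

risk   = multiply_ranges(weakness, likelihood)   # [0.48, 0.63, 0.8]
rating = multiply_ranges(risk, impact)           # [0.384, 0.567, 0.8]

print("Risk:  ", risk)
print("Rating:", rating)
```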

Health Care Facility Case Study

As the HIPAA compliance auditor arrived at the healthcare provider, she was ready to conduct a thorough audit of their HIPAA compliance measures. The healthcare provider had hired the auditor to identify any system vulnerabilities and provide recommendations for improvement.

As the auditor began her assessment, she quickly identified several areas of concern. She discovered that over 60% of the staff were not provided with HIPAA compliance training. The auditor found that the healthcare provider had not implemented a comprehensive training program to educate their staff on HIPAA compliance policies and procedures. This presented a significant risk, as the staff may unknowingly violate HIPAA regulations, leading to potential legal and financial liabilities.

In addition, the auditor found that 12% of the staff did not have dedicated laptops. This created a risk of unauthorized access to patient information, as multiple staff members with varying degrees of “need to know” shared laptops, potentially allowing staff who did not have the “need to know” to access patient records.

The auditor also discovered that 48% of the logging system was missing or inoperable due to some network configurations that were only partially implemented. This meant that the healthcare provider could not track and monitor access to patient records. This potentially meant that they could have a privacy violation or loss of sensitive information and not be aware of the violation, which could expose them to civil penalties or even criminal charges.

The auditor also found that patient data was not partitioned from other data on the network. This presented a significant risk, as the healthcare provider’s network could be compromised by external threat actors, and the lack of data partitioning could allow lateral movement, resulting in sensitive data being stolen or ransomed.

After compiling her assessment, the auditor concluded that the healthcare provider's HIPAA compliance posture had significant weaknesses, with a significant risk of unauthorized internal access. She noted that the lack of HIPAA compliance training, the inadequate number of laptops, the missing logging system, and the lack of data partitioning presented a significant risk of HIPAA violations and data breaches. She estimated that the healthcare provider's legal liability from the identified weaknesses could be significant, as the provider could be held responsible for any financial losses or damages suffered by patients due to a breach.

The auditor's report included detailed recommendations for the healthcare provider to improve their HIPAA compliance measures. She advised the provider to implement a comprehensive HIPAA compliance training program to educate their staff on HIPAA regulations and procedures. She also recommended that the provider increase the number of laptops from 132 to 150 to ensure that patient records were not left unintentionally exposed to staff who lacked the "need to know."

To address the missing logging system, the auditor recommended that the healthcare provider implement a comprehensive system that tracks and monitors access to patient records. She advised the provider to implement least privilege role-based access controls and appropriate network segmentation to separate patient data from other network data.

The estimated cost to implement the auditor’s recommendations was significant. The healthcare provider would need to invest between $50,000 to $100,000.

  • Estimate the weakness. We can use the 12% estimate of missing laptops as the basis for estimating the weakness as a percentage of the threat landscape. We can use a very low estimate of 0-.12-.22. The lack of sufficient data separation was linked to the risk of external threat actors moving laterally and potentially stealing or ransoming sensitive data. The lack of logging is a concern, but it is not a weakness that can be leveraged to carry out an attack; rather, it results in a lack of visibility and awareness.
  • Estimate the likelihood. We can use the 60% of staff lacking training to estimate the likelihood of inadvertent unauthorized access to sensitive patient data. We could use a .50-.60-.70 range, or moderate to high. We have insufficient data to estimate the likelihood of an external attack because no relevant weaknesses were identified in the audit. A worked sketch follows below.

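A worked sketch of this estimate, using the ranges assigned above and the same illustrative helper:

```python
def multiply_ranges(a, b):
    """Multiply two [minimum, most likely, maximum] ranges element by element."""
    return [round(x * y, 4) for x, y in zip(a, b)]

weakness   = [0.00, 0.12, 0.22]   # 12% of staff sharing laptops (very low)
likelihood = [0.50, 0.60, 0.70]   # 60% of staff lacking HIPAA training (moderate to high)

risk = multiply_ranges(weakness, likelihood)
print("Risk of inadvertent unauthorized access:", risk)  # [0.0, 0.072, 0.154]
```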

Accounting Firm Case Study

The cybersecurity auditor arrived at the accounting firm of Smith and Associates, ready to conduct a thorough audit of their cybersecurity measures. The firm had hired the auditor to identify any system vulnerabilities and provide recommendations for improvement.

As the auditor began his assessment, he quickly identified several areas of concern. He discovered that 67% of the firm’s workstations had outdated software, including operating systems and applications. This presented a significant risk, as obsolete software can contain known vulnerabilities that cyber-attackers can exploit.

In addition, the auditor found that 29% of the workstations had outdated anti-virus software. This was a significant concern, as anti-virus software is the first line of defense against malware and other cyber threats. Outdated anti-virus software can be ineffective against new and emerging threats, leaving the firm’s systems vulnerable to attack.

The auditor also discovered that the firm’s public-facing web server had multiple SQL vulnerabilities. SQL vulnerabilities are a common target for cyber-attackers, as they can be exploited to gain unauthorized access to databases and steal sensitive data. The auditor was particularly concerned about this vulnerability, as it posed a significant risk to the firm’s clients and their confidential financial information.

After completing his assessment, the auditor stated that the firm's cybersecurity posture had several significant weaknesses that could likely be leveraged in an attack. He noted that the outdated software and anti-virus, combined with the SQL vulnerabilities on the public-facing web server, created a significant risk of cyber-attack. He recommended that the firm immediately address these vulnerabilities and improve its cybersecurity posture.

According to a recent report by IBM, the average data breach cost is $3.86 million. This includes costs associated with detecting and containing the breach, notifying affected individuals, and providing identity theft protection services. The report also found that the cost per lost or stolen record containing sensitive information was $180.

If the accounting firm suffered a data breach, the financial impact could be substantial. For example, if attackers stole 10,000 client records, the cost of the breach could be $1.8 million.

  • Estimate the weakness. We have two weaknesses related to the workstations: 67% are using outdated operating systems and applications, and 29% have outdated anti-virus. We subtract the lowest value from the highest value (67-29=38), divide that by 2 (38/2=19), then add the result to the lowest value (29+19=48). That gives us the range of .29-.48-.67, which spans low to high. We have one web server with an SQL vulnerability, which we consider very high by default. That range is .80-.90-1.0.
  • Estimate the likelihood. For the workstations, we will estimate the likelihood as high, or .60-.70-.80. We will estimate the likelihood of compromise for the web server as very high, or .80-.90-1.0.


  • Estimate the risk rating for the workstations and the web server, each based on a $50,000, $550,000, and $2,000,000 cost range. Compare the two to determine which source is more likely to result in the higher financial impact. In this example we are not splitting the financial cost between two probable risk sources; rather, we are comparing two potential sources of a potential data breach against a single potential financial impact and comparing the resulting ratings, which are given in financial terms. A worked sketch of the comparison follows below.

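A worked sketch of the comparison, using the ranges assigned above, the same illustrative helper, and the single shared cost range from the bullet above. Under these assumptions, the web server produces the higher rating across the full range.

```python
def multiply_ranges(a, b):
    """Multiply two [minimum, most likely, maximum] ranges element by element."""
    return [round(x * y, 4) for x, y in zip(a, b)]

cost = [50_000, 550_000, 2_000_000]   # shared cost range in dollars (min, most likely, max)

# Workstations: weakness .29-.48-.67, likelihood .60-.70-.80
workstation_risk   = multiply_ranges([0.29, 0.48, 0.67], [0.60, 0.70, 0.80])
workstation_rating = multiply_ranges(workstation_risk, cost)

# Web server: weakness .80-.90-1.0, likelihood .80-.90-1.0
web_risk   = multiply_ranges([0.80, 0.90, 1.00], [0.80, 0.90, 1.00])
web_rating = multiply_ranges(web_risk, cost)

print("Workstation rating:", [f"${v:,.0f}" for v in workstation_rating])  # $8,700 / $184,800 / $1,072,000
print("Web server rating: ", [f"${v:,.0f}" for v in web_rating])          # $32,000 / $445,500 / $2,000,000
```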

Cybersecurity Risk Quantification Copyright © 2024 by Charlene Deaver-Vazquez is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Enterprise Risk Management Case Studies: Heroes and Zeros

By Andy Marker | April 7, 2021


We’ve compiled more than 20 case studies of enterprise risk management programs that illustrate how companies can prevent significant losses yet take risks with more confidence.   

Included on this page, you'll find case studies and examples by industry, case studies of major risk scenarios (and company responses), and examples of ERM successes and failures.

Enterprise Risk Management Examples and Case Studies

With enterprise risk management (ERM) , companies assess potential risks that could derail strategic objectives and implement measures to minimize or avoid those risks. You can analyze examples (or case studies) of enterprise risk management to better understand the concept and how to properly execute it.

The collection of examples and case studies on this page illustrates common risk management scenarios by industry, principle, and degree of success. For a basic overview of enterprise risk management, including major types of risks, how to develop policies, and how to identify key risk indicators (KRIs), read "Enterprise Risk Management 101: Programs, Frameworks, and Advice from Experts."

Enterprise Risk Management Framework Examples

An enterprise risk management framework is a system by which you assess and mitigate potential risks. The framework varies by industry, but most include roles and responsibilities, a methodology for risk identification, a risk appetite statement, risk prioritization, mitigation strategies, and monitoring and reporting.

To learn more about enterprise risk management and find examples of different frameworks, read our "Ultimate Guide to Enterprise Risk Management."

Enterprise Risk Management Examples and Case Studies by Industry

Though every firm faces unique risks, those in the same industry often share similar risks. By understanding industry-wide common risks, you can create and implement response plans that offer your firm a competitive advantage.

Enterprise Risk Management Example in Banking

Toronto-headquartered TD Bank organizes its risk management around two pillars: a risk management framework and risk appetite statement. The enterprise risk framework defines the risks the bank faces and lays out risk management practices to identify, assess, and control risk. The risk appetite statement outlines the bank’s willingness to take on risk to achieve its growth objectives. Both pillars are overseen by the risk committee of the company’s board of directors.  

Risk management frameworks were an important part of the International Organization for Standardization's 31000 standard when it was first published in 2009; the standard has been updated since then. It provides universal guidelines for risk management programs.

Risk management frameworks also resulted from the efforts of the Committee of Sponsoring Organizations of the Treadway Commission (COSO). The group was formed to fight corporate fraud and included risk management as a dimension. 

Once TD completes the ERM framework, the bank moves onto the risk appetite statement. 

The bank, which built a large U.S. presence through major acquisitions, determined that it will only take on risks that meet the following three criteria:

  • The risk fits the company’s strategy, and TD can understand and manage those risks. 
  • The risk does not render the bank vulnerable to significant loss from a single risk.
  • The risk does not expose the company to potential harm to its brand and reputation. 

Some of the major risks the bank faces include strategic risk, credit risk, market risk, liquidity risk, operational risk, insurance risk, capital adequacy risk, regulator risk, and reputation risk. Managers detail these categories in a risk inventory. 

The risk framework and appetite statement, which are tracked on a dashboard against metrics such as capital adequacy and credit risk, are reviewed annually. 

TD uses a three lines of defense (3LOD) strategy, an approach widely favored by ERM experts, to guard against risk. The three lines are as follows:

  • A business unit and corporate policies that create controls, as well as manage and monitor risk
  • Standards and governance that provide oversight and review of risks and compliance with the risk appetite and framework 
  • Internal audits that provide independent checks and verification that risk-management procedures are effective

Enterprise Risk Management Example in Pharmaceuticals

Drug companies’ risks include threats around product quality and safety, regulatory action, and consumer trust. To avoid these risks, ERM experts emphasize the importance of making sure that strategic goals do not conflict. 

For Britain’s GlaxoSmithKline, such a conflict led to a breakdown in risk management, among other issues. In the early 2000s, the company was striving to increase sales and profitability while also ensuring safe and effective medicines. One risk the company faced was a failure to meet current good manufacturing practices (CGMP) at its plant in Cidra, Puerto Rico. 

CGMP includes implementing oversight and controls of manufacturing, as well as managing the risk and confirming the safety of raw materials and finished drug products. Noncompliance with CGMP can result in escalating consequences, ranging from warnings to recalls to criminal prosecution. 

GSK’s unit pleaded guilty and paid $750 million in 2010 to resolve U.S. charges related to drugs made at the Cidra plant, which the company later closed. A fired GSK quality manager alerted regulators and filed a whistleblower lawsuit in 2004. In announcing the consent decree, the U.S. Department of Justice said the plant had a history of bacterial contamination and multiple drugs created there in the early 2000s violated safety standards.

According to the whistleblower, GSK’s ERM process failed in several respects to act on signs of non-compliance with CGMP. The company received warning letters from the U.S. Food and Drug Administration in 2001 about the plant’s practices, but did not resolve the issues. 

Additionally, the company didn’t act on the quality manager’s compliance report, which advised GSK to close the plant for two weeks to fix the problems and notify the FDA. According to court filings, plant staff merely skimmed rejected products and sold them on the black market. They also scraped by hand the inside of an antibiotic tank to get more product and, in so doing, introduced bacteria into the product.

Enterprise Risk Management Example in Consumer Packaged Goods

Mars Inc., an international candy and food company, developed an ERM process. The company piloted and deployed the initiative through workshops with geographic, product, and functional teams from 2003 to 2012. 

Driven by a desire to frame risk as an opportunity and to work within the company’s decentralized structure, Mars created a process that asked participants to identify potential risks and vote on which had the highest probability. The teams listed risk mitigation steps, then ranked and color-coded them according to probability of success. 

Larry Warner, a Mars risk officer at the time, illustrated this process in a case study. An initiative to increase direct-to-consumer shipments by 12 percent was colored green, indicating a 75 percent or greater probability of achievement. The initiative to bring a new plant online by the end of Q3 was coded red, meaning less than a 50 percent probability of success.

The company's results were hurt by a surprise at an operating unit, stemming from a risk that had been coded red in a unit workshop. Executives had agreed that some red risks were to be expected, but they decided that when a unit encountered a red issue, it must be communicated upward as soon as it was identified. This became a rule.

This process led to the creation of an ERM dashboard that listed initiatives in priority order, with the profile of each risk faced in the quarter, the risk profile trend, and a comment column for a year-end view. 

According to Warner, the key factors of success for ERM at Mars are as follows:

  • The initiative focused on achieving operational and strategic objectives rather than compliance, which refers to adhering to established rules and regulations.
  • The program evolved, often based on requests from business units, and incorporated continuous improvement. 
  • The ERM team did not overpromise. It set realistic objectives.
  • The ERM team periodically surveyed business units, management teams, and board advisers.

Enterprise Risk Management Example in Retail

Walmart is the world’s biggest retailer. As such, the company understands that its risk makeup is complex, given the geographic spread of its operations and its large number of stores, vast supply chain, and high profile as an employer and buyer of goods. 

In the 1990s, the company sought a simplified strategy for assessing risk and created an enterprise risk management plan with five steps founded on these four questions:

  • What are the risks?
  • What are we going to do about them?
  • How will we know if we are raising or decreasing risk?
  • How will we show shareholder value?

The process follows these five steps:

  • Risk Identification: Senior Walmart leaders meet in workshops to identify risks, which are then plotted on a graph of probability vs. impact. Doing so helps to prioritize the biggest risks. The executives then look at seven risk categories (both internal and external): legal/regulatory, political, business environment, strategic, operational, financial, and integrity. Many ERM pros use risk registers to evaluate and determine the priority of risks. You can download templates that help correlate risk probability and potential impact in "Free Risk Register Templates."
  • Risk Mitigation: Teams that include operational staff in the relevant area meet. They use existing inventory procedures to address the risks and determine if the procedures are effective.
  • Action Planning: A project team identifies and implements next steps over the several months to follow.
  • Performance Metrics: The group develops metrics to measure the impact of the changes. They also look at trends of actual performance compared to goal over time.
  • Return on Investment and Shareholder Value: In this step, the group assesses the changes’ impact on sales and expenses to determine if the moves improved shareholder value and ROI.

To develop your own risk management planning, you can download a customizable template in "Risk Management Plan Templates."

Enterprise Risk Management Example in Agriculture

United Grain Growers (UGG), a Canadian grain distributor that now is part of Glencore Ltd., was hailed as an ERM innovator and became the subject of business school case studies for its enterprise risk management program. This initiative addressed the risks associated with weather for its business. Crop volume drove UGG’s revenue and profits. 

In the late 1990s, UGG identified its major unaddressed risks. Using almost a century of data, risk analysts found that extreme weather events occurred 10 times as frequently as previously believed. The company worked with its insurance broker and the Swiss Re Group on a solution that added grain-volume risk (resulting from weather fluctuations) to its other insured risks, such as property and liability, in an integrated program. 

The result was insurance that protected grain-handling earnings, which comprised half of UGG’s gross profits. The greater financial stability significantly enhanced the firm’s ability to achieve its strategic objectives. 

Since then, the number and types of instruments to manage weather-related risks has multiplied rapidly. For example, over-the-counter derivatives, such as futures and options, began trading in 1997. The Chicago Mercantile Exchange now offers weather futures contracts on 12 U.S. and international cities. 

Weather derivatives are linked to climate factors such as rainfall or temperature, and they hedge different kinds of risks than do insurance. These risks are much more common (e.g., a cooler-than-normal summer) than the earthquakes and floods that insurance typically covers. And the holders of derivatives do not have to incur any damage to collect on them.

These weather-linked instruments have found a wider audience than anticipated, including retailers that worry about freak storms decimating Christmas sales, amusement park operators fearing rainy summers will keep crowds away, and energy companies needing to hedge demand for heating and cooling.

This area of ERM continues to evolve because weather and crop insurance are not enough to address all the risks that agriculture faces. Arbol, Inc. estimates that more than $1 trillion of agricultural risk is uninsured. As such, it is launching a blockchain-based platform that offers contracts (customized by location and risk parameters) with payouts based on weather data. These contracts can cover risks associated with niche crops and small growing areas.

Enterprise Risk Management Example in Insurance

Switzerland’s Zurich Insurance Group understands that risk is inherent for insurers and seeks to practice disciplined risk-taking, within a predetermined risk tolerance. 

The global insurer’s enterprise risk management framework aims to protect capital, liquidity, earnings, and reputation. Governance serves as the basis for risk management, and the framework lays out responsibilities for taking, managing, monitoring, and reporting risks. 

The company uses a proprietary process called Total Risk Profiling (TRP) to monitor internal and external risks to its strategy and financial plan. TRP assesses risk on the basis of severity and probability, and helps define and implement mitigating moves. 

Zurich’s risk appetite sets parameters for its tolerance within the goal of maintaining enough capital to achieve an AA rating from rating agencies. For this, the company uses its own Zurich economic capital model, referred to as Z-ECM. The model quantifies risk tolerance with a metric that assesses risk profile vs. risk tolerance. 

To maintain the AA rating, the company aims to hold capital between 100 and 120 percent of capital at risk. Above 140 percent is considered overcapitalized (therefore at risk of throttling growth), and under 90 percent is below risk tolerance (meaning the risk is too high). On either side of 100 to 120 percent (90 to 100 percent and 120 to 140 percent), the insurer considers taking mitigating action. 

Zurich's assessment of risk and the nature of those risks play a major role in determining how much capital regulators require the business to hold. A popular tool to assess risk is the risk matrix, and you can find a variety of templates in "Free, Customizable Risk Matrix Templates."

In 2020, Zurich found that its biggest exposures were market risk, such as falling asset valuations and interest-rate risk; insurance risk, such as big payouts for covered customer losses, which it hedges through diversification and reinsurance; credit risk in assets it holds and receivables; and operational risks, such as internal process failures and external fraud.

Enterprise Risk Management Example in Technology

Financial software maker Intuit has strengthened its enterprise risk management through evolution, according to a case study by former Chief Risk Officer Janet Nasburg. 

The program is founded on the following five core principles:

  • Use a common risk framework across the enterprise.
  • Assess risks on an ongoing basis.
  • Focus on the most important risks.
  • Clearly define accountability for risk management.
  • Commit to continuous improvement of performance measurement and monitoring. 

ERM programs grow according to a maturity model, and as capability rises, the shareholder value from risk management becomes more visible and important. 

The maturity phases include the following:

  • Ad hoc risk management addresses a specific problem when it arises.
  • Targeted or initial risk management approaches risks with multiple understandings of what constitutes risk, and management occurs in silos.
  • Integrated or repeatable risk management puts in place an organization-wide framework for risk assessment and response. 
  • Intelligent or managed risk management coordinates risk management across the business, using common tools. 
  • Risk leadership incorporates risk management into strategic decision-making. 

Intuit emphasizes using key risk indicators (KRIs) to understand risks, along with key performance indicators (KPIs) to gauge the effectiveness of risk management. 

Early in its ERM journey, Intuit measured performance on risk management process participation and risk assessment impact. For participation, the targeted rate was 80 percent of executive management and business-line leaders. This helped benchmark risk awareness and current risk management, at a time when ERM at the company was not mature.

The company conducts an annual risk assessment at corporate and business-line levels to plot risks so that the most likely and most impactful risks appear in the upper-right quadrant of the graph. Doing so focuses attention on these risks and helps business leaders understand their impact on performance toward strategic objectives.

In the company’s second phase of ERM, Intuit turned its attention to building risk management capacity and sought to ensure that risk management activities addressed the most important risks. The company evaluated performance using color-coded status symbols (red, yellow, green) to indicate risk trend and progress on risk mitigation measures.

In its third phase, Intuit moved to actively monitoring the most important risks and ensuring that leaders modified their strategies to manage risks and take advantage of opportunities. An executive dashboard uses KRIs, KPIs, an overall risk rating, and red-yellow-green coding. The board of directors regularly reviews this dashboard.

Over this evolution, the company has moved from narrow, tactical risk management to holistic, strategic, and long-term ERM.

Enterprise Risk Management Case Studies by Principle

ERM veterans agree that in addition to KPIs and KRIs, other principles are equally important to follow. Below, you'll find examples of enterprise risk management programs organized by principle.

ERM Principle #1: Make Sure Your Program Aligns with Your Values

Raytheon Case Study

U.S. defense contractor Raytheon states that its highest priority is delivering on its commitment to provide ethical business practices and abide by anti-corruption laws.

Raytheon backs up this statement through its ERM program. Among other measures, the company performs an annual risk assessment for each function, including the anti-corruption group under the Chief Ethics and Compliance Officer. In addition, Raytheon asks 70 of its sites to perform an anti-corruption self-assessment each year to identify gaps and risks. From there, a compliance team tracks improvement actions. 

Every quarter, the company surveys 600 staff members who may face higher anti-corruption risks, such as the potential for bribes. The survey asks them to report any potential issues in the past quarter.

Also on a quarterly basis, the finance and internal controls teams review higher-risk profile payments, such as donations and gratuities to confirm accuracy and compliance. Oversight and compliance teams add other checks, and they update a risk-based audit plan continuously.

ERM Principle #2: Embrace Diversity to Reduce Risk

State Street Global Advisors Case Study

In 2016, the asset management firm State Street Global Advisors introduced measures to increase gender diversity in its leadership as a way of reducing portfolio risk, among other goals.

The company relied on research that showed that companies with more women senior managers had a better return on equity, reduced volatility, and fewer governance problems such as corruption and fraud. 

Among the initiatives was a campaign to influence companies where State Street had invested, in order to increase female membership on their boards. State Street also developed an investment product that tracks the performance of companies with the highest level of senior female leadership relative to peers in their sector. 

In 2020, the company announced some of the results of its effort. Among the 1,384 companies targeted by the firm, 681 added at least one female director.

ERM Principle #3: Do Not Overlook Resource Risks

Infosys Case Study

India-based technology consulting company Infosys, which employs more than 240,000 people, has long recognized the risk of water shortages to its operations.

India’s rapidly growing population and development has increased the risk of water scarcity. A 2020 report by the World Wide Fund for Nature said 30 cities in India faced the risk of severe water scarcity over the next three decades. 

Infosys has dozens of facilities in India and considers water to be a significant short-term risk. At its campuses, the company uses the water for cooking, drinking, cleaning, restrooms, landscaping, and cooling. Water shortages could halt Infosys operations and prevent it from completing customer projects and reaching its performance objectives. 

In an enterprise risk assessment example, Infosys' ERM team conducts corporate water-risk assessments while sustainability teams produce detailed water-risk assessments for individual locations, according to a report by the World Business Council for Sustainable Development.

The company uses the COSO ERM framework to respond to the risks and decide whether to accept, avoid, reduce, or share these risks. The company uses root-cause analysis (which focuses on identifying underlying causes rather than symptoms) and the site assessments to plan steps to reduce risks. 

Infosys has implemented various water conservation measures, such as water-efficient fixtures and water recycling, rainwater collection and use, recharging aquifers, underground reservoirs to hold five days of water supply at locations, and smart-meter usage monitoring. Infosys’ ERM team tracks metrics for per-capita water consumption, along with rainfall data, availability and cost of water by tanker trucks, and water usage from external suppliers. 

In the 2020 fiscal year, the company reported a nearly 64 percent drop in per-capita water consumption by its workforce from the 2008 fiscal year. 

The business advantages of this risk management include an ability to open locations where water shortages may preclude competitors, and being able to maintain operations during water scarcity, protecting profitability.

ERM Principle #4: Fight Silos for Stronger Enterprise Risk Management

U.S. Government Case Study

The terrorist attacks of September 11, 2001, revealed that the U.S. government's then-current approach to managing intelligence was not adequate to address the threats, and, by extension, neither was its approach to managing risk. Since the Cold War, sensitive information had been managed on a "need to know" basis that resulted in data silos.

In the case of 9/11, this meant that different parts of the government knew some relevant intelligence that could have helped prevent the attacks. But no one had the opportunity to put the information together and see the whole picture. A congressional commission determined there were 10 lost operational opportunities to derail the plot. Silos existed between law enforcement and intelligence, as well as between and within agencies. 

After the attacks, the government moved toward greater information sharing and collaboration. Based on a task force’s recommendations, data moved from a centralized network to a distributed model, and social networking tools now allow colleagues throughout the government to connect. Staff began working across agency lines more often.

Enterprise Risk Management Examples by Scenario

While some scenarios are too unlikely to receive high-priority status, low-probability risks are still worth running through the ERM process. Robust risk management creates a culture and response capacity that better positions a company to deal with a crisis.

In the following enterprise risk examples, you will find scenarios and details of how organizations manage the risks they face.

Scenario: ERM and the Global Pandemic

While most businesses do not have the resources to do in-depth ERM planning for the rare occurrence of a global pandemic, companies with a risk-aware culture will be at an advantage if a pandemic does hit.

These businesses already have processes in place to escalate trouble signs for immediate attention and an ERM team or leader monitoring the threat environment. A strong ERM function gives clear and effective guidance that helps the company respond.

A report by Vodafone found that companies identified as “future ready” fared better in the COVID-19 pandemic. The attributes of future-ready businesses have a lot in common with those of companies that excel at ERM. These include viewing change as an opportunity; having detailed business strategies that are documented, funded, and measured; working to understand the forces that shape their environments; having roadmaps in place for technological transformation; and being able to react more quickly than competitors. 

Only about 20 percent of companies in the Vodafone study met the definition of “future ready.” But 54 percent of these firms had a fully developed and tested business continuity plan, compared to 30 percent of all businesses. And 82 percent felt their continuity plans worked well during the COVID-19 crisis. Nearly 50 percent of all businesses reported decreased profits, while 30 percent of future-ready organizations saw profits rise. 

Scenario: ERM and the Economic Crisis

The 2008 economic crisis in the United States resulted from the domino effect of rising interest rates, a collapse in housing prices, and a dramatic increase in foreclosures among mortgage borrowers with poor creditworthiness. This led to bank failures, a credit crunch, and layoffs, and the U.S. government had to rescue banks and other financial institutions to stabilize the financial system.

Some commentators said these events revealed the shortcomings of ERM because it did not prevent the banks’ mistakes or collapse. But Sim Segal, an ERM consultant and director of Columbia University’s ERM master’s degree program, analyzed how banks performed on 10 key ERM criteria. 

Segal says a risk-management program that incorporates all 10 criteria has these characteristics: 

  • Risk management has an enterprise-wide scope.
  • The program includes all risk categories: financial, operational, and strategic. 
  • The focus is on the most important risks, not all possible risks. 
  • Risk management is integrated across risk types.
  • Aggregated metrics show risk exposure and appetite across the enterprise.
  • Risk management incorporates decision-making, not just reporting.
  • The effort balances risk and return management.
  • There is a process for disclosure of risk.
  • The program measures risk in terms of potential impact on company value.
  • The focus of risk management is on the primary stakeholder, such as shareholders, rather than regulators or rating agencies.

In his book Corporate Value of Enterprise Risk Management , Segal concluded that most banks did not actually use ERM practices, which contributed to the financial crisis. He scored banks as failing on nine of the 10 criteria, only giving them a passing grade for focusing on the most important risks. 

Scenario: ERM and Technology Risk

The story of retailer Target’s failed expansion to Canada, where it shut down 133 loss-making stores in 2015, has been well documented. But one dimension that analysts have sometimes overlooked was Target’s handling of technology risk.

A case study by Canadian Business magazine traced some of the biggest issues to software and data-quality problems that dramatically undermined the Canadian launch. 

As with other forms of ERM, technology risk management requires companies to ask what could go wrong, what the consequences would be, how they might prevent the risks, and how they should deal with the consequences. 

But with its technology plan for Canada, Target did not heed risk warning signs. 

In the United States, Target had custom systems for ordering products from vendors, processing items at warehouses, and distributing merchandise to stores quickly. But that software would need customization to work with the Canadian dollar, metric system, and French-language characters. 

Target decided to go with new ERP software on an aggressive two-year timeline. As Target began ordering products for the Canadian stores in 2012, problems arose. Some items did not fit into shipping containers or on store shelves, and information needed for customs agents to clear imported items was not correct in Target's system. 

Target found that its supply chain software data was full of errors. Product dimensions were in inches, not centimeters; height and width measurements were mixed up. An internal investigation showed that only about 30 percent of the data was accurate. 

In an attempt to fix these errors, Target merchandisers spent a week double-checking with vendors up to 80 data points for each of the retailer’s 75,000 products. They discovered that the dummy data entered into the software during setup had not been altered. To make any corrections, employees had to send the new information to an office in India where staff would enter it into the system. 

As the launch approached, the technology errors left the company vulnerable to stockouts, few people understood how the system worked, and the point-of-sale checkout system did not function correctly. Soon after stores opened in 2013, consumers began complaining about empty shelves. Meanwhile, Target Canada distribution centers overflowed due to excess ordering based on poor data fed into forecasting software. 

The rushed launch compounded problems because it did not allow the company enough time to find solutions or alternative technology. While the retailer fixed some issues by the end of 2014, it was too late. Target Canada filed for bankruptcy protection in early 2015. 

Scenario: ERM and Cybersecurity

System hacks and data theft are major worries for companies. But as a relatively new field, cyber-risk management faces unique hurdles.

For example, risk managers and information security officers have difficulty quantifying the likelihood and business impact of a cybersecurity attack. The rise of cloud-based software exposes companies to third-party risks that make these projections even more difficult to calculate. 

As the field evolves, risk managers say it’s important for IT security officers to look beyond technical issues, such as the need to patch a vulnerability, and instead look more broadly at business impacts to make a cost-benefit analysis of risk mitigation. Frameworks such as the Risk Management Framework for Information Systems and Organizations by the National Institute of Standards and Technology can help.
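To make that idea concrete, the sketch below shows one common way to frame such a cost-benefit comparison: an annualized loss expectancy (event frequency times loss per event) before and after a control. It is a minimal illustration with hypothetical figures, not a method prescribed by the NIST framework or by any particular organization.

```python
# Hypothetical numbers purely for illustration; real estimates would come
# from the organization's own loss data and threat intelligence.

def annualized_loss_expectancy(event_frequency_per_year, loss_per_event):
    """Expected annual loss = how often an event occurs x what it costs."""
    return event_frequency_per_year * loss_per_event

# Baseline risk: an unpatched vulnerability exploited ~0.3 times/year, ~$250k per incident.
ale_before = annualized_loss_expectancy(0.3, 250_000)

# After mitigation: patching and added controls cut frequency to ~0.05 times/year.
ale_after = annualized_loss_expectancy(0.05, 250_000)
control_cost = 40_000  # assumed annual cost of the mitigation program

net_benefit = (ale_before - ale_after) - control_cost
print(f"ALE before: ${ale_before:,.0f}, after: ${ale_after:,.0f}, "
      f"net benefit of control: ${net_benefit:,.0f}")
```

A positive net benefit suggests the control is worth funding under these assumptions; a negative one suggests the money is better spent on a different risk.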

Health insurer Aetna considers cybersecurity threats as a part of operational risk within its ERM framework and calculates a daily risk score, adjusted with changes in the cyberthreat landscape. 

Aetna studies threats from external actors by working through information sharing and analysis centers for the financial services and health industries. Aetna staff reverse-engineers malware to determine controls. The company says this type of activity helps ensure the resiliency of its business processes and greatly improves its ability to help protect member information.

For internal threats, Aetna uses models that compare current user behavior to past behavior and identify anomalies. (The company says it was the first organization to do this at scale across the enterprise.) Aetna gives staff permissions to networks and data based on what they need to perform their job. This segmentation restricts access to raw data and strengthens governance. 
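Aetna’s production models are not public; as a rough illustration of the behavioral-baseline idea, the sketch below flags a user whose current activity deviates sharply from that user’s own history using a simple z-score. The data, metric, and threshold are assumptions for illustration only.

```python
from statistics import mean, stdev

def anomaly_score(history, current):
    """Return how many standard deviations the current value sits above
    the user's own historical baseline (a simple z-score)."""
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else (current - mu) / sigma

# Hypothetical daily counts of records accessed by one user over two weeks.
baseline = [110, 95, 120, 130, 105, 98, 115, 102, 125, 118, 108, 99, 121, 112]
today = 640  # a sudden spike in records accessed

score = anomaly_score(baseline, today)
if score > 3:  # illustrative threshold; real systems tune this per role and data type
    print(f"Flag for review: z-score {score:.1f}")
```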

Another risk initiative scans outgoing employee emails for code patterns, such as credit card or Social Security numbers. The system flags the email, and a security officer assesses it before the email is released.
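A minimal sketch of that kind of outbound scan appears below, using simple regular expressions for Social Security and card-number patterns. Real data-loss-prevention tooling is far stricter (format validation, Luhn checks, context rules); the patterns and workflow here are illustrative assumptions, not the insurer’s actual system.

```python
import re

# Deliberately simple patterns for illustration; production DLP tools use
# stricter validation (e.g., Luhn checks for card numbers) and more formats.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def flag_outgoing_email(body: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an email body."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(body)]

hits = flag_outgoing_email("Please bill card 4111 1111 1111 1111 for the claim.")
if hits:
    print(f"Hold for security review: matched {hits}")
```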

Examples of Poor Enterprise Risk Management

Case studies of failed enterprise risk management often highlight mistakes that managers could and should have spotted — and corrected — before a full-blown crisis erupted. The focus of these examples is often on determining why that did not happen. 

ERM Case Study: General Motors

In 2014, General Motors recalled the first of what would become 29 million cars due to faulty ignition switches and paid compensation for 124 related deaths. GM knew of the problem for at least 10 years but did not act, the automaker later acknowledged. The company entered a deferred prosecution agreement and paid a $900 million penalty. 

Pointing to the length of time the company failed to disclose the safety problem, ERM specialists say it shows the problem did not reside with a single department. “Rather, it reflects a failure to properly manage risk,” wrote Steve Minsky, a writer on ERM and CEO of an ERM software company, in Risk Management magazine. 

“ERM is designed to keep all parties across the organization, from the front lines to the board to regulators, apprised of these kinds of problems as they become evident. Unfortunately, GM failed to implement such a program, ultimately leading to a tragic and costly scandal,” Minsky said.

Also in the auto sector, an enterprise risk management case study of Toyota looked at its problems with unintended acceleration of vehicles from 2002 to 2009. Several studies, including a case study by Carnegie Mellon University Professor Phil Koopman, blamed poor software design and company culture. A whistleblower later revealed a coverup by Toyota. The company paid more than $2.5 billion in fines and settlements.

ERM Case Study: Lululemon

In 2013, following customer complaints that its black yoga pants were too sheer, the athletic apparel maker recalled 17 percent of its inventory at a cost of $67 million. The company had previously identified risks related to fabric supply and quality. The CEO said the issue was inadequate testing. 

Analysts raised concerns about the company’s controls, including oversight of factories and product quality. A case study by Stanford University professors noted that Lululemon’s episode illustrated a common disconnect between identifying risks and being prepared to manage them when they materialize. Lululemon’s reporting and analysis of risks was also inadequate, especially as related to social media. In addition, the case study highlighted the need for a system to escalate risk-related issues to the board. 

ERM Case Study: Kodak 

Once an iconic brand, the photo film company failed for decades to act on the threat that digital photography posed to its business and eventually filed for bankruptcy in 2012. The company’s own research in 1981 found that digital photos could ultimately replace Kodak’s film technology and estimated it had 10 years to prepare. 

Unfortunately, Kodak did not prepare and stayed locked into the film paradigm. The board reinforced this course when in 1989 it chose as CEO a candidate who came from the film business over an executive interested in digital technology. 

Had the company acknowledged the risks and employed ERM strategies, it might have pursued a variety of strategies to remain successful. The company’s rival, Fuji Film, took the money it made from film and invested in new initiatives, some of which paid off. Kodak, on the other hand, kept investing in the old core business.

Case Studies of Successful Enterprise Risk Management

Successful enterprise risk management usually requires strong performance in multiple dimensions, and is therefore more likely to occur in organizations where ERM has matured. The following examples of enterprise risk management can be considered success stories. 

ERM Case Study: Statoil 

A major global oil producer, Statoil of Norway stands out for the way it practices ERM by looking at both downside risk and upside potential. Taking risks is vital in a business that depends on finding new oil reserves. 

According to a case study, the company developed its own framework founded on two basic goals: creating value and avoiding accidents.

The company aims to understand risks thoroughly, and unlike many ERM programs, Statoil maps risks on both the downside and upside. It graphs risk on probability vs. impact on pre-tax earnings, and it examines each risk from both positive and negative perspectives. 

For example, the case study cites a risk that the company assessed as having a 5 percent probability of a somewhat better-than-expected outcome but a 10 percent probability of a significant loss relative to forecast. In this case, the downside risk was greater than the upside potential.
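The case study reports the probabilities but not the sizes of the outcomes, so the sketch below fills in hypothetical impact figures to show how an upside/downside view can be reduced to a single expected effect on pre-tax earnings.

```python
# Illustrative figures only: the case study gives the probabilities, but the
# impact amounts below are assumptions made for the sake of the arithmetic.

upside_prob, upside_impact = 0.05, 20_000_000       # 5% chance of a modest gain vs. forecast
downside_prob, downside_impact = 0.10, -150_000_000  # 10% chance of a significant loss

expected_effect = upside_prob * upside_impact + downside_prob * downside_impact
print(f"Expected effect on pre-tax earnings: ${expected_effect:,.0f}")
# A negative expected effect signals that, as in the case study,
# the downside risk outweighs the upside potential.
```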

ERM Case Study: Lego 

The Danish toy maker’s ERM evolved over the following four phases, according to a case study by one of the chief architects of its program:

  • Traditional management of financial, operational, and other risks. Strategic risk management joined the ERM program in 2006. 
  • The company added Monte Carlo simulations in 2008 to model financial performance volatility so that budgeting and financial processes could incorporate risk management. The technique is used in budget simulations, to assess risk in its credit portfolio, and to consolidate risk exposure. (A minimal simulation sketch follows this list.)
  • Active risk and opportunity planning is part of making a business case for new projects before final decisions.
  • The company prepares for uncertainty so that long-term strategies remain relevant and resilient under different scenarios. 
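As a minimal sketch of the Monte Carlo idea referenced above, the example below draws revenue growth and cost inflation from assumed distributions and reports percentiles of simulated operating profit. The planning figures and distributions are invented for illustration and are not Lego’s.

```python
import random

random.seed(7)

def simulate_operating_profit(n_runs=10_000):
    """Draw revenue growth and cost inflation from assumed distributions and
    return the sorted distribution of simulated operating profit."""
    results = []
    for _ in range(n_runs):
        revenue = 1_000 * (1 + random.gauss(0.04, 0.08))  # planned 1,000 with uncertain growth
        costs = 800 * (1 + random.gauss(0.03, 0.05))      # planned 800 with uncertain inflation
        results.append(revenue - costs)
    return sorted(results)

profits = simulate_operating_profit()
p5, p50, p95 = (profits[int(len(profits) * q)] for q in (0.05, 0.50, 0.95))
print(f"Operating profit p5={p5:.0f}, median={p50:.0f}, p95={p95:.0f}")
```

The spread between the 5th and 95th percentiles is the kind of volatility measure a budgeting process can then plan around.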

As part of its scenario modeling, Lego developed its PAPA (park, adapt, prepare, act) model; a minimal classification sketch follows the list below.

  • Park: The company parks risks that occur slowly and have a low probability of happening, meaning it does not forget nor actively deal with them.
  • Adapt: This response is for risks that evolve slowly and are certain or highly probable to occur. For example, a risk in this category is the changing nature of play and the evolution of buying power in different parts of the world. In this phase, the company adjusts, monitors the trend, and follows developments.
  • Prepare: This category includes risks that have a low probability of occurring — but when they do, they emerge rapidly. These risks go into the ERM risk database with contingency plans, early warning indicators, and mitigation measures in place.
  • Act: These are high-probability, fast-moving risks that must be acted upon to maintain strategy. For example, developments around connectivity, mobile devices, and online activity are in this category because of the rapid pace of change and the influence on the way children play. 
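Because the PAPA categories are defined by just two dimensions, probability and speed of emergence, the model can be expressed as a small lookup. The sketch below is simply a rendering of the published description; the example risks are paraphrased from the case study or hypothetical.

```python
def papa_category(probability: str, speed: str) -> str:
    """Classify a risk using the two dimensions of the PAPA model:
    how likely it is and how quickly it would emerge."""
    table = {
        ("low", "slow"): "Park",
        ("high", "slow"): "Adapt",
        ("low", "fast"): "Prepare",
        ("high", "fast"): "Act",
    }
    return table[(probability, speed)]

# First two entries paraphrase risks named in the case study; the third is hypothetical.
risks = [
    ("Changing nature of play", "high", "slow"),
    ("Shift to mobile and online play", "high", "fast"),
    ("Rare, fast-moving supply-chain disruption", "low", "fast"),
]
for name, prob, speed in risks:
    print(f"{name}: {papa_category(prob, speed)}")
```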

Lego views risk management as a way to better equip itself to take risks than its competitors. In the case study, the writer likens this approach to the need for the fastest race cars to have the best brakes and steering to achieve top speeds.

ERM Case Study: University of California 

The University of California, one of the biggest U.S. public university systems, introduced a new view of risk to its workforce when it implemented enterprise risk management in 2005. Previously, the function was merely seen as a compliance requirement.

ERM became a way to support the university’s mission of education and research, drawing on collaboration of the system’s employees across departments. “Our philosophy is, ‘Everyone is a risk manager,’” Erike Young, deputy director of ERM, told Treasury and Risk magazine. “Anyone who’s in a management position technically manages some type of risk.”

The university faces a diverse set of risks, including cybersecurity, hospital liability, reduced government financial support, and earthquakes.  

The ERM department had to overhaul systems to create a unified view of risk because its information and processes were not linked. Software enabled both an organizational picture of risk and highly detailed drilldowns on individual risks. Risk managers also developed tools for risk assessment, risk ranking, and risk modeling. 

Better risk management has provided more than $100 million in annual cost savings and nearly $500 million in cost avoidance, according to UC officials. 

UC drives ERM with risk management departments at each of its 10 locations and leverages university subject matter experts to form multidisciplinary workgroups that develop process improvements.

APQC, a standards quality organization, recognized UC as a top global ERM practice organization, and the university system has won other awards. The university says in 2010 it was the first nonfinancial organization to win credit-rating agency recognition of its ERM program.

Examples of How Technology Is Transforming Enterprise Risk Management

Business intelligence software has propelled major progress in enterprise risk management because the technology enables risk managers to bring their information together, analyze it, and forecast how risk scenarios would impact their business.

ERM organizations are using computing and data-handling advancements such as blockchain for new innovations in strengthening risk management. Following are case studies of a few examples.

ERM Case Study: Bank of New York Mellon 

In 2021, the bank joined with Google Cloud to use machine learning and artificial intelligence to predict and reduce the risk that transactions in the $22 trillion U.S. Treasury market will fail to settle. Settlement failure means a buyer and seller do not exchange cash and securities by the close of business on the scheduled date. 

The party that fails to settle is assessed a daily financial penalty, and a high level of settlement failures can indicate market liquidity problems and rising risk. BNY says that, on average, about 2 percent of transactions fail to settle.

The bank trained models with millions of trades to consider every factor that could result in settlement failure. The service uses market-wide intraday trading metrics, trading velocity, scarcity indicators, volume, the number of trades settled per hour, seasonality, issuance patterns, and other signals. 

The bank said it predicts about 40 percent of settlement failures with 90 percent accuracy. But it also cautioned against overconfidence in the technology as the model continues to improve. 

AI-driven forecasting reduces risk for BNY clients in the Treasury market and saves costs. For example, a predictive view of settlement risks helps bond dealers more accurately manage their liquidity buffers, avoid penalties, optimize their funding sources, and offset the risks of failed settlements. In the long run, such forecasting tools could improve the health of the financial market. 
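BNY’s models and features are proprietary, so the sketch below only illustrates the general pattern: train a classifier on trade-level features, then score open trades and surface the ones most likely to fail. Synthetic data and a plain logistic regression stand in for the real system; the feature names are assumptions.

```python
# A minimal sketch only: synthetic data and a simple model stand in for the
# bank's proprietary models and feature set described in the case study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical trade-level features: scarcity indicator, trading velocity, volume.
X = rng.normal(size=(n, 3))
# Synthetic ground truth: failures become more likely as scarcity rises.
fail_probability = 1 / (1 + np.exp(-(X[:, 0] * 2.0 - 2.5)))
y = rng.random(n) < fail_probability

model = LogisticRegression().fit(X, y)

# Score today's open trades and surface the riskiest for early intervention.
todays_trades = rng.normal(size=(10, 3))
risk = model.predict_proba(todays_trades)[:, 1]
ranked = sorted(enumerate(risk), key=lambda t: t[1], reverse=True)
for trade_id, p in ranked[:3]:
    print(f"Trade {trade_id}: predicted settlement-failure probability {p:.0%}")
```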

ERM Case Study: PwC

Consulting company PwC has leveraged a vast information storehouse known as a data lake to help its customers manage risk from suppliers.

A data lake stores both structured and unstructured information, meaning data in highly organized, standardized formats as well as unstandardized data. This means that everything from raw audio to credit card numbers can live in a data lake.

Using techniques pioneered in national security, PwC built a risk data lake that integrates information from client companies, public databases, user devices, and industry sources. Algorithms find patterns that can signify unidentified risks.

One of PwC’s first uses of this data lake was a program to help companies uncover risks from their vendors and suppliers. Companies can violate laws, harm their reputations, suffer fraud, and risk their proprietary information by doing business with the wrong vendor. 

Today’s complex global supply chains mean companies may be several degrees removed from the source of this risk, which makes it hard to spot and mitigate. For example, a product made with outlawed child labor could be traded through several intermediaries before it reaches a retailer. 

PwC’s service helps companies recognize risk beyond their primary vendors and continue to monitor that risk over time as more information enters the data lake.

ERM Case Study: Financial Services

As analytics have become a pillar of forecasting and risk management for banks and other financial institutions, a new risk has emerged: model risk . This refers to the risk that machine-learning models will lead users to an unreliable understanding of risk or have unintended consequences.

For example, a 6 percent drop in the value of the British pound over the course of a few minutes in 2016 stemmed from currency trading algorithms that spiraled into a negative loop. A Twitter-reading program began automated selling of the pound after comments by a French official, and other selling algorithms kicked in once the currency dropped below a certain level.

U.S. banking regulators are so concerned about model risk that the Federal Reserve set up a model validation council in 2012 to assess the models that banks use in running risk simulations for capital adequacy requirements. Regulators in Europe and elsewhere also require model validation.

A form of managing risk from a risk-management tool, model validation is an effort to reduce risk from machine learning. The technology-driven rise in modeling capacity has caused such models to proliferate, and banks can use hundreds of models to assess different risks. 

Model risk management can reduce rising costs for modeling by an estimated 20 to 30 percent by building a validation workflow, prioritizing models that are most important to business decisions, and implementing automation for testing and other tasks, according to McKinsey.
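One piece of that workflow, prioritizing which models to validate first by business impact and staleness, can be sketched as follows. The tiering rule, field names, and inventory entries are illustrative assumptions, not McKinsey’s or any bank’s actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    business_impact: int        # 1 (low) to 5 (drives major capital or trading decisions)
    months_since_validation: int

# Hypothetical inventory entries; a real bank may track hundreds of models.
inventory = [
    Model("credit-loss-forecast", 5, 14),
    Model("branch-staffing", 2, 30),
    Model("fx-hedging-signal", 4, 3),
]

def validation_priority(m: Model) -> int:
    """Simple tiering rule: high-impact models that have gone longest
    without validation get reviewed first."""
    return m.business_impact * max(m.months_since_validation, 1)

for m in sorted(inventory, key=validation_priority, reverse=True):
    print(f"{m.name}: priority score {validation_priority(m)}")
```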


A case study exploring field-level risk assessments as a leading safety indicator

E.J. Haas and B.P. Connor
Lead research behavioral scientist and research behavioral scientist, respectively, National Institute for Occupational Safety and Health, Pittsburgh, PA, USA

J. Vendetti
Manager, mining operations, Solvay Soda Ash & Derivatives North America, Green River, WY, USA

R. Heiser
CSP, Mine production superintendent, Solvay Chemicals Inc., Green River, WY, USA

Health and safety indicators help mine sites predict the likelihood of an event, advance initiatives to control risks, and track progress. Although risk assessments are useful for encouraging individuals within mining companies to work together to identify such indicators, executing them comes with challenges. Specifically, varying or inaccurate perceptions of risk, in addition to trust and buy-in of a risk management system, contribute to inconsistent levels of participation in risk programs. This paper focuses on one trona mine’s experience in the development and implementation of a field-level risk assessment program to help its organization understand and manage risk to an acceptable level. Through a transformational process of ongoing leadership development, support and communication, Solvay Green River fostered a culture grounded in risk assessment, safety interactions and hazard correction. The application of consistent risk assessment tools was critical to create a participatory workforce that not only talks about safety but actively identifies factors that contribute to hazards and potential incidents. In this paper, reflecting on the mine’s previous process of risk-assessment implementation provides examples of likely barriers that sites may encounter when trying to document and manage risks, as well as a variety of mini case examples that showcase how the organization worked through these barriers to facilitate the identification of leading indicators to ultimately reduce incidents.

Introduction

Work-related health and safety incidents often account for lost days on the job, contributing to organizational/financial and personal/social burdens ( Blumenstein et al., 2011 ; Pinto, Nunes and Ribeiro, 2011 ). Accompanying research demonstrates that risk and ambiguity around risk contribute to almost every decision that individuals make throughout the day ( Golub, 1997 ; Suijs, 1999 ). In response, understanding individual attitudes toward risk has been linked to predicting health and safety behavior ( Dohmen et al., 2011 ). Although an obvious need exists to identify more comprehensive methods to assess and mitigate potential hazards, some argue that risk management is not given adequate attention in occupational health and safety ( Haslam et al., 2016 ). Additionally, research suggests that a current lack of knowledge, skills and motivation are primary barriers to worker participation in mitigating workplace risks ( Dohmen et al., 2011 ; Golub, 1997 ; Haslam et al., 2016 ; Suijs, 1999 ). Therefore, enhancing knowledge and awareness around risk-based decisions, including individuals’ abilities to understand, measure and assign levels of risk to determine an appropriate response, is increasingly important in hazardous environments to predict and prevent incidents.

This paper focuses on one field-level risk assessment (FLRA) program, including a matrix that anyone can use to assess site-wide risks and common barriers to participating in such activities. We use a trona mine in Green River, WY, to illustrate that a variety of methods may be needed to successfully implement a proactive risk management program. By discussing the mine’s tailored FLRA program, this paper contributes to the literature by providing (1) common barriers that may prevent proactive risk assessment programs in the workplace and (2) case examples in the areas of teamwork, front-line leadership development, and tangible and intangible communication efforts to foster a higher level of trust and empowerment among the workforce.

Risk assessment practices to reveal leading indicators

Risk assessment is a process used to gather knowledge and information around a specific health threat or safety hazard ( Smith and Harrison, 2005 ). Based on the probability of a negative incident, risk assessment also includes determining whether or not the level of risk is acceptable ( Lindhe et al., 2010 ; International Electrotechnical Commission, 1995 ; Pinto, Nunes and Ribeiro, 2011 ). Risk assessments can occur quantitatively or qualitatively. Research values both types in high-risk occupations to ensure that all possible hazards and outcomes have been identified, considered and reduced, if needed ( Boyle, 2012 ; Haas and Yorio, 2016 ; Hallenbeck, 1993 ; International Council on Mining & Metals (ICMM), 2012 ; World Health Organization (WHO), 2008 ). Quantitative methods are commonly found where the site is trying to reduce a specific health or environmental exposure, such as respirable dust or another toxic substance ( Van Ryzin, 1980 ). These methods focus on a specific part of an operation or task within a system, rather than the system as a whole ( Lindhe et al., 2010 ). Conversely, a qualitative approach is useful for potential or recently identified risks to decide where more detailed assessments may be needed and prioritize actions ( Boyle, 2012 ; ICMM, 2012 ; WHO, 2008 ).

Although mine management can use risk assessments to inform procedural decisions and policy changes, they are more often used by workers to identify, assess and respond to worksite risks. A common risk assessment practice is to formulate a matrix that prompts workers to identify and consider the likelihood of a hazardous event and the severity of the outcome to yield a risk ranking ( Pinto, Nunes and Ribeiro, 2011 ). After completing such a matrix and referring to the discretized scales, any organizational member should be able to determine and anticipate the risk of a hazard, action or situation, from low to high ( Bartram, 2009 ; Hokstad et al., 2010 ; Rosén et al., 2006 ). The combination of these two “scores” is used to determine whether the risk is acceptable, and subsequently, to identify an appropriate response. For example, a list of hazards may be developed and evaluated for future interventions, depending upon the severity and probability of the hazards. Additionally, risk assessments often reveal a prioritization of identified risks that inform where risk-reduction actions are more critical ( Lindhe et al., 2010 ), which may result in changes to a policy or protocol ( Boyle, 2012 ).

If initiated and completed consistently, risk assessments allow root causes of accidents and patterns of risky behavior to emerge — in other words, leading indicators ( Markowski, Mannan and Bigoszewska, 2009 ). Leading indicators demonstrate pre-incident trends rather than direct measures of performance, unlike lagging indicators such as incident rates, and as a result, are useful for worker knowledge and motivation ( Juglaret et al., 2011 ). Recently, high-risk industries have allocated more resources to preventative activities — not only to prevent injuries but also to avoid the financial costs associated with incidents — which has produced encouraging results ( Maniati, 2014 ; Robson et al., 2007 ). However, research has pointed to workers’ general confusion about the interpretation of hazards and assignment of probabilities as a hindrance to appropriate risk identification and response ( Apeland, Aven and Nilsen, 2002 ; Reason, 2013 ). In response, better foresight into the barriers of risk management is needed to (1) engage workers in risk identification and assessment, and (2) develop pragmatic solutions to prevent incidents.

Methods and materials

In December 2015, Haas and Connor, two U.S. National Institute for Occupational Safety and Health (NIOSH) researchers, traveled to Solvay Green River’s mine in southwest Wyoming. This trona mine produces close to 3 Mt/a of soda ash using a combination of longwall and solution mining and borer miners ( Fiscor, 2015 ). A health, safety and risk management framework had been introduced in phases during 2009 and 2010 to the mine’s workforce of more than 450 to help reduce risks to an acceptable level, and NIOSH wanted to understand all aspects of this FLRA program and how it became integrated into everyday work processes. We collected an extensive amount of qualitative data, analyzed the material and triangulated the results to inform a case study in health and safety system implementation ( Denzin and Lincoln, 2000 ; Pattson, 2002 ; Yin, 2014 ). The combination of expert interviews, existing documentary materials, and observation of onsite activities provided a holistic view of both post-hoc and current data points, allowing for various contexts to be compared and contrasted to determine consistency and saturation of the data ( Wrede, 2013 ).

Participants

We collected several qualitative data points, including all-day expert interviews and discussions with mine-site senior-level management such as the mine manager, health and safety manager, and mine foremen/supervisors, some of whom were hourly workers at the time of the risk assessment program implementation ( Flick, 2009 ). Additionally, we heard presentations from the mine managers and site supervisors, received archived risk assessment documents and were able to engage in observations on the surface and in the underground mine operation during the visit, where several mineworkers engaged in conversations about the FLRA, hazard interactions, and general safety culture on site.

Retrospective data analysis of risk assessment in action

Typically, qualitative analysis and triangulation of case study data use constant comparison techniques, sometimes within a grounded theory framework ( Corbin and Strauss, 2008 ; Glaser and Strauss, 1967 ). We employed the constant comparison method within a series of iterative coding steps. First, we typed the field notes and interview notes, and scanned the various risk assessment example documents received during the visit. Each piece of data was coded for keywords and themes through an initial, focused and then constant comparison approach ( Boyatzis, 1998 ; Fram, 2013 ).

Throughout the paper, quotes and examples from employees who participated in the visit are shared to better demonstrate their process to establish the FLRA program. To address the reliability and validity of our interpretation of the data, the two primary, expert information providers during the field visit, Vendetti and Heiser, became coauthors and served as member checkers of the data to ensure all information was described in a way that is accurate and appropriate for research translation to other mine sites ( Kitchener, 2002 ).

It is important to know that in 2009 Solvay experienced a sharp increase in incidents in its more-than-450-employee operation. Although no fatalities occurred, there were three major amputations, and injury frequencies were increasing steadily. The root causes of these incidents, whose injuries included torn ligaments/tendons/muscles requiring surgical repair or restricted duty, lacerations requiring sutures, and fractures ( Mine Safety and Health Administration, 2017 ), showed that inconsistent perceptions of risk and mitigation efforts were occurring on site among all types of work positions, from bolters to maintenance workers. These incidents caused frustration and disappointment among the workforce.

Intervention implementation, pre- and post-FLRA program

Faced with inconsistencies in worker knowledge of risks and varying levels of risk tolerance, management could have taken a punitive, “set an example” response, based on an accountability framework. Instead, they began a process in 2009 to bring new tools, methods and mindset to safety performance at the site. Specifically, based on previous research and experience, such as from 1998, they saw the advantages of creating a common, site-wide set of tools and metrics to guide workers in a consistent approach to risk assessment in the field. This involvement trickled down to hourly workers in the form of a typical risk assessment matrix ( Table 1 ) described earlier to identify, assess and evaluate risks. Management indicated that if everyone had tools, then “It doesn’t matter what you knew or what you didn’t, you had tools to assess and manage a situation.” They hypothesized that matrices populated by workers would reveal leading indicators to proactively identify and prevent incidents that had been occurring on site. Workers were expected to utilize this matrix daily to help identify and evaluate risks.

Risk assessment matrix used by Solvay ( Heiser and Vendetti, 2015 ).

Probability (rows) by Consequence (columns); each cell is the product of the two scores.

             Consequence
Probability   1    2    3    4    5
     1        1    2    3    4    5
     2        2    4    6    8   10
     3        3    6    9   12   15
     4        4    8   12   16   20
     5        5   10   15   20   25

To complete the matrix, workers rate the probability and consequences of a risk using the scales/key depicted in Table 2 . As shown in the matrix, multiplying the scores for these two areas yields a risk ranking of low, moderate, high or critical, thereby providing guidance on what energies or hazards to mitigate immediately. Although the matrix approach, specifically, may not be new to the industry, the implementation and evaluation of such efforts offer value in the form of heightened engagement, leadership and eventually behavior change.

Evaluation matrix key ( Heiser and Vendetti, 2015 ).

Probability:
  1. RARE, practically impossible
  2. UNLIKELY, not likely to occur
  3. MODERATE, possibility to occur
  4. LIKELY, to happen at some point
  5. ALMOST CERTAIN, to happen

Consequence:
  1. Could cause 1st aid injury/minor damage
  2. Could cause minor injuries (recordable)
  3. Could cause moderate damage (LTA)
  4. Could cause permanent disability or fatality
  5. Could cause multiple fatalities

Assessment (probability multiplied by consequence):
  15–25: CRITICAL
  9–12: HIGH
  5–8: MODERATE
  1–4: LOW
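The scoring logic of Tables 1 and 2 can be written compactly, as in the sketch below. This is simply a rendering of the published matrix and assessment bands, not software used at the site.

```python
def risk_rank(probability: int, consequence: int) -> str:
    """Multiply the two 1-5 scores from the Solvay matrix (Tables 1 and 2)
    and map the product onto the published assessment bands."""
    if not (1 <= probability <= 5 and 1 <= consequence <= 5):
        raise ValueError("Both scores must be integers from 1 to 5")
    score = probability * consequence
    if score >= 15:
        return "CRITICAL"
    if score >= 9:
        return "HIGH"
    if score >= 5:
        return "MODERATE"
    return "LOW"

# Example: a hazard judged LIKELY (4) to cause a recordable minor injury (2).
print(risk_rank(4, 2))  # 4 x 2 = 8 -> MODERATE
```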

By observing incidents after the implementation of the FLRA intervention in 2009 and the front-line leadership efforts in 2010, much can be learned about where and how the program made an impact on site. Figure 1 shows Green River’s 2009 spike in non-fatal days lost (NFDL) incidents and a consistent drop thereafter, providing cursory support for the program.

Figure 1. Solvay non-fatal days lost operator injuries, 2006–2016 ( MSHA, 2017 ).

Seeing a drop in incidents provides initial support for the FLRA program that Solvay introduced. Because many covariates may account for a drop in incidents, however, additional data were garnered from MSHA’s website to account for hours worked. Still, the incident rate declined consistently, as shown in Fig. 2.

Figure 2. Non-fatal days lost operator injury incidence rate (injuries by hours worked), 2006–2016 ( MSHA, 2017 ).
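For readers unfamiliar with the normalization behind Fig. 2, the sketch below computes an injury incidence rate per 200,000 employee-hours, the rate basis commonly used in MSHA statistics (roughly 100 full-time workers for a year). The injury and hours figures are hypothetical, not Solvay's.

```python
def nfdl_incidence_rate(injuries: int, hours_worked: float) -> float:
    """Injuries normalized per 200,000 employee-hours, roughly the annual
    hours of 100 full-time workers."""
    return injuries * 200_000 / hours_worked

# Hypothetical year: 12 NFDL injuries across 950,000 hours worked.
print(f"NFDL incidence rate: {nfdl_incidence_rate(12, 950_000):.2f}")
```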

From a quantitative tracking effort of these lagging indicators, it can be gleaned that the implemented program was successful. However, it is important to understand what, how and why incidents decreased over time to maintain consistency in implementation and evaluation efforts. In response, this paper focuses on the qualitative data that NIOSH collected in hopes of sharing how common barriers to risk assessment can be addressed to identify leading indicators on site.

During the iterative analysis of the data, researchers sorted the initial and ongoing barriers to continuous risk assessment. The results provide insight into promising ways to measure and document as well as support and manage a risk-based program over several years. After common barriers to risk assessment implementation are discussed, mini case examples to illustrate how the organization improved and used their FLRA process to identify leading indicators follow. Ultimately, these barriers and organizational responses show that an FLRA program can help (1) measure direct/indirect precursors to harm and provide opportunities for preventative action, (2) allow the discovery of proactive leadership risk reduction strategies, and (3) provide warning before an undesired event occurs and develop a database of response strategies ( Blumenstein et al., 2011 ; ICMM, 2012 ).

Barrier to risk assessment intervention: Varying levels of risk tolerance and documentation

An initial challenge, not uncommon in occupational health and safety, was the varying levels of risk tolerance possessed by the workforce. Research shows that individuals have varying levels of knowledge, awareness and tolerance in their abilities to recognize and perceive risks as unacceptable ( Brun, 1992 ; Reason, 2013 ; Ruan, Liu and Carchon, 2003 ). Managers and workers reflected that assessments of a risk were quite broad, having an impact on the organization’s ability to consistently identify and categorize hazards. One employee who was an hourly worker at the time of the FLRA implementation said, “It took time to establish a sensitivity to potential hazards.” This is not particularly surprising; as individuals gain experience, they can become complacent with health and safety risks and, eventually, have a lower sense of perceived susceptibility and severity of a negative outcome ( Zohar and Erev, 2006 ). As a result, their ability to consistently notice a hazard and believe that it poses a threat to their personal health and safety decreases. The health and safety manager said, “It took a long time to get through to people that this isn’t the same as what they do every day. To really assess a risk you have to mentally stop what you’re doing and consider something.”

Eventually, management developed an understanding that risk tolerance differed individually and generationally on site, acknowledging that sources of risk are always changing in some regard and tend to be more complicated for some employees to see than others. In response, ongoing discussions about the importance of conscious risk management efforts were used to support a new level of awareness on site. Additionally, the value of documenting risk assessment efforts on an individual and group level became more apparent. One area emphasized was encouraging team communication around risk assessment when it was warranted. An example of this process and outcome is detailed below to help elucidate how Solvay overcame disparate perceptions of risk through teamwork.

Case example: FLRA discussion and documentation in action

An example of the FLRA in action as a leading indicator was provided by the maintenance supervisor during the visit. This example included an installation of a horizontal support beam. Workers collectively completed an FLRA to determine if they could simply remove the gantry system without compromising the integrity of the headframe. As part of their FLRA process, workers were expected to identify energies/hazards that could exist during this job task. Hazards that they recorded for this process for consideration within the matrix as possible indicators included:

  • Working from heights/falling.
  • Striking against/being struck by objects.
  • Pinch points.
  • Traction and balance.
  • Hand placement.
  • Caught in/on/between objects.

An initial risk rank was provided for each of the identified hazards, based on the matrix ( Tables 1 and 2 ). Based on the initial risk rank, workers decided which controls to implement to minimize the risk to an acceptable level. Examples of controls implemented included:

  • Review the critical lift plan.
  • Conduct a pre-job safety and risk assessment meeting.
  • Inspect all personal protective equipment (PPE) fitting and harnesses.
  • Understand structural removal sequence.
  • Communicate between crane operator and riggers.
  • Assure 100 percent of tie-off protocol is followed.
  • Watch out for coworkers.
  • Participate in housekeeping activities.

Upon determining and implementing controls, a final risk rank was rendered to make a decision for the job task: whether or not the headframe could be removed in one section. Ultimately, workers decided it could safely be done. However, management emphasized the importance of staying true to their FLRA. They said that 50 percent of their hoisting capabilities are based on wind and that if the wind is too high, they shut down the task, which happened one day during this process. So, although an FLRA was completed and provided a documented measurement and direction about what decisions to carry out, the idea of staying true to a minute-by-minute risk assessment was important and adhered to for this task.
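One way to picture what a documented FLRA like this captures is as a small structured record: the task, the identified energies/hazards with initial and residual scores from the matrix in Tables 1 and 2, and the controls applied. The sketch below is illustrative only; the field names and scores are assumptions, not Solvay’s actual form.

```python
# A minimal sketch of how one FLRA entry from this example might be recorded;
# hazard scores (probability, consequence) are hypothetical.
flra_entry = {
    "task": "Remove gantry/headframe section with crane",
    "hazards": [
        {"name": "Working from heights/falling",           "initial": (4, 4), "residual": (2, 4)},
        {"name": "Striking against/being struck by objects", "initial": (3, 5), "residual": (1, 5)},
    ],
    "controls": [
        "Review the critical lift plan",
        "Conduct a pre-job safety and risk assessment meeting",
        "Inspect all PPE fitting and harnesses",
        "Assure 100 percent tie-off",
        "Shut down hoisting if wind exceeds the limit",
    ],
}

for h in flra_entry["hazards"]:
    before = h["initial"][0] * h["initial"][1]
    after = h["residual"][0] * h["residual"][1]
    print(f'{h["name"]}: risk score {before} -> {after} after controls')
```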

In this sense, the FLRAs served as a communication platform to share a common language and ultimately, common proactive behavior. In general, vagueness of data on health and safety risks can prevent hazard recognition, impair decision-making, and disrupt risk-based decisions among workers ( Ruan, Liu and Carchon, 2003 ). This example showed that the more workers understood what constitutes an acceptable level of risk, the greater sense of shared responsibility they had to prevent hazards and make protective decisions on the job ( Reason, 1998 ) such as shutting down a procedure due to potential problems. Now, workers have the ability to implement their own check-and-balance system to determine if a response is needed and their decision is supported. Treating the FLRA as a check-and-balance system allowed workers to improve their own risk assessment knowledge, skills and motivation, a common barrier to hazard identification ( Haslam et al., 2016 ). In theory, as FLRAs are increasingly used to predetermine possible incidents and response strategies are developed and referenced, the occurrence of lagging indicators should decrease, as has been the case at Solvay in recent years.

Barrier to risk assessment intervention: Resisting formal risk assessment methods

Worksites often face challenges of determining the best ways to measure and develop suitable tools to facilitate consistent risk measurement ( Boyle, 2012 ; Haas and Yorio, 2016 ; Haas, Willmer and Cecala, 2016 ). For example, research shows that assessing site risks using a series of checklists or general observations during site walkthroughs is more common ( Navon and Kolten, 2006 ). Although practical, checklists and observations require little cognitive investment and have more often been insufficient in revealing potential safety problems ( Jou et al., 2009 ). Due to familiarity with “the way things were,” implementing the system of risk assessments at Solvay came with challenges. Workers experienced initial resistance to moving toward something more formal.

For example, at the outset, hourly workers said they felt, “I do this in my head all the time. I just don’t write it down.” Particularly, individuals who were hourly workers at the time of the FLRA program implementation felt that they already did some form of risk identification and that they did not need to go into more detail to assess the risk. Just as some workers did not see a difference with what they did implicitly, and so discounted the value of conducting an FLRA, others did not think they needed to take action based on their matrix risk ranking. As one worker reflected on the previous mindset, he said, “It would be okay to be in the red, so long as you knew you were in the red.” Because of the varying levels of initial acceptance, there were inconsistencies in the quality of the completed risk assessment matrices. Management noted, “Initially, people were doing them, but not to the quality they could have been.” In response, Solvay management focused on strengthening their frontline leadership skills to help facilitate hourly buy-in, as described in the following case example.

Case example: Starting with frontline leadership to facilitate buy-in, “The Club”

To facilitate wider commitment and buy-in, senior-level management took additional steps with their frontline supervisors. To train frontline leaders on how to understand rather than punish worker actions, Solvay management started a working group in 2010 called “The Club.” This group consisted of supervisory personnel within various levels of the organization. The purpose of The Club was to develop leaders and a different sort of accountability with respect to safety. One of its first actions was to, as a group, agree on qualities of a safety leader. From there, they eventually executed a quality leadership program that embraced the use of the risk assessment tools and their outcomes ( Fiscor, 2015 ; Heiser and Vendetti, 2015 ).

After receiving this leadership training and engaging in discussions about the FLRA, members of The Club began modeling safety leadership on site. Specifically, the frontline foremen that the researchers talked with indicated that they were better able to communicate about and manage safety across the site. Prior to The Club and adapting to the FLRA, one of these supervisors reflected, “No one wanted to make a safety decision.” Senior management acknowledged with their frontline leadership that the FLRA identifies steps that anyone might miss because they are interlocked components of a system. Because of the complex risks present on site, they discussed the importance of sitting down and reviewing with hourly workers if something happened or went wrong. They shared the importance of supportive language: “We say ‘let’s not do this again,’ but they don’t get in trouble.”

To further illustrate the leadership style and communicative focus, one manager shared a conversation conducted with a worker after an incident. Rather than reprimanding the worker’s error in judgement, the manager asked: “What was going through your mind before, during this task? I just want to understand you, your choices, your thought process, so we can prevent someone else from doing the same thing, making those same choices.” After the worker acknowledged he did not have the right tools but tried to improvise, the manager asked him what other risky choices he had made that turned out okay. This process engaged the worker, and he “really opened up” about his perceptions and behaviors on site. This incident is an example of site leaders establishing accountability for action but ensuring that adequate resources and site support were available to facilitate safer practice in the future ( Yorio and Willmer, 2015 ; Zohar and Luria, 2005 ). In other words, management used these conversations not only to educate the workers about hazards involved in complex systems, but also to enact their positive safety culture.

Importantly, this communication and documentation among The Club allowed insight into how employees think, serving as a leading indicator for health and safety management. The stack of FLRAs that were pulled out, completed between 2009 and 2015, was filled out with greater detail as the years progressed. It was apparent that the hourly workforce continually adapted, resulting in an improved sense of organizational motivation, culture and trust. Management indicated to NIOSH that workers now have an increased sense of empowerment to identify and mitigate risks. Contrary to how workers used to document their risk assessments, a management member said: “You pull one out today, and even if it isn’t perfect, the fundamentals are all there, even if it isn’t exactly how we would do it. And more likely than not, you’d pull out one and find it to be terrific.”

Barrier to risk assessment intervention: Communicate and show tangible support for risk assessment methods

A lack of management commitment, poor communication and poor worker involvement have all been identified as features of a safety climate that inhibit workers’ willingness to proactively identify risks ( Rundmo, 2000 ; Zohar and Luria, 2005 ). Therefore, promoting these organizational factors was needed to encourage workers to identify hazards and prevent incidents ( Pinto et al., 2011 ). When first rolling out their FLRA process, Solvay management knew that if they were going to transform safety practices at the mine, there had to be open communication between hourly and salary workers about site conditions and practices ( Fiscor, 2015 ; Heiser and Vendetti, 2015 ; Neal and Griffin, 2006 ; Reason, 1998 ; Rundmo, 2000 ; Wold and Laumann, 2015 ; Zohar and Luria, 2005 ). They discussed preparing themselves to be “exposed” to such information and commit as a group to react in a way that would maintain buy-in, use and behavior.

Creating a process of open sharing meant that, especially at the outset, management was likely to hear things that they didn’t necessarily want to hear. Even when the feedback challenged a policy in place or an attitude of risk acceptance, all levels of management wanted to communicate their understanding that risks and hazards change, and that policies sometimes need to be adapted to changing energies in the environment, as revealed by the FLRAs that the workers were taking time to complete. The following case example showcases the value of ongoing communication to maintain a risk assessment program and buy-in from workers.

Case example: Illustrating flexibility with site procedures

During the visit, managers and workers both discussed the conscious efforts made during group meetings and one-on-one interactions to improve their organizational leadership and communication, noting the difficulty of incorporating the FLRA as a complement to existing rules and regulations on site: “We needed to continually stress the importance of utilizing the risk assessment tool, and if something were to occur, to evaluate the level of controls implemented during a reassessment of the task.” To encourage worker accountability, the managers wanted to show their commitment to the FLRA process and that they could be flexible in changing a rule or policy if the risk assessment showed a need. As an example, they showed NIOSH a “general isolation” procedure about lock-out/tag-out that was distributed at their preshift safety meeting that morning. They handed out a piece of paper saying that, “While a visual disconnect secured with individual locks is always the preferred method of isolation, there are specific isolation procedures for tasks unique to underground operations.” The handout went on to state: “In rare circumstances, when a visual disconnect with lock is not used and circumstances other than those specifically identified are encountered, a formal documented risk assessment will be performed. All potential energies will be identified and understood, every practical barrier at the appropriate level will be identified and implemented, and the foreman in charge of the task will approve with his/her signature prior to performing the work. All personnel involved in the job or task must review and understand the energies and barriers implemented prior to any work being performed…”

This example shows the site’s commitment to risk assessment while also showing that, if leading indicators are identified, a policy can be changed to avoid a potential incident. Noting that they would change a procedure if workers identified something, the document illustrated management’s confidence in, and the value it placed on, the FLRA process. Workers indicated that these behaviors are a support mechanism for them and their hazard identification efforts. Along the same lines, the managers we talked with noted the importance of not just training to procedure but also emphasizing: “High-level policies complement but don’t drive safety.” This example showcases their leadership and communicative commitment.

The lock-out/tag-out example is just one safety share that occurred at a preshift meeting. These shares “might be no more than five minutes, they might go a half-hour, but they’re allowed to take as long as they need,” one manager said. This continued commitment to fostering the use of leading indicators to support a health and safety management program has shown that the metrics used to assess risks are only as good as the response to those metrics; it has also afforded workers an opportunity to engage in improving the policies and rules on site. This continued consistency in communication helped to create a sense of ownership among workers, which led them to recognize the need for a minute-to-minute thought process that helped them foresee consequences and probabilities and deliberate different response options. As one manager said, “You can have a defined plan but an actual risk assessment shows the dynamics of a situation and allows different plans to emerge.”

Limitations and conclusions

The purpose of this paper was to illustrate an example in which everyone could participate in identifying leading safety indicators. In everyone’s judgment, it took about four to five years until Solvay actually saw the change in action, meaning that the process was sustained by workers and they were using the risk assessment terminology in their everyday discussions. In addition to showing how leading indicators can be developed and what they look like “in action,” this paper advanced the discussion to provide insight into common barriers to risk assessment, and potential responses to these barriers. As Figs. 1 and 2 show, incidents have been down at Solvay since the implementation of the FLRA program and the enhanced leadership training of frontline supervisors, showing the impact of the FLRAs as a strong leading indicator for health and safety. Additionally, hourly workers discussed how much better the culture is on site now than it was several years ago, noting their appreciation for having a common language on site to communicate about risks. It is rare that both sides, hourly and salary, see benefits in a written tool from an operational and behavioral standpoint. The cooperation on site speaks to the positive attributes discussed within this case study and the mini examples provided, which cannot be shown in a graph.

Although the results of this study are only part of a small case study and cannot be generalized across the industry, data support the argument that poor leadership and an overall lack of trust on site can inhibit workers’ willingness to participate in risk measurement, documentation and decision-making. Obviously, the researchers could not talk with every worker and manager present on site, so not all opinions are reflected in this paper. However, the consistency in messages from both levels of the organization showed saturation of insights that reflect the impact of the FLRAs. It is acknowledged that some of this information may already be known and utilized by mine site leadership. However, because the focus of the study was not only on the development and use of specific risk measurement tools, but the organizational practices that are needed to foster such proactive behavior, the results provide several potential areas of improvement for the industry in terms of formal risk assessment over a period of time.

In light of these limitations, mine operators should consider this information when interpreting the results in terms of (1) how to establish formal risk assessment on site, especially when trying to identify and mitigate hazards, (2) what the current mindset of frontline leadership may be and how they could support (or hinder) such a risk assessment program and (3) methods to consistently support a participatory risk assessment program. Gaining an in-depth view of Solvay’s own health and safety journey provides expectations and a possible roadmap for encouraging worker participation in risk management at other mine sites to proactively prevent health and safety incidents.

Acknowledgments

The authors wish to thank the Solvay Green River operation for its participation and cooperation in this case study and for openly sharing their experiences.

The findings and conclusions in this paper are those of the authors and do not necessarily represent the views of NIOSH. Reference to specific brand names does not imply endorsement by NIOSH.

Contributor Information

E.J. Haas, Lead research behavioral scientist, National Institute for Occupational Safety and Health, Pittsburgh, PA, USA.

B.P. Connor, Research behavioral scientist, National Institute for Occupational Safety and Health, Pittsburgh, PA, USA.

J. Vendetti, Manager, mining operations, Solvay Soda Ash & Derivatives North America, Green River, WY, USA.

R. Heiser, CSP, Mine production superintendent, Solvay Chemicals Inc., Green River, WY, USA.



Risk Assessment Case Studies: Machine Safety Specialists


What are “Unbiased Risk Assessments”?

Unbiased Risk Assessments are guided by safety experts who have your best interests in mind. Product companies, integrators, and solution providers may steer you toward expensive, overly complex technical solutions. Machine Safety Specialists provides unbiased Risk Assessments. See the examples below.

Biased risk assessments can happen when a safety products company, integrator, or solution provider participates in the risk assessment. The participant has a conflict of interest and may steer you toward overly expensive or complex solutions that they want to sell you. Some safety product companies will do anything to get involved in the risk assessment, knowing they will “make up for it” by selling you overly expensive solutions. Safety product companies have sales targets, and you could be one of them.


Machine Safety Specialists are experts in OSHA, ANSI, NFPA, RIA, and ISO/EN safety standards. We can solve your machine safety compliance issues, provide unbiased Risk Assessments, or help you develop your corporate Machine Safety and Risk Assessment program.

Case Study: Machine Safety Verification and Validation

A multi-national food processing company had a problem.  A recent amputation at a U.S. food processing plant generated negative publicity, earned another OSHA citation, and caused significant financial losses due to lost production.  Another amputation, if it occurred, would likely result in more lost production, an OSHA crackdown, and, if posted on social media, irreparable damage to the company’s brand.

After multiple injuries and OSHA citations, the company contacted MSS for help. First, the company needed to know if the existing machine safeguarding systems provided “effective alternative protective measures” as required by OSHA. MSS was contracted through the company’s legal counsel to audit three (3) plants with various types of machines and deliver detailed machine safety compliance reports for each machine to the client under attorney-client privilege. The safeguarding audit report for one plant summarized each machine by risk level.


For the 19 high-risk and 8 medium-risk (poorly guarded) machines, risk reduction measures were required, applied through the hierarchy of controls. MSS provided a machine safeguarding specification for the machines and worked with our client to select qualified local fabricators and integrators, who performed the work on an aggressive schedule. MSS provided specifications and consulting services, and our client contracted the fabrication and integration contractors directly under the guidance of MSS.

During the design phase of the machine safeguarding implementation, MSS provided safety verification services and detailed design reviews. Due to the stringent legal requirements and the need for global compliance, the safety design verification included SISTEMA analysis of the functional safety systems. MSS provided a detailed compliance report with written compliance statements covering OSHA compliance, hazardous energy controls (Lockout/Tagout, LOTO, OSHA 1910.147), and effective alternative protective measures per OSHA’s minor servicing exception. Due to the complexity of the machines and the global safety requirements, MSS verified and validated the machines against numerous U.S. and international safety standards, including ANSI Z244.1, ANSI B11.19, ANSI B11.26, ISO 14120, ISO 13854, ISO 13855, ISO 13857, and ISO 13849.

Then, after installation and before the machines were placed into production, MSS was contracted to perform safety validation services as required by the standards. During the validation phase of the project, MSS traveled to the site to inspect all machine safeguarding and validate the functional safety systems. This safety validation covered all aspects of the safety system, including barrier guards, interlocked barrier guards, light curtains, area scanners, the safety controllers and safety software, safety servo systems, variable frequency safety drives (safety VFDs), and pneumatic (air) systems.

After the machine safeguarding design verification, installation, and safety system validation, MSS documented the improved results in the executive summary of the report.


By involving Machine Safety Specialists (MSS) early in a project, you can ensure the project complies with OSHA, ANSI/RIA, NFPA, and ISO safety standards. By helping this client implement the project, we kept the safety work on track. Our validation testing and detailed test reports provide peace of mind and evidence of due diligence if OSHA pays you a visit. Contact MSS for all of your machine safety training, safeguarding verification, and on-site functional safety validation needs.

Case Study:  Collaborative Robot System

An original equipment manufacturer (OEM) developing a collaborative robot (Cobot) system that it planned to duplicate and ship globally faced several questions:

  • Is this Collaborative Robot system safe?
  • How can we validate the safety of the Collaborative Robot system before duplicating it?
  • If we ship these globally, will we comply with global safety standards?
  • What if the Collaborative Robot hurts someone?
  • What about OSHA?

The OEM called Machine Safety Specialists (MSS) to solve these problems. Prepared to help, our TÜV-certified machine safety engineers discussed the collaborative robot system, entered an NDA, and requested system drawings and technical information. On-site, we inspected the collaborative robot, took measurements, gathered observations and findings, validated safety functions, and spoke with various plant personnel (maintenance, production, EHS, engineering, etc.). As part of our investigation, we prepared a gap analysis of the machine relative to RIA TR R15.606, ISO 10218-2, OSHA, ANSI, and ISO standards. The final report included observations, risk assessments, and the specific corrective actions needed to achieve U.S. and global safety compliance. Examples of our findings and corrective actions include:

  • Identification of the correct safeguarding modes (according to RIA TR R15.606-2016 and ISO/TS 15066-2016).
  • Observation that Area Scanners (laser scanners) provided by the machine builder were not required, given the Cobot’s modes of operation. Recommended removal of the area scanners, greatly simplifying the system.
  • Observation that the safety settings for maximum force, given the surface area of the tooling, produced pressures that exceeded U.S. and global safety requirements. Recommended a minimum surface area for the tooling and provided calculations to the client’s engineers.
  • Observation that the safety settings for maximum speed were blank (not set) and provided necessary safety formulas and calculations to the client’s engineers.
  • Recommended clear delineation of the collaborative workspace with yellow/black marking tape around the perimeter.

With corrective actions complete, we re-inspected the machine and confirmed all safety settings.  MSS provided a Declaration of Conformance to all applicable US and global safety standards.  The customer then duplicated the machines and successfully installed the systems at 12 plants globally, knowing the machines were safe and that global compliance was achieved.   Another success story by MSS…


Case Study:  Robot Manufacturing


The manufacturer hired a robotics integrator, and a brief engineering study determined that the speed and force requirements called for a high-performance Industrial Robot (not a Cobot). The client issued a PO to the integrator, attached a manufacturing specification, and generically required the system to meet “OSHA Standards.” Within three months, the robot integrator had the prototype system working beautifully in their shop and was requesting final acceptance of the system. Then a serious problem hit: the US manufacturer experienced a serious robot-related injury.

In the process of handling the injury and related legal matters, the manufacturer learned that generic “OSHA Standards” were not sufficient for robotic systems. To prevent fines and damages in excess of $250,000, our client needed to make their existing industrial robots safe while also correcting any new systems in development. The manufacturer then turned to Machine Safety Specialists (MSS) for help.

Prepared to help, our TÜV-certified and experienced robot safety engineers discussed the Industrial Robot application with the client. MSS entered an NDA and a formal agreement with the client and the client’s attorney. On-site, MSS inspected the Industrial Robot system, took measurements, gathered observations and findings, tested (validated) safety functions, and met with the client’s robotics engineer to complete a compliance checklist. As part of our investigation, we prepared a Risk Assessment in compliance with ANSI/RIA standards, built an RIA compliance matrix, and performed a gap analysis of the industrial robot systems relative to ANSI/RIA standards. The final report included the formal Risk Assessment, the compliance matrix, our observations, and the specific corrective actions needed to achieve safety compliance.

Examples of our findings and corrective actions included:

  • A formal Risk Assessment was required in compliance with ANSI/RIA standards (this was completed by MSS and the client as part of the scope of work).
  • Critical interlock circuitry needed upgrading to Category 3, PL d, as defined by ISO 13849. (MSS provided specific mark-ups to the electrical drawings and worked with the integrator to ensure proper implementation).
  • The light curtain reset button was required to be relocated. (MSS provided specific placement guidance.)
  • The safeguarding reset button was required to be accompanied by specific administrative controls. (MSS worked with the integrator to implement these into the HMI system and documentation).
  • The robot required safety soft limits to be properly configured and tested (Fanuc: DCS, ABB: SafeMove2).
  • Specific content needed to be added to the “Information for Use” (operation and maintenance manuals).

With corrective actions complete, MSS re-inspected the machine, verified the safety wiring, validated the safety functions, and provided a Declaration of Conformance for the robot system. The customer then accepted the system, commissioned it, and placed it into production. The project was deemed a huge success by senior management. The industrial robot system now produces high-quality assemblies 24/7, the project team feels great about safety compliance, and the attorneys are now seeking other opportunities. Another success story by MSS…

Case Study: Manufacturing Company

Another question:

Q: Which safety product company can you trust to perform a risk assessment with your best interests in mind?

A: None of them. Companies selling safety products have a hidden agenda: sell the most products and charge insane dollars for installation! Machine Safety Specialists are safety engineers and consultants who have your best interests in mind. We will conduct an unbiased Risk Assessment and recommend the most sensible, lowest-cost, compliant safeguards on the market, with no hidden sales agenda!

Case Study: Machine Safeguarding Example

One photo, two points of view…

Safety product company recommendation:

“Wow – this customer needs $50K of functional safety equipment on each machine. Add light curtains, a safety system, software, etc.…” Problem solved for $50,000.

MSS Recommendation:

“Bolt down the existing guard, add an end cap, remove sharp edges, and secure the air line. Add a warning sign with documented training.” Problem solved for $50. Once again, this really happened – don’t let it happen to you!

Case Study: Risk Reduction

“Machine Safety Specialists’ comprehensive approach to risk reduction ensured the most complete, sensible, and least expensive solution for compliance.” – Safety Manager

Green circle: We use all methods of risk reduction (elimination, signs, training), not just guards and protective devices. This is the least expensive and most comprehensive approach. Red circle: Guarding company methods of risk reduction (guards and protective devices) are very expensive and time-consuming, and they do not mitigate all of the risk.
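To make the comparison concrete, here is a minimal sketch, in Python, of how candidate risk-reduction measures might be ranked by the hierarchy of controls, preferring higher-order measures and lower cost. The measure names, control types, costs, and ranking scheme are hypothetical illustrations, not MSS’s actual method.

```python
# Minimal sketch: rank candidate risk-reduction measures by the hierarchy of
# controls (higher-order controls first), then by cost within the same level.
# Measure names, types, and costs are hypothetical.

HIERARCHY_RANK = {
    "elimination": 0,
    "substitution": 1,
    "engineering control": 2,     # e.g., fixed guards, interlocks
    "administrative control": 3,  # e.g., signs, procedures, training
    "ppe": 4,
}

candidates = [
    {"measure": "Bolt down existing guard and add end cap", "type": "engineering control", "cost": 50},
    {"measure": "Add light curtain and safety controller", "type": "engineering control", "cost": 50_000},
    {"measure": "Warning sign plus documented training", "type": "administrative control", "cost": 25},
]

# Rank candidates for consideration: more effective control types first,
# cheaper options first within the same type.
for m in sorted(candidates, key=lambda c: (HIERARCHY_RANK[c["type"]], c["cost"])):
    print(f'{m["type"]}: {m["measure"]} (${m["cost"]:,})')
```

The point is simply that elimination and engineering controls are considered before signs, training, or PPE, and that an inexpensive engineering fix can outrank an elaborate one.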

Case Study - Why Perform a Risk Assessment?

Another frequently asked question is: “Why do I need a Risk Assessment?” To answer this, please see the case study “Applicable U.S. Machine Safety Codes and Standards,” then see below.

Why Perform a Risk Assessment?

A written workplace hazard assessment is required by law. In section 1910.132(d)(2), OSHA requires a workplace hazard analysis to be performed. The proposed Risk Assessment fulfills this requirement with respect to the machine(s).

1910.132(d)(2): “The employer shall verify that the required workplace hazard assessment has been performed through a written certification that identifies the workplace evaluated; the person certifying that the evaluation has been performed; the date(s) of the hazard assessment; and, which identifies the document as a certification of hazard assessment.”

A Risk Assessment (RA) is required by the following US standards:

  • ANSI Z244.1
  • ANSI B11.19
  • ANSI B155.1
  • ANSI / RIA R15.06

Please note the following excerpt from an actual OSHA citation:

“The machines which are not covered by specific OSHA standards are required under the Occupational Safety and Health Act (OSHA Act) and Section 29 CFR 1910.303(b)(1) to be free of recognized hazards which may cause death or serious injuries.”

In addition, the risk assessment forms the basis of design for the machine safeguarding system. The risk assessment is the process by which the team assesses the risks, selects risk reduction methods, and reaches team acceptance of the solution. This risk reduction is key in determining the residual risks to which personnel are exposed. Without a risk assessment in place, you are in violation of U.S. safety standards, and you may be liable for injuries from the un-assessed machines.
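As a rough illustration of how such a machinery risk assessment can be scored, the sketch below uses the severity, exposure, and avoidance style of factors found in ANSI/RIA-type assessments. The category labels, point values, and thresholds here are simplified assumptions for illustration, not the tables from any specific standard.

```python
# Illustrative machinery risk scoring (simplified; not the exact tables from
# ANSI B11.0 or ANSI/RIA R15.06). Severity, exposure, and avoidance are scored
# and combined into a qualitative risk level that drives the required risk
# reduction.

SEVERITY = {"S1": 1, "S2": 2}    # S1 = minor injury, S2 = serious injury (illustrative)
EXPOSURE = {"E1": 1, "E2": 2}    # E1 = infrequent exposure, E2 = frequent exposure
AVOIDANCE = {"A1": 1, "A2": 2}   # A1 = likely avoidable, A2 = not likely avoidable

def risk_level(severity: str, exposure: str, avoidance: str) -> str:
    """Combine the three factors into an illustrative Low/Medium/High level."""
    score = SEVERITY[severity] + EXPOSURE[exposure] + AVOIDANCE[avoidance]
    if score <= 3:
        return "Low"
    if score <= 4:
        return "Medium"
    return "High"

# Example: serious injury potential, frequent exposure, not easily avoided.
print(risk_level("S2", "E2", "A2"))  # -> High
```

In practice, the resulting risk level is what drives the choice and extent of safeguarding, and it is recalculated after mitigation to document the residual risk.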


Case Study:  Applicable U.S. Machine Safety Codes and Standards

We are often asked: “What must I do for minimum OSHA compliance at our plant? Do I have to follow ANSI standards? Why?” The following information explains our answer. Please note the following excerpt from an actual OSHA citation:

“These machines must be designed and maintained to meet or exceed the requirements of the applicable industry consensus standards. In such situations, OSHA may apply standards published by the American National Standards Institute (ANSI), such as standards contained in ANSI/NFPA 79, Electrical Standard for Industrial Machinery, to cover hazards that are not covered by specific OSHA standards.”

U.S. regulations and standards used in our assessments include:

  • OSHA 29 CFR 1910, Subpart O
  • Plus, others as applicable….

Please note the following key concepts in the U.S. Safety Standards:

  • Control Reliability as defined in ANSI B11.19 and RIA 15.06
  • Risk assessment methods in ANSI B11.0, RIA 15.06, and ANSI/ASSE Z244.1
  • E-Stop function and circuits as defined in NFPA 79 and ANSI B11.19
  • OSHA general safety regulations as defined in OSHA 29 CFR 1910 Subpart O – Section 212
  • Power transmission, pinch and nip points as defined in OSHA 29 CFR 1910 Subpart O -Section 219
  • Electrical Safety as defined in NFPA 79 and ANSI B11.19.

Note: OSHA is now citing for failure to meet ANSI B11.19 and NFPA 79.


Project Risk Management: 5 Case Studies You Should Not Miss

May 21, 2024


Project risk management is vital in today’s business world. This article from Designveloper, “Project Risk Management: 5 Case Studies You Should Not Miss,” sheds light on this important component of project management.

We’ll reference recent figures that highlight the significance of risk management in projects. These data points come from reputable industry reports and help build a solid basis for understanding the subject.

In addition, we will discuss specific case studies in which risk management was applied successfully, and others in which it was not. These real-world examples are valuable for project managers and teams.

Keep in mind that every project carries risks. Through project risk management, those risks can be identified, analyzed, prioritized, and managed so that the project achieves its objectives. With that in mind, let’s look at the five case studies you should not miss.

Risk management is a critical component of any project. It is a set of practices for determining potential threats to a project’s success and deciding how to address them. Let’s look at some recent statistics and examples to understand this better.

Understanding Project Risk Management

Statistics show that as many as 70% of all projects are unsuccessful. This high failure rate highlights the need for effective project risk management. Organizations that do not give project risk management sufficient priority see roughly 50% of their projects fail, resulting in significant financial losses and untapped business potential.

Additionally, poor performance wastes roughly 10% of every dollar spent on projects, which translates to about $99 million lost for every $1 billion invested. These statistics demonstrate the importance of project risk management in improving project success rates and minimizing waste.

Consider a project management example to illustrate the point. A new refinery is being constructed in the Middle East, and the project is entering a key phase: purchasing. Poor risk management could allow important decisions about procurement strategy, or the timing of the tendering process, to lead to project failure.

Project risk management is the process of identifying potential threats and mitigating them. It is proactive rather than reactive.

The process begins with identifying potential risks. These could be anything from budget overruns to delayed deliveries. Once risks are identified, they are analyzed by estimating the probability of each risk event and its potential consequences for the project.

The next stage is risk response planning. Responses can take the form of risk reduction, risk transfer, or risk acceptance. The goal is to reduce the impact of risks on the project.

Finally, the process entails monitoring and tracking these risks throughout the life of the project. This keeps the project on course and ensures that any new risks that arise are identified and managed.

Let’s dive into the heart of project risk management: its four key components. These pillars form the foundation of any successful risk management strategy: risk identification, risk assessment, risk response planning, and risk monitoring and control. Each plays a crucial role in ensuring project success. This section explains each component, backed by data and real-world examples.

Risk identification is the first step in the project risk management process. It is about proactively identifying risks that might cause a project to fail. This matters: a recent study showed that 77% of companies had operational surprises due to unidentified risks.

4 Key Components of Project Risk Management

There are different approaches to risk identification, such as brainstorming, the Delphi technique, SWOT analysis, checklist analysis, and flowcharting. These techniques help project teams identify as many potential risks as possible.

Risk assessment is the second stage of the project risk management process. It is a systematic approach to determining the probability of occurrence and the severity of identified risks. This step is important because it ranks the identified risks and informs the risk response strategies.

Risk assessment involves two key elements: probability and impact. Risk probability estimates the chance that a risk event will occur, and risk impact measures the consequences if it does.
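A minimal sketch of this probability-and-impact scoring in Python follows; the 1-to-5 scales, the example risks, and the Low/Medium/High thresholds are illustrative assumptions rather than a prescribed standard.

```python
# Minimal sketch of probability x impact scoring. The 1-5 scales, example
# risks, and rating thresholds are illustrative, not a prescribed standard.

def risk_score(probability: int, impact: int) -> int:
    """Both inputs are on a 1 (lowest) to 5 (highest) scale."""
    return probability * impact

def risk_rating(score: int) -> str:
    if score >= 15:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

risks = {
    "Budget overrun": (4, 4),
    "Delayed delivery from a key vendor": (3, 5),
    "Key staff turnover": (2, 3),
}

# Rank risks so response planning can start with the highest-rated ones.
for name, (p, i) in sorted(risks.items(), key=lambda kv: -risk_score(*kv[1])):
    s = risk_score(p, i)
    print(f"{name}: score={s}, rating={risk_rating(s)}")
```

Scoring like this is what turns a raw list of identified risks into a ranked list that the next stage, risk response planning, can act on.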

Risk response planning is the third component of project risk management. It deals with planning the best ways to address the risks that have been identified. This step ensures that risks do not have a substantial effect on the project.

One statistic indicates that nearly three-quarters of organizations have an incident response plan, and 63 percent of those organizations exercise the plan regularly. This underlines why identifying and analyzing risks without a plan of action is inadequate.

Risk response planning involves four key strategies: risk acceptance, risk sharing, risk reduction, and risk elimination. The strategy chosen depends on the nature and potential impact of the risk.

Risk monitoring and control is the last step of project risk management. It involves tracking the identified risks and making sure they are being addressed according to the plan.

Furthermore, risk monitoring and control involves managing identified risks, monitoring residual risk, identifying new risks, implementing risk responses, and evaluating their effectiveness throughout the project life cycle.

Now let’s turn to the practical side of project risk management. This section presents five selected case studies that illustrate the need for, and application of, project risk management. Each case study reveals a distinct way in which risk management can drive project success. The case studies span construction projects, technology groups, and other industries, and they show how effective project risk management allows organizations to respond to uncertainty and accomplish their project objectives.

The Gordie Howe International Bridge is a project that demonstrates the principles of project risk management. It is one of the biggest infrastructure projects in North America and includes the construction of a six-lane bridge at the busiest commercial border crossing between the U.S. and Canada.

Gordie Howe International Bridge Project

The project scope includes new port of entry and inspection facilities for the Canadian and U.S. governments, toll collection facilities, and modifications to multiple local bridges and roadways. The project is administered by the Windsor-Detroit Bridge Authority, a not-for-profit Canadian Crown corporation.

One of the project’s main challenges was its sheer scale, both in land area and in the range of community interests involved. Strong governance and community involvement were fundamental in helping the project team overcome these challenges.

The PMBOK® Guide is the contractual basis for project management under the project agreement. This commitment to project management best practices does not end with bridge construction; it extends to all other project requirements.

The project is making steady progress toward its target completion in 2024. This case study demonstrates the role of project risk management in delivering large, complicated infrastructure projects successfully.

Fujitsu is an international company that provides complete information and communication technology systems, products, and services. Its traditional approach was to hire a few college and school leavers and put them through a two-year management training and development course. This approach fell short in several respects.

Fujitsu’s Early-Career Project Managers

First, the training did not cover project management comprehensively and was concerned mainly with generic messaging, such as promoting leadership skills and time management. Second, it did not effectively address the needs of apprentices. Third, the two-year time frame was not long enough to develop the project management skills the role required. Finally, retaining employees in the training program was a persistent problem.

To tackle these issues, Fujitsu UK adopted a framework based on three dimensions: structured learning, learning from others, and rotation. The framework is designed to operate over the first five years of a participant’s career and is underpinned by the 70-20-10 model for learning and development, which acknowledges that most learning occurs on the job.

The initial training starts with a three-week formal learning and induction program that covers orientation to the organization and its operations, the fundamentals of project management, and business in general. Participants are then placed on a rotational assignment in the program’s PMO for their first six to eight months.

Vodafone is a multinational telecommunications group that provides services in 28 countries across five continents. It undertook a highly complex technology project to replace an existing network with a fully managed GLAN in 42 locations. Because of this complexity, a well-grounded approach to risk management was needed.

Vodafone’s Complex Technology Project

The project team faced a long delay in signing the contract and frequent changes after it was signed, up until the project was baselined. These challenges stretched the project timeline and increased its complexity.

To mitigate the risks, Vodafone used PMI standards for its project management structure. This approach included conducting workshops, developing resource and risk management plans, tailoring project documentation, and holding regular lessons-learned sessions.

The Vodafone GLAN project was not an easy one, but it was completed on time and, in some cases, ahead of the schedule the team had anticipated. Ninety percent of sites were migrated successfully on the first attempt, and the remainder on the second.

The Fehmarnbelt project is a real-life example of the strategic role of project risk management. It is a mega-project to construct the world’s longest immersed tunnel, between Germany and Denmark: a combined four-lane highway and twin-track electrified rail tunnel extending 18 kilometers, buried 40 meters under the Baltic Sea.

Fehmarnbelt Project

The project is managed by Femern A/S, a Danish government-owned company, and has a construction value of more than €7 billion (£8.2 billion). It is estimated to employ 3,000 workers directly, in addition to 10,000 in the supply chain. Upon completion, travel between Denmark and Germany will be cut to 10 minutes by car and 7 minutes by rail.

Femern’s risk management function, and in particular Risk Manager Bo Nygaard Sørensen, initiated the process by setting clear strategic objectives for the project. The team formulated a simple, dynamic, and comprehensive risk register to give a complete risk view of the mega-project. They also created a risk index to assess all risks in a consistent and predictable manner, classify them by importance, and manage and address them in an appropriate and timely way.

The team used Predict!, a risk assessment and analysis tool, to determine the effect of various risks on the cost of constructing the link and to calculate the risk contingency needed for the project. This analysis supported the decision to construct an immersed tunnel rather than a bridge.
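For illustration only, and not as a description of how the Predict! tool works, the sketch below shows one common way a cost risk contingency can be approximated: summing probability-weighted cost impacts across the risk register. The risk names and figures are hypothetical.

```python
# Illustrative expected-value contingency calculation (not the Predict! tool's
# method). Each register entry carries a probability and a cost impact; the
# contingency is the sum of probability-weighted impacts. Figures are made up.

risks = [
    {"name": "Ground conditions worse than surveyed", "probability": 0.30, "cost_impact_m_eur": 120},
    {"name": "Tunnel element fabrication delay", "probability": 0.20, "cost_impact_m_eur": 80},
    {"name": "Regulatory approval rework", "probability": 0.10, "cost_impact_m_eur": 40},
]

contingency = sum(r["probability"] * r["cost_impact_m_eur"] for r in risks)
print(f"Expected-value contingency: EUR {contingency:.1f}M")  # -> EUR 56.0M
```

Real mega-project contingency work typically layers simulation and correlation analysis on top of this basic expected-value idea, but the principle of weighting each impact by its probability is the same.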

Lend Lease, an international property and infrastructure group operating in more than 20 countries, offers another good example of managing project risk. The company has established a framework called the Global Minimum Requirements (GMRs) to identify the risks to which it is exposed.

Lend Lease Project

The GMRs apply from the phase before a decision to bid for a job is taken. The framework covers factors related to flooding, heat, biodiversity, land or soil subsidence, water, weathering, infrastructure, and insurance.

The GMRs are organized into five main phases aligned with the five main development stages of a project, ensuring that vital decisions are made at the right time. The stages are governance, investment, design and procurement, establishment, and delivery.

For instance, during the design and procurement stage, the GMRs identify the design controls needed to prevent environmental degradation during design and to eliminate fatal risks during planning and procurement. This approach helps Lend Lease manage risks effectively and deliver successful projects.

Now let’s take a closer look at the risk management strategies we use here at Designveloper, a leading web and software development firm in Vietnam. We provide a range of services, so it is essential that we manage risks on all our projects in consistent and effective ways. This section gives a glimpse of how we manage project risk, drawing on recent experience and specific cases.

The following sections explain the risk management process we use, from identifying potential risks to managing them, and how our experience and expertise have helped us in this area.

Risk management as a function of project delivery is well understood at Designveloper. Our method of managing project risk is proactive and systematic, which enables us to anticipate possible problems and create effective solutions to overcome them.

One of the problems we frequently encounter is understanding our clients’ needs. In most cases, clients come to us with a basic idea or concept. To convert these ideas into specific requirements and feature lists, our business analysts must collaborate closely with the client. That process is often time-consuming, and opportunities can be missed.


To solve this problem, we’ve created a library of features, each with its own time and cost estimate. The library is based on data from previous projects that we have documented, organized, and consolidated. Now, when a client approaches us with a request, we can search for similar features in our library and give an initial quote. This has considerably shortened the time it takes to provide first estimates to our clients and saves time for everyone involved.
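A hypothetical sketch of such a feature library lookup is shown below; the feature names, hour ranges, rates, and matching logic are invented for illustration and are not Designveloper’s actual data or tooling.

```python
# Hypothetical feature library used to produce a broad initial quote.
# Feature names, hour ranges, and rates are invented for illustration.

FEATURE_LIBRARY = {
    "user authentication": {"hours": (40, 80), "rate_per_hour": 50},
    "payment integration": {"hours": (60, 120), "rate_per_hour": 50},
    "admin dashboard": {"hours": (80, 160), "rate_per_hour": 50},
}

def initial_quote(requested_features):
    """Return a broad (low, high) cost range for a client's feature list."""
    low = high = 0
    unknown = []
    for feature in requested_features:
        entry = FEATURE_LIBRARY.get(feature.lower())
        if entry is None:
            unknown.append(feature)  # flag for analyst follow-up
            continue
        lo_h, hi_h = entry["hours"]
        low += lo_h * entry["rate_per_hour"]
        high += hi_h * entry["rate_per_hour"]
    return low, high, unknown

print(initial_quote(["User authentication", "Admin dashboard", "Chat support"]))
# -> (6000, 12000, ['Chat support'])
```

Deliberately quoting a range rather than a single number keeps the initial estimate honest about its uncertainty; the range narrows as requirements become more detailed.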

This is only one of the techniques we use to mitigate project risks at Designveloper. The focus on effective project risk management has contributed significantly to our success as a leading web and software development company in Vietnam. It is a mindset that enables us to convert challenges into opportunities and deliver outstanding results for our clients.

At Designveloper, we continually aim to improve our project risk management practices. Below are a couple of examples of the advancements we’ve made.

To reduce waiting time, we have adopted continuous deployment, which enables us to deliver value quickly and effectively. We release a minimum feature rather than a big one, which helps us collect input from our customers and keep improving. For our customers, this means they start deriving value from the product quickly and see near-continuous improvement rather than waiting for a “perfect” feature.

We also hold regular “sync-up” meetings between teams to keep information synchronized and transparent from input (requirements) to output (product). Changes are known to all teams, so each team can prepare to respond flexibly and appropriately.

These developments in project risk management have enabled us to complete projects successfully and serve our clients well. They reflect our commitment to continuous improvement and our ability to turn threats into opportunities. Designveloper’s strength lies largely in the fact that we do not just control project risks; we master them.

To conclude, project risk management is an important element of nearly every successful project. It is about identifying possible problems and organizing the measures needed to keep the project on track. The case studies addressed in this article illustrate the significance and implementation of project risk management in different settings and fields, and they show what effective risk management can achieve.

We have witnessed the advantages of solid project risk management at Designveloper. Our approach, backed by our track record and professionalism, has enabled us to complete projects that meet all of our clients’ requirements. We are not only managing project risks; we are mastering them.

We trust this article has helped you understand project risk management and its significance in today’s fast-changing, complicated project environment. Keep in mind that proper project management is not only about managing tasks and resources but also about managing risk. At Designveloper, our team is ready to guide you through those risks and help you achieve your project’s objectives.



Quality Risk-Management Principles and PQRI Case Studies

A PQRI expert working group provides case study examples of risk-management applications.

The harmonized Q9 Quality Risk Management guideline from the International Conference on Harmonization (ICH) provides an excellent high-level framework for the use of risk management in pharmaceutical product development and manufacturing quality decision-making applications (1–2). It is a landmark document in acknowledging risk management as a standard and acceptable quality system practice to facilitate good decision-making with regard to risk identification, resource prioritization, and risk mitigation/elimination, as appropriate.

Recognizing the need to propagate and expedite holistic adoption of quality risk management across the pharmaceutical industry, the Product Quality Research Institute Manufacturing Technical Committee (PQRI–MTC) commissioned a small working group of industry and FDA representatives to seek out good case studies of actual risk-management practices used by large bio/pharmaceutical firms to share with the industry at large.

The working group spent approximately one year soliciting risk-management case studies from industry peers and contacts, and ultimately reviewed more than 20 of them. Each study was graded against six criteria to assess applicability, usefulness, and alignment with ICH Q9. The highest-graded case studies were measured against two additional criteria to ensure a balanced mix of examples for this report. Due to the size of a well-developed risk assessment, especially when applied to a complex problem or operating area, the presented case studies in most instances represent redacted versions of the actual assessments. Nonetheless, the provided summaries are effective in demonstrating the general thought process, risk application, and use of chosen risk methods.

As a byproduct of the working group's collaboration on risk-management practices, several common principles that reflect current industry and regulatory thinking emerged. These principles are aligned with, and in some instances expand beyond, those defined by ICH Q9 and are included in this report. In addition, several risk-management reference tools used by participating firms have been included as examples.

Risk-management principles, case studies, and supporting tools used by large bio/pharmaceutical manufacturers for effective quality oversight of product development and manufacturing operations are included in this report. Each case study notes the applicable corresponding quality system (i.e., Quality, Facilities & Engineering, Material, Production, Packaging & Labeling, or Laboratory Control) that is consistent with FDA's quality systems guidance document (3). In addition, the case studies identify the risk methodology that was used for ease of categorization, understanding, and potential application by the reader. Medical-device examples fall beyond the scope of this article, although the case studies and tools presented have relevance to device manufacturing. See the sidebar, "PQRI case studies," for details on the topics covered.

PQRI case studies

Principles and common practices

Core principles of quality risk management according to the ICH Q9 guideline include the following:

1. Compliance with applicable laws: Risk assessment should be used to assess how to ensure compliance and to determine the resulting prioritization for action—not for a decision regarding the need to fulfill applicable regulations or legal requirements.

2. Risk can only be effectively managed when it is identified, assessed, considered for further mitigation, and communicated. This principle embodies the four stages of an effective quality risk-management process as defined by ICH Q9: risk assessment (i.e., risk identification, analysis, and evaluation); risk control (i.e., risk reduction and acceptance); risk communication; and risk review.

3. All quality risk evaluations must be based on scientific and process-specific knowledge and ultimately linked primarily to the protection of the patient. Risk assessment is based on the strong understanding of the underlying science, applicable regulations, and related processes involved with the risk under analysis. Collectively, these components should be assessed first and foremost with regard to the potential impact to the patient (see Figure 1).

Figure 1: Quality risk-evaluation pyramid.

4. Effective risk management requires a sufficient understanding of the business, the potential impact of the risk, and ownership of the results of any risk-management assessment.

5. Risk assessment must take into account the probability of a negative event in combination with the severity of that event. This principle also serves as a useful working definition for risk (i.e., risk represents the combination of the probability and severity of any given event).

6. It is not necessary or appropriate to always use a formal risk-management process (e.g., standardized tools). Rather, the use of an informal risk-management process (e.g., empirical assessment) is acceptable for areas that are less complex and that have lower potential risk. Risk decisions are made by industry every day. The complexity of the events surrounding each decision and the potential risk involved are important inputs in determining the appropriate risk-assessment methodology and corresponding level of analysis required. For less complex, less risky decisions, a qualitative analysis (e.g., decision tree) of the options may be all that is required. In general, as the complexity and/or risk increases, so should the sophistication of the risk-assessment tool used. In the same regard, the level of documentation of the risk-management process to render an appropriate risk assessment should be commensurate with the level of risk (2). See Figure 2 for details.

Figure 2: Documentation level.

Risk-assessment supporting tools

A key early step in the execution of a risk analysis is to determine the appropriate risk-assessment tool, or methodology. There is no single best choice for any given assessment process, and the selection of the appropriate risk methodology should be based on the depth of analysis required, the complexity of the subject risk of concern, and familiarity with the assessment tool. Based on the industry examples reviewed by the PQRI–MTC working group, risk ranking and filtering (sometimes referred to as a risk matrix) and flowcharting were the most popular tools used for basic risk-assessment activities. Correspondingly, failure mode and effects analysis (FMEA) appeared to be the most frequently used methodology for more advanced risk analysis. Some examples demonstrated the power of combining tools for more complex analysis: for example, fault-tree analysis (FTA) or a fishbone diagram can be used to initially scope and evaluate the fault modes of a particular problem, and the results can then feed a hazard analysis and critical control point (HACCP) assessment or a similar tool that evaluates overall system control and effectiveness. Table I provides a list of generally well-recognized risk-management tools.

Table I: Common risk-management tools.
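As one concrete example of the more advanced methods named above, a minimal FMEA-style sketch follows: each failure mode is scored for severity, occurrence, and detection, and the product gives a risk priority number (RPN) used to rank mitigation work. The failure modes and scores below are hypothetical.

```python
# Minimal FMEA-style sketch: severity, occurrence, and detection are each
# scored on a 1-10 scale, and the risk priority number (RPN) is their product.
# The failure modes and scores below are hypothetical.

failure_modes = [
    {"mode": "Interlock fails to stop machine", "severity": 9, "occurrence": 3, "detection": 4},
    {"mode": "Label misprint on packaging", "severity": 5, "occurrence": 4, "detection": 2},
    {"mode": "Filter clogging reduces flow", "severity": 4, "occurrence": 6, "detection": 5},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Highest RPN first: these failure modes get mitigation priority.
for fm in sorted(failure_modes, key=lambda f: -f["rpn"]):
    print(f'{fm["mode"]}: RPN = {fm["rpn"]}')
```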

Each risk subject and assessment warrants consideration of the applicable descriptors of potential risk and related consequences. Ideally, firms should establish a guidance document ahead of any risk analysis, such as the one provided in Table II, to help guide the risk-assessment process and provide for consistency in decision-making company-wide.

Table II: Severity categorization.
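A small sketch of how predefined categorical guidance of this kind can be applied consistently in a risk ranking and filtering exercise appears below; the category names, point values, example risks, and escalation threshold are illustrative assumptions, not the contents of Table II.

```python
# Sketch of risk ranking and filtering against predefined categorical guidance.
# Category names, point values, example risks, and the escalation threshold
# are illustrative assumptions only.

SEVERITY_CATEGORIES = {"negligible": 1, "minor": 2, "moderate": 3, "critical": 4}
PROBABILITY_CATEGORIES = {"remote": 1, "occasional": 2, "probable": 3, "frequent": 4}
ESCALATION_THRESHOLD = 6  # scores at or above this get a formal mitigation plan

risks = [
    ("Out-of-specification raw material lot", "moderate", "occasional"),
    ("Sterility failure in aseptic filling", "critical", "remote"),
    ("Label mix-up during packaging", "critical", "occasional"),
]

scored = [
    (name, SEVERITY_CATEGORIES[sev] * PROBABILITY_CATEGORIES[prob])
    for name, sev, prob in risks
]

# Rank highest first and flag those that exceed the agreed threshold.
for name, score in sorted(scored, key=lambda r: -r[1]):
    action = "escalate" if score >= ESCALATION_THRESHOLD else "monitor"
    print(f"{name}: score={score} ({action})")
```

Defining the categories and threshold before the assessment begins is what keeps the scoring consistent across teams and prevents the results from being adjusted to fit a preferred outcome.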

Risk trainers. In assembling this collection of case studies, the authors recognized the benefit of providing industry with additional background on core risk methodologies. Training tools for the application of risk ranking and filtering, FMEA, FTA, and HAZOP are available online with the web version of this article at PharmTech.com/PQRIstudies . These tools are meant to facilitate greater familiarity with the risk methodology used in each corresponding case study.

The PQRI–MTC Risk Management Working Group solicited and formatted a series of best-practice case studies aligned with ICH Q9 principles. The collected case studies demonstrate that there is a wide range of applications for the use of structured risk-management analysis to facilitate effective quality-decision activities. The studies demonstrate the baseline needed to choose the appropriate risk methodology for the targeted need, taking into account the degree of complexity and risk involved for the specific subject of concern. It is equally important to predefine the potential resulting risk categorizations so as to not be influenced by the assessment results in defining appropriate response actions. Finally, once risks have been appropriately assessed and prioritized, clear risk-mitigating actions must be defined, communicated, implemented and monitored for effectiveness.

Ted Frank is with Merck & Co; Stephen Brooks, Kristin Murray* and Steve Reich are with Pfizer; Ed Sanchez is with Johnson & Johnson; Brian Hasselbalch is with the FDA Center for Drug Evaluation and Research; Kwame Obeng is with Bristol Myers Squibb; and Richard Creekmore is with AstraZeneca.

*To whom all correspondence should be addressed, [email protected] .

1. FDA Global Harmonization Task Force, "Implementation of Risk Management Principles and Activities within a Quality Management System" (Rockville, MD, 2000).

2. ICH, Q9 Quality Risk Management, 2005.

3. FDA, Guidance for Industry: Quality Systems Approach to Pharmaceutical CGMP Regulations (Rockville, MD, 2006).

4. FDA, "Risk-Based Method for Prioritizing CGMP Inspections of Pharmaceutical Manufacturing Sites–A Pilot Risk Ranking Model," (Rockville, MD, 2004).

How to Do a Risk Assessment: A Case Study

John Pellowe

Christian Leadership Reflections

An exploration of Christian ministry leadership led by CCCC's CEO John Pellowe

There’s no shortage of consultants and authors to tell boards and senior leaders that risk assessment is something that should be done. Everyone knows that. But in the chronically short-staffed world of the charitable sector, who has time to do it well? It’s too easy to cross your fingers and hope disaster won’t happen to you!

If that’s you crossing your fingers, the good news is that risk assessment isn’t as complicated as it sounds, so don’t be intimidated by it. It doesn’t have to take a lot of time, and you can easily prioritize the risks and attack them a few at a time. I recently did a risk assessment for CCCC and the process of creating it was quite manageable while also being very thorough.

I’ll share my experience of creating a risk assessment so you can see how easy it is to do.

Step 1: Identify Risks

The first step is obvious – identify the risks you face. The trick is how you identify those risks. On your own, you might get locked into one way of thinking about risk, such as people suing you, so you become fixated on legal risk. But what about technological risks or funding risks or any other kind of risk?

I found that a helpful way to identify the full range of risks is to address risk from three perspectives: the mission, organizational health, and the external environment. Here are some examples from CCCC's assessment:

  • Two of the mission-related risks we identified at CCCC were 1) if we gave wrong information that a member relied upon to their detriment; and 2) if a Certified member had a public scandal.
  • We listed several risks to organization health for CCCC. Among them were 1) a disaster that would shut down our operations at least temporarily, and 2) a major loss from an innovation that did not work.
  • We identified a risk related to the sociopolitical environment.

I began the risk assessment by reviewing CCCC from these three perspectives on my own. I scanned our theory of change, our strategy map, and our programs to identify potential risks. I then reviewed everything we had that related to organizational health, which included our Vision 2020 document (written to proactively address organizational health over the next five years),  financial trends, a consultant’s report on a member survey, and a review of our operations by an expert in Canadian associations. I also thought about our experience over the past few years and conversations I’ve had with people. Finally, I went over everything we know about our environments and did some Internet research to see what else was being said that might affect us.

With all of this information, I then answered questions such as the following:

  • What assumptions have I made about current or future conditions? How valid are the assumptions?
  • What are my nightmare scenarios?
  • What do I avoid thinking about or just hope never happens?
  • What have I heard that went wrong with other organizations like ours?
  • What am I confident will never happen to us? Hubris is the downfall of many!
  • What is becoming more scarce or difficult for us?

At this point, I created a draft list of about ten major risks and distributed it to my leadership team for discussion. At that meeting we added three additional risks. Since the board had asked for a report from staff for them to review and discuss at the next board meeting, we did not involve them at this point.

Step 2: Probability/Impact Assessment

Once you have the risks identified, you need to assess how significant they are in order to prioritize how you deal with them. Risks are rated on two factors:

  • How likely they are to happen (That is, their Probability )
  • How much of an effect could they have on your ministry (Their anticipated Impact )

Each of these two factors can be rated High , Medium , or Low . Here’s how I define those categories.

For Probability:

  • High : The risk either occurs regularly (such as hurricanes in Florida) or something specific is brewing and becoming more significant over time, such that it could affect your ministry in the next few years.
  • Medium : The risk happens from time to time each year, and someone will suffer from it (such as a fire or a burglary). You may have an elevated risk of suffering the problem, or just the same general risk as everyone else. There may also be a general trend that is not a particular problem at present but could affect you over the longer term.
  • Low : It’s possible that it could happen, but it rarely does. The risk is largely hypothetical.

For Impact:

  • High : If the risk happened, it would be a critical life-or-death situation for the ministry. At the least, surviving it would change the future of the ministry; at its worst, the ministry may not be able to recover from the damage and closure would be the only option.
  • Medium : The risk would create a desperate situation requiring possibly radical solutions, but there would be a reasonable chance of recovering from its effects without long-term damage.
  • Low : The risk would cause an unwelcome interruption of normal activity, but the damage could be overcome with fairly routine responses. There would be no question of what to do; it would just be a matter of doing it.

I discussed my assessments of the risks with staff and then listed them in the agreed-upon priority order in six Probability/Impact combinations:

  • High/High – 2 risks
  • High/Medium – 1 risk
  • Medium/High – 2 risks
  • Medium/Medium – 3 risks
  • Low/High – 3 risks
  • Low/Medium – 2 risks

I felt that the combinations High/Low, Medium/Low, and Low/Low weren’t significant enough to include in the assessment. The point of prioritizing is to help you be a good steward as you allocate time and money to address the significant risks. With only thirteen risks, CCCC can address them all, but we know which ones need attention most urgently.
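As a minimal sketch of this prioritization step, the snippet below orders risks by their Probability/Impact combination and drops the combinations treated as insignificant. The example risks echo the ones mentioned earlier, but the list and ordering are illustrative assumptions, not CCCC's actual register.

```python
# Order risks by Probability/Impact combination (illustrative sketch).
# Lower index = higher priority; the ordering mirrors the list above.
PRIORITY_ORDER = [
    ("High", "High"), ("High", "Medium"), ("Medium", "High"),
    ("Medium", "Medium"), ("Low", "High"), ("Low", "Medium"),
]

def prioritize(risks):
    """Sort (name, probability, impact) tuples by the combination ordering above.
    Combinations not in the list (e.g., Low/Low) are dropped as insignificant."""
    kept = [r for r in risks if (r[1], r[2]) in PRIORITY_ORDER]
    return sorted(kept, key=lambda r: PRIORITY_ORDER.index((r[1], r[2])))

risks = [
    ("Operations-halting disaster", "Low", "High"),
    ("Wrong advice relied on by a member", "Medium", "High"),
    ("Public scandal at a Certified member", "High", "High"),
    ("Minor website outage", "Low", "Low"),
]

for name, prob, impact in prioritize(risks):
    print(f"{prob}/{impact}: {name}")
```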

Step 3: Manage Risk

After you have assessed the risks your ministry faces (steps 1 and 2), you arrive at the point where you can start managing  the risks. The options for managing boil down to three strategies:

  • Prevent : The risk might be avoided by changing how you do things. It may mean purchasing additional equipment or redesigning a program. In most cases, though, you probably won’t actually be able to prevent the risk from ever happening. More likely you will only be able to mitigate the risk.
  • Mitigate : Mitigate means to make less severe, serious, or painful. There are two ways to mitigate risk: 1) find ways to make it less likely to happen; and 2) lessen the impact of the risk if it happens. Finding ways to mitigate risk and then implementing the plan will take up most of the time you spend on risk assessment and management. This is where you need to think creatively about possible strategies and action steps. You will also document the mitigating steps you have already taken.
  • Transfer  or Eliminate : If you can’t prevent the risk from happening or mitigate the likelihood or impact of the risk, you are left with either transferring the risk to someone else (such as by purchasing insurance) or getting rid of whatever is causing the risk so that the risk is no longer applicable. For example, a church with a rock climbing wall might purchase insurance to cover the risk or it might simply take the wall down so that the risk no longer exists.

Step 4: Final Assessment

Armed with all this information, it’s time to prepare a risk report for final review by management and then the board. I’ve included a download in this post to help you write the report. It is a template document with an executive summary and then a detailed report. They are partially filled out so you can see how it is used.

After preparing your report, review it and consider whether or not the mitigating steps and recommendations are sufficient. Do you really want to eliminate some aspect of your ministry to avoid risk? Do you believe that whatever action has been recommended is satisfactory and in keeping with the ministry’s mission and values? Are there any other ways to get the same goal achieved or purpose fulfilled without attracting risk?

Finally, after all the risk assessment and risk management work has been done, the ministry is left with two choices:

  • Accept whatever risk is left and get on with the ministry’s work
  • Reject the remaining risk and eliminate it by getting rid of the source of the risk

Step 5: Ongoing Risk Management

On a regular basis, in keeping with the type of risk and its threat, the risk assessment and risk management plan should be reviewed to see if it is still valid. Have circumstances changed? Are the plans working? Review the plan and adjust as necessary.

Key Thought: You have to deal with risk to be a good steward, and it is not hard to do.


Risk Management in IT Projects – Case Study

Artur Biskupek, Trends Economics and Management 12(32):21, December 2018

Case Study: How FAIR Risk Quantification Enables Information Security Decisions at Swisscom

Swisscom is Switzerland’s leading telecom provider. Due to strategic, operational and regulatory requirements, Swisscom Security Function (known internally as Group Security) has implemented quantitative risk analysis using Factor Analysis of Information Risk (FAIR). Over time, Swisscom’s FAIR implementation has enabled Group Security to objectively assess, measure and aggregate security risk. Along the way, Swisscom’s Laura Voicu, a senior security architect, has led the Swisscom security risk initiative.

Introduction

Information risk is the reason businesses have security programs, and a risk management process can be a core security program enabler. With an effective risk program, business risk owners are well-informed about risk areas and take accountability for them. They are able to integrate risk considerations into managing value-producing business processes and strategies. They can express their risk tolerance (i.e., appetite) to technical and operational teams and, at a high level, direct the risk treatment strategies those teams take.

Most organizations now operate as digital businesses with a high reliance on IT. They can benefit by shifting the corporate culture from one that focuses on meeting IT compliance obligations to one that targets overall risk reduction. Visibility into the overall security of the organization plays an important role in establishing this new dialog. Security leaders can prioritize their security initiatives based on the top risk areas that an organization faces.

Swisscom uses quantifiable risk management enabled through Open FAIR to:

  • Communicate security risk to the business
  • Ascertain business risk appetites and improve business owner accountability for risk
  • Prioritize risk mitigation resources based on business impact
  • Calculate the return on investment (ROI) of security initiatives
  • Meet new and more stringent regulatory requirements

Company Background

Swisscom is the leading telecom provider in Switzerland and one of its foremost IT companies, headquartered in Ittigen, near the capital city of Bern. In 2019, its 19,300 employees generated sales of CHF 11,453 million (approximately USD 12,490 million). It is 51 percent owned by the Swiss Confederation and is considered one of Switzerland’s most sustainable and innovative companies. Swisscom offers mobile telecommunications, fixed network, Internet, digital TV solutions and IT services for business and residential customers. Swisscom’s Group Security, which is a centrally managed function at Swisscom, provides policies and standards for all lines of business, while allowing each business to operate independently.

Figure 1

Qualitative Risk Analysis Pain Points

Prior to 2019, Swisscom managed and assessed information risk using qualitative analysis methods. The process was well-suited to quick decisions and easy to communicate with a visually appealing heat map. However, the Swisscom security team identified several fundamental flaws, including bias, ambiguity in meaning (e.g., what does “red” or “high” really mean?) and the likelihood that the person doing the measurement had not taken the time to clearly define what was actually being measured.

For reference, figure 1 illustrates a sample 5x5 heat map plotting nine risk areas (R1 to R9) on a graph where the vertical axis plots the probability of a risk materializing and the horizontal axis plots the hypothetical impact.

Risk Terminology

  • Risk (per FAIR) —The probable frequency and probable magnitude of future loss
  • Open FAIR —Factor Analysis of Information Risk (as standardized by The Open Group)
  • Information risk —Risk of business losses due to IT operational or cybersecurity events
  • Qualitative risk analysis —The practice of rating risk on ordinal scales, such as 1 equals low risk, 2 equals medium risk or 3 equals high risk
  • Quantitative risk analysis —The practice of assigning quantitative values, such as number of times per year for likelihood or frequency, and mapping impact to monetary values
  • Enterprise risk management —The methods and processes used by organizations to manage the business risk universe (e.g., financial, operational, market) as well as to seize opportunities related to the achievement of their objectives

Inconsistent Risk Estimates

Qualitative risk estimates tended to be calculated in an inconsistent manner and were often found to be unhelpful. Because analysts did not use a rigorous risk quantification model such as FAIR to rate risk, they relied on mental models or years of habit.

Early staff experiments with quantifying security risk also failed; per a senior security officer at Swisscom, the reasons for this were, “Too little transparency and too many assumptions. In short: a constant discussion about the evaluation method and not about the risk itself.”

Too Many “Mediums”

Odd things happened: Virtually all risk areas were rated “medium.” A high rating is a strong statement and draws unwanted attention to the risk from business management, who might then demand some strong justification for the rating. A low rating would look foolish if something bad actually happened. Rating a risk “medium” is the safe way out.

Inability to Prioritize Risk Issues

Although utilizing qualitative methods may provide some prioritization capability (a risk rated red is some degree worse than one rated yellow), Swisscom had no way of economically evaluating the difference between a red and a yellow, between one red and two yellows, or even between two yellows such as R1 and R9 as shown in figure 1. In short, Swisscom had poor visibility into the security risk landscape, thus potentially misprioritizing critical issues. Over time, Swisscom staff came to share the FAIR practitioner community objections articulated in the article “Thirteen Reasons Why Heat Maps Must Die.” 1

Demand for More Accurate Risk Assessments After a Breach

In 2018, Swisscom went public to announce a large data breach. Swisscom took immediate action to tighten the internal security measures to prevent such an incident from happening again. Further precautions were introduced in the course of the year.

Following the data breach, Swisscom IT and security executives sought to improve the risk assessment process. Staff had made early attempts to quantify security risk using single numerical values, or single-point estimates of risk by assigning values for discrete scenarios to see what the outcome might be in each. This technique provided little visibility into the uncertainty and variability surrounding the risk estimate.

Establishing a Quantitative Risk Analysis Program

Swisscom’s Group Security team learned about FAIR in 2018 and became convinced that its model was superior to in-house risk quantification approaches that the team had attempted to use in the past. FAIR allows security professionals to present estimates of risk (or loss exposure) that show decision-makers a range of probable outcomes. Using ranges brings a higher degree of accuracy to estimates with enough precision to be useful.

The decision was made to use FAIR in 2018 and Senior Security Architect Laura Voicu was assigned to lead a core team of a few part-time FAIR practitioners. The risk project’s initial phase was to define risk scenarios in a consistent manner throughout Swisscom. As a result of this work effort, the team produced a formal definition and consistent structure for normalizing risk register entries into FAIR-compliant nomenclature, shown in figure 2.

Figure 2

The FAIR team performed multiple analyses and continued to deepen its experience with the quantitative approach. As a best practice, the team interviewed or held workshops with subject matter experts (SMEs) on controls, incidents, impacts and other areas representing variables in the FAIR analysis.

Starting in early 2019, a small group of stakeholders within the security organization conducted a proof of concept (POC) to perform assessments of the customer portal data breach risk, risk associated with different cloud workload migration strategies, outage of systems or networks due to ransomware and, recently, remote working use cases to continue operating amid the COVID-19 disruption.

In parallel, Group Security defined roles, analysis processes and risk management processes. The team defined the following roles:

  • Risk reporters —Security professionals who help identify and report security risk. Risk reporters work interdepartmentally to identify, assess and reduce security risk factors by recommending specific measures that can improve the overall security posture. They also have the overall responsibility to oversee the coordinated activities to direct and control risk.
  • Risk owners —Business owners and operations managers who manage the security risk scenarios that exist within their business areas. They are responsible for implementing corrective actions to address process and control deficiencies, and for maintaining effective controls on a day-to-day basis. They assume ownership, responsibility and accountability for directly controlling and mitigating risk.

The team also established the following processes:

  • Identification —Uncover the risk factors (or potential loss events) and define them in a detailed, structured format. Assign ownership to the areas of risk.
  • Assessment —Assess the probable frequency of risk occurrence, and the probable impacts. This helps prioritize risk. It also enables comparison of risk relative to each other and against the organization’s risk appetite.
  • Response —Define an approach for treating each assessed risk factor. Some may require no actions and only need to be monitored. Other risk factors considered unacceptable require an action plan to avoid, reduce or transfer them.
  • Monitoring and reporting —Reporting is a core part of driving decision-making in effective risk management. It enables transparent communication to the appropriate levels (according to Swisscom’s internal rules of procedure and accountability) of the net or residual risk.

Thus, the risk analysis processes normalize risk scenarios into the FAIR model, prioritize them and assess the actual financial loss exposure associated with each risk scenario. In parallel to the strategic risk analysis of the top risk areas, the FAIR team can also provide objective analysis to support tactical day-to-day risk or spending decisions. These analyses can help assess the significance of individual audit findings and efficacy of given controls, and can also justify investments and resource allocations based on cost-benefit.

The FAIR team is constantly improving and simplifying the process of conducting quantitative risk assessments using the FAIR methodology. In a workshop-based approach, the team tries to understand the people, processes and technologies that pose a risk to the business.

Ongoing Work Items

As of early 2020, Swisscom’s core FAIR team consists of three part-time staff members. This team is part of a virtual community of practitioners concerned with security risk management in the company.

The team continues to drive the following work items:

  • Risk scenario analysis
  • Risk scenario reporting
  • Risk portfolio analysis and reporting
  • Internal training
  • Improving the tool chain
  • Improving risk assessment processes

Risk Scenario Analysis

The FAIR team performs the deep analysis of risk scenarios using an open-source tool adapted for Swisscom’s use. Based on the analysis, it provides quantitative estimates for discussion with risk, IT and business analysts (figure 3).

Figure 3

Figure 3’s loss exceedance curve depicts a common visualization of FAIR risk analysis output. The Y axis, Probability of Loss or Greater, shows the percentage of Monte Carlo simulations that resulted in a loss exposure greater than the financial loss amount on the X axis. Each Monte Carlo simulation is like a combination of random coin tosses of all the risk components of the FAIR risk ontology shown in figure 2. During the analysis, the FAIR team generates calibrated estimates for the range of values for each risk component. A calibrated estimate is an SME’s best estimate of the minimum, maximum and most likely value of the risk factor. Each estimated risk factor in the ontology is fed into the Monte Carlo simulation by the FAIR tool.
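To make the mechanics concrete, the following is a minimal sketch of a two-factor, FAIR-style Monte Carlo simulation that produces loss exceedance probabilities. It is not Swisscom's tool and not the full FAIR ontology: the triangular distributions, frequencies, magnitudes and thresholds are invented for illustration, and a real FAIR tool would use calibrated estimates for each component of the ontology.

```python
# Minimal FAIR-style Monte Carlo sketch (illustrative numbers, not Swisscom data).
import numpy as np

rng = np.random.default_rng(7)
N = 50_000  # number of simulated years

# Calibrated-style (min, most likely, max) estimates -- hypothetical values.
lef = rng.triangular(0.1, 0.5, 2.0, N)             # loss event frequency per year
magnitude_min, magnitude_mode, magnitude_max = 50_000, 400_000, 3_000_000

annual_loss = np.zeros(N)
events = rng.poisson(lef)                          # number of loss events in each simulated year
for i, n_events in enumerate(events):
    if n_events:
        losses = rng.triangular(magnitude_min, magnitude_mode, magnitude_max, n_events)
        annual_loss[i] = losses.sum()

# Loss exceedance: probability that annual loss meets or exceeds each threshold.
for threshold in (100_000, 500_000, 1_000_000, 5_000_000):
    prob = (annual_loss >= threshold).mean()
    print(f"P(annual loss >= CHF {threshold:>9,}) = {prob:.1%}")
```

Plotting the thresholds against the exceedance probabilities yields a loss exceedance curve of the kind shown in figure 3.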

Although the SMEs tend to provide fact-based, objective information for use in estimates to the best of their abilities, challenges can arise when presenting initial completed analyses to stakeholders.

“Risk owners tend to want to push the numbers down, but security leaders try to keep them up,” Voicu explained.

Often, however, the stakeholders can meet in the middle for a consensus and come together on risk treatment proposals with a strong return on security investment (ROSI) measured by the difference between the inherent risk analysis and the residual risk analysis.

In the case of the customer portal data breach scenario, the FAIR team and the business stakeholders agreed on adding two-factor authentication (2FA) for portal users. This solution had a low cost because Swisscom already possessed the 2FA capability and needed only to change the default policy configuration to require 2FA. Figure 5 shows a diagram of the current (or inherent) vs. residual risk analysis amounts using fictional numbers aligned with the assessment shown in figure 4 . The current risk depicts the amount of risk estimated to exist without adding new controls to the current state. The residual risk shows the amount of risk estimated to exist after the hypothetical addition of the new 2FA control.
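The comparison behind that kind of chart reduces to simple arithmetic: the value of a control is the expected loss reduction it buys, net of its cost. The sketch below uses invented figures in the same spirit as the fictional numbers mentioned above; it is not Swisscom's analysis, and the ROSI formula shown is one common convention rather than a prescribed calculation.

```python
# Inherent vs. residual risk comparison with a simple ROSI figure (fictional numbers).
inherent_annual_loss = 1_800_000   # expected annualized loss before requiring 2FA
residual_annual_loss = 300_000     # expected annualized loss after the 2FA control
control_cost = 50_000              # yearly cost of operating the control

risk_reduction = inherent_annual_loss - residual_annual_loss
rosi = (risk_reduction - control_cost) / control_cost

print(f"Risk reduction: CHF {risk_reduction:,}")
print(f"Return on security investment (ROSI): {rosi:.1f}x the control cost")
```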

Figure 4

Risk Scenario Reporting

Once the analysts reach a consensus on estimates during working meetings, the FAIR team provides management reports using one-page summaries with quantitatively scaled, red-yellow-green diagrams based on the risk thresholds (i.e., risk appetite) of the risk owner (figure 4). The Swisscom FAIR team has found that management often trusts the team’s analysis and does not want to see the FAIR details. However, the numerical analysis drill-down is available if management wishes to understand or question the risk ratings and recommendations.

Risk Portfolio Analysis and Reporting

Strategic risk analyses are typically driven by boards and C-level executives with the intent of understanding, communicating and managing security risk holistically and from a business perspective. This enables executives to define their risk appetite and boards to approve it. The organization can also right-size security budgets, prioritize risk mitigation initiatives and accept certain levels of risk. Strategic risk analyses conducted by the FAIR team can be used to measure risk trending over time. The FAIR team began providing a strategic risk analysis report on a quarterly basis to the board of directors in early 2020. Figure 6 provides an example.

Figure 6

Internal Training

The team began by socializing FAIR concepts among the cybersecurity functions and other internal groups to establish a broader FAIR adoption. The team provided workshops and training for additional security staff as well as stakeholders and aims to further extend training offerings.

Improving the Tool Chain

Swisscom has assessed several FAIR risk quantification tools:

  • Basic risk analysis —Pen and paper, qualitative method using Measuring and Managing Information Risk: A FAIR Approach 2
  • FAIR-U —Free, basic version of RiskLens. For noncommercial use only. Registration required.
  • RiskLens —Commercial, fee-based FAIR application
  • Evaluator —Free open-source application, OpenFAIR implementation built and run on R + Shiny
  • PyFair —FAIR implementation built on Python
  • FAIR Tool —Free open-source application built on R + Shiny
  • OpenFAIR Risk Analysis Tool —OpenGroup’s Excel-based application. Registration required.
  • RiskQuant —Open-source application built in Python

In the end, Swisscom opted to develop the tool in-house by adapting the RiskQuant analysis module. Swisscom is improving the tool chain by enhancing the analysis module with reporting capabilities and multi-scenario aggregated analysis capabilities. The in-house tool is designed to support the entire security risk management life cycle—from risk identification and scoping to risk analysis and prioritization to the evaluation of risk mitigation options to risk reporting. The team is progressively adding additional modules to the in-house tool, such as:

  • Decision support —Enabling decisions on the best risk mitigation options based on their effectiveness in reducing financial loss exposure. The tool already provides the capability for conducting comparative and cost-benefit analyses to assess what changes in security strategy or what risk mitigation options provide the best ROI.
  • Security data warehouse —Swisscom’s existing security data warehouse defines, stores and manages critical assets in a central location. Risk tools can leverage this information in risk scenarios related to assets. Stakeholders can also view the risk areas and issues associated with their assets and understand the risk posture on a continuous basis.
  • Risk portfolio —The module aims to provide a deeper understanding of enterprise risk as well as aggregate or portfolio views of risk across business units. This module will also allow Swisscom to set key metrics to measure and manage cyberrisk, such as risk appetite, and conduct enterprise-level what-if analyses.

Improving Risk Assessment Processes

To enhance Swisscom’s ability to identify risk scenarios deserving full FAIR analyses, the FAIR team is creating a triage questionnaire that will enable IT and security staff to perform a quick assessment of issues before submitting them as risk areas for analysis. The triage consists of 10 yes-or-no questions and requires less than 15 minutes to complete.
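A triage questionnaire of this kind is straightforward to automate. The sketch below is a hypothetical stand-in: the article does not list Swisscom's ten questions, so the questions and the threshold here are invented for illustration.

```python
# Hypothetical triage sketch: count "yes" answers and flag issues for full FAIR analysis.
TRIAGE_QUESTIONS = [
    "Does the issue involve customer or personal data?",
    "Could the issue interrupt a revenue-generating service?",
    "Is the affected system exposed to the internet?",
    "Is there a known, active threat exploiting this weakness?",
    "Would an incident trigger regulatory reporting obligations?",
]  # ...the real questionnaire has 10 yes/no questions, which are not published

def needs_full_analysis(answers, threshold=2):
    """Recommend a full FAIR analysis if the number of 'yes' answers reaches the threshold."""
    yes_count = sum(1 for answer in answers if answer)
    return yes_count >= threshold

answers = [True, False, True, False, False]
print("Submit for full FAIR analysis:", needs_full_analysis(answers))
```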

Lessons Learned

It is instructive to review lessons learned after establishing a risk program:

  • Bring the discussion to the business owners of the risk and the budget. Prior to the FAIR program, the risk acceptance process was not formally aligned to Swisscom’s rules of procedures and accountability. These rules provide a process whereby executives are authorized to accept risk up to certain levels, and how to decide whether higher risk can be accepted. When the FAIR program was introduced, Swisscom began identifying the executives who will end up covering the losses if risk scenarios actually materialize. With very rare exceptions, those identified business executives should also be responsible for owning or accepting risk.
  • Focus on the assumptions, not the numbers. As noted earlier, risk ratings or quantities can become politicized. Some parties may desire lower or higher results depending on their own agendas. The FAIR model can act as a neutral arbiter if stakeholders understand the assumptions. Although participants in the risk process will always have agendas, focusing on assumptions puts the discussion on a more logical footing.
  • Be flexible about reporting formats. Once risk analysts learn FAIR, there can be a temptation to take a “purist” position and evangelize the methodology too ardently. However, not all stakeholders were interested in the complexity of simulations and ontology. The Swisscom FAIR team found that the one-page risk summary using a familiar “speedometer” diagram (figure 4) facilitated easier acceptance of quantitative analysis results from the business risk owners. It should be noted that quantitative risk values still underlie the one-page summary. Behind the scenes, quantitative risk appetites and risk estimates determine a risk’s status as red, yellow or green.
  • Maintain momentum. When the FAIR journey started, the project scope was fluid. The FAIR team has found that the more the scope expanded, the more resources were required to provide increasing value. What started as a short-term opportunity to normalize and prioritize risk turned into a long-term journey to manage a portfolio of security investments.

Swisscom is currently preparing to begin tracking formal risk metrics. Figure 7 displays planned metrics and observations on the data collected or expected at this time.

Figure 7

Swisscom considers the benefits of the FAIR process to be that the company can:

  • Objectively assess information risk, which enhances the ability to approve large security initiatives
  • Measure aggregated information risk exposure
  • Break out risk exposure for business units, risk categories and top assets or crown jewels

The team is optimistic as of 2020 about the ability of the FAIR program to enable data-driven decision-making. The team is improving its risk reporting portfolio to produce reports such as the ones shown in figure 6 both at an enterprise level and at the business unit level. The team plans to conduct ROI analyses to assess the effectiveness of security spending. It is also currently in discussions with operational risk management and enterprise risk management (ERM) functions on the possibility of expanding the use of FAIR, especially in the domain of operational availability risk.

1 Salah, O.; “Thirteen Reasons Why Heat Maps Must Die,” FAIR Institute Blog, 28 November 2018, https://www.fairinstitute.org/blog/13-reasons-why-heat-maps-must-die

2 Freund, J.; J. Jones; Measuring and Managing Information Risk: A FAIR Approach, Butterworth-Heinemann, United Kingdom, 2014, p. 205–214

Dan Blum, CISSP, Open FAIR

Is an internationally recognized strategist in cybersecurity and risk management. His forthcoming book is Rational Cybersecurity for the Business . He was a Golden Quill Award-winning vice president and distinguished analyst at Gartner, Inc., has served as the security leader at several startups and consulting companies, and has advised hundreds of large corporations, universities and government organizations. Blum is a frequent speaker at industry events and participates in industry groups such as ISACA ® , FAIR Institute, IDPro, ISSA, the Cloud Security Alliance and the Kantara Initiative.

Laura Voicu, Ph.D.

Is an experienced and passionate enterprise architect with more than 10 years of experience in telecommunication and other industries. She is a leader in enterprise and data architecture, cybersecurity and quantitative risk analysis. Her latest passion is data science and driving innovation with a focus on big data and machine learning. Voicu frequently presents at conferences and volunteers as an ISACA SheLeadsTech Ambassador.

Risk Assessment for Collaborative Operation: A Case Study on Hand-Guided Industrial Robots

Reviewed: 17 August 2017 Published: 20 December 2017

DOI: 10.5772/intechopen.70607

From the edited volume Risk Assessment, edited by Valentina Svalova

Risk assessment is a systematic and iterative process, which involves risk analysis, where probable hazards are identified, and then corresponding risks are evaluated along with solutions to mitigate the effect of these risks. In this article, the outcome of a risk assessment process will be detailed, where a large industrial robot is used as an intelligent and flexible lifting tool that can aid operators in assembly tasks. The realization of a collaborative assembly station has several benefits, such as increased productivity and improved ergonomic work environment. The article will detail the design of the layout of a collaborative assembly workstation, which takes into account the safety and productivity concerns of automotive assembly plants. The hazards associated with hand-guided collaborative operations will also be presented.

  • hand-guided robots
  • industrial system safety
  • collaborative operations
  • human-robot collaboration
  • risk assessment

Author Information

Varun Gopinath*

  • Division of Machine Design, Department of Management and Engineering, Linköping University, Sweden

Kerstin Johansen

Johan Ölvander

*Address all correspondence to: [email protected]

1. Introduction

In a manufacturing context, collaborative operations refer to specific applications where operators and robots share a common workspace [ 1 , 2 ]. This allows operators and industrial robots to share assembly tasks within the pre-defined workspace—referred to as collaborative workspace—and this ability to work collaboratively is expected to improve productivity as well as the working environment of the operator [ 3 ].

As pointed out by Marvel et al. [ 1 ], collaborative operation implies that there is a higher probability for occurrence of hazardous situations due to close proximity of humans and industrial robots. The hazardous situations can lead to serious injury and, therefore, safety needs to be guaranteed while developing collaborative applications [ 4 ].

ISO 10218-1 [ 5 ] and ISO 10218-2 [ 6 ] are international standards aimed at specifying requirements for safety on the design of industrial robots and robotic systems, respectively. They recognize collaborative applications and list four specific types of collaborative operations, namely (1) safety-rated monitored stop, (2) hand-guiding, (3) speed and separation monitoring, and (4) power and force limiting that can be implemented either individually or as a combination of one or more types.

As industrial robots and robotic systems are designed and integrated into specific manufacturing applications, the safety standards state that a risk assessment needs to be conducted to ensure safe and reliable operations. Risk assessment, as standardized in ISO 12100 [ 7 ], is a detailed and iterative process of (1) risk analysis followed by (2) risk evaluation. The safety standards also state that the effect of residual risks needs to be eliminated or mitigated through appropriate risk reduction measures. The goal of a risk assessment program is to ensure that operators, equipment, and the environment are protected.

As pointed out by Ericson [ 8 ], hazard identification is a critical step that relies on the cognitive process of hazard recognition, whereas the solutions to mitigate the risks are relatively straightforward. Etherton et al. noted that designers lack a database of known hazards during the innovation and design stages [ 9 ]. The robot safety standards (ISO 10218-1 [ 5 ] and ISO 10218-2 [ 6 ]) also tabulate a list of significant hazards whose purpose is to inform risk assessors of probable inherent dangers associated with robots and robotic systems. Therefore, a case study [ 10 ] is used to investigate the characteristics of hazards and the associated risks that are relevant for collaborative operation. The study is focused on a collaborative assembly station, where large industrial robots and operators are to share a common workspace enabled through the application of a systematic and standardized risk assessment process followed by risk reduction measures.

This article is structured as follows: Section 2 presents an overall description of the methodology used to conduct the research, along with its limitations; Section 3 details the theoretical background; and Section 4 presents the results, followed by a discussion and concluding remarks on future work.

1.1. Background

Recently, there have been many technological advances within the area of robot control which aim to solve perceived issues associated with robot safety [ 11 ]. A safe collaborative assembly cell, where operators and industrial robots collaborate to complete assembly tasks, is seen as an important technological solution for several reasons, including (1) the ability to adapt to market fluctuations and trends [ 12 ], (2) the possibility of decreasing takt time [ 13 , 14 ], and (3) an improved working environment through a reduced ergonomic load on the operator [ 15 ].

The automotive final assembly context studied here can be characterized as:

having a high production rate, where the capacity of the plant can vary significantly depending on several factors, such as variant, plant location, etc.

being dependent on manual labor, as the nature of assembly tasks requires highly dexterous motion with good hand-eye coordination along with general decision-making skills.

Though operators are often aided by powered tools to carry out assembly tasks, such as pneumatic nut-runners as well as lifting tools, there is a need to improve the ergonomics of their work environment. As pointed out by Ore et al. [ 15 ], there is demonstrable potential for collaborative operations to aid operators in various tasks including assembly and quality control.

Earlier attempts at introducing automation devices, such as cobots [ 13 , 16 ], have resulted in custom machinery that functions as ergonomic support. Recently, industrial robots specifically designed for collaboration, such as the UR10 [ 17 ] and KUKA iiwa [ 18 ], have become available. They can be characterized as (1) having the ability to detect collisions with any part of the robot structure and (2) having a smaller payload capacity and shorter reach compared to traditional industrial robots. The latter feature, coupled with the ability to detect collisions, fulfills the condition for power and force limiting.

Industrial robots that do not have the power and force limiting feature, such as the KUKA KR210 [ 18 ] or the ABB IRB 6600 [ 19 ], have traditionally been used within fenced workstations. In order to enter the robot workspace, the operator was required to deliberately open a gate, which is monitored by a safety device that stops all robot and manufacturing operations within the workstation. As mentioned before, the purpose of the research project was to explore collaborative operations where traditional industrial robots are employed for assembly tasks. These robots have the capacity to carry heavy loads with long reach, which can be effective for various assembly tasks. However, these advantages correspond to an inherent source of hazard that needs to be understood and managed with appropriate safety-focused solutions.

2. Working methodology

To take advantage of the physical performance characteristics of large industrial robots along with the advances in sensor and control technologies, a research project, ToMM [ 20 ], comprising members representing the automotive industry, research, and academic institutions was tasked with understanding and specifying industry-relevant safety requirements for collaborative operations.

2.1. Industrial relevance

The requirements for safety that are relevant for the manufacturing industry are detailed in various standards such as ISO EN 12100 and ISO EN 10218 (parts 1 and 2), which are maintained by organizations such as the International Organization for Standardization (ISO [ 21 ]) and the International Electrotechnical Commission (IEC [ 22 ]). Though these organizations do not have the authority to enforce the standards, a legislative body such as the European Union, through the EU Machinery Directive, mandates compliance with normative standards [ 23 ], which are prefixed with EN before their reference number.

2.2. Problem study and data collection

The problem study and data collection included the following activities:

  • Regular meetings in order to have detailed discussions with engineers and line managers at the assembly plant [ 24 ].
  • Visits to the plant, which allowed the researchers to directly observe the functioning of the station and to have informal interviews with line workers regarding the assembly tasks as well as the working environment.
  • Participation in the assembly process, guided by the operators, which allowed the researchers to gain an intuitive understanding of the nature of the task.
  • A review of literature sourced from academia and books, as well as documentation from various industrial equipment manufacturers.

2.3. Integrating safety in early design phase

Introduction of a robot into a manual assembly cell might lead to unforeseen hazards whose potential to cause harm needs to be eliminated or minimized. The machinery safety standard [ 7 ] suggests the practice of conducting risk assessment followed by risk reduction measures to ensure the safety of the operator as well as other manufacturing processes. The risk assessment process is iterative and concludes when all probable hazards have been identified and solutions to mitigate the effects of these hazards have been implemented. This process is usually carried out through a safety program and can be documented according to [ 25 ].

Figure 1 depicts an overview of the safety-focused design strategy employed during the research and development phase. The case study was analyzed through a conceptual study to understand the benefits of collaborative operations, where the overall robot, operator, and collaborative tasks were specified. Employing the results of the conceptual study, the risk assessment methodology followed by risk reduction was carried out, with each phase supported by the use of demonstrators. Björnsson [ 26 ] and Jonsson [ 27 ] have elaborated the principles of demonstrator-based design along with their perceived benefits, and this methodology has been employed in this research work within the context of safety for collaborative operations.

Figure 1: Overview of the demonstrator-based design methodology employed to ensure a safe collaborative workstation.

3. Theoretical background

In this section, an overview of industrial robots is given first; concepts from hazard theory, industrial system safety and reliability, and the task-based risk assessment methodology are then detailed.

3.1. Industrial robotic system and collaborative operations

An industrial robot is defined as an automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which can be either fixed in place or mobile for use in industrial automation applications [ 28 ]. Figure 2(A) shows an illustration of an articulated six-axis manipulator along with the control cabinet and a teach pendant. The control cabinet houses various control equipment such as motor controller, input/output modules, network interfaces, etc.

Figure 2: (A) An example of a manipulator along with the control box and the teach pendant. Examples include the KUKA KR-210 [ 18 ] and ABB IR 6620 [ 19 ]. (B) Illustration of the interaction between the three participants of a collaborative assembly cell within their corresponding workspaces [ 3 ].

The teach pendant is used to program the robot, where each line of code establishes a robot pose—in terms of coordinates x, y, z and angles A, B, C—which, when executed, allows the robot to complete a task. This method of programming is referred to as position control, where individual robot poses are explicitly hard coded. In contrast to position control, sensor-based control allows motion control to be regulated by sensor values. Examples of sensors include vision, force and torque, etc.
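As a rough illustration of position control, a program of this kind is essentially a fixed sequence of hard-coded poses. The sketch below is generic Python, not the proprietary programming language of any particular controller, and the pose values are invented.

```python
# Generic sketch of hard-coded, position-controlled move targets (not vendor robot syntax).
from dataclasses import dataclass

@dataclass
class Pose:
    x: float   # tool position in mm
    y: float
    z: float
    a: float   # tool orientation angles in degrees
    b: float
    c: float

# A position-controlled task is a fixed sequence of explicitly programmed poses.
pick = Pose(x=1200.0, y=-350.0, z=800.0, a=0.0, b=90.0, c=0.0)
place = Pose(x=600.0, y=450.0, z=950.0, a=0.0, b=90.0, c=180.0)

for target in (pick, place):
    print("move_to", target)   # a real controller would command the corresponding joint motion here
```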

On a manufacturing line, robots can be programmed to move at high speed undertaking repetitive tasks. This mode of operation is referred to as automatic mode and allows the robot controller to execute the program in a loop, provided all safety functions are active. Additionally, ISO 10218-1 [ 5 ] defines a manual reduced-speed mode to allow safe programming and testing of the intended function of the robotic system, where the speed is limited to 250 mm/s at the tool center point. The manual high-speed mode allows the robot to be moved at high speed, provided all safety functions are active, and this mode is used for verification of the intended function.

The workspace within the robotic station where robots run in automatic mode is termed Robot Workspace (see Figure 2(B) ). In collaborative operations, where operators and robots can share a workspace, a clearly defined Collaborative Workspace is suggested by [ 29 ]. Though the robot can be moved in automatic mode within the collaborative workspace, the speed of the robot is limited [ 29 ] and is determined during risk assessment.

Safety-rated monitored stop stipulates that the robot ceases its motion with a category 2 stop when the operator enters the collaborative workspace. In a category 2 stop, the robot can decelerate to a stop in a controlled manner.

Hand-guiding allows the operator to send position commands to the robot with the help of a hand-guiding tool attached at or close to the end-effector.

Speed and separation monitoring allows the operator and the robot to move concurrently in the same workspace provided that there is a safe separation distance between them which is greater than the prescribed protective separation distance determined during risk assessment (a simplified check is sketched after these definitions).

Power and force limiting operation refers to robots that are designed to be intrinsically safe and allows contact with the operator provided it does not exert force (either quasi-static or transient contact) larger than a prescribed threshold limit.
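For speed and separation monitoring, the protective separation distance is determined during risk assessment according to the applicable standard (ISO/TS 15066 gives the normative calculation). The sketch below is only a simplified, illustrative kinematic check: the distance the operator and robot can close while the system reacts and the robot stops, plus an uncertainty margin. It is not the standard's formula, and all values are assumptions.

```python
# Simplified, illustrative separation check -- NOT the normative ISO/TS 15066 calculation.
def protective_separation(v_human, v_robot, t_reaction, t_stop, margin):
    """Distance (m) the human and robot can close while the system reacts and the
    robot stops, plus a fixed uncertainty margin. All inputs are assumed values;
    the robot is conservatively assumed to keep its speed until fully stopped."""
    human_travel = v_human * (t_reaction + t_stop)
    robot_travel = v_robot * (t_reaction + t_stop)
    return human_travel + robot_travel + margin

# Hypothetical values: walking speed 1.6 m/s, robot 0.5 m/s, 0.1 s reaction, 0.4 s stopping time.
s_required = protective_separation(1.6, 0.5, 0.1, 0.4, margin=0.2)
s_measured = 1.5  # current separation reported by the sensing system, in metres

print(f"required >= {s_required:.2f} m, measured = {s_measured:.2f} m")
print("OK" if s_measured >= s_required else "Trigger protective stop")
```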

3.2. Robotic system safety and reliability

An industrial robot normally functions as part of an integrated manufacturing system (IMS) where multiple subsystems that perform different functions operate cohesively. As noted by Leveson (page 14 [ 30 ]), safety is a system property (not a component property) and needs to be controlled at the system level. This implies that safety as a property needs to be considered in the early design phases, which Ericson (page 34 [ 8 ]) refers to as CD-HAT, or Conceptual Design Hazard Analysis Type. CD-HAT is the first of seven hazard analysis types that need to be considered during the various design phases in order to avoid costly design rework.

To realize a functional IMS, a coordinated effort in the form of a system safety program (SSP [ 8 ]), which involves participants with various levels of involvement (such as operators, maintenance, and line managers), is carried out. Risk assessment and risk reduction processes are conducted in conjunction with the development of an IMS in order to promote safety during development, commissioning, maintenance, upgrades, and finally decommissioning.

3.2.1. Functional safety and sensitive protective equipment (SPE)

Functional safety refers to the use of sensors to monitor for hazardous situations and take evasive actions upon detection of an imminent hazard. These sensors are referred to as sensitive protective equipment (SPE), and the selection, positioning, configuration, and commissioning of this equipment have been standardized and detailed in IEC 62046 [ 31 ]. IEC 62046 defines the performance requirements for this equipment and, as stated by Marvel and Norcross [ 32 ], when triggered, these sensors use electrical safety signals to activate the safety functions of the system. The standard includes provisions for two specific types: (1) electro-sensitive protective equipment (ESPE) and (2) pressure-sensitive protective equipment (PSPE). These are to be used for the detection of the presence of human beings and can be used as part of the safety-related system [ 31 ].

Electro-sensitive protective equipment (ESPE) uses optical, microwave, and passive infrared techniques to detect operators entering a hazard zone. That is, unlike a physical fence, where the operators and the machinery are physically separated, ESPE relies on detecting an operator entering a specific zone to trigger the safety function. Examples include laser curtains [ 33 ], laser scanners [ 34 ], and vision-based safety systems such as the SafetyEye [ 35 ].

Pressure-sensitive protective equipment (PSPE) has been standardized in parts 1–3 of ISO 13856 and works on the principle of an operator physically engaging a specific part of the workstation. These include: (1) ISO 13856-1—pressure-sensitive mats and floors [ 36 ]; (2) ISO 13856-2—pressure-sensitive bars and edges [ 37 ]; and (3) ISO 13856-3—bumpers, plates, wires, and similar devices [ 38 ].

3.2.2. System reliability

Successful robotic systems are both safe to use and reliable in operation. In an integrated manufacturing system (IMS), reliability is the probability that a component of the IMS will perform its intended function under pre-specified conditions [ 39 ]. One measure of reliability is the MTTF (mean time to failure); related reliability measures have been standardized into five discrete performance levels (PL), ranging from a to e, defined in terms of the average probability of a dangerous failure per hour. For example, PL = d corresponds to a probability of dangerous failure per hour in the range 10⁻⁷ to 10⁻⁶, which is the performance level required, with a category 3 structure, by ISO 10218-2 (page 10, Section 5.2.2 [ 6 ]). That is, in order to be viable for industry, the final design of the robotic system should reach or exceed the minimum required performance level.
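The small sketch below maps an average probability of dangerous failure per hour (PFHd) to a performance level, using the ranges commonly tabulated in ISO 13849-1 (a standard not among the references cited here). The boundary values and the function itself are assumptions made for illustration and should be checked against the current edition of the standard.

```python
# Illustrative sketch: mapping an average probability of dangerous failure
# per hour (PFHd) to a performance level (PL). The ranges below follow the
# values commonly tabulated in ISO 13849-1 and are assumptions for this
# example; verify them against the standard before use.

def performance_level(pfh_d: float) -> str:
    """Return the performance level (a-e) for a given PFHd value (per hour)."""
    if 1e-8 <= pfh_d < 1e-7:
        return "e"
    if 1e-7 <= pfh_d < 1e-6:
        return "d"
    if 1e-6 <= pfh_d < 3e-6:
        return "c"
    if 3e-6 <= pfh_d < 1e-5:
        return "b"
    if 1e-5 <= pfh_d < 1e-4:
        return "a"
    raise ValueError("PFHd outside the tabulated range")

# A component with an assumed PFHd of 5e-7 per hour would reach PL = d,
# which meets the minimum level discussed for the robotic system here.
print(performance_level(5e-7))  # -> "d"
```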

3.3. Hazard theory: hazards, risks, and accidents

Ericson [ 8 ] states that a mishap or an accident is an event that occurs when a hazard, or more specifically a hazardous element, is acted upon by an initiating mechanism. That is, a hazard is a prerequisite for an accident to occur; it is defined as a potential source of harm [ 7 ] and is composed of three basic components: (1) hazardous element (HE), (2) initiating mechanism (IM), and (3) target/threat (T/T).

A hazardous element is a resource that has the potential to create a hazard. A target/threat is the person or the equipment directly affected when the hazardous element is activated by an initiating mechanism. These three components, when combined, constitute a hazard (see Figure 3(A) ) and are essential for it to exist. Based on these definitions, if any of the three components is removed or eliminated by any means (see Section 3.4.2), it is possible to eliminate or reduce the effect of the hazard.


(A) The hazard triangle, where the three components of a hazard—hazardous element, initiating mechanism, and target/threat—are essential and required for the hazard to exist (adapted from page 17 [ 8 ]). (B) The layout of the robotic workstation where a fatal accident took place on July 21, 1984 [ 40 ].

To better illustrate these concepts, consider the fatal accident that took place on July 21, 1984, where an experienced operator entered a robotic workstation while the robot was in automatic mode (see Figure 3(B) ). The robot was programmed to grasp a die-cast part, dip the part in a quenching tank, and place it on an automatic trimming machine. According to Sanderson et al. [ 40 ], the operator was found pinned between the robot and a safety pole by an operator of an adjacent die-cast station, who became curious after hearing the hissing noise of the air hose for 10–15 min. The function of the safety pole was to limit robot motion; together with the robot arm, it can be considered a hazardous element. The hazard was initiated by the operator, who intentionally entered the workstation either by jumping over the rails or through a 19-inch unguarded spacing, and this caused the accident. The operator was the target of this unfortunate accident and was pronounced dead five days after the accident.

A hazard is designed into a system [ 8 , 30 ], and whether an accident occurs depends on two factors: (1) the unique set of hazard components and (2) the accident risk presented by the hazard components, where risk is defined as the combination of the probability of occurrence of harm and the severity of that harm [ 7 ].
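A common quantitative sketch of this definition, using the multiplicative convention found in many system-safety texts (this exact form is an assumption of this summary, not necessarily the expression used in [ 8 ]), is:

$$ R = P \times S $$

where R is the accident (mishap) risk, P is the probability that the initiating mechanism actuates the hazardous element over a stated exposure interval, and S is the severity of the resulting harm to the target/threat.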

Ericson notes that a good hazard description can support the risk assessment team in better understanding the problem and can therefore enable them to make better judgments (e.g., about the severity of the hazard); he therefore suggests that a good hazard description needs to contain the three hazard components.
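To make the three-component description concrete, the sketch below records hazard No. 1 from Table 1 in that form. The Python data structure and its field names are illustrative assumptions; only the decomposition into HE, IM, and T/T comes from the text.

```python
# Illustrative sketch: recording a hazard in terms of its three components
# (hazardous element, initiating mechanism, target/threat), as suggested by
# the hazard triangle. The data structure itself is an assumption made for
# illustration; only the decomposition comes from the text.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Hazard:
    description: str
    hazardous_element: str      # HE: the resource with the potential to create harm
    initiating_mechanism: str   # IM: the event or action that actuates the HE
    target_threat: str          # T/T: the person or equipment affected
    risk_reduction: List[str] = field(default_factory=list)

# Hazard No. 1 from Table 1, expressed with the three components.
hazard_1 = Hazard(
    description="Operator accidentally enters the robot workspace and "
                "collides with the robot moving at high speed",
    hazardous_element="Fast-moving robot",
    initiating_mechanism="Operator is unaware of the system state",
    target_threat="Operators",
    risk_reduction=["Light curtain monitoring the robot workspace",
                    "Lamp signalling the system state"],
)
print(hazard_1.hazardous_element)  # -> "Fast-moving robot"
```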

3.4. Task-based risk assessment and risk reduction

Risk assessment is a general methodology whose scope is to analyze and evaluate the risks associated with a complex system. Various industries have specific methodologies with the same objective. Etherton has summarized a critical review of various risk assessment methodologies for machine safety in [ 41 ]. According to ISO 12100, risk assessment (referred to as MSRA, machine safety risk assessment, in [ 41 ]) is an iterative process that involves two sequential steps: (1) risk analysis and (2) risk evaluation. ISO 12100 suggests that if risks are deemed serious, measures should be taken to either eliminate or mitigate the effects of the risks through risk reduction, as depicted in Figure 4 .


An overview of the task-based risk assessment methodology.

3.4.1. Risk analysis and risk evaluation

Within the context of machine safety, risk analysis begins with identifying the limits of the machinery, where the limits in terms of space, use, and time are identified and specified. Within this boundary, activities focused on identifying hazards are undertaken. The preferred context for identifying hazards in robotic systems is task-based, where the tasks that need to be undertaken during the various phases of operation are first specified. The risk assessors then specify the hazards associated with each task. Hazard identification is a critical step, and ISO 10218-1 [ 5 ] and ISO 10218-2 [ 6 ] tabulate significant hazards associated with robotic systems. However, they do not explicitly state the hazards associated with collaborative operations.

Risk evaluation is based on systematic metrics in which the severity of injury, the exposure to the hazard, and the possibility of avoiding the hazard are used to evaluate the hazard (see page 9, RIA TR R15.306-2014 [ 25 ]). The evaluation results in a risk level of negligible, low, medium, high, or very high and determines the risk reduction measures to be employed. To support the activities associated with risk assessment, ISO/TS 15066 [ 29 ] details the information required to conduct a risk assessment specifically for collaborative applications.
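As an illustration of such a metric-based evaluation, the sketch below combines severity, exposure, and avoidance scores into one of the qualitative risk levels named above. The 1–3 scoring scale and the decision rule are invented for illustration and do not reproduce the actual tables of RIA TR R15.306-2014 [ 25 ].

```python
# Illustrative sketch of a metric-based risk evaluation combining severity,
# exposure, and avoidance into a qualitative risk level. The scales (1-3)
# and the decision rule are assumptions for illustration only; they do not
# reproduce the tables of RIA TR R15.306-2014.

RISK_LEVELS = ["negligible", "low", "medium", "high", "very high"]

def evaluate_risk(severity: int, exposure: int, avoidance: int) -> str:
    """Map three 1-3 scores (higher = worse) to a qualitative risk level."""
    for score in (severity, exposure, avoidance):
        if score not in (1, 2, 3):
            raise ValueError("scores must be 1, 2, or 3")
    total = severity + exposure + avoidance          # ranges from 3 to 9
    index = min((total - 3) * len(RISK_LEVELS) // 7, len(RISK_LEVELS) - 1)
    return RISK_LEVELS[index]

# Example: serious injury possible (3), frequent exposure (3),
# avoidance unlikely (2) -> a "high" risk level that demands risk reduction.
print(evaluate_risk(severity=3, exposure=3, avoidance=2))
```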

3.4.2. Risk reduction

When risks are deemed serious, the methodology demands measures to eliminate and/or mitigate the risks. The designers have a hierarchical methodology that can be employed to varying degrees depending on the risks that have to be managed. The three hierarchical methods allow the designers to optimize the design; they can choose one or a combination of the methods to sufficiently eliminate or mitigate the risks. They are: (1) inherently safe design measures; (2) safeguarding and/or complementary protective measures; and (3) information for use.

4. Result: demonstrator for a safe hand-guided collaborative operation

In this section, the development and functioning of a safe assembly station will be detailed, where a large industrial robot is used in a hand-guided collaborative operation. In order to understand the potential benefits of hand-guided industrial robots, an automotive assembly station is presented as a case study in Section 4.1. With the aim of improving the ergonomics of the assembly station and increasing productivity, the assembly tasks are conceptualized as robot, operator, and collaborative tasks, where the collaborative task is the hand-guided operation described in Section 4.2. The results of the iterative risk assessment and risk reduction process (see Section 3.4) are detailed in Section 4.3. The final layout and the task sequence are detailed in Section 4.4, and Table 1 documents the hazards that were identified during risk assessment and used to improve the safety features of the assembly cell.

4.1. Case study: manual assembly of a flywheel housing cover

An operator picks up the flywheel housing cover (FWC) with the aid of a lifting device from position P1. The covers are placed on a material rack that can contain up to three part variants.

The operator moves from position P1 to P2 by pushing the FWC and installs it on the machine (integrated machinery), where secondary operations will be performed.

After the secondary operation, the operator pushes the FWC to the engine housing (position P3). Here, the operator needs to align the flywheel housing cover with the engine block with the aid of guiding pins. After the two parts are aligned, the operator pushes the flywheel housing cover forward until the two parts are in contact. The operator must exert force to mate these two surfaces.

The operators then begin to fasten the parts with several bolts with the help of two pneumatically powered devices. In order to keep the takt time low, these tasks are done in parallel and require the participation of more than one operator.


(A) Shows the manual workstation where several operators work together to assemble flywheel housing covers (FWC) on the engine block. (B) Shows the robot placing the FWC on the integrated machinery. (C) Shows the robot being hand-guided by an operator thereby reducing the ergonomic effort to position the flywheel housing cover on the engine block.

4.2. Task allocation and conceptual design of the hand-guiding tool

Figure 5(B) and (C) show ergonomic simulations reported by Ore et al. [ 15 ], in which the operator is aided by an industrial robot to complete the task. The first two tasks can be automated by the robot, i.e., picking the FWC from position P1 and moving it to the integrated machine (position P2, Figure 5(B) ). Then, the robot moves the FWC to the hand-over position, where it comes to a stop and signals to the operator that the collaborative mode is activated. This allows the operator to hand-guide the robot by grasping the FWC and directing the motion towards the engine block.

Once the motion of the robot is under human control, the operator can assemble the FWC onto the engine block and proceed to secure it with bolts. After the bolts have been fastened, the operator moves the robot back to the hand-over position and reactivates the automatic mode, which starts the next cycle.

4.3. Safe hand-guiding in the collaborative workspace

The risk assessment identified several hazardous situations that can affect safe functioning during the collaborative mode, that is, when the operator enters the workstation and hand-guides the robot to assemble the FWC; these are tabulated in Table 1 .

The robot needs to be programmed to move at a reduced speed so that it can stop in time, in accordance with the speed and separation monitoring mode of collaborative operation.

To implement speed and separation monitoring, a safety-rated vision system might be a possible solution. However, this may not be a viable solution on the current factory floor.


(A) and (B) are two versions of the end-effector that were prototyped to verify and validate the design.

A change in design was introduced that allows the operator to visually align the pins on the engine block with the mating holes on the FWC.

A change in design was also made to improve reliability and to avoid tampering through the use of standardized components, and to ensure that the operator feels safer during hand-guiding by keeping the robot arm away from the operator.


The layout of the physical demonstrator installed in a laboratory environment.

| No. | Hazard description | Hazardous element (HE) | Initiating mechanism (IM) | Target/threat (T/T) | Risk reduction measure |
| --- | --- | --- | --- | --- | --- |
| 1 | The operator can accidentally enter the robot workspace and collide with the robot moving at high speed | Fast-moving robot | Operator is unaware of the system state | Operators | 1. A light curtain to monitor the robot workspace. 2. A lamp to signal the system state |
| 2 | In collaborative mode, sensor-guided motion is active. Robot motion can be triggered unintentionally, resulting in unpredictable motion | Crushing | Operator accidentally activates the sensor | Operator(s) and/or equipment | An enabling device, when actuated, starts sensor-guided motion. An ergonomically designed enabling device can act as a hand-guiding tool |
| 3 | The operator places their hands between the FWC and the engine, thereby crushing their hands | Crushing | Operator distracted by the assembly task | Operator | An enabling device can ensure that the operator's hands are at a predefined location |
| 4 | While aligning the pins with the holes, the operator can break the pins by moving vertically or horizontally | Imprecise hand-guided motion | Operator fails to keep a steady motion | Operators | 1. Vertical hand-guided motion needs to be eliminated. 2. Operator training |
| 5 | The robot collides with an operator while being hand-guided by another operator | Collision | Designated operator is not aware of others in the vicinity | Operators | The designated operator has a clear view of the station |
| 6 | An operator accidentally engages the mode-change button although the collaborative task is incomplete | Error in judgment of the operators | Engaging the mode-change button | Operator/equipment | A button on the hand-guiding tool that the operator engages before exiting the workspace |

Table 1.

The table describes the hazards that were identified during the risk assessment process.

| Design feature | Design A | Design B | Design evaluation |
| --- | --- | --- | --- |
| 1. Orientation of the end-effector | End-effector is parallel to the robot wrist | End-effector is perpendicular to the robot wrist | In Design A, the last two links of the robot are close to the operator, which might make operators feel unsafe. Design B might allow for an overall safer design due to the use of standardized components |
| 2. Position of the flywheel housing cover (FWC) | The FWC is positioned to the left of the operator | The FWC is positioned in front of the operator | Design A requires more effort from the operator to align the locating pins (on the engine block) and the mating holes (on the FWC); the operator loses sight of the pins when the two parts are close to each other. In Design B, it is possible to align the two parts by visually aligning the outer edges |
| 3. Location of emergency stop | Good location and easy to actuate | Good location and easy to actuate | In Design A, it was evaluated that the E-stop could be accidentally actuated, which might lead to unproductive stops |
| 4. Location of visual interfaces | Good location and visibility | No visual interfaces | Evaluation of Design A resulted in the decision that interfaces need to be visible to all working within the vicinity |
| 5. Location of physical interfaces | Good location with easy reach | Minimal physical interfaces | Evaluation of Design A resulted in the decision that interfaces are optimally placed outside the fenced area |
| 6. Overall ergonomic design | The handles are angled and are more comfortable | The distance between the handles is short | Designs A and B have good overall designs. Design B uses standardized components. Design A employs softer materials and interfaces that are easily visible |

Table 2.

Feature comparison of two versions of the end-effector shown in Figure 6(A) and (B) .

4.4. Demonstrator for a safe hand-guided collaborative assembly workstation

Figure 7 shows a picture of the demonstrator developed in a laboratory environment. Here, a KUKA KR-210 industrial robot is part of the robotic system where the safeguarding solutions include the use of physical fences as well as sensor-based solutions.

The robot tasks are preprogrammed tasks undertaken in automatic mode. When the robot tasks are completed, the robot is programmed to stop at the hand-over position.

The collaborative task begins when the operator enters the monitored space and takes control of the robot using the hand-guiding device. The collaborative mode is complete when the operator returns the robot to the hand-over position and restarts the automatic mode.

The operator task is the fastening of the bolts required to secure the FWC to the engine block. The operators need to fasten several bolts and therefore use a pneumatically powered tool (not shown here) to help them with this task.


The figure describes the task sequence of the collaborative assembly station where an industrial robot is used as an intelligent and flexible lifting tool. The tasks are decomposed into three — Operator task (OT), Collaborative task (CT) and Robot task (RT) — which are detailed in Table 3 .

| Tasks | Task description |
| --- | --- |
| 1. Robot task | The robot tasks are to pick up the flywheel housing cover, place the part on the fixture and, when the secondary operations are completed, pick up the part and wait at the hand-over position. During this mode, the warning lamp is red, signaling automatic mode. The hand-over position is located inside the enclosed area and is monitored by laser curtains. The robot will stop if an operator accidentally enters this workspace and can be restarted with the auto-continue button |
| 2. Operator task | Enter collaborative space: when the warning lamp turns green, the laser curtains are deactivated and the operator enters the collaborative workspace |
| 3. Collaborative task | Engage enabling switch: the operator begins hand-guiding by engaging both enabling switches simultaneously. This activates the sensor-guided motion, and the operator can move the robot by applying force on the enabling device. If the operator releases the enabling switch, the motion is deactivated (see point 2). To reactivate motion, the operator engages both enabling switches |
| 4. Collaborative task | Hand-guide the robot: the operator moves the FWC from the hand-over position to the assembly point, then removes the clamp and returns the robot to the hand-over position |
| 5. Collaborative task | Engage automatic mode: before leaving the assembly station, the operator needs to engage the three-button switch. This deliberate action signals to the robot that the collaborative task is complete |
| 6. Robot task | The operator exits and engages the mode-change button. Then the following sequence of events is carried out: (1) the laser curtains are activated, (2) the warning lamp turns from green to red, and (3) the robot starts the next cycle |

Table 3.

The table articulates the sequence of tasks that were formulated during the risk assessment process.

4.4.1. Safeguarding

With the understanding that operators are any personnel within the vicinity of hazardous machinery [ 7 ], physical fences can be used to ensure that they do not accidentally enter a hazardous zone. The design requirement that the engine block be outside the enclosed zone meant that the robot has to move out of the fenced area during collaborative mode (see Figure 8 ). Therefore, the hand-over position is located inside the enclosure and the assembly point is located outside of it; both points are part of the collaborative workspace. The opening in the fences is monitored during automatic mode using laser curtains.

4.4.2. Interfaces

During risk evaluation, the decision to have several interfaces was justified. A single warning LED lamp (see Figure 8 ) conveys when the robot has finished the preprogrammed task and is waiting to be hand-guided. Additionally, the two physical buttons outside the enclosure have separate functions. The auto-continue button allows the operator to let the robot continue in automatic mode if the laser curtains were accidentally triggered by an operator; this button is located where it is not easily reached. The second button is meant to start the next assembly cycle (see Table 1 ). Table 1 (Nos. 2 and 3) motivates the use of enabling devices to trigger the sensor-guided motion (see Figure 6(B) ). The two enabling devices provide the following functions: (1) they act as a hand-guiding tool that the operator can use to precisely maneuver the robot; (2) by specifying that the switches on the enabling device must be engaged for hand-guided motion, the operator's hands are kept at a prespecified and safe location; and (3) by engaging the switch, the operator deliberately changes the mode of the robot to collaborative mode, which ensures that unintended motion of the robot is avoided.
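A minimal sketch of the interlock logic described above is given below: sensor-guided motion is permitted only while both enabling switches are engaged, and the change back to automatic mode requires the deliberate three-button confirmation. The class name, state names, and simplifications are assumptions made for illustration; the real behavior is implemented in safety-rated hardware and software.

```python
# Minimal sketch of the mode/interlock logic described in Section 4.4.2.
# Names and methods are illustrative assumptions; a real system would
# implement this logic in safety-rated hardware and software.

class CollaborativeCell:
    def __init__(self) -> None:
        self.mode = "automatic"          # "automatic" or "collaborative"
        self.enabling_engaged = False    # both enabling switches pressed
        self.exit_confirmed = False      # three-button switch engaged

    def robot_reaches_handover(self) -> None:
        """Robot stops at the hand-over position; collaborative mode begins."""
        self.mode = "collaborative"
        self.exit_confirmed = False

    def set_enabling(self, both_switches_pressed: bool) -> None:
        self.enabling_engaged = both_switches_pressed

    def motion_allowed(self) -> bool:
        """Sensor-guided (hand-guided) motion only in collaborative mode
        and only while both enabling switches are engaged."""
        return self.mode == "collaborative" and self.enabling_engaged

    def confirm_exit(self) -> None:
        """Operator engages the three-button switch before leaving."""
        self.exit_confirmed = True

    def request_automatic_mode(self) -> bool:
        """Mode change succeeds only after the deliberate exit confirmation."""
        if self.mode == "collaborative" and self.exit_confirmed:
            self.mode = "automatic"
            return True
        return False

cell = CollaborativeCell()
cell.robot_reaches_handover()
cell.set_enabling(True)
print(cell.motion_allowed())          # True: hand-guiding is active
cell.set_enabling(False)
print(cell.motion_allowed())          # False: releasing the switches stops motion
print(cell.request_automatic_mode())  # False: exit not yet confirmed
cell.confirm_exit()
print(cell.request_automatic_mode())  # True: the next cycle can start
```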

5. Discussion

In this section, the discussion will be focused on the application of the risk assessment methodology and the hazards that were identified during this process.

5.1. Task-based risk assessment methodology

A risk assessment (RA) is done on a system that exists in a form that can function as a context within which hazards can be documented. In the case study, a force/torque sensor was used to hand-guide the robot, and this technique was chosen at the conceptual stage. RA based on this technique led to the decision to introduce enabling devices (No. 2 in Table 1 ) to ensure that, while the operator is hand-guiding the robot, the hands are at a predetermined safe location and engaged. Another industrially viable solution is the use of joysticks to hand-guide the robot, but this option was not explored further during discussion as it might be less intuitive than force/torque-based control. Regardless, it is implicit that the choice of technique poses its own hazardous situations, and the risk assessors need a good understanding of the system boundary.

Additionally, during risk assessment, the failure of the various components was not considered explicitly. For example, what if the laser curtains failed to function as intended? The explanation lies in the choice of components. As stated in Section 3.2.2, for a robotic system to be considered reliable, the components must have a performance level PL = d, which implies a very low probability of failure. Most safety-equipment manufacturers publish their MTTF values along with their performance levels and the intended use.

5.2. Hazards

The critical step in conducting a risk assessment (RA) is hazard identification. In Section 3.3, a hazard was decomposed into three components: (1) hazardous element (HE), (2) initiating mechanism (IM), and (3) target/threat (T/T). The three sides of the hazard triangle (Section 3.3) can be thought of as having lengths proportional to the degree to which these components can trigger the hazard and cause an accident. That is, if the length of the IM side is much larger than the other two, then the most influential factor in causing an accident is the IM. The discussion on risk assessment (Section 3.4) stresses eliminating or mitigating hazards, which implies that the goal of risk assessment can be understood as reducing or removing one or more of the sides of the hazard triangle. Therefore, documenting hazards in terms of their components might allow for simplified and straightforward downstream RA activities.

The hazards presented in Table 1 can be summarized as follows: (1) the main hazardous element (HE) is the slow or fast motion of the robot; (2) the initiating mechanisms (IM) can be attributed to unintended actions by an operator; and (3) the targets/threats are primarily the operators, whose safety can be compromised, with the possibility of damage to machinery and disruption of production. Based on the presented case study, it can also be argued that, through the use of a systematic risk assessment process, hazards associated with collaborative motion can be identified and managed to an acceptable level of risk.

As noted by Eberts and Salvendy [ 44 ] and Parsons [ 45 ], human factors play a major role in robotic system safety. There are various parameters that can be used to better understand the effect of human behavior on the system, such as an overloaded and/or underloaded working environment, perception of safety, etc. The risk assessors need to be aware of human tendencies and take them into consideration while proposing safety solutions. Incidentally, in the fatal accident discussed in Section 3.3, perhaps the operator did not perceive the robot as a serious threat and referred to the robot as Robby [ 40 ].

In an automotive assembly plant, where the production volume is relatively high and the work requires collaborating with other operators, there is a higher probability that an operator will make errors. In Table 1 (No. 6), a three-button switch was specified to prevent unintentional mode changes of the robot. It is probable that an operator could accidentally engage the mode-change button (see Figure 7 ) while the robot is in collaborative mode, or when the hand-guiding operator did not intend the collaborative mode to be completed. In such a scenario, a robot operating in automatic mode was evaluated to present a high risk level, and therefore the decision was made to introduce a design change with an additional safety interface, the three-button switch, that is accessible only to the hand-guiding operator.

Informal interviews suggested that the system should be inherently safe for the operators and that the task sequence—robot, operator, and collaborative tasks—should not demand constant monitoring by the operators as it might lead to increased stress. That is, operators should feel safe and in control and that the tasks should demand minimum attention and time.

6. Conclusion and future work

The article presents the results of a risk assessment program whose objective was the development of an assembly workstation that involves the use of a large industrial robot in a hand-guided collaborative operation. The collaborative workstation has been realized as a laboratory demonstrator in which the robot functions as an intelligent lifting device. That is, the tasks that can be automated have been assigned to the robot, and these sequences of tasks are preprogrammed and run in automatic mode. During collaborative mode, operators are responsible for cognitively demanding tasks that require the skills and flexibility inherent to a human being. During this mode, the hand-guided robot carries the weight of the flywheel housing cover, thereby improving the ergonomics of the workstation.

In addition to the laboratory demonstrator, an analysis of the hazards pertinent to hand-guided collaborative operations has been presented. These hazards were identified during the risk assessment phase, where the hazardous element mainly stems from human error. The decisions taken during the risk reduction phase to eliminate or mitigate the risks associated with these hazards have also been presented.

The risk assessment was carried out through different phases, with physical demonstrators supporting each phase of the process. The demonstrator-based approach allowed the researchers to have a common understanding of the nature of the system and the associated hazards; that is, it acted as a platform for discussion. The laboratory workstation can act as a demonstration platform where operators and engineers can judge for themselves the advantages and disadvantages of collaborative operations. The demonstration activities can be beneficial to researchers as they can function as a feedback mechanism with respect to the decisions made during the risk assessment process.

Therefore, the next step is to invite operators and engineers to try out the hand-guided assembly workstation. The working hypothesis behind inviting operators and engineers is that personnel whose main responsibility in an assembly plant is to find the optimal balance between various production-related parameters (such as maintenance time, productivity, safety, working environment, etc.) might have deeper insight into the challenges of introducing large industrial robots on the assembly line.

Acknowledgments

The authors would like to thank Björn Backman of Swerea IVF, Fredrik Ore and Lars Oxelmark of Scania CV for their valuable contributions during the research and development phase of this work. This work has been primarily funded within the FFI program and the authors would like to graciously thank them for their support. In addition, we would like to thank ToMM 2 project members for their valuable input and suggestions.

  • 1. Marvel JA, Falco J, Marstio I. Characterizing task-based human-robot collaboration safety in manufacturing. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2015; 45 (2):260-275
  • 2. Tsarouchi P, Matthaiakis A-S, Makris S. On a human-robot collaboration in an assembly. International Journal of Computer Integrated Manufacturing. 2016; 30 (6):580-589
  • 3. Gopinath V, Johansen K. Risk assessment process for collaborative assembly—A job safety analysis approach. Procedia CIRP. 2016; 44 :199-203
  • 4. Caputo AC, Pelagagge PM, Salini P. AHP-based methodology for selecting safety devices of industrial machinery. Safety Science. 2013; 53 :202-218
  • 5. Swedish Standards Institute. SS-ISO 10218-1:2011—Robots and Robotic Devices—Safety Requirements for Industrial Robots. Part 1: Robot. Stockholm, Sweden: Swedish Standards Institute; 2011
  • 6. Swedish Standards Institute. SS-ISO 10218-2:2011—Robots and Robotic Devices—Safety Requirements for Industrial Robots. Part 2: Robot Systems and Integration. Stockholm, Sweden: Swedish Standards Institute; 2011
  • 7. Swedish Standards Institute (SIS). SS-ISO 12100:2010: Safety of Machinery - General principles of Design - Risk assessment and risk reduction. Stockholm, Sweden: Swedish Standards Institute (SIS); 2010. 96 p.
  • 8. Ericson CA II. Hazard Analysis Techniques for System Safety. Hoboken, New Jersey, USA: John Wiley & Sons; 2015
  • 9. Etherton J, Taubitz M, Raafat H, Russell J, Roudebush C. Machinery risk assessment for risk reduction. Human and Ecological Risk Assessment: An International Journal. 2001; 7 (7):1787-1799
  • 10. Yin RK. Case Study Research: Design and Methods. 5th ed. California, USA: Sage Publications Inc; 2014. 282 p
  • 11. Brogårdh T. Present and future robot control development – An industrial perspective. Annual Reviews in Control. 2007; 31 (1):69-79
  • 12. Krüger J, Lien TK, Verl A. Cooperation of human and machines in assembly lines. CIRP Annals - Manufacturing Technology. 2009; 58 (2):628-646
  • 13. Siciliano B, Khatib O, editors. Springer Handbook of Robotics. Secaucus, NJ, USA: Springer-Verlag New; 2007
  • 14. Krüger J, Bernhardt R, Surdilovic D. Intelligent assist systems for flexible. CIRP Annals - Manufacturing Technology. 2006; 55 (1):29-32
  • 15. Ore F, Hanson L, Delfs N, Wiktorsson M. Human industrial robot collaboration—Development and application of simulation software. International Journal of Human Factors Modelling and Simulation. 2015; 5 :164-185
  • 16. Colgate JE, Peshkin M, Wannasuphoprasit W. Cobots: Robots for collaboration with human operators. In: Proceedings of the ASME Dynamic Systems and Control Division; Atlanta, GA. 1996; 58 :433-440
  • 17. Universal Robots. Universal Robots [Internet]. Available from: https://www.universal-robots.com/ [Accessed: March 2017]
  • 18. KUKA AG. Available from: http://www.kuka.com/ [Accessed: March 2017]
  • 19. ABB AB. Available from: http://www.abb.com/ [Accessed: January 2017]
  • 20. ToMM2—Framtida-samarbete-mellan-manniska-och-robot/. Available from: https://www.vinnova.se/ [Accessed: June 2017]
  • 21. The International Organization for Standardization (ISO). Available from: https://www.iso.org/home.html [Accessed: June 2017]
  • 22. International Electrotechnical Commission (IEC). Available from: http://www.iec.ch/ [Accessed: June 2017]
  • 23. Macdonald D. Practical Machinery Safety. 1st ed. Jordan Hill, Oxford: Newnes; 2004. 304 p
  • 24. Leedy PD, Ormrod JE. Practical Research: Planning and Design. Upper Saddle River, New Jersey: Pearson; 2013
  • 25. Robotic Industries Association. RIA TR R15.406-2014: Safeguarding. 1st ed. Ann Arbor, Michigan, USA: Robotic Industries Association; 2014. 60 p
  • 26. Björnsson A. Automated Layup and Forming of Prepreg Laminates [dissertation]. Linköping, Sweden: Linköping University; 2017
  • 27. Jonsson M. On Manufacturing Technology as an Enabler of Flexibility: Affordable Reconfigurable Tooling and Force-Controlled Robotics [dissertation]. Linköping, Sweden: Linköping Studies in Science and Technology, Dissertations No. 1501; 2013
  • 28. Swedish Standards Institute. SS-ISO 8373:2012—Industrial Robot Terminology. Stockholm, Sweden: Swedish Standards Institute; 2012
  • 29. The International Organization for Standardization. ISO/TS 15066: Robots and robotic devices—Collaborative robots. Switzerland: The International Organization for Standardization; 2016
  • 30. Leveson NG. Engineering a Safer World: Systems Thinking Applied to Safety. Engineering Systems ed. USA: MIT Press; 2011
  • 31. The International Electrotechnical Commission. IEC TS 62046:2008 – Safety of machinery – Application of protective equipment to detect the presence of persons. Switzerland: The International Electrotechnical Commission; 2008
  • 32. Marvel JA, Norcross R. Implementing speed and separation monitoring in collaborative robot workcells. Robotics and Computer-Integrated Manufacturing. 2017; 44 :144-155
  • 33. SICK AG. Available from: http://www.sick.com [Accessed: December 2016]
  • 34. REER Automation. Available from: http://www.reer.it/ [Accessed: December 2016]
  • 35. Pilz International. Safety EYE. Available from: http://www.pilz.com/ [Accessed: May 2014]
  • 36. The International Organization for Standardization. ISO 13856-1:2013 – Safety of machinery – Pressure-sensitive protective devices – Part 1: General principles for design and testing of pressure-sensitive mats and pressure-sensitive floors. Switzerland: The International Organization for Standardization; 2013
  • 37. The International Organization for Standardization. ISO 13856-2:2013 – Safety of machinery– Pressure-sensitive protective devices – Part 2: General principles for design and testing of pressure-sensitive edges and pressure-sensitive bars. Switzerland: The International Organization for Standardization; 2013
  • 38. The International Organization for Standardization. ISO 13856-3:2013 – Safety of machinery – Pressure-sensitive protective devices – Part 3: General principles for design and testing of pressure-sensitive bumpers, plates, wires and similar devices. Switzerland: The International Organization for Standardization; 2013
  • 39. Dhillon BS. Robot reliability and Safety. New York: Springer-Verlag; 1991
  • 40. Sanderson LM, Collins JW, McGlothlin JD. Robot-related fatality involving a U.S. manufacturing plant employee: Case report and recommendations. Journal of Occupational Accidents. 1986; 8 :13-23
  • 41. Etherton JR. Industrial machine systems risk assessment: A critical review of concepts and methods. Risk Analysis. 2007; 27 (1):17-82
  • 42. Gopinath V, Johansen K, Gustafsson Å. Design Criteria for Conceptual End Effector for Physical Human Robot Production Cell. In: Swedish Production Symposium; Göteborg, Sweden; 2014
  • 43. Gopinath V, Ore F, Johansen K. Safe assembly cell layout through risk assessment—An application with hand guided industrial robot. Procedia CIRP. 2017; 63 :430-435
  • 44. Eberts R, Salvendy G. The contribution of cognitive engineering to the safe. Journal of Occupational Accidents. 1986; 8 :49-67
  • 45. McIlvaine Parsons H. Human factors in industrial robot safety. Journal of Occupational Accidents. 1986; 8 (1-2):25-47

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Geoelectric studies in earthquake hazard assessment: the case of the Kozlodui nuclear power plant, Bulgaria

Published: 02 September 2024

By S. Kovacikova, G. Boyadzhiev, and I. Logvinov

The study presents the results of geoelectric research for seismic risk assessment on the example of the Kozlodui nuclear power plant in Bulgaria. The image of the geoelectric structure in the study area was obtained using one-dimensional inverse electrical resistivity modeling of the full five-component magnetotelluric data and quasi-three-dimensional inverse conductivity modeling of the geomagnetic responses recorded during the summer 2021 field campaign. According to the presented results, the geoelectrically anomalous structure is divided into two levels. The near-surface anomalous structure in the immediate reach of human geotechnical activity corresponds to the electrically conductive sedimentary fill. The mid-crustal layer is coincident with the low seismic velocity zone at the brittle and ductile crust interface, revealed in previous studies. The presented results imply that the geological environment is not affected by large faults capable of transmitting seismic energy from tectonically active areas, however, in further studies, attention should be paid to the strike-slip fault systems adjacent to the study area.


1 Introduction

Due to the contrasting electrical properties of the geological environment, geoelectric methods can be used to address a variety of engineering geological tasks related to the natural hazard assessment in karst (Satitpittakul et al. 2013 ) or landslide studies (Lapenna et al. 2003 ), in quarry operation (Magnusson et al. 2010 ), in hydrogeology (Parks et al. 2011 ) or in the construction of critical infrastructure facilities (Di et al. 2020 ).

The construction and operation of civil nuclear installations are governed by strict safety regulations issued by the International Atomic Energy Agency (IAEA). Their neglect or underestimation can lead to tragic consequences. Designing and installing nuclear facilities in tectonically active areas always pose a danger (Nadirov and Rzayev 2017 ; Ahmed et al. 2018 ), but unexpected intraplate seismicity can also be documented in stable ancient terranes (e.g. Chattopadhyay et al. 2020 ) and even apparently historically inactive faults can be potentially risky (Faure Walker 2021 ). Although these strategic facilities are currently equipped with seismic early warning systems (Wieland et al. 2000 ) and spatial displacements are monitored by geodetic networks implementing Global Navigation Satellite (GPS) System data (e.g. Savchyn and Vaskovets 2018 or Manevich et al. 2021 ), integration of other geological and geophysical data is desirable to ensure maximum safety. Seismic events can be accompanied or preceded by a range of phenomena such as isotopes emissions (Sano et al. 2016 ; Zafrir et al. 2020 ) or meteorological phenomena (Morozova 2012 ; Guangmeng and Jie 2013 ). A correlation has been observed between earthquakes and tides (Scholz et al. 2019 ). Prior to an earthquake, electromagnetic (EM) emissions may be recorded around the future epicenter as a result of tectonic forces (Mavrodiev et al. 2015 ; Petraki et al. 2015 ). When assessing seismic risk, however, recording of the natural EM field variations can be used not only for above-mentioned immediate monitoring of earthquake precursors. Due to the enhanced electrical conductivity of mineralized fluids migrating in faults and fracture systems, magnetotelluric (MT) and magnetovariational (MV exploiting only the magnetic EM field components) methods with a depth range covering levels from the earth’s surface through the crust to the mantle are established procedures, for example in geothermal exploration (e.g. Gasperikova et al. 2015 ) or studies of magmatic systems (Wynn et al. 2016 ). Likewise, the contrasting electrical properties of fluids can also be used in seismic risk studies to identify fluid pathways in faults and delineate potential hazard zones. Numerous studies address the topic of the association of low resistivity zones and seismicity with active strike-slip zones (Bourlange et al. 2012 ; Hoskin et al. 2015 ; Adam et al. 2016 ), and different scenarios are presented depending on the zone geometry, mechanical conditions of rocks, degree of deformation, porosity and hydrogeology, with both highly permeable and mechanically locked segments (e.g. Unsworth and Bedrosian 2004 ; Kaya et al. 2009 ; Ritter et al. 2014 ). Water and fluids of both surface meteoric and deep origin, penetrating fault systems, play substantial role in these systems. Shear deformation promotes the formation of interconnected networks for fluid migration, and high-pressure fluids promote fault creep. Creeping segments tend to be subject to frequent microseismicity, while rare strong earthquakes may occur at the transition between creeping and locked zones. Earthquake foci typically trace the boundary between high- and low-resistivity features, corresponding to the stress accumulation and brittle deformation zones (e.g. Convertito et al. 2020 ). Within the conductors themselves the stress is redistributed to meet the equivalent rheology and the fluid hydrodynamics. 
The measured MT data can thus help identify potentially risky areas and delineate zones of increased seismicity, which is crucial when designing large-scale engineering facilities.

When studying seismic hazard, it is important not only to map surface weakened geological structures with which human geotechnical activities directly interact, but also to track the deep course of faults and trace the deep origin of phenomena observed on the earth’s surface (Suzuki et al. 2000 ). The use of the MT method in solving strategic projects, such as site selection for nuclear power plants, was proposed by Adam and Vero ( 1990 ). Thus, the MT method can expand knowledge about the tectonic structure in the vicinity of objects of interest and provide additional information that can be used in evaluating the measures necessary to increase the safety of strategic facilities. As an example of such a procedure, in this paper we present the results of a case study of the deep geoelectric structure in the area of the Kozlodui nuclear power plant in Bulgaria, initiated by the National Institute of Geophysics, Geodesy and Geography of the Bulgarian Academy of Sciences (NIGGG-BAS) and the National Science Fund to update the National Emergency Prevention Action Plan.

2 Geologic setting and geophysical data

The position of the Balkan region is controlled by the dynamics of the Mediterranean seismic belt, and although the Moesian Plate, as a promontory of the East European Platform nestled between the Southern and Serbian Carpathians and the Northern Balkans, seems to be a relatively rigid block, it nevertheless participates in the relative movements of the Eurasian, African and Arabian tectonic plates (Stanciu and Ioane 2017 ). As a result, Moesia and its Danube part are cut by the faults of the Carpathian-Balkan arc trend and by transverse faults into a system of basement blocks, and the region shows complex deformation behavior with neotectonic activity along a number of fault structures.

The Kozlodui nuclear power plant (KNPP) is located in the southwestern, seismically least active part of the Moesian Plate (Fig. 1a). However, the relative proximity of continuously tectonically active fault systems may carry the risk of noticeable earth movements. About 300 km to the northeast (see inset in Fig. 1a) lies the persistently and highly geodynamically active Vrancea area, with four to five medium-depth events of magnitude M ≥ 6.5 per century and a largest recorded shock of 7.9 (e.g. Petrescu et al. 2021 ). From the west, the Moesian Plate is bounded by a continuously tectonically active fault system (with M reaching 4), including the Timok and Cerna faults (TF-CF, inset in Fig. 1a), linking the Carpathians with the Balkanides (Bala et al. 2015 ; Vangelov et al. 2016 ; Mladenovic et al. 2019 ; Krstekanic et al. 2021 ; Oros et al. 2021 ).

Figure 1. Geological setting. a The major tectonic zones of Bulgaria with the position of the Moesian Plate and the Vrancea seismicity zone in the inset (from Cavazza et al. 2004): TF-CF—Timok-Cerna fault zone, KNPP—Kozlodui nuclear power plant, PAG—Panagjurishte geomagnetic observatory, green rectangle—study area; b Simplified tectonic map of northwestern Bulgaria (modified after Cavazza et al. 2004; Kounov et al. 2017) with red crosses marking the experimental site network—MV (small) and full MT (big); faults (cross-hatched belts): Blk—Sub-Balkan, NFB—Northern Forebalkan, Vlm—Vinishte-Lom (Gostilski), Tsb—Tsibritsa, Ogs—Ogosta, Isk—Iskar, SMs—South Moesian, Dnb—Danube, Mtr—Motru, Jiu—Jiu fault (Dachev and Kornea 1980; Georgiev and Shanov 1991), thick dashed line—southern border of the Moesian Plate, magenta line—electrified railway (Bulgarian State Railways: https://www.bdz.bg ), M-BS—Makresh-Black Sea seismic profile (Dachev 1988)

Several structural complexes can be identified within the Moesian Plate. Precambrian metamorphic rocks and Upper Paleozoic (Carboniferous to Permian) formations are covered by Triassic to Cenozoic sediments. On the Bulgarian territory, two large tectonic structures are distinguished on the Moesian Plate, the Lom depression, where the KNPP is located, and the North-Bulgarian uplift. East and north-east of the Lom depression, mainly on the Romanian territory, the Alexandria depression is delimited (Chemberski and Botoucharov 2013 ). Based on geophysical data (Dachev and Kornea 1980 ; Dachev et al. 1994 ), the total thickness of sediments of the Lom and Alexandria depressions reaches about 9 km (Fig. 2a). The thickness of the Cenozoic sediments of the Lom depression reaches 1000 m (Fig. 2b) (Zagorchev 2009 ). The Lom depression basement is formed by the lowest tectonic blocks, bounded by the Danube fault in the north, the Northern Forebalkan fault in the south, and the Vinishte-Lom and Iskar faults in the west and east respectively, and separated from each other by the Ogosta and Tsibritsa faults (Fig. 1b). The Vinishte-Lom fault is a strike-slip feature of the larger Oltenia tectonic zone cutting across a series of tectonic structures in the central part of the Balkan peninsula (Bala et al. 2015 ).

Figure 2. a Depth contours of the consolidated basement in km (dashed lines); b Contours (in m) of the top of the Upper Cretaceous complex (Zagorchev 2009); c Schematic section of sedimentary rock resistivity in Northern Bulgaria (solid line) and the Balkan region (dotted line) (Dobrev et al. 1975); d Schematic S_sed map of sedimentary rocks (dashed contours in Siemens) of the KNPP area (red star in all subfigures) according to Abramova et al. (1994) (private comm.); green line—Lom depression boundary

2.1 Geoelectric characteristics

According to the results of laboratory and geoelectric field experiments (Hermance 1995 ; Haak and Hutton 1986 ; Nover 2005 and others), the electrical resistivity (ρ) of crystalline rocks of the continental crust significantly exceeds 1000 Ω∙m. Below is information on the lithological composition of sedimentary rocks and their resistivity according to Dobrev et al. ( 1975 ).

The geological section of Northern Bulgaria is characterized by a wide distribution of two types of red-bed strata: Permian–Triassic (compacted clay rocks, conglomerates, breccia conglomerates, sandstones, with ρ varying from 16 to 45 Ω·m) and Triassic–Jurassic. Middle Triassic oil- and gas-bearing limestones and dolomites up to 650 m thick are characterized by ρ varying between 100 and 400 Ω·m. A thick Malm–Valanginian (late Jurassic–early Cretaceous) complex with ρ varying from 130 to 3600 Ω·m appears in the Moesian section. Cretaceous and Pliocene carbonate facies of the Lom depression are characterized by ρ of 40–250 Ω·m. Regional features of the ρ distribution of sedimentary rocks according to logging data are shown in Fig. 2c. Similar values are also given in other publications (Nikolova 1980 ; Dachev 1988 ; Chemberski and Botoucharov 2013 ).

The most characteristic geoelectric parameter of the sedimentary cover is the integrated longitudinal conductivity (conductance) S_sed = D/ρ, where D is the layer thickness. Based on geological-geophysical and well-logging data, L.M. Abramova (2013, personal communication), the initiator of previous deep EM studies in Bulgaria (Abramova et al. 1994 ), compiled a schematic S_sed map of the Balkanides and the Moesian Plate in Bulgaria. This map has been updated with new information on the thickness, composition, and geoelectric parameters of the Cenozoic sediments of the Lom depression, obtained from the interpretation of MTS (MT sounding) curves (Logvinov et al. 2021 ). Using a similar method, a schematic S_sed map was constructed for the Romanian territory (Demetrescu 2013 ). According to rough estimates (based on data on the thickness of sediments and their ρ), the S_sed of surface sediments overlying the crystalline basement rocks on the territory of the Balkanides does not exceed 50 Siemens. Figure 2d shows the S_sed map for the south of the Moesian Plate and the adjacent part of the Balkans.
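As a purely illustrative calculation (the thickness and resistivity below are round values chosen for the example, not measured data), a 1000 m thick sedimentary section with an average resistivity of 20 Ω·m gives

$$ S_{\mathrm{sed}} = \frac{D}{\rho} = \frac{1000\ \mathrm{m}}{20\ \Omega\cdot\mathrm{m}} = 50\ \mathrm{S}, $$

which is of the same order as the 50 Siemens upper bound quoted above for the surface sediments of the Balkanides.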

2.2 Seismic results and seismicity

The study area is intersected by the quasi-latitudinal regional Makresh-Black Sea seismic profile (Figs. 1, 3). From west to east along the profile, the thickness of sediments of all ages decreases. P-wave seismic velocities for terrigenous and terrigenous-carbonate sedimentary formations of the Moesian Plate along the M-BS profile (Fig. 3a) vary from 2 to 4.5 km/s (Dachev 1988 ). Lower velocities are typical for Cenozoic sediments.

Figure 3. a Structure of the earth's crust along the Makresh-Black Sea (M-BS) seismic profile (Dachev 1988; Dachev et al. 1994): 1—sedimentary layer and seismic boundaries within it, 2—the Moesian Plate basement (numbers—seismic velocities, km/s), 3—Moho boundary, 4—supposed crustal zones of reduced seismic velocity. b, c Seismicity of the KNPP region for the period 1973–2020 (see sources in the text); fault zones Tsb, Dnb, SMs, Isk, Mtr, Jiu, NFB—see Fig. 1b: b earthquake hypocenters by depth (in km); c earthquakes by magnitude (thick circles with a numeral—strongest events with M > 3); crosses—unknown magnitude. Differences in the distribution of earthquake foci in subfigures (b) and (c) are due to the absence of depths/magnitudes for some events in the catalogues, mainly before 2007

According to the seismic logging results, the seismic velocity of Paleogene-Neogene terrigenous deposits does not exceed 3.2 km/s (Volvovsky and Starostenko 1996 ). Both in the upper and lower consolidated earth's crust, low-velocity layers are distinguished along the profile (Fig.  3 a). The depth to the upper boundary of these layers is about 15 km and 27–30 km, their thickness is about 5 km and the seismic wave velocity decrease with respect to the surrounding environment is 0.5–0.7 km/s (Dachev et al. 1994 ).

Earthquakes are one of the most disastrous natural phenomena, and their impact must be taken into account in the operation of nuclear facilities. Over the past 50 years, more than 15,000 earthquakes have been registered in Bulgaria, including the area belonging to the Moesian Plate, and some 750 events have been documented in the vicinity of the KNPP ( http://crustal.usgs.gov/geophysics/htm ; http://www.isc.ac.uk/iscbulletin/search/catalogue ; http://www.emsc-csem.org/Earthquake ; http://service.iris.edu/irisws/fedcatalog/1/ ; https://earthquake.usgs.gov/earthquakes/search/ ; https://doi.org/10.7914/SN/BS ). The strongest events in the area around the KNPP occurred in 1987 in the northwestern tip of the Lom depression, at the tectonic knot of the Tsibritsa, Motru and Danube faults, at a depth of 10 km with a magnitude of 3.3; another, with a magnitude of 3.7 and apparently related to the contact of the Danube fault with a branch of the Jiu fault, occurred in 1994 at a depth of 10 km; and another, with a magnitude of 4.4, took place in 2014 northeast of the study area in Romania, at a depth of 12.1 km, near the junction of the Iskar, Danube and Jiu faults. The distribution of the events closest to the KNPP by depth and magnitude is shown in Fig. 3b and c respectively. For some events (specifically before 2007), depths or magnitudes are not specified in the catalogs cited above (hence the differences between Fig. 3b and c). It can be seen that a significant number of events occur south of the KNPP within a radius of 50 km (mainly already in the Pre-Balkans). The earthquakes appear to be linked to the intersection of the Ogosta, Tsibritsa and Iskar faults with the Northern Forebalkan fault (Fig. 3b, c). The latter represents an element of the Balkan fold-thrust belt, a complex system thrust onto Moesia from the south and dissected by transverse and oblique faults along which lateral displacements occur. Along the Ogosta fault, with its hanging NW flank, there is a step-like dip towards the west. According to Georgiev and Shanov ( 1991 ), the block between the Tsibritsa and Ogosta faults is still subsiding, and the seismic activity of the Ogosta, Tsibritsa and Iskar faults is likely associated with the relative subsidence of the blocks between them. In recent years, several earthquakes with a magnitude exceeding 2 have been observed to the north and west of the KNPP. The Tsibritsa fault, with its hanging western flank, is considered a satellite of the Motru fault, which stretches north of the Danube in Romanian territory and is in turn genetically connected to the Timok–Cerna fault system linking the Carpathians with the Balkanides (see Geological setting). The Motru fault is also one of the sources of seismic activity in the study area. It is deep-rooted; in the northwest, on Romanian territory, it is noticeably seismically active, and both left-lateral translation and descending movements occur along it. Some events seem to be related to the contact of the Danube fault with the Jiu fault, another active fault running on Romanian territory from the Southern Carpathians in the NW–SE direction.

3 MT experimental data and inversion results

Geoelectric measurements were performed in the summer of 2021 using two GEOMAG-2 fluxgate magnetometers owned by the Institute of Geophysics of the National Academy of Sciences of Ukraine and the Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences, ensuring registration of variations of MT field components with high sensitivity threshold (Dobrodnyak et al. 2014 ). The studies belong to the category of regional experiments, the purpose of which is to identify possible conductivity anomalies in the KNPP region. MT field observations were carried out at 21 points (Fig.  1 b). The distance between observation points was 10–15 km. The density and selection of locations for the installation of observation points were limited by local infrastructure and agricultural conditions.

EM field records in the study area were affected by significant disturbances associated with the proximity of electrified railroads, pipelines, power lines and other installations. Typically, interference from these sources can have a significant impact at distances of up to 15–20 km. Figure 1b shows the position of the KNPP and the nearest electrified railway, the presence of which automatically limited the area of the experiment. Interference on the magnetic components of the MT field decreases in inverse proportion to the cube of the distance from the interference source. Taking the above into account, it was decided to register the magnetic components at the closest possible distance from the KNPP.

A detailed description of the processing of the recorded data and of the distortion and dimensionality analysis was presented in Logvinov et al. ( 2021 ). Data processing was performed using the Ladanivsky ( 2003 ) and Varentsov ( 2007 ) codes. The first phase of the geoelectric study was completed by estimating the parameters of the impedance (Z) and the vertical magnetic transfer functions (VMTF) within a single-site processing scheme. Conditioned registration of the EM field electrical components was performed at four sites (Btn, Frn, Brv, Brn, see Fig. 1b), and as a result, Z estimates (and the derived apparent resistivity and impedance phase) were obtained for periods from 20 to 6400–8100 s. Meanwhile, the VMTF parameters were estimated at all observation points in the form of real (C_u) and imaginary (C_v) induction vectors (Schmucker 1970 ), presented on maps in the form of induction arrows, for periods from 10–20 to 4900–10,800 s.
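For reference, the vertical magnetic transfer functions estimated here relate the vertical to the horizontal magnetic field variations; a standard frequency-domain form (the notation is an assumption of this summary, not taken from the cited processing codes) is

$$ B_z(\omega) = T_{zx}(\omega)\,B_x(\omega) + T_{zy}(\omega)\,B_y(\omega), $$

with the real and imaginary induction arrows constructed from the real and imaginary parts of the pair (T_zx, T_zy).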

3.1 1D inversion of MTS data

The nearby electrified railway and the layout of the measuring sites limited the data interpretation. Therefore, the first step was to estimate the geoelectric section parameters at the sites Btn, Frn, Brv and Brn by one-dimensional (1D) inversion of the interpreted MTS curves over the entire recorded period range. The results of the 1D interpretation using two different inversion codes were also presented in Logvinov et al. (2021). The D+ algorithm (Parker and Whaler 1981) approximates the geoelectric section by a finite number of layers of zero thickness and finite conductance separated by a non-conductive medium, while the OCCAM 1D inversion (Constable et al. 1987) results in a section with smoothly varying conductivity. The minimum and maximum MTS curves obtained using the Eggers (1982) method were taken as the experimental curves in the period range from a few seconds to 10^4 s. Before applying the inversion procedure, the MTS curves had to be normalized to eliminate galvanic effects on the MT field. Galvanic distortions arise as a result of the interaction of near-surface geoelectric heterogeneities and lead to a static shift of the MTS amplitude curves (Berdichevsky and Dmitriev 2008). The normalization consisted of restoring the position of the low-frequency asymptotes, which reflect the electrical conductivity of the lower levels of the tectonosphere. It is assumed that at depths exceeding 400 km, horizontal changes in electrical conductivity are small, and the MTS curves obtained in different regions should converge at periods exceeding 3 h. In practice, the normalization of MTS curves usually consists of shifting the low-frequency branches of the MTS amplitude curves (ρ curves) along the vertical axis so that they match the ρ curve corresponding to the regional geoelectric structure of the study region (provided the MTS phase curves agree with the reference curve). For the study area, data from the Panagjurishte geomagnetic observatory (PAG, 24.177°E, 42.515°N, Fig. 1a) for the years 1988–2015 were used as the reference curve (Srebrov et al. 2013; Ladanivskyy et al. 2019). For the 1D inversion, the recorded MTS curves were integrated with the reference curve at periods of 2·10^4–2·10^7 s (Fig. 4).
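
The static-shift normalization described above can be summarized, under the assumption that the correction is a single vertical shift of the log-amplitude curve estimated over the long-period overlap with the reference curve, by the following Python sketch; the arrays are placeholders and the reference curve merely stands in for (and is not) the PAG observatory response:

```python
# Minimal sketch of static-shift correction of an MTS amplitude curve:
# the observed log10(rho_a) curve is shifted vertically so that its
# long-period branch matches a regional reference curve (here a placeholder
# for the PAG observatory response). All arrays are hypothetical.
import numpy as np

def static_shift_correct(T, rho_obs, T_ref, rho_ref, T_min=2.0e4):
    """Return rho_obs multiplied by a single factor so that, for periods
    T >= T_min, its log10 level matches the reference curve on average.
    (The phase curve, which is unaffected by static shift, should already
    agree with the reference.)"""
    mask = T >= T_min
    # interpolate the reference curve onto the observed long periods (log-log)
    log_ref = np.interp(np.log10(T[mask]), np.log10(T_ref), np.log10(rho_ref))
    shift = np.median(log_ref - np.log10(rho_obs[mask]))
    return rho_obs * 10.0 ** shift

# hypothetical observed and reference curves
T = np.logspace(1, 5, 30)                # periods 10 .. 1e5 s
rho_obs = 3.0 * np.ones_like(T)          # statically shifted flat curve
T_ref = np.logspace(3, 7, 30)
rho_ref = 30.0 * np.ones_like(T_ref)     # regional reference level
print(static_shift_correct(T, rho_obs, T_ref, rho_ref)[:3])  # -> ~30.0
```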

figure 4

Minimum and maximum experimental (circles) and model MTS curves at the sites Frn, Brv, Btn and Brn for two azimuths, integrated at the period of 2·10^4 s with the reference data from the PAG observatory, using the D+, OCCAM (from Logvinov et al. 2021) and 1D anisotropic inversion codes (Pek and Santos 2006)

The 1D interpretation of the MT data already presented in Logvinov et al. (2021) was newly supplemented by a 1D anisotropic inversion using all components of the impedance tensor (Pek and Santos 2006). It should be noted that the technique does not aim to detect real physical anisotropy in the Earth; it is used purely to fit the equivalent of a 1D anisotropic layered medium to the MTS curves in the two directions at each site. To accommodate all impedance tensor components, the error floor in the anisotropic inversion was preset to 5%. Anomalous layers (conductors) with ρ much smaller than that of the layers above and below are identified in the inverse 1D models (Fig. 5a). The differences in the distribution of the geoelectric parameters calculated by the OCCAM and anisotropic inversions are mainly due to the fact that in the OCCAM method the experimental MTS curves were corrected for galvanic distortion. According to both inversion methods, the low resistivities of the sedimentary rocks appear at depths of less than 1 km. According to the results of the anisotropic inversion, a low-resistivity feature (ρ of about 10 ohm·m) is identified at the Brv site at depths of about 4 km.
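
To make the idea of fitting model MTS curves to the experimental ones more concrete, the sketch below implements the standard isotropic 1D layered-earth forward response (the usual impedance recursion). It is not the anisotropic code of Pek and Santos (2006) or the D+/OCCAM codes, and the three-layer model is a hypothetical illustration of a roughly 10 ohm·m conductor embedded in a resistive crust:

```python
# Standard 1D (isotropic) magnetotelluric forward model for a layered
# half-space, shown only to illustrate how model responses such as those in
# Fig. 4 are computed. The three-layer model is hypothetical: a conductive
# mid-crustal layer (10 Ohm*m) between 15 and 25 km depth.
import numpy as np

MU0 = 4.0e-7 * np.pi

def mt1d_response(rho, h, T):
    """Surface impedance of a 1D layered model.
    rho : layer resistivities [Ohm*m], last value = underlying half-space
    h   : thicknesses of the layers above the half-space [m]
    T   : period [s]
    Returns (apparent resistivity [Ohm*m], phase [deg])."""
    omega = 2.0 * np.pi / T
    k = np.sqrt(1j * omega * MU0 / np.asarray(rho, dtype=complex))
    Z = 1j * omega * MU0 / k[-1]          # intrinsic impedance of the half-space
    for j in range(len(h) - 1, -1, -1):   # recurse upward through the layers
        Z0 = 1j * omega * MU0 / k[j]
        t = np.tanh(k[j] * h[j])
        Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)
    rho_a = abs(Z) ** 2 / (omega * MU0)
    phase = np.degrees(np.angle(Z))
    return rho_a, phase

rho = [100.0, 10.0, 1000.0]  # resistive upper crust, conductor, resistive basement
h = [15000.0, 10000.0]       # 0-15 km and 15-25 km layers
for T in (100.0, 1000.0, 10000.0):
    print(T, mt1d_response(rho, h, T))
```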

figure 5

a Geoelectric resistivity sections according to 1D models calculated using the OCCAM inversion procedure (from Logvinov et al. 2021) and the 1D anisotropic inversion by Pek and Santos (2006); 1—supposed zones of reduced seismic velocity along the M-BS seismic profile (Figs. 1, 3). b Earth's crust structure along the corresponding segment of the M-BS seismic profile (see Fig. 3a); stars—seismic events within a distance of 10 km from the sites Frn, Btn, Brv and Brn (a) and from the M-BS seismic profile (b)

The resistivity of the rocks underlying the sediments exceeds 100 ohm·m. In both inverse models, a conductor with a resistivity of 10 ohm·m is distinguished at Frn, Brv and Brn at depths of 20 ± 5 km. The most distorted records of MT field variations were obtained at Btn, caused by the proximity of a high-voltage power line, which affected the interpretation parameters and the inversion results. Comparison of the obtained geoelectric 1D models with the seismic section along the M-BS profile (Fig. 5b) shows the coincidence of the conductors with low-velocity layers in the depth interval of 15–20 km.

3.2 Quasi-3D inversion of MV data

The next step in the interpretation of the 2021 geoelectric survey data was the modeling of the conductance (S) distribution of the sedimentary cover and the earth's crust using a quasi-3D inverse technique based on the Price thin-sheet approach, with data fitting by minimization of a Tikhonov parametric functional using conjugate-gradient optimization and a maximum-smoothness stabilizer (Kováčiková et al. 2005). The purpose of applying the quasi-3D inversion was: (1) to determine the spatial position of anomalous features in the studied area and explain the behavior of the MV parameters; and (2) to compare the obtained results with other geological and geophysical data.

The thin-sheet method involves only the magnetic components of the MT field. VMTF data from 21 stations over the entire period range of 50–2500 s were used in the inversion. The study area (90 km × 90 km) was divided into 6 km × 6 km cells; the cell size was chosen with respect to the applied periods and the distance between observation points. The vertical conductivity distribution in the quasi-3D model was represented by a 1D layered section (Fig. 6a–c) selected taking into account previous geophysical and geological data, the geoelectrical characteristics of the sedimentary cover (see the previous sections) and an earlier MT survey on Bulgarian territory by Srebrov et al. (2013). Analysis of the equivalent current systems at different depths, commonly used in thin-sheet modeling (Banks 1979), did not provide the expected depth estimate of the upper level of the crustal anomaly source due to shielding by the conductive surface sediments filling the Lom depression. The smooth pattern of the current-function distribution becomes unstable and breaks down at a depth of 4 km as an effect of the continuation of the field below the upper boundary of the source, in this case represented by the conductive sediments of the Lom depression (see supplementary material). Therefore, the depth of the upper boundary of the crustal anomaly was taken from the 1D inversion results, which placed the most conductive crustal objects in the depth interval of about 15–20 km (Fig. 5). The initial thin-sheet model for the iterative inversion procedure was a homogeneous sheet with a uniform normal conductance distribution, located at a fixed depth.

figure 6

Results of the quasi-3D inversion—distribution of the conductance S (Siemens) in the thin sheet with the corresponding input 1D sections: a thin sheet at the surface and recorded real and imaginary induction arrows for the period of 50 s; b thin sheet at the depth of 15 km and real and imaginary experimental induction arrows for the period of 2500 s; c two-sheet model with the surface sheet (subfigure a) and a crustal sheet at 15 km, and real experimental and model induction arrows for the period of 2500 s; d experimental and model imaginary induction arrows for the same model as in subfigure c. Faults (cross-hatched belts) and other details as in Fig. 1b; L—Lom depression, M—Moesian Plate, B—Balkans (Fore-Balkan)

Generally, the validity of the thin-sheet approach is limited at short periods by near-surface disturbances and at long periods by source effects. Although, given the geoelectric conditions in the Lom depression, the penetration depth at the shortest periods of 50 and 100 s should reach about 20 and 30 km respectively, a series of inversions of the geomagnetic responses at different depths at these periods showed that the best fit between the model and experimental geomagnetic responses was achieved when the conductive thin sheet was placed at the surface, i.e. the resulting conductivity models reflect mainly the distribution of the subsurface conductive sediments. The surface sheet substituted a sedimentary layer with an average depth of 4 km and a conductivity of 0.025 S/m (Fig. 6a). Starting with the period of 900 s, the geoelectric image of crustal depths predominates in the conductivity models. This is accompanied by a reversal of the imaginary induction arrows, which at short periods (50, 100 s) point in a direction corresponding (or close) to the real arrows, to the opposite orientation (Fig. 6a, b). To depict the distribution of conductivity in the earth's crust, a thin sheet was placed at a depth of 15 km in the inversions at periods of 900, 1600 and 2500 s. However, the resulting conductivity model still appeared to be influenced by the subsurface sediments (Fig. 6b). Therefore, to separate the effects of the conductive sediments and the crustal anomaly source, a two-sheet model was chosen for the inversion of the VMTFs at the periods of 900, 1600 and 2500 s. The first layer, with a thickness of 4 km corresponding to the average thickness of the sediments of the Lom depression (Fig. 2c), was substituted by a surface thin sheet with a fixed conductance derived from the single-sheet inversion at the period of 50 s (Fig. 6a). The second sheet was immersed at a depth of 15 km (Fig. 6c, d).
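
The quoted penetration depths of roughly 20 and 30 km at 50 and 100 s can be checked with the standard skin-depth formula; in the sketch below, the assumed bulk resistivity of about 30 ohm·m is an illustrative value chosen to reproduce those numbers, not a result of the survey:

```python
# Back-of-the-envelope check of the penetration depths quoted above, using
# the standard skin-depth formula delta = sqrt(rho * T / (pi * mu0)).
# The bulk resistivity (~30 Ohm*m) is an assumed average for the section
# beneath the Lom depression, used only to illustrate the estimate.
import math

MU0 = 4.0e-7 * math.pi

def skin_depth_km(rho_ohmm: float, T_s: float) -> float:
    return math.sqrt(rho_ohmm * T_s / (math.pi * MU0)) / 1000.0

for T in (50.0, 100.0):
    print(f"T = {T:>5.0f} s -> delta ~ {skin_depth_km(30.0, T):.0f} km")
# -> roughly 19-20 km at 50 s and ~28 km at 100 s, in line with the text.
```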

Modelling experiments to select the normal conductance at the thin-sheet edges showed that the best data fit was obtained with a value of 100 S for the surface sheet simulating the sedimentary cover. The most satisfactory normal conductance for the crustal sheet was 1000 S (Fig. 6a–c). In the inversion, the data weight multiplying the parametric functional (squared during the procedure) was uniform, 0.01, selected taking into account the amplitudes of the recorded magnetic transfer functions (maximum 0.3). Starting from the normal conductance in the thin sheet (or two sheets), the inversion procedure typically converged after 20–35 iterations and terminated when the change of the functional between two iterations fell below the data weight value. Specifically, the presented surface model at a period of 50 s (Fig. 6a) converged after 32 iterations, the one-sheet crustal model at 2500 s (Fig. 6b) stopped after 24 iterations, and the two-sheet model (Fig. 6c) converged after 29 iterations. The data fit for the final two-sheet model is shown in Fig. 6c, d.
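
The normal conductance values used above are consistent with simple layer-conductance arithmetic, S = σ·h: the 4 km thick sedimentary layer with a conductivity of 0.025 S/m gives exactly the 100 S adopted for the surface sheet, while 1000 S would correspond, for example, to a hypothetical 10 km thick layer of 10 ohm·m, in line with the conductors found by the 1D inversions. The sketch below simply spells out this check:

```python
# Consistency check of the conductance values used in the thin-sheet models:
# the conductance of a uniform layer is S = sigma * h = h / rho.

def conductance_S(thickness_m: float, sigma_S_per_m: float) -> float:
    """Conductance of a uniform layer [Siemens]."""
    return sigma_S_per_m * thickness_m

print(conductance_S(4000.0, 0.025))        # surface sheet: 100 S (4 km, 0.025 S/m)
# The 1000 S crustal sheet would correspond, e.g., to a hypothetical 10 km
# thick layer with an average resistivity of 10 Ohm*m (an illustration only):
print(conductance_S(10000.0, 1.0 / 10.0))  # -> 1000 S
```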

4 Discussion

Figure 7 shows the correlation of the conductivity both in the near-surface sediments and at mid-crustal depths (subfigures a and b, respectively) with seismicity. As mentioned in the Introduction, the earthquake foci in both subfigures appear outside or at the margins of the conductors (both horizontally and vertically).

figure 7

Comparison of S near the surface ( a ) and at a depth of 15 km ( b ) (from Fig.  6 a and c respectively) and seismic events above (dots) and below (crosses with focal depths) the mid-crustal conductive layer. Faults (cross-hatched belts) and other details as in Fig.  1 b and Fig.  6 ; L—Lom depression, M—Moesian Plate, B—Balkans (Fore-Balkan)

According to the quasi-3D inversion results, the near-surface anomalous conductivity distribution in the study area appears to be controlled by the electrically conductive sediments of the Lom depression (Fig. 6a). One anomaly, close to the junction of the Iskar and Northern Forebalkan faults, appears in the area of distribution of the Pleistocene loam complexes (Angelova 2001). Two anomalies west and east of the river Ogosta seem to correspond to areas of Neogene (Pliocene) clay distribution (Angelova 2008). The anomalous conductivity area at the intersection of the Danube, Tsibritsa and Motru faults (and at the confluence of the Danube and Tsibritsa rivers) may be related to the intrusion of highly mineralized water from a deeper aquifer (Toteva and Shanov 2021).

At mid-crustal depths (Figs. 6c, 7), the basement of the most strongly subsiding block is non-conductive, separated by the Ogosta (Ogs), Tsibritsa (Tsb) and Northern Forebalkan (NFB) faults from the more conductive surroundings. An anomalous electrical conductivity structure appears at the intersections of the Ogosta fault with the Danube and South Moesian faults. Moderate seismic activity and recent vertical movements have been documented on the Ogosta fault (Georgiev and Shanov 1991; Angelova 2008); however, the hypocenters are concentrated at its intersection with the Northern Forebalkan fault and mostly south of the latter, while the central part of the conductive feature itself remains unaffected by seismic events (Fig. 7). Most of the hypocenters are located above the depth of the mentioned conductor (black dots in Fig. 7). The entire western and southwestern margin of the study area is also significantly electrically conductive. This electrically anomalous area is located west of the Tsibritsa fault and southwest of the Northern Forebalkan fault, which delimits from the north the area that already belongs to the Balkanides (Fore-Balkans). The highly conductive area may be associated with the effect of the predominantly strike-slip TF-CF zone west of the study area (see Fig. 1a). It may also represent the deep source of the near-surface anomaly at the intersection of the Danube and Tsibritsa faults in the above-mentioned area of occurrence of the mineralized water spring.

The previous 1D inverse models at the four points indicated the existence of low-resistivity objects in the depth interval of 15–25 km (Fig. 5a). This corresponds to the crustal conductive feature identified by the quasi-3D modeling in the area around the Btn site; Frn and Brv are located at the edge of this conductor, while Brn is located outside the conductive area. At the same time, the results of the anisotropic 1D inversion do not indicate a significant decrease in electrical resistivity at crustal depths. The results of both methods also point to the existence of near-surface low-resistivity (conductive) layers around the Btn and Brv points.

An anomalously conductive mid-crustal layer with an upper boundary at approximately 15 km, resulting from the thin-sheet inversion, also correlates with the seismic low-velocity layer revealed by Dachev et al. (1994) (Figs. 3a, 5). Earthquake foci within a 10 km radius around the Brn, Frn, Brv and Btn sites (Fig. 7) were also superimposed on their 1D resistivity-depth distributions (Fig. 5a). Similarly, events occurring within 10 km on either side of the M-BS seismic profile were plotted (Figs. 5b, 7). From the presented sample, it can be seen that most of the events took place at shallow depths above the low-velocity layers (and none at their depths). Mid-crustal reflective low-seismic-velocity layers with an upper boundary at a depth of about 15 km were described by Gutenberg (1954) and have since been reported in various studies and regions (e.g. Zorin et al. 2002; Zhan et al. 2020). In seismically active regions, low-velocity zones associated with the presence of partial melt, residual magma, heat escaping from the mantle, or frictional heating at fault zones exposed to shear between contacting blocks act as waveguides channeling seismic waves during earthquakes (e.g. Zhao et al. 2000; Qin et al. 2018; Nagar et al. 2021). However, low-velocity layers are also widespread in stable, cold crustal regions, where they cannot be explained by increased heat and the presence of melt. Low-velocity layers often correlate in space with electrically conductive layers (Eaton 1980; Vanyan et al. 2001). Their common mechanism can be interpreted as a consequence of rheological stratification and processes at the brittle/ductile crust transition, which increase the porosity and influence the geometry of the pore spaces, the amount of pore fluids and their salinity, and consequently control both the elasticity and the electrical conductivity of the rocks (Gough 1986; Marquis and Hyndman 1992; Unsworth and Rondenay 2013); graphitization along fission planes due to ductile shear has also been mentioned as an alternative mechanism of increased conductivity (Simpson 1999; Glover and Adam 2008). The fluid origin is most likely associated with dehydration (Jones 1992), while the most reasonable explanation for the increase in porosity itself is, according to Pavlenkova (2004), the dilatancy phenomenon, again associated with the influx of hydrous fluids. The low-velocity/high-conductivity layers are thought to act as detachment zones separating the weak and brittle parts of the crust, onto which most faults, except deep fault zones cutting the whole crust, flatten. Episodic seismic events occur above such interfaces or at their periphery.

The presented results support the findings of other available studies (Antonov 2000; Groudev and Petrova 2017) concerning geodetic monitoring, stress tests and natural hazard assessment for the KNPP operation, which state the stability and safety of the geological environment in the study area. According to the regional GPS studies by Kotzev et al. (2001), the overall kinematic pattern shows that the only tectonically active structures in northern Bulgaria lie east of the domain hosting the Lom basin. The western and southern boundaries of the domain are characterized by N-S to NE-SW extension. In the northwest, a system of NE-trending faults (the Vinishte-Lom fault in the study area, Fig. 1b) shows left-lateral movement. The eastern and southeastern boundaries of the domain (along the Yantra river east of the study area, Fig. 3a) are not distinct, however, and show moderate right-lateral strike-slip and NE-SW compression. As already mentioned in Sect. 2.2 (Seismic results and seismicity), the northern boundary is formed by the Danube dip-slip fault with a recently uplifted (in response to extension) northern block. The geodetic survey by Valev et al. (2016), focused on the area around the KNPP, registered only weak and slow deformation of a variable character in the KNPP area. Likewise, the DInSAR (Differential Synthetic Aperture Radar Interferometry) studies by Drakatou et al. (2015) reported the stability of the region, with a negligible rate of deformation ranging between −1 and +1.5 mm/year. Seismic surveys by the Common Depth Point (CDP) and refraction methods for the exploration of coal-bearing horizons (Yaneva and Shanov 2015) prove the uniformity of the tectonic regime from the end of the Dacian (Pliocene) to the present. The oil and gas prospecting seismic studies (Toteva and Shanov 2021) have noted the deep Tsibritsa fault topographically predetermining the eponymous Danube tributary; however, they do not address the question of whether the fault is active.

Although the presented MT survey results are consistent with the above-mentioned studies, which suggest no special measures for KNPP safety, further research should be directed towards the creation of a complete 3D image of the study area using 3D inversion procedures, which would depict the vertical geoelectrical structure in more detail. This would require a set of broadband, fully five-component MT measurements with reference measurements and inter-station processing, with the presented results serving as a priori input information. The dataset should also be supplemented with MT and GDS results from the adjacent Romanian part of the Moesian Platform in the north (Stanica and Stanica 2011) and with the completely missing data from Serbian territory, from the arched belt of faults, namely the TF-CF strike-slip system, bending around the Moesian Plate from the west.

5 Conclusion

Although the main focus of engineering geology is the area within the reach of human activity and its interaction with earth processes, it should not be limited to the earth's surface, since deep tectonic processes can significantly affect any engineering and geotechnical work. The results of a case study of the geological structure in the area of the Kozlodui nuclear power plant in Bulgaria show how the analysis of geoelectric features can complement the body of geological and geophysical information used for seismic hazard assessment.

1D inverse resistivity modeling based on MT data recorded during the summer 2021 field experiment indicated the existence of mid-crustal low-resistivity features coincident with a seismic low-velocity layer revealed by a previous regional seismic survey. The subsequent quasi-3D inversion provided insight into the subsurface sedimentary structure as well as the electrical conductivity distribution in the mid-crust. An electrically anomalous feature with an upper boundary at a depth of about 15 km appears at the intersection of the Ogosta fault with the South Moesian and Danube faults. The conductive western and southwestern margin of the investigated area is probably related to the strike-slip fault systems bounding the Moesian Plate from the west. The mid-crustal layer of high electrical conductivity and low seismic velocity is assumed to correspond to the transition zone between the brittle and ductile crust. Seismic events may occur at its outer boundary; however, no large fault structures with the potential to transfer seismic energy from tectonically active areas were revealed in the study area. The presented results support the conclusions of previous seismic hazard studies and confirm that the Kozlodui nuclear power plant is located in an area with a stable geological environment. In further research, however, the results of studies covering the fault system linking the Carpathians with the Balkanides west of the studied area should be included.

References

Adam A, Vero J (1990) Application of the telluric and magnetotelluric methods in selection of sites for nuclear plants. Proc Indian Acad Sci (Earth Planet Sci) 99(4):657–667

Adam A, Szarka L, Novak A, Wesztergom V (2016) Key results on deep electrical conductivity anomalies in the Pannonian Basin (PB) and their geodynamic aspects. Acta Geod Geophys 52(2):205–228. https://doi.org/10.1007/s40328-016-0192-2

Abramova AM, Varentsov IM, Velev A, Gavrilov R, Golubev NG, Zhdanov MS, Martanus ER, Sokolova EYu, Schneier VS (1994) Investigation of deep geoelectric structure of Bulgaria. Phys Earth 11:59–69 (in Russian)

Ahmed N, Ghazi S, Sami J (2018) Seismicity assessment of Fukushima region, fault kinematics and calculation of PGA value for Idosawa fault in Hamadori area, Japan. Nat Hazards 92:1065–1079. https://doi.org/10.1007/s11069-018-3240-0

Angelova D (2001) Quaternary geology, geomorphology and tectonics in the Iskar River valley system, the Danubian Plain (Bulgaria). Bull Geol Soc Greece 34(1):55–60. https://doi.org/10.12681/bgsg.16943

Angelova D (2008) Integral Environmental Assessment of Ogosta River Basin, (Northwestern Bulgaria). BALWOIS 2008–196:1–15

Antonov D (2000) “Kozloduy” NPP geological environment as a barrier against radionuclide migration. Transactions 32/14, International Youth Nuclear Congress 2000: Youth, Future, Nuclear; Environment & Safety/87, SK01K0039

Bala A, Raileanu V, Dinu C, Diaconescu M (2015) Crustal seismicity and active fault systems in Romania. Rom Rep Phys 67(3):1176–1191

Banks RJ (1979) The use of the equivalent current systems in the interpretation of the geomagnetic deep sounding data. Geophys JR Astr Soc 87:139–157

Berdichevsky MN, Dmitriev VI (2008) Models and methods in magnetotellurics. Springer-Verlag, Berlin Heidelberg. https://doi.org/10.1007/978-3-540-77814-1

Bourlange S, Mekkawi M, Conin M, Schnegg P-A (2012) Magnetotelluric study of the Remiremont-Epinal-Rambervillers zone of migrating seismicity, Vosges (France). Bulletin De La Société Géologique De France 183(5):461–470

Cavazza W, Roure F, Spackman W, Stampfli G, Ziegler P (eds) (2004) The mediterranean region from crust to mantle. The Transmed Atlas. Geological and Geophysical Framework of the Mediterranean and the Surrounding Areas. In: A publication of the Mediterranean Consortium for the 32nd International Geological Congress, Florence, Italy

Chemberski HI, Botoucharov ND (2013) Triassic Lithostratigraphic Correlation in the Moesian Platform (Bulgaria–Romania). Stratigr Geol Correl 21(6):609–627. https://doi.org/10.1134/S0869593813060087

Chattopadhyay A, Bhattacharjee D, Srivastava S (2020) Neotectonic fault movement and intraplate seismicity in the central Indian shield: a review and reappraisal. J Mineral Petrol Sci J-STAGE Adv Publ 115(2):136–149. https://doi.org/10.2465/jmps.190824b

Constable SC, Parker RL, Constable CG (1987) Occam’s inversion: a practical algorithm for the inversion of electromagnetic data. Geophysics 52:289–300

Convertito V, De Matteis R, Improta L, Pino NA (2020) Fluid-triggered aftershocks in an anisotropic hydraulic conductivity geological complex: the case of the 2016 Amatrice Sequence, Italy. Front Earth Sci 8:541323. https://doi.org/10.3389/feart.2020.541323

Dachev H (1988) Structure of the Earth Crust in Bulgaria. Technique 334 (in Bulgarian with a Summary in English)

Dachev H, Bokov P, Radulesku F, Demetresku K, Lazaresku V, Polonik G (1994) Moesian Plate. In: Chekunov AV (ed) Lithosphere of Central and Eastern Europe: Young Platform and Alpine Fold Belt. Naukova Dumka, Kiev, pp 197–198 (in Russian)

Dachev H, Kornea I (1980) Moesian Platform. In: Sologub VB, Guterch A, Prosen D (eds) The structure of the crust of Central and Eastern Europe according to geophysical studies. Naukova Dumka, Kiev, pp 59–68 (in Russian)

Demetrescu C (Project Director) (2013) Scientific Report. The project “The geomagnetic field under the heliospheric forcing. Determination of the internal structure of the Earth and evaluation of the geophysical hazard produced by solar eruptive phenomena”.Program IDEI. Contract 93/5.10.2011, Stage I-III. Institute of Geodynamics Romanian Academy. http://www.geodin.ro/IDEI2011/engl/index.html

Di Q, Fu Ch, An Zh, Wang R, Wang G, Wang M, Qi Sh, Liang P (2020) An application of CSAMT for detecting weak geological structures near the deeply buried long tunnel of the Shijiazhuang-Taiyuan passenger railway line in the Taihang Mountains. Eng Geol 268:105517. https://doi.org/10.1016/j.enggeo.2020.105517

Dobrev TB, Ivanova VP, Pishalov SS (1975) Regional characteristics of physical properties of main rock complexes in Bulgaria. Geophysical collection Institute of geophysics of Acad Sci Ukraine 52 (in Russian)

Dobrodnyak L, Logvinov I, Nakalov E, Rakhlin L, Timoshin S (2014) Application of magneto-telluric stations (Geomag-02) in geoelectric studies on the territory of Bulgaria. Seminar proceedings 3, 2013 INRNE-BAS, Sofia, Bulgaria, pp 148–151

Drakatou ML, Bignami Ch, Stramondo S, Parcharidis I (2015) Ground deformation observed at Kozloduy (Bulgaria) and Akkuyu (Turkey) NPPs by means of multitemporal SAR interferometry. Theofrastos Digital Library - Department of Geology, A.P.Th. http://geolib.geo.auth.gr

Eaton GP (1980) Geophysical and geological characteristics of the crust of the Basin and Range Province. In: Studies in geophysics: continental tectonics, edited by the National Research Council, Division on Engineering and Physical Sciences, Commission on Physical Sciences, Mathematics, and Applications, Geophysics Research Board, Assembly of Mathematical and Physical Sciences, Geophysics Study Committee, Washington DC, pp 107–108. http://www.nap.edu/catalog/203.htm

Eggers DE (1982) An eigenstate formulation of the magnetotelluric impedance tensor. Geophysics 47:1204–1214

Faure Walker J (2021) Fukushima: why we need to look back thousands of years to get better at predicting earthquakes. https://theconversation.com/fukushima-why-we-need-to-look-back-thousands-of-years-to-get-better-at-predicting-earthquakes-156882

Gasperikova E, Rosenkjaer GK, Arnason K, Newman GA, Lindsey NJ (2015) Resistivity characterization of the Krafla and Hengill geothermal fields through 3D MT inverse modeling. Geothermics 57:246–257. https://doi.org/10.1016/j.geothermics.2015.06.015

Georgiev T, Shanov S (1991) Contemporary Geodynamics of the Western Part of the Moesian Platform (Lom Depression). Bulg Geophys J XVII(3):3–9 (in Bulgarian)

Glover PWJ, Adam A (2008) Correlation between crustal high conductivity zones and seismic activity and the role of carbon during shear deformation. J Geophys Res 113:B12210. https://doi.org/10.1029/2008JB005804

Gough DI (1986) Seismic reflectors, conductivity, water and stress in the continental crust. Lett Nature 323:143–144

Groudev P, Petrova P (2017) Overview of the available information concerning seismic hazard for the Kozloduy NPP site. Prog Nucl Energy 97:162–169. https://doi.org/10.1016/j.pnucene.2017.01.007

Guangmeng G, Jie Y (2013) Three attempts of earthquake prediction with satellite cloud images. Nat Hazards Earth Syst Sci 13:91–95. https://doi.org/10.5194/nhess-13-91-2013

Gutenberg B (1954) Low-velocity layers in the Earth’s mantle. Bull GSA 65:337–348

Haak V, Hutton R (1986) Electrical resistivity in continental lower crust. Geol Soc Lond Spec Publ 24:35–49. https://doi.org/10.1144/GSL.SP.1986.024.01.05

Hoskin T, Regenauer-lieb K, Jones A (2015) Deep conductivity anomaly of the Darling Fault Zone - implications for fluid transport in the Perth Basin. ASEG Ext Abstr 1:1–4. https://doi.org/10.1071/ASEG2015ab047

Hermance JF (1995) Electrical conductivity models of the crust and mantle. In: Ahrens TJ (ed) A handbook of physical constants: global earth physics. AGU Ref Shelf I, AGU, Washington DC, pp 190–205

Jones A (1992) Electrical conductivity of the continental lower crust. In: Fountain DM, Arculus RJ, Kay RW (eds) Continental Lower Crust. Elsevier, pp 81–143

Kaya T, Tank BS, Tuncer MK, Rokityansky II, Tolak E, Savchenko T (2009) Asperity along the North Anatolian Fault imaged by magnetotellurics at Duzce, Turkey. Earth Planets Space 61:871–884

Kotzev V, Nakov R, Burchfel BC, King R, Reilinger R (2001) GPS study of active tectonics in Bulgaria: results from 1996 to 1998. J Geodyn 31:189–200

Kounov A, Gerdjikov I, Vangelov D, Balkanska E, Lazarova L, Georgiev S, Blunt E, Stockli D (2017) First thermochronological constraints on the Cenozoic extension along the Balkan fold-thrust belt (Central Stara Planina Mountains, Bulgaria). Int J Earth Sci 107:1515–1538. https://doi.org/10.1007/s00531-017-1555-9

Kováčiková S, Červ V, Praus O (2005) Modelling of the conductance distribution at the eastern margin of the european Hercynides. Stud Geophys Geod 49:403–421

Krstekanic N, Willingshofer E, Broerse T, Matenco L, Toljic M, Stojadinovic U (2021) Analogue modelling of strain partitioning along a curved strike-slip fault system during backarc-convex orocline formation: Implications for the Cerna-Timok fault system of the Carpatho-Balkanides. Journ Struct Geol 149:104386. https://doi.org/10.1016/j.jsg.2021.104386

Ladanivsky BT (2003) Algorithm for processing MTS data. Fifth geophysical readings of VV Fedynsky, February 27 - March 01, 2003. Abstracts of reports, pp 134–135 (in Russian)

Ladanivsky B, Logvinov I, Tarasov V (2019) Earth mantle conductivity beneath the Ukrainian territory. Stud Geophys Geod 63:290–303. https://doi.org/10.1007/s11200-018-0347-4

Lapenna V, Lorenzo P, Perrone A, Piscitelli S (2003) High-resolution geoelectrical tomographies in the study of Giarrossa landslide (southern Italy). Bull Eng Geol Env 62:259–268. https://doi.org/10.1007/s10064-002-0184-z

Logvinov I, Boyadzhiev G, Srebrov B, Rakhlin L, Logvinova G, Timoshin S (2021) Geoelectric studies of the Kozloduy nuclear power plant region, Bulgaria. Geophys Journ 6(43):3–22. https://doi.org/10.24028/gzhv43i6.251549

Magnusson MK, Fernlund JMR, Dahlin T (2010) Geoelectrical imaging in the interpretation of geological conditions affecting quarry operations. Bull Eng Geol Environ 69:465–486. https://doi.org/10.1007/s10064-010-0286-y

Manevich AI, Kaftan VI, Losev IV, Shevchuk RV (2021) Improvement of the deformation GNSS monitoring network of the Nizhne-Kansk Massif underground research laboratory site. Seismic Instrum 57(5):587–599. https://doi.org/10.3103/S0747921050042

Marquis G, Hyndman RD (1992) Geophysical support for aqueous fluids in the deep crust: seismic and electrical relationships. Geophys J Int 110:91–105

Mavrodiev SC, Pekevski L, Kikuashvili G, Botev E, Getsov P, Mardirossian G, Sotirov G, Teodossiev D (2015) On the imminent regional seismic activity forecasting using INTERMAGNET and sun-moon tide code data. Open J Earthq Res 4:102–113. https://doi.org/10.4236/ojer.2015.43010

Mladenovic A, Antic M, Trivic B, Cvetkovic V (2019) Investigating distant effects of the Moesian promontory: brittle tectonics along the western boundary of the Getic unit (East Serbia). Swiss J Geosci 112:143–161. https://doi.org/10.1007/s00015-018-0324-5

Morozova LI (2012) Clouds are the Forerunners of Earthquakes. Sci First Hand 2(32):81–91

Nadirov R, Rzayev O (2017) The metsamor nuclear power plant in the active tectonic zone of Armenia is a potential Caucasian Fukushima. J Geosci Environ Protect 5:46–55. https://doi.org/10.4236/gep.2017.54005

Nagar M, Pavankumar G, Mahesh P, Rakesh N, Chouhan AK, Nagarjuna D, Chopra S, Ravi KM (2021) Magnetotelluric evidence for trapped fluids beneath the seismogenic zone of the Mw6.0 Anjar earthquake, Kachchh intraplate region, Northwest India. Tectonophysics 814:228969. https://doi.org/10.1016/j.tecto.2021.228969

Nikolova JB (1980) The experience of studying laboratory and geophysical methods of physical properties of volcanic rocks in wells (North Bulgaria). In: Proceedings of the XI congress of the Carpathian-Balkan geological association, Geophysics, Naukova Dumka, Kiev, pp 132–140 (in Russian)

Nover G (2005) Electrical properties of crustal and mantle rocks-A review of laboratory measurements and their explanation. Surv Geophys 26:593–651. https://doi.org/10.1007/s10712-005-1759-6

Oros E, Placinta AO, Moldovan IA (2021) The analysis of earthquakes sequence generated in the Southern Carpathians, Orsova june-july 2020 (Romania): seismotectonic implications. Rom Rep Phys 73:706

Parker RL, Whaler KA (1981) Numerical method for establishing solution to the inverse problem of electromagnetic induction. J Geophys Res 86(B10):9574–9584

Parks EM, McBride JH, Nelson ST, Tingey DG, Mayo AL, Guthrie WS, Hoopes JC (2011) Comparing electromagnetic and seismic geophysical methods: estimating the depth to water in geologically simple and complex arid environments. Eng Geol 117(1–2):62–77

Pavlenkova NI (2004) Low velocity and low electrical resistivity layers in the middle crust. Ann Geophys 47(1):157–169. https://doi.org/10.4401/ag-3268

Pek J, Santos E (2006) Magnetotelluric inversion for anisotropic conductivities in layered media. Phys Earth Planet Int 158(2–4):139–158. https://doi.org/10.1016/j.pepi.2006.03.023

Petrescu L, Borleanu F, Radulian M, Ismail-Zadeh A, Matenco L (2021) Tectonic regimes and stress patterns in the Vrancea Seismic Zone: Insights into intermediate-depth earthquake nests in locked collisional settings. Tectonophysics 799:228688. https://doi.org/10.1016/j.tecto.2020.228688

Petraki E, Nikolopoulos D, Nomicos C, Stonham J, Cantzos D, Yannakopoulos P, Kottou S (2015) Electromagnetic pre-earthquake precursors: mechanisms, data and models-a review. J Earth Sci Clim Change 6(1):1000250. https://doi.org/10.4172/2157-7617.1000250

Qin W, Zhang S, Li M, Wu T, Chu Z (2018) Distribution of Intra-crustal low velocity zones beneath Yunnan from seismic ambient noise tomography. Journ Earth Sci 29(6):1409–1418. https://doi.org/10.1007/s12583-017-0815-8

Ritter O, Hoffmann-Rothe A, Bedrosian PA, Weckmann U, Haak V (2015) Electrical conductivity images of active and fossil fault zones. In: Bruhn DF, Burlini L (eds) High strain zones: structure and physical properties, vol 245. Special Publication, Geol Soc London, pp 165–186

Savchyn I, Vaskovets S (2018) Local geodynamics of the territory of Dniestr pumped storage power plant. Acta Geodyn Geomater 15(1):189. https://doi.org/10.13168/AGG.2018.0002

Sano Y, Takahata N, Kagoshima T, Shibata T, Onoue T, Zhao D (2016) Groundwater helium anomaly reflects strain change during the 2016 Kumamoto earthquake in Southwest Japan. Sci Rep 6:37939. https://doi.org/10.1038/srep37939

Satitpittakul A, Vachiratienchai C, Siripunvaraporn W (2013) Factors influencing cavity detection in Karst terrain on two-dimensional (2-D) direct current (DC) resistivity survey: a case study from the western part of Thailand. Eng Geol 152:162–171

Schmucker U (1970) Anomalies of geomagnetic variations in the southwestern United States. Bull Scripps Inst Oceanogr Univ Calif 13:13–32

Scholz CH, Tan YJ, Albino F (2019) The mechanism of tidal triggering of earthquakes at mid-ocean ridges. Nat Communications 10:2526. https://doi.org/10.1038/s41467-019-10605-2

Simpson F (1999) Stress and seismicity in the lower continental crust: a challenge to simple ductility and implications for electrical conductivity mechanisms. Surv Geophys 20:201–227

Srebrov B, Ladanivskyy B, Logvinov I (2013) Application of space generated geomagnetic variations for obtaining geoelectrical characteristics at Panagyurishte geomagnetic observatory region. Comptes Rendus de l'Académie Bulgare des Sciences 66(6):857–864

Stanciu I-M, Ioane D (2017) Regional seismicity in the Moesian Platform and the Intramoesian Fault. Geo-Eco-Marina 23:263–271

Stanica D, Stanica DA (2011) Earthquakes precursors. In: D’Amico S (ed) Earthquake research and analysis. IntechOpen, pp 79–100. https://doi.org/10.5772/28262

Suzuki K, Toda S, Kusunoki K, Fujimitsu Y, Mogi T, Jomori A (2000) Case study of electrical and electromagnetic methods applied to mapping active faults beneath the thick Quaternary. Eng Geol 56:29–45

Toteva A, Shanov S (2021) Chemical composition of groundwater in the zone of slow water exchange of the Upper Pontian aquifer, Northwestern Bulgaria. Eng Geol Hydrogeol 35:23–30. https://doi.org/10.52321/igh.35.1.23

Unsworth M, Bedrosian PA (2004) On the geoelectric structure of major strike-slip faults and shear zones. Earth Planets Space 56(12):1177–1184

Unsworth M, Rondenay S (2013) Mapping of the distribution of the fluids in the crust and lithospheric mantle utilizing geophysical methods. In: Harlov DE, Austrheim H (eds) Metasomatism and the chemical transformation of rock, lecture notes in earth system sciences. Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-28394-9_13

Valev G, Rainov G, Vassileva K (2016) Geodetic measurements in the area of Kozloduy Nuclear Power Plant. Coordinates XII 8:37–40

Vangelov D, Pavlova M, Gerdjikov I, Kounov A (2016) Timok fault and Tertiary strike-slip tectonics in part of Western Bulgaria. Annu Univ Min Geol St Ivan Rilski 59:112–117

Vanyan L, Tezkan B, Palshin N (2001) Low electrical resistivity and seismic velocity at the base of the upper crust as indicator of rheologically weak layer. Surv Geophys 22:131–154. https://doi.org/10.1023/A:1012937410685

Varentsov IM (2007) Joint robust inversion of MT and MV data// Electromagnetic sounding of the Earth’s interior. In: Spichak V (ed) Methods in geochemistry and geophysics 40. Elsevier, pp 189–222

Volvovsky BS, Starostenko VI (eds) (1996) Geophysical parameters of the lithosphere of the southern sector of the Alpine Orogene. Kiev-Naukova Dumka (in Russian)

Wieland M, Griesser L, Kuendig C (2000) Seismic early warning system for a nuclear power plant. In: 12th IWCEE,p 1781

Wynn J, Mosbrucker A, Pierce H, Spicer K (2016) Where is the hot rock and where is the ground water - using CSAMT to map beneath and around Mount St. Helens. J Environ Eng Geophys 21:79–87

Yaneva M, Shanov SB (2015) Sedimentological Model of Lom Lignite Basin (North Bulgaria) – Integrated Use of Geophysical and Geological Data. In: 8th Congress of the Balkan Geophysical Society, EAGE, p 26678. https://doi.org/10.3997/2214-4609.201414133

Zafrir H, Barbosa S, Levintal E, Weisbrod N, Horin YB, Zalevsky Z (2020) The impact of atmospheric and tectonic constraints on radon-222 and carbon dioxide flow in geological porous media - a dozen-year research summary. Front Earth Sci 8:559298. https://doi.org/10.3389/feart.2020.559298

Zagorchev I (2009) Geomorphological formation of Bulgaria. Principles and state of the art. Comptes Rendus de l'Académie Bulgare des Sciences 62(8):981–992

Zhan W, Pan L, Chen X (2020) A widespread mid-crustal low-velocity layer beneath Northern China revealed by the multimodal inversion of Rayleigh waves from ambient seismic noise. J Asian Earth Sci 196:104372. https://doi.org/10.1016/j.jseaes.2020.104372

Zhao DP, Ochi F, Hasegawa A, Yamamoto A (2000) Evidence for the location and cause of large crustal earthquakes in Japan. J Geophys Res Solid Earth 105(B6):13579–13594. https://doi.org/10.1029/2000jb900026

Zorin ZA, Mordvinova VV, Turutanov EK, Belichenko BG, Artemyev AA, Kosarev GL, Gao SS (2002) Low seismic velocity layers in the Earth’s crust beneath Eastern Siberia (Russia) and Central Mongolia: receiver function data and their possible geological implication. Tectonophysics 359:307–327

Acknowledgements

This work was carried out as part of the implementation of the scientific project «Research on Partial Differential Equations and their applications in Modelling of non-linear processes», funded by the Bulgarian National Science Fund, contract KP-06N42/2, and was partially supported by the scientific project 0117U000117 «Deep processes in the crust and upper mantle of Ukraine and formation of mineral deposits», funded by the National Academy of Sciences of Ukraine. We thank the editor and the reviewers for their helpful comments and suggestions.

Open access publishing supported by the National Technical Library in Prague. The authors declare that no other funds, grants, or other support were received during the preparation of this manuscript and they have no financial or other non-financial interests.

Author information

Authors and affiliations.

Institute of Geophysics, Academy of Sciences Czech Republic, Bocni II/1401, 4-14131, Praha, Czech Republic

S. Kovacikova

Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Sofia, Bulgaria

G. Boyadzhiev

S.I. Subbotin Institute of Geophysics, National Academy of Sciences of Ukraine, Kiev, Ukraine

I. Logvinov

Corresponding author

Correspondence to S. Kovacikova .

Ethics declarations

Conflict of interest.

The authors have not disclosed any competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

I. Logvinov: deceased

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Kovacikova, S., Boyadzhiev, G. & Logvinov, I. Geoelectric studies in earthquake hazard assessment: the case of the Kozlodui nuclear power plant, Bulgaria. Nat Hazards (2024). https://doi.org/10.1007/s11069-024-06867-9

Received : 20 December 2022

Accepted : 02 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1007/s11069-024-06867-9

Keywords

  • Kozlodui nuclear power plant
  • Moesian Plate
  • Seismic risk
  • Magnetotellurics

Global Report on Food Crises (GRFC) 2024

GRFC 2024

Published by the Food Security Information Network (FSIN) in support of the Global Network against Food Crises (GNAFC), the GRFC 2024 is the reference document for global, regional and country-level acute food insecurity in 2023. The report is the result of a collaborative effort among 16 partners to achieve a consensus-based assessment of acute food insecurity and malnutrition in countries with food crises and aims to inform humanitarian and development action.  

FSIN and Global Network Against Food Crises. 2024. GRFC 2024 . Rome.

When citing this report online please use this link:

https://www.fsinplatform.org/report/global-report-food-crises-2024/

Blog The Education Hub

https://educationhub.blog.gov.uk/2024/08/20/gcse-results-day-2024-number-grading-system/

GCSE results day 2024: Everything you need to know including the number grading system

Thousands of students across the country will soon be finding out their GCSE results and thinking about the next steps in their education.   

Here we explain everything you need to know about the big day, from when results day is, to the current 9-1 grading scale, to what your options are if your results aren’t what you’re expecting.  

When is GCSE results day 2024?  

GCSE results day will take place on Thursday 22 August.

The results will be made available to schools on Wednesday and available to pick up from your school by 8am on Thursday morning.  

Schools will issue their own instructions on how and when to collect your results.   

When did we change to a number grading scale?  

The shift to the numerical grading system was introduced in England in 2017 firstly in English language, English literature, and maths.  

By 2020 all subjects were shifted to number grades. This means anyone with GCSE results from 2017-2020 will have a combination of both letters and numbers.  

The numerical grading system was introduced to signal more challenging GCSEs and to better differentiate between students’ abilities, particularly at the higher grades between A* and C. There used to be only 4 grades between A* and C; now, with the numerical grading scale, there are 6.

What do the number grades mean?  

The grades are ranked from 1, the lowest, to 9, the highest.  

The grades don’t exactly translate, but the two grading scales meet at three points as illustrated below.  

[Image: comparison chart from the UK Department for Education showing the new GCSE grades (9 to 1) alongside the old grades (A* to G). Grade 9 aligns with A*, grades 8 and 7 with A, and so on down to U, which remains unchanged.]

The bottom of grade 7 is aligned with the bottom of grade A, while the bottom of grade 4 is aligned to the bottom of grade C.    

Meanwhile, the bottom of grade 1 is aligned to the bottom of grade G.  

What to do if your results weren’t what you were expecting?  

If your results weren’t what you were expecting, firstly don’t panic. You have options.  

First things first, speak to your school or college – they could be flexible on entry requirements if you’ve just missed your grades.   

They’ll also be able to give you the best tailored advice on whether re-sitting while studying for your next qualifications is a possibility.   

If you’re really unhappy with your results you can enter to resit all GCSE subjects in summer 2025. You can also take autumn exams in GCSE English language and maths.  

Speak to your sixth form or college to decide when it’s the best time for you to resit a GCSE exam.  

Look for other courses with different grade requirements     

Entry requirements vary depending on the college and course. Ask your school for advice, and call your college or another one in your area to see if there’s a space on a course you’re interested in.    

Consider an apprenticeship    

Apprenticeships combine a practical training job with study too. They’re open to you if you’re 16 or over, living in England, and not in full time education.  

As an apprentice you’ll be a paid employee, have the opportunity to work alongside experienced staff, gain job-specific skills, and get time set aside for training and study related to your role.   

You can find out more about how to apply here .  

Talk to a National Careers Service (NCS) adviser    

The National Career Service is a free resource that can help you with your career planning. Give them a call to discuss potential routes into higher education, further education, or the workplace.   

Whatever your results, if you want to find out more about all your education and training options, as well as get practical advice about your exam results, visit the  National Careers Service page  and Skills for Careers to explore your study and work choices.   

You may also be interested in:

  • Results day 2024: What's next after picking up your A level, T level and VTQ results?
  • When is results day 2024? GCSEs, A levels, T Levels and VTQs

Tags: GCSE grade equivalent, gcse number grades, GCSE results, gcse results day 2024, gsce grades old and new, new gcse grades



COMMENTS

  1. PDF Case Study

    Activities Performed. Configuration review: Performed configuration reviews (console and checklist based) on operating system and databases for Hyperion, OnAir, ERP, SAP, and PeopleSoft to identify data leakage related risks. Third Party Risk Management: Assisted in improving third party risk management and security management practices (For ...

  2. Module 1: Case Studies & Examples

    The Three-Point Range Values. Using three-point values is a simple and effective way to express a range, such as the level of threat and likelihood associated with an event or activity. The three values are minimum, most likelihood, and maximum. When we quantify risk, we use the formula Threat x Likelihood = Risk.

  3. Enterprise Risk Management Examples l Smartsheet

    In an enterprise risk assessment example, ... For example, the case study cites a risk that the company assessed as having a 5 percent probability of a somewhat better-than-expected outcome but a 10 percent probability of a significant loss relative to forecast. In this case, the downside risk was greater than the upside potential.

  4. A case study exploring field-level risk assessments as a leading safety

    The results provide insight into promising ways to measure and document as well as support and manage a risk-based program over several years. After common barriers to risk assessment implementation are discussed, mini case examples to illustrate how the organization improved and used their FLRA process to identify leading indicators follow.

  5. PDF Quality Risk Management Principles and Industry Case Studies

    Case study utilizes recognized quality risk management tools. Case study is appropriately simple and succinct to assure clear understanding. Case study provides areas for decreased and increased response actions. 7. Case study avoids excessive redundancy in subject and tools as compared to other planned models. 8.

  6. Risk Assessment Case Studies

    Case Study: Manufacturing Company. Background: A safety products company was contracted to perform a risk assessment. Result: The most expensive products and solutions were recommended by the product company. The client purchased and installed the materials, resulting in an improper application of a safety device.

  7. PDF Risk assessment case study

    The aim of the case studies was to apply and evaluate the applicability of different methods for risk analysis (i.e. hazard identification and risk estimation) and to some extent risk evaluation of drinking water supplies. The case studies will also provide a number of different examples on how risks in drinking water systems can

  8. Project Risk Management: 5 Case Studies You Should Not Miss

    5 Project Risk Management Case Studies. It is now high time to approach the practical side of project risk management. This section provides selected five case studies that explain the need and application of project risk management. Each case study gives an individual approach revealing how risk management can facilitate success of the project.

  9. Risk Management Case Studies

    How do different organisations use Predict! to manage their risks and opportunities? Read our risk management case studies to learn from their experiences and insights. Find out how Predict! helps them to achieve their strategic objectives, deliver projects on time and budget, and improve their risk culture.

  10. (PDF) A case study exploring field-level risk ...

    A case study exploring field-level risk assessments as a leading safety indicator. January 2017; Transactions 342(1):22-28; ... and scanned the various risk assessment example documents .

  11. Quality Risk-Management Principles and PQRI Case Studies

    The highest graded case studies were measured against two additional criteria to ensure a balanced mix of examples for this report. Due to the size of a well-developed risk assessment, especially when applied to a complex problem or operating area, the presented case studies in most instances represent redacted versions of the actual assessments.

  12. PDF CASE STUDY AUDIT PLANNING & RISK ASSESSMENT 1. INTRODUCTION

    The objective of this case study is to reinforce the messages contained in the Audit Planning & Risk Assessment Guide through the completion of a practitioner-based case study that covers the key stages in the audit planning and risk assessment cycle, beginning with identification of the audit universe and related objectives.

  13. Risk Management—The Revealing Hand (PDF)

    We draw lessons from seven case studies about the multiple and contingent ways that a corporate risk function can foster highly interactive and intrusive dialogues to surface and prioritize risks, help to allocate resources to mitigate them, and bring clarity to the value trade-offs and moral dilemmas that lurk in those decisions.

  14. How to Do a Risk Assessment: A Case Study

    Accept whatever risk is left and get on with the ministry's work, or reject the remaining risk and eliminate it by getting rid of the source of the risk. Step 5: Ongoing Risk Management. On a regular basis, in keeping with the type of risk and its threat, the risk assessment and risk management plan should be reviewed to see if it is still valid.

  15. Risk Management in IT Projects

    ... It is an integral element of management, based on a holistic approach to risk, i.e. risk is a collection of many different factors. Szczepaniak (2013) distinguishes four steps in the ...

  16. Risk Management Articles, Research, & Case Studies

    Risk Management―The Revealing Hand, by Robert S. Kaplan and Anette Mikes. This article explores the role, organization, and limitations of risk identification and risk management, especially in situations that are not amenable to quantitative risk modeling. It argues that firms can avoid the artificial choice between quantitative and ...

  17. Case Study 1: Risk Assessment and Lifecycle Management Learning (PDF)

    Risk assessment should be carried out initially and repeated throughout development in order to assess to what extent the identified risks have become controllable. The time point of the risk assessment should be clearly stated, together with a summary of all material quality attributes and process parameters.

  18. Case Study: How FAIR Risk Quantification Enables Information ...

    Security leaders can prioritize their security initiatives based on the top risk areas that an organization faces. Swisscom uses quantifiable risk management, enabled through Open FAIR, to communicate security risk to the business and to ascertain business risk appetites and improve business owner accountability for risk. (A simplified sketch of this kind of quantification also follows at the end of this list.)

  19. Risk Assessment for Collaborative Operation: A Case Study on Hand ...

    Risk assessment is a systematic and iterative process, which involves risk analysis, where probable hazards are identified, and then corresponding risks are evaluated along with solutions to mitigate the effect of these risks. ... The case study was analyzed to understand the benefits of collaborative operations done through a conceptual study ...

  20. Fall risk case studies (PDF)

    Timed Up and Go: 15 seconds with a cane on left, minimal arm swing noted. 30-Second Chair Stand Test: Able to rise from the chair 7 times without using her arms. 4-Stage Balance Test: Able to stand for 10 seconds in Position 1 (feet side by side) and Position 2 (semi-tandem). However, she loses her balance after 3 seconds in Position 3 (tandem).

  21. Geoelectric studies in earthquake hazard assessment: the case of the ...

    The study presents the results of geoelectric research for seismic risk assessment, using the example of the Kozlodui nuclear power plant in Bulgaria. The image of the geoelectric structure in the study area was obtained using one-dimensional inverse electrical resistivity modeling of the full five-component magnetotelluric data and quasi-three-dimensional inverse conductivity modeling of the ...
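
To make the three-point approach described near the top of this list concrete, here is a minimal sketch in Python. It treats each three-point estimate (minimum, most likely, maximum) as a triangular distribution and takes the Threat x Likelihood = Risk formula at face value; the variable names, dollar figures, and the choice of a triangular distribution are illustrative assumptions, not values drawn from any of the case studies above.

    # Minimal sketch: combine two three-point estimates into a risk range.
    # All figures are illustrative assumptions, not data from the text.
    import random

    def three_point_sample(minimum, most_likely, maximum):
        """Draw one value from a triangular distribution built from a
        three-point (minimum / most likely / maximum) estimate."""
        return random.triangular(minimum, maximum, most_likely)

    threat = (50_000, 150_000, 400_000)  # potential loss in dollars (assumed)
    likelihood = (0.05, 0.15, 0.30)      # chance the event occurs in a year (assumed)

    # Risk = Threat x Likelihood, sampled many times to produce a range
    # rather than a single point estimate.
    trials = 10_000
    results = sorted(
        three_point_sample(*threat) * three_point_sample(*likelihood)
        for _ in range(trials)
    )
    print("10th percentile:", round(results[int(trials * 0.10)]))
    print("Median:         ", round(results[int(trials * 0.50)]))
    print("90th percentile:", round(results[int(trials * 0.90)]))

Reporting the 10th-to-90th percentile spread, rather than a single number, fits the spirit of an initial analysis: the range is broad but defensible, and it can be tightened later if a deeper analysis is authorized.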

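The FAIR-style quantification mentioned in entry 18 can be sketched in the same spirit. The scenario names, ranges, and the reduction of FAIR to loss event frequency times loss magnitude are simplifying assumptions made for illustration; none of the numbers come from Swisscom, Open FAIR, or the case study itself.

    # Minimal sketch: rank hypothetical risk scenarios by simulated
    # annualized loss exposure (all names and ranges are invented).
    import random

    SCENARIOS = {
        # name: ((min, most likely, max) events per year,
        #        (min, most likely, max) dollars per event)
        "Ransomware on file servers": ((0.1, 0.3, 1.0), (100_000, 400_000, 2_000_000)),
        "Phishing-led credential theft": ((1.0, 4.0, 12.0), (5_000, 25_000, 150_000)),
        "Third-party data breach": ((0.05, 0.2, 0.6), (250_000, 750_000, 3_000_000)),
    }

    def simulated_annual_loss(freq, loss, trials=10_000):
        """Average simulated annual loss: event frequency times per-event loss."""
        total = 0.0
        for _ in range(trials):
            events = random.triangular(freq[0], freq[2], freq[1])
            per_event = random.triangular(loss[0], loss[2], loss[1])
            total += events * per_event
        return total / trials

    # Sort so the largest exposures surface first for prioritization.
    ranked = sorted(
        ((name, simulated_annual_loss(freq, loss)) for name, (freq, loss) in SCENARIOS.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, exposure in ranked:
        print(f"{name}: about ${exposure:,.0f} expected annual loss")

Expressing each scenario as an expected annual loss in dollars is what allows security leaders to compare scenarios against business risk appetite and prioritize initiatives, which is the point the Swisscom case study makes.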