There were no NDAs or BLAs for which CBER took regulatory action in FY 2023.
These figures do not include submissions made before FY 2023. No final study reports to satisfy a PMR or PMC were submitted in FY 2023 for which FDA also took regulatory action in FY 2023; final study reports submitted in FY 2023 will be reported in the fiscal year in which FDA determines that the PMR or PMC was satisfied. The table below describes the characteristics of new protocols containing RWD, as well as protocol amendments that added RWD to a study that did not previously include it. The numbers include submissions of interventional study protocols to an IND and submissions of non-interventional study protocols to an existing IND or to a pre-IND; they do not include protocols or protocol amendments submitted only as part of a background package for a meeting with FDA. Protocols are reported in the fiscal year during which they are submitted.
| Intended Regulatory Purpose | Number of Protocols |
|---|---|
| To support the demonstration of safety and/or effectiveness for a product not previously approved by FDA | 0 |
| To support labeling changes for an approved product, including: | 0 |
|  | 0 |
|  | 0 |
|  | 0 |
|  | 0 |
|  | 0 |
| To satisfy a PMR | 2 |
| To satisfy a PMC | 2 |
| Data Source | Number of Protocols |
|---|---|
| Electronic health records | 0 |
| Medical claims | 2 |
| Product, disease, or other registry | 2 |
| Digital health technologies in non-research settings | 0 |
| Other | 0 |
| Study Design | Number of Protocols |
|---|---|
| Randomized controlled trial | 0 |
| Externally controlled trial | 0 |
| Non-interventional (observational) study | 4 |
| Other | 0 |
Studies often provide information on both effectiveness and safety. For this report, a study was classified as “safety” if it was conducted primarily to assess a known or potential safety risk. All other studies were classified as “effectiveness.” A study may have more than one regulatory purpose or data source and therefore may be included in more than one category. The term “registry” is sometimes used to refer to a non-interventional cohort study that is intended to address a specific regulatory question in a targeted population. For such studies, this report provides the original source(s) of study data.
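Because a single protocol may have more than one regulatory purpose or data source, the category totals in the tables above can exceed the number of distinct protocols. A minimal sketch of that tallying logic, using hypothetical protocol records rather than FDA data:

```python
from collections import Counter

# Hypothetical protocol records (not FDA data); each protocol may list
# more than one regulatory purpose and more than one data source.
protocols = [
    {"id": "P1", "purposes": {"PMR"}, "sources": {"Medical claims"}},
    {"id": "P2", "purposes": {"PMR", "PMC"}, "sources": {"Registry"}},
    {"id": "P3", "purposes": {"PMC"}, "sources": {"Medical claims", "Registry"}},
]

# Tally per category: a multi-purpose protocol is counted once in each
# category it belongs to, so category totals may exceed len(protocols).
purpose_counts = Counter(p for rec in protocols for p in rec["purposes"])
source_counts = Counter(s for rec in protocols for s in rec["sources"])

print(purpose_counts["PMR"], purpose_counts["PMC"], len(protocols))  # 2 2 3
```

Here P2 appears under both PMR and PMC, so the purpose counts sum to four even though only three distinct protocols exist.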
Global trends in climate change litigation: 2024 snapshot
This report provides a numerical analysis of how many climate change litigation cases were filed in 2023, where and by whom, and a qualitative assessment of trends and themes in the types of cases filed. It is the sixth report in the series, produced by the Grantham Research Institute in partnership with the Sabin Center for Climate Change Law and drawing on the Sabin Center’s Climate Change Litigation Databases . Each report provides a synthesis of the latest research and developments in the climate change litigation field.
Key messages
- At least 230 new climate cases were filed in 2023, many of them seeking to hold governments and companies accountable for climate action. However, the number of new cases grew less rapidly than in previous years, which may suggest a consolidation and concentration of strategic litigation efforts in areas anticipated to have high impact.
- Climate cases have continued to spread to new countries, with cases filed for the first time in Panama and Portugal in 2023.
- 2023 was an important year for international climate change litigation, with major international courts and tribunals being asked to rule and advise on climate change. Just 5% of climate cases have been brought before international courts, but many of these cases have significant potential to influence domestic proceedings.
- There were significant successes in ‘government framework’ cases in 2023; these challenge the ambition or implementation of a government’s overall climate policy response. The European Court of Human Rights’ decision in April 2024 in the case of KlimaSeniorinnen and ors. v. Switzerland is likely to lead to the filing of further cases.
- The number of cases concerning ‘climate-washing’ has grown in recent years. 47 such cases were filed in 2023, bringing the recorded total to more than 140. These cases have met with significant success, with more than 70% of completed cases decided in favour of the claimants.
- There were important developments in ‘polluter pays’ cases: more than 30 cases worldwide are currently seeking to hold companies accountable for climate-related harm allegedly caused by their contributions to greenhouse gas emissions.
- Litigants continue to file new ‘corporate framework’ cases, which seek to ensure companies align their group-level policies and governance processes with climate goals. The New Zealand Supreme Court allowed one such case to proceed, although cases filed elsewhere have been dismissed. The landmark case of Milieudefensie v. Shell is under appeal.
- In this year’s analysis a new category of ‘transition risk’ cases was introduced, which includes cases filed against corporate directors and officers for their management of climate risks. Shareholders of Enea approved a decision to bring such a case against former directors for planned investments in a new coal power plant in Poland.
- The report also identifies several other notable categories of case, including:
  - ESG backlash cases, which challenge the incorporation of climate risk into financial decision-making;
  - Strategic litigation against public participation (SLAPP) suits against NGOs and shareholder activists, which seek to deter them from pursuing climate agendas;
  - Just transition cases, which challenge the distributional impacts of climate policy or the processes by which policies were developed, normally on human rights grounds;
  - Green v. green cases, which concern potential trade-offs between climate and biodiversity or other environmental aims.
Recent previous reports in the series:
2023 snapshot
2022 snapshot
Cyber Evaluation and Management Toolkit (CEMT): Face Validity of Model-Based Cybersecurity Decision Making
1. Introduction
1.1. Background
“The DON needs a new approach to cybersecurity that goes beyond compliance because our over-reliance on compliance has resulted in insecure systems, which jeopardise the missions these systems support. Instead of a compliance mindset, the DON will shift to Cyber Ready, where the right to operate is earned and managed every day. The DON will make this transition by adhering to an operationally relevant, threat-informed process that affordably reduces risk and produces capabilities that remain secure after they have been delivered at speed”. [ 5 ] (p. 7)
1.2. Literature Review
1.3. Cyberworthiness
“The desired outcome of a range of policy and assurance activities that allow the operation of Defence platforms, systems and networks in a contested cyber environment. It is a pragmatic, outcome-focused approach designed to ensure all Defence capabilities are fit-for-purpose against cyber threats”. [ 43 ]
“2.10 The seaworthiness governance principles require that seaworthiness decisions are made:
a. mindfully—decisions are more effective and less likely to have unintended consequences when they are made with a thorough understanding of the context, the required outcome, the options available, and their implications now and in the future
b. collaboratively—obtaining input from all stakeholders and engaging in joint problem-solving results in better decisions (bearing in mind that collaboration does not necessarily require consensus)
c. accountably—decisions only become effective when people take accountability for making them happen
d. transparently—decisions are more effective when everyone understands what has been decided and why”. [ 44 ] (p. 33)
1.4. Addressing the Problem
- Usability—there is limited ability to easily create and review these graph-based threat assessments, especially in large, complex systems;
- Efficiency—reusability of these assessments is limited in comparison to compliance-based approaches that re-apply a common control set;
- Maintainability—it is difficult to update complex graph-based assessments without specialised toolsets as the system or threat environment evolves.
- Are integrated threat models, developed using model-based systems engineering (MBSE) techniques, an effective and efficient basis for the assessment and evaluation of cyberworthiness?
- Do the developed threat models provide decision makers with the necessary understanding to make informed security risk decisions?
- Does the process provide sufficient reusability and maintainability that the methodology is more efficient than prevailing compliance-based approaches?
- Do cybersecurity risk practitioners prefer the integrated threat model approach to traditional security risk assessment processes?
2. Materials and Methods
2.1. Threat-Based Cybersecurity Engineering
- Threat Context, derived from the system or capability design/architecture;
- Threat Identification, provided by the Cyber Threat Intelligence function within an organisation;
- Threat Insight, contributed by the Cyber Threat Emulation function within an organisation;
- Best Practice Controls, distilled from the various cybersecurity frameworks available within the cybersecurity body of knowledge.
- Preventative Controls, a baseline of preventative cybersecurity controls within the system, for inclusion in the system design;
- Detecting Controls, a baseline of detection and response controls relevant to the system, for implementation by the Cyber Operations function within an organisation;
- Recovery Controls, a baseline of recovery and resilience controls relevant to the system, for implementation by the System Operations function within an organisation;
- Residual Risk, the overall risk presented by the threats to the capability given the mitigation mechanisms that are in place.
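The inputs and outputs listed above can be pictured as a simple data model. The classes and field names below are illustrative assumptions for exposition, not CEMT's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative data model for the assessment inputs and outputs above;
# class and field names are assumptions, not CEMT's actual schema.
@dataclass
class Threat:
    name: str
    context: str        # Threat Context: derived from the system/capability design
    insight: str = ""   # Threat Insight: contributed by Cyber Threat Emulation

@dataclass
class AssessmentOutput:
    preventative: list = field(default_factory=list)  # for inclusion in the system design
    detecting: list = field(default_factory=list)     # for the Cyber Operations function
    recovery: list = field(default_factory=list)      # for the System Operations function
    residual_risk: str = "UNASSESSED"                 # overall risk given mitigations in place

out = AssessmentOutput(preventative=["Network segmentation"],
                       detecting=["Host-based intrusion detection"])
print(out.residual_risk)  # UNASSESSED
```

The point of the sketch is the separation of concerns: each control baseline is owned by a different organisational function, while residual risk summarises what remains after all three are applied.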
2.2. Cyber Evaluation and Management Toolkit (CEMT)
2.3. CEMT Sample Model
2.3.1. Threat Modelling
- Misuse case diagrams;
- Intermediate mal-activity diagrams;
- Detailed mal-activity diagrams.
2.3.2. Threat Mitigation
- Allocating assets to the threat model;
- Tracing controls to the threat model.
2.3.3. Risk Assessment
- Attack tree assessment;
- Parametric risk analysis;
- Risk evaluation.
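The attack tree assessment and parametric risk analysis steps can be illustrated with a minimal AND/OR tree evaluation. The node structure, independence assumption, and probability values below are illustrative, not the CEMT parametric model itself:

```python
def attack_probability(node):
    """Recursively estimate attack success probability for an AND/OR attack tree.

    Leaves carry an estimated probability that the adversary action succeeds;
    AND nodes require every child step to succeed; OR nodes succeed if any
    child path does (children assumed independent).
    """
    kind = node["type"]
    if kind == "leaf":
        return node["p"]
    child_ps = [attack_probability(c) for c in node["children"]]
    if kind == "AND":
        p = 1.0
        for cp in child_ps:
            p *= cp
        return p
    # OR: 1 minus the probability that every child path fails
    fail = 1.0
    for cp in child_ps:
        fail *= 1.0 - cp
    return 1.0 - fail

# Hypothetical tree: phish-then-escalate, or directly exploit an exposed service.
tree = {"type": "OR", "children": [
    {"type": "AND", "children": [
        {"type": "leaf", "p": 0.5},   # e.g. spearphish an operator
        {"type": "leaf", "p": 0.4},   # e.g. escalate privileges
    ]},
    {"type": "leaf", "p": 0.1},       # e.g. exploit exposed service
]}
print(round(attack_probability(tree), 3))  # 1 - (1-0.2)*(1-0.1) = 0.28
```

Simulating branches like this is what lets the approach express the relative importance of each mitigated adversary action quantitatively rather than against a static control checklist.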
2.4. Achieving Threat-Based Cybersecurity Engineering
2.5. Efficiency through Automation
- Automated update of complex drawings and simulations to ensure that changes to the design or threat environment can be incorporated efficiently into the threat model;
- Automated model validation to ensure that basic review tasks are automated, allowing expert reviewers to focus on the actual threat assessment component;
- Automated documentation to ensure that the process of creating enduring design artefacts is efficient and accurate.
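The automated model validation step above can be pictured as simple completeness checks over the model. The rule shown here, that every adversary action must trace to at least one mitigating control, is an illustrative assumption about what such a check might enforce:

```python
def validate_traceability(actions, controls):
    """Return adversary actions that no control traces to.

    `actions` is a list of adversary action names; `controls` maps a control
    name to the set of action names it mitigates. An empty result means the
    model passes this (hypothetical) completeness check; a non-empty result
    flags gaps for the expert reviewer to focus on.
    """
    mitigated = set()
    for mitigates in controls.values():
        mitigated |= mitigates
    return [a for a in actions if a not in mitigated]

# Hypothetical model contents, not drawn from the CEMT sample model.
actions = ["Spearphish operator", "Escalate privileges", "Exfiltrate data"]
controls = {
    "Mail filtering": {"Spearphish operator"},
    "Least privilege": {"Escalate privileges"},
}
print(validate_traceability(actions, controls))  # ['Exfiltrate data']
```

Automating mechanical checks like this is what frees expert reviewers to spend their time on the substance of the threat assessment rather than on bookkeeping.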
3.1. Face Validity Trial Setup
3.2. Face Validity Trial Data Collection and Setup
4. Discussion
- Appropriateness of the assessed controls to the system being assessed, as demonstrated by the responses to Question 1;
- Prioritisation of controls, as demonstrated by the responses to Questions 6 and 14;
- Ability for non-expert decision makers to understand the assessment, as demonstrated by Questions 7, 8, and 17.
4.1. Significance
- Extended Model-Based Taxonomy—an extension of an open model-based systems engineering language such as UML or SysML; this is provided to facilitate a model-based approach;
- Threat Focused—the threats to the system, rather than a best-practice control baseline or asset hierarchy, is used as the focal point of the assessment;
- Detailed Adversary Modelling—the actions of the adversary are modelled in detail, facilitating a precise discussion and review of any threat analysis;
- Visualisation and Simulation of Threat—detailed adversary modelling is expressed in simplified graphs such as attack trees, and branches of those graphs can be simulated quantitatively;
- Explicit Traceability to Threats—derived security controls are directly traceable to adversary actions, facilitating discussion and review of the importance of each control in terms of the malicious action it mitigates.
4.2. Future Work
5. Conclusions
Author Contributions
Data Availability Statement
Acknowledgments
Conflicts of Interest
- Australian Government—Department of Home Affairs, Protective Security Policy Framework. Available online: https://www.protectivesecurity.gov.au (accessed on 25 April 2024).
- National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC), NIST Risk Management Framework (RMF). Available online: https://csrc.nist.gov/projects/risk-management/about-rmf (accessed on 25 April 2024).
- Australian Government—Australian Signals Directorate, Information Security Manual (ISM). Available online: https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/ism (accessed on 25 April 2024).
- National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC), NIST Special Publication 800-53 Rev. 5: Security and Privacy Controls for Information Systems and Organizations. Available online: https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final (accessed on 25 April 2024).
- U.S. Department of Navy; Cyber Strategy, November 2023. Available online: https://dvidshub.net/r/irstzr (accessed on 25 April 2024).
- Australian Government—Australian Signals Directorate, System Security Plan Annex Template (March 2024). Available online: https://www.cyber.gov.au/sites/default/files/2024-03/System%20Security%20Plan%20Annex%20Template%20%28March%202024%29.xlsx (accessed on 25 April 2024).
- National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC), Control Catalog (spreadsheet). Available online: https://csrc.nist.gov/files/pubs/sp/800/53/r5/upd1/final/docs/sp800-53r5-control-catalog.xlsx (accessed on 25 April 2024).
- National Institute of Standards and Technology (NIST), OSCAL: The Open Security Controls Assessment Language. Available online: https://pages.nist.gov/OSCAL/ (accessed on 25 April 2024).
- MITRE ATT&CK Framework. Available online: https://attack.mitre.org/ (accessed on 25 April 2024).
- The Department of Defense Cyber Table Top Guide, Version 2, 16 September 2021. Available online: https://www.cto.mil/wp-content/uploads/2023/06/DoD-Cyber-Table-Top-Guide-v2-2021.pdf (accessed on 25 April 2024).
- Monroe, M.; Olinger, J. Mission-Based Risk Assessment Process for Cyber (MRAP-C). ITEA J. Test Eval. 2020 , 41 , 229–232. [ Google Scholar ]
- Kuzio de Naray, R.; Buytendyk, A.M. Analysis of Mission Based Cyber Risk Assessments (MBCRAs) Usage in DoD’s Cyber Test and Evaluation ; Institute for Defense Analyses: Alexandria, VA, USA, 2022; IDA Publication P-33109. [ Google Scholar ]
- Kordy, B.; Piètre-Cambacédès, L.; Schweitzer, P.P. DAG-based attack and defense modeling: Don’t miss the forest for the attack trees. Comput. Sci. Rev. 2014 , 13–14 , 1–38. [ Google Scholar ] [ CrossRef ]
- Weiss, J.D. A system security engineering process. In Proceedings of the 14th Annual NCSC/NIST National Computer Security Conference, Washington, DC, USA, 1–4 October 1991. [ Google Scholar ]
- Schneier, B. Attack trees: Modeling security threats. Dr. Dobb’s J. 1999 , 24 , 21–29. Available online: https://www.schneier.com/academic/archives/1999/12/attack_trees.html (accessed on 25 April 2024).
- Paul, S.; Vignon-Davillier, R. Unifying traditional risk assessment approaches with attack trees. J. Inf. Secur. Appl. 2014 , 19 , 165–181. [ Google Scholar ] [ CrossRef ]
- Kordy, B.; Pouly, M.; Schweitzer, P. Probabilistic reasoning with graphical security models. Inf. Sci. 2016 , 342 , 111–131. [ Google Scholar ] [ CrossRef ]
- Gribaudo, M.; Iacono, M.; Marrone, S. Exploiting Bayesian Networks for the analysis of combined Attack Trees. Electron. Notes Theor. Comput. Sci. 2015 , 310 , 91–111. [ Google Scholar ] [ CrossRef ]
- Holm, H.; Korman, M.; Ekstedt, M. A Bayesian network model for likelihood estimations of acquirement of critical software vulnerabilities and exploits. Inf. Softw. Technol. 2015 , 58 , 304–318. [ Google Scholar ] [ CrossRef ]
- Moskowitz, I.; Kang, M. An insecurity flow model. In Proceedings of the 1997 Workshop on New Security Paradigms, Cumbria, UK, 23–26 September 1997; pp. 61–74. [ Google Scholar ]
- McDermott, J.; Fox, C. Using abuse case models for security requirements analysis. In Proceedings of the 15th Annual Computer Security Applications Conference, Phoenix, AZ, USA, 6–10 December 1999; pp. 55–64. [ Google Scholar ]
- Sindre, G.; Opdahl, A.L. Eliciting security requirements with misuse cases. Requir. Eng. 2004 , 10 , 34–44. [ Google Scholar ] [ CrossRef ]
- Karpati, P.; Sindre, G.; Opdahl, A.L. Visualizing cyber attacks with misuse case maps. In Requirements Engineering: Foundation for Software Quality ; Springer: Berlin/Heidelberg, Germany, 2010; pp. 262–275. [ Google Scholar ]
- Abdulrazeg, A.; Norwawi, N.; Basir, N. Security metrics to improve misuse case model. In Proceedings of the 2012 International Conference on Cyber Security, Cyber Warfare and Digital Forensics, Kuala Lumpur, Malaysia, 26–28 June 2012. [ Google Scholar ]
- Saleh, F.; El-Attar, M. A scientific evaluation of the misuse case diagrams visual syntax. Inf. Softw. Technol. 2015 , 66 , 73–96. [ Google Scholar ] [ CrossRef ]
- Mai, P.; Goknil, A.; Shar, L.; Pastore, F.; Briand, L.C.; Shaame, S. Modeling Security and Privacy Requirements: A Use Case-Driven Approach. Inf. Softw. Technol. 2018 , 100 , 165–182. [ Google Scholar ] [ CrossRef ]
- Matulevičius, R. Fundamentals of Secure System Modelling ; Springer International Publishing: Cham, Switzerland, 2017; pp. 93–115. [ Google Scholar ]
- Sindre, G. Mal-activity diagrams for capturing attacks on business processes. In Requirements Engineering: Foundation for Software Quality ; Springer: Berlin/Heidelberg, Germany, 2007; pp. 355–366. [ Google Scholar ]
- Opdahl, A.; Sindre, G. Experimental comparison of attack trees and misuse cases for security threat identification. Inf. Softw. Technol. 2009 , 51 , 916. [ Google Scholar ] [ CrossRef ]
- Karpati, P.; Redda, Y.; Opdahl, A.; Sindre, G. Comparing attack trees and misuse cases in an industrial setting. Inf. Softw. Technol. 2014 , 56 , 294. [ Google Scholar ] [ CrossRef ]
- Tondel, I.A.; Jensen, J.; Rostad, L. Combining Misuse Cases with Attack Trees and Security Activity Models. In Proceedings of the 2010 International Conference on Availability, Reliability and Security, Krakow, Poland, 15–18 February 2010; pp. 438–445. [ Google Scholar ]
- Meland, P.H.; Tondel, I.A.; Jensen, J. Idea: Reusability of threat models—Two approaches with an experimental evaluation. In Engineering Secure Software and Systems ; Springer: Berlin/Heidelberg, Germany, 2010; pp. 114–122. [ Google Scholar ]
- Purton, L.; Kourousis, K. Military Airworthiness Management Frameworks: A Critical Review. Procedia Eng. 2014 , 80 , 545–564. [ Google Scholar ] [ CrossRef ]
- Mo, J.P.T.; Downey, K. System Design for Transitional Aircraft Support. Int. J. Eng. Bus. Manag. 2014 , 6 , 45–56. [ Google Scholar ] [ CrossRef ]
- Hodge, R.J.; Craig, S.; Bradley, J.M.; Keating, C.B. Systems Engineering and Complex Systems Governance—Lessons for Better Integration. INCOSE Int. Symp. 2019 , 29 , 421–433. [ Google Scholar ] [ CrossRef ]
- Simmonds, S.; Cook, S.C. Use of the Goal Structuring Notation to Argue Technical Integrity. INCOSE Int. Symp. 2017 , 27 , 826–841. [ Google Scholar ] [ CrossRef ]
- United States Government Accountability Office. Weapon Systems Cybersecurity: DOD Just Beginning to Grapple with Scale of Vulnerabilities. GAO-19-128 . 2018. Available online: https://www.gao.gov/products/gao-19-128 (accessed on 15 June 2024).
- Joiner, K.F.; Tutty, M.G. A tale of two allied defence departments: New assurance initiatives for managing increasing system complexity, interconnectedness and vulnerability. Aust. J. Multi-Discip. Eng. 2018 , 14 , 4–25. [ Google Scholar ] [ CrossRef ]
- Joiner, K.F. How Australia can catch up to U.S. cyber resilience by understanding that cyber survivability test and evaluation drives defense investment. Inf. Secur. J. A Glob. Perspect. 2017 , 26 , 74–84. [ Google Scholar ] [ CrossRef ]
- Thompson, M. Towards Mature ADF Information Warfare—Four Years of Growth. Defence Connect Multi-Domain . 2020. Available online: https://www.defenceconnect.com.au/supplements/multi-domain-2 (accessed on 15 June 2024).
- Fowler, S.; Sweetman, C.; Ravindran, S.; Joiner, K.F.; Sitnikova, E. Developing cyber-security policies that penetrate Australian defence acquisitions. Aust. Def. Force J. 2017 , 102 , 17–26. [ Google Scholar ]
- Australian Senate. Budget Hearings on Foreign Affairs Defence and Trade, Testimony by Vice Admiral Griggs, Major General Thompson and Minister of Defence (29 May, 2033–2035 hours). 2018. Available online: https://parlview.aph.gov.au/mediaPlayer.php?videoID=399539timestamp3:19:43 (accessed on 15 June 2024).
- Australian Government. ADF Cyberworthiness Governance Framework ; Australian Government: Canberra, Australia, 2020.
- Australian Government. Defence Seaworthiness Management System Manual. 2018. Available online: https://www.defence.gov.au/sites/default/files/2021-01/SeaworthinessMgmtSystemManual.pdf (accessed on 15 June 2024).
- Allen, M.S.; Robson, D.A.; Iliescu, D. Face Validity: A Critical but Ignored Component of Scale Construction in Psychological Assessment. Eur. J. Psychol. Assess. Off. Organ Eur. Assoc. Psychol. Assess. 2023 , 39 , 153–156. [ Google Scholar ] [ CrossRef ]
- Fowler, S.; Sitnikova, E. Toward a framework for assessing the cyber-worthiness of complex mission critical systems. In Proceedings of the 2019 Military Communications and Information Systems Conference (MilCIS), Canberra, Australia, 12–14 November 2019. [ Google Scholar ]
- Fowler, S.; Joiner, K.; Sitnikova, E. Assessing cyber-worthiness of complex system capabilities using MBSE: A new rigorous engineering methodology. IEEE Syst. J. 2022, submitted. Available online: https://www.techrxiv.org/users/680765/articles/677291-assessing-cyber-worthiness-of-complex-system-capabilities-using-mbse-a-new-rigorous-engineering-methodology (accessed on 25 April 2024).
- Cyber Evaluation and Management Toolkit (CEMT). Available online: https://github.com/stuartfowler/CEMT (accessed on 25 April 2024).
- Fowler, S. Cyberworthiness Evaluation and Management Toolkit (CEMT): A model-based approach to cyberworthiness assessments. In Proceedings of the Systems Engineering Test & Evaluation (SETE) Conference 2022, Canberra, Australia, 12–14 September 2022. [ Google Scholar ]
- National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC), NIST Special Publication 800-160 Rev. 2: Developing Cyber-Resilient Systems: A Systems Security Engineering Approach. Available online: https://csrc.nist.gov/pubs/sp/800/160/v2/r1/final (accessed on 25 April 2024).
- National Institute of Standards and Technology (NIST), CSF 2.0: Cybersecurity Framework. Available online: https://www.nist.gov/cyberframework (accessed on 25 April 2024).
- Madni, A.; Purohit, S. Economic analysis of model-based systems engineering. Systems 2019 , 7 , 12. [ Google Scholar ] [ CrossRef ]
- Bussemaker, J.; Boggero, L.; Nagel, B. The agile 4.0 project: MBSE to support cyber-physical collaborative aircraft development. INCOSE Int. Symp. 2023 , 33 , 163–182. [ Google Scholar ] [ CrossRef ]
- Amoroso, E.G. Fundamentals of Computer Security Technology ; Pearson College Div: Englewood Cliffs, NJ, USA, 1994. [ Google Scholar ]
- INCOSE. Systems Engineering Vision 2020 ; International Council on Systems Engineering: Seattle, WA, USA, 2007. [ Google Scholar ]
- Madni, A.M.; Sievers, M. Model-based systems engineering: Motivation, current status, and research opportunities. Syst. Eng. 2018 , 21 , 172–190. [ Google Scholar ] [ CrossRef ]
- Huang, J.; Gheorghe, A.; Handley, H.; Pazos, P.; Pinto, A.; Kovacic, S.; Collins, A.; Keating, C.; Sousa-Poza, A.; Rabadi, G.; et al. Towards digital engineering—The advent of digital systems engineering. Int. J. Syst. Syst. Eng. 2020 , 10 , 234–261. [ Google Scholar ] [ CrossRef ]
- Chelouati, M.; Boussif, A.; Beugin, J.; El Koursi, E.-M. Graphical safety assurance case using goal structuring notation (GSN): Challenges, opportunities and a framework for autonomous trains. Reliab. Eng. Syst. Saf. 2023 , 230 , 108933. [ Google Scholar ] [ CrossRef ]
- Sujan, M.; Spurgeon, P.; Cooke, M.; Weale, A.; Debenham, P.; Cross, S. The development of safety cases for healthcare services: Practical experiences, opportunities and challenges. Reliab. Eng. Syst. Saf. 2015 , 140 , 200–207. [ Google Scholar ] [ CrossRef ]
- Nguyen, P.H.; Ali, S.; Yue, T. Model-based security engineering for cyber-physical systems: A systematic mapping study. Inf. Softw. Technol. 2017 , 83 , 116–135. [ Google Scholar ] [ CrossRef ]
- Geismann, J.; Bodden, E. A systematic literature review of model-driven security engineering for cyber–physical systems. J. Syst. Softw. 2020 , 169 , 110697. [ Google Scholar ] [ CrossRef ]
- Carter, B.; Adams, S.; Bakirtzis, G.; Sherburne, T.; Beling, P.; Horowitz, B. A preliminary design-phase security methodology for cyber–physical systems. Systems 2019 , 7 , 21. [ Google Scholar ] [ CrossRef ]
- Larsen, M.H.; Muller, G.; Kokkula, S. A Conceptual Model-Based Systems Engineering Method for Creating Secure Cyber-Physical Systems. INCOSE Int. Symp. 2022 , 32 , 202–213. [ Google Scholar ] [ CrossRef ]
- Japs, S.; Anacker, H.; Dumitrescu, R. SAVE: Security & safety by model-based systems engineering on the example of automotive industry. In Proceedings of the 31st CIRP Design Conference, Online, 19–21 May 2021. [ Google Scholar ]
- Navas, J.; Voirin, J.; Paul, S.; Bonnet, S. Towards a model-based approach to systems and cybersecurity: Co-engineering in a product line context. Insight (Int. Counc. Syst. Eng.) 2020 , 23 , 39–43. [ Google Scholar ] [ CrossRef ]
- Geismann, J.; Gerking, C.; Bodden, E. Towards ensuring security by design in cyber-physical systems engineering processes. In Proceedings of the International Conference on the Software and Systems Process, Gothenburg, Sweden, 26–27 May 2018. [ Google Scholar ]
- Mažeika, D.; Butleris, R. MBSEsec: Model-based systems engineering method for creating secure systems. Appl. Sci. 2020 , 10 , 2574. [ Google Scholar ] [ CrossRef ]
- Object Management Group. UAF: Unified Architecture Framework. 2022. Available online: https://www.omg.org/spec/UAF (accessed on 15 June 2024).
- Jurjens, J. Secure Systems Development with UML ; Springer: Berlin/Heidelberg, Germany, 2005. [ Google Scholar ]
- Apvrille, L.; Roudier, Y. Towards the model-driven engineering of secure yet safe embedded systems. Int. Workshop Graph. Models Secur. 2014 , 148 , 15–30. [ Google Scholar ] [ CrossRef ]
| # | Survey Question | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree |
|---|---|---|---|---|---|---|
Q1 | The CEMT produces risk assessments that are tailored to the context in which the system operates | 0 | 0 | 15 | 50 | 35 |
Q2 | Cyberworthiness assessments are simple to produce using the CEMT | 5 | 0 | 40 | 35 | 20 |
Q3 | The CEMT is an effective use of time | 0 | 0 | 30 | 25 | 45 |
Q4 | The CEMT process is intuitive | 0 | 5 | 25 | 45 | 25 |
Q5 | The CEMT encourages stakeholders to work collaboratively to determine the residual risk level | 0 | 0 | 10 | 35 | 55 |
Q6 | The CEMT clearly identifies which security controls are important to the system | 0 | 0 | 5 | 55 | 40 |
Q7 | The CEMT produces transparent cyberworthiness assessments | 0 | 5 | 10 | 40 | 45 |
Q8 | The CEMT facilitates informed decision making with respect to the identified cybersecurity risks | 0 | 0 | 5 | 50 | 45 |
Q9 | The CEMT produces cyberworthiness assessments that have ongoing value through the future phases of the capability life cycle | 0 | 0 | 10 | 40 | 50 |
Q10 | The CEMT would improve my understanding of the cyberworthiness of a system | 0 | 0 | 10 | 20 | 70 |
Q11 | The CEMT produces accurate assessments of a system’s cyberworthiness | 0 | 10 | 20 | 35 | 35 |
Q12 | The CEMT facilitates the engagement of stakeholders and the provision of meaningful input from those stakeholders into a cyberworthiness assessment | 0 | 0 | 20 | 40 | 40 |
Q13 | The cyberworthiness assessments produced by the CEMT are sufficiently detailed | 0 | 5 | 20 | 30 | 45 |
Q14 | The CEMT identifies the relative impact of security controls with respect to the cyberworthiness of the system | 0 | 5 | 15 | 40 | 40 |
Q15 | The CEMT is not overly dependent on the subjective opinion of subject matter experts | 0 | 0 | 30 | 50 | 20 |
Q16 | The CEMT provides sufficient information to allow decision makers to be accountable for their decisions | 0 | 10 | 15 | 35 | 40 |
Q17 | The CEMT clearly highlights the areas of greatest cyber risk to the system | 0 | 0 | 15 | 35 | 50 |
Q18 | The CEMT adds value to a system and/or project | 0 | 0 | 5 | 35 | 60 |
Q19 | The CEMT provides a complete and comprehensive approach to determining cyberworthiness | 5 | 10 | 10 | 50 | 25 |
Q20 | The CEMT is an improvement over existing cyberworthiness assessment processes | 0 | 5 | 10 | 20 | 65 |
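The Likert-style percentages in the table above can be summarized per question. The sketch below is a hypothetical illustration, assuming the five numeric columns are the percentage of respondents selecting Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree (each row sums to 100); the question keys and helper names are invented for this example.

```python
# Hypothetical sketch: summarizing the survey rows above.
# Assumption: the five columns are response percentages for
# Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree.

results = {
    "Q2":  (5, 0, 40, 35, 20),
    "Q5":  (0, 0, 10, 35, 55),
    "Q11": (0, 10, 20, 35, 35),
}

def agreement(row):
    """Combined Agree + Strongly Agree percentage."""
    return row[3] + row[4]

def mean_score(row):
    """Weighted mean on a 1-5 scale (1 = Strongly Disagree)."""
    return sum(score * pct for score, pct in enumerate(row, start=1)) / 100

for q, row in results.items():
    print(f"{q}: agreement={agreement(row)}%  mean={mean_score(row):.2f}")
```

Under these assumptions, Q2 ("simple to produce") shows the weakest endorsement of the three and Q5 ("encourages collaboration") the strongest, matching the face-validity discussion.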
| Model-Based Security Assessment Approach | Extended Model-Based Taxonomy | Threat Focused | Detailed Adversary Modelling | Visualisation and Simulation of Threats | Explicit Traceability to Threats |
---|---|---|---|---|---|---
1 | CSRM [ ] | Y | N | N | N | N |
2 | Larsen et al. [ ] | Y | N | N | N | N |
3 | SAVE [ ] | Y | Y | N | N | N |
4 | Navas et al. [ ] | Y | Y | N | N | N |
5 | Geissman et al. [ ] | Y | Y | N | N | N |
6 | MBSESec [ ] | Y | Y | Y | N | N |
7 | UAF [ ] | Y | N | N | N | N |
8 | UMLSec [ ] | Y | N | N | N | N |
9 | SysML-Sec [ ] | Y | N | N | N | N |
10 | CEMT | Y | Y | Y | Y | Y |
Share and Cite
Fowler, S.; Joiner, K.; Ma, S. Cyber Evaluation and Management Toolkit (CEMT): Face Validity of Model-Based Cybersecurity Decision Making. Systems 2024 , 12 , 238. https://doi.org/10.3390/systems12070238
The Cyber Evaluation and Management Toolkit (CEMT) is an open-source, university-developed plugin for commercial model-based systems engineering (MBSE) tools that streamlines cybersecurity risk evaluations for complex cyber-physical systems. The authors developed this research tool to assist the Australian Defence Force (ADF) with the cybersecurity evaluation of complicated ...