
How to Evaluate a Research Report with Precision: Bridging the Gap


Why do you need a research report?

According to QuestionPro, “Research reports are recorded data prepared by researchers or statisticians after analysing the information gathered by conducting organized research, typically in the form of surveys or qualitative methods.”

But the real question is: why is it important? Here are some of the ways a research report will help you:

  • A research report makes it easy to see the findings of your entire research at a glance.
  • It is a systematic way of identifying any gaps that need to be addressed.
  • A report can highlight the important findings, suggestions, and any errors in your research.
  • It represents your entire research, serving as both an objective summary and a source of record.
  • It provides first-hand, original information about your research.
  • A research report contributes to existing knowledge by communicating the conclusions and findings effectively.
  • While presenting the research, it also points to other areas where systematic research is needed.
  • It helps you understand the market and its needs and trends.
  • Because it is a systematic representation of your research, it is easy to search for the topics you need to refer to.
  • It saves time, as you don’t have to go through every minute detail to get the essence of the research.
  • It is easily portable and can be emailed to your stakeholders.


How to evaluate a research report?

In this part, we will look at how to evaluate a research report section by section, the issues that might occur, and a few other components. After all, producing a report without being able to extract the best knowledge from it simply won’t do.

Introductory chapter: Building the Rationale

When we start reading anything, the first thing we pay attention to is the introduction. The same goes for a research report: the introductory chapter helps us understand the researcher, their social and psychological views, their education, and their reason for undertaking this particular research.

The key is to pay attention to the language of the report, by which we mean the way it is written. Research is conducted when a researcher decides to test a hypothesis, expecting it to be supported or rejected. But the researcher may be influencing the research with their own preferences or decisions. This introduces bias into the research, and it is generally detectable through the report’s language.

Bias can often be detected by paying attention to the effort the researcher has put into gathering information from sources, and to how diverse those sources are. Diversity ensures that the researcher is not referring only to sources that support their own views but is shedding light on the topic from all possible angles.

Another thing to check in this chapter is the justification of the study and its relevance. A researcher has to study a wide range of information to form assumptions and test the theory, so it is crucial to know how the researcher has related the topic to that vast body of information and how they have reduced its complexities to a form that serves the research topic.

Last comes the test of quality. Once you have verified the researcher’s impartiality, the credibility of the sources, and the relevance of the information, a picture of the research’s quality emerges. If the chapter satisfies the key requirements – relevance, importance, timeliness, researchability, and the researcher’s competency – then the introductory chapter has served its purpose well.

Review of Literature

Depending on the researcher’s choice, this section can be included in the introductory chapter to build the rationale or given a separate chapter to elaborate it further.

The first major area that the researcher should elaborate on in this chapter is the findings. Here, there are three sub-areas to look into specifically:

  • Gaps – a gap appears where the existing research, once the broad area of the study is mapped out, leaves questions unanswered. In these gap areas, variables may influence the study in unexpected ways, and some of them may even be unstudied.
  • Overlaps – an overlap occurs when several studies are conducted in the same way, resulting in frequent use of variables that are not very different from each other. The literature review helps us identify these variables.
  • Contradictions – it is possible that when the same study is conducted under different circumstances, the results differ as well. In this case, the researcher must determine whether the research is conclusive or not.

The second area that this chapter brings to light is the methodology of the research. The review should account for the sampling technique and size, research design, variables used, scaling techniques, research instruments, data collection procedures, and the quantitative or qualitative measures used for data analysis.

Another indication of a well-structured literature review is its scope: what period does the review cover, how was the literature assembled and retrieved, and is any significant data missing? This defines the quality of the literature involved.

Objectives and Hypothesis

In this section, the objectives are simply the questions the researcher has raised about the problem they are trying to solve. As these objectives are the heart of the research, they should have the following attributes:

  • The objectives should be expressed clearly and give direction to where the research is going and why it is being conducted in the first place.
  • The objectives should be measurable, even for qualitative data; it should be easy to code and highlight the information so that it is easy to access.
  • The objectives set the boundaries within which the research is conducted. Hence they should be comprehensive enough to cover all aspects of the research, and nothing should fall outside the defined limits.
  • The objectives should be stated judiciously. “Recommending future results,” which is commonly included among objectives, is not very feasible.

The hypothesis is what the entire result turns on, and it can be proved either true or false. Researchers construct their hypothesis based on previous conclusions, studies, existing literature, and so on. To evaluate the hypothesis, it is important to examine the following four factors:

  • Whether the hypothesis was null or directional, and whether, in either case, the researcher has proved their point correctly.
  • Whether the hypothesis can be tested (see the sketch after this list).
  • Whether the hypothesis is stated clearly and implies some relationship between the variables.
  • When there are multiple variables, whether the relationship of the independent variables to the dependent variables is stated.
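
To make the idea of a testable, directional hypothesis concrete, here is a minimal sketch; the scores and the SciPy-based one-sided t-test are illustrative assumptions, not something prescribed by the report being evaluated.

```python
# Hypothetical example: a directional hypothesis such as
# "the treatment group scores higher than the control group"
# can be tested with a one-sided two-sample t-test.
from scipy import stats

control = [62, 70, 65, 68, 71, 66, 69, 64]     # invented scores
treatment = [72, 75, 69, 78, 74, 71, 77, 73]   # invented scores

# alternative="greater" encodes the directional claim;
# a null hypothesis would instead state "no difference between groups".
t_stat, p_value = stats.ttest_ind(treatment, control, alternative="greater")
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
```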

Choice of Research Design

After the objectives and hypothesis are laid down correctly, the next important question is which research method or design to use. There are many research designs that researchers use, depending on the requirements of the study. It is equally important to implement a design that covers all aspects of the research and answers all the questions put forth for the hypothesis.

An efficient way to check the accuracy and suitability of the design is to test it against the objectives. If the hypothesis concerns relationships, a survey method will be a valid way to examine those relationships, whereas if the hypothesis is about the effects of a treatment on a population, an experimental design will be more helpful. Factors such as the treatment, sample size, and nature of the study influence the researcher’s choice of the right research design.


Choice of Variables

Variables are the fundamental units of the research. Everything you do – researching, testing, experimenting – is done on the variables. There are three types of variables: independent, dependent, and intervening. Depending on the nature of the study, they have further sub-types.

The first variable to check is the dependent variable: the one that changes based on how the researcher controls the experiment. The second is the independent variable, the one that influences the dependent variable; it should be chosen carefully and with much consideration. The third is the intervening variable. These are variables that are often missed in the study but intervene in the research externally. While conducting the research, it is important to keep track of any such variables that also affect the results.

It is the researcher’s responsibility to choose measurable variables and, where a variable is not directly measurable, to state how it will be measured.

Research Instrumentation

Now that we have variables, we need research instruments to measure them. Various research instruments are used, such as questionnaires, interviews, and tests. To evaluate the research report, it is important to examine the instrument chosen to measure the variables.

Researchers can select the wrong measurement instrument and end up with incorrect or insufficient data. To avoid this, here are some points to keep in mind while evaluating research instruments:

  • Whether the chosen instrument can actually measure the variables.
  • Whether the chosen instrument is reused from an existing study or was created by the researcher.
  • Whether the chosen instrument is the best fit for the study and is feasible.
  • Instruments can also be assessed based on their language and the method of recording responses.

When we say research, we automatically picture studying a sample – people, natural elements, or any other entity. When we cannot study the entire population, we select a sample group that best represents it. There are two components to evaluate in a study’s sample:

  • Sample size
  • Sampling techniques and types of samples. 

Choosing a sample size that suits your research is important and more difficult than it seems. For example, when the research involves detailed interviews, the sample should be small but effective, whereas for a survey the sample can be large, which gives more accurate results. In either case, if your sample size is too large it wastes research resources, and if it is too small it can compromise the results. Various tools on the market calculate the sample size based on the population size.
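
As an illustration of the kind of calculation such sample-size tools typically perform, here is a minimal sketch using Cochran’s formula with a finite-population correction; the defaults (95% confidence, ±5% margin of error, p = 0.5) are assumptions chosen for the example, not a description of any particular vendor’s tool.

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Cochran's formula with a finite-population correction.

    population      -- size of the population being studied
    margin_of_error -- acceptable error, e.g. 0.05 for +/-5%
    z               -- z-score for the confidence level (1.96 ~ 95%)
    p               -- expected proportion; 0.5 is the most conservative choice
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                   # adjust for a finite population
    return math.ceil(n)

print(sample_size(10_000))  # roughly 370 respondents at 95% confidence, +/-5% error
```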

The second part of this section is the sampling technique. A researcher may use random sampling to make the groups comparable, but if the study prohibits randomization, they may have to assign participants to suitable groups deliberately and then treat them as the study requires. In any case, the researcher needs to state the reasons and considerations behind the technique chosen to create the samples.

Data Collection and Analysis

As we go deeper into the research components, we reach the building block of the entire research: data. No matter what research design you use or how you sample your subjects, it all comes down to the quality of your data, which is generally determined by how your sample responds to your questions and how unique those responses are.

Once the research instrument is sent to respondents, you no longer have control over how they will respond, and responses are often mechanical, meaning the data cannot always be relied upon. In such cases, your research can be diluted with data that should not be there in the first place, so it is important to check the data for credibility.

Data analysis can be qualitative or quantitative. Both are widely used, but the quantitative method is often preferred because it yields statistical results: the sample can be larger, and the questions take less time to understand and answer. In qualitative data analysis, the responses are descriptive and extensive; the researcher has to take into account things like language, the nature of the respondents, and their background, and then draw conclusions from them for use in the research.
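
As a rough, hypothetical illustration of the difference (all responses invented): quantitative answers such as Likert-scale ratings can be summarized statistically, while qualitative answers are first coded into themes and then counted.

```python
from collections import Counter
from statistics import mean, stdev

# Quantitative: Likert-scale ratings (1-5) from a closed survey question
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
print(f"mean = {mean(ratings):.2f}, sd = {stdev(ratings):.2f}")

# Qualitative: open-ended answers the researcher has manually coded into themes
coded_themes = ["price", "support", "price", "usability", "support", "price"]
print(Counter(coded_themes).most_common())  # which themes come up most often
```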

Findings and Implications

After evaluating the previous steps, it is time for the final results. The value of the research lies in this part of the report. The findings are descriptive, and the researcher makes use of tables and graphs wherever necessary; the evaluator should note whether they are used in the appropriate places.

Implications are equally important. Merely stating the results is not enough; what those results say about the study and how they conclude the research is what serves the purpose of the entire research. In this section, the evaluator gets to examine the analytical skill of the researcher and see how the study has proceeded.

Summary and Conclusions

This is a short glimpse of the research from which the evaluator gets an idea of how the research was conducted, what methodologies were used, how the hypothesis and objectives were documented, and what the final findings were.

Referencing requires skill and knowledge, and many students get it wrong. A keen evaluator goes straight to the references. As noted earlier, the sources used in the research play an important role in validating the study itself. The evaluator looks at where the information came from and how legitimate those sources are. The order in which the references are listed also matters a great deal.

The last crucial element is the annexures. These include the materials used for the research, such as the instruments and the sampling frame. Annexures should also be properly ordered and fully documented, as they help the evaluator understand the material used in the research.

General Indicators

These are the things to pay attention to regarding language, typing errors, presentation, and layout. The language should be correct, with proper syntax and no grammatical errors whatsoever. Formatting of the report should be consistent throughout, including the printing, font, margins, line spacing, and the final binding of the pages.


Those are the parts of a properly framed research report. Each section has its own set of guidelines, and the researcher has to adhere to them to present a sound report. Conducting research is a long, slow, detailed process, and summarizing it within these frames is a demanding yet creative task.


Evaluating Research Articles


Imagine for a moment that you are trying to answer a clinical (PICO) question regarding one of your patients/clients. Do you know how to determine if a research study is of high quality? Can you tell if it is applicable to your question? In evidence-based practice, there are many things to look for in an article that will reveal its quality and relevance. This guide is a collection of resources and activities that will help you learn how to evaluate articles efficiently and accurately.

Is health research new to you? Or perhaps you're a little out of practice with reading it? The following questions will help illuminate an article's strengths or shortcomings. Ask them of yourself as you are reading an article:

  • Is the article peer-reviewed?
  • Are there any conflicts of interest based on the author's affiliation or the funding source of the research?
  • Are the research questions or objectives clearly defined?
  • Is the study a systematic review or meta-analysis?
  • Is the study design appropriate for the research question?
  • Is the sample size justified? Do the authors explain how it is representative of the wider population?
  • Do the researchers describe the setting of data collection?
  • Does the paper clearly describe the measurements used?
  • Did the researchers use appropriate statistical measures?
  • Are the research questions or objectives answered?
  • Did the researchers account for confounding factors?
  • Have the researchers only drawn conclusions about the groups represented in the research?
  • Have the authors declared any conflicts of interest?

If the answers to these questions about an article you are reading are mostly YESes, then it's likely that the article is of decent quality. If the answers are mostly NOs, then it may be a good idea to move on to another article. If the YESes and NOs are roughly even, you'll have to decide for yourself whether the article is of good enough quality for you. Some factors, like a poor literature review, are not as important as the researchers neglecting to describe the measurements they used. As you read more research, you'll more easily be able to identify research that is well done versus research that is not.


Determining if a research study has used appropriate statistical measures is one of the most critical and difficult steps in evaluating an article. The following links are great, quick resources for helping to better understand how to use statistics in health research.


  • How to read a paper: Statistics for the non-statistician. II: “Significant” relations and their pitfalls – This article continues the checklist of questions that will help you to appraise the statistical validity of a paper. Greenhalgh T. How to read a paper: Statistics for the non-statistician. II: “Significant” relations and their pitfalls. BMJ 1997;315:422. (On the PMC PDF, you need to scroll past the first article to get to this one.)
  • A consumer's guide to subgroup analysis – The extent to which a clinician should believe and act on the results of subgroup analyses of data from randomized trials or meta-analyses is controversial. Guidelines are provided in this paper for making these decisions.

Statistical Versus Clinical Significance

When appraising studies, it's important to consider both the clinical and statistical significance of the research. This video offers a quick explanation of why.

If you have a little more time, this video explores statistical and clinical significance in more detail, including examples of how to calculate an effect size.
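
For readers who want to see the arithmetic behind one common effect-size measure, here is a minimal, hypothetical sketch of Cohen's d with invented group scores; it is an illustration only, not a summary of the videos above.

```python
import statistics

# Invented outcome scores for two groups
group_a = [78, 82, 75, 80, 79, 81]
group_b = [72, 74, 70, 73, 75, 71]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
sd_a, sd_b = statistics.stdev(group_a), statistics.stdev(group_b)
n_a, n_b = len(group_a), len(group_b)

# Cohen's d: difference in means divided by the pooled standard deviation
pooled_sd = (((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)) ** 0.5
cohens_d = (mean_a - mean_b) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")  # rough guide: 0.2 small, 0.5 medium, 0.8 large
```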

  • Statistical vs. Clinical Significance Transcript Transcript document for the Statistical vs. Clinical Significance video.
  • Effect Size Transcript Transcript document for the Effect Size video.
  • P Values, Statistical Significance & Clinical Significance This handout also explains clinical and statistical significance.
  • Absolute versus relative risk – making sense of media stories Understanding the difference between relative and absolute risk is essential to understanding statistical tests commonly found in research articles.
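
The distinction in the last resource can be illustrated with invented numbers: a striking relative-risk headline may correspond to a modest absolute difference.

```python
# Invented trial: 2 of 100 treated patients vs 4 of 100 control patients
# experience the adverse event.
treated_risk = 2 / 100
control_risk = 4 / 100

relative_risk = treated_risk / control_risk             # 0.5 -> "risk halved"
absolute_risk_reduction = control_risk - treated_risk   # 0.02 -> 2 percentage points
number_needed_to_treat = 1 / absolute_risk_reduction    # 50 patients treated per event avoided

print(relative_risk, absolute_risk_reduction, number_needed_to_treat)
```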

Critical appraisal is the process of systematically evaluating research using established and transparent methods. In critical appraisal, health professionals use validated checklists/worksheets as tools to guide their assessment of the research. It is a more advanced way of evaluating research than the more basic method explained above. To learn more about critical appraisal or to access critical appraisal tools, visit the websites below.


Research Report – Example, Writing Guide and Types


Definition:

Research Report is a written document that presents the results of a research project or study, including the research question, methodology, results, and conclusions, in a clear and objective manner.

The purpose of a research report is to communicate the findings of the research to the intended audience, which could be other researchers, stakeholders, or the general public.

Components of Research Report

Components of Research Report are as follows:

Introduction

The introduction sets the stage for the research report and provides a brief overview of the research question or problem being investigated. It should include a clear statement of the purpose of the study and its significance or relevance to the field of research. It may also provide background information or a literature review to help contextualize the research.

Literature Review

The literature review provides a critical analysis and synthesis of the existing research and scholarship relevant to the research question or problem. It should identify the gaps, inconsistencies, and contradictions in the literature and show how the current study addresses these issues. The literature review also establishes the theoretical framework or conceptual model that guides the research.

Methodology

The methodology section describes the research design, methods, and procedures used to collect and analyze data. It should include information on the sample or participants, data collection instruments, data collection procedures, and data analysis techniques. The methodology should be clear and detailed enough to allow other researchers to replicate the study.

Results

The results section presents the findings of the study in a clear and objective manner. It should provide a detailed description of the data and statistics used to answer the research question or test the hypothesis. Tables, graphs, and figures may be included to help visualize the data and illustrate the key findings.

Discussion

The discussion section interprets the results of the study and explains their significance or relevance to the research question or problem. It should also compare the current findings with those of previous studies and identify the implications for future research or practice. The discussion should be based on the results presented in the previous section and should avoid speculation or unfounded conclusions.

Conclusion

The conclusion summarizes the key findings of the study and restates the main argument or thesis presented in the introduction. It should also provide a brief overview of the contributions of the study to the field of research and the implications for practice or policy.

References

The references section lists all the sources cited in the research report, following a specific citation style, such as APA or MLA.

Appendices

The appendices section includes any additional material, such as data tables, figures, or instruments used in the study, that could not be included in the main text due to space limitations.

Types of Research Report

Types of Research Report are as follows:

Thesis

A thesis is a type of research report. It is a long-form research document that presents the findings and conclusions of an original research study conducted by a student as part of a graduate or postgraduate program. It is typically written by a student pursuing a higher degree, such as a Master’s or Doctoral degree, although it can also be written by researchers or scholars in other fields.

Research Paper

Research paper is a type of research report. A research paper is a document that presents the results of a research study or investigation. Research papers can be written in a variety of fields, including science, social science, humanities, and business. They typically follow a standard format that includes an introduction, literature review, methodology, results, discussion, and conclusion sections.

Technical Report

A technical report is a detailed report that provides information about a specific technical or scientific problem or project. Technical reports are often used in engineering, science, and other technical fields to document research and development work.

Progress Report

A progress report provides an update on the progress of a research project or program over a specific period of time. Progress reports are typically used to communicate the status of a project to stakeholders, funders, or project managers.

Feasibility Report

A feasibility report assesses the feasibility of a proposed project or plan, providing an analysis of the potential risks, benefits, and costs associated with the project. Feasibility reports are often used in business, engineering, and other fields to determine the viability of a project before it is undertaken.

Field Report

A field report documents observations and findings from fieldwork, which is research conducted in the natural environment or setting. Field reports are often used in anthropology, ecology, and other social and natural sciences.

Experimental Report

An experimental report documents the results of a scientific experiment, including the hypothesis, methods, results, and conclusions. Experimental reports are often used in biology, chemistry, and other sciences to communicate the results of laboratory experiments.

Case Study Report

A case study report provides an in-depth analysis of a specific case or situation, often used in psychology, social work, and other fields to document and understand complex cases or phenomena.

Literature Review Report

A literature review report synthesizes and summarizes existing research on a specific topic, providing an overview of the current state of knowledge on the subject. Literature review reports are often used in social sciences, education, and other fields to identify gaps in the literature and guide future research.

Research Report Example

The following is a sample research report for students:

Title: The Impact of Social Media on Academic Performance among High School Students

Abstract:

This study aims to investigate the relationship between social media use and academic performance among high school students. The study utilized a quantitative research design, which involved a survey questionnaire administered to a sample of 200 high school students. The findings indicate that there is a negative correlation between social media use and academic performance, suggesting that excessive social media use can lead to poor academic performance among high school students. The results of this study have important implications for educators, parents, and policymakers, as they highlight the need for strategies that can help students balance their social media use and academic responsibilities.

Introduction:

Social media has become an integral part of the lives of high school students. With the widespread use of social media platforms such as Facebook, Twitter, Instagram, and Snapchat, students can connect with friends, share photos and videos, and engage in discussions on a range of topics. While social media offers many benefits, concerns have been raised about its impact on academic performance. Many studies have found a negative correlation between social media use and academic performance among high school students (Kirschner & Karpinski, 2010; Paul, Baker, & Cochran, 2012).

Given the growing importance of social media in the lives of high school students, it is important to investigate its impact on academic performance. This study aims to address this gap by examining the relationship between social media use and academic performance among high school students.

Methodology:

The study utilized a quantitative research design, which involved a survey questionnaire administered to a sample of 200 high school students. The questionnaire was developed based on previous studies and was designed to measure the frequency and duration of social media use, as well as academic performance.

The participants were selected using a convenience sampling technique, and the survey questionnaire was distributed in the classroom during regular school hours. The data collected were analyzed using descriptive statistics and correlation analysis.
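
For illustration only, here is a minimal sketch of what the descriptive statistics and correlation analysis described above might look like; the data values, column names, and the use of pandas and SciPy are assumptions for the example, not part of the study itself.

```python
import pandas as pd
from scipy.stats import pearsonr

# Invented survey data: daily social media hours and GPA for eight students
df = pd.DataFrame({
    "social_media_hours": [1.0, 2.5, 3.0, 4.5, 5.0, 2.0, 3.5, 6.0],
    "gpa":                [3.8, 3.5, 3.2, 2.9, 2.7, 3.6, 3.1, 2.5],
})

print(df.describe())  # descriptive statistics

r, p = pearsonr(df["social_media_hours"], df["gpa"])  # correlation analysis
print(f"r = {r:.2f}, p = {p:.4f}")  # a negative r means more use, lower GPA
```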

Results:

The findings indicate that the majority of high school students use social media platforms on a daily basis, with Facebook being the most popular platform. The results also show a negative correlation between social media use and academic performance, suggesting that excessive social media use can lead to poor academic performance among high school students.

Discussion:

The results of this study have important implications for educators, parents, and policymakers. The negative correlation between social media use and academic performance suggests that strategies should be put in place to help students balance their social media use and academic responsibilities. For example, educators could incorporate social media into their teaching strategies to engage students and enhance learning. Parents could limit their children’s social media use and encourage them to prioritize their academic responsibilities. Policymakers could develop guidelines and policies to regulate social media use among high school students.

Conclusion:

In conclusion, this study provides evidence of the negative impact of social media on academic performance among high school students. The findings highlight the need for strategies that can help students balance their social media use and academic responsibilities. Further research is needed to explore the specific mechanisms by which social media use affects academic performance and to develop effective strategies for addressing this issue.

Limitations:

One limitation of this study is the use of convenience sampling, which limits the generalizability of the findings to other populations. Future studies should use random sampling techniques to increase the representativeness of the sample. Another limitation is the use of self-reported measures, which may be subject to social desirability bias. Future studies could use objective measures of social media use and academic performance, such as tracking software and school records.

Implications:

The findings of this study have important implications for educators, parents, and policymakers. Educators could incorporate social media into their teaching strategies to engage students and enhance learning. For example, teachers could use social media platforms to share relevant educational resources and facilitate online discussions. Parents could limit their children’s social media use and encourage them to prioritize their academic responsibilities. They could also engage in open communication with their children to understand their social media use and its impact on their academic performance. Policymakers could develop guidelines and policies to regulate social media use among high school students. For example, schools could implement social media policies that restrict access during class time and encourage responsible use.

References:

  • Kirschner, P. A., & Karpinski, A. C. (2010). Facebook® and academic performance. Computers in Human Behavior, 26(6), 1237-1245.
  • Paul, J. A., Baker, H. M., & Cochran, J. D. (2012). Effect of online social networking on student academic performance. Journal of the Research Center for Educational Technology, 8(1), 1-19.
  • Pantic, I. (2014). Online social networking and mental health. Cyberpsychology, Behavior, and Social Networking, 17(10), 652-657.
  • Rosen, L. D., Carrier, L. M., & Cheever, N. A. (2013). Facebook and texting made me do it: Media-induced task-switching while studying. Computers in Human Behavior, 29(3), 948-958.

Note: The example above is only a sample to guide students. Do not copy and paste it directly as your college or university assignment; do your own research and write your own report.

Applications of Research Report

Research reports have many applications, including:

  • Communicating research findings: The primary application of a research report is to communicate the results of a study to other researchers, stakeholders, or the general public. The report serves as a way to share new knowledge, insights, and discoveries with others in the field.
  • Informing policy and practice : Research reports can inform policy and practice by providing evidence-based recommendations for decision-makers. For example, a research report on the effectiveness of a new drug could inform regulatory agencies in their decision-making process.
  • Supporting further research: Research reports can provide a foundation for further research in a particular area. Other researchers may use the findings and methodology of a report to develop new research questions or to build on existing research.
  • Evaluating programs and interventions : Research reports can be used to evaluate the effectiveness of programs and interventions in achieving their intended outcomes. For example, a research report on a new educational program could provide evidence of its impact on student performance.
  • Demonstrating impact : Research reports can be used to demonstrate the impact of research funding or to evaluate the success of research projects. By presenting the findings and outcomes of a study, research reports can show the value of research to funders and stakeholders.
  • Enhancing professional development : Research reports can be used to enhance professional development by providing a source of information and learning for researchers and practitioners in a particular field. For example, a research report on a new teaching methodology could provide insights and ideas for educators to incorporate into their own practice.

How to Write a Research Report

Here are some steps you can follow to write a research report:

  • Identify the research question: The first step in writing a research report is to identify your research question. This will help you focus your research and organize your findings.
  • Conduct research : Once you have identified your research question, you will need to conduct research to gather relevant data and information. This can involve conducting experiments, reviewing literature, or analyzing data.
  • Organize your findings: Once you have gathered all of your data, you will need to organize your findings in a way that is clear and understandable. This can involve creating tables, graphs, or charts to illustrate your results, as in the sketch after this list.
  • Write the report: Once you have organized your findings, you can begin writing the report. Start with an introduction that provides background information and explains the purpose of your research. Next, provide a detailed description of your research methods and findings. Finally, summarize your results and draw conclusions based on your findings.
  • Proofread and edit: After you have written your report, be sure to proofread and edit it carefully. Check for grammar and spelling errors, and make sure that your report is well-organized and easy to read.
  • Include a reference list: Be sure to include a list of references that you used in your research. This will give credit to your sources and allow readers to further explore the topic if they choose.
  • Format your report: Finally, format your report according to the guidelines provided by your instructor or organization. This may include formatting requirements for headings, margins, fonts, and spacing.
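
As a small, hypothetical illustration of the "organize your findings" step above, the sketch below turns invented responses into a summary table and a simple bar chart; pandas and matplotlib are assumed to be available.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented responses: scores collected from two groups
responses = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "score": [72, 68, 81, 79, 85, 70],
})

# Summary table of the findings by group
summary = responses.groupby("group")["score"].agg(["count", "mean", "std"])
print(summary)

# Simple chart to illustrate the results in the report
summary["mean"].plot(kind="bar", title="Mean score by group")
plt.ylabel("Mean score")
plt.tight_layout()
plt.savefig("scores_by_group.png")
```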

Purpose of Research Report

The purpose of a research report is to communicate the results of a research study to a specific audience, such as peers in the same field, stakeholders, or the general public. The report provides a detailed description of the research methods, findings, and conclusions.

Some common purposes of a research report include:

  • Sharing knowledge: A research report allows researchers to share their findings and knowledge with others in their field. This helps to advance the field and improve the understanding of a particular topic.
  • Identifying trends: A research report can identify trends and patterns in data, which can help guide future research and inform decision-making.
  • Addressing problems: A research report can provide insights into problems or issues and suggest solutions or recommendations for addressing them.
  • Evaluating programs or interventions : A research report can evaluate the effectiveness of programs or interventions, which can inform decision-making about whether to continue, modify, or discontinue them.
  • Meeting regulatory requirements: In some fields, research reports are required to meet regulatory requirements, such as in the case of drug trials or environmental impact studies.

When to Write Research Report

A research report should be written after completing the research study. This includes collecting data, analyzing the results, and drawing conclusions based on the findings. Once the research is complete, the report should be written in a timely manner while the information is still fresh in the researcher’s mind.

In academic settings, research reports are often required as part of coursework or as part of a thesis or dissertation. In this case, the report should be written according to the guidelines provided by the instructor or institution.

In other settings, such as in industry or government, research reports may be required to inform decision-making or to comply with regulatory requirements. In these cases, the report should be written as soon as possible after the research is completed in order to inform decision-making in a timely manner.

Overall, the timing of when to write a research report depends on the purpose of the research, the expectations of the audience, and any regulatory requirements that need to be met. However, it is important to complete the report in a timely manner while the information is still fresh in the researcher’s mind.

Characteristics of Research Report

There are several characteristics of a research report that distinguish it from other types of writing. These characteristics include:

  • Objective: A research report should be written in an objective and unbiased manner. It should present the facts and findings of the research study without any personal opinions or biases.
  • Systematic: A research report should be written in a systematic manner. It should follow a clear and logical structure, and the information should be presented in a way that is easy to understand and follow.
  • Detailed: A research report should be detailed and comprehensive. It should provide a thorough description of the research methods, results, and conclusions.
  • Accurate : A research report should be accurate and based on sound research methods. The findings and conclusions should be supported by data and evidence.
  • Organized: A research report should be well-organized. It should include headings and subheadings to help the reader navigate the report and understand the main points.
  • Clear and concise: A research report should be written in clear and concise language. The information should be presented in a way that is easy to understand, and unnecessary jargon should be avoided.
  • Citations and references: A research report should include citations and references to support the findings and conclusions. This helps to give credit to other researchers and to provide readers with the opportunity to further explore the topic.

Advantages of Research Report

Research reports have several advantages, including:

  • Communicating research findings: Research reports allow researchers to communicate their findings to a wider audience, including other researchers, stakeholders, and the general public. This helps to disseminate knowledge and advance the understanding of a particular topic.
  • Providing evidence for decision-making : Research reports can provide evidence to inform decision-making, such as in the case of policy-making, program planning, or product development. The findings and conclusions can help guide decisions and improve outcomes.
  • Supporting further research: Research reports can provide a foundation for further research on a particular topic. Other researchers can build on the findings and conclusions of the report, which can lead to further discoveries and advancements in the field.
  • Demonstrating expertise: Research reports can demonstrate the expertise of the researchers and their ability to conduct rigorous and high-quality research. This can be important for securing funding, promotions, and other professional opportunities.
  • Meeting regulatory requirements: In some fields, research reports are required to meet regulatory requirements, such as in the case of drug trials or environmental impact studies. Producing a high-quality research report can help ensure compliance with these requirements.

Limitations of Research Report

Despite their advantages, research reports also have some limitations, including:

  • Time-consuming: Conducting research and writing a report can be a time-consuming process, particularly for large-scale studies. This can limit the frequency and speed of producing research reports.
  • Expensive: Conducting research and producing a report can be expensive, particularly for studies that require specialized equipment, personnel, or data. This can limit the scope and feasibility of some research studies.
  • Limited generalizability: Research studies often focus on a specific population or context, which can limit the generalizability of the findings to other populations or contexts.
  • Potential bias : Researchers may have biases or conflicts of interest that can influence the findings and conclusions of the research study. Additionally, participants may also have biases or may not be representative of the larger population, which can limit the validity and reliability of the findings.
  • Accessibility: Research reports may be written in technical or academic language, which can limit their accessibility to a wider audience. Additionally, some research may be behind paywalls or require specialized access, which can limit the ability of others to read and use the findings.



Evaluating Research in Academic Journals


Evaluating Research in Academic Journals is a guide for students who are learning how to evaluate reports of empirical research published in academic journals. It breaks down the process of evaluating a journal article into easy-to-understand steps, and emphasizes the practical aspects of evaluating research – not just how to apply a list of technical terms from textbooks.

The book avoids oversimplification in the evaluation process by describing the nuances that may make an article publishable even when it has serious methodological flaws. Students learn when and why certain types of flaws may be tolerated, and why evaluation should not be performed mechanically.

Each chapter is organized around evaluation questions. For each question, there is a concise explanation of how to apply it in the evaluation of research reports. Numerous examples from journals in the social and behavioral sciences illustrate the application of the evaluation questions, and demonstrate actual examples of strong and weak features of published reports. Common-sense models for evaluation combined with a lack of jargon make it possible for students to start evaluating research articles the first week of class.

New to this edition

  • New chapters on:
  • evaluating mixed methods research
  • evaluating systematic reviews and meta-analyses
  • program evaluation research
  • Updated chapters and appendices that provide more comprehensive information and recent examples
  • Full new online resources: test bank questions and PowerPoint slides for instructors, and self-test chapter quizzes, further readings and additional journal examples for students.

TABLE OF CONTENTS

  • Chapter 1: Background for Evaluating Research Reports
  • Chapter 2: Evaluating Titles
  • Chapter 3: Evaluating Abstracts
  • Chapter 4: Evaluating Introductions and Literature Reviews
  • Chapter 5: A Closer Look at Evaluating Literature Reviews
  • Chapter 6: Evaluating Samples When Researchers Generalize
  • Chapter 7: Evaluating Samples When Researchers Do Not Generalize
  • Chapter 8: Evaluating Measures
  • Chapter 9: Evaluating Experimental Procedures
  • Chapter 10: Evaluating Analysis and Results Sections: Quantitative Research
  • Chapter 11: Evaluating Analysis and Results Sections: Qualitative Research
  • Chapter 12: Evaluating Analysis and Results Sections: Mixed Methods Research
  • Chapter 13: Evaluating Discussion Sections
  • Chapter 14: Evaluating Systematic Reviews and Meta-Analyses: Towards Evidence-Based Practice
  • Chapter 15: Putting It All Together


How to Write Evaluation Reports: Purpose, Structure, Content, Challenges, Tips, and Examples


This article explores how to write effective evaluation reports, covering their purpose, structure, content, and common challenges. It provides tips for presenting evaluation findings effectively and using evaluation reports to improve programs and policies. Examples of well-written evaluation reports and templates are also included.


What is an Evaluation Report?


An evaluatio n report is a document that presents the findings, conclusions, and recommendations of an evaluation, which is a systematic and objective assessment of the performance, impact, and effectiveness of a program, project, policy, or intervention. The report typically includes a description of the evaluation’s purpose, scope, methodology, and data sources, as well as an analysis of the evaluation findings and conclusions, and specific recommendations for program or project improvement.

Evaluation reports can help to build capacity for monitoring and evaluation within organizations and communities, by promoting a culture of learning and continuous improvement. By providing a structured approach to evaluation and reporting, evaluation reports can help to ensure that evaluations are conducted consistently and rigorously, and that the results are communicated effectively to stakeholders.

Evaluation reports may be read by a wide variety of audiences, including persons working in government agencies, staff members working for donors and partners, students and community organisations, and development professionals working on projects or programmes that are comparable to the ones evaluated.


What is the Purpose of an Evaluation Report?

The purpose of an evaluation report is to provide stakeholders with a comprehensive and objective assessment of a program or project’s performance, achievements, and challenges. The report serves as a tool for decision-making, as it provides evidence-based information on the program or project’s strengths and weaknesses, and recommendations for improvement.

The main objectives of an evaluation report are:

  • Accountability: To assess whether the program or project has met its objectives and delivered the intended results, and to hold stakeholders accountable for their actions and decisions.
  • Learning : To identify the key lessons learned from the program or project, including best practices, challenges, and opportunities for improvement, and to apply these lessons to future programs or projects.
  • Improvement : To provide recommendations for program or project improvement based on the evaluation findings and conclusions, and to support evidence-based decision-making.
  • Communication : To communicate the evaluation findings and conclusions to stakeholders , including program staff, funders, policymakers, and the general public, and to promote transparency and stakeholder engagement.

An evaluation report should be clear, concise, and well-organized, and should provide stakeholders with a balanced and objective assessment of the program or project’s performance. The report should also be timely, with recommendations that are actionable and relevant to the current context. Overall, the purpose of an evaluation report is to promote accountability, learning, and improvement in program and project design and implementation.

Importance of Evaluation Reports in Program Management

Evaluation reports play a critical role in program management by providing valuable information about program effectiveness and efficiency. They offer insights into the extent to which programs have achieved their objectives, as well as identifying areas for improvement.

Evaluation reports help program managers and stakeholders to make informed decisions about program design, implementation, and funding. They provide evidence-based information that can be used to improve program outcomes and address challenges.

Moreover, evaluation reports are essential in demonstrating program accountability and transparency to funders, policymakers, and other stakeholders. They serve as a record of program activities and outcomes, allowing stakeholders to assess the program’s impact and sustainability.

In short, evaluation reports are a vital tool for program managers and evaluators. They provide a comprehensive picture of program performance, including strengths, weaknesses, and areas for improvement. By utilizing evaluation reports, program managers can make informed decisions to improve program outcomes and ensure that their programs are effective, efficient, and sustainable over time.


The structure of an evaluation report can vary depending on the requirements and preferences of the stakeholders, but typically it includes the following sections:

  • Executive Summary: A brief summary of the evaluation findings, conclusions, and recommendations.
  • Introduction: An overview of the evaluation context, scope, purpose, and methodology.
  • Background: A summary of the programme or initiative that is being assessed, including its goals, activities, and intended audience(s).
  • Evaluation Questions: A list of the evaluation questions that guided the data collection and analysis.
  • Methodology: A description of the data collection methods used in the evaluation, including the sampling strategy, data sources, and data analysis techniques.
  • Findings: A presentation of the evaluation findings, organized according to the evaluation questions.
  • Conclusions: A summary of the main evaluation findings and conclusions, including an assessment of the program or project’s effectiveness, efficiency, and sustainability.
  • Recommendations: A list of specific recommendations for program or project improvements based on the evaluation findings and conclusions.
  • Lessons Learned: A discussion of the key lessons learned from the evaluation that could be applied to similar programs or projects in the future.
  • Limitations: A discussion of the limitations of the evaluation, including any challenges or constraints encountered during the data collection and analysis.
  • References: A list of references cited in the evaluation report.
  • Appendices: Additional information, such as detailed data tables, graphs, or maps, that support the evaluation findings and conclusions.

The structure of the evaluation report should be clear, logical, and easy to follow, with headings and subheadings used to organize the content and facilitate navigation.

In addition, the presentation of data may be made more engaging and understandable by the use of visual aids such as graphs and charts.
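For instance, if your findings are already tabulated, a few lines of scripting can produce a chart for the findings section. The sketch below is a minimal illustration only, assuming Python with pandas and matplotlib installed and a hypothetical findings.csv file with "outcome" and "achieved_pct" columns; it is not tied to any particular report template.

    # Minimal sketch: turn a summary table of findings into a bar chart image
    # that can be inserted into the report (file and column names are hypothetical).
    import pandas as pd
    import matplotlib.pyplot as plt

    findings = pd.read_csv("findings.csv")  # columns: outcome, achieved_pct

    ax = findings.plot.bar(x="outcome", y="achieved_pct", legend=False)
    ax.set_ylabel("Respondents reporting the outcome (%)")
    ax.set_title("Outcomes achieved")
    plt.tight_layout()
    plt.savefig("findings_chart.png", dpi=150)  # insert the image into the report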

Writing an effective evaluation report requires careful planning and attention to detail. Here are some best practices to consider when writing an evaluation report:

Begin by establishing the report’s purpose, objectives, and target audience. A clear understanding of these elements will help guide the report’s structure and content.

Use clear and concise language throughout the report. Avoid jargon and technical terms that may be difficult for readers to understand.

Use evidence-based findings to support your conclusions and recommendations. Ensure that the findings are clearly presented using data tables, graphs, and charts.

Provide context for the evaluation by including a brief summary of the program being evaluated, its objectives, and intended impact. This will help readers understand the report’s purpose and the findings.

Include limitations and caveats in the report to provide a balanced assessment of the program’s effectiveness. Acknowledge any data limitations or other factors that may have influenced the evaluation’s results.

Organize the report in a logical manner, using headings and subheadings to break up the content. This will make the report easier to read and understand.

Ensure that the report is well-structured and easy to navigate. Use a clear and consistent formatting style throughout the report.

Finally, use the report to make actionable recommendations that will help improve program effectiveness and efficiency. Be specific about the steps that should be taken and the resources required to implement the recommendations.

By following these best practices, you can write an evaluation report that is clear, concise, and actionable, helping program managers and stakeholders to make informed decisions that improve program outcomes.


Writing an evaluation report can be a challenging task, even for experienced evaluators. Here are some common challenges that evaluators may encounter when writing an evaluation report:

  • Data limitations: One of the biggest challenges in writing an evaluation report is dealing with data limitations. Evaluators may find that the data they collected is incomplete, inaccurate, or difficult to interpret, making it challenging to draw meaningful conclusions.
  • Stakeholder disagreements: Another common challenge is stakeholder disagreements over the evaluation’s findings and recommendations. Stakeholders may have different opinions about the program’s effectiveness or the best course of action to improve program outcomes.
  • Technical writing skills: Evaluators may struggle with technical writing skills, which are essential for presenting complex evaluation findings in a clear and concise manner. Writing skills are particularly important when presenting statistical data or other technical information.
  • Time constraints: Evaluators may face time constraints when writing evaluation reports, particularly if the report is needed quickly or the evaluation involved a large amount of data collection and analysis.
  • Communication barriers: Evaluators may encounter communication barriers when working with stakeholders who speak different languages or have different cultural backgrounds. Effective communication is essential for ensuring that the evaluation’s findings are understood and acted upon.

By being aware of these common challenges, evaluators can take steps to address them and produce evaluation reports that are clear, accurate, and actionable. This may involve developing data collection and analysis plans that account for potential data limitations, engaging stakeholders early in the evaluation process to build consensus, and investing time in developing technical writing skills.

Presenting evaluation findings effectively is essential for ensuring that program managers and stakeholders understand the evaluation’s purpose, objectives, and conclusions. Here are some tips for presenting evaluation findings effectively:

  • Know your audience: Before presenting evaluation findings, ensure that you have a clear understanding of your audience’s background, interests, and expertise. This will help you tailor your presentation to their needs and interests.
  • Use visuals: Visual aids such as graphs, charts, and tables can help convey evaluation findings more effectively than written reports. Use visuals to highlight key data points and trends.
  • Be concise: Keep your presentation concise and to the point. Focus on the key findings and conclusions, and avoid getting bogged down in technical details.
  • Tell a story: Use the evaluation findings to tell a story about the program’s impact and effectiveness. This can help engage stakeholders and make the findings more memorable.
  • Provide context: Provide context for the evaluation findings by explaining the program’s objectives and intended impact. This will help stakeholders understand the significance of the findings.
  • Use plain language: Use plain language that is easily understandable by your target audience. Avoid jargon and technical terms that may confuse or alienate stakeholders.
  • Engage stakeholders: Engage stakeholders in the presentation by asking for their input and feedback. This can help build consensus and ensure that the evaluation findings are acted upon.

By following these tips, you can present evaluation findings in a way that engages stakeholders, highlights key findings, and ensures that the evaluation’s conclusions are acted upon to improve program outcomes.

Evaluation reports are crucial tools for program managers and policymakers to assess program effectiveness and make informed decisions about program design, implementation, and funding. By analyzing data collected during the evaluation process, evaluation reports provide evidence-based information that can be used to improve program outcomes and impact.

One of the primary ways that evaluation reports can be used to improve programs and policies is by identifying program strengths and weaknesses. By assessing program effectiveness and efficiency, evaluation reports can help identify areas where programs are succeeding and areas where improvements are needed. This information can inform program redesign and improvement efforts, leading to better program outcomes and impact.

Evaluation reports can also be used to make data-driven decisions about program design, implementation, and funding. By providing decision-makers with data-driven information, evaluation reports can help ensure that programs are designed and implemented in a way that maximizes their impact and effectiveness. This information can also be used to allocate resources more effectively, directing funding towards programs that are most effective and efficient.

Another way that evaluation reports can be used to improve programs and policies is by disseminating best practices in program design and implementation. By sharing information about what works and what doesn’t work, evaluation reports can help program managers and policymakers make informed decisions about program design and implementation, leading to better outcomes and impact.

Finally, evaluation reports can inform policy development and improvement efforts by providing evidence about the effectiveness and impact of existing policies. This information can be used to make data-driven decisions about policy development and improvement efforts, ensuring that policies are designed and implemented in a way that maximizes their impact and effectiveness.

In summary, evaluation reports are critical tools for improving programs and policies. By providing evidence-based information about program effectiveness and efficiency, evaluation reports can help program managers and policymakers make informed decisions, allocate resources more effectively, disseminate best practices, and inform policy development and improvement efforts.

There are many different templates available for creating evaluation reports. Here are some examples of template evaluation reports that can be used as a starting point for creating your own report:

  • The National Science Foundation Evaluation Report Template – This template provides a structure for evaluating research projects funded by the National Science Foundation. It includes sections on project background, research questions, evaluation methodology, data analysis, and conclusions and recommendations.
  • The CDC Program Evaluation Template – This template, created by the Centers for Disease Control and Prevention, provides a framework for evaluating public health programs. It includes sections on program description, evaluation questions, data sources, data analysis, and conclusions and recommendations.
  • The World Bank Evaluation Report Template – This template, created by the World Bank, provides a structure for evaluating development projects. It includes sections on project background, evaluation methodology, data analysis, findings and conclusions, and recommendations.
  • The European Commission Evaluation Report Template – This template provides a structure for evaluating European Union projects and programs. It includes sections on project description, evaluation objectives, evaluation methodology, findings, conclusions, and recommendations.
  • The UNICEF Evaluation Report Template – This template provides a framework for evaluating UNICEF programs and projects. It includes sections on program description, evaluation questions, evaluation methodology, findings, conclusions, and recommendations.

These templates provide a structure for creating evaluation reports that are well-organized and easy to read. They can be customized to meet the specific needs of your program or project and help ensure that your evaluation report is comprehensive and includes all of the necessary components.

Other examples and resources include:

  • World Health Organisation reports
  • Checklist for Assessing USAID Evaluation Reports

In conclusion, evaluation reports are essential tools for program managers and policymakers to assess program effectiveness and make informed decisions about program design, implementation, and funding. By analyzing data collected during the evaluation process, evaluation reports provide evidence-based information that can be used to improve program outcomes and impact.

To make evaluation reports work for you, it is important to plan ahead and establish clear objectives and target audiences. This will help guide the report’s structure and content and ensure that the report is tailored to the needs of its intended audience.

When writing an evaluation report, it is important to use clear and concise language, provide evidence-based findings, and offer actionable recommendations that can be used to improve program outcomes. Including context for the evaluation findings and acknowledging limitations and caveats will provide a balanced assessment of the program’s effectiveness and help build trust with stakeholders.

Presenting evaluation findings effectively requires knowing your audience, using visuals, being concise, telling a story, providing context, using plain language, and engaging stakeholders. By following these tips, you can present evaluation findings in a way that engages stakeholders, highlights key findings, and ensures that the evaluation’s conclusions are acted upon to improve program outcomes.

Finally, using evaluation reports to improve programs and policies requires identifying program strengths and weaknesses, making data-driven decisions, disseminating best practices, allocating resources effectively, and informing policy development and improvement efforts. By using evaluation reports in these ways, program managers and policymakers can ensure that their programs are effective, efficient, and sustainable over time.




Evaluating Sources | Methods & Examples

Published on June 2, 2022 by Eoghan Ryan. Revised on May 31, 2023.

The sources you use are an important component of your research. It’s important to evaluate the sources you’re considering using, in order to:

  • Ensure that they’re credible
  • Determine whether they’re relevant to your topic
  • Assess the quality of their arguments

Table of contents

  • Evaluating a source’s credibility
  • Evaluating a source’s relevance
  • Evaluating a source’s arguments
  • Frequently asked questions about evaluating sources

Evaluating the credibility of a source is an important way of sifting out misinformation and determining whether you should use it in your research. Useful approaches include the CRAAP test and lateral reading.

One of the best ways to evaluate source credibility is the CRAAP test. This stands for:

  • Currency: Does the source reflect recent research?
  • Relevance: Is the source related to your research topic?
  • Authority: Is it a respected publication? Is the author an expert in their field?
  • Accuracy: Does the source support its arguments and conclusions with evidence?
  • Purpose: What is the author’s intention?

How you evaluate a source using these criteria will depend on your subject and focus. It’s important to understand the types of sources and how you should use them in your field of research.

Lateral reading

Lateral reading is the act of evaluating the credibility of a source by comparing it to other sources. This allows you to:

  • Verify evidence
  • Contextualize information
  • Find potential weaknesses

If a source is using methods or drawing conclusions that are incompatible with other research in its field, it may not be reliable.

For example, suppose a source makes specific claims about immigration figures. Rather than taking these figures at face value, you could check their accuracy by cross-checking them with official statistics such as census reports and figures compiled by the Department of Homeland Security’s Office of Immigration Statistics.


How you evaluate the relevance of a source will depend on your topic, and on where you are in the research process . Preliminary evaluation helps you to pick out relevant sources in your search, while in-depth evaluation allows you to understand how they’re related.

Preliminary evaluation

As you cannot possibly read every source related to your topic, you can use preliminary evaluation to determine which sources might be relevant. This is especially important when you’re surveying a large number of sources (e.g., in a literature review or systematic review ).

One way to do this is to look at paratextual material, or the parts of a work other than the text itself.

  • Look at the table of contents to determine the scope of the work.
  • Consult the index for key terms or the names of important scholars.

You can also read abstracts, prefaces, introductions, and conclusions. These will give you a clear idea of the author’s intentions, the parameters of the research, and even the conclusions they draw.

Preliminary evaluation is useful as it allows you to:

  • Determine whether a source is worth examining in more depth
  • Quickly move on to more relevant sources
  • Increase the quality of the information you consume

While this preliminary evaluation is an important step in the research process, you should engage with sources more deeply in order to adequately understand them.

In-depth evaluation

Begin your in-depth evaluation with any landmark studies in your field of research, or with sources that you’re sure are related to your research topic.

As you read, try to understand the connections between the sources. Look for:

  • Key debates: What topics or questions are currently influencing research? How does the source respond to these key debates?
  • Major publications or critics: Are there any specific texts or scholars that have greatly influenced the field? How does the source engage with them?
  • Trends: Is the field currently dominated by particular theories or research methods ? How does the source respond to these?
  • Gaps: Are there any oversights or weaknesses in the research?

Even sources whose conclusions you disagree with can be relevant, as they can strengthen your argument by offering alternative perspectives.

Every source should contribute to the debate about its topic by taking a clear position. This position and the conclusions the author comes to should be supported by evidence from direct observation or from other sources.

Most sources will use a mix of primary and secondary sources to form an argument . It is important to consider how the author uses these sources. A good argument should be based on analysis and critique, and there should be a logical relationship between evidence and conclusions.

To assess an argument’s strengths and weaknesses, ask:

  • Does the evidence support the claim?
  • How does the author use evidence? What theories, methods, or models do they use?
  • Could the evidence be used to draw other conclusions? Can it be interpreted differently?
  • How does the author situate their argument in the field? Do they agree or disagree with other scholars? Do they confirm or challenge established knowledge?

Situating a source in relation to other sources ( lateral reading ) can help you determine whether the author’s arguments and conclusions are reliable and how you will respond to them in your own writing.


As you cannot possibly read every source related to your topic, it’s important to evaluate sources to assess their relevance. Use preliminary evaluation to determine whether a source is worth examining in more depth.

This involves:

  • Reading abstracts, prefaces, introductions, and conclusions
  • Looking at the table of contents to determine the scope of the work
  • Consulting the index for key terms or the names of important scholars

Lateral reading is the act of evaluating the credibility of a source by comparing it with other sources. This allows you to:

  • Verify evidence
  • Contextualize information
  • Find potential weaknesses

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

The CRAAP test is an acronym to help you evaluate the credibility of a source you are considering using. It is an important component of information literacy .

The CRAAP test has five main components:

  • Currency: Is the source up to date?
  • Relevance: Is the source relevant to your research?
  • Authority: Where is the source published? Who is the author? Are they considered reputable and trustworthy in their field?
  • Accuracy: Is the source supported by evidence? Are the claims cited correctly?
  • Purpose: What was the motive behind publishing this source?

Scholarly sources are written by experts in their field and are typically subjected to peer review . They are intended for a scholarly audience, include a full bibliography, and use scholarly or technical language. For these reasons, they are typically considered credible sources .

Popular sources like magazines and news articles are typically written by journalists. These types of sources usually don’t include a bibliography and are written for a popular, rather than academic, audience. They are not always reliable and may be written from a biased or uninformed perspective, but they can still be cited in some contexts.


Ryan, E. (2023, May 31). Evaluating Sources | Methods & Examples. Scribbr. Retrieved July 1, 2024, from https://www.scribbr.com/working-with-sources/evaluating-sources/


Writing a Research Paper: Evaluate Sources
How Will This Help Me?

Evaluating your sources will help you:

  • Determine the credibility of information
  • Rule out questionable information
  • Check for bias in your sources

In general, websites are hosted in domains that tell you what type of site it is.

  • .com = commercial
  • .net = network provider
  • .org = organization
  • .edu = education
  • .mil = military
  • .gov = U.S. government

Commercial sites want to persuade you to buy something, and organizations may want to persuade you to see an issue from a particular viewpoint. 

Useful information can be found on all kinds of sites, but you must consider carefully whether the source is useful for your purpose and for your audience.
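If you are screening a long list of web sources, the suffix can also be read programmatically. The snippet below is a rough sketch in Python (standard library only; the URL shown is hypothetical) that maps a URL to the domain types listed above; the suffix is only a first signal, not a verdict on quality.

    # Sketch: read the top-level domain of a URL as a first, rough signal
    # of what kind of site it is (the example URL is hypothetical).
    from urllib.parse import urlparse

    SUFFIX_TYPES = {
        "com": "commercial",
        "net": "network provider",
        "org": "organization",
        "edu": "education",
        "mil": "military",
        "gov": "U.S. government",
    }

    def domain_type(url: str) -> str:
        host = urlparse(url).hostname or ""
        suffix = host.rsplit(".", 1)[-1].lower()
        return SUFFIX_TYPES.get(suffix, "unknown")

    print(domain_type("https://www.example.edu/library/guide"))  # prints: education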

Content Farms

Content farms are websites that exist to host ads. They post about popular web searches to try to drive traffic to their sites. They are rarely good sources for research.

  • Web’s “Content Farms” Grow Audiences For Ads – this article by Zoe Chace at National Public Radio describes how how-to sites post about popular web searches to drive more traffic to their pages so that visitors see the ads they host.

Fact Checking

Fact checking can help you verify the reliability of a source. The following sites may not have all the answers, but they can help you look into the sources for statements made in U.S. politics.

  • FactCheck.org This site monitors the accuracy of statements made in speeches, debates, interviews, and more and links to sources so readers can see the information for themselves. The site is a project of the Annenberg Public Policy Center of the University of Pennsylvania.
  • PolitiFact This resource evaluates the accuracy of statements made by elected officials, lobbyists, and special interest groups and provides sources for their evaluations. PolitiFact is currently run by the nonprofit Poynter Institute for Media Studies.

Evaluate Sources With the Big 5 Criteria

The Big 5 Criteria can help you evaluate your sources for credibility:

  • Currency: Check the publication date and determine whether it is sufficiently current for your topic.
  • Coverage (relevance): Consider whether the source is relevant to your research and whether it covers the topic adequately for your needs.
  • Authority: Discover the credentials of the authors of the source and determine their level of expertise and knowledge about the subject.
  • Accuracy: Consider whether the source presents accurate information and whether you can verify that information. 
  • Objectivity (purpose): Think about the author's purpose in creating the source and consider how that affects its usefulness to your research. 

Evaluate Sources With the CRAAP Test

Another way to evaluate your sources is the CRAAP Test, which means evaluating the following qualities of your sources: Currency, Relevance, Authority, Accuracy, and Purpose.


Evaluate Websites

Evaluating websites follows the same process as for other sources, but finding the information you need to make an assessment can be more challenging with websites. The following guidelines can help you decide if a website is a good choice for a source for your paper. 

  • Currency. A useful site is updated regularly and lets visitors know when content was published on the site. Can you tell when the site was last updated? Can you see when the content you need was added? Does the site show signs of not being maintained (broken links, out-of-date information, etc.)?
  • Relevance. Think about the target audience for the site. Is it appropriate for you or your paper's audience?
  • Authority. Look for an About Us link or something similar to learn about the site's creator. The more you know about the credentials and mission of a site's creators, as well as their sources of information, the better idea you will have about the site's quality.
  • Accuracy. Does the site present references or links to the sources of information it presents? Can you locate these sources so that you can read and interpret the information yourself?
  • Purpose. Consider the reason why the site was created. Can you detect any bias? Does the site use emotional language? Is the site trying to persuade you about something? 

Identify Political Perspective

News outlets, think tanks, organizations, and individual authors can present information from a particular political perspective. Consider this fact to help determine whether sources are useful for your paper. 


Check a news outlet's website, usually under About Us or Contact Us, for information about their reporters and authors. For example, USA Today has the USA Today Reporter Index, and the LA Times has an Editorial & Newsroom Contacts page. Reading a profile or bio for a reporter or looking at other articles by the author may tell you whether that person favors a particular viewpoint.

If a particular organization is mentioned in an article, learn more about the organization to identify potential biases. Think tanks and other associations usually exist for a reason. Searching news articles about the organization can help you determine their political leaning. 

Bias is not always bad, but you must be aware of it. Knowing the perspective of a source helps contextualize the information presented. 


Research Evaluation


This chapter is about research evaluation. Evaluation is quintessential to research. It is traditionally performed through qualitative expert judgement. The chapter presents the main evaluation activities in which researchers can be engaged. It also introduces the current efforts towards devising quantitative research evaluation based on bibliometric indicators and critically discusses their limitations, along with their possible (limited and careful) use.




Ghezzi, C. (2020). Research Evaluation. In: Being a Researcher. Springer, Cham. https://doi.org/10.1007/978-3-030-45157-8_5



Succeeding in postgraduate study


1 Important points to consider when critically evaluating published research papers

Simple review articles (also referred to as ‘narrative’ or ‘selective’ reviews), systematic reviews and meta-analyses provide rapid overviews and ‘snapshots’ of progress made within a field, summarising a given topic or research area. They can serve as useful guides, or as current and comprehensive ‘sources’ of information, and can act as a point of reference to relevant primary research studies within a given scientific area. Narrative or systematic reviews are often used as a first step towards a more detailed investigation of a topic or a specific enquiry (a hypothesis or research question), or to establish critical awareness of a rapidly-moving field (you will be required to demonstrate this as part of an assignment, an essay or a dissertation at postgraduate level).

The majority of primary ‘empirical’ research papers essentially follow the same structure (abbreviated here as IMRAD). There is a section on Introduction, followed by the Methods, then the Results, which includes figures and tables showing data described in the paper, and a Discussion. The paper typically ends with a Conclusion, and References and Acknowledgements sections.

The Title of the paper provides a concise first impression. The Abstract follows the basic structure of the extended article. It provides an ‘accessible’ and concise summary of the aims, methods, results and conclusions. The Introduction provides useful background information and context, and typically outlines the aims and objectives of the study. The Abstract can serve as a useful summary of the paper, presenting the purpose, scope and major findings. However, simply reading the abstract alone is not a substitute for critically reading the whole article. To really get a good understanding and to be able to critically evaluate a research study, it is necessary to read on.

While most research papers follow the above format, variations do exist. For example, the results and discussion sections may be combined. In some journals the materials and methods may follow the discussion, and in two of the most widely read journals, Science and Nature, the format does vary from the above due to restrictions on the length of articles. In addition, there may be supporting documents that accompany a paper, including supplementary materials such as supporting data, tables, figures, videos and so on. There may also be commentaries or editorials associated with a topical research paper, which provide an overview or critique of the study being presented.

Box 1 Key questions to ask when appraising a research paper

  • Is the study’s research question relevant?
  • Does the study add anything new to current knowledge and understanding?
  • Does the study test a stated hypothesis?
  • Is the design of the study appropriate to the research question?
  • Do the study methods address key potential sources of bias?
  • Were suitable ‘controls’ included in the study?
  • Were the statistical analyses appropriate and applied correctly?
  • Is there a clear statement of findings?
  • Does the data support the authors’ conclusions?
  • Are there any conflicts of interest or ethical concerns?

There are various strategies used in reading a scientific research paper, and one of these is to start with the title and the abstract, then look at the figures and tables, and move on to the introduction, before turning to the results and discussion, and finally, interrogating the methods.

Another strategy (outlined below) is to begin with the abstract and then the discussion, take a look at the methods, and then the results section (including any relevant tables and figures), before moving on to look more closely at the discussion and, finally, the conclusion. You should choose a strategy that works best for you. However, asking the ‘right’ questions is a central feature of critical appraisal, as with any enquiry, so where should you begin? Here are some critical questions to consider when evaluating a research paper.

Look at the Abstract and then the Discussion : Are these accessible and of general relevance or are they detailed, with far-reaching conclusions? Is it clear why the study was undertaken? Why are the conclusions important? Does the study add anything new to current knowledge and understanding? The reasons why a particular study design or statistical method were chosen should also be clear from reading a research paper. What is the research question being asked? Does the study test a stated hypothesis? Is the design of the study appropriate to the research question? Have the authors considered the limitations of their study and have they discussed these in context?

Take a look at the Methods : Were there any practical difficulties that could have compromised the study or its implementation? Were these considered in the protocol? Were there any missing values and, if so, was the number of missing values too large to permit meaningful analysis? Was the number of samples (cases or participants) too small to establish meaningful significance? Do the study methods address key potential sources of bias? Were suitable ‘controls’ included in the study? If controls are missing or not appropriate to the study design, we cannot be confident that the results really show what is happening in an experiment. Were the statistical analyses appropriate and applied correctly? Do the authors point out the limitations of methods or tests used? Were the methods referenced and described in sufficient detail for others to repeat or extend the study?

Take a look at the Results section and relevant tables and figures : Is there a clear statement of findings? Were the results expected? Do they make sense? What data supports them? Do the tables and figures clearly describe the data (highlighting trends etc.)? Try to distinguish between what the data show and what the authors say they show (i.e. their interpretation).

Moving on to look in greater depth at the Discussion and Conclusion : Are the results discussed in relation to similar (previous) studies? Do the authors indulge in excessive speculation? Are limitations of the study adequately addressed? Were the objectives of the study met and the hypothesis supported or refuted (and is a clear explanation provided)? Does the data support the authors’ conclusions? Maybe there is only one experiment to support a point. More often, several different experiments or approaches combine to support a particular conclusion. A rule of thumb here is that if multiple approaches and multiple lines of evidence from different directions are presented, and all point to the same conclusion, then the conclusions are more credible. But do question all assumptions. Identify any implicit or hidden assumptions that the authors may have used when interpreting their data. Be wary of data that is mixed up with interpretation and speculation! Remember, just because it is published, does not mean that it is right.

Other points you should consider when evaluating a research paper: Are there any financial, ethical or other conflicts of interest associated with the study, its authors and sponsors? Are there ethical concerns with the study itself? Looking at the references, consider whether the authors have preferentially cited their own previous publications (i.e. needlessly), and whether the references are recent (ensuring that the analysis is up-to-date). Finally, from a practical perspective, you should move beyond the text of a research paper: talk to your peers about it, and consult available commentaries, online links to references and other external sources to help clarify any aspects you don’t understand.

The above can be taken as a general guide to help you begin to critically evaluate a scientific research paper, but only in the broadest sense. Do bear in mind that the way research evidence is critiqued will also differ slightly according to the type of study being appraised, whether observational or experimental, and each study will have additional aspects that need to be evaluated separately. For criteria recommended for the evaluation of qualitative research papers, see the article by Mildred Blaxter (1996), which is available online.

Activity 1 Critical appraisal of a scientific research paper

A critical appraisal checklist, which you can download via the link below, can act as a useful tool to help you to interrogate research papers. The checklist is divided into four sections, broadly covering:

  • some general aspects
  • research design and methodology
  • the results
  • discussion, conclusion and references.

Science perspective – critical appraisal checklist

  • Identify and obtain a research article based on a topic of your own choosing, using a search engine such as Google Scholar or PubMed (for example).
  • The selection criteria for your target paper are as follows: the article must be an open access primary research paper (not a review) containing empirical data, published in the last 2–3 years, and preferably no more than 5–6 pages in length.
  • Critically evaluate the research paper using the checklist provided, making notes on the key points and your overall impression.

Critical appraisal checklists are useful tools to help assess the quality of a study. Assessment of various factors, including the importance of the research question, the design and methodology of a study, the validity of the results and their usefulness (application or relevance), the legitimacy of the conclusions, and any potential conflicts of interest, are an important part of the critical appraisal process. Limitations and further improvements can then be considered.


How to Evaluate a Study

Not all studies should be treated equally. Below are a few key factors to consider when evaluating a study’s conclusions.

  • Has the study been reviewed by other experts? Peer review, the process by which a study is sent to other researchers in a particular field for their notes and thoughts, is essential in evaluating a study’s findings. Since most consumers and members of the media are not trained to evaluate a study’s design and the researchers’ findings, studies that pass muster with other researchers and are accepted for publication in prestigious journals are generally more trustworthy.
  • Do other experts agree? Have other experts spoken out against the study’s findings? Who are these other experts and are their criticisms valid?
  • Are there reasons to doubt the findings? One of the most important items to keep in mind when reviewing studies is that correlation does not prove causation. For instance, just because there is an association between eating blueberries and weighing less does not mean that eating blueberries will make you lose weight. Researchers should look for other explanations for their findings, known as “confounding variables.” In this instance, they should consider that people who tend to eat blueberries also tend to exercise more and consume fewer calories overall.
  • How do the conclusions fit with other studies? It’s rare that a single study is enough to overturn the preponderance of research offering a different conclusion. Though studies that buck the established notion are not necessarily wrong, they should be scrutinized closely to ensure that their findings are accurate.
  • How big was the study? Sample size matters. The more patients or subjects involved in a study, the more likely it is that the study’s conclusions aren’t merely due to random chance and are, in fact, statistically significant (the short sketch after this list illustrates why).
  • Are there any major flaws in the study’s design? This is one of the most difficult steps if you aren’t an expert in a particular field, but there are ways to look for bias. For example, was the study a “double-blind” experiment or were the researchers aware of which subjects were the control set?
  • Have the researchers identified any flaws or limitations with their research? Often buried in the conclusion, researchers acknowledge limitations or possible other theories for their results. Because the universities, government agencies, or other organizations who’ve funded and promoted the study often want to highlight the boldest conclusion possible, these caveats can be overlooked. However, they’re important when weighing how strong the study’s conclusions really are.
  • Have the findings been replicated? With growing headlines of academic fraud and leading journals forced to retract articles based on artificial results, replication of results is increasingly important to judge the merit of a study’s findings. If other researchers can replicate an experiment and come to a similar conclusion, it’s much easier to trust those results than those that have only been peer reviewed.
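To see in concrete terms why sample size matters, the short sketch below (plain Python with made-up numbers, not taken from any particular study) computes an approximate 95% margin of error for an observed proportion; the interval shrinks as the sample grows.

    # Sketch: approximate 95% margin of error for an observed proportion,
    # at several sample sizes (all figures are made up for illustration).
    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        return z * math.sqrt(p * (1 - p) / n)

    observed = 0.60  # e.g. 60% of subjects improved
    for n in (25, 250, 2500):
        moe = margin_of_error(observed, n)
        print(f"n={n:5d}: 60% +/- {moe * 100:.1f} percentage points")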



Writing an evaluation report

Use this page to learn about the process of writing an evaluation report.

Writing an evaluation report helps you share key findings and recommendations with those in your organisation and the people and communities you work with. This is the next step in the evaluation cycle after our guidance on analysing and reporting on your evaluation .

A report can be used to:

  • suggest changes to how you work
  • communicate your value to funders
  • share good practice with other organisations
  • share learning with the people and communities you work with.

Once you’ve completed these parts of your project, you’ll be able to write your evaluation report:

  • You have data that you've collected and analysed.
  • You’ve got the software to help you design your report.
  • You have an understanding of the people who'll be reading your report.
  • There are helpful colleagues available to read your drafts.

Choose the right software for your report

You have several options for software. Here are some suggestions below to get you started:

The Microsoft suite

  • Word has a range of icons, images and SmartArt you can use - it is probably the most popular choice.
  • Slide documents (using PowerPoint) can be helpful for writing briefer reports. You can also create data visualisation within PowerPoint and import it into Microsoft Word if preferred.
  • You can create dashboards in Excel and/or import data visualisation graphs into other Microsoft applications (see the sketch after this list).
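If you prepare your figures in an analysis tool first, you can also write a summary table straight to Excel and build charts or a dashboard on top of it. A minimal sketch, assuming Python with pandas and openpyxl installed; the outcome names and figures are invented for illustration:

    # Sketch: write a small findings table to an Excel workbook so that charts
    # or a dashboard can be built on top of it (all figures are invented).
    import pandas as pd

    summary = pd.DataFrame({
        "Outcome": ["Improved confidence", "Found employment", "Sustained engagement"],
        "Respondents": [120, 120, 118],
        "Achieved (%)": [72.5, 41.7, 88.1],
    })

    summary.to_excel("evaluation_summary.xlsx", sheet_name="Summary", index=False)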

Other applications

  • SurveyMonkey has a dashboard function which can be used for reporting.
  • Piktochart, Tableau and Canva are design and data visualisation tools. They have evaluation and impact report templates available.
  • If you're producing content for webpages, Google Charts and Datawrapper may prove helpful.

Consider your audience

Think about the people you're reporting to so you can tell them what they need to know. You should consider these points:

  • What kind of information they need. For example, whether they need to know more about the difference you’ve made or the way in which you’ve delivered your work.
  • How they'd like the information presented. For example, as a traditional evaluation report and/or data visualisation, webpages, or PowerPoint and when.
  • Why they need the information and what you want them to do as a result.
  • Whether there are any accessibility needs that you need to consider. For example, does the report need to work on a screen reader?

Plan your report

Having a clear structure makes your report easier to read. Before you write, plan your headings and subheadings. Most evaluation reports will include the following sections.

  • Executive summary – a summary of your key findings and recommendations.
  • Introduction – a brief description of what you're evaluating, the purpose of your evaluation and the methods you've used (for example, surveys and interviews).
  • Findings and discussion – information on what you delivered, how you delivered it and what outcomes came out of it.
  • Recommendations – actions that need to be taken to respond to the evaluation findings.

What to include in your report

Reports will vary depending on the nature of your work, but you'll probably need to include findings on the following:

  • Outcomes – What outcomes have been achieved, for whom and under what circumstances. You should also report on unintended outcomes.
  • Activities and outputs – What has been delivered, when and to whom. You should also report on how satisfied the people and communities you work with were.
  • Processes – Information about how you delivered your outputs. You may need this information to explain why something worked particularly well, or why it didn’t work.

Describe and interpret your data

In your report, you should describe your data and interpret it – analysing your data before you start writing will help with this.

Describing means presenting what the data tells you. You might describe, for example, what outcomes were achieved, by whom and in what circumstances.

Interpretation moves beyond description to say what the data means – make sure you word your report clearly so the reader can tell when you're describing data and when you're interpreting it.

To help you interpret data, you could do the following.

  • Make connections by looking for trends, patterns and links . For example, if two groups had very different outcomes, what factors might have led to this?
  • Put data in a meaningful context . Numbers don’t speak for themselves. Is 70% good or bad? How do you know?

When you interpret your data, you could discuss the following.

  • Why outcomes were achieved, or not achieved. Understanding this may help you make decisions about future service planning. Many funders will also want to know about this.
  • What worked and what didn’t. Knowing about this will put you in a good position to improve your work. It may also be useful to share with partners or funders to improve practice in the sector.
  • Answers to your evaluation questions. When you planned your evaluation, you may have had two or three key questions you wanted it to answer. For example, you may have wanted to know whether your service works equally well for all groups.

Choose how to present your data

A common mistake is to try to present all your data, rather than focusing on what’s most important. It helps to narrow down to what people reading your report need to know.

It’s also important to think about how you'll present your information. You could consider the following points.

Which key numbers do your audience need to know?

  • Decide whether to report using percentages, averages or other statistics.
  • Think about whether you need to compare numerical data for different groups. You may want to look at whether men were more likely to experience outcomes than women, for instance (a short sketch after this list shows one way to do this).
  • Read our guide on analysing quantitative data.
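As a rough sketch of that kind of comparison, the snippet below uses Python with pandas and a hypothetical responses.csv file with "gender" and "outcome_achieved" (coded 0/1) columns to report the percentage achieving an outcome in each group alongside the group size:

    # Sketch: compare an outcome between groups as counts and percentages
    # (file and column names are hypothetical).
    import pandas as pd

    responses = pd.read_csv("responses.csv")  # columns: gender, outcome_achieved (0/1)

    by_group = responses.groupby("gender")["outcome_achieved"].agg(["count", "mean"])
    by_group["percent_achieved"] = (by_group["mean"] * 100).round(1)

    print(by_group[["count", "percent_achieved"]])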

Which quotations will help you illustrate your themes?

  • Choose quotations that bring your outcomes to life. Don’t choose too many or they'll distract the reader from the point you want to make.
  • Have a mixture of typical responses and those that don’t fit easily into your categories.
  • Read our guide on analysing qualitative data .

What visual aids will you use?

  • Diagrams, graphs or charts should be used to highlight the most important information, rather than information which is less relevant.
  • It’s very easy for diagrams to mislead your audience. Here are some examples of misleading charts . If you think a diagram might be misleading, it’s better to leave it out.

As far as possible, present data that has been analysed or summarised rather than raw data, to make it as easy as possible for the reader to follow.

Check anonymity and consent

When you collected your data, respondents will have said whether they wanted to remain anonymous (most do) and whether you should check with them before using a quote or case study in your report. Make sure you do any checking with plenty of time before you need to complete the report.

Depending on the size of your sample and how easy it is to identify individuals, you may have to do more than just change the name to make someone anonymous.

You might have to change their age or other identifying details, or remove references to anything that would allow people to identify them as an individual.
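
As a rough sketch of what this can involve in practice, the example below replaces a name with a pseudonym and generalises the exact age and location. The record and field names are hypothetical, and real data may need further changes, such as editing quotes that mention identifying details.

    # Rough sketch: pseudonymising a case-study record before it goes in a report.
    # The record and field names are hypothetical.
    records = [
        {"name": "Asha", "age": 34, "town": "Leeds",
         "quote": "The weekly sessions helped me get back into work."},
    ]

    def anonymise(record, pseudonym):
        return {
            "name": pseudonym,                             # replace the real name
            "age_band": f"{record['age'] // 10 * 10}s",    # e.g. 34 becomes "30s"
            "location": "a city in the north of England",  # generalise the location
            "quote": record["quote"],
        }

    anonymised = [anonymise(r, f"Participant {i + 1}") for i, r in enumerate(records)]
    print(anonymised)

Even after changes like these, check whether the remaining details could still identify someone when read together.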

Write accurately and clearly

It’s important to write accurately and clearly so that your report can be easily understood and is not misleading.

Be transparent

Being transparent means being open about what you can and can’t say, and clear about how you reached your conclusions and about the limitations of your data. 

Just as it's important to minimise bias when collecting or analysing data, it's equally important to minimise bias when reporting.

  • Avoid overclaiming your role in making a difference. Your work may not be solely responsible for the outcomes that have occurred for individuals or organisations you've worked with. Remember to report on evidence of any other contributing factors, such as support received from other organisations or other sources.
  • Choose case studies carefully . Evaluation case studies are not the same as marketing case studies. They should illustrate your learning points, not just the very best of what you do. You won't have a representative group of case studies, but as far as possible, choose case studies – and quotations – that reflect the full range of responses you had.
  • Explore alternative interpretations or causal links . Sometimes, data is ambiguous and there could be more than one interpretation. All of us are prone to 'confirmation bias' – paying more attention to data that fits our existing beliefs. It's important to look for and talk about reasonable alternative interpretations or explanations of your data.
  • Be clear about the limitations of your data . If there was a group you weren't able to hear from, or your sample over- or under-represents a particular group, say so.
  • Be open about your sample size. In general, the smaller your sample, the less able you are to make generalisations about everyone in your target group.
  • Report negative findings . If the data shows something isn't working or an outcome hasn't been achieved, don’t ignore it. Reporting negative findings will help your audience to use the evaluation to learn and improve.

Use precise language

Evaluation reports need to be as clear and precise as possible in their wording. Be especially careful about using the word 'proof' or 'prove'.

To prove something requires 100% certainty, which you are very unlikely to have. 'Indicates', 'demonstrates', 'shows', 'suggests' or 'is evidence for' are useful alternative phrases.

Make your report easy to read

Subheadings will make your report clear for your readers. Looking back at your evaluation framework or theory of change can help you think of ideas for subheadings.

It often makes sense to have a subheading for each intended outcome.

Sometimes you'll have collected data about the same outcome from a range of different sources such as questionnaires, interviews, observation or secondary data.

When you analysed your data, you probably looked at each source separately.

In your report, it usually makes sense to write about all the data relating to each outcome together (rather than having separate sections on data from different sources).

Keep your language simple and straightforward. Remember to explain any terminology that might be unfamiliar to your audience.

Develop your recommendations

Your recommendations are likely to be one of the most important parts of your report. Good recommendations will make your evaluation findings more likely to be used.

Recommendations are more likely to be put in place if the following factors are considered.

  • Supported by evidence – Be clear about how the recommendations build on the key findings. It can help to structure the recommendations in the same order as the main findings to help readers understand the evidence base for each.
  • Specific – Say exactly what action needs to be taken and when, keeping it within the control of those the recommendation is aimed at.
  • Targeted at the right users – Make sure the individuals or groups you address have the authority and capability to take forward what you’re suggesting.
  • Realistic and achievable – Recommendations should be feasible. You can categorise them by which ones are easy to implement and which are less so. More ‘difficult’ recommendations might need budget or staff changes; these should still be stated, along with their likely impact.
  • Prioritised  – It’s helpful to show some priorities for action. You could, for example, split your recommendations into ‘essential’ versus ‘optional’ or ‘for consideration’ versus ‘for action’. Make sure the number of recommendations you include is achievable.

Involve people in the reporting process

You can involve other internal staff and the people and communities you work with at several points. For example, you could share your report drafts and ask them to help you refine the conclusions.

This 'co-production' of findings can be valuable and provide interpretations you may not have thought about.

You can also co-produce recommendations by sharing the findings with those you work with and asking them to suggest and prioritise recommendations.

If you do this, take care to guide people to base their recommendations on the evidence, and not their own interests or preoccupations.

Finishing the report

Allow time for a couple of report drafts and make sure there are people available to review the report for you. It's good to have someone look at it with ‘fresh eyes’.

If the report is being widely shared, you could have someone from outside your sector review the draft to make sure it's clear for external audiences.

To complete the report, leave time for proofreading and editing, checking references, and design and print if needed.

You might include your data collection tools in appendices – this could help other organisations working in your field to improve their evaluation.

Once you’ve completed your report, read our guidance on using your findings to improve your work .



ODEP and our grantees, contractors and other partners undertake a variety of research, analysis and evaluation efforts to help us achieve our mission. This page contains information and links to work currently underway as well as work published in the past year. For earlier work, please visit our Past Work page . For tools and other resources related to research highlighted in this section, please visit project-specific pages.

2023 — 2024

Disability Data Series

The Disability Data Series features a collection of papers employing qualitative and quantitative approaches to explore crucial aspects of disability data. The goal of the series is to spotlight disability data collection efforts and their utilization, and to shed light on key disability employment topics to inform policy decisions.

Women with Disabilities and the Labor Market

(August 2023) This brief examines the state of women with disabilities in the current labor market and related topics. The report uses data from the Current Population Survey, the American Community Survey and the National Survey on Drug Use and Health. It examines a range of topics, including:

  • Disability prevalence
  • Labor market trends
  • Representation in skilled trade occupations
  • Educational attainment
  • Telework and the recovery from the COVID-19 pandemic
  • Poverty and earnings
  • Barriers to employment
  • Mental health

Download the Brief

Research Support Services for Employment of Young Adults on the Autism Spectrum (REYAAS) Project

(November 2022) REYAAS is seeking to identify promising practices and policies to support employment of young adults (ages 16 through 28) on the autism spectrum. Recent estimates suggest that annually about 100,000 youth on the autism spectrum turn 18 years old in the United States. A majority of young adults on the autism spectrum have one or more co-occurring health or mental health conditions, and young adults on the autism spectrum are more likely than young adults without disabilities to be living in households with income below the federal poverty level. Young adults on the autism spectrum can face challenges to entering the labor force, including a sudden drop in services when exiting high school (the “services cliff”). After leaving high school, they are less likely to participate in vocational or technical education and employment than young adults with other disabilities. Read the REYAAS project fact sheet.

Evaluation Design Options Report

This report builds on literature reviews and listening sessions the team conducted in the knowledge development phase of the REYAAS project, by presenting evaluation design options for the following interventions to improve employment outcomes for autistic young adults:

  • Enhanced access to Registered Apprenticeship and related support services
  • A YouthBuild program tailored to the needs of autistic young adults
  • Enhanced access to supported employment in Vocational Rehabilitation
  • Incorporation of virtual interview training for transition-age youth in Job Corps

Download the Report

Download the Spotlight

Data Analysis Report

This report examines the way young adults on the autism spectrum (ages 16 to 28) engaged with state Vocational Rehabilitation (VR) agencies, the characteristics of those who applied for VR services, the VR services that they used, and their employment outcomes. This study used Rehabilitation Services Administration—Case Service Reports (RSA-911) restricted-use files (RUF) for program years 2017 to 2020.

Literature Review #1

This report describes the range of programs, models, and strategies that have been implemented to support the transition to competitive integrated employment for young adults with intellectual or developmental disabilities including autism.

Download the Review

Literature Review #2

This report summarizes the evidence on the effectiveness of the approaches identified in the first literature review, and it also assesses whether that evidence is growing, lacking, consistent, or divergent.

Data Inventory Brief

This issue brief presents an inventory of 11 data sets, including administrative and survey data sources, that contain employment-related information for young adults on the autism spectrum. Researchers can use these data sets to shed light on the topic, although the data sets are limited in their ability to provide a detailed, longitudinal assessment of this population and its employment outcomes.

Listening Sessions Report

This report summarizes content from listening sessions aimed at understanding the barriers and facilitators to employment and careers for young adults on the autism spectrum, with stakeholder groups that included young adults on the autism spectrum, advocates, policymakers, direct service providers, educators, employers, and researchers.

Disability and Current Population Survey (CPS) COVID-19 Supplemental Data

(June 2022) Five questions were added to the basic monthly Current Population Survey (CPS) in May 2020, to address the unprecedented changes occurring as a result of the COVID-19 pandemic. This COVID-19 supplemental data permits further insights into the issues of telework/work-at-home, loss of work due to the employer closing or losing business, receiving pay for lost work, prevention from looking for work, and inability to receive necessary medical care. Researchers with the Department of Labor’s Office of Disability Employment Policy use these data to understand how these impacts differed for people with disabilities.

Disability and the Digital Divide: Internet Subscriptions, Internet Use and Employment Outcomes

(June 2022) The research brief examines the disability digital divide and how it may relate to disability employment. In addition, the brief observes associations of disability status, home internet subscription types and internet use with employment retention between 2019 and 2020. Further, it examines access rates to home internet subscriptions from 2015 to 2019 and online activity participation rates in November 2019 among people with and without disabilities.

Employment of Persons with a Disability: Analysis of Trends during the COVID-19 Pandemic– Findings in Brief

(February 2022) Since the beginning of the COVID-19 pandemic, there have been unprecedented changes in employment for America’s workforce. Many businesses ceased or scaled back operations and many state governments issued stay-at-home orders, prompting historically rapid declines and subsequent recovery in the labor market. Using data from the Census Bureau’s Current Population Survey (CPS), researchers with the Department of Labor’s Office of Disability Employment Policy provide insight into key labor force statistics, employment across industries and occupations, and the effect of the ease of social distancing and the ability to telework on occupational employment change.

Black Workers with Disabilities in the Labor Force: Did You Know?

(February 2022) Disability is a natural part of the human experience, including within the Black community. This brief provides some facts to know about Black adults (age 16+) with disabilities within the US workforce.

Access to Paid Leave for Family and Medical Reasons Among Workers with Disabilities

(December 2021) Most workers, including those with disabilities, will experience personal, medical and family caregiving events that demand time away from work. Access to paid leave for family and medical reasons could be especially important for both workers with disabilities and workers who are caregivers to family members with disabilities. This brief estimates the share of the workforce that has access to paid leave for certain family and medical reasons and highlights differences in access based on disability status.

Inclusive Apprenticeship

(May 2021) Inclusive apprenticeship programs — those that support and are designed to be inclusive of apprentices with disabilities — hold promise for improving long-term employment outcomes for participants. However, little is known about the prevalence and operations of inclusive apprenticeship programs. This report summarizes current information on experiences of people with disabilities in apprenticeship, drawing on the research literature, interviews with experts on inclusive apprenticeship, and administrative and survey data.

Spotlight on Women with Disabilities

(March 2021) This brief examines the state of women with disabilities in the current labor market. Using data from the Bureau of Labor Statistics’ Current Population Survey, we examine disability prevalence, education levels, employment and related characteristics, poverty, health insurance coverage and the impact of COVID-19 on the employment of women with disabilities.

Employment for PWD: Analysis of Trends during COVID-19 Pandemic – Findings in Brief

(March 2021) Since the beginning of the COVID-19 pandemic, there have been unprecedented changes in employment for America’s workforce. Many businesses ceased or scaled back operations and many state governments issued stay-at-home orders. Using key labor force statistics from the Census Bureau’s Current Population Survey (CPS) researchers with the Department of Labor’s Office of Disability Employment Policy sought to provide insight into the recent changes. This brief expands data published in September and explores changes in employment for people with and without a disability in various occupations and industries.

Stay-at-Work/Return-to-Work Models and Strategies

(September 2020) Improving the Stay-at-Work/Return-to-Work outcomes of individuals who experience an injury or illness that inhibits their ability to work remains a key policy goal. Researchers found SAW/RTW programs available on a large scale may have the most impact on those workers likely to leave the labor force without such assistance. Developing effective SAW/RTW programs requires information about the current policy landscape and evidence about what kinds of SAW/RTW assistance is effective and for whom. The findings from this project also highlight a number of areas for future research and provide guidance on five strategies to expand evidence on effective SAW/RTW interventions. Click here to view our research and publications that preceded this project.

Findings in Brief

This report includes a summary of information about the study, the process, and findings from each report.

Synthesis of SAW/RTW Programs, Models, Efforts, & Definitions

This report and the accompanying summary describe SAW/RTW programs operating in the U.S. at the time of publication.

Download the Summary

Synthesis of Evidence about SAW/RTW and Related Programs

This report and the companion summary provide a synthesis of evidence published between 2008 and 2018 on the effects of SAW/RTW or related programs on employment and the receipt of federal disability benefits.

Early Intervention Pathway Map and Population Profiles

SAW/RTW Intervention Pathways Dashboard

This report and the accompanying summary offer five options for new research to build evidence about the target populations for SAW/RTW and to test the effects of interventions on employment outcomes.

Community College Interventions for Youth and Young Adults with Disabilities - Evaluation

(December 2020) The Community College Interventions for Youth and Young Adults with Disabilities Demonstration and Evaluation contributes to the growing evidence base on ways to best serve students with disabilities in higher education. Findings help enhance the policies and services designed to increase the enrollment and completion of community college programs among students with disabilities.

Demonstration and Evaluation of Community College Interventions for Youth and Young Adults with Disabilities

Building Accessible and Inclusive Community College Environments for Students with Disabilities

Survey of Employer Policies on the Employment of People with Disabilities

(April 2020) The survey collected information from employers about organizational policies, practices, successes and challenges, as well as attitudes and beliefs, in the recruitment, retention and advancement of people with disabilities. ODEP sponsored a similar effort in 2008.

Survey of Employer Policies on the Employment of People with Disabilities: Final Report

  • Implementation of Disability-Inclusive Workplace Policies and Practices by Federal Contractors and Non-Federal Contractors
  • Implementation and Effectiveness of Disability-Inclusive Workplace Practices and Policies
  • Employer Practices and Attitudes Toward the Employment of People with Disabilities: Issue Brief

U.S. Food and Drug Administration


Real-World Evidence Submissions to the Center for Biologics Evaluation and Research

As part of the reauthorization of the Prescription Drug User Fee Act (PDUFA VII), FDA committed to reporting aggregate and anonymized information on submissions to the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER) that contain real-world evidence (RWE).

The tables below describe submissions to CBER containing RWE that meet reporting criteria . This report is not intended to include all submissions to CBER containing analyses of real-world data (RWD). Columns will be added annually to represent submissions by fiscal year (FY) from FYs 2023 through 2027.

The table below provides an overview of submissions to CBER containing RWE by category. A study that generates RWE may be reflected in more than one category depending on the status of the study.

Category | FY 2023
Protocol | 4
New drug application (NDA)/biologics license application (BLA) | 0
Final study report to satisfy a postmarketing requirement (PMR) or postmarketing commitment (PMC) | 0

Notes:

  • ‘Protocol’ means submission of an interventional study protocol to an IND or submission of a non-interventional study protocol to an existing IND or to a pre-IND.
  • There were no NDAs or BLAs for which CBER took regulatory action in FY 2023.
  • Does not include submissions before FY 2023. There were no final study reports to satisfy a PMR or PMC submitted in FY 2023 for which FDA took regulatory action in FY 2023. Final study reports to satisfy a PMR or PMC submitted in FY 2023 will be reported in the fiscal year in which FDA determines that the PMR or PMC was satisfied.

The table below describes the characteristics of new protocols containing RWD as well as protocol amendments that added RWD to a study that did not previously include RWD. The numbers include submissions of interventional study protocols to an IND and submissions of non-interventional study protocols to an existing IND or to a pre-IND. The numbers do not include protocols or protocol amendments submitted only as part of a background package for a meeting with FDA. Protocols are reported in the fiscal year during which they are submitted.

Category | FY 2023
Study Type
  Effectiveness | 0
  Safety | 4
Intended Regulatory Purpose
  To support the demonstration of safety and/or effectiveness for a product not previously approved by FDA | 0
  To support labeling changes for an approved product (all subcategories) | 0
  To satisfy a PMR | 2
  To satisfy a PMC | 2
Data Source
  Electronic health records | 0
  Medical claims | 2
  Product, disease, or other registry | 2
  Digital health technologies in non-research settings | 0
  Other | 0
Study Design
  Randomized controlled trial | 0
  Externally controlled trial | 0
  Non-interventional (observational) study | 4
  Other | 0
Studies often provide information on both effectiveness and safety. For this report, a study was classified as “safety” if it was conducted primarily to assess a known or potential safety risk. All other studies were classified as “effectiveness.”
A study may have more than one regulatory purpose or data source and therefore may be included in more than one category.
The term “registry” is sometimes used to refer to a non-interventional cohort study that is intended to address a specific regulatory question in a targeted population. For such studies, this report provides the original source(s) of study data. 


Grantham Research Institute on Climate Change and the Environment

Global trends in climate change litigation: 2024 snapshot


This report provides a numerical analysis of how many climate change litigation cases were filed in 2023, where and by whom, and a qualitative assessment of trends and themes in the types of cases filed. It is the sixth report in the series, produced by the Grantham Research Institute in partnership with the Sabin Center for Climate Change Law and drawing on the Sabin Center’s Climate Change Litigation Databases . Each report provides a synthesis of the latest research and developments in the climate change litigation field.

Key messages

  • At least 230 new climate cases were filed in 2023. Many of these are seeking to hold governments and companies accountable for climate action. However, the number of cases expanded less rapidly last year than previously, which may suggest a consolidation and concentration of strategic litigation efforts in areas anticipated to have high impact.
  • Climate cases have continued to spread to new countries, with cases filed for the first time in Panama and Portugal in 2023.
  • 2023 was an important year for international climate change litigation, with major international courts and tribunals being asked to rule and advise on climate change. Just 5% of climate cases have been brought before international courts, but many of these cases have significant potential to influence domestic proceedings.
  • There were significant successes in ‘government framework’ cases in 2023; these challenge the ambition or implementation of a government’s overall climate policy response. The European Court of Human Rights’ decision in April 2024 in the case of KlimaSeniorinnen and ors. v. Switzerland is likely to lead to the filing of further cases.
  • The number of cases concerning ‘climate-washing’ has grown in recent years. 47 such cases were filed in 2023, bringing the recorded total to more than 140. These cases have met with significant success, with more than 70% of completed cases decided in favour of the claimants.
  • There were important developments in ‘polluter pays’ cases: more than 30 cases worldwide are currently seeking to hold companies accountable for climate-related harm allegedly caused by their contributions to greenhouse gas emissions.
  • Litigants continue to file new ‘corporate framework’ cases, which seek to ensure companies align their group-level policies and governance processes with climate goals. The New Zealand Supreme Court allowed one such case to proceed, although cases filed elsewhere have been dismissed. The landmark case of Milieudefensie v. Shell is under appeal.
  • In this year’s analysis a new category of ‘transition risk’ cases was introduced, which includes cases filed against corporate directors and officers for their management of climate risks. Shareholders of Enea approved a decision to bring such a case against former directors for planned investments in a new coal power plant in Poland.
  • The report also identifies several other notable categories of case:
    • ESG backlash cases, which challenge the incorporation of climate risk into financial decision-making.
    • Strategic litigation against public participation (SLAPP) suits against NGOs and shareholder activists that seek to deter them from pursuing climate agendas.
    • Just transition cases, which challenge the distributional impacts of climate policy or the processes by which policies were developed, normally on human rights grounds.
    • Green v. green cases, which concern potential trade-offs between climate and biodiversity or other environmental aims.

Recent previous reports in the series:

2023 snapshot

2022 snapshot


Cyber Evaluation and Management Toolkit (CEMT): Face Validity of Model-Based Cybersecurity Decision Making


1. Introduction

1.1. Background

“The DON needs a new approach to cybersecurity that goes beyond compliance because our over-reliance on compliance has resulted in insecure systems, which jeopardise the missions these systems support. Instead of a compliance mindset, the DON will shift to Cyber Ready, where the right to operate is earned and managed every day. The DON will make this transition by adhering to an operationally relevant, threat-informed process that affordably reduces risk and produces capabilities that remain secure after they have been delivered at speed”. [ 5 ] (p. 7)

1.2. Literature Review

1.3. Cyberworthiness

“The desired outcome of a range of policy and assurance activities that allow the operation of Defence platforms, systems and networks in a contested cyber environment. It is a pragmatic, outcome-focused approach designed to ensure all Defence capabilities are fit-for-purpose against cyber threats”. [ 43 ]
“2.10 The seaworthiness governance principles require that seaworthiness decisions are made:
a. mindfully—decisions are more effective and less likely to have unintended consequences when they are made with a thorough understanding of the context, the required outcome, the options available, and their implications now and in the future
b. collaboratively—obtaining input from all stakeholders and engaging in joint problem-solving results in better decisions (bearing in mind that collaboration does not necessarily require consensus)
c. accountably—decisions only become effective when people take accountability for making them happen
d. transparently—decisions are more effective when everyone understands what has been decided and why”. [ 44 ] (p.33)

1.4. Addressing the Problem

  • Usability—there is limited ability to easily create and review these graph-based threat assessments, especially in large, complex systems;
  • Efficiency—reusability of these assessments is limited in comparison to compliance-based approaches that re-apply a common control set;
  • Maintainability—it is difficult to update complex graph-based assessments without specialised toolsets as the system or threat environment evolves.
The research questions for this work were as follows:

  • Are integrated threat models, developed using model-based systems engineering (MBSE) techniques, an effective and efficient basis for the assessment and evaluation of cyberworthiness?
  • Do the developed threat models provide decision makers with the necessary understanding to make informed security risk decisions?
  • Does the process provide sufficient reusability and maintainability that the methodology is more efficient than prevailing compliance-based approaches?
  • Do cybersecurity risk practitioners prefer the integrated threat model approach to traditional security risk assessment processes?

2. Materials and Methods

2.1. Threat-Based Cybersecurity Engineering

  • Threat Context, derived from the system or capability design/architecture;
  • Threat Identification, provided by the Cyber Threat Intelligence function within an organisation;
  • Threat Insight, contributed by the Cyber Threat Emulation function within an organisation;
  • Best Practice Controls, distilled from the various cybersecurity frameworks available within the cybersecurity body of knowledge.
  • Preventative Controls, a baseline of preventative cybersecurity controls within the system, for inclusion in the system design;
  • Detecting Controls, a baseline of detection and response controls relevant to the system, for implementation by the Cyber Operations function within an organisation;
  • Recovery Controls, a baseline of recovery and resilience controls relevant to the system, for implementation by the System Operations function within an organisation;
  • Residual Risk, the overall risk presented by the threats to the capability given the mitigation mechanisms that are in place.

2.2. Cyber Evaluation and Management Toolkit (CEMT)

2.3. CEMT Sample Model

2.3.1. Threat Modelling

  • Misuse case diagrams;
  • Intermediate mal-activity diagrams;
  • Detailed mal-activity diagrams.

2.3.2. Threat Mitigation

  • Allocating assets to the threat model;
  • Tracing controls to the threat model.

2.3.3. Risk Assessment

  • Attack tree assessment;
  • Parametric risk analysis;
  • Risk evaluation.
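
To illustrate the kind of calculation that sits behind attack tree assessment and parametric risk analysis, the sketch below combines hypothetical step probabilities along each attack path and across alternative paths. It is a simplified stand-alone illustration, not the CEMT implementation, which carries out its analysis within the system model itself; the tree structure and numbers are assumptions made for the example.

    # Simplified illustration only; not the CEMT implementation.
    # Combines hypothetical step probabilities along attack paths (AND)
    # and across alternative paths to the same threat (OR).
    attack_tree = {
        "Exfiltrate mission data": {
            "Phish maintainer credentials": [0.4, 0.6],
            "Exploit unpatched gateway": [0.3, 0.5, 0.8],
        }
    }

    def path_probability(steps):
        """AND node: every step on the path must succeed."""
        p = 1.0
        for step in steps:
            p *= step
        return p

    def threat_probability(paths):
        """OR node: the threat succeeds if at least one path succeeds."""
        p_none = 1.0
        for steps in paths.values():
            p_none *= 1.0 - path_probability(steps)
        return 1.0 - p_none

    for threat, paths in attack_tree.items():
        for name, steps in paths.items():
            print(f"  {name}: {path_probability(steps):.2f}")
        print(f"{threat} (overall): {threat_probability(paths):.2f}")

Tracing each mitigating control to a specific step then shows which probabilities that control reduces, which is the link the risk evaluation step relies on.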

2.4. Achieving Threat-Based Cybersecurity Engineering

2.5. Efficiency Through Automation

  • Automated update of complex drawings and simulations to ensure that changes to the design or threat environment can be incorporated efficiently into the threat model;
  • Automated model validation to ensure that basic review tasks are automated, allowing expert reviewers to focus on the actual threat assessment component;
  • Automated documentation to ensure that the process of creating enduring design artefacts is efficient and accurate.
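
As a hypothetical illustration of the second point, automated model validation, the sketch below implements a single validation rule: flag any modelled adversary action that has no mitigating control traced to it. The dictionary stands in for data that would, in practice, be exported from the threat model; it is not part of the CEMT toolset.

    # Hypothetical validation rule, not part of the CEMT toolset: flag any
    # modelled adversary action with no mitigating control traced to it.
    # The dictionary stands in for data exported from the threat model.
    threat_steps = {
        "Spearphish operator": ["Security awareness training", "Mail filtering"],
        "Pivot to control network": ["Network segmentation"],
        "Disable logging": [],  # no mitigation traced yet
    }

    untraced = [step for step, controls in threat_steps.items() if not controls]

    if untraced:
        print("Validation failed; untraced threat steps:")
        for step in untraced:
            print(f"  - {step}")
    else:
        print("All threat steps trace to at least one control.")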

3.1. Face Validity Trial Setup

3.2. Face Validity Trial Data Collection and Setup

4. Discussion

  • Appropriateness of the assessed controls to the system being assessed, as demonstrated by the responses to Question 1;
  • Prioritisation of controls, as demonstrated by the responses to Questions 6 and 14;
  • Ability for non-expert decision makers to understand the assessment, as demonstrated by Questions 7, 8, and 17.

4.1. Significance

  • Extended Model-Based Taxonomy—an extension of an open model-based systems engineering language such as UML or SysML; this is provided to facilitate a model-based approach;
  • Threat Focused—the threats to the system, rather than a best-practice control baseline or asset hierarchy, are used as the focal point of the assessment;
  • Detailed Adversary Modelling—the actions of the adversary are modelled in detail, facilitating a precise discussion and review of any threat analysis;
  • Visualisation and Simulation of Threat—detailed adversary modelling is expressed in simplified graphs such as attack trees, and branches of those graphs can be simulated quantitatively;
  • Explicit Traceability to Threats—derived security controls are directly traceable to adversary actions, facilitating discussion and review of the importance of each control in terms of the malicious action it mitigates.

4.2. Future Work

5. Conclusions

Author Contributions

Data Availability Statement

Acknowledgments

Conflicts of Interest

  • Australian Government—Department of Home Affairs, Protective Security Policy Framework. Available online: https://www.protectivesecurity.gov.au (accessed on 25 April 2024).
  • National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC), NIST Risk Management Framework (RMF). Available online: https://csrc.nist.gov/projects/risk-management/about-rmf (accessed on 25 April 2024).
  • Australian Government—Australian Signals Directorate, Information Security Manual (ISM). Available online: https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/ism (accessed on 25 April 2024).
  • National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC), NIST Special Publication 800-53 Rev. 5: Security and Privacy Controls for Information Systems and Organizations. Available online: https://csrc.nist.gov/pubs/sp/800/53/r5/upd1/final (accessed on 25 April 2024).
  • U.S. Department of Navy; Cyber Strategy, November 2023. Available online: https://dvidshub.net/r/irstzr (accessed on 25 April 2024).
  • Australian Government—Australian Signals Directorate, System Security Plan Annex Template (March 2024). Available online: https://www.cyber.gov.au/sites/default/files/2024-03/System%20Security%20Plan%20Annex%20Template%20%28March%202024%29.xlsx (accessed on 25 April 2024).
  • National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC), Control Catalog (spreadsheet). Available online: https://csrc.nist.gov/files/pubs/sp/800/53/r5/upd1/final/docs/sp800-53r5-control-catalog.xlsx (accessed on 25 April 2024).
  • National Institute of Standards and Technology (NIST), OSCAL: The Open Security Controls Assessment Language. Available online: https://pages.nist.gov/OSCAL/ (accessed on 25 April 2024).
  • MITRE ATT&CK Framework. Available online: https://attack.mitre.org/ (accessed on 25 April 2024).
  • The Department of Defense Cyber Table Top Guide, Version 2, 16 September 2021. Available online: https://www.cto.mil/wp-content/uploads/2023/06/DoD-Cyber-Table-Top-Guide-v2-2021.pdf (accessed on 25 April 2024).
  • Monroe, M.; Olinger, J. Mission-Based Risk Assessment Process for Cyber (MRAP-C). ITEA J. Test Eval. 2020 , 41 , 229–232. [ Google Scholar ]
  • Kuzio de Naray, R.; Buytendyk, A.M. Analysis of Mission Based Cyber Risk Assessments (MBCRAs) Usage in DoD’s Cyber Test and Evaluation ; Institute for Defense Analyses: Alexandria, VA, USA, 2022; IDA Publication P-33109. [ Google Scholar ]
  • Kordy, B.; Piètre-Cambacédès, L.; Schweitzer, P.P. DAG-based attack and defense modeling: Don’t miss the forest for the attack trees. Comput. Sci. Rev. 2014 , 13–14 , 1–38. [ Google Scholar ] [ CrossRef ]
  • Weiss, J.D. A system security engineering process. In Proceedings of the 14th Annual NCSC/NIST National Computer Security Conference, Washington, DC, USA, 1–4 October 1991. [ Google Scholar ]
  • Schneier, B. Attack trees: Modeling security threats. Dr Dobb’s J. Softw. Tools 1999 , 12–24 , 21–29. Available online: https://www.schneier.com/academic/archives/1999/12/attack_trees.html (accessed on 25 April 2024).
  • Paul, S.; Vignon-Davillier, R. Unifying traditional risk assessment approaches with attack trees. J. Inf. Secur. Appl. 2014 , 19 , 165–181. [ Google Scholar ] [ CrossRef ]
  • Kordy, B.; Pouly, M.; Schweitzer, P. Probabilistic reasoning with graphical security models. Inf. Sci. 2016 , 342 , 111–131. [ Google Scholar ] [ CrossRef ]
  • Gribaudo, M.; Iacono, M.; Marrone, S. Exploiting Bayesian Networks for the analysis of combined Attack Trees. Electron. Notes Theor. Comput. Sci. 2015 , 310 , 91–111. [ Google Scholar ] [ CrossRef ]
  • Holm, H.; Korman, M.; Ekstedt, M. A Bayesian network model for likelihood estimations of acquirement of critical software vulnerabilities and exploits. Inf. Softw. Technol. 2015 , 58 , 304–318. [ Google Scholar ] [ CrossRef ]
  • Moskowitz, I.; Kang, M. An insecurity flow model. In Proceedings of the 1997 Workshop on New Security Paradigms, Cumbria, UK, 23–26 September 1997; pp. 61–74. [ Google Scholar ]
  • McDermott, J.; Fox, C. Using abuse case models for security requirements analysis. In Proceedings of the 15th Annual Computer Security Applications Conference, Phoenix, AZ, USA, 6–10 December 1999; pp. 55–64. [ Google Scholar ]
  • Sindre, G.; Opdahl, A.L. Eliciting security requirements with misuse cases. Requir. Eng. 2004 , 10 , 34–44. [ Google Scholar ] [ CrossRef ]
  • Karpati, P.; Sindre, G.; Opdahl, A.L. Visualizing cyber attacks with misuse case maps. In Requirements Engineering: Foundation for Software Quality ; Springer: Berlin/Heidelberg, Germany, 2010; pp. 262–275. [ Google Scholar ]
  • Abdulrazeg, A.; Norwawi, N.; Basir, N. Security metrics to improve misuse case model. In Proceedings of the 2012 International Conference on Cyber Security, Cyber Warfare and Digital Forensics, Kuala Lumpur, Malaysia, 26–28 June 2012. [ Google Scholar ]
  • Saleh, F.; El-Attar, M. A scientific evaluation of the misuse case diagrams visual syntax. Inf. Softw. Technol. 2015 , 66 , 73–96. [ Google Scholar ] [ CrossRef ]
  • Mai, P.; Goknil, A.; Shar, L.; Pastore, F.; Briand, L.C.; Shaame, S. Modeling Security and Privacy Requirements: A Use Case-Driven Approach. Inf. Softw. Technol. 2018 , 100 , 165–182. [ Google Scholar ] [ CrossRef ]
  • Matuleviaius, R. Fundamentals of Secure System Modelling ; Springer International Publishing: Cham, Switzerland, 2017; pp. 93–115. [ Google Scholar ]
  • Sindre, G. Mal-activity diagrams for capturing attacks on business processes. In Requirements Engineering: Foundation for Software Quality ; Springer: Berlin/Heidelberg, Germany, 2007; pp. 355–366. [ Google Scholar ]
  • Opdahl, A.; Sindre, G. Experimental comparison of attack trees and misuse cases for security threat identification. Inf. Softw. Technol. 2009 , 51 , 916. [ Google Scholar ] [ CrossRef ]
  • Karpati, P.; Redda, Y.; Opdahl, A.; Sindre, G. Comparing attack trees and misuse cases in an industrial setting. Inf. Softw. Technol. 2014 , 56 , 294. [ Google Scholar ] [ CrossRef ]
  • Tondel, I.A.; Jensen, J.; Rostad, L. Combining Misuse Cases with Attack Trees and Security Activity Models. In Proceedings of the 2010 International Conference on Availability, Reliability and Security, Krakow, Poland, 15–18 February 2010; pp. 438–445. [ Google Scholar ]
  • Meland, P.H.; Tondel, I.A.; Jensen, J. Idea: Reusability of threat models—Two approaches with an experimental evaluation. In Engineering Secure Software and Systems ; Springer: Berlin/Heidelberg, Germany, 2010; pp. 114–122. [ Google Scholar ]
  • Purton, L.; Kourousis, K. Military Airworthiness Management Frameworks: A Critical Review. Procedia Eng. 2014 , 80 , 545–564. [ Google Scholar ] [ CrossRef ]
  • Mo, J.P.T.; Downey, K. System Design for Transitional Aircraft Support. Int. J. Eng. Bus. Manag. 2014 , 6 , 45–56. [ Google Scholar ] [ CrossRef ]
  • Hodge, R.J.; Craig, S.; Bradley, J.M.; Keating, C.B. Systems Engineering and Complex Systems Governance—Lessons for Better Integration. INCOSE Int. Symp. 2019 , 29 , 421–433. [ Google Scholar ] [ CrossRef ]
  • Simmonds, S.; Cook, S.C. Use of the Goal Structuring Notation to Argue Technical Integrity. INCOSE Int. Symp. 2017 , 27 , 826–841. [ Google Scholar ] [ CrossRef ]
  • United States Government Accountability Office. Weapon Systems Cybersecurity: DOD just Beginning to Grapple with Scale of Vulnerabilities. GAO-19-129 . 2018. Available online: https://www.gao.gov/products/gao-19-128 (accessed on 15 June 2024).
  • Joiner, K.F.; Tutty, M.G. A tale of two allied defence departments: New assurance initiatives for managing increasing system complexity, interconnectedness and vulnerability. Aust. J. Multi-Discip. Eng. 2018 , 14 , 4–25. [ Google Scholar ] [ CrossRef ]
  • Joiner, K.F. How Australia can catch up to U.S. cyber resilience by understanding that cyber survivability test and evaluation drives defense investment. Inf. Secur. J. A Glob. Perspect. 2017 , 26 , 74–84. [ Google Scholar ] [ CrossRef ]
  • Thompson, M. Towards Mature ADF Information Warfare—Four Years of Growth. Defence Connect Multi-Domain . 2020. Available online: https://www.defenceconnect.com.au/supplements/multi-domain-2 (accessed on 15 June 2024).
  • Fowler, S.; Sweetman, C.; Ravindran, S.; Joiner, K.F.; Sitnikova, E. Developing cyber-security policies that penetrate Australian defence acquisitions. Aust. Def. Force J. 2017 , 102 , 17–26. [ Google Scholar ]
  • Australian Senate. Budget Hearings on Foreign Affairs Defence and Trade, Testimony by Vice Admiral Griggs, Major General Thompson and Minister of Defence (29 May, 2033–2035 hours). 2018. Available online: https://parlview.aph.gov.au/mediaPlayer.php?videoID=399539timestamp3:19:43 (accessed on 15 June 2024).
  • Australian Government. ADF Cyberworthiness Governance Framework ; Australian Government: Canberra, Australia, 2020.
  • Australian Government. Defence Seaworthiness Management System Manual. 2018. Available online: https://www.defence.gov.au/sites/default/files/2021-01/SeaworthinessMgmtSystemManual.pdf (accessed on 15 June 2024).
  • Allen, M.S.; Robson, D.A.; Iliescu, D. Face Validity: A Critical but Ignored Component of Scale Construction in Psychological Assessment. Eur. J. Psychol. Assess. Off. Organ Eur. Assoc. Psychol. Assess. 2023 , 39 , 153–156. [ Google Scholar ] [ CrossRef ]
  • Fowler, S.; Sitnikova, E. Toward a framework for assessing the cyber-worthiness of complex mission critical systems. In Proceedings of the 2019 Military Communications and Information Systems Conference (MilCIS), Canberra, Australia, 12–14 November 2019. [ Google Scholar ]
  • Fowler, S.; Joiner, K.; Sitnikova, E. Assessing cyber-worthiness of complex system capabilities using MBSE: A new rigorous engineering methodology. IEEE Syst. J. 2022. submitted . Available online: https://www.techrxiv.org/users/680765/articles/677291-assessing-cyber-worthiness-of-complex-system-capabilities-using-mbse-a-new-rigorous-engineering-methodology (accessed on 25 April 2024).
  • Cyber Evaluation and Management Toolkit (CEMT). Available online: https://github.com/stuartfowler/CEMT (accessed on 25 April 2024).
  • Fowler, S. Cyberworthiness Evaluation and Management Toolkit (CEMT): A model-based approach to cyberworthiness assessments. In Proceedings of the Systems Engineering Test & Evaluation (SETE) Conference 2022, Canberra, Australia, 12–14 September 2022. [ Google Scholar ]
  • National Institute of Standards and Technology (NIST) Computer Security Resource Center (CSRC), NIST Special Publication 800-160 Rev. 2: Developing Cyber-Resilient Systems: A Systems Security Engineering Approach. Available online: https://csrc.nist.gov/pubs/sp/800/160/v2/r1/final (accessed on 25 April 2024).
  • National Institute of Standards and Technology (NIST), CSF 2.0: Cybersecurity Framework. Available online: https://www.nist.gov/cyberframework (accessed on 25 April 2024).
  • Madni, A.; Purohit, S. Economic analysis of model-based systems engineering. Systems 2019 , 7 , 12. [ Google Scholar ] [ CrossRef ]
  • Bussemaker, J.; Boggero, L.; Nagel, B. The agile 4.0 project: MBSE to support cyber-physical collaborative aircraft development. INCOSE Int. Symp. 2023 , 33 , 163–182. [ Google Scholar ] [ CrossRef ]
  • Amoroso, E.G. Fundamentals of Computer Security Technology ; Pearson College Div: Englewood Cliffs, NJ, USA, 1994. [ Google Scholar ]
  • INCOSE. Systems Engineering Vision 2020 ; International Council on Systems Engineering: Seattle, WA, USA, 2007. [ Google Scholar ]
  • Madni, A.M.; Sievers, M. Model-based systems engineering: Motivation, current status, and research opportunities. Syst. Eng. 2018 , 21 , 172–190. [ Google Scholar ] [ CrossRef ]
  • Huang, J.; Gheorghe, A.; Handley, H.; Pazos, P.; Pinto, A.; Kovacic, S.; Collins, A.; Keating, C.; Sousa-Poza, A.; Rabadi, G.; et al. Towards digital engineering—The advent of digital systems engineering. Int. J. Syst. Syst. Eng. 2020 , 10 , 234–261. [ Google Scholar ] [ CrossRef ]
  • Chelouati, M.; Boussif, A.; Beugin, J.; El Koursi, E.-M. Graphical safety assurance case using goal structuring notation (gsn)– challenges, opportunities and a framework for autonomous trains. Reliab. Eng. Syst. Saf. 2023 , 230 , 108–933. [ Google Scholar ] [ CrossRef ]
  • Sujan, M.; Spurgeon, P.; Cooke, M.; Weale, A.; Debenham, P.; Cross, S. The development of safety cases for healthcare services: Practical experiences, opportunities and challenges. Reliab. Eng. Syst. Saf. 2015 , 140 , 200–207. [ Google Scholar ] [ CrossRef ]
  • Nguyen, P.H.; Ali, S.; Yue, T. Model-based security engineering for cyber-physical systems: A systematic mapping study. Inf. Softw. Technol. 2017 , 83 , 116–135. [ Google Scholar ] [ CrossRef ]
  • Geismann, J.; Bodden, E. A systematic literature review of model-driven security engineering for cyber–physical systems. J. Syst. Softw. 2020 , 169 , 110697. [ Google Scholar ] [ CrossRef ]
  • Carter, B.; Adams, S.; Bakirtzis, G.; Sherburne, T.; Beling, P.; Horowitz, B. A preliminary design-phase security methodology for cyber–physical systems. Systems 2019 , 7 , 21. [ Google Scholar ] [ CrossRef ]
  • Larsen, M.H.; Muller, G.; Kokkula, S. A Conceptual Model-Based Systems Engineering Method for Creating Secure Cyber-Physical Systems. INCOSE Int. Symp. 2022 , 32 , 202–213. [ Google Scholar ] [ CrossRef ]
  • Japs, S.; Anacker, H.; Dumitrescu, R. SAVE: Security & safety by model-based systems engineering on the example of automotive industry. In Proceedings of the 31st CIRP Design Conference, Online, 19–21 May 2021. [ Google Scholar ]
  • Navas, J.; Voirin, J.; Paul, S.; Bonnet, S. Towards a model-based approach to systems and cybersecurity: Co-engineering in a product line context. Insight (Int. Counc. Syst. Eng.) 2020 , 23 , 39–43. [ Google Scholar ] [ CrossRef ]
  • Geismann, J.; Gerking, C.; Bodden, E. Towards ensuring security by design in cyber-physical systems engineering processes. In Proceedings of the International Conference on the Software and Systems Process, Gothenburg, Sweden, 26–27 May 2018. [ Google Scholar ]
  • Mažeika, D.; Butleris, R. MBSEsec: Model-based systems engineering method for creating secure systems. Appl. Sci. 2020 , 10 , 2574. [ Google Scholar ] [ CrossRef ]
  • Object Management Group. UAF: Unified Architecture Framework. 2022. Available online: https://www.omg.org/spec/UAF. (accessed on 15 June 2024).
  • Jurjens, J. Secure Systems Development with UML ; Springer: Berlin/Heidelberg, Germany, 2005. [ Google Scholar ]
  • Apvrille, L.; Roudier, Y. Towards the model-driven engineering of secure yet safe embedded systems. Int. Workshop Graph. Models Secur. 2014 , 148 , 15–30. [ Google Scholar ] [ CrossRef ]


Survey Question | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
Q1 The CEMT produces risk assessments that are tailored to the context in which the system operates | 0 | 0 | 15 | 50 | 35
Q2 Cyberworthiness assessments are simple to produce using the CEMT | 5 | 0 | 40 | 35 | 20
Q3 The CEMT is an effective use of time | 0 | 0 | 30 | 25 | 45
Q4 The CEMT process is intuitive | 0 | 5 | 25 | 45 | 25
Q5 The CEMT encourages stakeholders to work collaboratively to determine the residual risk level | 0 | 0 | 10 | 35 | 55
Q6 The CEMT clearly identifies which security controls are important to the system | 0 | 0 | 5 | 55 | 40
Q7 The CEMT produces transparent cyberworthiness assessments | 0 | 5 | 10 | 40 | 45
Q8 The CEMT facilitates informed decision making with respect to the identified cybersecurity risks | 0 | 0 | 5 | 50 | 45
Q9 The CEMT produces cyberworthiness assessments that have ongoing value through the future phases of the capability life cycle | 0 | 0 | 10 | 40 | 50
Q10 The CEMT would improve my understanding of the cyberworthiness of a system | 0 | 0 | 10 | 20 | 70
Q11 The CEMT produces accurate assessments of a system’s cyberworthiness | 0 | 10 | 20 | 35 | 35
Q12 The CEMT facilitates the engagement of stakeholders and the provision of meaningful input from those stakeholders into a cyberworthiness assessment | 0 | 0 | 20 | 40 | 40
Q13 The cyberworthiness assessments produced by the CEMT are sufficiently detailed | 0 | 5 | 20 | 30 | 45
Q14 The CEMT identifies the relative impact of security controls with respect to the cyberworthiness of the system | 0 | 5 | 15 | 40 | 40
Q15 The CEMT is not overly dependent on the subjective opinion of subject matter experts | 0 | 0 | 30 | 50 | 20
Q16 The CEMT provides sufficient information to allow decision makers to be accountable for their decisions | 0 | 10 | 15 | 35 | 40
Q17 The CEMT clearly highlights the areas of greatest cyber risk to the system | 0 | 0 | 15 | 35 | 50
Q18 The CEMT adds value to a system and/or project | 0 | 0 | 5 | 35 | 60
Q19 The CEMT provides a complete and comprehensive approach to determining cyberworthiness | 5 | 10 | 10 | 50 | 25
Q20 The CEMT is an improvement over existing cyberworthiness assessment processes | 0 | 5 | 10 | 20 | 65
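
The response figures above, which sum to 100 for each question, can be rolled up into simple agreement and disagreement scores when summarising the trial. The sketch below does this for a few illustrative rows taken from the table; extending it to all twenty questions is straightforward.

    # Sketch: rolling up a few rows of the survey results above into
    # agreement/disagreement figures. Values are taken from the table
    # (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree).
    results = {
        "Q2": [5, 0, 40, 35, 20],
        "Q11": [0, 10, 20, 35, 35],
        "Q20": [0, 5, 10, 20, 65],
    }

    for question, (sd, d, n, a, sa) in results.items():
        print(f"{question}: agreement {a + sa}, disagreement {sd + d}, neutral {n}")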
Model-Based Security Assessment Approach | Extended Model-Based Taxonomy | Threat Focused | Detailed Adversary Modelling | Visualisation and Simulation of Threats | Explicit Traceability to Threats
1. CSRM [ ] | Y | N | N | N | N
2. Larsen et al. [ ] | Y | N | N | N | N
3. SAVE [ ] | Y | Y | N | N | N
4. Navas et al. [ ] | Y | Y | N | N | N
5. Geissman et al. [ ] | Y | Y | N | N | N
6. MBSESec [ ] | Y | Y | Y | N | N
7. UAF [ ] | Y | N | N | N | N
8. UMLSec [ ] | Y | N | N | N | N
9. SysML-Sec [ ] | Y | N | N | N | N
10. CEMT | Y | Y | Y | Y | Y

Share and Cite

Fowler, S.; Joiner, K.; Ma, S. Cyber Evaluation and Management Toolkit (CEMT): Face Validity of Model-Based Cybersecurity Decision Making. Systems 2024, 12, 238. https://doi.org/10.3390/systems12070238

Fowler S, Joiner K, Ma S. Cyber Evaluation and Management Toolkit (CEMT): Face Validity of Model-Based Cybersecurity Decision Making. Systems. 2024; 12(7):238. https://doi.org/10.3390/systems12070238

Fowler, Stuart, Keith Joiner, and Siqi Ma. 2024. "Cyber Evaluation and Management Toolkit (CEMT): Face Validity of Model-Based Cybersecurity Decision Making" Systems 12, no. 7: 238. https://doi.org/10.3390/systems12070238


IMAGES

  1. 12+ Sample Evaluation Reports

  2. Checklist for Evaluating a Scientific Research Paper

  3. Evaluate: Assessing Your Research Process and Findings

  4. 26+ Research Report Templates

  5. FREE Evaluation Report Template

  6. Reporting and Evaluating Research

VIDEO

  1. Evaluating research(ers): Are we rewarding what we value?

  2. EVALUATION OF RESEARCH REPORT

  3. GCSE Psychology

  4. Report Writing

  5. Writing evaluative reports

  6. HOW TO READ and ANALYZE A RESEARCH STUDY

COMMENTS

  1. Evaluating Research in Academic Journals: A Practical Guide to

    Academic Journals. Evaluating Research in Academic Journals is a guide for students who are learning how to evaluate reports of empirical research published in academic journals. It breaks down ...

  2. How to Evaluate a Research Report: Unraveling Insights

    The first variable to check is the dependent variable: the one that changes in response to how the researcher manipulates the experiment. The second is the independent variable, which influences the dependent variable. The independent variable should be chosen carefully and with considerable thought.

  3. Evaluating Research

    Definition: Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the ...

  4. Evaluating Research Articles

    Critical Appraisal. Critical appraisal is the process of systematically evaluating research using established and transparent methods. In critical appraisal, health professionals use validated checklists/worksheets as tools to guide their assessment of the research. It is a more advanced way of evaluating research than the more basic method ...

  5. Evaluating research: A multidisciplinary approach to assessing research

    The example in Fig. 2 illustrates that Research emanates from at least one Question at Hand, and aims for at least one piece of New Knowledge. According to our definition (concept model), you cannot call something Research if it is not aiming for New Knowledge and does not emanate from a Question at Hand. This is the way we define the concept in concept modelling, and this small example only ...

  6. How to Write a Literature Review

    Show how your research addresses a gap or contributes to a debate; Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic. Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We've written a step-by-step ...

  7. Research Report

    Addressing problems: A research report can provide insights into problems or issues and suggest solutions or recommendations for addressing them. Evaluating programs or interventions: A research report can evaluate the effectiveness of programs or interventions, which can inform decision-making about whether to continue, modify, or discontinue ...

  8. Understanding and Evaluating Research: A Critical Guide

    Understanding and Evaluating Research: A Critical Guide shows students how to be critical consumers of research and to appreciate the power of methodology as it shapes the research question, the use of theory in the study, the methods used, and how the outcomes are reported.

  9. Evaluating Research in Academic Journals

    Evaluating Research in Academic Journals is a guide for students who are learning how to evaluate reports of empirical research published in academic journals. It breaks down the process of evaluating a journal article into easy-to-understand steps, and emphasizes the practical aspects of evaluating research - not just how to apply a list of technical terms from textbooks.

  10. PDF Step-by-step guide to critiquing research. Part 1: quantitative research

    literature, evaluation and appraisal of the literature which are in essence the same thing (Bassett and Bassett, 2003). Terminology in research can be confusing for the novice research reader where a term like 'random' refers to an organized manner of selecting items or participants, and the word 'significance' is applied to a degree of chance ...

  11. Evaluation Criteria

    Evaluating Information in the Research Process: Evaluation Criteria. Created by Health Science Librarians. The guide covers evaluation criteria including credibility, bias, and accuracy.

  12. PDF UNIT 4 EVALUATING RESEARCH REPORTS

    Research Report Quality Index = (Score obtained) × 100 / Maximum Actual Score. Suppose all the criteria were included in evaluating a research report; the Maximum Actual Score would then be 65. If the report gets a grand total score of 48, then the Research Report Quality Index is 73.84.
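
    For concreteness, the quoted formula and worked example can be expressed directly in code. The following is a minimal sketch, not taken from the cited unit; the function name is illustrative.

```python
# Minimal sketch of the Research Report Quality Index quoted above.
# The function name is illustrative; the formula and the 48-out-of-65
# worked example come from the excerpt.

def research_report_quality_index(score_obtained: float, maximum_score: float) -> float:
    """Express the obtained score as a percentage of the maximum actual score."""
    return score_obtained * 100 / maximum_score

print(f"{research_report_quality_index(48, 65):.2f}")
# 73.85 (the excerpt truncates the repeating decimal 73.846... to 73.84)
```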

  13. How to Write Evaluation Reports: Purpose, Structure, Content

    The National Science Foundation Evaluation Report Template - This template provides a structure for evaluating research projects funded by the National Science Foundation. It includes sections on project background, research questions, evaluation methodology, data analysis, and conclusions and recommendations.

  14. Evaluating Sources

    Lateral reading is the act of evaluating the credibility of a source by comparing it to other sources. This allows you to: Verify evidence. Contextualize information. Find potential weaknesses. If a source is using methods or drawing conclusions that are incompatible with other research in its field, it may not be reliable. Example: Lateral ...

  15. Research Guides: Writing a Research Paper: Evaluate Sources

    Evaluate Sources With the Big 5 Criteria. The Big 5 Criteria can help you evaluate your sources for credibility: Currency: Check the publication date and determine whether it is sufficiently current for your topic. Coverage (relevance): Consider whether the source is relevant to your research and whether it covers the topic adequately for your ...

  16. Research Evaluation

    The report focuses on research evaluation in informatics. Excellent discussions of best practices for evaluating informatics researchers for promotion and tenure can be found in [16, 40]. A scholarly dissection of the use of citations as proxies for impact is presented by Adler et al. [2].

  17. The Research Critique General Criteria for Evaluating a Research Report

    General criteria for evaluating a research report are addressed. This outline of criteria can be used as a guide for nurses in critiquing research studies. A sample research report is summarized followed by a critique of the study. Readers have an opportunity to practice critiquing by doing their own analyses before reading the critique presented in the article.

  18. 1 Important points to consider when critically evaluating published

    Critically evaluate the research paper using the checklist provided, making notes on the key points and your overall impression. Discussion. Critical appraisal checklists are useful tools to help assess the quality of a study. Assessment of various factors, including the importance of the research question, the design and methodology of a study ...

  19. How to Evaluate a Study

    Peer review, the process by which a study is sent to other researchers in a particular field for their notes and thoughts, is essential in evaluating a study's findings. Since most consumers and members of the media are not well trained enough to evaluate a study's design and a researcher's findings, studies that pass muster with other ...

  20. Preparing and Evaluating Research Reports

    evaluation of research reports (articles) for publication. Guidelines are presented to facilitate preparation of research articles. The guidelines cover the types of details that are to be included, but more importantly, the rationale, logic, and flow of the article to facilitate communication and to advance the next stage of the research process.

  21. Writing an evaluation report

    Evaluation reports need to be as clear and precise as possible in their wording. Be especially careful about using the word 'proof' or 'prove'. To prove something requires 100% certainty, which you are very unlikely to have. 'Indicates', 'demonstrates', 'shows', 'suggests' or 'is evidence for' are useful alternative phrases. ...

  22. Current and Recently Completed Work

    Research Support Services for Employment of Young Adults on the Autism Spectrum (REYAAS) Project (November 2022). REYAAS is seeking to identify promising practices and policies to support employment of young adults (ages 16 through 28) on the autism spectrum. Recent estimates suggest that annually about 100,000 youth on the autism spectrum turn 18 years old in the United States.

  23. Real-World Evidence Submissions to the Center for Biologics Evaluation

    Studies often provide information on both effectiveness and safety. For this report, a study was classified as "safety" if it was conducted primarily to assess a known or potential safety ...

  24. MRFF monitoring, evaluation and learning

    The Medical Research Future Fund (MRFF) Monitoring, evaluation and learning strategy provides an overarching framework for assessing the performance of the MRFF. The MRFF Monitoring, Evaluation and Learning Strategy 2020-2021 to 2023-2024 (Evaluation Strategy) sets out the principles and approach used to ...

  25. Evaluating a Novel Policy to Increase Access to Nicotine Replacement

    Principal Investigator: Mary Hrywna. Affiliation: Assistant Professor, School of Public Health, Rutgers-New Brunswick. Amount: $395,051. Sponsor: Institute for Nicotine and Tobacco Studies.

  26. Evaluating input‐ and output‐specific ...

    The goal of this paper is to measure the relative technical inefficiency of Polish district courts for the period 2017-2021 in civil, criminal, and family cases. Unlike other papers on justice (in)efficiency, this study uses input-specific and output-specific production models combined with the Data Envelopment Analysis technique.

  27. Global trends in climate change litigation: 2024 snapshot

    It is the sixth report in the series, produced by the Grantham Research Institute in partnership with the Sabin Center for Climate Change Law and drawing on the Sabin Center's Climate Change Litigation Databases. Each report provides a synthesis of the latest research and developments in the climate change litigation field. Key messages

  28. Systems

    The Cyber Evaluation and Management Toolkit (CEMT) is an open-source university research-based plugin for commercial digital model-based systems engineering tools that streamlines conducting cybersecurity risk evaluations for complex cyber-physical systems. The authors developed this research tool to assist the Australian Defence Force (ADF) with the cybersecurity evaluation of complicated ...