
Research Findings – Types, Examples and Writing Guide

Research Findings

Definition:

Research findings refer to the results obtained from a study or investigation conducted through a systematic and scientific approach. These findings are the outcomes of the data analysis, interpretation, and evaluation carried out during the research process.

Types of Research Findings

There are two main types of research findings:

Qualitative Findings

Qualitative research is an exploratory research method used to understand the complexities of human behavior and experience. Qualitative findings are non-numerical, descriptive data that convey the meaning and interpretation of what was collected. Examples of qualitative findings include quotes from participants, themes that emerge from the data, and descriptions of experiences and phenomena.

Quantitative Findings

Quantitative research is a research method that uses numerical data and statistical analysis to measure and quantify a phenomenon or behavior. Quantitative findings include numerical data such as mean, median, and mode, as well as statistical analyses such as t-tests, ANOVA, and regression analysis. These findings are often presented in tables, graphs, or charts.
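
Descriptive statistics such as the mean, median, and mode mentioned above can be computed in a few lines of Python using only the standard library. This is a minimal sketch with hypothetical example scores, not part of the original guide:

```python
# Minimal sketch: common descriptive statistics with Python's standard
# library. The scores below are hypothetical example data.
from statistics import mean, median, mode, stdev

scores = [72, 85, 85, 90, 68, 77, 85, 91]  # hypothetical survey scores

print(f"mean:   {mean(scores)}")    # 81.625
print(f"median: {median(scores)}")  # 85.0
print(f"mode:   {mode(scores)}")    # 85
print(f"sd:     {stdev(scores):.2f}")
```

In a real study these values would be reported in a table alongside the inferential statistics (t-tests, ANOVA, regression) appropriate to the research design.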

Both qualitative and quantitative findings are important in research and can provide different insights into a research question or problem. Combining both types of findings can provide a more comprehensive understanding of a phenomenon and improve the validity and reliability of research results.

Parts of Research Findings

Research findings typically consist of several parts, including:

  • Introduction: This section provides an overview of the research topic and the purpose of the study.
  • Literature Review: This section summarizes previous research studies and findings that are relevant to the current study.
  • Methodology: This section describes the research design, methods, and procedures used in the study, including details on the sample, data collection, and data analysis.
  • Results: This section presents the findings of the study, including statistical analyses and data visualizations.
  • Discussion: This section interprets the results and explains what they mean in relation to the research question(s) and hypotheses. It may also compare and contrast the current findings with previous research studies and explore any implications or limitations of the study.
  • Conclusion: This section provides a summary of the key findings and the main conclusions of the study.
  • Recommendations: This section suggests areas for further research and potential applications or implications of the study’s findings.

How to Write Research Findings

Writing research findings requires careful planning and attention to detail. Here are some general steps to follow when writing research findings:

  • Organize your findings: Before you begin writing, it’s essential to organize your findings logically. Consider creating an outline or flowchart that maps the main points you want to make and how they relate to one another.
  • Use clear and concise language: When presenting your findings, use clear and concise language that is easy to understand. Avoid jargon or technical terms unless they are necessary to convey your meaning.
  • Use visual aids: Visual aids such as tables, charts, and graphs can be helpful in presenting your findings. Label and title your visual aids clearly, and make sure they are easy to read.
  • Use headings and subheadings: Headings and subheadings can help organize your findings and make them easier to read. Make sure they are clear and descriptive.
  • Interpret your findings: When presenting your findings, it’s important to provide some interpretation of what the results mean. This can include discussing how your findings relate to the existing literature, identifying any limitations of your study, and suggesting areas for future research.
  • Be precise and accurate: Use precise and accurate language. Avoid generalizations or overstatements, and be careful not to misrepresent your data.
  • Edit and revise: Once you have written your research findings, edit and revise them carefully. Check for grammar and spelling errors, make sure your formatting is consistent, and ensure that your writing is clear and concise.

Research Findings Example

The following is a sample research findings section for students:

Title: The Effects of Exercise on Mental Health

Sample: 500 participants, both men and women, between the ages of 18 and 45.

Methodology: Participants were divided into two groups. The first group engaged in 30 minutes of moderate-intensity exercise five times a week for eight weeks. The second group did not exercise during the study period. Participants in both groups completed a questionnaire that assessed their mental health before and after the study period.

Findings: The group that engaged in regular exercise reported a significant improvement in mental health compared to the control group. Specifically, they reported lower levels of anxiety and depression, improved mood, and increased self-esteem.

Conclusion: Regular exercise can have a positive impact on mental health and may be an effective intervention for individuals experiencing symptoms of anxiety or depression.

Applications of Research Findings

Research findings can be applied in various fields to improve processes, products, services, and outcomes. Here are some examples:

  • Healthcare: Research findings in medicine and healthcare can be applied to improve patient outcomes, reduce morbidity and mortality rates, and develop new treatments for various diseases.
  • Education: Research findings in education can be used to develop effective teaching methods, improve learning outcomes, and design new educational programs.
  • Technology: Research findings in technology can be applied to develop new products, improve existing products, and enhance user experiences.
  • Business: Research findings in business can be applied to develop new strategies, improve operations, and increase profitability.
  • Public Policy: Research findings can be used to inform public policy decisions on issues such as environmental protection, social welfare, and economic development.
  • Social Sciences: Research findings in social sciences can be used to improve understanding of human behavior and social phenomena, inform public policy decisions, and develop interventions to address social issues.
  • Agriculture: Research findings in agriculture can be applied to improve crop yields, develop new farming techniques, and enhance food security.
  • Sports: Research findings in sports can be applied to improve athlete performance, reduce injuries, and develop new training programs.

When to use Research Findings

Research findings can be used in a variety of situations, depending on the context and the purpose. Here are some examples of when research findings may be useful:

  • Decision-making: Research findings can be used to inform decisions in various fields, such as business, education, healthcare, and public policy. For example, a business may use market research findings to make decisions about new product development or marketing strategies.
  • Problem-solving: Research findings can be used to solve problems or challenges in various fields, such as healthcare, engineering, and social sciences. For example, medical researchers may use findings from clinical trials to develop new treatments for diseases.
  • Policy development: Research findings can be used to inform the development of policies in various fields, such as environmental protection, social welfare, and economic development. For example, policymakers may use research findings to develop policies aimed at reducing greenhouse gas emissions.
  • Program evaluation: Research findings can be used to evaluate the effectiveness of programs or interventions in various fields, such as education, healthcare, and social services. For example, educational researchers may use findings from evaluations of educational programs to improve teaching and learning outcomes.
  • Innovation: Research findings can be used to inspire or guide innovation in various fields, such as technology and engineering. For example, engineers may use research findings on materials science to develop new and innovative products.

Purpose of Research Findings

The purpose of research findings is to contribute to the knowledge and understanding of a particular topic or issue. Research findings are the result of a systematic and rigorous investigation of a research question or hypothesis, using appropriate research methods and techniques.

The main purposes of research findings are:

  • To generate new knowledge: Research findings contribute to the body of knowledge on a particular topic by adding new information, insights, and understanding to the existing knowledge base.
  • To test hypotheses or theories: Research findings can be used to test hypotheses or theories that have been proposed in a particular field or discipline. This helps to determine the validity and reliability of the hypotheses or theories, and to refine or develop new ones.
  • To inform practice: Research findings can be used to inform practice in various fields, such as healthcare, education, and business. By identifying best practices and evidence-based interventions, research findings can help practitioners to make informed decisions and improve outcomes.
  • To identify gaps in knowledge: Research findings can help to identify gaps in knowledge and understanding of a particular topic, which can then be addressed by further research.
  • To contribute to policy development: Research findings can be used to inform policy development in various fields, such as environmental protection, social welfare, and economic development. By providing evidence-based recommendations, research findings can help policymakers to develop effective policies that address societal challenges.

Characteristics of Research Findings

Research findings have several key characteristics that distinguish them from other types of information or knowledge. Here are some of the main characteristics of research findings:

  • Objective: Research findings are based on a systematic and rigorous investigation of a research question or hypothesis, using appropriate research methods and techniques. As such, they are generally considered to be more objective and reliable than other types of information.
  • Empirical: Research findings are based on empirical evidence, which means that they are derived from observations or measurements of the real world. This gives them a high degree of credibility and validity.
  • Generalizable: Research findings are often intended to be generalizable to a larger population or context beyond the specific study. This means that the findings can be applied to other situations or populations with similar characteristics.
  • Transparent: Research findings are typically reported in a transparent manner, with a clear description of the research methods and data analysis techniques used. This allows others to assess the credibility and reliability of the findings.
  • Peer-reviewed: Research findings are often subject to a rigorous peer-review process, in which experts in the field review the research methods, data analysis, and conclusions of the study. This helps to ensure the validity and reliability of the findings.
  • Reproducible: Research findings are often designed to be reproducible, meaning that other researchers can replicate the study using the same methods and obtain similar results. This helps to ensure the validity and reliability of the findings.

Advantages of Research Findings

Research findings have many advantages, which make them valuable sources of knowledge and information. Here are some of the main advantages of research findings:

  • Evidence-based: Research findings are based on empirical evidence, which means that they are grounded in data and observations from the real world. This makes them a reliable and credible source of information.
  • Inform decision-making: Research findings can be used to inform decision-making in various fields, such as healthcare, education, and business. By identifying best practices and evidence-based interventions, research findings can help practitioners and policymakers to make informed decisions and improve outcomes.
  • Identify gaps in knowledge: Research findings can help to identify gaps in knowledge and understanding of a particular topic, which can then be addressed by further research. This contributes to the ongoing development of knowledge in various fields.
  • Improve outcomes: Research findings can be used to develop and implement evidence-based practices and interventions, which have been shown to improve outcomes in various fields, such as healthcare, education, and social services.
  • Foster innovation: Research findings can inspire or guide innovation in various fields, such as technology and engineering. By providing new information and understanding of a particular topic, research findings can stimulate new ideas and approaches to problem-solving.
  • Enhance credibility: Research findings are generally considered to be more credible and reliable than other types of information, as they are based on rigorous research methods and are subject to peer-review processes.

Limitations of Research Findings

While research findings have many advantages, they also have some limitations. Here are some of the main limitations of research findings:

  • Limited scope: Research findings are typically based on a particular study or set of studies, which may have a limited scope or focus. This means that they may not be applicable to other contexts or populations.
  • Potential for bias: Research findings can be influenced by various sources of bias, such as researcher bias, selection bias, or measurement bias. This can affect the validity and reliability of the findings.
  • Ethical considerations: Research findings can raise ethical considerations, particularly in studies involving human subjects. Researchers must ensure that their studies are conducted in an ethical and responsible manner, with appropriate measures to protect the welfare and privacy of participants.
  • Time and resource constraints: Research studies can be time-consuming and require significant resources, which can limit the number and scope of studies that are conducted. This can lead to gaps in knowledge or a lack of research on certain topics.
  • Complexity: Some research findings can be complex and difficult to interpret, particularly in fields such as science or medicine. This can make it challenging for practitioners and policymakers to apply the findings to their work.
  • Lack of generalizability: While research findings are intended to be generalizable to larger populations or contexts, there may be factors that limit their generalizability. For example, cultural or environmental factors may influence how a particular intervention or treatment works in different populations or contexts.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


A Step-by-Step Guide to Writing a Scientific Review Article


Manisha Bahl, A Step-by-Step Guide to Writing a Scientific Review Article, Journal of Breast Imaging, Volume 5, Issue 4, July/August 2023, Pages 480–485, https://doi.org/10.1093/jbi/wbad028


Scientific review articles are comprehensive, focused reviews of the scientific literature written by subject matter experts. The task of writing a scientific review article can seem overwhelming; however, it can be managed by using an organized approach and devoting sufficient time to the process. The process involves selecting a topic about which the authors are knowledgeable and enthusiastic, conducting a literature search and critical analysis of the literature, and writing the article, which is composed of an abstract, introduction, body, and conclusion, with accompanying tables and figures. This article, which focuses on the narrative or traditional literature review, is intended to serve as a guide with practical steps for new writers. Tips for success are also discussed, including selecting a focused topic, maintaining objectivity and balance while writing, avoiding tedious data presentation in a laundry list format, moving from descriptions of the literature to critical analysis, avoiding simplistic conclusions, and budgeting time for the overall process.



How to Write a Results Section | Tips & Examples

Published on August 30, 2022 by Tegan George. Revised on July 18, 2023.

A results section is where you report the main findings of the data collection and analysis you conducted for your thesis or dissertation. You should report all relevant results concisely and objectively, in a logical order. Don’t include subjective interpretations of why you found these results or what they mean—any evaluation should be saved for the discussion section.



When conducting research, it’s important to report the results of your study prior to discussing your interpretations of it. This gives your reader a clear idea of exactly what you found and keeps the data itself separate from your subjective analysis.

Here are a few best practices:

  • Your results should always be written in the past tense.
  • While the length of this section depends on how much data you collected and analyzed, it should be written as concisely as possible.
  • Only include results that are directly relevant to answering your research questions. Avoid speculative or interpretative words like “appears” or “implies.”
  • If you have other results you’d like to include, consider adding them to an appendix or footnotes.
  • Always start out with your broadest results first, and then flow into your more granular (but still relevant) ones. Think of it like a shoe store: first discuss the shoes as a whole, then the sneakers, boots, sandals, etc.


If you conducted quantitative research, you’ll likely be working with the results of some sort of statistical analysis.

Your results section should report the results of any statistical tests you used to compare groups or assess relationships between variables. It should also state whether or not each hypothesis was supported.

The most logical way to structure quantitative results is to frame them around your research questions or hypotheses. For each question or hypothesis, share:

  • A reminder of the type of analysis you used (e.g., a two-sample t test or simple linear regression). A more detailed description of your analysis should go in your methodology section.
  • A concise summary of each relevant result, both positive and negative. This can include any relevant descriptive statistics (e.g., means and standard deviations) as well as inferential statistics (e.g., t scores, degrees of freedom, and p values). Remember, these numbers are often placed in parentheses.
  • A brief statement of how each result relates to the question, or whether the hypothesis was supported. You can briefly mention any results that didn’t fit with your expectations and assumptions, but save any speculation on their meaning or consequences for your discussion and conclusion.

A note on tables and figures

In quantitative research, it’s often helpful to include visual elements such as graphs, charts, and tables, but only if they are directly relevant to your results. Give these elements clear, descriptive titles and labels so that your reader can easily understand what is being shown. If you want to include any other visual elements that are more tangential in nature, consider adding a figure and table list.

As a rule of thumb:

  • Tables are used to communicate exact values, giving a concise overview of various results
  • Graphs and charts are used to visualize trends and relationships, giving an at-a-glance illustration of key findings

Don’t forget to also mention any tables and figures you used within the text of your results section. Summarize or elaborate on specific aspects you think your reader should know about rather than merely restating the same numbers already shown.

A two-sample t test was used to test the hypothesis that higher social distance from environmental problems would reduce the intent to donate to environmental organizations, with donation intention (recorded as a score from 1 to 10) as the outcome variable and social distance (categorized as either a low or high level of social distance) as the predictor variable. Social distance was found to be positively correlated with donation intention, t(98) = 12.19, p < .001, with the donation intention of the high social distance group 0.28 points higher, on average, than that of the low social distance group (see Figure 1). This contradicts the initial hypothesis that social distance would decrease donation intention, and in fact suggests a small effect in the opposite direction.
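
For readers curious how a statistic like the one reported above is obtained, here is a minimal sketch of Welch's two-sample t statistic using only Python's standard library. The group data are hypothetical, and real analyses would more commonly use a statistics package such as scipy (`scipy.stats.ttest_ind`):

```python
# Minimal sketch of Welch's two-sample t statistic (no equal-variance
# assumption), standard library only. The data are hypothetical.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic comparing the means of samples a and b."""
    v_a, v_b = variance(a), variance(b)  # sample variances (n - 1 denominator)
    return (mean(a) - mean(b)) / sqrt(v_a / len(a) + v_b / len(b))

high_distance = [7, 8, 6, 9, 7]  # hypothetical donation-intention scores (1-10)
low_distance  = [6, 7, 5, 8, 6]

print(f"t = {welch_t(high_distance, low_distance):.2f}")  # t = 1.39
```

In a report, the t statistic would be paired with its degrees of freedom and a p value, as in the example paragraph above.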

Example of using figures in the results section

Figure 1: Intention to donate to environmental organizations based on social distance from impact of environmental damage.

In qualitative research, your results might not all be directly related to specific hypotheses. In this case, you can structure your results section around key themes or topics that emerged from your analysis of the data.

For each theme, start with general observations about what the data showed. You can mention:

  • Recurring points of agreement or disagreement
  • Patterns and trends
  • Particularly significant snippets from individual responses

Next, clarify and support these points with direct quotations. Be sure to report any relevant demographic information about participants. Further information (such as full transcripts, if appropriate) can be included in an appendix.

When asked about video games as a form of art, the respondents tended to believe that video games themselves are not an art form, but agreed that creativity is involved in their production. The criteria used to identify artistic video games included design, story, music, and creative teams. One respondent (male, 24) noted a difference in creativity between popular video game genres:

“I think that in role-playing games, there’s more attention to character design, to world design, because the whole story is important and more attention is paid to certain game elements […] so that perhaps you do need bigger teams of creative experts than in an average shooter or something.”

Responses suggest that video game consumers consider some types of games to have more artistic potential than others.

Your results section should objectively report your findings, presenting only brief observations in relation to each question, hypothesis, or theme.

It should not speculate about the meaning of the results or attempt to answer your main research question. Detailed interpretation of your results is more suitable for your discussion section, while synthesis of your results into an overall answer to your main research question is best left for your conclusion.


Checklist: Research results

  • I have completed my data collection and analyzed the results.
  • I have included all results that are relevant to my research questions.
  • I have concisely and objectively reported each result, including relevant descriptive statistics and inferential statistics.
  • I have stated whether each hypothesis was supported or refuted.
  • I have used tables and figures to illustrate my results where appropriate.
  • All tables and figures are correctly labelled and referred to in the text.
  • There is no subjective interpretation or speculation on the meaning of the results.

You've finished writing up your results! Use the other checklists to further improve your thesis.


The results chapter of a thesis or dissertation presents your research results concisely and objectively.

In quantitative research, for each question or hypothesis, state:

  • The type of analysis used
  • Relevant results in the form of descriptive and inferential statistics
  • Whether or not the alternative hypothesis was supported
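
As an illustration of the statistics mentioned above (not part of the original guidance), here is a minimal Python sketch with invented scores: it computes each group's mean and SD, runs an independent-samples t-test by hand, and states whether the alternative hypothesis is supported.

```python
# Hedged sketch: descriptive + inferential statistics for one hypothesis,
# using invented data for two independent groups.
from statistics import mean, stdev

control = [72, 75, 70, 68, 74, 71, 69, 73]     # hypothetical scores
treatment = [78, 82, 77, 80, 79, 81, 76, 83]   # hypothetical scores

m_c, m_t = mean(control), mean(treatment)
s_c, s_t = stdev(control), stdev(treatment)
n_c, n_t = len(control), len(treatment)

# Pooled standard deviation, then the independent-samples t statistic
sp = (((n_c - 1) * s_c**2 + (n_t - 1) * s_t**2) / (n_c + n_t - 2)) ** 0.5
t = (m_t - m_c) / (sp * (1 / n_c + 1 / n_t) ** 0.5)
df = n_c + n_t - 2

# Two-tailed critical value for df = 14 at alpha = .05 (from a t-table)
supported = abs(t) > 2.145
print(f"Control M = {m_c:.2f} (SD = {s_c:.2f}); "
      f"Treatment M = {m_t:.2f} (SD = {s_t:.2f}); "
      f"t({df}) = {t:.2f}; hypothesis "
      f"{'supported' if supported else 'not supported'}")
```

In practice a statistics package (e.g., SciPy or R) would also report an exact p value; comparing against a tabled critical value here is only for illustration.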

In qualitative research, for each question or theme, describe:

  • Recurring patterns
  • Significant or representative individual responses
  • Relevant quotations from the data

Don’t interpret or speculate in the results chapter.

Results are usually written in the past tense, because they describe the outcome of completed actions.

The results chapter or section simply and objectively reports what you found, without speculating on why you found these results. The discussion interprets the meaning of the results, puts them in context, and explains why they matter.

In qualitative research, results and discussion are sometimes combined. But in quantitative research, it’s considered important to separate the objective results from your interpretation of them.


George, T. (2023, July 18). How to Write a Results Section | Tips & Examples. Scribbr. Retrieved September 16, 2024, from https://www.scribbr.com/dissertation/results/

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Chapter 9. Methods for Literature Reviews

Guy Paré and Spyros Kitsiou.

9.1. Introduction

Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).

Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).

The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).

When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the fields of medical informatics publish review articles of some type.

The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.

9.2. Overview of the Literature Review Process and Steps

As explained in Templier and Paré (2015), there are six generic steps involved in conducting a review article:

  • formulating the research question(s) and objective(s),
  • searching the extant literature,
  • screening for inclusion,
  • assessing the quality of primary studies,
  • extracting data, and
  • analyzing data.

Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).

Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself (Petticrew & Roberts, 2006), identify the review’s main objective(s) (Okoli & Schabram, 2010), and define the concepts or variables at the heart of their synthesis (Cooper & Hedges, 2009; Webster & Watson, 2002). Importantly, they also need to articulate the research question(s) they propose to investigate (Kitchenham & Charters, 2007). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.

Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review (Cooper, 1988). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field (Paré et al., 2015). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate (Cooper, 1988).

Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step (Levy & Ellis, 2006; vom Brocke et al., 2009). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance (Petticrew & Roberts, 2006). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process, and a procedure to resolve disagreements must also be in place (Liberati et al., 2009; Shea et al., 2009).

Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings (Petticrew & Roberts, 2006). Ascribing quality scores to each primary study, or considering through domain-based evaluations which study components have or have not been designed and executed appropriately, makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity (Shea et al., 2009).

Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest (Cooper & Hedges, 2009). Indeed, the type of data that should be recorded mainly depends on the initial research questions (Okoli & Schabram, 2010). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results (Cooper & Hedges, 2009).

Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature (Jesson et al., 2011). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence (Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005; Thomas & Harden, 2008).
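
The screening step calls for at least two independent reviewers plus a procedure for resolving disagreements. One common way to summarize how well two screeners agree is Cohen's kappa — a measure the chapter itself does not prescribe — sketched here with invented include/exclude decisions:

```python
# Hedged sketch: inter-rater agreement on study screening via Cohen's kappa.
# All decisions are invented for illustration.
from collections import Counter

reviewer_a = ["inc", "inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc", "exc"]
reviewer_b = ["inc", "exc", "exc", "exc", "inc", "exc", "inc", "inc", "exc", "exc"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each reviewer's marginal label rates
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / n**2
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(reviewer_a, reviewer_b)
# Items the reviewers disagree on go to discussion or a third reviewer
disagreements = [i for i, (x, y) in enumerate(zip(reviewer_a, reviewer_b)) if x != y]
print(f"kappa = {kappa:.2f}; items needing resolution: {disagreements}")
```

Higher kappa indicates agreement beyond what the reviewers' individual inclusion rates would produce by chance.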

9.3. Types of Review Articles and Brief Illustrations

eHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired by Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.

9.3.1. Narrative Reviews

The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion and can lead to biased interpretations or inferences ( Green et al., 2006 ). There are several narrative reviews in the particular eHealth domain, as in all fields, which follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).

Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research (Cronin et al., 2008). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature (Green et al., 2006). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues (Green et al., 2006).

Recently, there have been several efforts to introduce more rigour in narrative reviews that will elucidate common pitfalls and bring changes into their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows the systematic data processing approach comprised of three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.

Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health (m-health) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how development of these systems is carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar. The search was performed using several terms and free text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.

9.3.2. Descriptive or Mapping Reviews

The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).
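
The frequency-analysis idea described above can be pictured with a toy sketch (all study records invented): each included study is a unit of analysis, and the review tabulates extracted characteristics such as publication year, research method, and direction of outcome.

```python
# Hedged sketch: frequency analysis over characteristics extracted from
# included studies in a descriptive review. Records are invented.
from collections import Counter

studies = [
    {"year": 2012, "method": "survey", "outcome": "positive"},
    {"year": 2013, "method": "case study", "outcome": "non-significant"},
    {"year": 2013, "method": "survey", "outcome": "positive"},
    {"year": 2014, "method": "experiment", "outcome": "negative"},
    {"year": 2014, "method": "survey", "outcome": "positive"},
]

# Tally each characteristic of interest across the sample of studies
method_freq = Counter(s["method"] for s in studies)
year_freq = Counter(s["year"] for s in studies)
outcome_freq = Counter(s["outcome"] for s in studies)
print(method_freq.most_common(), outcome_freq.most_common())
```

Tables of such frequencies are the typical quantitative output of a descriptive or mapping review.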

In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.

An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in MEDLINE using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).
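
As a purely illustrative aside (the counts below are invented, not DeShazo et al.'s data), an "average annual growth rate" of this kind is commonly computed as a compound annual growth rate over the study window:

```python
# Hedged sketch: compound annual growth rate from first- and last-year
# publication counts. Both counts are invented for illustration.
first_year_count = 1_200   # hypothetical articles indexed in year 1
last_year_count = 10_340   # hypothetical articles indexed in year 20
intervals = 19             # 20 yearly data points span 19 intervals

cagr = (last_year_count / first_year_count) ** (1 / intervals) - 1
print(f"Average annual growth rate: {cagr:.1%}")
```

A bibliometric study might instead fit a regression to all twenty yearly counts; the two-point formula is just the simplest version of the arithmetic.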

9.3.3. Scoping Reviews

Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.

Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).

One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011). These authors reviewed the existing literature on personal health record (PHR) systems, including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to PHRs were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and patient satisfaction, more research is needed to evaluate the results of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other studies on the use of PHRs. Hence, they suggested that more research is needed to address the current lack of understanding of the optimal functionality and usability of these systems, and of how they can play a beneficial role in supporting patient self-management (Archer et al., 2011).

9.3.4. Forms of Aggregative Reviews

Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies, assessing the effectiveness of health information technologies and interventions (Ammenwerth & de Keizer, 2004; DeShazo et al., 2009). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.

Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meets a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest to support evidence-based practice (Liberati et al., 2009). They adhere closely to explicit scientific principles (Liberati et al., 2009) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them (Cook, Mulrow, & Haynes, 1997). The main procedures of a systematic review involve:

  • Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
  • Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
  • Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
  • Analyzing data using quantitative or qualitative methods.
  • Presenting results in summary of findings tables.
  • Interpreting results and drawing conclusions.

Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate from each study by outcome of interest an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can create more precise and reliable estimates of intervention effects than those derived from individual studies alone, when these are examined independently as discrete sources of information.
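
The weighted-average calculation described above can be sketched for the simplest case, a fixed-effect inverse-variance meta-analysis, using invented effect sizes and standard errors:

```python
# Hedged sketch: fixed-effect inverse-variance meta-analysis.
# Effect sizes and standard errors for three studies are invented.
import math

effects = [0.30, 0.50, 0.42]   # hypothetical per-study effect sizes
ses = [0.10, 0.15, 0.12]       # hypothetical standard errors

# Each study is weighted by the inverse of its variance, so larger
# (more precise) studies contribute more to the summary effect.
weights = [1 / se**2 for se in ses]
summary = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_summary = math.sqrt(1 / sum(weights))

# 95% confidence interval around the pooled point estimate
ci = (summary - 1.96 * se_summary, summary + 1.96 * se_summary)
print(f"Summary effect = {summary:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

A random-effects model would additionally estimate between-study variance and widen the weights accordingly; this fixed-effect version only shows the core idea of precision-weighted pooling.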

The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery, with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all RCTs eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.

Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods, simply because there is extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods such as vote counting, content analysis, classification schemes and tabulations as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as a qualitative systematic review.
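
Vote counting, the simplest of the qualitative synthesis methods named above, can be sketched as a tally of study outcome directions (the directions below are invented):

```python
# Hedged sketch: vote counting across included studies. The reported
# direction of effect for each study is invented for illustration.
from collections import Counter

directions = ["positive", "positive", "non-significant", "positive",
              "negative", "positive", "non-significant"]

tally = Counter(directions)
modal_direction, votes = tally.most_common(1)[0]
print(f"{modal_direction}: {votes} of {len(directions)} studies")
```

Vote counting ignores study size and effect magnitude, which is exactly why meta-analysis is preferred whenever pooling is defensible.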

A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO (www.crd.york.ac.uk/prospero/) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. Instead, the authors resorted to narrative analysis and synthesis to describe the effectiveness of handheld computers in accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.

In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).

9.3.5. Realist Reviews

Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).

To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how “complex interventions” work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable ( Shepperd et al., 2009 ). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories ( Rousseau et al., 2008 ).

The main objective pursued in the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search in four databases between 2003 and 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized information to bring forward those mechanisms where patient portals contribute to outcomes and the variation in outcomes across different contexts.
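Duplicate study selection of the kind described above — two reviewers independently screening each record against a pre-established list of inclusion and exclusion criteria — is a common safeguard in systematic and realist reviews. As an illustrative sketch only (the `Record` class and the decision dictionaries below are invented for this example and are not part of Otte-Trojel et al.'s actual procedure), the reconciliation logic can be expressed as:

```python
from dataclasses import dataclass

@dataclass
class Record:
    id: str
    title: str

def screen(records, decisions_a, decisions_b):
    """Compare two reviewers' include/exclude decisions for each record.

    Returns three lists of record ids: agreed inclusions, agreed
    exclusions, and conflicts to be resolved by discussion or by a
    third reviewer.
    """
    included, excluded, conflicts = [], [], []
    for r in records:
        a, b = decisions_a[r.id], decisions_b[r.id]
        if a and b:
            included.append(r.id)
        elif not a and not b:
            excluded.append(r.id)
        else:
            conflicts.append(r.id)  # disagreement: resolve before proceeding
    return included, excluded, conflicts
```

Tracking agreements and conflicts explicitly, rather than merging decisions informally, is what makes the selection step reproducible and auditable.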

9.3.6. Critical Reviews

Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results ( Baumeister & Leary, 1997 ; Kirkevold, 1997 ). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement ( Kirkevold, 1997 ).

Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search on multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. Accordingly, they provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.

9.4. Summary

Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.

Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).

Typology of Literature Reviews (adapted from Paré et al., 2015).

As shown in Table 9.1 , each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles ( Green et al., 2006 ). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an "a priori" review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research ( Paré et al., 2015 ). Hence, in order to select the most appropriate type of review, it is critical to know, before embarking on a review project, why the research synthesis is conducted and what types of methods are best aligned with the pursued goals.

9.5. Concluding Remarks

In light of the increased use of evidence-based practice and research generating stronger evidence ( Grady et al., 2011 ; Lyden et al., 2013 ), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews we used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.

We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another ( Paré et al., 2015 ). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review — be it qualitative, quantitative or mixed — is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. For one thing, reliability is related to the reproducibility of the review process and steps, which is facilitated by a comprehensive documentation of the literature search process, extraction, coding and analysis performed in the review. Whether or not the search is comprehensive, and whether or not it involves a methodical approach to data extraction and synthesis, the review should document explicitly and transparently the steps and approach used in its development. Next, validity characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches ( vom Brocke et al., 2009 ). In short, the rigour of any review article is reflected by the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015), which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.

To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.

  • Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
  • Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
  • Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
  • Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
  • A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems (ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
  • Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
  • Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
  • Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, NJ: John Wiley & Sons Inc; 2009.
  • Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
  • Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
  • Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
  • Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
  • Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
  • Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology. 2013; 13 :48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
  • Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
  • Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
  • Deshazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in MeSH. BMC Medical Informatics and Decision Making. 2009; 9 :7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
  • Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
  • Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
  • Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
  • Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards ( rameses ). bmc Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
  • Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews. 2013; 12 : CD007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
  • Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, NJ: Wiley-Blackwell; 2008.
  • Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
  • King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
  • Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
  • Kitchenham B., Charters S. EBSE Technical Report Version 2.3. Keele & Durham, UK: Keele University & University of Durham; 2007. Guidelines for performing systematic literature reviews in software engineering.
  • Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
  • Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
  • Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
  • Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
  • Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151 (4):W-65. [ PubMed : 19622512 ]
  • Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
  • Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. BMC Medical Informatics and Decision Making. 2014; 14 :56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
  • Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 (5040) [ PubMed : 23945367 ] [ CrossRef ]
  • Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. BMC Medicine. 2003; 1 :2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
  • Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
  • Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems; 2011. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1221&context=ecis2011 .
  • Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. SSRN Electronic Journal. 2010.
  • Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
  • Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
  • Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
  • Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
  • Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
  • Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
  • Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Publishing Co; 2006.
  • Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
  • Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
  • Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62 (10):1013–1020. [ PubMed : 19230606 ]
  • Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
  • Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
  • Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology. 2011; 11 (1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
  • Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
  • Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
  • Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology. 2008; 8 (1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
  • Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems (ECIS 2009); Verona, Italy. 2009.
  • Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
  • Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

  • Cite this page: Paré G, Kitsiou S. Chapter 9 Methods for Literature Reviews. In: Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.



Structuring a qualitative findings section

Reporting the findings from a qualitative study in a way that is interesting, meaningful, and trustworthy can be a struggle. Those new to qualitative research often find themselves trying to quantify everything to make it seem more “rigorous,” or asking themselves, “Do I really need this much data to support my findings?” Length requirements and word limits imposed by academic journals can also make the process difficult because qualitative data takes up a lot of room! In this post, I’m going to outline a few ways to structure qualitative findings, and a few tips and tricks to develop a strong findings section.

There are A LOT of different ways to structure a qualitative findings section. I’m going to focus on the following:

Tables (but not ONLY tables)

Themes/Findings as Headings

Research Questions as Headings

Anchoring Quotations

Anchoring Excerpts from Field Notes

Before I get into each of those, however, here is a bit of general guidance. First, make sure that you are providing adequate direct evidence for your findings. Second, be sure to integrate that direct evidence into the narrative. In other words, if, for example, you are using quotes from a participant to support one of your themes, you should present and explain the theme (akin to a thesis statement), introduce the supporting quote, present it, explain the quote, and connect it to your finding. Below is an example of what I mean from one of my articles on implementation challenges in personalized learning ( Bingham, Pane, Steiner, & Hamilton, 2018 ). The finding supported by this paragraph was: “Inadequate Teacher Preparation, Development, and Support”

To mitigate the difficulties of enacting personalized learning in their classrooms, teachers wanted a model from which they could extrapolate practices that might serve them well in their own classrooms. As one teacher explained, “the ideas and the implementation is what’s lacking I think. I don’t feel like I know what I’m doing. I need to see things modeled and I need to know what it is. I need to be able to touch it. Show me a model, model for me.” Unfortunately, teachers had little to draw on for effective practices. Professional development was not as helpful as teachers had hoped, outside training on using the digital content or learning platforms fell short, and few examples or best practices existed for teachers to use in their own classrooms. As a result, teachers had to work harder to address gaps in their own knowledge. 

Finally, you should not leave quotations to speak for themselves and you should not have quotations as standalone paragraphs or sentences, with no introduction or explanation. Don’t make the reader do the analytic work for you.

Now, on to some specific ways to structure your findings section.

1). Tables

Tables can be used to give an overview of what you’re about to present in your findings, including the themes, some supporting evidence, and the meaning/explanation of the theme. Tables can be a useful way to give readers a quick reference for what your findings are. However, tables should not be used as your ONLY means of presenting those findings.

If you are choosing to use a table to present qualitative findings, you must also describe the findings in context, and provide supporting evidence in a narrative format (as in the paragraph outlined in the previous section).

2). Themes/Findings as Headings

Another option is to present your themes/findings as general or specific headings in your findings section. Here are some examples of findings as general headings:

Importance of Data Utilization and Analysis in the Classroom

The Role of Student Discipline and Accountability

Differences in the Experiences of Teachers

As you can see, these headings do not describe precisely what the finding is, but they give the general idea/subject of the finding. You can have sub-headings within these findings that are more specific if you would like.

Another way to do this would be to be a bit more specific. For example:

School Infrastructure and Available Technology Do Not yet Fully Align with Teachers’ Needs 

Structural support for high levels of technology use is not fully developed 

Using multiple sources of digital content led to alignment issues 

Measures of School and Student Success are Misaligned

Traditional methods of measuring student progress conflict with personalized learning

Difficulties communicating new measures of student success to colleges and universities

As you can see, here the findings are shown as headings, but are structured as specific sentences, with sub-themes included as well.

3). Research Questions as Headings

You can also present your findings using your research questions as the headings in the findings section. This is a useful strategy that ensures you’re answering your research questions and allows the reader to quickly ascertain where the answers to your research questions are. Often, you will also need to present themes within each research question to keep yourself organized and to adequately flesh out your findings. The example below presents a research question from my study of blended learning at a charter high school (Bingham, 2016), and an excerpt from my findings that answered that research question. I have also included the associated theme.

Research Question 1: What challenges, if any, do teachers face in implementing a blended model in a school’s first year?

Theme: TROUBLESHOOTING AND TASK-MANAGING: TECHNOLOGY USE IN THE CLASSROOM

In the original vision for instruction at Blended Academy, technology was to be an integral part of students’ learning, meant to allow students to find their own answers to their questions, to explore their personal interests, and to provide multiple opportunities for learning. The use of iPods in the classroom was partially intended to serve the social-emotional component of the model, allowing students to enjoy music and to “tune out” from other classroom activities when working on Digital X. Further, the iPods would allow students to listen to podcasts or teacher-created content at any time, in any location. However, prior to the school’s opening, little attention was paid to the management of these devices, and their potential for misuse. As a result, teachers spent much of their time managing students’ technology use, troubleshooting, and developing classroom procedures to ensure that technology use was relevant to learning. For example, in Ms. L’s classroom, she attempted to ensure learning was happening by instituting “Technology-Free” periods in the classroom. When students had to be working on their laptops in order to complete lessons or quizzes, the majority of her time was spent walking from student to student, watching for off-task behavior, and calling out students for how long they were “logged in” to the digital curriculum. In one typical interaction, Ms. L admonished one student, saying “It says you only logged in for one minute . . . when are you going to finish your English if you only logged in one minute today?” The difficulties around ensuring students were using technology productively resulted in teachers “hovering” over students, making it difficult to provide targeted instructional help.
Teachers often responded to off-task behavior/ technology use by confiscating computers and devices or restricting their use, in order to ensure that students were working. However, because the majority of tasks were meant to be delivered online or through technological devices, this was not a productive or effective solution.

4). Vignettes

Vignettes can be a strategy to spark interest in your study, add narrative context, and provide a descriptive overview of your study/site/participants. They can also be used as a strategy to introduce themes. You can place them at the beginning of a paper, or at the start of the findings section, or in your discussion of each theme. They wouldn’t typically be the only representation of your findings that you present, but you can use them to hook the reader and provide a story that exemplifies findings, themes, contexts, participants, etc. Below is an example from one of my recent studies.

The Role of Pilot Teachers in Schoolwide Technology Integration

Blended High School is a lot like many other charter schools. Students wear uniforms, and as you walk through the halls, there is almost always a teacher issuing a demerit to a student who is not wearing the right shoes, or who hasn’t tucked in their shirt. In this school, however, teachers use technology in almost every facet of their instruction, operating in a school model that blends face-to-face and online learning in the classroom in order to personalize students’ learning experiences. It has, however, been a long road to this level of technology use. BHS’s first year of operation was, arguably, disastrous. Teachers were overwhelmed and students didn’t progress as expected. In one staff meeting toward the end of the school’s first year, teachers and administrators expressed frustration with each other and with the school model, with several teachers arguing that technology was hurting, not helping. The atmosphere was tense, with one teacher finally shrugging anxiously and saying “Maybe need to ask ourselves, ‘Is this the best model to use with some of our kids?’” Ultimately, by the end of the first year, technology was not a regular classroom practice. In BHS’s second year, the administration again pushed for full technology integration, but they wanted to start slow. In a fall semester staff meeting, the principal and the assistant principal ran what the principal referred to as a “technology therapy session,” where teachers could share their struggles with using technology to engage in PL. During the session, one of the new teachers mentions that she is having a difficult time letting go – changing her focus from lecturing to computer-based work. Another teacher worries about finding good online resources. Most of the teachers, new and veteran, are alarmed by the time it is taking for them to design lessons that integrate technology.
Some admit only engaging in technology use in a shallow way – uploading worksheets to Google Docs, recording PowerPoints, etc. A few months after the discussion in which teachers aired their fears and struggles, the principal leads the teachers in analyzing student data from that week and spends a bit of time highlighting the work of a few teachers whose students are doing particularly well and who have been able to use technology in everyday classroom practice. Those teachers are part of a small group of “pilot teachers,” each of whom has been experimenting with various technology-based practices, including testing new learning management systems, designing their own online modules with personalized student objectives, providing students with technology-facilitated immediate feedback, and using up-to-the-minute data to develop technology-guided small-group instruction. Over the course of the next several months, administrators encouraged teachers to continue to be transparent about their concerns and share those concerns in regular staff meetings. Administrators conferred with the pilot teachers, and administrators and teachers together set incremental goals based on the pilot teachers’ recommendations. In weekly staff meetings, the pilot teachers shared their progress, including concerns and challenges. They collaborated with the other teachers to find solutions and worked with the administration to get what they needed to enact those solutions. For example, after a push from the pilot teachers, administration increased funding for technology purchases and introduced shifts in the school schedule to allow for planning in order to help teachers manage the demands of a high-tech classroom. Because the pilot teachers emphasized how much time meaningful technology integration took, and knew what worked and what didn’t, they were able to train other teachers in high-tech practices and to make the case to administration for needed changes.
By BHS’s third year, teachers schoolwide were able to fully integrate technology in their classrooms. All teachers were using the same learning management system, which had been initially chosen and tested by a pilot teacher. In every classroom, teachers were also engaging with online modules, technology-facilitated breakout groups, and real-time technology-based data analysis – all of which were practices the pilot teachers had tested and shared in the second year. The consistent collaboration between administration and pilot teachers and pilot teachers and other teachers helped calibrate classroom changes to manage the conflict between existing practices and new high-tech practices. By focusing on student learning data, creating room for experimentation, collaborating consistently, and distributing the leadership for technology integration, teachers and administrators felt comfortable with the increasing reliance on tech-heavy practices.

I developed this vignette as a composite from my field notes and interviews and used it to set the stage for the rest of the findings section.

5). Anchoring Quotes

Using exemplar quotes from your participants is another way to structure your findings. In the following example, which also comes from Bingham et al. (2018), the finding itself is used as the heading, and the anchoring quotes come directly after the heading, prior to the rest of the narrative discussion of the finding. These quotations help provide some initial evidence and set the stage for what’s to come.

School Infrastructure and Available Technology Do Not Yet Fully Align With Teachers’ Needs

“I know that computer problems are an issue almost daily.” (Middle school personalized learning teacher)

“If the data was exactly what we needed, it would be easier. I think a lot of times we’re not using it enough because the way we’re using the data is not as effective as it should be.” (High school personalized learning teacher)

You can note the source next to or after the quote. This can be done with your chosen pseudonyms, or with a general description, as I've done above.

6). Anchoring Excerpts from Field Notes

Similarly, excerpts from field notes can be used to start your discussion of a finding. Again, the finding itself is used as the heading, and the excerpt from field notes supporting that finding comes directly after the heading, prior to the rest of the narrative discussion of the finding. The example below comes from a study in which I explored how a personalized learning model evolved over the course of three years (Bingham, 2017). I used excerpts from my field notes to open the discussion of each year.

Year 1: Navigating the disconnect between vision and practice

Walking into the large classroom space shared by Ms. Z and Ms. H, it is not immediately evident that these are high-tech PL classrooms. At first, there are no laptops out in either class. Both Ms. Z’s and Ms. H’s students are completing warm-up activities that are projected on each teacher’s white board. After a few minutes, Ms. Z’s students get up and get laptops. Ms. Z walks around to students and asks them what lesson from the digital curriculum they will be working on today. As Ms. Z speaks to a table of students, other students in the room listen to their iPods, sometimes singing loudly. Some students are on YouTube, watching music videos; others are messaging friends on GChat or Facebook. As Ms. Z makes her way around, students toggle back to the screen devoted to the digital curriculum. Sometimes, Ms. Z notices that students are off-task and she redirects them. Other times, she is too busy unlocking an online quiz for a student, or confiscating a student’s iPod.

This excerpt from my field notes provided an overview of what teacher practice looked like in the first year of the school, so that I could then discuss several themes that were representative of how practice evolved over that first year.

The key takeaway here is that there are many ways to structure your findings section. You have to choose the method that best supports your study, and best represents your data and participants. No matter what you choose, the findings section itself should be constructed to answer your research questions, while also providing context and thick description, and, of course, telling a story.

How to Write the Results/Findings Section in Research


What is the research paper Results section and what does it do?

The Results section of a scientific research paper represents the core findings of a study derived from the methods applied to gather and analyze information. It presents these findings in a logical sequence without bias or interpretation from the author, setting up the reader for later interpretation and evaluation in the Discussion section. A major purpose of the Results section is to break down the data into sentences that show its significance to the research question(s).

The Results section appears third in the section sequence in most scientific papers. It follows the presentation of the Methods and Materials and is presented before the Discussion section—although the Results and Discussion are presented together in many journals. This section answers the basic question “What did you find in your research?”

What is included in the Results section?

The Results section should include the findings of your study and ONLY the findings of your study. The findings include:

  • Data presented in tables, charts, graphs, and other figures (may be placed into the text or on separate pages at the end of the manuscript)
  • A contextual analysis of this data explaining its meaning in sentence form
  • All data that corresponds to the central research question(s)
  • All secondary findings (secondary outcomes, subgroup analyses, etc.)

If the scope of the study is broad, if you studied a variety of variables, or if the methodology used yields a wide range of different results, you should present only those results that are most relevant to the research question stated in the Introduction section.

As a general rule, any information that does not present the direct findings or outcome of the study should be left out of this section. Unless the journal requests that authors combine the Results and Discussion sections, explanations and interpretations should be omitted from the Results.

How are the results organized?

The best way to organize your Results section is “logically.” One logical and clear method of organizing research results is to provide them alongside the research questions—within each research question, present the type of data that addresses that research question.

Let’s look at an example. Your research question is based on a survey among patients who were treated at a hospital and received postoperative care. Let’s say your first research question is:


“What do hospital patients over age 55 think about postoperative care?”

This can actually be represented as a heading within your Results section, though it might be presented as a statement rather than a question:

Attitudes towards postoperative care in patients over the age of 55

Now present the results that address this specific research question first. In this case, that might be a table illustrating data from a survey, including Likert items. Tables can also present standard deviations, probabilities, correlation matrices, etc.

Following this, present a content analysis, in words, of one end of the spectrum of the survey or data table. In our example case, start with the POSITIVE survey responses regarding postoperative care, using descriptive phrases. For example:

“Sixty-five percent of patients over 55 responded positively to the question ‘Are you satisfied with your hospital’s postoperative care?’ (Fig. 2).”
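If you are curious how a headline percentage like this is tallied from raw survey responses, here is a minimal sketch in Python. The Likert scores and the "positive" cutoff below are invented for illustration; they are not taken from the example study:

```python
from collections import Counter

# Hypothetical Likert responses (1 = very dissatisfied ... 5 = very satisfied)
# from patients over 55; these numbers are invented for illustration.
responses = [5, 4, 2, 5, 3, 4, 4, 1, 5, 4, 2, 4, 5, 3, 4, 4, 5, 2, 4, 5]

counts = Counter(responses)
positive = counts[4] + counts[5]           # treat ratings of 4 and 5 as "positive"
pct_positive = 100 * positive / len(responses)

print(f"{positive} of {len(responses)} respondents ({pct_positive:.0f}%) answered positively")
```

Reporting the raw count alongside the percentage (e.g., "14 of 20 respondents") is often expected by journals, since a percentage alone can obscure a small sample size.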

Include other results such as subcategory analyses. The amount of textual description used will depend on how much interpretation of tables and figures is necessary and how many examples the reader needs in order to understand the significance of your research findings.

Next, present a content analysis of another part of the spectrum of the same research question, perhaps the NEGATIVE or NEUTRAL responses to the survey. For instance:

  “As Figure 1 shows, 15 out of 60 patients in Group A responded negatively to Question 2.”

After you have assessed the data in one figure and explained it sufficiently, move on to your next research question. For example:

  “How does patient satisfaction correspond to in-hospital improvements made to postoperative care?”


This kind of data may be presented through a figure or set of figures (for instance, a paired T-test table).

Explain the data you present, here in a table, with a concise content analysis:

“The p-value for the comparison between the before and after groups of patients was .03 (Fig. 2), indicating that the greater the dissatisfaction among patients, the more frequent the improvements that were made to postoperative care.”
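For readers who want to see the arithmetic behind a before-and-after comparison like this, the paired t-statistic can be sketched with the Python standard library alone. The satisfaction scores below are hypothetical; in practice you would use a statistics package, which also returns the exact p-value:

```python
import math
import statistics

# Hypothetical satisfaction scores for the same patients before and after
# improvements to postoperative care (higher = more satisfied).
before = [3.1, 2.8, 3.5, 2.9, 3.2, 2.7, 3.0, 3.3]
after  = [3.6, 3.4, 3.7, 3.5, 3.6, 3.1, 3.4, 3.8]

# A paired t-test works on the per-patient differences, not the raw groups.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)          # sample standard deviation of differences
t = mean_d / (sd_d / math.sqrt(n))      # paired t-statistic, df = n - 1

print(f"mean difference = {mean_d:.3f}, t = {t:.2f} (df = {n - 1})")
```

The p-value is then obtained from the t-distribution with n - 1 degrees of freedom; a dedicated statistics library handles that lookup for you.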

Let’s examine another example of a Results section from a study on plant tolerance to heavy metal stress . In the Introduction section, the aims of the study are presented as “determining the physiological and morphological responses of Allium cepa L. towards increased cadmium toxicity” and “evaluating its potential to accumulate the metal and its associated environmental consequences.” The Results section presents data showing how these aims are achieved in tables alongside a content analysis, beginning with an overview of the findings:

“Cadmium caused inhibition of root and leaf elongation, with increasing effects at higher exposure doses (Fig. 1a-c).”

The figure containing this data is cited in parentheses. Note that this author has combined three graphs into one single figure. Separating the data into separate graphs focusing on specific aspects makes it easier for the reader to assess the findings, and consolidating this information into one figure saves space and makes it easy to locate the most relevant results.


Following this overall summary, the relevant data in the tables is broken down into greater detail in text form in the Results section.

  • “Results on the bio-accumulation of cadmium were found to be the highest (17.5 mg kg⁻¹) in the bulb, when the concentration of cadmium in the solution was 1×10⁻² M, and lowest (0.11 mg kg⁻¹) in the leaves when the concentration was 1×10⁻³ M.”

Captioning and Referencing Tables and Figures

Tables and figures are central components of your Results section, and you need to think carefully about the most effective way to use graphs and tables to present your findings. Therefore, it is crucial to know how to write strong figure captions and to refer to them within the text of the Results section.

The most important advice one can give here, as well as throughout the paper, is to check the requirements and standards of the journal to which you are submitting your work. Every journal has its own design and layout standards, which you can find in the author instructions on the target journal’s website. Perusing a journal’s published articles will also give you an idea of the proper number, size, and complexity of your figures.

Regardless of which format you use, the figures should be placed in the order they are referenced in the Results section and be as clear and easy to understand as possible. If there are multiple variables being considered (within one or more research questions), it can be a good idea to split these up into separate figures. Subsequently, these can be referenced and analyzed under separate headings and paragraphs in the text.

To create a caption, consider the research question being asked and change it into a phrase. For instance, if one question is “Which color did participants choose?”, the caption might be “Color choice by participant group.” Or in our last research paper example, where the question was “What is the concentration of cadmium in different parts of the onion after 14 days?” the caption reads:

 “Fig. 1(a-c): Mean concentration of Cd determined in (a) bulbs, (b) leaves, and (c) roots of onions after a 14-day period.”

Steps for Composing the Results Section

Because each study is unique, there is no one-size-fits-all approach when it comes to designing a strategy for structuring and writing the section of a research paper where findings are presented. The content and layout of this section will be determined by the specific area of research, the design of the study and its particular methodologies, and the guidelines of the target journal and its editors. However, the following steps can be used to compose the results of most scientific research studies and are essential for researchers who are new to preparing a manuscript for publication or who need a reminder of how to construct the Results section.

Step 1 : Consult the guidelines or instructions that the target journal or publisher provides authors and read research papers it has published, especially those with similar topics, methods, or results to your study.

  • The guidelines will generally outline specific requirements for the results or findings section, and the published articles will provide sound examples of successful approaches.
  • Note length limitations and restrictions on content. For instance, while many journals require the Results and Discussion sections to be separate, others do not—qualitative research papers often include results and interpretations in the same section (“Results and Discussion”).
  • Reading the aims and scope in the journal’s “guide for authors” section and understanding the interests of its readers will be invaluable in preparing to write the Results section.

Step 2 : Consider your research results in relation to the journal’s requirements and catalogue your results.

  • Focus on experimental results and other findings that are especially relevant to your research questions and objectives and include them even if they are unexpected or do not support your ideas and hypotheses.
  • Catalogue your findings—use subheadings to streamline and clarify your report. This will help you avoid excessive and peripheral details as you write and also help your reader understand and remember your findings. Create appendices that might interest specialists but prove too long or distracting for other readers.
  • Decide how you will structure your results. You might match the order of the research questions and hypotheses to your results, or you could arrange them according to the order presented in the Methods section. A chronological order or even a hierarchy of importance or meaningful grouping of main themes or categories might prove effective. Consider your audience, evidence, and most importantly, the objectives of your research when choosing a structure for presenting your findings.

Step 3 : Design figures and tables to present and illustrate your data.

  • Tables and figures should be numbered according to the order in which they are mentioned in the main text of the paper.
  • Information in figures should be relatively self-explanatory (with the aid of captions), and their design should include all definitions and other information necessary for readers to understand the findings without reading all of the text.
  • Use tables and figures as a focal point to tell a clear and informative story about your research and avoid repeating information. But remember that while figures clarify and enhance the text, they cannot replace it.

Step 4 : Draft your Results section using the findings and figures you have organized.

  • The goal is to communicate this complex information as clearly and precisely as possible; precise and compact phrases and sentences are most effective.
  • In the opening paragraph of this section, restate your research questions or aims to focus the reader’s attention on what the results are trying to show. It is also a good idea to summarize key findings at the end of this section to create a logical transition to the interpretation and discussion that follows.
  • Try to write in the past tense and the active voice to relay the findings since the research has already been done and the agent is usually clear. This will ensure that your explanations are also clear and logical.
  • Make sure that any specialized terminology or abbreviation you have used here has been defined and clarified in the Introduction section.

Step 5 : Review your draft; edit and revise until it reports results exactly as you would like to have them reported to your readers.

  • Double-check the accuracy and consistency of all the data, as well as all of the visual elements included.
  • Read your draft aloud to catch language errors (grammar, spelling, and mechanics), awkward phrases, and missing transitions.
  • Ensure that your results are presented in the best order to focus on objectives and prepare readers for interpretations, evaluations, and recommendations in the Discussion section. Look back over the paper’s Introduction and background while anticipating the Discussion and Conclusion sections to ensure that the presentation of your results is consistent and effective.
  • Consider seeking additional guidance on your paper. Find additional readers to look over your Results section and see if it can be improved in any way. Peers, professors, or qualified experts can provide valuable insights.

One excellent option is to use a professional English proofreading and editing service  such as Wordvice, including our paper editing service . With hundreds of qualified editors from dozens of scientific fields, Wordvice has helped thousands of authors revise their manuscripts and get accepted into their target journals. Read more about the  proofreading and editing process  before proceeding with getting academic editing services and manuscript editing services for your manuscript.

As the representation of your study’s data output, the Results section presents the core information in your research paper. By writing with clarity and conciseness and by highlighting and explaining the crucial findings of their study, authors increase the impact and effectiveness of their research manuscripts.



How To Write The Results/Findings Chapter

For qualitative studies (dissertations & theses).

By: Jenna Crossley (PhD). Expert Reviewed By: Dr. Eunice Rautenbach | August 2021

So, you’ve collected and analysed your qualitative data, and it’s time to write up your results chapter. But where do you start? In this post, we’ll guide you through the qualitative results chapter (also called the findings chapter), step by step. 

Overview: Qualitative Results Chapter

  • What (exactly) the qualitative results chapter is
  • What to include in your results chapter
  • How to write up your results chapter
  • A few tips and tricks to help you along the way
  • Free results chapter template

What exactly is the results chapter?

The results chapter in a dissertation or thesis (or any formal academic research piece) is where you objectively and neutrally present the findings of your qualitative analysis (or analyses if you used multiple qualitative analysis methods ). This chapter can sometimes be combined with the discussion chapter (where you interpret the data and discuss its meaning), depending on your university’s preference.  We’ll treat the two chapters as separate, as that’s the most common approach.

In contrast to a quantitative results chapter that presents numbers and statistics, a qualitative results chapter presents data primarily in the form of words . But this doesn’t mean that a qualitative study can’t have quantitative elements – you could, for example, present the number of times a theme or topic pops up in your data, depending on the analysis method(s) you adopt.

Adding a quantitative element to your study can add some rigour, which strengthens your results by providing more evidence for your claims. This is particularly common when using qualitative content analysis. Keep in mind, though, that qualitative research aims to achieve depth and richness and to identify nuances, so don’t get tunnel vision by focusing on the numbers. They’re just the cream on top in a qualitative analysis.

So, to recap, the results chapter is where you objectively present the findings of your analysis, without interpreting them (you’ll save that for the discussion chapter). With that out of the way, let’s take a look at what you should include in your results chapter.


What should you include in the results chapter?

As we’ve mentioned, your qualitative results chapter should purely present and describe your results, not interpret them in relation to the existing literature or your research questions. Any speculations or discussion about the implications of your findings should be reserved for your discussion chapter.

In your results chapter, you’ll want to talk about your analysis findings and whether or not they support your hypotheses (if you have any). Naturally, the exact contents of your results chapter will depend on which qualitative analysis method (or methods) you use. For example, if you were to use thematic analysis, you’d detail the themes identified in your analysis, using extracts from the transcripts or text to support your claims.

While you do need to present your analysis findings in some detail, you should avoid dumping large amounts of raw data in this chapter. Instead, focus on presenting the key findings and using a handful of select quotes or text extracts to support each finding. The reams of data and analysis can be relegated to your appendices.

While it’s tempting to include every last detail you found in your qualitative analysis, it is important to make sure that you report only that which is relevant to your research aims, objectives and research questions. Always keep these three components, as well as your hypotheses (if you have any), front of mind when writing the chapter, and use them as a filter to decide what’s relevant and what’s not.


How do I write the results chapter?

Now that we’ve covered the basics, it’s time to look at how to structure your chapter. Broadly speaking, the results chapter needs to contain three core components – the introduction, the body and the concluding summary. Let’s take a look at each of these.

Section 1: Introduction

The first step is to craft a brief introduction to the chapter. This intro is vital as it provides some context for your findings. In your introduction, you should begin by reiterating your problem statement and research questions and highlighting the purpose of your research. Make sure that you spell this out for the reader so that the rest of your chapter is well contextualised.

The next step is to briefly outline the structure of your results chapter. In other words, explain what’s included in the chapter and what the reader can expect. In the results chapter, you want to tell a story that is coherent, flows logically, and is easy to follow, so make sure that you plan your structure out well and convey that structure (at a high level), so that your reader is well oriented.

The introduction section shouldn’t be lengthy. Two or three short paragraphs should be more than adequate. It is merely an introduction and overview, not a summary of the chapter.

Pro Tip – To help you structure your chapter, it can be useful to set up an initial draft with (sub)section headings so that you’re able to easily (re)arrange parts of your chapter. This will also help your reader to follow your results and give your chapter some coherence.  Be sure to use level-based heading styles (e.g. Heading 1, 2, 3 styles) to help the reader differentiate between levels visually. You can find these options in Word (example below).

[Image: Heading styles in the results chapter]

Section 2: Body

Before we get started on what to include in the body of your chapter, it’s vital to remember that a results section should be completely objective and descriptive, not interpretive. So, be careful not to use words such as “suggests” or “implies”, as these usually accompany some form of interpretation – that’s reserved for your discussion chapter.

The structure of your body section is very important, so make sure that you plan it out well. When planning out your qualitative results chapter, create sections and subsections so that you can maintain the flow of the story you’re trying to tell. Be sure to systematically and consistently describe each portion of results. Try to adopt a standardised structure for each portion so that you achieve a high level of consistency throughout the chapter.

For qualitative studies, results chapters tend to be structured according to themes, which makes it easier for readers to follow. However, keep in mind that not all results chapters have to be structured in this manner. For example, if you’re conducting a longitudinal study, you may want to structure your chapter chronologically. Similarly, you might structure this chapter based on your theoretical framework. The exact structure of your chapter will depend on the nature of your study, especially your research questions.

As you work through the body of your chapter, make sure that you use quotes to substantiate every one of your claims. You can present these quotes in italics to differentiate them from your own words. A general rule of thumb is to use at least two pieces of evidence per claim, and these should be linked directly to your data. Also, remember that you need to include all relevant results, not just the ones that support your assumptions or initial leanings.

In addition to including quotes, you can also link your claims to the data by using appendices, which you should reference throughout your text. When you reference, make sure that you include both the name/number of the appendix, as well as the line(s) from which you drew your data.

As referencing styles can vary greatly, be sure to look up the appendix referencing conventions of your university’s prescribed style (e.g. APA , Harvard, etc) and keep this consistent throughout your chapter.

Section 3: Concluding summary

The concluding summary is very important because it summarises your key findings and lays the foundation for the discussion chapter. Keep in mind that some readers may skip directly to this section (from the introduction section), so make sure that it can be read and understood well in isolation.

In this section, you need to remind the reader of the key findings. That is, the results that directly relate to your research questions and that you will build upon in your discussion chapter. Remember, your reader has digested a lot of information in this chapter, so you need to use this section to remind them of the most important takeaways.

Importantly, the concluding summary should not present any new information and should only describe what you’ve already presented in your chapter. Keep it concise – you’re not summarising the whole chapter, just the essentials.

Tips for writing an A-grade results chapter

Now that you’ve got a clear picture of what the qualitative results chapter is all about, here are some quick tips and reminders to help you craft a high-quality chapter:

  • Your results chapter should be written in the past tense. You’ve done the work already, so you want to tell the reader what you found, not what you are currently finding.
  • Review your work multiple times and check that every claim is adequately backed up by evidence. Aim for at least two examples per claim, and make use of an appendix to reference these.
  • When writing up your results, stick to only what is relevant. Don’t waste time on data that are not relevant to your research objectives and research questions.
  • Use headings and subheadings to create an intuitive, easy-to-follow piece of writing. Make use of Microsoft Word’s “heading styles” and be sure to use them consistently.
  • When referring to numerical data, tables and figures can provide a useful visual aid. Make sure that they can be read and understood independently of your body text (i.e. that they can stand alone), use clear, concise labels for each table or figure, and use colour coding to indicate differences or hierarchy.
  • Similarly, when writing up your chapter, it can be useful to highlight topics and themes in different colours. This can help you differentiate between your data if you get a bit overwhelmed, and will also help you ensure that your results flow logically and coherently.

If you have any questions, leave a comment below and we’ll do our best to help. If you’d like 1-on-1 help with your results chapter (or any chapter of your dissertation or thesis), check out our private dissertation coaching service here or book a free initial consultation to discuss how we can help you.


USC Libraries Research Guides: Organizing Your Social Sciences Research Paper

8. The Discussion

The purpose of the discussion section is to interpret and describe the significance of your findings in relation to what was already known about the research problem being investigated and to explain any new understanding or insights that emerged as a result of your research. The discussion will always connect to the introduction by way of the research questions or hypotheses you posed and the literature you reviewed, but the discussion does not simply repeat or rearrange the first parts of your paper; the discussion clearly explains how your study advanced the reader's understanding of the research problem from where you left them at the end of your review of prior research.

Annesley, Thomas M. “The Discussion Section: Your Closing Argument.” Clinical Chemistry 56 (November 2010): 1671-1674; Peacock, Matthew. “Communicative Moves in the Discussion Section of Research Articles.” System 30 (December 2002): 479-497.

Importance of a Good Discussion

The discussion section is often considered the most important part of your research paper because it:

  • Most effectively demonstrates your ability as a researcher to think critically about an issue, to develop creative solutions to problems based upon a logical synthesis of the findings, and to formulate a deeper, more profound understanding of the research problem under investigation;
  • Presents the underlying meaning of your research, notes possible implications in other areas of study, and explores possible improvements that can be made in order to further develop the concerns of your research;
  • Highlights the importance of your study and how it can contribute to understanding the research problem within the field of study;
  • Presents how the findings from your study revealed and helped fill gaps in the literature that had not been previously exposed or adequately described; and,
  • Engages the reader in thinking critically about issues based on an evidence-based interpretation of findings; it is not governed strictly by objective reporting of information.

Annesley Thomas M. “The Discussion Section: Your Closing Argument.” Clinical Chemistry 56 (November 2010): 1671-1674; Bitchener, John and Helen Basturkmen. “Perceptions of the Difficulties of Postgraduate L2 Thesis Students Writing the Discussion Section.” Journal of English for Academic Purposes 5 (January 2006): 4-18; Kretchmer, Paul. Fourteen Steps to Writing an Effective Discussion Section. San Francisco Edit, 2003-2008.

Structure and Writing Style

I.  General Rules

These are the general rules you should adopt when composing your discussion of the results :

  • Do not be verbose or repetitive; be concise and make your points clearly
  • Avoid the use of jargon or undefined technical language
  • Follow a logical stream of thought; in general, interpret and discuss the significance of your findings in the same sequence you described them in your results section [a notable exception is to begin by highlighting an unexpected result or a finding that can grab the reader's attention]
  • Use the present verb tense, especially for established facts; however, refer to specific works or prior studies in the past tense
  • If needed, use subheadings to help organize your discussion or to categorize your interpretations into themes

II.  The Content

The content of the discussion section of your paper most often includes :

  • Explanation of results : Comment on whether or not the results were expected for each set of findings; go into greater depth to explain findings that were unexpected or especially profound. If appropriate, note any unusual or unanticipated patterns or trends that emerged from your results and explain their meaning in relation to the research problem.
  • References to previous research : Either compare your results with the findings from other studies or use the studies to support a claim. This can include re-visiting key sources already cited in your literature review section, or, save them to cite later in the discussion section if they are more important to compare with your results instead of being a part of the general literature review of prior research used to provide context and background information. Note that you can make this decision to highlight specific studies after you have begun writing the discussion section.
  • Deduction : A claim for how the results can be applied more generally. For example, describing lessons learned, proposing recommendations that can help improve a situation, or highlighting best practices.
  • Hypothesis : A more general claim or possible conclusion arising from the results [which may be proved or disproved in subsequent research]. This can be framed as new research questions that emerged as a consequence of your analysis.

III.  Organization and Structure

Keep the following sequential points in mind as you organize and write the discussion section of your paper:

  • Think of your discussion as an inverted pyramid. Organize the discussion from the general to the specific, linking your findings to the literature, then to theory, then to practice [if appropriate].
  • Use the same key terms, narrative style, and verb tense [present] that you used when describing the research problem in your introduction.
  • Begin by briefly re-stating the research problem you were investigating and answer all of the research questions underpinning the problem that you posed in the introduction.
  • Describe the patterns, principles, and relationships shown by each major finding and place them in proper perspective. The sequence of this information is important; first state the answer, then the relevant results, then cite the work of others. If appropriate, refer the reader to a figure or table to help enhance the interpretation of the data [either within the text or as an appendix].
  • Regardless of where it's mentioned, a good discussion section includes analysis of any unexpected findings. This part of the discussion should begin with a description of the unanticipated finding, followed by a brief interpretation as to why you believe it appeared and, if necessary, its possible significance in relation to the overall study. If more than one unexpected finding emerged during the study, describe each of them in the order they appeared as you gathered or analyzed the data. As noted, the exception to discussing findings in the same order you described them in the results section would be to begin by highlighting the implications of a particularly unexpected or significant finding that emerged from the study, followed by a discussion of the remaining findings.
  • Before concluding the discussion, identify potential limitations and weaknesses if you do not plan to do so in the conclusion of the paper. Comment on their relative importance in relation to your overall interpretation of the results and, if necessary, note how they may affect the validity of your findings. Avoid using an apologetic tone; however, be honest and self-critical [e.g., in retrospect, had you included a particular question in a survey instrument, additional data could have been revealed].
  • The discussion section should end with a concise summary of the principal implications of the findings regardless of their significance. Give a brief explanation about why you believe the findings and conclusions of your study are important and how they support broader knowledge or understanding of the research problem. This can be followed by any recommendations for further research. However, do not offer recommendations which could have been easily addressed within the study. This would demonstrate to the reader that you have inadequately examined and interpreted the data.

IV.  Overall Objectives

The objectives of your discussion section should include the following:

I.  Reiterate the Research Problem/State the Major Findings

Briefly reiterate the research problem or problems you are investigating and the methods you used to investigate them, then move quickly to describe the major findings of the study. You should write a direct, declarative, and succinct proclamation of the study results, usually in one paragraph.

II.  Explain the Meaning of the Findings and Why They are Important

No one has thought as long and hard about your study as you have. Systematically explain the underlying meaning of your findings and state why you believe they are significant. After reading the discussion section, you want the reader to think critically about the results and why they are important. You don’t want to force the reader to go through the paper multiple times to figure out what it all means. If applicable, begin this part of the section by repeating what you consider to be your most significant or unanticipated finding first, then systematically review each finding. Otherwise, follow the general order you reported the findings presented in the results section.

III.  Relate the Findings to Similar Studies

No study in the social sciences is so novel or possesses such a restricted focus that it has absolutely no relation to previously published research. The discussion section should relate your results to those found in other studies, particularly if questions raised from prior studies served as the motivation for your research. This is important because comparing and contrasting the findings of other studies helps to support the overall importance of your results and it highlights how and in what ways your study differs from other research about the topic. Note that a finding is often significant or unanticipated precisely because there was no prior research to indicate it could occur; if there is prior research to indicate this, you need to explain why it was still significant or unanticipated.

IV.  Consider Alternative Explanations of the Findings

It is important to remember that the purpose of research in the social sciences is to discover and not to prove . When writing the discussion section, you should carefully consider all possible explanations for the study results, rather than just those that fit your hypothesis or prior assumptions and biases. This is especially important when describing the discovery of significant or unanticipated findings.

V.  Acknowledge the Study’s Limitations

It is far better for you to identify and acknowledge your study’s limitations than to have them pointed out by your professor! Note any unanswered questions or issues your study could not address and describe the generalizability of your results to other situations. If a limitation is applicable to the method chosen to gather information, then describe in detail the problems you encountered and why.

VI.  Make Suggestions for Further Research

You may choose to conclude the discussion section by making suggestions for further research [as opposed to offering suggestions in the conclusion of your paper]. Although your study can offer important insights about the research problem, this is where you can address other questions related to the problem that remain unanswered or highlight hidden issues that were revealed as a result of conducting your research. You should frame your suggestions by linking the need for further research to the limitations of your study [e.g., "in future studies, the survey instrument should include more questions that ask..."] or linking to critical issues revealed from the data that were not considered initially in your research.

NOTE: Besides the literature review section, the preponderance of references to sources is usually found in the discussion section . A few historical references may be helpful for perspective, but most of the references should be relatively recent and included to aid in the interpretation of your results, to support the significance of a finding, and/or to place a finding within a particular context. If a study that you cited does not support your findings, don't ignore it--clearly explain why your research findings differ from theirs.

V.  Problems to Avoid

  • Do not waste time restating your results . Should you need to remind the reader of a finding to be discussed, use "bridge sentences" that relate the result to the interpretation. An example would be: “In the case of determining available housing to single women with children in rural areas of Texas, the findings suggest that access to good schools is important...," then move on to further explaining this finding and its implications.
  • As noted, recommendations for further research can be included in either the discussion or conclusion of your paper, but do not repeat your recommendations in both sections. Think about the overall narrative flow of your paper to determine where best to locate this information. However, if your findings raise a lot of new questions or issues, consider including suggestions for further research in the discussion section.
  • Do not introduce new results in the discussion section. Be wary of mistaking the reiteration of a specific finding for an interpretation because it may confuse the reader. The description of findings [results section] and the interpretation of their significance [discussion section] should be distinct parts of your paper. If you choose to combine the results section and the discussion section into a single narrative, you must be clear in how you report the information discovered and your own interpretation of each finding. This approach is not recommended if you lack experience writing college-level research papers.
  • Use of the first person pronoun is generally acceptable. Using first person singular pronouns can help emphasize a point or illustrate a contrasting finding. However, keep in mind that too much use of the first person can actually distract the reader from the main points [i.e., I know you're telling me this--just tell me!].

Analyzing vs. Summarizing. Department of English Writing Guide. George Mason University; Discussion. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Hess, Dean R. "How to Write an Effective Discussion." Respiratory Care 49 (October 2004); Kretchmer, Paul. Fourteen Steps to Writing an Effective Discussion Section. San Francisco Edit, 2003-2008; The Lab Report. University College Writing Centre. University of Toronto; Sauaia, A. et al. "The Anatomy of an Article: The Discussion Section: 'How Does the Article I Read Today Change What I Will Recommend to my Patients Tomorrow?'" The Journal of Trauma and Acute Care Surgery 74 (June 2013): 1599-1602; Research Limitations & Future Research. Lund Research Ltd., 2012; Summary: Using it Wisely. The Writing Center. University of North Carolina; Schafer, Mickey S. Writing the Discussion. Writing in Psychology course syllabus. University of Florida; Yellin, Linda L. A Sociology Writer's Guide. Boston, MA: Allyn and Bacon, 2009.

Writing Tip

Don’t Over-Interpret the Results!

Interpretation is a subjective exercise. As such, you should always approach the selection and interpretation of your findings introspectively and think critically about the possibility of judgmental biases unintentionally entering into discussions about the significance of your work. With this in mind, be careful that you do not read more into the findings than can be supported by the evidence you have gathered. Remember that the data are the data: nothing more, nothing less.

MacCoun, Robert J. "Biases in the Interpretation and Use of Research Results." Annual Review of Psychology 49 (February 1998): 259-287; Ward, Paul et al., editors. The Oxford Handbook of Expertise. Oxford, UK: Oxford University Press, 2018.

Another Writing Tip

Don't Write Two Results Sections!

One of the most common mistakes that you can make when discussing the results of your study is to present a superficial interpretation of the findings that more or less re-states the results section of your paper. Obviously, you must refer to your results when discussing them, but focus on the interpretation of those results and their significance in relation to the research problem, not the data itself.

Azar, Beth. "Discussing Your Findings."  American Psychological Association gradPSYCH Magazine (January 2006).

Yet Another Writing Tip

Avoid Unwarranted Speculation!

The discussion section should remain focused on the findings of your study. For example, if the purpose of your research was to measure the impact of foreign aid on increasing access to education among disadvantaged children in Bangladesh, it would not be appropriate to speculate about how your findings might apply to populations in other countries without drawing from existing studies to support your claim or if analysis of other countries was not a part of your original research design. If you feel compelled to speculate, do so in the form of describing possible implications or explaining possible impacts. Be certain that you clearly identify your comments as speculation or as a suggestion for where further research is needed. Sometimes your professor will encourage you to expand your discussion of the results in this way, while others don’t care what your opinion is beyond your effort to interpret the data in relation to the research problem.

  • Last Updated: Sep 4, 2024 9:40 AM
  • URL: https://libguides.usc.edu/writingguide

Cochrane Training

Chapter 14: Completing ‘Summary of findings’ tables and grading the certainty of the evidence

Holger J Schünemann, Julian PT Higgins, Gunn E Vist, Paul Glasziou, Elie A Akl, Nicole Skoetz, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group (formerly Applicability and Recommendations Methods Group) and the Cochrane Statistical Methods Group

Key Points:

  • A ‘Summary of findings’ table for a given comparison of interventions provides key information concerning the magnitudes of relative and absolute effects of the interventions examined, the amount of available evidence and the certainty (or quality) of available evidence.
  • ‘Summary of findings’ tables include a row for each important outcome (up to a maximum of seven). Accepted formats of ‘Summary of findings’ tables and interactive ‘Summary of findings’ tables can be produced using GRADE’s software GRADEpro GDT.
  • Cochrane has adopted the GRADE approach (Grading of Recommendations Assessment, Development and Evaluation) for assessing certainty (or quality) of a body of evidence.
  • The GRADE approach specifies four levels of the certainty for a body of evidence for a given outcome: high, moderate, low and very low.
  • GRADE assessments of certainty are determined through consideration of five domains: risk of bias, inconsistency, indirectness, imprecision and publication bias. For evidence from non-randomized studies and, rarely, randomized studies, assessments can then be upgraded through consideration of three further domains.
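As a rough illustration of how the four certainty levels and the rating domains interact, the sketch below shows the basic bookkeeping: evidence starts at a level set by study design and is then moved down (or, for non-randomized evidence, sometimes up) level by level. This is a simplified intuition aid only, not the GRADE algorithm or GRADEpro GDT; the function name and the simple arithmetic are assumptions made for illustration.

```python
# Simplified, illustrative sketch of GRADE-style certainty rating.
# NOT GRADEpro and NOT an official algorithm; for intuition only.

LEVELS = ["very low", "low", "moderate", "high"]

def rate_certainty(design: str, downgrades: int = 0, upgrades: int = 0) -> str:
    """design: 'randomized' starts at 'high'; anything else starts at 'low'.
    downgrades: levels deducted across the five domains (risk of bias,
    inconsistency, indirectness, imprecision, publication bias).
    upgrades: levels added via the three upgrading domains (mainly
    relevant for non-randomized evidence)."""
    start = "high" if design == "randomized" else "low"
    i = LEVELS.index(start) - downgrades + upgrades
    # Clamp to the defined range of levels
    return LEVELS[max(0, min(i, len(LEVELS) - 1))]

# Randomized evidence downgraded one level (e.g. for imprecision):
print(rate_certainty("randomized", downgrades=1))      # moderate
# Non-randomized evidence upgraded one level (e.g. large effect):
print(rate_certainty("non-randomized", upgrades=1))    # moderate
```

In practice, each domain judgement requires structured reasoning and documentation, which is why Cochrane recommends recording the assessments in GRADEpro GDT rather than computing them mechanically.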

Cite this chapter as: Schünemann HJ, Higgins JPT, Vist GE, Glasziou P, Akl EA, Skoetz N, Guyatt GH. Chapter 14: Completing ‘Summary of findings’ tables and grading the certainty of the evidence [last updated August 2023]. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.5. Cochrane, 2024. Available from www.training.cochrane.org/handbook .

14.1 ‘Summary of findings’ tables

14.1.1 Introduction to ‘Summary of findings’ tables

‘Summary of findings’ tables present the main findings of a review in a transparent, structured and simple tabular format. In particular, they provide key information concerning the certainty or quality of evidence (i.e. the confidence or certainty in the range of an effect estimate or an association), the magnitude of effect of the interventions examined, and the sum of available data on the main outcomes. Cochrane Reviews should incorporate ‘Summary of findings’ tables during planning and publication, and should have at least one key ‘Summary of findings’ table representing the most important comparisons. Some reviews may include more than one ‘Summary of findings’ table, for example if the review addresses more than one major comparison, or includes substantially different populations that require separate tables (e.g. because the effects differ or it is important to show results separately). In the Cochrane Database of Systematic Reviews (CDSR),  all ‘Summary of findings’ tables for a review appear at the beginning, before the Background section.

14.1.2 Selecting outcomes for ‘Summary of findings’ tables

Planning for the ‘Summary of findings’ table starts early in the systematic review, with the selection of the outcomes to be included in: (i) the review; and (ii) the ‘Summary of findings’ table. This is a crucial step, and one that review authors need to address carefully.

To ensure production of optimally useful information, Cochrane Reviews begin by developing a review question and by listing all main outcomes that are important to patients and other decision makers (see Chapter 2 and Chapter 3 ). The GRADE approach to assessing the certainty of the evidence (see Section 14.2 ) defines and operationalizes a rating process that helps separate outcomes into those that are critical, important or not important for decision making. Consultation and feedback on the review protocol, including from consumers and other decision makers, can enhance this process.

Critical outcomes are likely to include clearly important endpoints; typical examples include mortality and major morbidity (such as strokes and myocardial infarction). However, they may also represent frequent minor and rare major side effects, symptoms, quality of life, burdens associated with treatment, and resource issues (costs). Burdens represent the impact of healthcare workload on patient function and well-being, and include the demands of adhering to an intervention that patients or caregivers (e.g. family) may dislike, such as having to undergo more frequent tests, or the restrictions on lifestyle that certain interventions require (Spencer-Bonilla et al 2017).

Frequently, when formulating questions that include all patient-important outcomes for decision making, review authors will confront reports of studies that have not included all these outcomes. This is particularly true for adverse outcomes. For instance, randomized trials might contribute evidence on intended effects, and on frequent, relatively minor side effects, but not report on rare adverse outcomes such as suicide attempts. Chapter 19 discusses strategies for addressing adverse effects. To obtain data for all important outcomes it may be necessary to examine the results of non-randomized studies (see Chapter 24 ). Cochrane, in collaboration with others, has developed guidance for review authors to support their decision about when to look for and include non-randomized studies (Schünemann et al 2013).

If a review includes only randomized trials, these trials may not address all important outcomes and it may therefore not be possible to address these outcomes within the constraints of the review. Review authors should acknowledge these limitations and make them transparent to readers. Review authors are encouraged to include non-randomized studies to examine rare or long-term adverse effects that may not adequately be studied in randomized trials. This raises the possibility that harm outcomes may come from studies in which participants differ from those in studies used in the analysis of benefit. Review authors will then need to consider how much such differences are likely to impact on the findings, and this will influence the certainty of evidence because of concerns about indirectness related to the population (see Section 14.2.2 ).

Non-randomized studies can provide important information not only when randomized trials do not report on an outcome or randomized trials suffer from indirectness, but also when the evidence from randomized trials is rated as very low and non-randomized studies provide evidence of higher certainty. Further discussion of these issues appears also in Chapter 24 .

14.1.3 General template for ‘Summary of findings’ tables

Several alternative standard versions of ‘Summary of findings’ tables have been developed to ensure consistency and ease of use across reviews, inclusion of the most important information needed by decision makers, and optimal presentation (see examples at Figures 14.1.a and 14.1.b ). These formats are supported by research that focused on improved understanding of the information they intend to convey (Carrasco-Labra et al 2016, Langendam et al 2016, Santesso et al 2016). They are available through GRADE’s official software package developed to support the GRADE approach: GRADEpro GDT (www.gradepro.org).

Standard Cochrane ‘Summary of findings’ tables include the following elements using one of the accepted formats. Further guidance on each of these is provided in Section 14.1.6 .

  • A brief description of the population and setting addressed by the available evidence (which may be slightly different to or narrower than those defined by the review question).
  • A brief description of the comparison addressed in the ‘Summary of findings’ table, including both the experimental and comparison interventions.
  • A list of the most critical and/or important health outcomes, both desirable and undesirable, limited to seven or fewer outcomes.
  • A measure of the typical burden of each outcome (e.g. illustrative risk, or illustrative mean, on comparator intervention).
  • The absolute and relative magnitude of effect measured for each (if both are appropriate).
  • The numbers of participants and studies contributing to the analysis of each outcome.
  • A GRADE assessment of the overall certainty of the body of evidence for each outcome (which may vary by outcome).
  • Space for comments.
  • Explanations (formerly known as footnotes).

Ideally, ‘Summary of findings’ tables are supported by more detailed tables (known as ‘evidence profiles’) to which the review may be linked, which provide more detailed explanations. Evidence profiles include the same important health outcomes, and provide greater detail than ‘Summary of findings’ tables of both of the individual considerations feeding into the grading of certainty and of the results of the studies (Guyatt et al 2011a). They ensure that a structured approach is used to rating the certainty of evidence. Although they are rarely published in Cochrane Reviews, evidence profiles are often used, for example, by guideline developers in considering the certainty of the evidence to support guideline recommendations. Review authors will find it easier to develop the ‘Summary of findings’ table by completing the rating of the certainty of evidence in the evidence profile first in GRADEpro GDT. They can then automatically convert this to one of the ‘Summary of findings’ formats in GRADEpro GDT, including an interactive ‘Summary of findings’ for publication.

As a measure of the magnitude of effect for dichotomous outcomes, the ‘Summary of findings’ table should provide a relative measure of effect (e.g. risk ratio, odds ratio, hazard ratio) and measures of absolute risk. For other types of data, an absolute measure alone (such as a difference in means for continuous data) might be sufficient. It is important that the magnitude of effect is presented in a meaningful way, which may require some transformation of the result of a meta-analysis (see also Chapter 15, Section 15.4 and Section 15.5 ). Reviews with more than one main comparison should include a separate ‘Summary of findings’ table for each comparison.

Figure 14.1.a provides an example of a ‘Summary of findings’ table. Figure 14.1.b provides an alternative format that may further facilitate users’ understanding and interpretation of the review’s findings. Evidence evaluating different formats suggests that the ‘Summary of findings’ table should include a risk difference as a measure of the absolute effect, and authors should preferably use a format that includes one.

A detailed description of the contents of a ‘Summary of findings’ table appears in Section 14.1.6 .

Figure 14.1.a Example of a ‘Summary of findings’ table

Summary of findings (for interactive version click here )

Patients or population: anyone taking a long flight (lasting more than 6 hours)
Settings: international air travel
Intervention: compression stockings
Comparison: without stockings

Columns: Outcomes; Assumed risk; Corresponding risk* (95% CI); Relative effect (95% CI); Number of participants (studies); Certainty of the evidence (GRADE); Comments.

  • Symptomatic deep vein thrombosis (DVT): see comment; not estimable; 2821 participants (9 studies); see comment; 0 participants developed symptomatic DVT in these studies.
  • Symptomless DVT: assumed risks of 10 and 30 per 1000; corresponding risks of 1 per 1000 (95% CI 0 to 3) and 3 per 1000 (95% CI 1 to 8); RR 0.10 (95% CI 0.04 to 0.26); 2637 participants (9 studies); ⊕⊕⊕⊕.
  • [Outcome not recovered]: corresponding risk 95% CI 2 to 15 per 1000; relative effect 95% CI 0.18 to 1.13; 1804 participants (8 studies); ⊕⊕⊕◯.
  • Oedema (post-flight values measured on a scale from 0, no oedema, to 10, maximum oedema): the mean oedema score in the intervention groups was on average lower than the mean score across control groups (95% CI –4.9 to –4.5); 1246 participants (6 studies); ⊕⊕◯◯.
  • Pulmonary embolus: see comment; not estimable; 2821 participants (9 studies); see comment; 0 participants developed pulmonary embolus in these studies.
  • Death: see comment; not estimable; 2821 participants (9 studies); see comment; 0 participants died in these studies.
  • Adverse effects: see comment; not estimable; 1182 participants (4 studies); see comment; the tolerability of the stockings was described as very good with no complaints of side effects in 4 studies.

*The basis for the assumed risk is provided in footnotes. The corresponding risk (and its 95% confidence interval) is based on the assumed risk in the comparator group and the relative effect of the intervention (and its 95% CI).

CI: confidence interval; RR: risk ratio; GRADE: GRADE Working Group grades of evidence (see explanations).

a All the stockings in the nine studies included in this review were below-knee compression stockings. In four studies the compression strength was 20 mmHg to 30 mmHg at the ankle. It was 10 mmHg to 20 mmHg in the other four studies. Stockings come in different sizes. If a stocking is too tight around the knee it can prevent essential venous return causing the blood to pool around the knee. Compression stockings should be fitted properly. A stocking that is too tight could cut into the skin on a long flight and potentially cause ulceration and increased risk of DVT. Some stockings can be slightly thicker than normal leg covering and can be potentially restrictive with tight foot wear. It is a good idea to wear stockings around the house prior to travel to ensure a good, comfortable fit. Participants put their stockings on two to three hours before the flight in most of the studies. The availability and cost of stockings can vary.

b Two studies recruited high risk participants defined as those with previous episodes of DVT, coagulation disorders, severe obesity, limited mobility due to bone or joint problems, neoplastic disease within the previous two years, large varicose veins or, in one of the studies, participants taller than 190 cm and heavier than 90 kg. The incidence for the seven studies that excluded high risk participants was 1.45% and the incidence for the two studies that recruited high-risk participants (with at least one risk factor) was 2.43%. We have used 10 and 30 per 1000 to express different risk strata, respectively.

c The confidence interval crosses no difference and does not rule out a small increase.

d The measurement of oedema was not validated (indirectness of the outcome) or blinded to the intervention (risk of bias).

e If there are very few or no events and the number of participants is large, judgement about the certainty of evidence (particularly judgements about imprecision) may be based on the absolute effect. Here the certainty rating may be considered ‘high’ if the outcome was appropriately assessed and the event, in fact, did not occur in 2821 studied participants.

f None of the other studies reported adverse effects, apart from four cases of superficial vein thrombosis in varicose veins in the knee region that were compressed by the upper edge of the stocking in one study.

Figure 14.1.b Example of alternative ‘Summary of findings’ table

Patients or population: children given antibiotics
Settings: inpatients and outpatients
Intervention: probiotics
Comparison: no probiotics

  • Incidence of diarrhoea, children < 5 years (follow-up: 10 days to 3 months): RR 0.41 (95% CI 0.29 to 0.55); risk with probiotics 95% CI 6.5 to 12.2; risk difference 95% CI 10.1 to 15.8 fewer; 1474 participants (7 studies); ⊕⊕⊕⊝ due to risk of bias; probably decreases the incidence of diarrhoea.
  • Incidence of diarrhoea, children > 5 years (follow-up: 10 days to 3 months): relative effect 95% CI 0.53 to 1.21; risk with probiotics 95% CI 5.9 to 13.6; risk difference 95% CI 5.3 fewer to 2.4 more; 624 participants (4 studies); ⊕⊕⊝⊝ due to risk of bias and imprecision; may decrease the incidence of diarrhoea.
  • Adverse events (follow-up: 10 to 44 days): relative effect not reported; 95% CI 0.8 to 3.8; risk difference 95% CI 1 fewer to 2 more; 1575 participants (11 studies); ⊕⊕⊝⊝ due to risk of bias and inconsistency; there may be little or no difference in adverse events.
  • Duration of diarrhoea (follow-up: 10 days to 3 months): mean difference 95% CI 1.18 to 0.02 fewer days; 897 participants (5 studies); ⊕⊕⊝⊝ due to imprecision and inconsistency; may decrease the duration of diarrhoea.
  • Stools per day (follow-up: 10 days to 3 months): difference 95% CI 0.6 to 0 fewer; 425 participants (4 studies); ⊕⊕⊝⊝ due to imprecision and inconsistency; there may be little or no difference in stools per day.

*The basis for the assumed risk (e.g. the median control group risk across studies) is provided in footnotes. The corresponding risk (and its 95% confidence interval) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). CI: confidence interval; RR: risk ratio.

Control group risk estimates come from pooled estimates of control groups. Relative effect based on available case analysis.

High risk of bias due to high loss to follow-up.

Imprecision due to few events and confidence intervals include appreciable benefit or harm.

Side effects: rash, nausea, flatulence, vomiting, increased phlegm, chest pain, constipation, taste disturbance and low appetite.

Risks were calculated from pooled risk differences.

High risk of bias. Only 11 of 16 trials reported on adverse events, suggesting a selective reporting bias.

Serious inconsistency. Numerous probiotic agents and doses were evaluated amongst a relatively small number of trials, limiting our ability to draw conclusions on the safety of the many probiotics agents and doses administered.

Serious unexplained inconsistency (large heterogeneity: I² = 79%, P = 0.04; point estimates and confidence intervals vary considerably).

Serious imprecision. The upper bound of 0.02 fewer days of diarrhoea is not considered patient important.

Serious unexplained inconsistency (large heterogeneity: I² = 78%, P = 0.05; point estimates and confidence intervals vary considerably).

Serious imprecision. The 95% confidence interval includes no effect and lower bound of 0.60 stools per day is of questionable patient importance.

14.1.4 Producing ‘Summary of findings’ tables

The GRADE Working Group’s software, GRADEpro GDT ( www.gradepro.org ), including GRADE’s interactive handbook, is available to assist review authors in the preparation of ‘Summary of findings’ tables. GRADEpro can use data on the comparator group risk and the effect estimate (entered by the review authors or imported from files generated in RevMan) to produce the relative effects and absolute risks associated with experimental interventions. In addition, it leads the user through the process of a GRADE assessment, and produces a table that can be used as a standalone table in a review (including by direct import into software such as RevMan or integration with RevMan Web), or an interactive ‘Summary of findings’ table (see help resources in GRADEpro).

14.1.5 Statistical considerations in ‘Summary of findings’ tables

14.1.5.1 Dichotomous outcomes

‘Summary of findings’ tables should include both absolute and relative measures of effect for dichotomous outcomes. Risk ratios, odds ratios and risk differences are different ways of comparing two groups with dichotomous outcome data (see Chapter 6, Section 6.4.1 ). Furthermore, there are two distinct risk ratios, depending on which event (e.g. ‘yes’ or ‘no’) is the focus of the analysis (see Chapter 6, Section 6.4.1.5 ). In the presence of a non-zero intervention effect, any variation across studies in the comparator group risks (i.e. variation in the risk of the event occurring without the intervention of interest, for example in different populations) makes it impossible for more than one of these measures to be truly the same in every study.

It has long been assumed in epidemiology that relative measures of effect are more consistent than absolute measures of effect from one scenario to another. There is empirical evidence to support this assumption (Engels et al 2000, Deeks and Altman 2001, Furukawa et al 2002). For this reason, meta-analyses should generally use either a risk ratio or an odds ratio as a measure of effect (see Chapter 10, Section 10.4.3 ). Correspondingly, a single estimate of relative effect is likely to be a more appropriate summary than a single estimate of absolute effect. If a relative effect is indeed consistent across studies, then different comparator group risks will have different implications for absolute benefit. For instance, if the risk ratio is consistently 0.75, then the experimental intervention would reduce a comparator group risk of 80% to 60% in the intervention group (an absolute risk reduction of 20 percentage points), but would also reduce a comparator group risk of 20% to 15% in the intervention group (an absolute risk reduction of 5 percentage points).
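As a rough illustration of the arithmetic in the example above, the following sketch (the function name is our own, not from the Handbook or any GRADE tooling) shows how a single constant risk ratio implies different absolute risk reductions at different comparator group risks:

```python
# Illustrative sketch: a constant risk ratio (RR) implies different absolute
# risk reductions (ARR) at different comparator group risks.
# The helper name is ours, for illustration only.

def absolute_risk_reduction(comparator_risk: float, risk_ratio: float) -> float:
    """ARR = comparator risk minus the risk implied by a constant RR."""
    return comparator_risk - comparator_risk * risk_ratio

# The example from the text: RR = 0.75 throughout.
print(round(absolute_risk_reduction(0.80, 0.75), 2))  # 80% -> 60%, i.e. 20 percentage points
print(round(absolute_risk_reduction(0.20, 0.75), 2))  # 20% -> 15%, i.e. 5 percentage points
```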

‘Summary of findings’ tables are built around the assumption of a consistent relative effect. It is therefore important to consider the implications of this effect for different comparator group risks (these can be derived or estimated from a number of sources, see Section 14.1.6.3 ), which may require an assessment of the certainty of evidence for prognostic evidence (Spencer et al 2012, Iorio et al 2015). For any comparator group risk, it is possible to estimate a corresponding intervention group risk (i.e. the absolute risk with the intervention) from the meta-analytic risk ratio or odds ratio. Note that the numbers provided in the ‘Corresponding risk’ column are specific to the ‘risks’ in the adjacent column.

For the meta-analytic risk ratio (RR) and assumed comparator risk (ACR) the corresponding intervention risk is obtained as:

Corresponding intervention risk = ACR × RR

As an example, in Figure 14.1.a , the meta-analytic risk ratio for symptomless deep vein thrombosis (DVT) is RR = 0.10 (95% CI 0.04 to 0.26). Assuming a comparator risk of ACR = 10 per 1000 = 0.01, we obtain:

Corresponding intervention risk = 0.01 × 0.10 = 0.001, i.e. 1 per 1000

For the meta-analytic odds ratio (OR) and assumed comparator risk, ACR, the corresponding intervention risk is obtained as:

Corresponding intervention risk = (OR × ACR) / (1 − ACR + OR × ACR)

Upper and lower confidence limits for the corresponding intervention risk are obtained by replacing RR or OR by their upper and lower confidence limits, respectively (e.g. replacing 0.10 with 0.04, then with 0.26, in the example). Such confidence intervals do not incorporate uncertainty in the assumed comparator risks.
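These conversions can be sketched in code as follows (a minimal illustration with our own function names; the formulas are the RR and OR conversions described above, and confidence limits are obtained by substituting the limits of the relative effect):

```python
# Corresponding intervention risk from an assumed comparator risk (ACR) and a
# meta-analytic relative effect. Helper names are ours, not an established API.

def risk_from_rr(acr: float, rr: float) -> float:
    """Intervention risk implied by a risk ratio: ACR * RR."""
    return acr * rr

def risk_from_or(acr: float, odds_ratio: float) -> float:
    """Intervention risk implied by an odds ratio:
    (OR * ACR) / (1 - ACR + OR * ACR)."""
    return (odds_ratio * acr) / (1 - acr + odds_ratio * acr)

# Worked example from the text: RR = 0.10 (95% CI 0.04 to 0.26), ACR = 10 per 1000.
acr = 10 / 1000
print(risk_from_rr(acr, 0.10) * 1000)   # point estimate, approx. 1 per 1000
print(risk_from_rr(acr, 0.04) * 1000,
      risk_from_rr(acr, 0.26) * 1000)   # CI limits before rounding
```

Note that, as the text says, substituting the confidence limits of RR or OR does not propagate any uncertainty in the assumed comparator risk itself.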

When dealing with risk ratios, it is critical that the same definition of ‘event’ is used as was used for the meta-analysis. For example, if the meta-analysis focused on ‘death’ (as opposed to survival) as the event, then corresponding risks in the ‘Summary of findings’ table must also refer to ‘death’.

In (rare) circumstances in which there is clear rationale to assume a consistent risk difference in the meta-analysis, in principle it is possible to present this for relevant ‘assumed risks’ and their corresponding risks, and to present the corresponding (different) relative effects for each assumed risk.

The risk difference expresses the difference between the ACR and the corresponding intervention risk (or the difference between the experimental and the comparator intervention).

For the meta-analytic risk ratio (RR) and assumed comparator risk (ACR) the corresponding risk difference is obtained as (note that risks can also be expressed using percentage or percentage points):

Risk difference = ACR − (ACR × RR) = ACR × (1 − RR)

As an example, in Figure 14.1.b the meta-analytic risk ratio is 0.41 (95% CI 0.29 to 0.55) for diarrhoea in children less than 5 years of age. Assuming a comparator group risk of 22.3% we obtain:

Risk difference = 22.3% × (1 − 0.41) = 13.2 percentage points (13.2 fewer per 100)

For the meta-analytic odds ratio (OR) and assumed comparator risk (ACR) the absolute risk difference is obtained as (percentage points):

Risk difference = ACR − (OR × ACR) / (1 − ACR + OR × ACR)

Upper and lower confidence limits for the absolute risk difference are obtained by re-running the calculation above while replacing RR or OR by their upper and lower confidence limits, respectively (e.g. replacing 0.41 with 0.29, then with 0.55, in the example). Such confidence intervals do not incorporate uncertainty in the assumed comparator risks.
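The risk-difference calculations above can be sketched as follows (our own helper names, for illustration only; the RR case reduces to ACR × (1 − RR)):

```python
# Absolute risk difference from an assumed comparator risk (ACR) and a
# relative effect (RR or OR). Helper names are ours, for illustration only.

def rd_from_rr(acr: float, rr: float) -> float:
    """Risk difference implied by a risk ratio: ACR - ACR * RR = ACR * (1 - RR)."""
    return acr * (1 - rr)

def rd_from_or(acr: float, odds_ratio: float) -> float:
    """Risk difference implied by an odds ratio: ACR minus the corresponding
    intervention risk (OR * ACR) / (1 - ACR + OR * ACR)."""
    return acr - (odds_ratio * acr) / (1 - acr + odds_ratio * acr)

# Worked example from the text: RR = 0.41 (95% CI 0.29 to 0.55), ACR = 22.3%.
for rr in (0.41, 0.55, 0.29):
    print(round(rd_from_rr(0.223, rr) * 100, 1))  # percentage points fewer
```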

14.1.5.2 Time-to-event outcomes

Time-to-event outcomes measure whether and when a particular event (e.g. death) occurs (van Dalen et al 2007). The impact of the experimental intervention relative to the comparison group on time-to-event outcomes is usually measured using a hazard ratio (HR) (see Chapter 6, Section 6.8.1 ).

A hazard ratio expresses a relative effect estimate. It may be used in various ways to obtain absolute risks and other interpretable quantities for a specific population. Here we describe how to re-express hazard ratios in terms of: (i) absolute risk of event-free survival within a particular period of time; (ii) absolute risk of an event within a particular period of time; and (iii) median time to the event. All methods are built on an assumption of consistent relative effects (i.e. that the hazard ratio does not vary over time).

(i) Absolute risk of event-free survival within a particular period of time Event-free survival (e.g. overall survival) is commonly reported by individual studies. To obtain absolute effects for time-to-event outcomes measured as event-free survival, the summary HR can be used in conjunction with an assumed proportion of patients who are event-free in the comparator group (Tierney et al 2007). This proportion of patients will be specific to a period of time of observation. However, it is not strictly necessary to specify this period of time. For instance, a proportion of 50% of event-free patients might apply to patients with a high event rate observed over 1 year, or to patients with a low event rate observed over 2 years.

Probability of event-free survival with intervention = ACR^HR

As an example, suppose the meta-analytic hazard ratio is 0.42 (95% CI 0.25 to 0.72). Assuming a comparator group risk of event-free survival (e.g. for overall survival people being alive) at 2 years of ACR = 900 per 1000 = 0.9 we obtain:

Probability of event-free survival with intervention = 0.9^0.42 ≈ 0.956

so that 956 per 1000 people will be alive with the experimental intervention at 2 years. The derivation of the risk should be explained in a comment or footnote.

(ii) Absolute risk of an event within a particular period of time To obtain this absolute effect, again the summary HR can be used (Tierney et al 2007):

Probability of event with intervention = 1 − (1 − ACR)^HR

In the example, suppose we assume a comparator group risk of events (e.g. for mortality, people being dead) at 2 years of ACR = 100 per 1000 = 0.1. We obtain:

Probability of event with intervention = 1 − (1 − 0.1)^0.42 = 1 − 0.956 ≈ 0.044

so that 44 per 1000 people will be dead with the experimental intervention at 2 years.

(iii) Median time to the event Instead of absolute numbers, the time to the event in the intervention and comparison groups can be expressed as median survival time in months or years. To obtain median survival time the pooled HR can be applied to an assumed median survival time in the comparator group (Tierney et al 2007):

Median survival time with intervention = median survival time with comparator / HR

In the example, assuming a comparator group median survival time of 80 months, we obtain:

Median survival time with intervention = 80 months / 0.42 ≈ 190 months

For all three of these options for re-expressing results of time-to-event analyses, upper and lower confidence limits for the corresponding intervention risk are obtained by replacing HR by its upper and lower confidence limits, respectively (e.g. replacing 0.42 with 0.25, then with 0.72, in the example). Again, as for dichotomous outcomes, such confidence intervals do not incorporate uncertainty in the assumed comparator group risks. This is of special concern for long-term survival with a low or moderate mortality rate and a corresponding high number of censored patients (i.e. a low number of patients under risk and a high censoring rate).
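Under the same constant-hazard-ratio assumption, the three conversions above can be sketched as follows (our own helper names, not from GRADEpro):

```python
# Re-expressing a hazard ratio (HR) as absolute quantities, assuming the HR is
# constant over time. Helper names are ours, for illustration only.

def event_free_survival(acr_event_free: float, hr: float) -> float:
    """(i) Probability of remaining event-free with the intervention: ACR ** HR."""
    return acr_event_free ** hr

def event_risk(acr_event: float, hr: float) -> float:
    """(ii) Probability of the event with the intervention: 1 - (1 - ACR) ** HR."""
    return 1 - (1 - acr_event) ** hr

def median_time_to_event(comparator_median: float, hr: float) -> float:
    """(iii) Median time to event with the intervention: comparator median / HR."""
    return comparator_median / hr

# Worked examples from the text, HR = 0.42 (95% CI 0.25 to 0.72):
print(event_free_survival(0.9, 0.42))   # proportion alive at 2 years
print(event_risk(0.1, 0.42))            # proportion dead at 2 years
print(median_time_to_event(80, 0.42))   # median survival, approx. 190 months
```

As in the text, confidence limits come from substituting the limits of the HR (0.25 and 0.72), and uncertainty in the assumed comparator risk is not propagated.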

14.1.6 Detailed contents of a ‘Summary of findings’ table

14.1.6.1 Table title and header

The title of each ‘Summary of findings’ table should specify the healthcare question, framed in terms of the population and making it clear exactly what comparison of interventions is made. In Figure 14.1.a , the population is people taking long aeroplane flights, the intervention is compression stockings, and the control is no compression stockings.

The first rows of each ‘Summary of findings’ table should provide the following ‘header’ information:

Patients or population This further clarifies the population (and possibly the subpopulations) of interest and ideally the magnitude of risk of the most crucial adverse outcome at which an intervention is directed. For instance, people on a long-haul flight may be at different risks for DVT; those using selective serotonin reuptake inhibitors (SSRIs) might be at different risk for side effects; while those with atrial fibrillation may be at low (< 1%), moderate (1% to 4%) or high (> 4%) yearly risk of stroke.

Setting This should state any specific characteristics of the settings of the healthcare question that might limit the applicability of the summary of findings to other settings (e.g. primary care in Europe and North America).

Intervention The experimental intervention.

Comparison The comparator intervention (including no specific intervention).

14.1.6.2 Outcomes

The rows of a ‘Summary of findings’ table should include all desirable and undesirable health outcomes (listed in order of importance) that are essential for decision making, up to a maximum of seven outcomes. If there are more outcomes in the review, review authors will need to omit the less important outcomes from the table; the decision about which outcomes are critical or important to the review should be made during protocol development (see Chapter 3 ). Review authors should provide time frames for the measurement of the outcomes (e.g. 90 days or 12 months) and the type of instrument scores (e.g. ranging from 0 to 100).

Note that review authors should include the pre-specified critical and important outcomes in the table whether data are available or not. However, they should be alert to the possibility that the importance of an outcome (e.g. a serious adverse effect) may only become known after the protocol was written or the analysis was carried out, and should take appropriate actions to include these in the ‘Summary of findings’ table.

The ‘Summary of findings’ table can include effects in subgroups of the population for different comparator risks and effect sizes separately. For instance, in Figure 14.1.b effects are presented for children younger and older than 5 years separately. Review authors may also opt to produce separate ‘Summary of findings’ tables for different populations.

Review authors should include serious adverse events, but it might be possible to combine minor adverse events as a single outcome, and describe this in an explanatory footnote (note that it is not appropriate to add events together unless they are independent, that is, a participant who has experienced one adverse event has an unaffected chance of experiencing the other adverse event).

Outcomes measured at multiple time points represent a particular problem. In general, to keep the table simple, review authors should present multiple time points only for outcomes critical to decision making, where either the result or the decision made are likely to vary over time. The remainder should be presented at a common time point where possible.

Review authors can present continuous outcome measures in the ‘Summary of findings’ table and should endeavour to make these interpretable to the target audience. This requires that the units are clear and readily interpretable, for example, days of pain, or frequency of headache, and the name and scale of any measurement tools used should be stated (e.g. a Visual Analogue Scale, ranging from 0 to 100). However, many measurement instruments are not readily interpretable by non-specialist clinicians or patients, for example, points on a Beck Depression Inventory or quality of life score. For these, a more interpretable presentation might involve converting a continuous to a dichotomous outcome, such as >50% improvement (see Chapter 15, Section 15.5 ).

14.1.6.3 Best estimate of risk with comparator intervention

Review authors should provide up to three typical risks for participants receiving the comparator intervention. For dichotomous outcomes, we recommend that these be presented in the form of the number of people experiencing the event per 100 or 1000 people (natural frequency) depending on the frequency of the outcome. For continuous outcomes, this would be stated as a mean or median value of the outcome measured.
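For example, a small helper (ours, purely illustrative) can format a dichotomous risk as a natural frequency per 100 or per 1000 people:

```python
# Illustrative formatting of a comparator risk as a natural frequency.
# The threshold choice (per 100 for common outcomes, per 1000 for rarer
# ones) is our own assumption, not a Cochrane rule.

def natural_frequency(risk: float) -> str:
    denominator = 100 if risk >= 0.1 else 1000
    return f"{round(risk * denominator)} per {denominator}"

print(natural_frequency(0.223))  # a common outcome: "22 per 100"
print(natural_frequency(0.003))  # a rare outcome: "3 per 1000"
```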

Estimated or assumed comparator intervention risks could be based on assessments of typical risks in different patient groups derived from the review itself, individual representative studies in the review, or risks derived from a systematic review of prognosis studies or other sources of evidence which may in turn require an assessment of the certainty for the prognostic evidence (Spencer et al 2012, Iorio et al 2015). Ideally, risks would reflect groups that clinicians can easily identify on the basis of their presenting features.

An explanatory footnote should specify the source or rationale for each comparator group risk, including the time period to which it corresponds where appropriate. In Figure 14.1.a , clinicians can easily differentiate individuals with risk factors for deep venous thrombosis from those without. If there is known to be little variation in baseline risk, then review authors may use the median comparator group risk across studies. If typical risks are not known, one option is to choose risks from the included studies, using the second-highest observed risk for a high-risk population and the second-lowest for a low-risk population.

14.1.6.4 Risk with intervention

For dichotomous outcomes, review authors should provide a corresponding absolute risk for each comparator group risk, along with a confidence interval. This absolute risk with the (experimental) intervention will usually be derived from the meta-analysis result presented in the relative effect column (see Section 14.1.6.6 ). Formulae are provided in Section 14.1.5 . Review authors should present the absolute effect in the same format as the risks with comparator intervention (see Section 14.1.6.3 ), for example as the number of people experiencing the event per 1000 people.

For continuous outcomes, a difference in means or standardized difference in means should be presented with its confidence interval. These will typically be obtained directly from a meta-analysis. Explanatory text should be used to clarify the meaning, as in Figures 14.1.a and 14.1.b .

14.1.6.5 Risk difference

For dichotomous outcomes, the risk difference can be provided using one of the ‘Summary of findings’ table formats as an additional option (see Figure 14.1.b ). This risk difference expresses the difference between the experimental and comparator intervention and will usually be derived from the meta-analysis result presented in the relative effect column (see Section 14.1.6.6 ). Formulae are provided in Section 14.1.5 . Review authors should present the risk difference in the same format as assumed and corresponding risks with comparator intervention (see Section 14.1.6.3 ); for example, as the number of people experiencing the event per 1000 people or as percentage points if the assumed and corresponding risks are expressed as percentages.
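In the risk-ratio case the risk difference follows directly from the assumed comparator risk, as in this illustrative sketch (function name ours):

```python
def risk_difference_per_1000(assumed_per_1000, rr):
    """Risk difference (intervention minus comparator) per 1000 people,
    derived from an assumed comparator risk and a risk ratio (sketch)."""
    p = assumed_per_1000 / 1000
    return round(1000 * p * (rr - 1))

# Assumed comparator risk 100 per 1000 with RR 0.8: 20 fewer per 1000.
risk_difference_per_1000(100, 0.8)  # -> -20
```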

For continuous outcomes, if the ‘Summary of findings’ table includes this option, the mean difference can be presented here and the ‘corresponding risk’ column left blank (see Figure 14.1.b ).

14.1.6.6 Relative effect (95% CI)

The relative effect will typically be a risk ratio or odds ratio (or occasionally a hazard ratio) with its accompanying 95% confidence interval, obtained from a meta-analysis performed on the basis of the same effect measure. Risk ratios and odds ratios are similar when the comparator intervention risks are low and effects are small, but may differ considerably when comparator group risks increase. The meta-analysis may involve an assumption of either fixed or random effects, depending on what the review authors consider appropriate, and implying that the relative effect is either an estimate of the effect of the intervention, or an estimate of the average effect of the intervention across studies, respectively.
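The divergence between the two measures as the comparator risk rises can be seen with the standard conversion from an odds ratio to the risk ratio it implies at a given comparator group risk (illustrative sketch; function name ours):

```python
def or_to_rr(odds_ratio, comparator_risk):
    """Risk ratio implied by an odds ratio at a given comparator group
    risk (expressed as a proportion). Standard conversion formula."""
    return odds_ratio / (1 - comparator_risk + comparator_risk * odds_ratio)

# An OR of 2.0 approximates the RR at a low comparator risk (5%),
# but not at a high comparator risk (50%):
round(or_to_rr(2.0, 0.05), 2)  # -> 1.9
round(or_to_rr(2.0, 0.50), 2)  # -> 1.33
```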

14.1.6.7 Number of participants (studies)

This column should include the number of participants assessed in the included studies for each outcome and the corresponding number of studies that contributed these participants.

14.1.6.8 Certainty of the evidence (GRADE)

Review authors should comment on the certainty of the evidence (also known as quality of the body of evidence or confidence in the effect estimates). Review authors should use the specific evidence grading system developed by the GRADE Working Group (Atkins et al 2004, Guyatt et al 2008, Guyatt et al 2011a), which is described in detail in Section 14.2 . The GRADE approach categorizes the certainty in a body of evidence as ‘high’, ‘moderate’, ‘low’ or ‘very low’ by outcome. This is a result of judgement, but the judgement process operates within a transparent structure. As an example, the certainty would be ‘high’ if the summary were of several randomized trials with low risk of bias, but the rating of certainty becomes lower if there are concerns about risk of bias, inconsistency, indirectness, imprecision or publication bias. Judgements other than of ‘high’ certainty should be made transparent using explanatory footnotes or the ‘Comments’ column in the ‘Summary of findings’ table (see Section 14.1.6.10 ).

14.1.6.9 Comments

The aim of the ‘Comments’ field is to help interpret the information or data identified in the row. For example, this may be on the validity of the outcome measure or the presence of variables that are associated with the magnitude of effect. Important caveats about the results should be flagged here. Not all rows will need comments, and it is best to leave a blank if there is nothing warranting a comment.

14.1.6.10 Explanations

Detailed explanations should be included as footnotes to support the judgements in the ‘Summary of findings’ table, such as the overall GRADE assessment. The explanations should describe the rationale for important aspects of the content. Table 14.1.a lists guidance for useful explanations. Explanations should be concise, informative, relevant, easy to understand and accurate. If explanations cannot be sufficiently described in footnotes, review authors should provide further details of the issues in the Results and Discussion sections of the review.

Table 14.1.a Guidance for providing useful explanations in ‘Summary of findings’ (SoF) tables. Adapted from Santesso et al (2016)

Among the guidance: describe the statistical measures of inconsistency (e.g. I², Chi², Tau²), the overlap of confidence intervals, or the similarity of point estimates; where heterogeneity is present, describe it as considerable, substantial, moderate or not important.

14.2 Assessing the certainty or quality of a body of evidence

14.2.1 The GRADE approach

The Grades of Recommendation, Assessment, Development and Evaluation Working Group (GRADE Working Group) has developed a system for grading the certainty of evidence (Schünemann et al 2003, Atkins et al 2004, Schünemann et al 2006, Guyatt et al 2008, Guyatt et al 2011a). Over 100 organizations including the World Health Organization (WHO), the American College of Physicians, the American Society of Hematology (ASH), the Canadian Agency for Drugs and Technologies in Health (CADTH) and the National Institute for Health and Care Excellence (NICE) in the UK have adopted the GRADE system ( www.gradeworkinggroup.org ).

Cochrane has also formally adopted this approach, and all Cochrane Reviews should use GRADE to evaluate the certainty of evidence for important outcomes (see MECIR Box 14.2.a ).

MECIR Box 14.2.a Relevant expectations for conduct of intervention reviews

Assessing the certainty of the body of evidence ( )

GRADE is the most widely used approach for summarizing confidence in effects of interventions by outcome across studies. It is preferable to use the online GRADEpro tool, and to use it as described in the help system of the software. This should help to ensure that author teams are accessing the same information to inform their judgements. Ideally, two people working independently should assess the certainty of the body of evidence and reach a consensus view on any downgrading decisions. The five GRADE considerations should be addressed irrespective of whether the review includes a ‘Summary of findings’ table. It is helpful to draw on this information in the Discussion, in the Authors’ conclusions and to convey the certainty in the evidence in the Abstract and Plain language summary.

Justifying assessments of the certainty of the body of evidence ( )

The adoption of a structured approach ensures transparency in formulating an interpretation of the evidence, and the result is more informative to the user.

For systematic reviews, the GRADE approach defines the certainty of a body of evidence as the extent to which one can be confident that an estimate of effect or association is close to the quantity of specific interest. Assessing the certainty of a body of evidence involves consideration of within- and across-study risk of bias (limitations in study design and execution or methodological quality), inconsistency (or heterogeneity), indirectness of evidence, imprecision of the effect estimates and risk of publication bias (see Section 14.2.2 ), as well as domains that may increase our confidence in the effect estimate (as described in Section 14.2.3 ). The GRADE system entails an assessment of the certainty of a body of evidence for each individual outcome. Judgements about the domains that determine the certainty of evidence should be described in the results or discussion section and as part of the ‘Summary of findings’ table.

The GRADE approach specifies four levels of certainty ( Figure 14.2.a ). For interventions, including diagnostic and other tests that are evaluated as interventions (Schünemann et al 2008b, Schünemann et al 2008a, Balshem et al 2011, Schünemann et al 2012), the starting point for rating the certainty of evidence is categorized into two types:

  • randomized trials; and
  • non-randomized studies of interventions (NRSI), including observational studies (such as cohort studies, case-control studies, cross-sectional studies, case series and case reports, although not all of these designs are usually included in Cochrane Reviews).

There are many instances in which review authors rely on information from NRSI, in particular to evaluate potential harms (see Chapter 24 ). In addition, review authors can obtain relevant data from both randomized trials and NRSI, with each type of evidence complementing the other (Schünemann et al 2013).

In GRADE, a body of evidence from randomized trials begins with a high-certainty rating while a body of evidence from NRSI begins with a low-certainty rating. The lower rating with NRSI is the result of the potential bias induced by the lack of randomization (i.e. confounding and selection bias).

However, when using the new Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool (Sterne et al 2016), an assessment tool that covers the risk of bias due to lack of randomization, all studies may start as high certainty of the evidence (Schünemann et al 2018). The approach of starting all study designs (including NRSI) as high certainty does not conflict with the initial GRADE approach of starting the rating of NRSI as low certainty evidence. This is because a body of evidence from NRSI should generally be downgraded by two levels due to the inherent risk of bias associated with the lack of randomization, namely confounding and selection bias. Not downgrading NRSI from high to low certainty needs transparent and detailed justification for what mitigates concerns about confounding and selection bias (Schünemann et al 2018). Very few examples of where not rating down by two levels is appropriate currently exist.

The highest certainty rating is a body of evidence when there are no concerns in any of the GRADE factors listed in Figure 14.2.a . Review authors often downgrade evidence to moderate, low or even very low certainty evidence, depending on the presence of the five factors in Figure 14.2.a . Usually, certainty rating will fall by one level for each factor, up to a maximum of three levels for all factors. If there are very severe problems for any one domain (e.g. when assessing risk of bias, all studies were unconcealed, unblinded and lost over 50% of their patients to follow-up), evidence may fall by two levels due to that factor alone. It is not possible to rate lower than ‘very low certainty’ evidence.
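The levels-based arithmetic described above can be sketched mechanically as follows. This is a simplification for illustration only (real GRADE ratings are judgements, not calculations), and the names are ours:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def certainty(randomized, downgrades, upgrades=0):
    """Start at 'high' for randomized trials and 'low' for NRSI, move down
    one level per serious concern (two for very serious ones) and up per
    upgrading domain, bounded at 'very low' and 'high'."""
    start = 3 if randomized else 1
    return LEVELS[max(0, min(3, start - downgrades + upgrades))]

certainty(True, 2)      # -> 'low'    (randomized, two serious concerns)
certainty(False, 0, 1)  # -> 'moderate' (NRSI upgraded, e.g. large effect)
```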

Review authors will generally grade evidence from sound non-randomized studies as low certainty, even if ROBINS-I is used. If, however, such studies yield large effects and there is no obvious bias explaining those effects, review authors may rate the evidence as moderate or – if the effect is large enough – even as high certainty ( Figure 14.2.a ). The very low certainty level is appropriate for, but is not limited to, studies with critical problems and unsystematic clinical observations (e.g. case series or case reports).

Figure 14.2.a Levels of the certainty of a body of evidence in the GRADE approach. *Upgrading criteria are usually applicable to non-randomized studies only (but exceptions exist).


 


 


 

 

High certainty: ⊕⊕⊕⊕
Moderate certainty: ⊕⊕⊕◯
Low certainty: ⊕⊕◯◯
Very low certainty: ⊕◯◯◯

14.2.2 Domains that can lead to decreasing the certainty level of a body of evidence   

We now describe in more detail the five reasons (or domains) for downgrading the certainty of a body of evidence for a specific outcome. In each case, if no reason is found for downgrading the evidence, it should be classified as ‘no limitation or not serious’ (not important enough to warrant downgrading). If a reason is found for downgrading the evidence, it should be classified as ‘serious’ (downgrading the certainty rating by one level) or ‘very serious’ (downgrading the certainty rating by two levels). For non-randomized studies assessed with ROBINS-I, rating down by three levels should be classified as ‘extremely serious’.

(1) Risk of bias or limitations in the detailed design and implementation

Our confidence in an estimate of effect decreases if studies suffer from major limitations that are likely to result in a biased assessment of the intervention effect. For randomized trials, these methodological limitations include failure to generate a random sequence, lack of allocation sequence concealment, lack of blinding (particularly with subjective outcomes that are highly susceptible to biased assessment), a large loss to follow-up or selective reporting of outcomes. Chapter 8 provides a discussion of study-level assessments of risk of bias in the context of a Cochrane Review, and proposes an approach to assessing the risk of bias for an outcome across studies as ‘Low’ risk of bias, ‘Some concerns’ and ‘High’ risk of bias for randomized trials. Levels of ‘Low’, ‘Moderate’, ‘Serious’ and ‘Critical’ risk of bias arise for non-randomized studies assessed with ROBINS-I ( Chapter 25 ). These assessments should feed directly into this GRADE domain. In particular, ‘Low’ risk of bias would indicate ‘no limitation’; ‘Some concerns’ would indicate either ‘no limitation’ or ‘serious limitation’; and ‘High’ risk of bias would indicate either ‘serious limitation’ or ‘very serious limitation’. ‘Critical’ risk of bias on ROBINS-I would indicate extremely serious limitations in GRADE. Review authors should use their judgement to decide between alternative categories, depending on the likely magnitude of the potential biases.

Every study addressing a particular outcome will differ, to some degree, in the risk of bias. Review authors should make an overall judgement on whether the certainty of evidence for an outcome warrants downgrading on the basis of study limitations. The assessment of study limitations should apply to the studies contributing to the results in the ‘Summary of findings’ table, rather than to all studies that could potentially be included in the analysis. We have argued in Chapter 7, Section 7.6.2 , that the primary analysis should be restricted to studies at low (or low and unclear) risk of bias where possible.

Table 14.2.a presents the judgements that must be made in going from assessments of the risk of bias to judgements about study limitations for each outcome included in a ‘Summary of findings’ table. A rating of high certainty evidence can be achieved only when most evidence comes from studies that met the criteria for low risk of bias. For example, of the 22 studies addressing the impact of beta-blockers on mortality in patients with heart failure, most probably or certainly used concealed allocation of the sequence, all blinded at least some key groups and follow-up of randomized patients was almost complete (Brophy et al 2001). The certainty of evidence might be downgraded by one level when most of the evidence comes from individual studies either with a crucial limitation for one item, or with some limitations for multiple items. An example of very serious limitations, warranting downgrading by two levels, is provided by evidence on surgery versus conservative treatment in the management of patients with lumbar disc prolapse (Gibson and Waddell 2007). We are uncertain of the benefit of surgery in reducing symptoms after one year or longer, because the one study included in the analysis had inadequate concealment of the allocation sequence and the outcome was assessed using a crude rating by the surgeon without blinding.

(2) Unexplained heterogeneity or inconsistency of results

When studies yield widely differing estimates of effect (heterogeneity or variability in results), investigators should look for robust explanations for that heterogeneity. For instance, drugs may have larger relative effects in sicker populations or when given in larger doses. A detailed discussion of heterogeneity and its investigation is provided in Chapter 10, Section 10.10 and Section 10.11 . If an important modifier exists, with good evidence that important outcomes are different in different subgroups (which would ideally be pre-specified), then a separate ‘Summary of findings’ table may be considered for a separate population. For instance, a separate ‘Summary of findings’ table would be used for carotid endarterectomy in symptomatic patients with high grade stenosis (70% to 99%) in which the intervention is, in the hands of the right surgeons, beneficial, and another (if review authors considered it relevant) for asymptomatic patients with low grade stenosis (less than 30%) in which surgery appears harmful (Orrapin and Rerkasem 2017). When heterogeneity exists and affects the interpretation of results, but review authors are unable to identify a plausible explanation with the data available, the certainty of the evidence decreases.
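One statistic commonly consulted when judging inconsistency is I², the proportion of variability in effect estimates attributable to heterogeneity rather than chance. Its definition from Cochran's Q can be illustrated as:

```python
def i_squared(q, k):
    """I² (%) from Cochran's Q statistic and the number of studies k:
    I² = max(0, (Q - df) / Q) * 100, where df = k - 1."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100

# Q = 30 across 11 studies (10 degrees of freedom):
round(i_squared(30.0, 11), 1)  # -> 66.7
```

A value of 0% (as in the Table 14.3.a example) indicates that the observed variability is no more than would be expected by chance.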

(3) Indirectness of evidence

Two types of indirectness are relevant. First, a review comparing the effectiveness of alternative interventions (say A and B) may find that randomized trials are available, but they have compared A with placebo and B with placebo. Thus, the evidence is restricted to indirect comparisons between A and B. Where indirect comparisons are undertaken within a network meta-analysis context, GRADE for network meta-analysis should be used (see Chapter 11, Section 11.5 ).

Second, a review may find randomized trials that meet eligibility criteria but address a restricted version of the main review question in terms of population, intervention, comparator or outcomes. For example, suppose that in a review addressing an intervention for secondary prevention of coronary heart disease, most identified studies happened to be in people who also had diabetes. Then the evidence may be regarded as indirect in relation to the broader question of interest because the population is primarily related to people with diabetes. The opposite scenario can equally apply: a review addressing the effect of a preventive strategy for coronary heart disease in people with diabetes may consider studies in people without diabetes to provide relevant, albeit indirect, evidence. This would be particularly likely if investigators had conducted few if any randomized trials in the target population (e.g. people with diabetes). Other sources of indirectness may arise from interventions studied (e.g. if in all included studies a technical intervention was implemented by expert, highly trained specialists in specialist centres, then evidence on the effects of the intervention outside these centres may be indirect), comparators used (e.g. if the comparator groups received an intervention that is less effective than standard treatment in most settings) and outcomes assessed (e.g. indirectness due to surrogate outcomes when data on patient-important outcomes are not available, or when investigators seek data on quality of life but only symptoms are reported). Review authors should make judgements transparent when they believe downgrading is justified, based on differences in anticipated effects in the group of primary interest. Review authors may be aided and increase transparency of their judgements about indirectness if they use Table 14.2.b available in the GRADEpro GDT software (Schünemann et al 2013).

(4) Imprecision of results

When studies include few participants or few events, and thus have wide confidence intervals, review authors can lower their rating of the certainty of the evidence. The confidence intervals included in the ‘Summary of findings’ table will provide readers with information that allows them to make, to some extent, their own rating of precision. Review authors can use a calculation of the optimal information size (OIS) or review information size (RIS), similar to sample size calculations, to make judgements about imprecision (Guyatt et al 2011b, Schünemann 2016). The OIS or RIS is calculated on the basis of the number of participants required for an adequately powered individual study. If the 95% confidence interval excludes a risk ratio (RR) of 1.0, and the total number of events or patients exceeds the OIS criterion, precision is adequate. If the 95% CI includes appreciable benefit or harm (an RR of under 0.75 or over 1.25 is often suggested as a very rough guide) downgrading for imprecision may be appropriate even if OIS criteria are met (Guyatt et al 2011b, Schünemann 2016).
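A rough sketch of an OIS calculation for a dichotomous outcome, using the conventional two-proportion sample-size formula (the parameter values are illustrative, and in practice the calculators referenced by GRADE guidance would be used):

```python
from math import ceil
from statistics import NormalDist

def optimal_information_size(p_control, rrr, alpha=0.05, power=0.80):
    """Total participants across both arms of an adequately powered trial
    detecting a relative risk reduction `rrr` from a control group risk
    `p_control`. Standard two-proportion formula; illustrative sketch."""
    p1, p2 = p_control, p_control * (1 - rrr)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_beta = NormalDist().inv_cdf(power)
    n_per_arm = ((z_alpha + z_beta) ** 2
                 * (p1 * (1 - p1) + p2 * (1 - p2))
                 / (p1 - p2) ** 2)
    return 2 * ceil(n_per_arm)

# Control risk 20%, relative risk reduction 25%, alpha 0.05, power 80%:
optimal_information_size(0.20, 0.25)  # -> 1806
```

If the total number of participants contributing to an outcome falls well short of such a figure, downgrading for imprecision may be warranted even when the confidence interval excludes no effect.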

(5) High probability of publication bias

The certainty of evidence level may be downgraded if investigators fail to report studies on the basis of results (typically those that show no effect: publication bias) or outcomes (typically those that may be harmful or for which no effect was observed: selective outcome non-reporting bias). Selective reporting of outcomes from among multiple outcomes measured is assessed at the study level as part of the assessment of risk of bias (see Chapter 8, Section 8.7 ), so for the studies contributing to the outcome in the ‘Summary of findings’ table this is addressed by domain 1 above (limitations in the design and implementation). If a large number of studies included in the review do not contribute to an outcome, or if there is evidence of publication bias, the certainty of the evidence may be downgraded. Chapter 13 provides a detailed discussion of reporting biases, including publication bias, and how it may be tackled in a Cochrane Review. A prototypical situation that may elicit suspicion of publication bias is when published evidence includes a number of small studies, all of which are industry-funded (Bhandari et al 2004). For example, 14 studies of flavonoids in patients with haemorrhoids have shown apparent large benefits, but enrolled a total of only 1432 patients (i.e. each study enrolled relatively few patients) (Alonso-Coello et al 2006). The heavy involvement of sponsors in most of these studies raises questions of whether unpublished studies that suggest no benefit exist (publication bias).

A particular body of evidence can suffer from problems associated with more than one of the five factors listed here, and the greater the problems, the lower the certainty of evidence rating that should result. One could imagine a situation in which randomized trials were available, but all or virtually all of these limitations would be present, and in serious form. A very low certainty of evidence rating would result.

Table 14.2.a Further guidelines for domain 1 (of 5) in a GRADE assessment: going from assessments of risk of bias in studies to judgements about study limitations for main outcomes across studies

  • Low risk of bias: Most information is from results at low risk of bias. Plausible bias is unlikely to seriously alter the results. No apparent limitations: no serious limitations, do not downgrade.

  • Some concerns: Most information is from results at low risk of bias or with some concerns. Plausible bias raises some doubt about the results. If potential limitations are unlikely to lower confidence in the estimate of effect: no serious limitations, do not downgrade. If potential limitations are likely to lower confidence in the estimate of effect: serious limitations, downgrade one level.

  • High risk of bias: The proportion of information from results at high risk of bias is sufficient to affect the interpretation of results. Plausible bias seriously weakens confidence in the results. If there is a crucial limitation for one criterion, or some limitations for multiple criteria, sufficient to lower confidence in the estimate of effect: serious limitations, downgrade one level. If there is a crucial limitation for one or more criteria sufficient to substantially lower confidence in the estimate of effect: very serious limitations, downgrade two levels.

Table 14.2.b Judgements about indirectness by outcome (available in GRADEpro GDT)

 

Each of four domains (Population, Intervention, Comparator, Direct comparison) is judged on the scale Yes / Probably yes / Probably no / No, followed by a final judgement about indirectness across domains.

 

14.2.3 Domains that may lead to increasing the certainty level of a body of evidence

Although NRSI and downgraded randomized trials will generally yield a low rating for certainty of evidence, there will be unusual circumstances in which review authors could ‘upgrade’ such evidence to moderate or even high certainty ( Table 14.3.a ).

  • Large effects: On rare occasions when methodologically well-done observational studies yield large, consistent and precise estimates of the magnitude of an intervention effect, one may be particularly confident in the results. A large estimated effect (e.g. RR >2 or RR <0.5) in the absence of plausible confounders, or a very large effect (e.g. RR >5 or RR <0.2) in studies with no major threats to validity, might qualify for this. In these situations, while the NRSI may possibly have provided an over-estimate of the true effect, the weak study design may not explain all of the apparent observed benefit. Thus, despite reservations based on the observational study design, review authors are confident that the effect exists. The magnitude of the effect in these studies may move the assigned certainty of evidence from low to moderate (if the effect is large in the absence of other methodological limitations). For example, a meta-analysis of observational studies showed that bicycle helmets reduce the risk of head injuries in cyclists by a large margin (odds ratio (OR) 0.31, 95% CI 0.26 to 0.37) (Thompson et al 2000). This large effect, in the absence of obvious bias that could create the association, suggests a rating of moderate-certainty evidence. Note: GRADE guidance suggests the possibility of rating up one level for a large effect if the relative effect is greater than 2.0. However, if the point estimate of the relative effect is greater than 2.0, but the confidence interval is appreciably below 2.0, then some hesitation would be appropriate in the decision to rate up for a large effect. Another situation allows inference of a strong association without a formal comparative study. Consider the question of the impact of routine colonoscopy versus no screening for colon cancer on the rate of perforation associated with colonoscopy.
Here, a large series of representative patients undergoing colonoscopy may provide high certainty evidence about the risk of perforation associated with colonoscopy. When the risk of the event among patients receiving the relevant comparator is known to be near 0 (i.e. we are certain that the incidence of spontaneous colon perforation in patients not undergoing colonoscopy is extremely low), case series or cohort studies of representative patients can provide high certainty evidence of adverse effects associated with an intervention, thereby allowing us to infer a strong association from even a limited number of events.
  • Dose-response: The presence of a dose-response gradient may increase our confidence in the findings of observational studies and thereby enhance the assigned certainty of evidence. For example, our confidence in the result of observational studies that show an increased risk of bleeding in patients who have supratherapeutic anticoagulation levels is increased by the observation that there is a dose-response gradient between the length of time needed for blood to clot (as measured by the international normalized ratio (INR)) and an increased risk of bleeding (Levine et al 2004). A systematic review of NRSI investigating the effect of cyclooxygenase-2 inhibitors on cardiovascular events found that the summary estimate (RR) with rofecoxib was 1.33 (95% CI 1.00 to 1.79) with doses less than 25 mg/d, and 2.19 (95% CI 1.64 to 2.91) with doses more than 25 mg/d. Although residual confounding is likely to exist in the NRSI that address this issue, the existence of a dose-response gradient and the large apparent effect of higher doses of rofecoxib markedly increase our strength of inference that the association cannot be explained by residual confounding, and is therefore likely to be both causal and, at high levels of exposure, substantial. Note: GRADE guidance suggests the possibility of rating up one level for a large effect if the relative effect is greater than 2.0. Here, the fact that the point estimate of the relative effect is greater than 2.0, but the confidence interval is appreciably below 2.0, might make some hesitate in the decision to rate up for a large effect.
  • Plausible confounding: On occasion, all plausible biases from randomized or non-randomized studies may be working to under-estimate an apparent intervention effect. For example, if only sicker patients receive an experimental intervention or exposure, yet they still fare better, it is likely that the actual intervention or exposure effect is larger than the data suggest. For instance, a rigorous systematic review of observational studies including a total of 38 million patients demonstrated higher death rates in private for-profit versus private not-for-profit hospitals (Devereaux et al 2002). One possible bias relates to different disease severity in patients in the two hospital types. It is likely, however, that patients in the not-for-profit hospitals were sicker than those in the for-profit hospitals. Thus, to the extent that residual confounding existed, it would bias results against the not-for-profit hospitals. The second likely bias was the possibility that higher numbers of patients with excellent private insurance coverage could lead to a hospital having more resources and a spill-over effect that would benefit those without such coverage. Since for-profit hospitals are likely to admit a larger proportion of such well-insured patients than not-for-profit hospitals, the bias is once again against the not-for-profit hospitals. Since the plausible biases would all diminish the demonstrated intervention effect, one might consider the evidence from these observational studies as moderate rather than low certainty. A parallel situation exists when observational studies have failed to demonstrate an association, but all plausible biases would have increased an intervention effect. This situation will usually arise in the exploration of apparent harmful effects. For example, because the hypoglycaemic drug phenformin causes lactic acidosis, the related agent metformin was under suspicion for the same toxicity.
Nevertheless, very large observational studies have failed to demonstrate an association (Salpeter et al 2007). Given the likelihood that clinicians would be more alert to lactic acidosis in the presence of the agent and over-report its occurrence, one might consider this moderate, or even high certainty, evidence refuting a causal relationship between typical therapeutic doses of metformin and lactic acidosis.
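The rule-of-thumb thresholds mentioned above for upgrading on the basis of a large effect can be sketched as follows (function name ours; the caveats about confidence intervals and plausible confounding still apply and are not captured here):

```python
def large_effect_upgrade(rr):
    """Levels to rate up for magnitude of effect: 2 for a very large
    effect (RR > 5 or RR < 0.2), 1 for a large effect (RR > 2 or
    RR < 0.5), otherwise 0. Thresholds taken from the text."""
    if rr > 5 or rr < 0.2:
        return 2
    if rr > 2 or rr < 0.5:
        return 1
    return 0

# Bicycle-helmet example, OR 0.31 treated as an approximate RR:
large_effect_upgrade(0.31)  # -> 1
```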

14.3 Describing the assessment of the certainty of a body of evidence using the GRADE framework

Review authors should report the grading of the certainty of evidence in the Results section for each outcome for which this has been performed, providing the rationale for downgrading or upgrading the evidence, and referring to the ‘Summary of findings’ table where applicable.

Table 14.3.a provides a framework and examples for how review authors can justify their judgements about the certainty of evidence in each domain. These justifications should also be included in explanatory notes to the ‘Summary of Findings’ table (see Section 14.1.6.10 ).

Chapter 15, Section 15.6 , describes in more detail how the overall GRADE assessment across all domains can be used to draw conclusions about the effects of the intervention, as well as providing implications for future research.

Table 14.3.a Framework for describing the certainty of evidence and justifying downgrading or upgrading

Describe the risk of bias based on the criteria used in the risk-of-bias table.

Downgraded because, of 10 randomized trials, five did not blind patients and caretakers.

Describe the degree of inconsistency by outcome using one or more indicators (e.g. I² and P value, confidence interval overlap, difference in point estimates, between-study variance).

Not downgraded because the proportion of the variability in effect estimates that is due to true heterogeneity rather than chance is not important (I = 0%).

Describe if the majority of studies address the PICO – were they similar to the question posed?

Downgraded because the included studies were restricted to patients with advanced cancer.

Describe the number of events, and width of the confidence intervals.

The confidence intervals for the effect on mortality are consistent with both an appreciable benefit and appreciable harm and we lowered the certainty.

Describe the possible degree of publication bias.

1. The funnel plot of 14 randomized trials indicated that there were several small studies that showed a small positive effect, but small studies that showed no effect or harm may have been unpublished. The certainty of the evidence was lowered.

2. There are only three small positive studies, it appears that studies showing no effect or harm have not been published. There also is for-profit interest in the intervention. The certainty of the evidence was lowered.

Describe the magnitude of the effect and the widths of the associate confidence intervals.

Upgraded because the RR is large: 0.3 (95% CI 0.2 to 0.4), with a sufficient number of events to be precise.

 

The studies show a clear relation with increases in the outcome of an outcome (e.g. lung cancer) with higher exposure levels.

Upgraded because the dose-response relation shows a relative risk increase of 10% in never smokers, 15% in smokers of 10 pack years and 20% in smokers of 15 pack years.

Describe which opposing plausible biases and confounders may have not been considered.

The estimate of effect is not controlled for the following possible confounders: smoking, degree of education, but the distribution of these factors in the studies is likely to lead to an under-estimate of the true effect. The certainty of the evidence was increased.
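The inconsistency indicator mentioned above, I², can be derived from Cochran's Q statistic. The sketch below is an illustration of the standard formula I² = max(0, (Q − df)/Q) × 100%, not part of the framework itself; the function name is invented for this example.

```python
def i_squared(q: float, k: int) -> float:
    """I^2: percentage of variability in effect estimates due to true
    heterogeneity rather than chance.

    q: Cochran's Q statistic from the meta-analysis.
    k: number of studies (degrees of freedom = k - 1).
    """
    df = k - 1
    if q <= df:
        # Observed variation is no more than expected by chance.
        return 0.0
    return 100.0 * (q - df) / q

# When Q does not exceed its degrees of freedom, I^2 = 0% (no important
# heterogeneity, as in the "not downgraded" example above).
print(i_squared(9.0, 10))              # -> 0.0
print(round(i_squared(20.0, 11), 1))   # -> 50.0
```

An I² of 0% corresponds to the example justification above for not downgrading for inconsistency.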

14.4 Chapter information

Authors: Holger J Schünemann, Julian PT Higgins, Gunn E Vist, Paul Glasziou, Elie A Akl, Nicole Skoetz, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group (formerly Applicability and Recommendations Methods Group) and the Cochrane Statistical Methods Group

Acknowledgements: Andrew D Oxman contributed to earlier versions. Professor Penny Hawe contributed to the text on adverse effects in earlier versions. Jon Deeks provided helpful contributions on an earlier version of this chapter. For details of previous authors and editors of the Handbook , please refer to the Preface.

Funding: This work was in part supported by funding from the Michael G DeGroote Cochrane Canada Centre and the Ontario Ministry of Health.

14.5 References

Alonso-Coello P, Zhou Q, Martinez-Zapata MJ, Mills E, Heels-Ansdell D, Johanson JF, Guyatt G. Meta-analysis of flavonoids for the treatment of haemorrhoids. British Journal of Surgery 2006; 93 : 909-920.

Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, Guyatt GH, Harbour RT, Haugh MC, Henry D, Hill S, Jaeschke R, Leng G, Liberati A, Magrini N, Mason J, Middleton P, Mrukowicz J, O'Connell D, Oxman AD, Phillips B, Schünemann HJ, Edejer TT, Varonen H, Vist GE, Williams JW, Jr., Zaza S. Grading quality of evidence and strength of recommendations. BMJ 2004; 328 : 1490.

Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, Vist GE, Falck-Ytter Y, Meerpohl J, Norris S, Guyatt GH. GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology 2011; 64 : 401-406.

Bhandari M, Busse JW, Jackowski D, Montori VM, Schünemann H, Sprague S, Mears D, Schemitsch EH, Heels-Ansdell D, Devereaux PJ. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. Canadian Medical Association Journal 2004; 170 : 477-480.

Brophy JM, Joseph L, Rouleau JL. Beta-blockers in congestive heart failure. A Bayesian meta-analysis. Annals of Internal Medicine 2001; 134 : 550-560.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P, Meerpohl JJ, Vandvik PO, Brozek JL, Akl EA, Bossuyt P, Churchill R, Glenton C, Rosenbaum S, Tugwell P, Welch V, Garner P, Guyatt G, Schünemann HJ. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary of findings tables with a new format. Journal of Clinical Epidemiology 2016; 74 : 7-18.

Deeks JJ, Altman DG. Effect measures for meta-analysis of trials with binary outcomes. In: Egger M, Davey Smith G, Altman DG, editors. Systematic Reviews in Health Care: Meta-analysis in Context . 2nd ed. London (UK): BMJ Publication Group; 2001. p. 313-335.

Devereaux PJ, Choi PT, Lacchetti C, Weaver B, Schünemann HJ, Haines T, Lavis JN, Grant BJ, Haslam DR, Bhandari M, Sullivan T, Cook DJ, Walter SD, Meade M, Khan H, Bhatnagar N, Guyatt GH. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. Canadian Medical Association Journal 2002; 166 : 1399-1406.

Engels EA, Schmid CH, Terrin N, Olkin I, Lau J. Heterogeneity and statistical significance in meta-analysis: an empirical study of 125 meta-analyses. Statistics in Medicine 2000; 19 : 1707-1728.

Furukawa TA, Guyatt GH, Griffith LE. Can we individualize the 'number needed to treat'? An empirical study of summary effect measures in meta-analyses. International Journal of Epidemiology 2002; 31 : 72-76.

Gibson JN, Waddell G. Surgical interventions for lumbar disc prolapse: updated Cochrane Review. Spine 2007; 32 : 1735-1747.

Guyatt G, Oxman A, Vist G, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann H. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336 : 3.

Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, DeBeer H, Jaeschke R, Rind D, Meerpohl J, Dahm P, Schünemann HJ. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology 2011a; 64 : 383-394.

Guyatt GH, Oxman AD, Kunz R, Brozek J, Alonso-Coello P, Rind D, Devereaux PJ, Montori VM, Freyschuss B, Vist G, Jaeschke R, Williams JW, Jr., Murad MH, Sinclair D, Falck-Ytter Y, Meerpohl J, Whittington C, Thorlund K, Andrews J, Schünemann HJ. GRADE guidelines 6. Rating the quality of evidence--imprecision. Journal of Clinical Epidemiology 2011b; 64 : 1283-1293.

Iorio A, Spencer FA, Falavigna M, Alba C, Lang E, Burnand B, McGinn T, Hayden J, Williams K, Shea B, Wolff R, Kujpers T, Perel P, Vandvik PO, Glasziou P, Schünemann H, Guyatt G. Use of GRADE for assessment of evidence about prognosis: rating confidence in estimates of event rates in broad categories of patients. BMJ 2015; 350 : h870.

Langendam M, Carrasco-Labra A, Santesso N, Mustafa RA, Brignardello-Petersen R, Ventresca M, Heus P, Lasserson T, Moustgaard R, Brozek J, Schünemann HJ. Improving GRADE evidence tables part 2: a systematic survey of explanatory notes shows more guidance is needed. Journal of Clinical Epidemiology 2016; 74 : 19-27.

Levine MN, Raskob G, Landefeld S, Kearon C, Schulman S. Hemorrhagic complications of anticoagulant treatment: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy. Chest 2004; 126 : 287S-310S.

Orrapin S, Rerkasem K. Carotid endarterectomy for symptomatic carotid stenosis. Cochrane Database of Systematic Reviews 2017; 6 : CD001081.

Salpeter S, Greyber E, Pasternak G, Salpeter E. Risk of fatal and nonfatal lactic acidosis with metformin use in type 2 diabetes mellitus. Cochrane Database of Systematic Reviews 2007; 4 : CD002967.

Santesso N, Carrasco-Labra A, Langendam M, Brignardello-Petersen R, Mustafa RA, Heus P, Lasserson T, Opiyo N, Kunnamo I, Sinclair D, Garner P, Treweek S, Tovey D, Akl EA, Tugwell P, Brozek JL, Guyatt G, Schünemann HJ. Improving GRADE evidence tables part 3: detailed guidance for explanatory footnotes supports creating and understanding GRADE certainty in the evidence judgments. Journal of Clinical Epidemiology 2016; 74 : 28-39.

Schünemann HJ, Best D, Vist G, Oxman AD, Group GW. Letters, numbers, symbols and words: how to communicate grades of evidence and recommendations. Canadian Medical Association Journal 2003; 169 : 677-680.

Schünemann HJ, Jaeschke R, Cook DJ, Bria WF, El-Solh AA, Ernst A, Fahy BF, Gould MK, Horan KL, Krishnan JA, Manthous CA, Maurer JR, McNicholas WT, Oxman AD, Rubenfeld G, Turino GM, Guyatt G. An official ATS statement: grading the quality of evidence and strength of recommendations in ATS guidelines and recommendations. American Journal of Respiratory and Critical Care Medicine 2006; 174 : 605-614.

Schünemann HJ, Oxman AD, Brozek J, Glasziou P, Jaeschke R, Vist GE, Williams JW, Jr., Kunz R, Craig J, Montori VM, Bossuyt P, Guyatt GH. Grading quality of evidence and strength of recommendations for diagnostic tests and strategies. BMJ 2008a; 336 : 1106-1110.

Schünemann HJ, Oxman AD, Brozek J, Glasziou P, Bossuyt P, Chang S, Muti P, Jaeschke R, Guyatt GH. GRADE: assessing the quality of evidence for diagnostic recommendations. ACP Journal Club 2008b; 149 : 2.

Schünemann HJ, Mustafa R, Brozek J. [Diagnostic accuracy and linked evidence--testing the chain]. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen 2012; 106 : 153-160.

Schünemann HJ, Tugwell P, Reeves BC, Akl EA, Santesso N, Spencer FA, Shea B, Wells G, Helfand M. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Research Synthesis Methods 2013; 4 : 49-62.

Schünemann HJ. Interpreting GRADE's levels of certainty or quality of the evidence: GRADE for statisticians, considering review information size or less emphasis on imprecision? Journal of Clinical Epidemiology 2016; 75 : 6-15.

Schünemann HJ, Cuello C, Akl EA, Mustafa RA, Meerpohl JJ, Thayer K, Morgan RL, Gartlehner G, Kunz R, Katikireddi SV, Sterne J, Higgins JPT, Guyatt G, Group GW. GRADE guidelines: 18. How ROBINS-I and other tools to assess risk of bias in nonrandomized studies should be used to rate the certainty of a body of evidence. Journal of Clinical Epidemiology 2018.

Spencer-Bonilla G, Quinones AR, Montori VM, International Minimally Disruptive Medicine W. Assessing the Burden of Treatment. Journal of General Internal Medicine 2017; 32 : 1141-1145.

Spencer FA, Iorio A, You J, Murad MH, Schünemann HJ, Vandvik PO, Crowther MA, Pottie K, Lang ES, Meerpohl JJ, Falck-Ytter Y, Alonso-Coello P, Guyatt GH. Uncertainties in baseline risk estimates and confidence in treatment effects. BMJ 2012; 345 : e7401.

Sterne JAC, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, Henry D, Altman DG, Ansari MT, Boutron I, Carpenter JR, Chan AW, Churchill R, Deeks JJ, Hróbjartsson A, Kirkham J, Jüni P, Loke YK, Pigott TD, Ramsay CR, Regidor D, Rothstein HR, Sandhu L, Santaguida PL, Schünemann HJ, Shea B, Shrier I, Tugwell P, Turner L, Valentine JC, Waddington H, Waters E, Wells GA, Whiting PF, Higgins JPT. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016; 355 : i4919.

Thompson DC, Rivara FP, Thompson R. Helmets for preventing head and facial injuries in bicyclists. Cochrane Database of Systematic Reviews 2000; 2 : CD001855.

Tierney JF, Stewart LA, Ghersi D, Burdett S, Sydes MR. Practical methods for incorporating summary time-to-event data into meta-analysis. Trials 2007; 8 .

van Dalen EC, Tierney JF, Kremer LCM. Tips and tricks for understanding and using SR results. No. 7: time‐to‐event data. Evidence-Based Child Health 2007; 2 : 1089-1090.


Advancing Middle Grade Research on Critical Pedagogy: Research Synthesis


1. Introduction

2. Materials and Methods

2.1. Scope of the Literature Review

  • How do teachers across content areas use and promote critical and culturally responsive teaching practices?
  • What strategies and classroom practices do teachers implement that examine and challenge power relations and center culturally and linguistically diverse students?
  • What is the impact of classroom implementation of critical pedagogies on young adolescent learning?
  • How do educators and researchers expand the concept of critical pedagogies to include antiracist and anticolonial teaching practices for action?

2.2. Study Selection

  • Critical pedagogies AND middle school or junior high or 6th, 7th, and 8th grades, AND teaching strategies or teaching methods or teaching approaches or classroom techniques;
  • Antiracist teaching AND middle school or junior high or 6th, 7th, and 8th grades;
  • Antiracism or anti-racism or anti-racist or antiracist AND middle school or junior high or 6th, 7th, and 8th grades AND education or school or learning or teaching or classroom or education system (later added AND education to further narrow results);
  • Culturally responsive teaching or culturally relevant pedagogy or culturally responsive instruction or culturally inclusive AND middle school or junior high or 6th, 7th, and 8th grades or young adolescents;
  • Anticolonial or anti-colonial or decolonial AND middle school or junior high or 6th, 7th, and 8th grades or young adolescents AND education or school or learning or teaching or classroom or education system;
  • Critical literacy or social justice AND teaching strategies or teaching methods or teaching approaches or classroom techniques AND middle school or junior high or 6th or 7th or 8th.
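Search strings like those above can be composed programmatically from OR-groups of synonyms joined by AND. The sketch below is illustrative only (the review's actual searches were run in bibliographic databases, and the helper names are invented for this example):

```python
# Term groups for one of the search strings listed above.
pedagogy = ["critical pedagogies"]
level = ["middle school", "junior high", "6th, 7th, and 8th grades"]
practice = ["teaching strategies", "teaching methods",
            "teaching approaches", "classroom techniques"]

def or_group(terms):
    """Join synonyms with OR inside parentheses, quoting each phrase."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND the groups together to form the full Boolean query.
query = " AND ".join(or_group(g) for g in (pedagogy, level, practice))
print(query)
```

Adding or removing a group (e.g. a further `AND education` clause, as the authors did to narrow results) only changes the list passed to the final join.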

3.1. Diverse Instructional Practices

3.2. Culturally Responsive Pedagogies

3.3. Decolonial and Antiracist Strategies

4. Discussion

5. Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

  • Nanjundaswamy, C.; Baskaran, S.; Leela, M.H. Digital Pedagogy for Sustainable Learning. Shanlax Int. J. Educ. 2021 , 9 , 80. [ Google Scholar ] [ CrossRef ]
  • Luke, A. Critical Literacy, Schooling, and Social Justice ; Routledge: New York, NY, USA, 2018. [ Google Scholar ]
  • Collins, P.H. Intersectionality as Critical Social Theory ; Duke University Press: New York, NY, USA, 2019; p. 41. [ Google Scholar ]
  • Middle Level Teacher Preparation Standards with Rubrics and Supporting Explanations ; Association for Middle Level Education: New York, NY, USA, 2012. Available online: http://www.amle.org/AboutAMLE/ProfessionalPreparation/AMLEStandards.aspx (accessed on 1 December 2023).
  • Grant, M.J.; Booth, A. A Typology of Reviews; An Analysis of 14 Review Types and Associated Methodologies. Health Inf. Libr. J. 2009 , 26 , 91–108. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Aksakalli, A. The Effects of Science Teaching Based on Critical Pedagogy Principles on the Classroom Climate. Sci. Educ. Int. 2018 , 29 , 250–260. [ Google Scholar ] [ CrossRef ]
  • Arphattananon, T. Breaking the Mold of Liberal Multicultural Education in Thailand through Social Studies Lessons. Clear. House J. Educ. Strateg. Issues Ideas 2021 , 94 , 53–62. [ Google Scholar ] [ CrossRef ]
  • Ruppert, N.; Coleman, B.; Pinter, H.; Johnson, D.; Rector, M.; Diaz, C. Culturally Sustaining Practices for Middle Level Mathematics Teachers. Educ. Sci. 2022 , 12 , 910. [ Google Scholar ] [ CrossRef ]
  • Walker, A. Transformative Potential of Culturally Responsive Teaching: Examining Preservice Teachers’ Collaboration Practices Centering Refugee Youth. Educ. Sci. 2023 , 13 , 621. [ Google Scholar ] [ CrossRef ]
  • Turner, K.C.N.; Hayes, N.V.; Way, K. Critical Multimodal Hip Hop Production: A Social Justice Approach to African American Language and Literacy Practices. Equity Excell. Educ. 2013 , 46 , 342–354. [ Google Scholar ] [ CrossRef ]
  • Coppola, R.; Woodard, R.; Vaughan, A. And the Students Shall Lead Us: Putting Culturally Sustaining Pedagogy in Conversation with Universal Design for Learning in a Middle-School Spoken Word Poetry Unit. Lit. Res. Theory Method Pract. 2019 , 68 , 226–240. [ Google Scholar ] [ CrossRef ]
  • Arada, K.; Sanchez, A.; Bell, P. Youth as Pattern Makers for Racial Justice: How Speculative Design Pedagogy in Science Can Promote Restorative Futures through Radical Care Practices. J. Learn. Sci. 2023 , 32 , 76–109. [ Google Scholar ] [ CrossRef ]
  • Lensmire, A. Children Teaching Future Teachers: A Civic Literacy Project on the Border Wall during the Trump Regime. J. Curric. Pedagog. 2023 , 20 , 250–272. [ Google Scholar ] [ CrossRef ]
  • Andrews, P.G.; Moulton, M.J.; Hughes, H.E. Integrating Social Justice into Middle Grades Education. Middle Sch. J. 2018 , 49 , 4–15. [ Google Scholar ] [ CrossRef ]
  • Gay, G. Culturally Responsive Teaching: Theory, Research, and Practice , 3rd ed.; Teachers College Press: New York, NY, USA, 2018. [ Google Scholar ]
  • Aguirre, J.M.; Zavala, M.D.R. Making Culturally Responsive Mathematics Teaching Explicit: A Lesson Analysis Tool. Pedagog. Int. J. 2013 , 8 , 163–190. [ Google Scholar ] [ CrossRef ]
  • Gunn, A.A. Focus on Middle School: Honoring My Students’ Names! Using Web 2.0 Tools to Create Culturally Responsive Literacy Classrooms. Child. Educ. 2014 , 90 , 150–153. [ Google Scholar ] [ CrossRef ]
  • Casler-Failing, S.L.; Stevenson, A.D.; King Miller, B.A. Integrating Mathematics, Science, and Literacy into a Culturally Responsive STEM After-School Program. Curr. Issues Middle Level Educ. 2021 , 26 , 3. [ Google Scholar ] [ CrossRef ]
  • Balint-Langel, K.; Woods-Groves, S.; Rodgers, D.B.; Rila, A.; Riden, B.S. Using a Computer-Based Strategy to Teach Self-Advocacy Skills to Middle School Students with Disabilities. J. Spec. Educ. Technol. 2019 , 35 , 249–261. [ Google Scholar ] [ CrossRef ]
  • O’Keefe, S.B.; Medina, C.M. Nine Strategies for Helping Middle School Students Weather the Perfect Storm of Diversity, Disability and Adolescence. Am. Second. Educ. 2016 , 44 , 72–87. [ Google Scholar ]
  • Wexler, J. Improving Instruction in Co-Taught Classrooms to Support Reading Comprehension. Interv. Sch. Clin. 2021 , 56 , 195–199. [ Google Scholar ] [ CrossRef ]
  • Krawec, J.; Huang, J. Modifying a Research-Based Problem-Solving Intervention to Improve the Problem-Solving Performance of Fifth and Sixth Graders with and without Learning Disabilities. J. Learn. Disabil. 2017 , 50 , 468–480. [ Google Scholar ] [ CrossRef ]
  • Cuenca-Carlino, Y.; Freen-Green, S.; Stephenson, G.W.; Hauth, C. Self-Regulated Strategy Development Instruction for Teaching Multi-Step Equations to Middle School Students Struggling in Math. J. Spec. Educ. 2016 , 50 , 75–85. [ Google Scholar ] [ CrossRef ]
  • King-Sears, M.E.; Jenkins, M.C.; Brawand, A. Co-Teaching Perspectives from Middle School Algebra Co-Teachers and Their Students with and without Disabilities. Int. J. Incl. Educ. 2020 , 24 , 427–442. [ Google Scholar ] [ CrossRef ]
  • Swanson, E.; Stevens, E.A.; Wexler, J. Engaging Students with Disabilities in Text-Based Discussions: Guidance for General Education Social Studies Classrooms. TEACHING Except. Child. 2019 , 51 , 305–312. [ Google Scholar ] [ CrossRef ]
  • DeMink-Carthew, J.; Smith, K.; Burgess, K.; Leonard, S.; Yoon, B.; Andrews, G.; Nagle, J.; Bishop, P. Navigating Common Challenges: Guidance for Educators in Racial Justice Work. Middle Sch. J. 2023 , 54 , 25–36. [ Google Scholar ] [ CrossRef ]
  • Kavanagh, S.S.; Danielson, K.A. Practicing Justice, Justifying Practice: Toward Critical Practice Teacher Education. Am. Educ. Res. J. 2020 , 57 , 69–105. [ Google Scholar ] [ CrossRef ]
  • Hagerman, D.; Porath, S. The Possibilities of Teaching for, with, and about Social Justice in a Public Middle School. Middle Sch. J. 2018 , 49 , 26–34. [ Google Scholar ] [ CrossRef ]
  • Hughes, H.E.; Ranschaert, R.; Benson, K.L. Engaged Pedagogies in the Middle Grades: A Case Study of Justice-Oriented Teachers in COVID Times. Middle Sch. J. 2023 , 9 , 4. [ Google Scholar ]
  • DeMink-Carthew, J.; Gonell, E. Lessons Learned From Teaching Social Justice Education in Sixth Grade. Middle Sch. J. 2022 , 53 , 5–14. [ Google Scholar ] [ CrossRef ]
  • Vachon, K.J. The Racialization of Self and Others: An Exploration of Criticality in Pre-Service Teacher Self-Reflection. Issues Teach. Educ. 2022 , 31 , 35–56. [ Google Scholar ]
  • Brewer, A. Critical Global Literacies: Expanding Our Critical Global View from the Classroom. Engl. J. 2019 , 108 , 100–102. [ Google Scholar ]
  • Varghese, M.; Daniels, J.R.; Park, C.C. Structuring Disruption within University-Based Teacher Education Programs: Possibilities and Challenges of Race-Based Caucuses. Teach. Coll. Rec. 2019 , 121 , 1–34. [ Google Scholar ] [ CrossRef ]
  • Kavanagh, S.S. Practicing Social Justice: Towards a Practice-Based Approach to Learning to Teach for Social Justice. In Reflective Theories in Teacher Education Practice: Process, Impact, and Enactment ; Brandenburg, R., Glasswell, K., Jones, M., Ryan, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 161–175. [ Google Scholar ]
  • Brinegar, K.; Caskey, M.M. Developmental Characteristics of Young Adolescents: Research Summary ; Association for Middle Level Education: Columbus, OH, USA, 2022. Available online: https://www.amle.org/developmental-characteristics-of-young-adolescents/ (accessed on 1 December 2023).
  • Dominguez, M. Cultivating Epistemic Disobedience: Exploring the Possibilities of a Decolonial Practice-Based Teacher Education. J. Teach. Educ. 2021 , 72 , 551–563. [ Google Scholar ] [ CrossRef ]
  • Ladson-Billings, G. Toward a Theory of Culturally Relevant Pedagogy. Am. Educ. Res. J. 1995 , 32 , 465–491. [ Google Scholar ] [ CrossRef ]
  • Paris, D. Culturally Sustaining Pedagogy: A Needed Change in Stance, Terminology, and Practice. Educ. Res. 2012 , 41 , 93–97. [ Google Scholar ] [ CrossRef ]
Sub-Theme: References Included in This Literature Review

Diverse Instructional Approaches: Aksakalli (2018); Arphattananon (2021); Ruppert et al. (2022); Turner et al. (2013); Coppola et al. (2019); Arada et al. (2023); Lensmire (2023); Andrews et al. (2018)

Culturally Responsive Pedagogies: Arphattananon (2021); Ruppert et al. (2022); Coppola et al. (2019); Aguirre and Zavala (2013); Gunn (2014); Casler-Failing et al. (2021); Balint-Langel et al. (2019); O'Keefe and Medina (2016); Wexler (2021); Krawec and Huang (2017); Cuenca-Carlino et al. (2016); King-Sears et al. (2020); Swanson et al. (2019)

Decolonial and Antiracist Strategies: DeMink-Carthew et al. (2023); Kavanagh and Danielson (2020); Hagerman and Porath (2018); Hughes et al. (2023); Vachon (2022); Brewer (2019)

Share and Cite

Walker, A.; Yoon, B.; Pankowski, J. Advancing Middle Grade Research on Critical Pedagogy: Research Synthesis. Educ. Sci. 2024 , 14 , 997. https://doi.org/10.3390/educsci14090997



New findings on the incidence and management of CNS adverse reactions in ALK-positive NSCLC with lorlatinib treatment

  • Case Report
  • Open access
  • Published: 13 September 2024
  • Volume 15, article number 444 (2024)


  • Fanfan Chu
  • Wenxi Zhang
  • Hong Hu

To explore the presentation and management of CNS adverse reactions in patients with ALK-positive NSCLC treated with lorlatinib, this study combines a retrospective case report from Sir Run Run Shaw Hospital of a lorlatinib-treated patient with CNS adverse reactions and a systematic literature review of similar cases published through January 2023. The case report describes a 74-year-old male who developed Grade III CNS adverse reactions 25 days after starting lorlatinib; the reactions were reversible with dose modification and pharmacotherapy. The review indicated a 19.39% occurrence rate of such reactions, with a 17% improvement rate after dose adjustment. CNS adverse reactions frequently occur in ALK-positive NSCLC patients on lorlatinib, yet they are reversible with appropriate management. Research should continue to optimize treatment protocols to reduce the frequency of these reactions.

This study provides the first detailed report in China on CNS adverse reactions induced by lorlatinib and its management.

It emphasizes the scientific issues and objectives addressed in resolving CNS adverse reactions during lorlatinib treatment.

The study reveals that CNS adverse reactions caused by lorlatinib can be effectively managed through dose adjustment.

It fills the data gap on CNS adverse reactions to lorlatinib in the Asian population.

The study highlights the importance and scientific value of optimizing lorlatinib treatment protocols to improve patients' quality of life.


1 Introduction

Lung cancer is one of the most common cancers globally and the leading cause of cancer-related mortality [ 1 ]. According to global cancer statistics, lung cancer ranks second in incidence and first in mortality worldwide [ 2 , 3 ]. Non-small cell lung cancer (NSCLC) accounts for 80–85% of all lung cancer cases, and targetable genetic mutations are found in over 60% of advanced cases [ 4 , 5 ]. The anaplastic lymphoma kinase (ALK) fusion gene, the second most common tumor-driving gene in NSCLC, affects approximately 5–8% of NSCLC patients [ 4 , 6 ]. NSCLC encompasses various histological types, each potentially responding differently to treatment [ 7 , 8 ]. The introduction of first- and second-generation ALK tyrosine kinase inhibitors (TKIs) has revolutionized the treatment of ALK-positive NSCLC [ 8 , 9 , 10 ]. However, drug resistance often emerges after treatment, with the central nervous system (CNS) being a common site of progression [ 11 ]. Lorlatinib, a novel third-generation ALK TKI, can penetrate the blood-brain barrier [ 12 ]. Regardless of the type of EML4-ALK variant or the presence of ALK kinase mutations, the lorlatinib group showed superior overall response (OR), duration of remission, and progression-free survival (PFS) compared with the crizotinib group [ 13 , 14 , 15 ]. However, adverse drug reactions (ADRs) or adverse events (AEs) induced by lorlatinib may necessitate treatment interruption or discontinuation in some patients [ 14 , 16 , 17 ].

Central nervous system (CNS) reactions are among the common adverse drug reactions (ADRs) or adverse events (AEs) during treatment with lorlatinib [ 16 ]. Patients receiving lorlatinib may experience a range of CNS reactions, including seizures, psychiatric effects, and changes in cognitive function (such as consciousness, memory, spatial and temporal orientation, and attention), mood (including suicidal ideation), speech, mental state, and sleep [ 16 , 17 , 18 ]. Methods for assessing the mental state of patients include the Symptom Checklist 90 (SCL-90) [ 19 ], the Beck Depression Inventory (BDI) [ 20 ], the Mini-Cog [ 21 ], and open and closed questionnaires [ 22 ]. CNS reactions are graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events v5.0 (NCI CTCAE v5.0) [ 23 ] (Table 1). When CNS reactions occur, the lorlatinib dosage can be adjusted based on severity. Studies indicate that dosage adjustment effectively manages CNS AEs without compromising treatment outcomes [ 24 ]. If CNS ADRs significantly affect a patient's daily life, a reduced lorlatinib dosage is necessary [ 25 ]. Although CNS adverse reactions can be managed to some extent through dosage adjustment or combination with other medications, current research and guidelines remain insufficient for effectively preventing and controlling these reactions. Addressing this issue requires a deeper understanding of, and further research into, CNS adverse reactions induced by lorlatinib.
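The severity-based dose adjustment described above can be pictured as stepping through a fixed ladder of dose levels. The sketch below is purely illustrative and is NOT clinical guidance: the dose levels and the step-down rule are assumptions made for this example, not rules from the cited studies; real management follows the prescribing information and clinical judgement.

```python
# Illustrative only -- NOT clinical guidance. Dose levels and the
# step-down rule are assumptions for this sketch.
DOSE_LEVELS_MG = [100, 75, 50]  # assumed once-daily dose ladder

def next_dose(current_mg: int, cns_ae_grade: int):
    """Return the next once-daily dose after a CNS adverse event.

    Grade <= 1: continue the current dose.
    Grade >= 2: step down one dose level.
    No lower level available: return None (discontinue).
    """
    if cns_ae_grade <= 1:
        return current_mg
    idx = DOSE_LEVELS_MG.index(current_mg)
    if idx + 1 < len(DOSE_LEVELS_MG):
        return DOSE_LEVELS_MG[idx + 1]
    return None  # already at the lowest level

print(next_dose(100, 3))  # -> 75
print(next_dose(50, 2))   # -> None
```

The point of the sketch is only that severity, not the mere presence, of a CNS reaction drives the dose decision, which matches the grading-based approach in the text.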

This article reviews the epidemiology, pathogenesis, diagnosis, treatment, and management strategies of CNS reactions as ADRs to lorlatinib treatment. It aims to give clinicians an in-depth understanding of CNS reactions so they can better manage ALK-positive NSCLC patients receiving lorlatinib. Through this study, we hope to optimize lorlatinib's usage protocols, reduce adverse reactions, and thus improve patient treatment outcomes and quality of life. Additionally, the results of this study will offer valuable reference information for future drug development, contributing to the creation of safer and more effective ALK inhibitors.

2 Case presentation

On December 13, 2022, a 74-year-old male patient who had been diagnosed with lung adenocarcinoma and was receiving targeted treatment with lorlatinib was admitted to the hospital with a mental disorder manifesting as abnormal speech and behavior. The patient was first diagnosed with lung adenocarcinoma (LUAD) in 2015. Chest CT showed a left upper lung mass of uncertain nature, with chronic inflammation considered possible and a tumor to be ruled out. On June 24, 2015, an EBUS + TBNA examination revealed a very small number of atypical epithelial cells, indicating a high possibility of adenocarcinoma. To further clarify the diagnosis, a thoracoscopic left chest exploration, left pleural biopsy, and pleurodesis were performed. During the operation, a small amount of adhesion was observed in the left chest cavity, without pleural effusion. A hard mass about 4.5 cm in diameter was found in the lingular segment of the left upper lobe near the hilum, involving the visceral pleura and invading the inferior pulmonary vein and part of the pericardium. Multiple enlarged lymph nodes were present in the interlobar region, hilum, and mediastinum. A chest wall nodule was resected for examination; pathological findings, combined with the clinical picture, established a diagnosis of left upper lung adenocarcinoma with chest wall metastatic adenocarcinoma. The disease stage was cT3N2M1c. Based on immunohistochemical results of ALK ( +), Her-2 (−), and ROS1 (−), and gene sequencing showing wild-type EGFR, targeted therapy with oral crizotinib was initiated (Fig.  1 ).

figure 1

Timeline of patient diagnosis and treatment

Chest CT scans were performed regularly. In April 2018, an enlarged mass was observed in the lingular segment of the left upper lung. After five cycles of the PC + B regimen (carboplatin AUC = 5 on day 1, paclitaxel 175 mg/m² on day 1, and bevacizumab 15 mg/kg on day 1), the overall efficacy evaluation reached partial response (PR). During the treatment, the patient developed prominent bilateral mammary glands, confirmed to be gynecomastia and suspected to be a side effect of crizotinib or other treatment drugs. Because chemotherapy side effects caused severe anemia, the patient switched to maintenance therapy with crizotinib. A chest CT scan on April 2, 2020, showed an enlarged mass in the lingular segment of the left upper lung, and the patient began oral ceritinib treatment. A later CT scan indicated an increase in pulmonary lesions, so ceritinib was discontinued on November 18, 2022, and lorlatinib 100 mg/day was initiated. The patient was fully informed of all these procedures. Twenty-five days into lorlatinib treatment, the patient exhibited atypical behaviors, including hallucinations, sleep disturbances, and agitation. Following an intense altercation with his family on December 15, 2022, the patient was admitted to the psychiatric department with police assistance. A multidisciplinary assessment, including psychiatry and medical oncology, diagnosed Grade III lorlatinib-induced CNS adverse reactions. Comprehensive evaluations of the patient's cardiovascular, abdominal, reproductive, urinary, skeletal, muscular, and integumentary systems revealed no abnormalities, and pain assessment indicated no discomfort. On admission, the nursing staff conducted a series of assessments, finding the patient in a state of moderate consciousness impairment according to the Glasgow Coma Scale (GCS).
The Braden Scale, Morse Fall Scale (MFS), Nutritional Risk Screening 2002 (NRS2002), and Barthel Index indicated the patient was at high risk and required close monitoring. Nursing staff administered 3 L/min of oxygen via a nasal cannula and monitored vital signs hourly, including pulse, respiration, blood pressure, heart rate, and oxygen saturation. Lorlatinib was initially discontinued to manage the adverse reactions. The patient was then administered 5 mg of haloperidol for sedation and 0.05 g of quetiapine to alleviate abnormal emotions and cognitive impairments. Fortunately, he regained consciousness and responded rapidly and effectively to the management of the adverse reactions. On December 17, 2022, the patient was re-administered lorlatinib at 100 mg/day. After nearly 2 hours, adverse reactions re-emerged, including incoherent speech, hallucinations, and disorientation, with a GCS assessment indicating mild impairment of consciousness. Given the positive response to previous treatment, the same management was continued to help the patient return to his prior state of health. The patient's adverse reactions did not progress further, and he regained consciousness and emotional control the following day. After observing no signs of adverse events in the central nervous system, a decision was made to adjust the dosage of lorlatinib on December 20, 2022. Considering the patient's severe adverse events profile, a dosage of 50 mg per day was administered [ 24 ]. Subsequently, it was found that the patient did not experience any adverse reactions, indicating the reversibility of the reactions with dose reduction. Post-discharge, nursing staff conducted bi-weekly follow-up calls to monitor the patient's physical condition, consciousness, and emotional state. They assessed the patient's compliance with lorlatinib treatment and repeatedly emphasized the availability of consultation with the follow-up nurse or doctor for any concerns. 
A CT scan on April 25, 2023, showed tumor shrinkage, indicating the effectiveness of lorlatinib treatment (Fig.  2 ).

figure 2

Imaging comparison of space-occupying lesions before and after lorlatinib therapy

In summary, the patient exhibited Grade III central nervous system adverse reactions 25 days into lorlatinib treatment, characterized by incoherent speech, anger, hallucinations, and disorientation. Sedation with 5 mg of haloperidol and subsequent intervention with 0.05 g of quetiapine facilitated recovery. Reducing the lorlatinib dosage from 100 mg/day to 50 mg/day resulted in no further adverse drug reactions. Follow-up CT scans showed tumor shrinkage with no signs of brain metastasis, indicating effective disease control.

3 Discussion

3.1 The role and challenges of lorlatinib in the treatment of ALK-positive NSCLC

In treating ALK-positive NSCLC, lorlatinib has emerged as a groundbreaking therapeutic option [ 26 ]. As a third-generation ALK inhibitor, lorlatinib was developed to overcome resistance issues associated with earlier generations of ALK inhibitors and to offer a more effective treatment for brain metastases [ 27 ]. Its unique ability to penetrate the blood–brain barrier has demonstrated unprecedented potential in treating intracranial lesions [ 28 ]. However, this capability also introduces new challenges related to CNS adverse reactions [ 29 ]. These adverse reactions range from mild symptoms such as headaches and fatigue to severe cognitive impairments and hallucinations, significantly impacting patients' quality of life [ 30 ]. By reviewing the latest research, this article aims to explore management strategies for CNS adverse reactions during lorlatinib treatment, intending to provide clinicians with a more comprehensive guide to therapy [ 31 ].

3.2 Epidemiology and clinical characteristics of CNS adverse reactions induced by lorlatinib

As of January 2023, a comprehensive search of published studies on lorlatinib was conducted, focusing on adverse drug reactions, AEs, and impacts on the CNS. The inclusion criteria for the studies were: (1) reports on ALK-positive NSCLC patients; (2) documentation of lorlatinib administered at any therapeutic duration and its standard dosage; (3) descriptions of adverse reactions or events attributed to lorlatinib.

After screening 64 records containing the keywords “lorlatinib,” “adverse events,” and “non-small cell lung cancer,” and excluding review articles and studies not addressing CNS adverse effects, ten studies specifically related to lorlatinib-induced CNS adverse reactions were included. These studies encompassed a total of 1450 participants. A summary of the CNS adverse reactions observed in patients treated with lorlatinib (100 mg/day) was compiled (Table  2 ). CNS adverse reactions were categorized into cognitive, emotional, speech, and hallucinatory types. Cognitive and emotional reactions were the most frequently reported CNS adverse reactions across all included lorlatinib studies [ 29 ]. The incidence rates of CNS adverse reactions were as follows [ 29 ]: cognitive reactions at 19.17% (278/1450), emotional reactions at 13.52% (196/1450), speech reactions at 2.48% (36/1450), and hallucinatory reactions at 0.97% (14/1450) [ 32 ]. Moreover, studies indicated that the incidence rates of these AEs were higher in non-Asian populations than in Asian populations, suggesting potential ethnic differences in susceptibility [ 25 , 32 ]. The proportion of CNS AEs at Grade III or higher was 2.97% (43/1450). These findings suggest that lorlatinib's adverse reactions are generally mild to moderate, with only a minority reaching Grade III to IV severity [ 33 ]. Unfortunately, most published clinical studies have only documented the occurrence and severity of adverse reactions at the standard lorlatinib dose of 100 mg without implementing measures to manage these reactions [ 34 ]. A post-hoc analysis of safety and efficacy in the CROWN Phase III trial [ 24 ] revealed that the median onset time for CNS AEs after lorlatinib treatment was 57 days (range 1–533), with a median duration of 182 days (range 2–751), and 33% of CNS AEs resolved without intervention.
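The pooled incidence figures above follow directly from the raw counts over the 1450 pooled participants; a minimal sketch reproducing the arithmetic (counts taken from the figures quoted in this section):

```python
# Recompute the pooled CNS adverse-reaction incidence rates from the
# raw counts quoted in the text (n events out of 1450 participants).
total = 1450

counts = {
    "cognitive": 278,
    "emotional": 196,
    "speech": 36,
    "hallucinatory": 14,
    "grade_III_plus": 43,
}

# Percentage of the pooled cohort, rounded to two decimal places.
rates = {k: round(100 * n / total, 2) for k, n in counts.items()}
print(rates)
# → cognitive 19.17, emotional 13.52, speech 2.48,
#   hallucinatory 0.97, grade_III_plus 2.97
```

Reproducing pooled percentages from counts like this is a quick consistency check when rates are aggregated across heterogeneous studies.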

3.3 Management strategies for central nervous system adverse reactions induced by lorlatinib

Lorlatinib, an effective ALK inhibitor for treating ALK-positive NSCLC, has introduced therapeutic breakthroughs and challenges related to CNS adverse reactions [ 35 ]. The early identification and timely intervention of CNS AEs is key to ensuring clinical treatment success. A comprehensive assessment of the patient's baseline characteristics and medication history before initiating treatment is essential [ 36 ]. Furthermore, enhancing communication between the medical team, patients, and their families is crucial for accurately reporting potential cognitive disorientation, emotional changes, hallucinations, or alterations in speech and sleep during treatment [ 37 ].

When CNS adverse reactions occur in patients receiving lorlatinib, healthcare professionals must promptly follow up on the patient's treatment response. Establishing effective communication channels ensures patients and their families receive the necessary support and guidance [ 38 ]. For instance, a case reported in the literature experienced hallucinations and restlessness on day 25 of treatment, which worsened due to delayed notification to the treatment team [ 37 ]. It underscores family members' significant role in monitoring changes in the patient's condition, where timely identification and reporting of CNS adverse reactions are critical to preventing further deterioration [ 39 ]. Management strategies for CNS adverse reactions induced by lorlatinib include dose adjustment, treatment interruption, or symptomatic treatment [ 40 ]. Dose adjustment deserves particular attention, as studies have shown that appropriately reducing the lorlatinib dose can effectively mitigate CNS adverse reactions without compromising treatment efficacy [ 41 ]. The CROWN study [ 24 ] provides empirical support, demonstrating that 17% of CNS AEs were reversed following dose adjustment, with no negative impact on the patients' progression-free survival (PFS) or treatment outcomes.

During hospitalization, the nursing team should conduct a comprehensive assessment of the patient, including mental status, self-care capabilities, nutritional status, and potential risk for pressure ulcers, to determine the appropriate level of care [ 42 ]. Personalized care plans, such as implementing safety measures, fall prevention, nutritional support, and sleep interventions, are crucial for alleviating CNS adverse reactions caused by lorlatinib [ 43 ].

3.4 Optimizing the management of CNS adverse reactions in lorlatinib treatment

Lorlatinib offers new therapeutic hope for ALK-positive NSCLC patients, especially those with brain metastases [ 24 ]. However, CNS AEs are common side effects during lorlatinib treatment, negatively affecting patients’ quality of life [ 44 ]. Notably, the incidence rate of CNS AEs in the Chinese population is significantly lower than in the general population, at 6.4% compared to 35% [ 24 , 45 ]. Additionally, the cognitive, emotional, and speech impact rates in the Chinese population were reported as 2.8%, 1.8%, and 0.9%, respectively [ 45 ]. Our study found that the incidence rate of cognitive impairment was 19.39%, emotional disorders 13.76%, and speech difficulties 2.25%. The prevalence of cognitive and emotional side effects in non-Asian populations is twice that of Asian populations [ 25 ]. Patients typically experience various types of CNS adverse reactions. These findings help to better understand the variations in CNS AE incidence rates across different populations [ 44 ].

Our research emphasizes that the CNS AEs induced by lorlatinib are typically mild, and severe clinical symptoms are uncommon. The reported incidence rates of CNS AEs at Grade III or above vary, ranging from 0.00% to 6.73%. Importantly, the incidence rate of CNS AEs at Grade III or above in the Asian population is significantly lower, at only 0.93% [ 25 ]. AEs affecting the CNS are usually responsive to treatment, and timely intervention can often avert premature or indefinite medication discontinuation. Dose adjustment is an effective method for managing CNS AEs without compromising efficacy [ 46 ]. Solomon et al.'s research indicated no significant difference in the 12-month PFS rate between the standard-dose subgroup and the subgroup with dose reduction within the first 16 weeks (93% versus 89%). Likewise, there was no significant difference in the 12-month PFS rate between the subgroups above and below the average relative dose intensity (90% versus 93%). The case reported here employed a dose-reduction strategy from 100 mg/day to 50 mg/day, with no further adverse reactions observed, allowing continued benefit from lorlatinib treatment.

4 Limitations

Through case reports and a literature review, this study explored the clinical characteristics and management strategies of CNS adverse reactions caused by lorlatinib in treating ALK-positive NSCLC patients. However, the limitations of this study are noteworthy. Firstly, the anecdotal nature of case reports implies that the observed outcomes may not be easily generalizable to a broader patient population, and the absence of a control group makes it difficult to ascertain whether the outcomes were due to the intervention itself or to other confounding factors. Moreover, selection bias might skew results towards exceptional, rare, or notably successful treatment cases, overlooking more common or typical scenarios. Due to the design and nature of case reports, establishing causality also presents a challenge. Regarding the literature review, the quality of the study depends heavily on the selection criteria and quality of the chosen literature, while selection bias, heterogeneity in study design, delays in updating amid rapidly advancing research, and subjectivity in interpreting results could all affect its conclusions. Therefore, although this study provides valuable insights into understanding and managing CNS adverse reactions induced by lorlatinib, the limitations above should be considered when interpreting the findings. Future research should employ broader samples and more rigorous study designs to enhance the generalizability and accuracy of the findings.

Our study and case reports underscore the importance of effectively identifying and managing CNS AEs during lorlatinib treatment for ALK-positive NSCLC (Fig.  3 ). By leveraging the collaboration of a multidisciplinary team, educating patients and their families, and implementing personalized treatment and care strategies, it is possible to minimize CNS AEs during lorlatinib treatment, thereby improving patients’ quality of life and treatment outcomes.

figure 3

Mechanism of action, management of central nervous system adverse reactions, and long-term monitoring strategies for lorlatinib treatment in ALK-positive NSCLC

Data availability

The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.

Huang J, Ngai CH, Deng Y, et al. Cancer incidence and mortality in asian countries: a trend analysis. Cancer Control. 2022;29:10732748221095956. https://doi.org/10.1177/10732748221095955 .


Siegel RL, Miller KD, Wagle NS, Jemal A. Cancer statistics, 2023. CA Cancer J Clin. 2023;73(1):17–48. https://doi.org/10.3322/caac.21763 .


Huang J, Deng Y, Tin MS, et al. Distribution, risk factors, and temporal trends for lung cancer incidence and mortality: a global analysis. Chest. 2022;161(4):1101–11. https://doi.org/10.1016/j.chest.2021.12.655 .

Skoulidis F, Heymach JV. Co-occurring genomic alterations in non-small-cell lung cancer biology and therapy. Nat Rev Cancer. 2019;19(9):495–509. https://doi.org/10.1038/s41568-019-0179-8 .


Ferreira CG, Reis MX, Veloso GGV. Editorial: molecular genetic testing and emerging targeted therapies for non-small cell lung cancer. Front Oncol. 2023;13:1308525. https://doi.org/10.3389/fonc.2023.1308525 .

Yamamoto K, Toyokawa G, Kozuma Y, Shoji F, Yamazaki K, Takeo S. ALK-positive lung cancer in a patient with recurrent brain metastases and meningeal dissemination who achieved long-term survival of more than seven years with sequential treatment of five ALK-inhibitors: a case report. Thorac Cancer. 2021;12(11):1761–4. https://doi.org/10.1111/1759-7714.13962 .

Cooper AJ, Sequist LV, Lin JJ. Third-generation EGFR and ALK inhibitors: mechanisms of resistance and management [published correction appears in Nat Rev Clin Oncol. 2022;19(11):744. https://doi.org/10.1038/s41571-022-00680-8 ]. Nat Rev Clin Oncol. 2022;19(8):499–514. https://doi.org/10.1038/s41571-022-00639-9 .

Cameron LB, Hitchen N, Chandran E, et al. Targeted therapy for advanced anaplastic lymphoma kinase ( ALK )-rearranged non-small cell lung cancer. Cochrane Database Syst Rev. 2022;1(1): CD013453. https://doi.org/10.1002/14651858.CD013453.pub2 .


Solomon BJ, Bauer TM, Mok TSK, et al. Efficacy and safety of first-line lorlatinib versus crizotinib in patients with advanced, ALK-positive non-small-cell lung cancer: updated analysis of data from the phase 3, randomised, open-label CROWN study. Lancet Respir Med. 2023;11(4):354–66. https://doi.org/10.1016/S2213-2600(22)00437-4 .


Peng L, Zhu L, Sun Y, et al. Targeting ALK rearrangements in NSCLC: current state of the Art. Front Oncol. 2022;12:863461. https://doi.org/10.3389/fonc.2022.863461 .

Golding B, Luu A, Jones R, et al. The function and therapeutic targeting of anaplastic lymphoma kinase (ALK) in non-small cell lung cancer (NSCLC). Mol Cancer. 2018;17:52. https://doi.org/10.1186/s12943-018-0810-4 .

Zou HY, Friboulet L, Kodack DP, et al. PF-06463922, an ALK/ROS1 inhibitor, overcomes resistance to first and second generation ALK inhibitors in preclinical models. Cancer Cell. 2015;28(1):70–81. https://doi.org/10.1016/j.ccell.2015.05.010 .

Bearz A, Martini JF, Jassem J, et al. Efficacy of lorlatinib in treatment-naive patients with ALK-positive advanced NSCLC in relation to EML4::ALK variant type and ALK with or without tp53 mutations. J Thorac Oncol. 2023;18(11):1581–93. https://doi.org/10.1016/j.jtho.2023.07.023 .

Shaw AT, Bauer TM, de Marinis F, et al. First-line lorlatinib or crizotinib in advanced ALK-positive lung cancer. N Engl J Med. 2020;383(21):2018–29. https://doi.org/10.1056/NEJMoa2027187 .

Shaw AT, Solomon BJ, Besse B, et al. ALK resistance mutations and efficacy of lorlatinib in advanced anaplastic lymphoma kinase-positive non-small-cell lung cancer. J Clin Oncol. 2019;37(16):1370–9. https://doi.org/10.1200/JCO.18.02236 .

Bauer TM, Felip E, Solomon BJ, et al. Clinical management of adverse events associated with lorlatinib. Oncologist. 2019;24(8):1103–10. https://doi.org/10.1634/theoncologist.2018-0380 .

Reed M, Rosales AS, Chioda MD, Parker L, Devgan G, Kettle J. Consensus recommendations for management and counseling of adverse events associated with lorlatinib: a guide for healthcare practitioners. Adv Ther. 2020;37(6):3019–30. https://doi.org/10.1007/s12325-020-01365-3 .

Solomon BJ, Besse B, Bauer TM, et al. Lorlatinib in patients with ALK-positive non-small-cell lung cancer: results from a global phase 2 study [published correction appears in Lancet Oncol. 2019;20(1):e10. https://doi.org/10.1016/S1470-2045(18)30927-6 ]. Lancet Oncol. 2018;19(12):1654–1667. https://doi.org/10.1016/S1470-2045(18)30649-1 .

Prinz U, Nutzinger DO, Schulz H, Petermann F, Braukhaus C, Andreas S. Comparative psychometric analyses of the SCL-90-R and its short versions in patients with affective disorders. BMC Psychiatry. 2013;13:104. https://doi.org/10.1186/1471-244X-13-104 .

Richter P, Werner J, Heerlein A, Kraus A, Sauer H. On the validity of the beck depression inventory: a review. Psychopathology. 1998;31(3):160–8. https://doi.org/10.1159/000066239 .

Borson S, Scanlan J, Brush M, Vitaliano P, Dokmak A. The mini-cog: a cognitive “vital signs” measure for dementia screening in multi-lingual elderly. Int J Geriatr Psychiatry. 2000;15(11):1021–7. https://doi.org/10.1002/1099-1166(200011)15:11<1021::aid-gps234>3.0.co;2-6 .

Barata F, Aguiar C, Marques TR, Marques JB, Hespanhol V. Monitoring and managing lorlatinib adverse events in the Portuguese clinical setting: a position paper. Drug Saf. 2021;44(8):825–34. https://doi.org/10.1007/s40264-021-01083-x .

Freites-Martinez A, Santana N, Arias-Santiago S, Viera A. Using the common terminology criteria for adverse events (CTCAE-Version 5.0) to evaluate the severity of adverse events of anticancer therapies. CTCAE versión 5.0. Actas Dermosifiliogr. 2021;112(1):90–2. https://doi.org/10.1016/j.ad.2019.05.009 .

Solomon BJ, Bauer TM, Ignatius Ou SH, et al. Post hoc analysis of lorlatinib intracranial efficacy and safety in patients with ALK-positive advanced non-small-cell lung cancer from the phase III CROWN study. J Clin Oncol. 2022;40(31):3593–602. https://doi.org/10.1200/JCO.21.02278 .

Soo RA, Huat Tan E, Hayashi H, et al. Efficacy and safety of lorlatinib in Asian and non-Asian patients with ALK-positive advanced non-small cell lung cancer: subgroup analysis of a global phase 2 trial. Lung Cancer. 2022;169:67–76. https://doi.org/10.1016/j.lungcan.2022.05.012 .

Lu C, Yu R, Zhang C, et al. Protective autophagy decreases lorlatinib cytotoxicity through Foxo3a-dependent inhibition of apoptosis in NSCLC. Cell Death Discov. 2022;8(1):221. https://doi.org/10.1038/s41420-022-01027-z .

Shimizu Y, Okada K, Adachi J, et al. GSK3 inhibition circumvents and overcomes acquired lorlatinib resistance in ALK-rearranged non-small-cell lung cancer. NPJ Precis Oncol. 2022;6(1):16. https://doi.org/10.1038/s41698-022-00260-0 .

Aboubakr M, Elshafae SM, Abdelhiee EY, et al. Antioxidant and Anti-inflammatory potential of thymoquinone and lycopene mitigate the chlorpyrifos-induced toxic neuropathy. Pharmaceuticals. 2021;14(9):940. https://doi.org/10.3390/ph14090940 .

Mathews B, Thalody AA, Miraj SS, Kunhikatta V, Rao M, Saravu K. Adverse effects of fluoroquinolones: a retrospective cohort study in a south Indian tertiary healthcare facility. Antibiotics. 2019;8(3):104. https://doi.org/10.3390/antibiotics8030104 .

Gudesblatt M, Wissemann K, Zarif M, et al. Improvement in cognitive function as measured by NeuroTrax in patients with relapsing multiple sclerosis treated with natalizumab: a 2-year retrospective analysis [published correction appears in CNS Drugs. 2018;32(12):1183. https://doi.org/10.1007/s40263-018-0574-9 ]. CNS Drugs. 2018;32(12):1173–1181. https://doi.org/10.1007/s40263-018-0553-1 .

Verlingue L, Dugourd A, Stoll G, Barillot E, Calzone L, Londoño-Vallejo A. A comprehensive approach to the molecular determinants of lifespan using a Boolean model of geroconversion. Aging Cell. 2016;15(6):1018–26. https://doi.org/10.1111/acel.12504 .

Santos K, Lukka PB, Grzegorzewicz A, et al. Primary lung dendritic cell cultures to assess efficacy of spectinamide-1599 against intracellular mycobacterium tuberculosis. Front Microbiol. 2018;9:1895. https://doi.org/10.3389/fmicb.2018.01895 .

Zhang W, Wu L, Chen L, et al. The efficacy and safety of transarterial chemoembolization plus iodine 125 seed implantation in the treatment of hepatocellular carcinoma with oligometastases: a case series reports. Front Oncol. 2022;12:828850. https://doi.org/10.3389/fonc.2022.828850 .

Sapkota K, Dore K, Tang K, et al. The NMDA receptor intracellular C-terminal domains reciprocally interact with allosteric modulators. Biochem Pharmacol. 2019;159:140–53. https://doi.org/10.1016/j.bcp.2018.11.018 .

Zhang Z, Gao W, Zhou L, et al. Repurposing brigatinib for the treatment of colorectal cancer based on inhibition of ER-phagy. Theranostics. 2019;9(17):4878–92. https://doi.org/10.7150/thno.36254 .

Jahan NK, Ahmad MP, Dhanoa A, et al. A community-based prospective cohort study of dengue viral infection in Malaysia: the study protocol. Infect Dis Poverty. 2016;5(1):76. https://doi.org/10.1186/s40249-016-0172-3 .

Zhou Q, Lu S, Li Y, et al. Chinese expert consensus on management of special adverse effects associated with lorlatinib. Zhongguo Fei Ai Za Zhi. 2022;25(8):555–66. https://doi.org/10.3779/j.issn.1009-3419.2022.101.39 .

Shields MC, Ritter G, Busch AB. Electronic health information exchange at discharge from inpatient psychiatric care in acute care hospitals. Health Aff. 2020;39(6):958–67. https://doi.org/10.1377/hlthaff.2019.00985 .

Hosseini SA, Hajirezaei MR, Seiler C, Sreenivasulu N, von Wirén N. A potential role of flag leaf potassium in conferring tolerance to drought-induced leaf senescence in barley. Front Plant Sci. 2016;7:206. https://doi.org/10.3389/fpls.2016.00206 .

Kasugai K, Iwai H, Kuboyama N, Yoshikawa A, Fukudo S. Efficacy and safety of a crystalline lactulose preparation (SK-1202) in Japanese patients with chronic constipation: a randomized, double-blind, placebo-controlled, dose-finding study. J Gastroenterol. 2019;54(6):530–40. https://doi.org/10.1007/s00535-018-01545-7 .

Meka RR, Venkatesha SH, Acharya B, Moudgil KD. Peptide-targeted liposomal delivery of dexamethasone for arthritis therapy. Nanomedicine. 2019;14(11):1455–69. https://doi.org/10.2217/nnm-2018-0501 .

Fillmore NR, Elbers DC, La J, et al. An application to support COVID-19 occupational health and patient tracking at a Veterans Affairs medical center [published correction appears in J Am Med Inform Assoc. 2021;28(3):673. https://doi.org/10.1093/jamia/ocaa317 ]. J Am Med Inform Assoc. 2020;27(11):1716–1720. https://doi.org/10.1093/jamia/ocaa162 .

Dawczynski C. A study protocol for a parallel-designed trial evaluating the impact of plant-based diets in comparison to animal-based diets on health status and prevention of non-communicable diseases-the nutritional evaluation (NuEva) study. Front Nutr. 2021;7:608854. https://doi.org/10.3389/fnut.2020.608854 .

Sperling MR, Abou-Khalil B, Aboumatar S, et al. Efficacy of cenobamate for uncontrolled focal seizures: post hoc analysis of a phase 3, multicenter, open-label study. Epilepsia. 2021;62(12):3005–15. https://doi.org/10.1111/epi.17091 .

Lu S, Zhou Q, Liu X, et al. Lorlatinib for previously treated ALK-positive advanced NSCLC: primary efficacy and safety from a phase 2 study in People’s Republic of China. J Thorac Oncol. 2022;17(6):816–26. https://doi.org/10.1016/j.jtho.2022.02.014 .

Sogawa R, Saita T, Yamamoto Y, et al. Development of a competitive enzyme-linked immunosorbent assay for therapeutic drug monitoring of afatinib. J Pharm Anal. 2019;9(1):49–54. https://doi.org/10.1016/j.jpha.2018.09.002 .

Peled N, Gillis R, Kilickap S, et al. GLASS: Global Lorlatinib for ALK(+) and ROS1(+) retrospective Study: real world data of 123 NSCLC patients. Lung Cancer. 2020;148:48–54. https://doi.org/10.1016/j.lungcan.2020.07.022 .

Seto T, Hayashi H, Satouchi M, et al. Lorlatinib in previously treated anaplastic lymphoma kinase-rearranged non-small cell lung cancer: Japanese subgroup analysis of a global study. Cancer Sci. 2020;111(10):3726–38. https://doi.org/10.1111/cas.14576 .

Felip E, Shaw AT, Bearz A, et al. Intracranial and extracranial efficacy of lorlatinib in patients with ALK-positive non-small-cell lung cancer previously treated with second-generation ALK TKIs. Ann Oncol. 2021;32(5):620–30. https://doi.org/10.1016/j.annonc.2021.02.012 .

Baldacci S, Besse B, Avrillon V, et al. Lorlatinib for advanced anaplastic lymphoma kinase-positive non-small cell lung cancer: results of the IFCT-1803 LORLATU cohort. Eur J Cancer. 2022;166:51–9. https://doi.org/10.1016/j.ejca.2022.01.018 .

Shaw AT, Solomon BJ, Chiari R, et al. Lorlatinib in advanced ROS1-positive non-small-cell lung cancer: a multicentre, open-label, single-arm, phase 1–2 trial. Lancet Oncol. 2019;20(12):1691–701. https://doi.org/10.1016/S1470-2045(19)30655-2 .

Dagogo-Jack I, Oxnard GR, Evangelist M, et al. Phase II study of lorlatinib in patients with anaplastic lymphoma kinase-positive lung cancer and CNS-specific relapse. JCO Precis Oncol. 2022;6: e2100522. https://doi.org/10.1200/PO.21.00522 .


Acknowledgements

The authors wish to acknowledge Na Yan, Key Laboratory of Digital Technology in Medical Diagnostics of Zhejiang Province, for her help in providing a part of reference materials for this study and her meaningful guidance in the internal revision of the initial manuscript.

This study was supported by the CSCO-BMS Cancer Immunotherapy Research Foundation (Grant No. Y-BMS2019-098) and the CSCO-Xinda Cancer Immunotherapy Research Foundation (Grant No. Y-XD2019-225).

Author information

Authors and Affiliations

Department of Admission Preparation Center, College of Medicine, QianTang Campus of Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, Zhejiang, China

Fanfan Chu & Wenxi Zhang

Department of Medical Oncology, College of Medicine, QianTang Campus of Sir Run Run Shaw Hospital, Zhejiang University, No. 368, Xiasha Road, Hangzhou, Zhejiang, China


Contributions

Hong Hu and Fanfan Chu conceived and designed the study and analyzed the data. Fanfan Chu and Wenxi Zhang performed the experiments and wrote the manuscript. All authors reviewed and approved the final version of the manuscript.

Corresponding author

Correspondence to Hong Hu .

Ethics declarations

Ethics approval and consent to participate

This research was rigorously conducted in adherence to international and domestic medical ethics standards and regulations throughout its design and implementation phases. It specifically focused on the clinical characteristics and management strategies of CNS adverse reactions in ALK-positive NSCLC patients undergoing lorlatinib treatment. Before enrollment, all patients participating in this study were fully informed about the research objectives, potential risks, benefits, and possible alternative treatment options. Stringent privacy protection measures were implemented to safeguard participants' privacy and personal information. Each participant or their legal representative voluntarily signed a written informed consent form after fully understanding the content and procedures of the research. The study protocol received approval from the Ethics Review Committee of Sir Run Run Shaw Hospital (Approval Number: 2023-861-01), ensuring all research activities complied with the Declaration of Helsinki and other relevant medical ethics guidelines. Throughout the research process, patient welfare was prioritized, maintaining the highest ethical standards. Data collection and analysis were conducted carefully to protect patient privacy and dignity. Appropriate measures were taken to anonymize any patient information acquired, preventing the leakage of personal data.

Informed consent

Informed consent was obtained from the patient and/or their legal guardian to publish identifying information and images in an online open-access publication. The patient or their legal representative was thoroughly informed about the objectives of the study, the potential risks and benefits, and their right to withdraw from the study at any time without any consequences. All procedures followed the ethical standards set forth by the institutional and national research committee and the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Chu, F., Zhang, W. & Hu, H. New findings on the incidence and management of CNS adverse reactions in ALK-positive NSCLC with lorlatinib treatment. Discov Onc 15, 444 (2024). https://doi.org/10.1007/s12672-024-01339-9


Received : 19 March 2024

Accepted : 11 September 2024

Published : 13 September 2024

DOI : https://doi.org/10.1007/s12672-024-01339-9


  • ALK-positive
  • CNS toxicity
  • Management strategies

Summary of selection of participants for the anomalous health incident and control groups in the neuroimaging study. For the anomalous health incident group and the government employee control group, the top shows the inclusion information common with the clinical phenotyping study and the bottom shows information unique to the neuroimaging study. NIH indicates National Institutes of Health.

A, Percentage magnitude-of-difference map for volumetric measurement between 81 participants from the group with anomalous health incidents (AHIs) and 48 control participants. Each voxel in this map is an estimate of how much the median volume of the group with anomalous health incidents changed with respect to the median volume in the control group. The percentage magnitude map was computed from a composite of 81 volumetric maps from the anomalous health incident group and 48 images from the control group; however, the values shown in this map are only for the voxels that survived the unadjusted P < .01 (2-sided) threshold from the nonparametric randomized permutation test of difference between the 2 groups. Regions with blue shades indicate significantly smaller volume in the anomalous health incident group than in control participants, while red shades indicate larger volume compared with control participants. The sagittal slice (top right), with green lines traversing from the anterior to the posterior plane of the brain, is a reference to the locations of the axial slices shown. The map was superimposed on the JHU-ss-MNI atlas. 22 B, Heat map of the volumetric measurement for participants with AHIs and control participants. The participants with AHIs are sorted by increasing time (in days) after AHIs; control participants are not sorted. The dashed vertical line separates the participants from each group. Darker shades of blue and red in the color bar represent extreme negative and positive values, respectively. The P scores are comparable to z scores: an individual in the (approximately) greater-than-3 range would correspond to a z score greater than 3 from a normal distribution, and likewise for the dark blue shades at the negative extreme. In the key, "(" indicates that a P score is strictly greater than the number next to it, whereas "]" indicates that the boundary value is inclusive; eg, the label "(2.33,3]" represents values in the range 2.33 < z ≤ 3, and so on for the others. ROI indicates regions of interest.

A, Analysis similar to that shown in Figure 2A, but the map illustrates mean diffusivity from diffusion magnetic resonance imaging (unadjusted P  < .01, 2-sided). At the chosen threshold, no voxels survive, but blue areas would represent regions with lower diffusivity in the anomalous health incident (AHI) group than in control participants, while red areas would correspond to regions with higher diffusivity compared with control participants. The map was superimposed on the average T1-weighted image of the population. B, Analysis similar to that shown in Figure 2B, but the heat map illustrates mean diffusivity. Some individual striations of red and blue are also apparent, with no systematic patterns across regions of interest (ROI). See Figure 2B legend for explanation of key.

A, Analysis similar to that shown in Figure 2A, but the map illustrates fractional anisotropy from diffusion magnetic resonance imaging (unadjusted P  < .01, 2-sided). Some clusters can be seen in the corpus callosum with small magnitude of difference (≈2%), where the anomalous health incident (AHI) group showed lower anisotropy compared with the control group. B, Analysis similar to that shown in Figure 2B, but the heat map illustrates fractional anisotropy. Some individual striations of red and blue are also apparent with no systematic patterns across regions of interest (ROI). See Figure 2B legend for explanation of key.

Within-network functional connectivity values (y-axis) were estimated by taking the mean of all correlations between each unique pair of region of interest (ROI) combinations comprising the corresponding large-scale resting-state networks (x-axis). For example, the posterior salience network consists of n = 12 ROIs. This would have ½ ×  n ( n  – 1) = 66 unique ROI-paired connections from which the mean functional connectivity was estimated. Horizontal lines within boxes indicate the medians in each group. Boxes indicate interquartile range; horizontal lines within boxes, median. Whiskers indicate the spread of functional connectivity values within each group, up to 1.5 times the interquartile range. Dots indicate more extreme values. Some evidence of less functional connectivity within the posterior salience network can be observed within participants with anomalous health incidents (AHIs) compared with control participants (Mann-Whitney P  = .006); however, the difference is not strong enough to survive adjustment for multiple comparisons (Benjamini-Hochberg P  = .08).
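As the legend describes, within-network connectivity is the mean over all ½ × n(n − 1) unique ROI pairs in a network. A minimal sketch of that averaging from a precomputed ROI correlation matrix (a hypothetical helper, not the study's code):

```python
import itertools

def within_network_fc(corr, roi_idx):
    """Mean functional connectivity within one network: average the
    correlation values over all unique pairs of that network's ROIs."""
    pairs = list(itertools.combinations(roi_idx, 2))  # n*(n-1)/2 pairs
    return sum(corr[i][j] for i, j in pairs) / len(pairs)

# Example: for the 12-ROI posterior salience network there are 66 pairs.
n = 12
corr = [[0.5] * n for _ in range(n)]      # toy matrix, off-diagonal = 0.5
print(within_network_fc(corr, range(n)))  # 0.5
```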

eAppendix 1. Structural Volumetric and Clinical MRI

eAppendix 2. Structural Diffusion MRI

eAppendix 3. Statistical Analysis Plan (SAP)

eAppendix 4. Resting State Functional MRI (RS-fMRI)

eAppendix 5. Statistical Analysis for RS-fMRI

eAppendix 6. Assessing Relationship of Imaging Metrics With Clinical Measures

eAppendix 7. Outcome Metrics

Data Sharing Statement

  • Clinical Findings and Outcomes in US Government Personnel Reporting Directional Sensory Phenomena in Cuba JAMA Preliminary Communication March 20, 2018 This case series describes findings from the clinical evaluation of US government personnel reporting symptoms after exposure to directional audible and sensory phenomena during their postings in Havana, Cuba, and their clinical outcomes after rehabilitation. Randel L. Swanson II, DO, PhD; Stephen Hampton, MD; Judith Green-McKenzie, MD, MPH; Ramon Diaz-Arrastia, MD, PhD; M. Sean Grady, MD; Ragini Verma, PhD; Rosette Biester, PhD; Diana Duda, PT, DPT; Ronald L. Wolf, MD, PhD; Douglas H. Smith, MD
  • Neuroimaging Findings in US Government Personnel With Possible Exposure to Directional Phenomena in Cuba JAMA Original Investigation July 23, 2019 Between 2016 and 2018, US government personnel stationed in Havana, Cuba, reported exposure to unusual auditory and sensory phenomena followed by neurological signs and symptoms characterized clinically as cognitive, vestibular, and oculomotor dysfunction with sleep impairment and headaches. This study characterizes magnetic resonance–based differences in brain tissue volume, microstructure, and functional connectivity in those personnel compared with demographically similar unexposed controls. Ragini Verma, PhD; Randel L. Swanson, DO, PhD; Drew Parker, BS; Abdol Aziz Ould Ismail, MD; Russell T. Shinohara, PhD; Jacob A. Alappatt, BTech; Jimit Doshi, MS; Christos Davatzikos, PhD; Michael Gallaway, OD; Diana Duda, PT, DPT; H. Isaac Chen, MD; Junghoon J. Kim, PhD; Ruben C. Gur, PhD; Ronald L. Wolf, MD, PhD; M. Sean Grady, MD; Stephen Hampton, MD; Ramon Diaz-Arrastia, MD, PhD; Douglas H. Smith, MD
  • Clinical, Biomarker, and Research Findings in Individuals With Anomalous Health Incidents JAMA Original Investigation April 2, 2024 This study assesses whether participants with anomalous health incidents (AHIs) differ significantly from US government control participants with respect to clinical, research, and biomarker assessments. Leighton Chan, MD, MPH; Mark Hallett, MD; Chris K. Zalewski, PhD; Carmen C. Brewer, PhD; Cris Zampieri, PhD; Michael Hoa, MD; Sara M. Lippa, PhD; Edmond Fitzgibbon, MD; Louis M. French, PsyD; Anita D. Moses, MSN; André J. van der Merwe, BS; Carlo Pierpaoli, MD, PhD; L. Christine Turtzo, MD, PhD; Simge Yonter, MD; Pashtun Shahim, MD, PhD; NIH AHI Intramural Research Program Team; Brian Moore, DMSc, MPH; Lauren Stamps, BS; Spencer Flynn, BA; Julia Fontana, BS; Swathi Tata, BS; Jessica Lo, BS; Mirella A. Fernandez, BS; Annie-Lori Joseph, BS; Jesse Matsubara, DPT; Julie Goldberg, MA; Thuy-Tien D. Nguyen, MS; Noa Sasson, BS; Justine Lely, BS; Bryan Smith, MD; Kelly A. King, AuD, PhD; Jennifer Chisholm, AuD; Julie Christensen, MS; M. Teresa Magone, MD; Chantal Cousineau-Krieger, MD; Rakibul Hafiz, PhD; Amritha Nayak, ME; Okan Irfanoglu, PhD; Sanaz Attaripour, MD; Chen Lai, PhD; Wendy B. Smith, MA, PhD, BCB
  • Neurological Illness and National Security JAMA Editorial April 2, 2024 David A. Relman, MD



Pierpaoli C, Nayak A, Hafiz R, et al. Neuroimaging Findings in US Government Personnel and Their Family Members Involved in Anomalous Health Incidents. JAMA. 2024;331(13):1122–1134. doi:10.1001/jama.2024.2424


Neuroimaging Findings in US Government Personnel and Their Family Members Involved in Anomalous Health Incidents

  • 1 Laboratory on Quantitative Medical Imaging, National Institute of Biomedical Imaging and Bioengineering, Bethesda, Maryland
  • 2 Scientific and Statistical Computing Core, National Institute of Mental Health (NIMH), National Institutes of Health (NIH), Bethesda, Maryland
  • 3 National Institute of Neurological Disorders and Stroke, Bethesda, Maryland
  • 4 National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland
  • 5 Rehabilitation Medicine Department, National Institutes of Health, Bethesda, Maryland
  • 6 Military Traumatic Brain Injury Initiative (MTBI2—formerly known as the Center for Neuroscience and Regenerative Medicine [CNRM])
  • 7 The Henry Jackson Foundation for the Advancement of Military Medicine, Bethesda, Maryland
  • 8 National Intrepid Center of Excellence Walter Reed National Military Medical Center, Bethesda, Maryland
  • 9 Uniformed Services University of the Health Sciences, Bethesda, Maryland

Question   Can a systematic evaluation using quantitative magnetic resonance imaging (MRI) metrics identify potential brain lesions in patients who have experienced anomalous health incidents (AHIs) compared with a well-matched control group?

Findings   In this exploratory study that involved brain imaging of 81 participants who experienced AHIs and 48 matched control participants, there were no significant between-group differences in MRI measures of volume, diffusion MRI–derived metrics, or functional connectivity using functional MRI after adjustments for multiple comparisons. The MRI results were highly reproducible and stable at longitudinal follow-ups. No clear relationships between imaging and clinical variables emerged.

Meaning   In this exploratory neuroimaging study, there was no significant MRI-detectable evidence of brain injury among the group of participants who experienced AHIs compared with a group of matched control participants. This finding has implications for future research efforts as well as for interventions aimed at improving clinical care for the participants who experienced AHIs.

Importance   US government personnel stationed internationally have reported anomalous health incidents (AHIs), with some individuals experiencing persistent debilitating symptoms.

Objective   To assess the potential presence of magnetic resonance imaging (MRI)–detectable brain lesions in participants with AHIs, with respect to a well-matched control group.

Design, Setting, and Participants   This exploratory study was conducted at the National Institutes of Health (NIH) Clinical Center and the NIH MRI Research Facility between June 2018 and November 2022. Eighty-one participants with AHIs and 48 age- and sex-matched control participants, 29 of whom had similar employment as the AHI group, were assessed with clinical, volumetric, and functional MRI. A high-quality diffusion MRI scan and a second volumetric scan were also acquired during a different session. The structural MRI acquisition protocol was optimized to achieve high reproducibility. Forty-nine participants with AHIs had at least 1 additional imaging session approximately 6 to 12 months from the first visit.

Exposure   AHIs.

Main Outcomes and Measures   Group-level quantitative metrics obtained from multiple modalities: (1) volumetric measurement, voxel-wise and region of interest (ROI)–wise; (2) diffusion MRI–derived metrics, voxel-wise and ROI-wise; and (3) ROI-wise within-network resting-state functional connectivity using functional MRI. Exploratory data analyses used both standard, nonparametric tests and bayesian multilevel modeling.

Results   Among the 81 participants with AHIs, the mean (SD) age was 42 (9) years and 49% were female; among the 48 control participants, the mean (SD) age was 43 (11) years and 42% were female. Imaging scans were performed as early as 14 days after experiencing AHIs with a median delay period of 80 (IQR, 36-544) days. After adjustment for multiple comparisons, no significant differences between participants with AHIs and control participants were found for any MRI modality. At an unadjusted threshold ( P  < .05), compared with control participants, participants with AHIs had lower intranetwork connectivity in the salience networks, a larger corpus callosum, and diffusion MRI differences in the corpus callosum, superior longitudinal fasciculus, cingulum, inferior cerebellar peduncle, and amygdala. The structural MRI measurements were highly reproducible (median coefficient of variation <1% across all global volumetric ROIs and <1.5% for all white matter ROIs for diffusion metrics). Even individuals with large differences from control participants exhibited stable longitudinal results (typically, <±1% across visits), suggesting the absence of evolving lesions. The relationships between the imaging and clinical variables were weak (median Spearman ρ = 0.10). The study did not replicate the results of a previously published investigation of AHIs.

Conclusions and Relevance   In this exploratory neuroimaging study, there were no significant differences in imaging measures of brain structure or function between individuals reporting AHIs and matched control participants after adjustment for multiple comparisons.

US government personnel and their family members, mostly located internationally, have described unusual incidents of noise and head pressure often associated with headache, cognitive dysfunction, and other symptoms. These events have caused significant disruption in the lives of those affected and have been labeled anomalous health incidents (AHIs). 1 , 2

A previous neuroimaging study 3 of patients with AHIs from Havana, Cuba, reported significant differences with respect to control participants in brain volumes, diffusion magnetic resonance imaging (dMRI) metrics, and functional connectivity. Taken together, these findings suggested detectable differences in the brains of those who experienced AHIs.

The main objective of the present study was to evaluate a broad range of quantitative neuroimaging features in participants with AHIs, taking advantage of the following factors: (1) a larger cohort of participants with AHIs, (2) a group of control participants that included US government personnel with similar professional backgrounds, (3) a dedicated dMRI sequence aimed at providing high accuracy and reproducibility, (4) an evaluation of the achieved reproducibility for volumetric and diffusion metrics, and (5) the availability of deep phenotyping (reported in a companion article 4 ) for the assessment of clinical neuroimaging correlations.

Participant recruitment and inclusion/exclusion criteria are described in Figure 1 and Table 1 . Briefly, participants located in Cuba, China, Austria, the United States, and other locations were recruited and evaluated at the National Institutes of Health Clinical Center in a natural history study between June 2018 and November 2022. The study was approved by the NIH Institutional Review Board, and all participants provided written informed consent. A more detailed description of the study design and cohort can be found in our companion article. 4

For the neuroimaging study, we included participants with an AHI and unaffected government employees with similar professional background as the participants who experienced AHIs. We also recruited additional healthy volunteers to reduce age and sex imbalance and to attain a larger normative group ( Figure 1 and Table 1 ).

The participants with AHIs were further separated into categories (AHI 1 and AHI 2) based on criteria used by a US intelligence panel categorizing AHI incident modalities. 5 AHI 1 is the category consistent with the 4 core characteristics of the incident as defined by the intelligence community, and AHI 2 comprises all other participants. Data from the histories of participants with AHIs and from Department of State Diplomatic Security Service investigations were used to assign participants to these AHI subgroups. The diagnosis of persistent postural-perceptual dizziness (PPPD) 6 was made when participants fully met specific criteria.

All participants had a clinical scan and resting-state functional MRI (RS-fMRI) performed using a 3T Biograph mMR scanner (Siemens), except for 1 control participant scanned using a 3T Achieva scanner (Philips). Each clinical scan (including susceptibility-weighted imaging) (eAppendix 1.1 in Supplement 1 ) was read by a board-certified neuroradiologist, who had access to the clinical history of the participants. On a different day, participants had a structural and dMRI research scan performed using a 3T Prisma scanner (Siemens). Acquisition and preprocessing details of structural MRI, 7 , 8 dMRI, 9 - 13 and RS-fMRI 14 , 15 are available in eAppendixes 1, 2, and 4, respectively, in Supplement 1 . Extensive testing was performed to ensure the reproducibility of the structural and dMRI data (eAppendices 1.4 and 2.5 in Supplement 1 ).

The option to have follow-up visits with structural and dMRI scans was offered to all participants with AHIs included in the imaging study at a yearly interval. However, participants were free to decline, and the schedule was flexible to accommodate individual availability (eAppendix 3.5 in Supplement 1 ).

We computed several neuroimaging metrics to evaluate group differences between participants with AHIs and control participants. From the structural volumetric MRI, we computed voxel-wise and regional brain volumes. 7 , 8 From the dMRI, we computed diffusion tensor metrics 9 , 13 (fractional anisotropy, mean diffusivity, axial diffusivity, and radial diffusivity), mean apparent propagator metrics 11 (propagator anisotropy, return-to-axis probability [RTAP], return-to-origin probability, return-to-plane probability, and non-Gaussianity), and dual-compartment metrics 11 (cerebrospinal fluid signal fraction and parenchymal mean diffusivity). From the fMRI, we computed functional connectivity 14 , 15 within each large-scale network by averaging the unique pairwise correlations between only the regions of interest (ROIs) comprising the corresponding network. We did not assess any between-network connectivity, to be consistent with the analysis performed in a previous neuroimaging study 3 of AHIs. See eAppendix 7 in Supplement 1 for a more detailed description of each of these outcome measures.

Given the variability in clinical presentation, timing, and modalities of the AHIs, and the uncertainties regarding the mechanism, spatial extent, and regional distribution of the potential injury, we conducted an exploratory data analysis. We developed a comprehensive analysis plan involving multiple statistical approaches (described in detail in eAppendices 3 and 5 in Supplement 1 ). These included conventional nonparametric analyses both with and without Benjamini-Hochberg 16 adjustment for multiple comparisons, as well as bayesian multilevel modeling, which intrinsically addresses the issue of multiplicity in the conventional model. 17
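The Benjamini-Hochberg step-up procedure referenced here can be sketched in a few lines (a generic illustration of the method, not the study's analysis code):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p values (step-up FDR procedure):
    sort the p values, scale each by m/rank, and enforce monotonicity
    by stepping up from the largest p value."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):        # step up from the largest p value
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.5]))
```

Note how the raw p = .04 and p = .03 both end up just above .05 after adjustment, which is exactly the pattern described later for the salience-network result.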

For the volumetric and dMRI data (eAppendix 3.3.1 in Supplement 1 ), an analysis approach based on percentiles was also developed to compare individual participants with respect to the median of the control distribution 18 (eAppendixes 3.3.3 and 3.3.4 in Supplement 1 ).
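The percentile-based individual comparison can be illustrated as follows: a participant's ROI value is located within the empirical control distribution, and the resulting percentile is mapped through the inverse normal CDF, yielding the z-like "P scores" shown in the heat maps. This is a hypothetical reconstruction of the idea, not the study's code:

```python
from statistics import NormalDist

def p_score(value, control_values):
    """z-like score for one participant's ROI value: empirical percentile
    within the control distribution, mapped through the inverse normal CDF.
    The +0.5 / +1 continuity correction keeps the percentile off 0 and 1."""
    below = sum(1 for c in control_values if c < value)
    percentile = (below + 0.5) / (len(control_values) + 1)
    return NormalDist().inv_cdf(percentile)

controls = list(range(1, 50))            # toy control distribution, median = 25
print(round(p_score(25, controls), 2))   # near 0: value sits at the control median
```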

For the group comparison of RS-fMRI data, we performed the Mann-Whitney U test, followed by Benjamini-Hochberg adjustment for 13 networks derived using the same atlas 19 used by Verma et al 3 (see eAppendix 4.3 in Supplement 1 for methods and eAppendix 5.1.1 in Supplement 1 for the bayesian approach). For the 3-group (AHI 1, AHI 2, and control participants) comparison, see eAppendix 5.1.2 in Supplement 1 for the conventional approach and eAppendix 5.1.3 in Supplement 1 for the bayesian approach.

We evaluated the correlation between 41 clinical measures and relevant neuroimaging metrics within specific ROIs where differences between the control participants and participants with AHIs were observed at an unadjusted level. This included the first 2 principal components (eAppendix 3.3.5 in Supplement 1 ) obtained from 8 diffusion MRI measurements within the white matter ROIs, mean diffusivity from the gray matter ROIs, normalized volume, and functional connectivity within specific networks (eAppendix 6.1 in Supplement 1 ). In addition, we evaluated the possible relationship between functional connectivity and clinical measures pertaining to dysfunction in those networks, using linear regression (eAppendix 6.2 in Supplement 1 ).

All voxel-wise statistical analyses were performed using the nonparametric "randomise" module in FSL 20 version 6.0 at an unadjusted threshold of P  < .01 (2-sided). The ROI-wise statistical analyses were performed using R (version 4.2.2) 21 at an unadjusted threshold of P  < .05 (2-sided).
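FSL's randomise implements voxel-wise nonparametric inference by permuting group labels. The core idea, applied per voxel, can be sketched as follows (a simplified toy version, not FSL's implementation):

```python
import random

def permutation_pvalue(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation test on the difference in group means:
    shuffle the pooled values across the two group labels and count how
    often the permuted absolute difference meets or exceeds the observed one."""
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction keeps p > 0
```

In the study, a test of this kind is run at every voxel and the resulting maps are thresholded at unadjusted P < .01.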

The number of participants included in the primary analysis and the relevant demographics are shown in Figure 1 and Table 1 . Of the 86 participants included in the clinical assessment, 81 participated in the neuroimaging study. Of the 30 control participants included in the clinical assessments, 29 participated in the neuroimaging study, and 19 additional participants were recruited to improve the age and sex match of the control population with the AHI population. No significant differences in age and sex were observed across any participant groups, including the control group (mean [SD] age, 43 [11] years; 42% female), the AHI group (mean [SD] age, 42 [9] years; 49% female), and the AHI subgroups (AHI 1: mean [SD] age, 40 [9] years; 48% female; AHI 2: mean [SD] age, 44 [9] years; 51% female). Imaging scans were performed as early as 14 days after experiencing AHIs, with a median delay interval of 80 (IQR, 36-544) days and a range of 14 to 1505 days across the entire AHI sample. For the longitudinal scans, the data presented here included participants with AHIs (n = 49, with at least 2 visits) up to a fixed date (November 7, 2022), with a median interscan interval of 371 (IQR, 298-420) days (eAppendix 3.5 in Supplement 1 ).

The majority of the brain MRI scans were unremarkable as read by clinical neuroradiologists. No evidence of acute traumatic brain injury or hemorrhage was reported for any participant in the AHI or control group. The presence of gliosis, white matter hyperintensities abnormal for age, or small vessel ischemic changes was reported in 10.7% of participants with AHIs and 14.6% of control participants. Other incidental findings included, in order of decreasing frequency: sinusitis (n = 6 AHI, n = 2 control), retention cysts (n = 4 AHI, n = 1 control), developmental venous anomalies (n = 2 AHI, n = 1 control), and other congenital anatomical abnormalities (n = 2 AHI, n = 0 control) considered of little clinical relevance.

Four healthy volunteers underwent 5 repeated scans for the interscan reproducibility analysis (eAppendix 1.4.1, eTable 2 in Supplement 1). The coefficient of variation (averaged over the 4 volunteers) across repeated scans for the structural volumetric data showed excellent reproducibility, with less than 1% coefficient of variation for the global ROIs (eg, total parenchyma = 0.5%, total gray matter = 0.6%, and cerebral white matter = 0.6%). The median coefficient of variation across all ROIs was 2.4% (IQR, 1.1%-3.2%) (eTable 3 in Supplement 1). The interscanner reproducibility of structural volumetric data (n = 110, eAppendix 1.4.2 in Supplement 1) between the Siemens Biograph mMR and the Siemens Prisma scanners was also high, with global ROIs exhibiting a very strong correlation (r ≈ 1) between the scanners (eg, total parenchyma [R² = 0.99], cerebral white matter [R² = 0.98], total gray matter [R² = 0.98], cortical gray matter [R² = 0.97]), and approximately 86% of ROIs showing an R² greater than 0.7 (eAppendix 3.1.2, eTable 4 in Supplement 1).
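
The coefficient of variation used in the reproducibility analysis is straightforward to reproduce. The sketch below uses invented total-parenchyma volumes for one volunteer's 5 repeated scans (not study data), taking the within-subject sample SD divided by the mean:

```python
import numpy as np

# Hypothetical total-parenchyma volumes (mL) from 5 repeated scans of one
# volunteer; the values are illustrative, not taken from the study data.
scans = np.array([1205.0, 1199.0, 1211.0, 1202.0, 1208.0])

# Coefficient of variation: within-subject sample SD over the mean, in %.
cv_percent = 100.0 * scans.std(ddof=1) / scans.mean()
print(round(cv_percent, 2))  # prints 0.39, well under the 1% reported for global ROIs
```

In the study, this per-volunteer value would then be averaged over the 4 volunteers for each ROI.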

For the same 4 healthy volunteers, the dMRI metrics also demonstrated strong reproducibility (eAppendix 3.1.3, eTable 6 in Supplement 1). For instance, the median coefficient of variation across all white matter ROIs was approximately 1% for the diffusion tensor (eg, fractional anisotropy: 0.6% [IQR, 0.5%-0.8%]; mean diffusivity: 0.6% [IQR, 0.5%-0.7%]) and mean apparent propagator (eg, propagator anisotropy: 0.2% [IQR, 0.1%-0.4%]; return-to-axis probability [RTAP]: 1.3% [IQR, 1.1%-1.6%]) metrics.

After adjustment for multiple comparisons, no statistically significant differences between participants with AHI and control participants were found in the whole-brain voxel-wise analysis or the ROI-wise analysis for either the volumetric measurements or the dMRI metrics. Therefore, when we mention "significant" results in the remainder of this article, we refer either to significant results for the Bayesian analysis or to "significant results at an unadjusted level" for the conventional analysis.

Figure 2 A, Figure 3 A, and Figure 4 A show voxel-wise magnitude effect maps for volumetrics, 23 - 26 mean diffusivity, 10 and fractional anisotropy, 10 respectively. Figure 2 A shows a few regions with significantly higher volume (unadjusted) in participants with AHIs, with neither apparent anatomical pattern nor left-right consistency. No regions with significantly altered mean diffusivity were observed in the brain parenchyma ( Figure 3 A), whereas participants with AHIs exhibited significantly lower fractional anisotropy than control participants in the corpus callosum and in regions located at the interfaces between gray matter and sulci ( Figure 4 A), which could arise from inconsistent interparticipant registration. No regions with significantly higher fractional anisotropy in participants with AHIs were observed. In all 3 metrics mentioned above, no clusters survived after adjustment for multiple comparisons. Differences between participants with AHIs and control participants for all other diffusion metrics were small in both magnitude and spatial extent (eFigure 3A-G in Supplement 1 ), and no remarkable differences emerged in the analysis of the AHI 1 and AHI 2 subgroups vs control participants (eFigure 4A-W in Supplement 1 ).

Table 2 summarizes the group-level ROI-wise analysis results for potential volumetric and dMRI outcomes of interest in ROIs where a difference was observed at an unadjusted threshold (P < .05, 2-sided) and the percentage difference in medians was at least 2%. The highest-magnitude difference for volumetric and diffusion metrics was less than 8%. Compared with control participants, the corpus callosum at the midsagittal plane in participants with AHIs had 7% larger volume and lower RTAP (ie, increased diffusivity perpendicular to the fibers). The RTAP differences were statistically significant but of small magnitude, ranging from 2% to 7% depending on the analysis used. Decreased RTAP is in line with the reduction of fractional anisotropy found in several clusters in the corpus callosum in the voxel-wise analysis. In addition, the right superior longitudinal fasciculus showed decreased RTAP and fractional anisotropy in participants with AHIs compared with control participants, albeit with small magnitude differences (≈3%).
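
The ROI screening rule described above (unadjusted P < .05 plus at least a 2% difference in medians) is simple to express. A minimal sketch of the percentage-difference part, using invented volume values rather than study data:

```python
import numpy as np

# Invented midsagittal ROI volumes (mL) for illustration only.
ahi = np.array([7.4, 7.9, 8.1, 7.6, 8.3, 7.8])
ctrl = np.array([7.0, 7.2, 7.5, 7.1, 7.3, 7.4])

# Percentage difference in medians, relative to the control median.
pct_diff = 100.0 * (np.median(ahi) - np.median(ctrl)) / np.median(ctrl)
print(round(pct_diff, 1))  # prints 8.3: this hypothetical ROI would pass the 2% cutoff
```

An ROI would be reported in such a table only when this quantity is at least 2% and the group test is also below the unadjusted threshold.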

Figure 2 B, Figure 3 B, and Figure 4 B show heat maps generated from the "Pscores" 18 (eAppendixes 3.3.3 and 3.3.4 in Supplement 1) for volume, mean diffusivity, and fractional anisotropy, respectively. In these maps, each column represents an individual, so a vertical line of a given color indicates a consistent deviation from the median of the control participants in multiple brain regions for that individual. Conversely, the rows represent ROIs: a darker color in a row for the participants with AHIs but not for the control participants would suggest that the ROI may be involved in the expression of AHI pathology. Moreover, a higher representation of darker colors in the group of participants with AHIs would indicate an increased presence of extreme values and therefore an increased likelihood of abnormalities in the AHI cohort. Overall, vertical striations related to interindividual differences were observed, but there were neither ROI-specific striations nor more extreme values in the AHI group, even across other diffusion metrics (eFigure 7, panels A-I in Supplement 1 for the AHI and control groups and eFigure 8, panels A-L in Supplement 1, which includes AHI subgroups).
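
The layout of such a deviation heat map can be sketched generically. The snippet below uses simulated data and a generic robust deviation score (distance from the control median, scaled by the control IQR); it is not the exact "Pscore" computation of the cited method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data for illustration: 30 control participants and 3 other
# individuals measured in 4 ROIs; neither the data nor the score are
# from the study itself.
controls = rng.normal(loc=100.0, scale=5.0, size=(30, 4))
individuals = rng.normal(loc=100.0, scale=5.0, size=(3, 4))

ctrl_median = np.median(controls, axis=0)
q75, q25 = np.percentile(controls, [75, 25], axis=0)
scores = (individuals - ctrl_median) / (q75 - q25)

# Arrange as in the figures: rows = ROIs, columns = individuals, so a
# dark column flags a person deviating consistently across regions and
# a dark row flags a region deviating consistently across people.
heatmap = scores.T
print(heatmap.shape)  # prints (4, 3)
```

With this orientation, the article's observation of vertical striations but no horizontal ones corresponds to person-level, rather than region-level, variability.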

To assess the dMRI differences between participants with AHIs and control participants globally in white matter, a principal component analysis was performed using percentiles averaged across all white matter ROIs from 8 diffusion metrics (eAppendix 3.3.5 in Supplement 1 ). The first 2 principal components explained more than 87% of the variance (eFigure 6B in Supplement 1 ). eFigure 9A in Supplement 1 shows the biplot from the principal component analysis revealing that the 95% confidence ellipses of participants with AHIs and control participants substantially overlap, indicating they have very similar global diffusion characteristics. A similar biplot including the AHI subgroups (AHI 1 and AHI 2) also shows that the 3 groups substantially overlap (eFigure 9B in Supplement 1 ).
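
The dimensionality-reduction step can be sketched with a plain SVD-based principal component analysis. The synthetic data below (two dominant latent factors plus noise) stand in for the 8 averaged white matter diffusion metrics and are not the study data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 120 participants x 8 diffusion metrics, constructed
# so that two latent factors dominate the variance (illustrative only).
latent = rng.normal(size=(120, 2))
loadings = rng.normal(size=(2, 8))
X = latent @ loadings + 0.1 * rng.normal(size=(120, 8))

# PCA via SVD of the column-centered data matrix; squared singular
# values give the variance carried by each principal component.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(round(explained[:2].sum(), 2))
```

In the study's biplot, each participant would then be plotted on the first two component axes, with a 95% confidence ellipse drawn per group to assess overlap.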

Among the individuals included in the longitudinal assessment (n = 49 [2 visits], n = 17 [3 visits], and n = 8 [4 visits]), the median interscan interval between the visits was 371 (IQR, 298-420) days (eAppendix 3.5 in Supplement 1). In the regions where differences between participants with AHIs and control participants were observed at an unadjusted level (P < .05) in the cross-sectional analysis of the first visit, the ROI measurements of the available follow-up visits were plotted. The volumetric and dMRI metrics in these ROIs showed very small changes across these follow-up visits (on average, <±1%), even for individuals with large deviations from the median of the control participants at the first visit (eFigure 10A-G in Supplement 1).

Figure 5 shows box plots for the group of control participants and participants with AHIs for 13 resting-state networks. In general, control participants had higher median functional connectivity in all resting-state networks compared with participants with AHIs; however, the only difference at an unadjusted level (P = .006; difference in location, −0.03 [95% CI, −0.05 to −0.01]), found in the posterior salience network, did not survive Benjamini-Hochberg adjustment (P = .08). Stronger differences were observed between control participants and the AHI 1 subgroup, within the posterior salience network (unadjusted P = .004; difference in location, −0.04 [95% CI, −0.06 to −0.01]; P = .03 after Benjamini-Hochberg adjustment) and the anterior salience network (unadjusted P = .002; difference in location, −0.06 [95% CI, −0.09 to −0.02]; P = .02 after Benjamini-Hochberg adjustment). See eAppendix 5.1.2 and eFigure 12 in Supplement 1 for more details on these analyses. The results from our conventional analysis also aligned with those from the Bayesian approach for both the 2-group comparison (eAppendices 5.1.1 and 5.1.3 in Supplement 1) and the 3-group comparison (eFigures 11 and 12 in Supplement 1).
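
The Benjamini-Hochberg step-up procedure used for the network comparisons can be sketched directly. The p-values below are invented to mimic the pattern of a single small unadjusted p among 13 networks; they are not the study's actual values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean array: True where the null is rejected at FDR q.

    Step-up procedure: sort the m p-values, find the largest rank k with
    p_(k) <= (k / m) * q, and reject hypotheses 1..k.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    thresholds = (np.arange(1, m + 1) / m) * q
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank meeting the bound
        reject[order[: k + 1]] = True
    return reject

# Invented p-values for 13 networks: one small unadjusted value (0.006)
# among otherwise unremarkable results, echoing the pattern in the text.
pvals = [0.006, 0.21, 0.34, 0.45, 0.52, 0.09, 0.61, 0.70,
         0.12, 0.83, 0.90, 0.95, 0.99]
print(benjamini_hochberg(pvals, q=0.05).any())  # prints False
```

With 13 tests, the rank-1 bound is 0.05/13 ≈ 0.0038, so an isolated p = .006 does not survive, which mirrors how a network-level finding can be significant unadjusted yet fail the adjustment.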

There was no significant relationship between functional connectivity in the salience networks and any of the clinical measurements reported in the companion article 4 (eFigure 13 in Supplement 1 ), particularly including relationships with metrics assessing anxiety and posttraumatic stress disorder (eAppendix 6.2 and eFigure 15A-B in Supplement 1 ). Stratification of the participants into those diagnosed with PPPD and those without did not affect the functional connectivity findings (eFigure 14 in Supplement 1 ).

In this exploratory neuroimaging study, after adjustment for multiple comparisons, there were no significant differences between participants with AHIs and control participants for any MRI modality. When we report significant differences between groups, these differences are either from an "unadjusted" analysis or from the Bayesian analysis. Global volumetrics, such as total gray matter and white matter volumes, did not show any differences in either the unadjusted analysis or the Bayesian analysis. The principal component analysis of diffusion metrics across all white matter regions also showed an essential overlap of the median values for participants with AHIs and control participants. From these findings, it may be concluded that, from a structural MRI standpoint, there was no evidence of widespread brain lesions in the AHI group.

In the ROI-wise analysis, the corpus callosum at the midsagittal plane in participants with AHIs had larger volume than in control participants. This is the opposite of what would be expected in the presence of axonal loss consequent to brain injury. Low anisotropy and increased diffusivity perpendicular to the fibers have been reported in Wallerian degeneration, 27 but the small group differences we observed in the corpus callosum and the superior longitudinal fasciculus may not be indicative of pathology. Overall, the group-level analysis of structural and dMRI data revealed very small differences between participants with AHIs and control participants.

Lack of significant group differences may originate from a large heterogeneity in the response to the potential injury across individuals. The data were therefore also presented at the individual level using heat maps. These analyses indicated that the measurements were sensitive enough to detect differences across individuals, but they did not segregate into a clear pattern between participants with AHIs and control participants.

An important feature of this study was the longitudinal brain MRI scans, allowing evaluation of potential findings over time. This is especially important when there is a lack of predeployment brain MRI scans. Measurements across various MRI modalities were largely stable on the follow-up visits, even for participants with atypical values at initial scan, suggesting absence of evolving lesions. Lack of evolving lesions may indicate absence of an acute brain injury, since most injuries result in changes over time.

For the functional connectivity analysis, there were differences primarily in the salience networks, although most differences did not survive after adjustment for multiple comparisons. The primary regions comprising the salience networks (anterior salience, posterior salience) are the anterior and posterior insula, which are associated with sensorimotor, autonomic, emotion, and decision-making functions, among several other functions. 28 The salience networks process prioritized attention stimuli and work as a switching hub between other large-scale resting-state networks such as the central executive networks (left and right executive control networks) and default-mode networks (posterior and dorsal default-mode networks). 29 , 30 Abnormalities in salience networks have been seen in patients with functional neurologic disorders, 31 but the lack of a relationship here with functional manifestations (PPPD) makes this finding of uncertain significance. Moreover, increased connectivity in the salience network has been associated with posttraumatic stress disorder 32 , 33 and anxiety 34 ; however, no such relationships were observed within the AHI cohort. This further adds to the uncertainty of the clinical relevance of these differences with respect to AHIs.

Even restricting the analysis to the more selective AHI 1 group, we did not find substantial differences between AHI and control groups, except for the functional connectivity in the salience networks noted above. These findings suggest that the current criteria used by the Intelligence Community Experts Panel to identify cases of interest do not correspond to distinct MRI patterns. That this study did not identify a neuroimaging signature of brain injury in this AHI cohort does not detract from the seriousness of the clinical condition.

A previous neuroimaging study 3 reported significant differences with respect to control participants in regional white matter and gray matter volumes, decreased global white matter volume, decreased mean diffusivity in the cerebellar vermis, increased fractional anisotropy in the splenium of the corpus callosum, and reduced functional connectivity in the auditory and visuospatial subnetworks. The present study did not replicate any of the findings of that previous neuroimaging study on AHI for any MRI modality used.

For the dMRI component of the study, this lack of agreement could originate from experimental factors. dMRI is a powerful technique, but it is vulnerable to several potential artifacts, which may bias results. 12 , 35 Because obtaining reliable quantitative dMRI measurements from clinical scans is problematic, we had a dedicated dMRI session with an acquisition scheme designed to achieve high accuracy and reproducibility. Experimental factors, however, are unlikely to explain the lack of agreement for the volumetric and functional MRI findings. The authors of the previous study 3 indicated that one limitation of their investigation was that it was not possible to obtain control participants who shared the same professional background as the exposed individuals. In this regard, the present study had a better-matched control cohort, although still with a small number of participants. An additional confounder in the previous study 3 was that different MRI protocols were used to acquire data for 2 subsets of the control cohort. The occurrence of spurious systematic differences in advanced MRI findings when different acquisition protocols are used is well documented in the literature. 36 - 39 For both the previous 3 and the present study, the sample size is relatively small, and it is difficult to assess the risk of false-positive and false-negative findings. 40 - 42

This study has several limitations. First, the sample size of the control population was small, and not all control participants were matched vocationally to the participants with AHIs. On the other hand, control participants and participants with AHIs were well matched based on age and sex, and all participants were scanned with an identical imaging protocol. Second, the earliest scan for participants with AHIs was 14 days from the experienced event (median, 80 [IQR, 36-544] days; range, 14-1505 days), precluding the assessment of acute imaging abnormalities. Third, while the dMRI acquisition was designed to have very high quality, the fMRI acquisition was limited to more standard protocols. We also could not perform specific task-fMRI assessments that might help better characterize the fMRI correlates of clinical manifestations pertinent to this group. The RS-fMRI analysis was also limited to only intranetwork functional connectivity, while evaluations of both intranetwork and internetwork functional connectivity may be warranted for a comprehensive assessment.

This exploratory neuroimaging study, which was designed to produce highly reproducible quantitative imaging metrics, revealed no significant differences in imaging measures of brain structure or function between individuals reporting AHIs and matched control participants after adjustment for multiple comparisons. These findings suggest that the origin of the symptoms of participants with AHIs may not be linked to an MRI-identifiable injury to the brain.

Accepted for Publication: February 13, 2024.

Published Online: March 18, 2024. doi:10.1001/jama.2024.2424

Corresponding Author: Carlo Pierpaoli, MD, PhD, Laboratory on Quantitative Medical Imaging, National Institute of Biomedical Imaging and Bioengineering, Bldg 13, Room 3W43, Bethesda, MD 20892 ( [email protected] ).

NIH AHI Intramural Research Program Team Authors: Brian Moore, DMSc, MPH; Lauren Stamps, BS; Spencer Flynn, BA; Julia Fontana, BS; Swathi Tata, BS; Jessica Lo, BS; Mirella A. Fernandez, BS; Annie Lori-Joseph, BS; Jesse Matsubara, DPT; Julie Goldberg, MA; Thuy-Tien D. Nguyen, MS; Noa Sasson, BS; Justine Lely, BS; Bryan Smith, MD; Kelly A. King, AuD, PhD; Jennifer Chisholm, AuD; Julie Christensen, MS; M. Teresa Magone, MD; Chantal Cousineau-Krieger, MD; Louis M. French, PsyD; Simge Yonter, MD; Sanaz Attaripour, MD; Chen Lai, PhD.

Affiliations of NIH AHI Intramural Research Program Team Authors: National Institute of Neurological Disorders and Stroke, Bethesda, Maryland (Smith, Attaripour); National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland (King, Chisholm, Christensen); Rehabilitation Medicine Department, National Institutes of Health, Bethesda, Maryland (Flynn, Fontana, Tata, Lo, Fernandez, Lori-Joseph, Matsubara, Goldberg, Nguyen, Sasson, Lely, Yonter); Military Traumatic Brain Injury Initiative (MTBI2—formerly known as the Center for Neuroscience and Regenerative Medicine [CNRM]) (Moore, Stamps); The Henry Jackson Foundation for the Advancement of Military Medicine, Bethesda, Maryland (Moore, Stamps, Lai); National Intrepid Center of Excellence Walter Reed National Military Medical Center, Bethesda, Maryland (French); Uniformed Services University of the Health Sciences, Bethesda, Maryland (French, Lai); National Eye Institute, National Institutes of Health, Bethesda, Maryland (Magone, Cousineau-Krieger).

Author Contributions: Drs Pierpaoli, Shahim, and Chan had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Drs Chan and Shahim contributed equally to this work.

Concept and design: Pierpaoli, Irfanoglu, Nayak, Hafiz, Hallett, Zalewski, Brewer, Lippa, French, van der Merwe, Shahim, Chan.

Acquisition, analysis, or interpretation of data: Pierpaoli, Nayak, Hafiz, Irfanoglu, Chen, Taylor, Hallett, Hoa, Pham, Chou, Moses, van der Merwe, Lippa, Brewer, Zalewski, Zampieri, Turtzo, Shahim, Chan, Moore, Stamps, Flynn, Fontana, Tata, Lo, Fernandez, Lori-Joseph, Matsubara, Goldberg, Nguyen, Sasson, Lely, Smith, King, Chisholm, Christensen, Magone, Cousineau-Krieger, French, Yonter, Attaripour, Lai.

Drafting of the manuscript: Pierpaoli, Nayak, Hafiz.

Critical review of the manuscript for important intellectual content: Pierpaoli, Nayak, Hafiz, Irfanoglu, Chen, Taylor, Hallett, Hoa, Pham, Chou, Moses, van der Merwe, Lippa, Brewer, Zalewski, Zampieri, Turtzo, Shahim, Chan, Moore, Stamps, Flynn, Fontana, Tata, Lo, Fernandez, Lori-Joseph, Matsubara, Goldberg, Nguyen, Sasson, Lely, Smith, King, Chisholm, Christensen, Magone, Cousineau-Krieger, French, Yonter, Attaripour, Lai.

Statistical analysis: Hafiz, Chen, Taylor, Nayak, Pierpaoli.

Obtained funding: Chan, Pierpaoli.

Administrative, technical, or material support: Pierpaoli, Nayak, Hafiz, Irfanoglu, Chen, Taylor, Hallett, Hoa, Pham, Chou, Moses, van der Merwe, Lippa, Brewer, Zalewski, Zampieri, Turtzo, Shahim, Chan, Moore, Stamps, Flynn, Fontana, Tata, Lo, Fernandez, Lori-Joseph, Matsubara, Goldberg, Nguyen, Sasson, Lely, Smith, King, Chisholm, Christensen, Magone, Cousineau-Krieger, French, Yonter, Attaripour, Lai.

Supervision: Pierpaoli, Chan.

Conflict of Interest Disclosures: Dr Hallett reported serving on medical advisory boards for QuantalX and VoxNeuro and receiving consulting fees from Janssen Pharmaceutical outside the submitted work. Dr Pham reported receiving grants from the National Multiple Sclerosis Society, Washington University in St. Louis, and Johns Hopkins University and receiving personal fees from the University of Pennsylvania outside the submitted work. Dr Brewer reported that she is Research Audiologist emeritus, National Institute on Deafness and Other Communication Disorders, and serving (unpaid) on the editorial board for, and receiving travel support for annual editorial board meeting from, Ear and Hearing .

Funding/Support: The study was funded by the National Institutes of Health (NIH), including the Clinical Center, the Office of Behavioral and Social Sciences Research, the Office of the Director, and the Uniformed Services University (USU) Military Traumatic Brain Injury Initiative (MTBI2). This work was supported by the NIH intramural research programs of the National Institute of Biomedical Imaging and Bioengineering, the NIH Clinical Center, the National Institute on Deafness and Other Communication Disorders (DC000064), the National Institute of Neurological Disorders and Stroke, the National Eye Institute, and the National Institute of Nursing Research.

Role of the Funder/Sponsor: The National Institutes of Health had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. The NIH and USU were only involved with all of these aspects of the study insofar as some of the authors are employees or otherwise affiliated with the NIH and USU.

Disclaimer: The views expressed in this article are those of the authors and do not reflect the policy of the US Department of the Army, Navy, Air Force, Department of Defense, or the US government. The identification of specific products or scientific instrumentation is considered an integral part of the scientific endeavor and does not constitute endorsement or implied endorsement on the part of the authors, Department of Defense, or any component agency.

Data Sharing Statement: See Supplement 2 .

Additional Contributions: We thank the study participants, their families, and the care providers (all staff who had interactions with the study participants, eg, MRI technicians, clinicians, nurses, receptionists) who made this study possible. This work used the computational resources of the NIH HPC Biowulf cluster ( http://hpc.nih.gov ). Henry L. Lew, MD, PhD, University of Hawaii, served as an independent medical monitor, and Josh Chang, PhD, NIH, provided statistical support. Dr Lew did not receive compensation, and Dr Chang received compensation as a contractor with the NIH.



Independent investigation of the NHS in England

Lord Darzi's report on the state of the National Health Service in England.

Applies to England

Summary letter from Lord Darzi to the Secretary of State for Health and Social Care; Independent Investigation of the National Health Service in England

PDF , 6.58 MB , 163 pages

This file may not be suitable for users of assistive technology.

Independent Investigation of the National Health Service in England: Technical Annex

PDF , 15.1 MB , 331 pages

In July 2024, the Secretary of State for Health and Social Care commissioned Lord Darzi to conduct an immediate and independent investigation of the NHS.

Lord Darzi’s report provides an expert understanding of the current performance of the NHS across England and the challenges facing the healthcare system. Lord Darzi has considered the available data and intelligence to assess:

  • patient access to healthcare
  • the quality of healthcare being provided
  • the overall performance of the health system

In line with the terms of reference of the investigation, Lord Darzi has only considered the state of the NHS in England. UK-wide analysis is occasionally used when making international comparisons.

If you need the report in a more accessible format, contact [email protected] .



COMMENTS

  1. Research Findings

    Literature Review: This section summarizes previous research studies and findings that are relevant to the current study. Methodology : This section describes the research design, methods, and procedures used in the study, including details on the sample, data collection, and data analysis.

  2. How to Write a Literature Review

    When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to: ... In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

  3. Types of Literature Review

    1. Narrative Literature Review. A narrative literature review, also known as a traditional literature review, involves analyzing and summarizing existing literature without adhering to a structured methodology. It typically provides a descriptive overview of key concepts, theories, and relevant findings of the research topic.

  4. Literature review as a research methodology: An overview and guidelines

    An effective and well-conducted review as a research method creates a firm foundation for advancing knowledge and facilitating theory development (Webster & Watson, 2002). By integrating findings and perspectives from many empirical findings, a literature review can address research questions with a power that no single study has.

  5. Writing a Scientific Review Article: Comprehensive Insights for

    Writing a review article is equivalent to conducting a research study, with the information gathered by the author (reviewer) representing the data. Like all major studies, it involves conceptualisation, planning, implementation, and dissemination [], all of which may be detailed in a methodology section, if necessary.

  6. Writing a literature review

    Writing a literature review requires a range of skills to gather, sort, evaluate and summarise peer-reviewed published data into a relevant and informative unbiased narrative. Digital access to research papers, academic texts, review articles, reference databases and public data sets are all sources of information that are available to enrich ...

  7. Systematic Review

    A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer. Example: Systematic review. In 2008, Dr. Robert Boyle and his colleagues published a systematic review in ...

  8. 5. The Literature Review

    A literature review may consist of simply a summary of key sources, but in the social sciences, a literature review usually has an organizational pattern and combines both summary and synthesis, often within specific conceptual categories.A summary is a recap of the important information of the source, but a synthesis is a re-organization, or a reshuffling, of that information in a way that ...

  9. A Step-by-Step Guide to Writing a Scientific Review Article

    Abstract. Scientific review articles are comprehensive, focused reviews of the scientific literature written by subject matter experts. The task of writing a scientific review article can seem overwhelming; however, it can be managed by using an organized approach and devoting sufficient time to the process.

  10. Guidance on Conducting a Systematic Literature Review

    Literature reviews establish the foundation of academic inquires. However, in the planning field, we lack rigorous systematic reviews. In this article, through a systematic search on the methodology of literature review, we categorize a typology of literature reviews, discuss steps in conducting a systematic literature review, and provide suggestions on how to enhance rigor in literature ...

  11. How to Write a Results Section

    Checklist: Research results 0 / 7. I have completed my data collection and analyzed the results. I have included all results that are relevant to my research questions. I have concisely and objectively reported each result, including relevant descriptive statistics and inferential statistics. I have stated whether each hypothesis was supported ...

  12. Systematic reviews: Structure, form and content

    Topic selection and planning. In recent years, there has been an explosion in the number of systematic reviews conducted and published (Chalmers & Fox 2016, Fontelo & Liu 2018, Page et al 2015) - although a systematic review may be an inappropriate or unnecessary research methodology for answering many research questions.Systematic reviews can be inadvisable for a variety of reasons.

  13. Chapter 9 Methods for Literature Reviews

    9.3. Types of Review Articles and Brief Illustrations. EHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic.

  14. Organizing Your Social Sciences Research Paper

    Doing Case Study Research: A Practical Guide for Beginning Researchers. 2nd ed. New York: Teachers College Press, 2011; Introduction to Nursing Research: Reporting Research Findings. Nursing Research: Open Access Nursing Research and Review Articles. (January 4, 2012); Kretchmer, Paul. Twelve Steps to Writing an Effective Results Section.

  15. How to write a good scientific review article

    The central part of the review, which is usually divided into several subsections with appropriate topic-specific headings, should provide a detailed discussion of research findings relevant to the overall topic, with an adequate description of the methodologies, results and conclusions of individual research papers.

  16. A practical guide to data analysis in general literature reviews

    This article is a practical guide to conducting data analysis in general literature reviews. The general literature review is a synthesis and analysis of published research on a relevant clinical issue, and is a common format for academic theses at the bachelor's and master's levels in nursing, physiotherapy, occupational therapy, public health and other related fields.

  17. How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses

    Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question.

  18. Structuring a qualitative findings section

    Don't make the reader do the analytic work for you. Now, on to some specific ways to structure your findings section. One option is tables: they can be used to give an overview of what you're about to present in your findings, including the themes, some supporting evidence, and the meaning or explanation of each theme.

  19. How to Write the Results/Findings Section in Research

    Step 1: Consult the guidelines or instructions that the target journal or publisher provides to authors, and read research papers it has published, especially those with topics, methods, or results similar to your study. The guidelines will generally outline specific requirements for the results or findings section, and the published articles will show how those requirements are applied in practice.

  20. Dissertation Results & Findings Chapter (Qualitative ...

    The results chapter in a dissertation or thesis (or any formal academic research piece) is where you objectively and neutrally present the findings of your qualitative analysis (or analyses, if you used multiple qualitative analysis methods). This chapter can sometimes be combined with the discussion chapter, where you interpret the data and relate it to your research questions.

  21. Organizing Your Social Sciences Research Paper

    The discussion section is often considered the most important part of your research paper because it most effectively demonstrates your ability as a researcher to think critically about an issue, to develop creative solutions to problems based upon a logical synthesis of the findings, and to formulate a deeper, more profound understanding of the research problem under investigation.

  22. How to write a "results section" in biomedical scientific research

    To meet the objective of publishing and effectively communicating research findings, authors should write the results section of the article in a manner that is clear, succinct, objective, logically structured, understandable, and compelling for readers. This allows readers to judge the distinctive contributions of a particular paper to the field.

  23. Chapter 14: Completing 'Summary of findings' tables and grading the certainty of the evidence

    Figure 15.1.b provides an alternative format that may further facilitate users' understanding and interpretation of the review's findings. Evidence evaluating different formats suggests that the 'Summary of findings' table should include a risk difference as a measure of the absolute effect.

