Research Findings – Types, Examples and Writing Guide

Research Findings

Definition:

Research findings refer to the results obtained from a study or investigation conducted through a systematic and scientific approach. These findings are the outcomes of the data analysis, interpretation, and evaluation carried out during the research process.

Types of Research Findings

There are two main types of research findings:

Qualitative Findings

Qualitative research is an exploratory research method used to understand the complexities of human behavior and experiences. Qualitative findings are non-numerical and descriptive data that describe the meaning and interpretation of the data collected. Examples of qualitative findings include quotes from participants, themes that emerge from the data, and descriptions of experiences and phenomena.

Quantitative Findings

Quantitative research is a research method that uses numerical data and statistical analysis to measure and quantify a phenomenon or behavior. Quantitative findings include numerical data such as mean, median, and mode, as well as statistical analyses such as t-tests, ANOVA, and regression analysis. These findings are often presented in tables, graphs, or charts.
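As a rough illustration of the measures just listed, here is a minimal sketch (all scores invented for this example) that computes a mean, a median, and an independent-samples t statistic using only the Python standard library:

```python
import math
import statistics

# Hypothetical scores for two groups (invented purely for illustration)
group_a = [72, 85, 90, 78, 88, 95, 70, 82]  # treatment group
group_b = [65, 70, 75, 68, 72, 80, 62, 74]  # control group

# Descriptive statistics commonly reported as quantitative findings
mean_a = statistics.mean(group_a)
median_a = statistics.median(group_a)

# Independent-samples t-test with a pooled variance estimate
n1, n2 = len(group_a), len(group_b)
sp2 = ((n1 - 1) * statistics.variance(group_a)
       + (n2 - 1) * statistics.variance(group_b)) / (n1 + n2 - 2)
t_stat = (mean_a - statistics.mean(group_b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

print(f"mean = {mean_a:.1f}, median = {median_a:.1f}, t = {t_stat:.2f}")
```

In practice such tests are usually run with a statistics package rather than by hand; the sketch only shows what the reported numbers correspond to.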

Both qualitative and quantitative findings are important in research and can provide different insights into a research question or problem. Combining both types of findings can provide a more comprehensive understanding of a phenomenon and improve the validity and reliability of research results.

Parts of Research Findings

Research findings typically consist of several parts, including:

  • Introduction: This section provides an overview of the research topic and the purpose of the study.
  • Literature Review: This section summarizes previous research studies and findings that are relevant to the current study.
  • Methodology: This section describes the research design, methods, and procedures used in the study, including details on the sample, data collection, and data analysis.
  • Results: This section presents the findings of the study, including statistical analyses and data visualizations.
  • Discussion: This section interprets the results and explains what they mean in relation to the research question(s) and hypotheses. It may also compare and contrast the current findings with previous research studies and explore any implications or limitations of the study.
  • Conclusion: This section provides a summary of the key findings and the main conclusions of the study.
  • Recommendations: This section suggests areas for further research and potential applications or implications of the study’s findings.

How to Write Research Findings

Writing research findings requires careful planning and attention to detail. Here are some general steps to follow when writing research findings:

  • Organize your findings: Before you begin writing, it’s essential to organize your findings logically. Consider creating an outline or a flowchart that maps out the main points you want to make and how they relate to one another.
  • Use clear and concise language: When presenting your findings, be sure to use clear and concise language that is easy to understand. Avoid using jargon or technical terms unless they are necessary to convey your meaning.
  • Use visual aids: Visual aids such as tables, charts, and graphs can be helpful in presenting your findings. Be sure to label and title your visual aids clearly, and make sure they are easy to read.
  • Use headings and subheadings: Using headings and subheadings can help organize your findings and make them easier to read. Make sure your headings and subheadings are clear and descriptive.
  • Interpret your findings: When presenting your findings, it’s important to provide some interpretation of what the results mean. This can include discussing how your findings relate to the existing literature, identifying any limitations of your study, and suggesting areas for future research.
  • Be precise and accurate: When presenting your findings, be sure to use precise and accurate language. Avoid making generalizations or overstatements and be careful not to misrepresent your data.
  • Edit and revise: Once you have written your research findings, be sure to edit and revise them carefully. Check for grammar and spelling errors, make sure your formatting is consistent, and ensure that your writing is clear and concise.
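The advice on visual aids and clear labels applies even to a plain-text summary table. A minimal sketch (the numbers below are invented) of a small, clearly labelled table of group means:

```python
# Hypothetical group means, laid out as a small labelled text table
rows = [
    ("Exercise", 24.1, 18.3),
    ("Control",  24.4, 23.9),
]

header = f"{'Group':<10}{'Pre-test':>10}{'Post-test':>10}"
lines = [header] + [f"{name:<10}{pre:>10.1f}{post:>10.1f}"
                    for name, pre, post in rows]
table = "\n".join(lines)
print(table)
```

A real manuscript would use the journal's table format, but the principle is the same: every column labelled, consistent precision, one row per group.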

Research Findings Example

The following is a sample of research findings for students:

Title: The Effects of Exercise on Mental Health

Sample: 500 participants, both men and women, between the ages of 18 and 45.

Methodology: Participants were divided into two groups. The first group engaged in 30 minutes of moderate-intensity exercise five times a week for eight weeks. The second group did not exercise during the study period. Participants in both groups completed a questionnaire that assessed their mental health before and after the study period.

Findings: The group that engaged in regular exercise reported a significant improvement in mental health compared to the control group. Specifically, they reported lower levels of anxiety and depression, improved mood, and increased self-esteem.

Conclusion: Regular exercise can have a positive impact on mental health and may be an effective intervention for individuals experiencing symptoms of anxiety or depression.
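A minimal sketch of the kind of analysis behind findings like these might look as follows. The scores and group sizes are invented; the example study's actual instrument and sample are not specified here:

```python
import statistics

# Invented questionnaire scores (higher = better mental health)
exercise_pre  = [52, 48, 55, 50, 47, 53]
exercise_post = [61, 58, 63, 60, 55, 64]
control_pre   = [51, 49, 54, 50, 48, 52]
control_post  = [52, 48, 55, 51, 49, 53]

def mean_change(pre, post):
    """Average within-participant change from baseline."""
    return statistics.mean([b - a for a, b in zip(pre, post)])

exercise_gain = mean_change(exercise_pre, exercise_post)
control_gain = mean_change(control_pre, control_post)
print(f"Exercise group mean improvement: {exercise_gain:.1f}")
print(f"Control group mean improvement: {control_gain:.1f}")
```

Comparing the two mean changes (rather than raw post-test scores) is what lets the study attribute the improvement to exercise rather than to differences at baseline.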

Applications of Research Findings

Research findings can be applied in various fields to improve processes, products, services, and outcomes. Here are some examples:

  • Healthcare: Research findings in medicine and healthcare can be applied to improve patient outcomes, reduce morbidity and mortality rates, and develop new treatments for various diseases.
  • Education: Research findings in education can be used to develop effective teaching methods, improve learning outcomes, and design new educational programs.
  • Technology: Research findings in technology can be applied to develop new products, improve existing products, and enhance user experiences.
  • Business: Research findings in business can be applied to develop new strategies, improve operations, and increase profitability.
  • Public Policy: Research findings can be used to inform public policy decisions on issues such as environmental protection, social welfare, and economic development.
  • Social Sciences: Research findings in social sciences can be used to improve understanding of human behavior and social phenomena, inform public policy decisions, and develop interventions to address social issues.
  • Agriculture: Research findings in agriculture can be applied to improve crop yields, develop new farming techniques, and enhance food security.
  • Sports: Research findings in sports can be applied to improve athlete performance, reduce injuries, and develop new training programs.

When to use Research Findings

Research findings can be used in a variety of situations, depending on the context and the purpose. Here are some examples of when research findings may be useful:

  • Decision-making: Research findings can be used to inform decisions in various fields, such as business, education, healthcare, and public policy. For example, a business may use market research findings to make decisions about new product development or marketing strategies.
  • Problem-solving: Research findings can be used to solve problems or challenges in various fields, such as healthcare, engineering, and social sciences. For example, medical researchers may use findings from clinical trials to develop new treatments for diseases.
  • Policy development: Research findings can be used to inform the development of policies in various fields, such as environmental protection, social welfare, and economic development. For example, policymakers may use research findings to develop policies aimed at reducing greenhouse gas emissions.
  • Program evaluation: Research findings can be used to evaluate the effectiveness of programs or interventions in various fields, such as education, healthcare, and social services. For example, educational researchers may use findings from evaluations of educational programs to improve teaching and learning outcomes.
  • Innovation: Research findings can be used to inspire or guide innovation in various fields, such as technology and engineering. For example, engineers may use research findings on materials science to develop new and innovative products.

Purpose of Research Findings

The purpose of research findings is to contribute to the knowledge and understanding of a particular topic or issue. Research findings are the result of a systematic and rigorous investigation of a research question or hypothesis, using appropriate research methods and techniques.

The main purposes of research findings are:

  • To generate new knowledge: Research findings contribute to the body of knowledge on a particular topic, by adding new information, insights, and understanding to the existing knowledge base.
  • To test hypotheses or theories: Research findings can be used to test hypotheses or theories that have been proposed in a particular field or discipline. This helps to determine the validity and reliability of the hypotheses or theories, and to refine or develop new ones.
  • To inform practice: Research findings can be used to inform practice in various fields, such as healthcare, education, and business. By identifying best practices and evidence-based interventions, research findings can help practitioners to make informed decisions and improve outcomes.
  • To identify gaps in knowledge: Research findings can help to identify gaps in knowledge and understanding of a particular topic, which can then be addressed by further research.
  • To contribute to policy development: Research findings can be used to inform policy development in various fields, such as environmental protection, social welfare, and economic development. By providing evidence-based recommendations, research findings can help policymakers to develop effective policies that address societal challenges.

Characteristics of Research Findings

Research findings have several key characteristics that distinguish them from other types of information or knowledge. Here are some of the main characteristics of research findings:

  • Objective: Research findings are based on a systematic and rigorous investigation of a research question or hypothesis, using appropriate research methods and techniques. As such, they are generally considered to be more objective and reliable than other types of information.
  • Empirical: Research findings are based on empirical evidence, which means that they are derived from observations or measurements of the real world. This gives them a high degree of credibility and validity.
  • Generalizable: Research findings are often intended to be generalizable to a larger population or context beyond the specific study. This means that the findings can be applied to other situations or populations with similar characteristics.
  • Transparent: Research findings are typically reported in a transparent manner, with a clear description of the research methods and data analysis techniques used. This allows others to assess the credibility and reliability of the findings.
  • Peer-reviewed: Research findings are often subject to a rigorous peer-review process, in which experts in the field review the research methods, data analysis, and conclusions of the study. This helps to ensure the validity and reliability of the findings.
  • Reproducible: Research findings are often designed to be reproducible, meaning that other researchers can replicate the study using the same methods and obtain similar results. This helps to ensure the validity and reliability of the findings.

Advantages of Research Findings

Research findings have many advantages, which make them valuable sources of knowledge and information. Here are some of the main advantages of research findings:

  • Evidence-based: Research findings are based on empirical evidence, which means that they are grounded in data and observations from the real world. This makes them a reliable and credible source of information.
  • Inform decision-making: Research findings can be used to inform decision-making in various fields, such as healthcare, education, and business. By identifying best practices and evidence-based interventions, research findings can help practitioners and policymakers to make informed decisions and improve outcomes.
  • Identify gaps in knowledge: Research findings can help to identify gaps in knowledge and understanding of a particular topic, which can then be addressed by further research. This contributes to the ongoing development of knowledge in various fields.
  • Improve outcomes: Research findings can be used to develop and implement evidence-based practices and interventions, which have been shown to improve outcomes in various fields, such as healthcare, education, and social services.
  • Foster innovation: Research findings can inspire or guide innovation in various fields, such as technology and engineering. By providing new information and understanding of a particular topic, research findings can stimulate new ideas and approaches to problem-solving.
  • Enhance credibility: Research findings are generally considered to be more credible and reliable than other types of information, as they are based on rigorous research methods and are subject to peer-review processes.

Limitations of Research Findings

While research findings have many advantages, they also have some limitations. Here are some of the main limitations of research findings:

  • Limited scope: Research findings are typically based on a particular study or set of studies, which may have a limited scope or focus. This means that they may not be applicable to other contexts or populations.
  • Potential for bias: Research findings can be influenced by various sources of bias, such as researcher bias, selection bias, or measurement bias. This can affect the validity and reliability of the findings.
  • Ethical considerations: Research findings can raise ethical considerations, particularly in studies involving human subjects. Researchers must ensure that their studies are conducted in an ethical and responsible manner, with appropriate measures to protect the welfare and privacy of participants.
  • Time and resource constraints: Research studies can be time-consuming and require significant resources, which can limit the number and scope of studies that are conducted. This can lead to gaps in knowledge or a lack of research on certain topics.
  • Complexity: Some research findings can be complex and difficult to interpret, particularly in fields such as science or medicine. This can make it challenging for practitioners and policymakers to apply the findings to their work.
  • Lack of generalizability: While research findings are intended to be generalizable to larger populations or contexts, there may be factors that limit their generalizability. For example, cultural or environmental factors may influence how a particular intervention or treatment works in different populations or contexts.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


Doing Research in Counselling and Psychotherapy

Student resources: Disseminating the findings of your research study.

It is very important to find appropriate ways to disseminate the findings of your research – projects that sit on office or library shelves and are seldom or never read represent a tragic loss to the profession.

A key dimension of research dissemination is to be actively involved with potential audiences for your work, and help them to understand what it means to them. These dialogues also represent invaluable learning experiences for researchers, in terms of developing new ideas and appreciating the methodological limitations of their work. An inspiring example of how to do this can be found in:

Granek, L., & Nakash, O. (2016). The impact of qualitative research on the “real world”: Knowledge translation as education, policy, clinical training, and clinical practice. Journal of Humanistic Psychology, 56(4), 414–435.

A further key dimension of research dissemination lies in the act of writing. There are a number of challenges associated with writing counselling and psychotherapy research papers, such as the need to adhere to journal formats, and the need (sometimes) to weave personal reflective writing into a predominantly third-person standard academic style. The items in the following sections explore these challenges from a variety of perspectives.

Suggestions for becoming a more effective academic writer

Sources of advice on how to ease the pain of writing:

Gioia, D. (2019). Gioia’s rules of the game. Journal of Management Inquiry, 28(1), 113–115.

Greenhalgh, T. (2019). Twitter women’s tips on academic writing: A female response to Gioia’s rules of the game. Journal of Management Inquiry, 28(4), 484–487.

Roulston, K. (2019). Learning how to write successfully from academic writers. The Qualitative Report, 24(7), 1778–1781.

Writing tips from the student centre, University of Berkeley

The transition from being a therapist to being a researcher

Finlay, L. (2020). How to write a journal article: Top tips for the novice writer. European Journal for Qualitative Research in Psychotherapy, 10, 28–40.

McBeath, A., Bager-Charleson, S., & Abarbanel, A. (2019). Therapists and academic writing: “Once upon a time psychotherapy practitioners and researchers were the same people”. European Journal for Qualitative Research in Psychotherapy, 9, 103–116.

McPherson, A. (2020). Dissertation to published article: A journey from shame to sharing. European Journal for Qualitative Research in Psychotherapy, 10, 41–52.

Journal article style requirements of the American Psychological Association (including a section on writing quantitative papers)

Writing qualitative reports

Jonsen, K., Fendt, J., & Point, S. (2018). Convincing qualitative research: What constitutes persuasive writing? Organizational Research Methods, 21(1), 30–67.

Ponterotto, J. G., & Grieger, I. (2007). Effectively communicating qualitative research. The Counseling Psychologist, 35, 404–430.

Smith, L., Rosenzweig, L., & Schmidt, M. (2010). Best practices in the reporting of participatory action research: Embracing both the forest and the trees. The Counseling Psychologist, 38, 1115–1138.

Staller, K. M., & Krumer-Nevo, M. (2013). Successful qualitative articles: A tentative list of cautionary advice. Qualitative Social Work, 12, 247–253.

Clark, A. M., & Thompson, D. R. (2016). Five tips for writing qualitative research in high-impact journals: Moving from #BMJnoQual. International Journal of Qualitative Methods, 15, 1–3.

Gustafson, D. L., Parsons, J. E., & Gillingham, B. (2019). Writing to transgress: Knowledge production in feminist participatory action research. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, 20. DOI: 10.17169/fqs-20.2.3164

Caulley, D. N. (2008). Making qualitative reports less boring: The techniques of writing creative nonfiction. Qualitative Inquiry, 14, 424–449.


Research Methods – Quantitative, Qualitative, and More: Overview

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. 

As Patten and Newhart note in the book Understanding Research Methods , "Research methods are the building blocks of the scientific enterprise. They are the "how" for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge...Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more.  This guide is an introduction, but if you don't see what you need here, always contact your subject librarian, and/or take a look to see if there's a library research guide that will answer your question. 

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. An online guide to this one-stop collection is available, and some helpful links are below:

  • SAGE Research Methods
  • Little Green Books  (Quantitative Methods)
  • Little Blue Books  (Qualitative Methods)
  • Dictionaries and Encyclopedias  
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace (a community for researchers)
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! From this link, check out pages for each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab: Supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad: A simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF): Provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management: A UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data, and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA): a program for easy web-based analysis of survey data.

Consultants

  • D-Lab/Data Science Discovery Consultants: Request help with your research project from peer consultants.
  • Research data management (RDM) consulting: Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services: A service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB/CPHS: Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships): OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their “Getting Started in Research” workshops.
  • Sponsored Projects: Sponsored Projects works with researchers applying for major external grants.
  • Last Updated: Apr 25, 2024 11:09 AM
  • URL: https://guides.lib.berkeley.edu/researchmethods
How to Write the Results/Findings Section in Research


What is the research paper Results section and what does it do?

The Results section of a scientific research paper represents the core findings of a study derived from the methods applied to gather and analyze information. It presents these findings in a logical sequence without bias or interpretation from the author, setting up the reader for later interpretation and evaluation in the Discussion section. A major purpose of the Results section is to break down the data into sentences that show its significance to the research question(s).

The Results section appears third in the section sequence in most scientific papers. It follows the presentation of the Methods and Materials and is presented before the Discussion section, although the Results and Discussion are presented together in many journals. This section answers the basic question “What did you find in your research?”

What is included in the Results section?

The Results section should include the findings of your study and ONLY the findings of your study. The findings include:

  • Data presented in tables, charts, graphs, and other figures (may be placed into the text or on separate pages at the end of the manuscript)
  • A contextual analysis of this data explaining its meaning in sentence form
  • All data that corresponds to the central research question(s)
  • All secondary findings (secondary outcomes, subgroup analyses, etc.)

If the scope of the study is broad, if you studied a variety of variables, or if your methodology yields a wide range of results, present only those results that are most relevant to the research question stated in the Introduction section.

As a general rule, any information that does not present the direct findings or outcome of the study should be left out of this section. Unless the journal requests that authors combine the Results and Discussion sections, explanations and interpretations should be omitted from the Results.

How are the results organized?

The best way to organize your Results section is “logically.” One logical and clear method of organizing research results is to provide them alongside the research questions—within each research question, present the type of data that addresses that research question.

Let’s look at an example. Your research question is based on a survey among patients who were treated at a hospital and received postoperative care. Let’s say your first research question is:


“What do hospital patients over age 55 think about postoperative care?”

This can actually be represented as a heading within your Results section, though it might be presented as a statement rather than a question:

Attitudes towards postoperative care in patients over the age of 55

Now present the results that address this specific research question first. In this case, perhaps a table illustrating data from a survey. Likert items can be included in this example. Tables can also present standard deviations, probabilities, correlation matrices, etc.

Following this, present a content analysis, in words, of one end of the spectrum of the survey or data table. In our example case, start with the POSITIVE survey responses regarding postoperative care, using descriptive phrases. For example:

“Sixty-five percent of patients over 55 responded positively to the question ‘Are you satisfied with your hospital’s postoperative care?’” (Fig. 2)

Include other results such as subcategory analyses. The amount of textual description used will depend on how much interpretation of tables and figures is necessary and how many examples the reader needs in order to understand the significance of your research findings.
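Tallying such survey responses is straightforward. Here is a hedged sketch with invented Likert data that reproduces the 65% figure in the example quote:

```python
# Invented Likert responses from 60 hypothetical patients over 55
responses = (["strongly agree"] * 13 + ["agree"] * 26   # positive
             + ["neutral"] * 9
             + ["disagree"] * 8 + ["strongly disagree"] * 4)

positive = {"strongly agree", "agree"}
pct_positive = 100 * sum(r in positive for r in responses) / len(responses)
print(f"{pct_positive:.0f}% of patients responded positively")
```

Note that the write-up should state which response categories were counted as “positive”, since collapsing a five-point scale into positive/neutral/negative is itself an analytic decision.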

Next, present a content analysis of another part of the spectrum of the same research question, perhaps the NEGATIVE or NEUTRAL responses to the survey. For instance:

  “As Figure 1 shows, 15 out of 60 patients in Group A responded negatively to Question 2.”

After you have assessed the data in one figure and explained it sufficiently, move on to your next research question. For example:

  “How does patient satisfaction correspond to in-hospital improvements made to postoperative care?”


This kind of data may be presented through a figure or set of figures (for instance, a paired t-test table).

Explain the data you present, here in a table, with a concise content analysis:

“The p-value for the comparison between the before and after groups of patients was .03 (Fig. 2), indicating that the greater the dissatisfaction among patients, the more frequent the improvements that were made to postoperative care.”
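A before/after comparison like the one quoted is a paired design. As a sketch, the paired t statistic can be computed by hand from invented satisfaction scores:

```python
import math
import statistics

# Invented satisfaction scores for the same ward before and after improvements
before = [3.1, 2.8, 3.5, 2.9, 3.0, 3.3, 2.7, 3.2]
after  = [3.8, 3.4, 3.9, 3.5, 3.6, 4.0, 3.1, 3.7]

diffs = [b - a for b, a in zip(after, before)]  # positive = improvement
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)              # sample standard deviation
t_stat = mean_d / (sd_d / math.sqrt(n))     # paired t statistic, df = n - 1
print(f"mean difference = {mean_d:.2f}, t({n - 1}) = {t_stat:.2f}")
```

The p-value reported in the Results section would then come from the t distribution with n − 1 degrees of freedom; statistics packages report it directly alongside the t statistic.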

Let’s examine another example of a Results section from a study on plant tolerance to heavy metal stress . In the Introduction section, the aims of the study are presented as “determining the physiological and morphological responses of Allium cepa L. towards increased cadmium toxicity” and “evaluating its potential to accumulate the metal and its associated environmental consequences.” The Results section presents data showing how these aims are achieved in tables alongside a content analysis, beginning with an overview of the findings:

“Cadmium caused inhibition of root and leaf elongation, with increasing effects at higher exposure doses (Fig. 1a-c).”

The figure containing this data is cited in parentheses. Note that this author has combined three graphs into one single figure. Separating the data into separate graphs focusing on specific aspects makes it easier for the reader to assess the findings, and consolidating this information into one figure saves space and makes it easy to locate the most relevant results.


Following this overall summary, the relevant data in the tables is broken down into greater detail in text form in the Results section.

  • “Results on the bio-accumulation of cadmium were found to be the highest (17.5 mg kg⁻¹) in the bulb, when the concentration of cadmium in the solution was 1×10⁻² M, and lowest (0.11 mg kg⁻¹) in the leaves when the concentration was 1×10⁻³ M.”

Captioning and Referencing Tables and Figures

Tables and figures are central components of your Results section, and you need to think carefully about the most effective way to use graphs and tables to present your findings. Therefore, it is crucial to know how to write strong figure captions and to refer to them within the text of the Results section.

The most important advice here, as throughout the paper, is to check the requirements and standards of the journal to which you are submitting your work. Every journal has its own design and layout standards, which you can find in the author instructions on the target journal’s website. Perusing a journal’s published articles will also give you an idea of the proper number, size, and complexity of your figures.

Regardless of which format you use, the figures should be placed in the order they are referenced in the Results section and be as clear and easy to understand as possible. If there are multiple variables being considered (within one or more research questions), it can be a good idea to split these up into separate figures. Subsequently, these can be referenced and analyzed under separate headings and paragraphs in the text.

To create a caption, consider the research question being asked and change it into a phrase. For instance, if one question is “Which color did participants choose?”, the caption might be “Color choice by participant group.” Or in our last research paper example, where the question was “What is the concentration of cadmium in different parts of the onion after 14 days?” the caption reads:

 “Fig. 1(a-c): Mean concentration of Cd determined in (a) bulbs, (b) leaves, and (c) roots of onions after a 14-day period.”

Steps for Composing the Results Section

Because each study is unique, there is no one-size-fits-all approach when it comes to designing a strategy for structuring and writing the section of a research paper where findings are presented. The content and layout of this section will be determined by the specific area of research, the design of the study and its particular methodologies, and the guidelines of the target journal and its editors. However, the following steps can be used to compose the results of most scientific research studies and are essential for researchers who are new to preparing a manuscript for publication or who need a reminder of how to construct the Results section.

Step 1 : Consult the guidelines or instructions that the target journal or publisher provides authors and read research papers it has published, especially those with similar topics, methods, or results to your study.

  • The guidelines will generally outline specific requirements for the results or findings section, and the published articles will provide sound examples of successful approaches.
  • Note length limitations and restrictions on content. For instance, while many journals require the Results and Discussion sections to be separate, others do not—qualitative research papers often include results and interpretations in the same section (“Results and Discussion”).
  • Reading the aims and scope in the journal’s “guide for authors” section and understanding the interests of its readers will be invaluable in preparing to write the Results section.

Step 2 : Consider your research results in relation to the journal’s requirements and catalogue your results.

  • Focus on experimental results and other findings that are especially relevant to your research questions and objectives and include them even if they are unexpected or do not support your ideas and hypotheses.
  • Catalogue your findings—use subheadings to streamline and clarify your report. This will help you avoid excessive and peripheral details as you write and also help your reader understand and remember your findings. Create appendices that might interest specialists but prove too long or distracting for other readers.
  • Decide how you will structure your results. You might match the order of the research questions and hypotheses to your results, or you could arrange them according to the order presented in the Methods section. A chronological order or even a hierarchy of importance or meaningful grouping of main themes or categories might prove effective. Consider your audience, evidence, and most importantly, the objectives of your research when choosing a structure for presenting your findings.

Step 3 : Design figures and tables to present and illustrate your data.

  • Tables and figures should be numbered according to the order in which they are mentioned in the main text of the paper.
  • Information in figures should be relatively self-explanatory (with the aid of captions), and their design should include all definitions and other information necessary for readers to understand the findings without reading all of the text.
  • Use tables and figures as a focal point to tell a clear and informative story about your research and avoid repeating information. But remember that while figures clarify and enhance the text, they cannot replace it.

Step 4 : Draft your Results section using the findings and figures you have organized.

  • The goal is to communicate this complex information as clearly and precisely as possible; precise and compact phrases and sentences are most effective.
  • In the opening paragraph of this section, restate your research questions or aims to focus the reader’s attention on what the results are trying to show. It is also a good idea to summarize key findings at the end of this section to create a logical transition to the interpretation and discussion that follows.
  • Try to write in the past tense and the active voice to relay the findings since the research has already been done and the agent is usually clear. This will ensure that your explanations are also clear and logical.
  • Make sure that any specialized terminology or abbreviation you have used here has been defined and clarified in the Introduction section.

Step 5 : Review your draft; edit and revise until it reports results exactly as you would like to have them reported to your readers.

  • Double-check the accuracy and consistency of all the data, as well as all of the visual elements included.
  • Read your draft aloud to catch language errors (grammar, spelling, and mechanics), awkward phrases, and missing transitions.
  • Ensure that your results are presented in the best order to focus on objectives and prepare readers for interpretations, evaluations, and recommendations in the Discussion section. Look back over the paper’s Introduction and background while anticipating the Discussion and Conclusion sections to ensure that the presentation of your results is consistent and effective.
  • Consider seeking additional guidance on your paper. Find additional readers to look over your Results section and see if it can be improved in any way. Peers, professors, or qualified experts can provide valuable insights.

One excellent option is to use a professional English proofreading and editing service such as Wordvice, including our paper editing service. With hundreds of qualified editors from dozens of scientific fields, Wordvice has helped thousands of authors revise their manuscripts and get accepted into their target journals. Read more about the proofreading and editing process before proceeding with getting academic editing services and manuscript editing services for your manuscript.

As the representation of your study’s data output, the Results section presents the core information in your research paper. By writing with clarity and conciseness and by highlighting and explaining the crucial findings of their study, authors increase the impact and effectiveness of their research manuscripts.

For more articles and videos on writing your research manuscript, visit Wordvice’s Resources page.

Wordvice Resources

  • How to Write a Research Paper Introduction 
  • Which Verb Tenses to Use in a Research Paper
  • How to Write an Abstract for a Research Paper
  • How to Write a Research Paper Title
  • Useful Phrases for Academic Writing
  • Common Transition Terms in Academic Papers
  • Active and Passive Voice in Research Papers
  • 100+ Verbs That Will Make Your Research Writing Amazing
  • Tips for Paraphrasing in Research Papers

Finding Types of Research

  • Evidence-Based Research

On This Guide

  • About This Guide
  • Understand Evidence-Based Practice
  • Identify Research Study Types

  • Quantitative Studies
  • Qualitative Studies
  • Meta-Analysis
  • Systematic Reviews
  • Randomized Controlled Trials
  • Observational Studies
  • Literature Reviews
  • Finding Research Tools

Throughout your schooling, you may need to find different types of evidence and research to support your course work. This guide provides a high-level overview of evidence-based practice as well as the different types of research and study designs. Each page of this guide offers an overview and search tips for finding articles that fit that study design.

Note! If you need help finding a specific type of study, visit the Get Research Help guide to contact the librarians.

What is Evidence-Based Practice?

One of the requirements for your coursework is to find articles that support evidence-based practice. But what exactly is evidence-based practice? Evidence-based practice is a method that uses relevant and current evidence to plan, implement and evaluate patient care. This definition is included in the video below, which explains all the steps of evidence-based practice in greater detail.

  • Video - Evidence-based practice: What it is and what it is not. Medcom (Producer), & Cobb, D. (Director). (2017). Evidence-based practice: What it is and what it is not [Streaming Video]. United States of America: Producer. Retrieved from Alexander Street Press Nursing Education Collection

Quantitative and Qualitative Studies

Research is broken down into two different types: quantitative and qualitative. Quantitative studies are all about measurement. They will report statistics of things that can be physically measured like blood pressure, weight and oxygen saturation. Qualitative studies, on the other hand, are about people's experiences and how they feel about something. This type of information cannot be measured using statistics. Both of these types of studies report original research and are considered single studies. Watch the video below for more information.

Watch the Identifying Quantitative and Qualitative video

Study Designs

Some research study types that you will encounter include:

  • Case-Control Studies
  • Cohort Studies
  • Cross-Sectional Studies

Studies that Synthesize Other Studies

Sometimes, a research study will examine the results of many studies, looking for trends and drawing conclusions. These types of studies include:

  • Meta-Analyses

Tip! How do you determine the research article's study type or level of evidence? First, look at the article abstract. Most of the time the abstract will have a methodology section, which should tell you what type of study design the researchers are using. If it is not in the abstract, look for the methodology section of the article. It should tell you all about what type of study the researcher is doing and the steps they used to carry out the study.

Read the book below to learn how to read a clinical paper, including the types of study designs you will encounter.

Understanding Clinical Papers (book cover)


Chamberlain College of Nursing is owned and operated by Chamberlain University LLC. In certain states, Chamberlain operates as Chamberlain College of Nursing pending state authorization for Chamberlain University.

What Is Research, and Why Do People Do It?

  • Open Access
  • First Online: 03 December 2022




  • James Hiebert,
  • Jinfa Cai,
  • Stephen Hwang,
  • Anne K. Morris &
  • Charles Hohensee

Part of the book series: Research in Mathematics Education (RME)


Abstract

Every day people do research as they gather information to learn about something of interest. In the scientific world, however, research means something different than simply gathering information. Scientific research is characterized by its careful planning and observing, by its relentless efforts to understand and explain, and by its commitment to learn from everyone else seriously engaged in research. We call this kind of research scientific inquiry and define it as “formulating, testing, and revising hypotheses.” By “hypotheses” we do not mean the hypotheses you encounter in statistics courses. We mean predictions about what you expect to find and rationales for why you made these predictions. Throughout this and the remaining chapters we make clear that the process of scientific inquiry applies to all kinds of research studies and data, both qualitative and quantitative.


Part I. What Is Research?

Have you ever studied something carefully because you wanted to know more about it? Maybe you wanted to know more about your grandmother’s life when she was younger so you asked her to tell you stories from her childhood, or maybe you wanted to know more about a fertilizer you were about to use in your garden so you read the ingredients on the package and looked them up online. According to the dictionary definition, you were doing research.

Recall your high school assignments asking you to “research” a topic. The assignment likely included consulting a variety of sources that discussed the topic, perhaps including some “original” sources. Often, the teacher referred to your product as a “research paper.”

Were you conducting research when you interviewed your grandmother or wrote high school papers reviewing a particular topic? Our view is that you were engaged in part of the research process, but only a small part. In this book, we reserve the word “research” for what it means in the scientific world, that is, for scientific research or, more pointedly, for scientific inquiry .

Exercise 1.1

Before you read any further, write a definition of what you think scientific inquiry is. Keep it short: two to three sentences. You will periodically update this definition as you read this chapter and the remainder of the book.

This book is about scientific inquiry—what it is and how to do it. For starters, scientific inquiry is a process, a particular way of finding out about something that involves a number of phases. Each phase of the process constitutes one aspect of scientific inquiry. You are doing scientific inquiry as you engage in each phase, but you have not done scientific inquiry until you complete the full process. Each phase is necessary but not sufficient.

In this chapter, we set the stage by defining scientific inquiry—describing what it is and what it is not—and by discussing what it is good for and why people do it. The remaining chapters build directly on the ideas presented in this chapter.

A first thing to know is that scientific inquiry is not all or nothing. “Scientificness” is a continuum. Inquiries can be more scientific or less scientific. What makes an inquiry more scientific? You might be surprised there is no universally agreed upon answer to this question. None of the descriptors we know of are sufficient by themselves to define scientific inquiry. But all of them give you a way of thinking about some aspects of the process of scientific inquiry. Each one gives you different insights.


Exercise 1.2

As you read about each descriptor below, think about what would make an inquiry more or less scientific. If you think a descriptor is important, use it to revise your definition of scientific inquiry.

Creating an Image of Scientific Inquiry

We will present three descriptors of scientific inquiry. Each provides a different perspective and emphasizes a different aspect of scientific inquiry. We will draw on all three descriptors to compose our definition of scientific inquiry.

Descriptor 1. Experience Carefully Planned in Advance

Sir Ronald Fisher, often called the father of modern statistical design, once referred to research as “experience carefully planned in advance” (1935, p. 8). He said that humans are always learning from experience, from interacting with the world around them. Usually, this learning is haphazard rather than the result of a deliberate process carried out over an extended period of time. Research, Fisher said, was learning from experience, but experience carefully planned in advance.

This phrase can be fully appreciated by looking at each word. The fact that scientific inquiry is based on experience means that it is based on interacting with the world. These interactions could be thought of as the stuff of scientific inquiry. In addition, it is not just any experience that counts. The experience must be carefully planned . The interactions with the world must be conducted with an explicit, describable purpose, and steps must be taken to make the intended learning as likely as possible. This planning is an integral part of scientific inquiry; it is not just a preparation phase. It is one of the things that distinguishes scientific inquiry from many everyday learning experiences. Finally, these steps must be taken beforehand and the purpose of the inquiry must be articulated in advance of the experience. Clearly, scientific inquiry does not happen by accident, by just stumbling into something. Stumbling into something unexpected and interesting can happen while engaged in scientific inquiry, but learning does not depend on it and serendipity does not make the inquiry scientific.

Descriptor 2. Observing Something and Trying to Explain Why It Is the Way It Is

When we were writing this chapter and googled “scientific inquiry,” the first entry was: “Scientific inquiry refers to the diverse ways in which scientists study the natural world and propose explanations based on the evidence derived from their work.” The emphasis is on studying, or observing, and then explaining . This descriptor takes the image of scientific inquiry beyond carefully planned experience and includes explaining what was experienced.

According to the Merriam-Webster dictionary, “explain” means “(a) to make known, (b) to make plain or understandable, (c) to give the reason or cause of, and (d) to show the logical development or relations of” (Merriam-Webster, n.d.). We will use all these definitions. Taken together, they suggest that to explain an observation means to understand it by finding reasons (or causes) for why it is as it is. In this sense of scientific inquiry, the following are synonyms: explaining why, understanding why, and reasoning about causes and effects. Our image of scientific inquiry now includes planning, observing, and explaining why.


We need to add a final note about this descriptor. We have phrased it in a way that suggests “observing something” means you are observing something in real time—observing the way things are or the way things are changing. This is often true. But, observing could mean observing data that already have been collected, maybe by someone else making the original observations (e.g., secondary analysis of NAEP data or analysis of existing video recordings of classroom instruction). We will address secondary analyses more fully in Chap. 4. For now, what is important is that the process requires explaining why the data look like they do.

We must note that for us, the term “data” is not limited to numerical or quantitative data such as test scores. Data can also take many nonquantitative forms, including written survey responses, interview transcripts, journal entries, video recordings of students, teachers, and classrooms, text messages, and so forth.


Exercise 1.3

What are the implications of the statement that just “observing” is not enough to count as scientific inquiry? Does this mean that a detailed description of a phenomenon is not scientific inquiry?

Find sources that define research in education that differ from our position, that say description alone, without explanation, counts as scientific research. Identify the precise points where the opinions differ. What are the best arguments for each of the positions? Which do you prefer? Why?

Descriptor 3. Updating Everyone’s Thinking in Response to More and Better Information

This descriptor focuses on a third aspect of scientific inquiry: updating and advancing the field’s understanding of phenomena that are investigated. This descriptor foregrounds a powerful characteristic of scientific inquiry: the reliability (or trustworthiness) of what is learned and the ultimate inevitability of this learning to advance human understanding of phenomena. Humans might choose not to learn from scientific inquiry, but history suggests that scientific inquiry always has the potential to advance understanding and that, eventually, humans take advantage of these new understandings.

Before exploring these bold claims a bit further, note that this descriptor uses “information” in the same way the previous two descriptors used “experience” and “observations.” These are the stuff of scientific inquiry and we will use them often, sometimes interchangeably. Frequently, we will use the term “data” to stand for all these terms.

An overriding goal of scientific inquiry is for everyone to learn from what one scientist does. Much of this book is about the methods you need to use so others have faith in what you report and can learn the same things you learned. This aspect of scientific inquiry has many implications.

One implication is that scientific inquiry is not a private practice. It is a public practice available for others to see and learn from. Notice how different this is from everyday learning. When you happen to learn something from your everyday experience, often only you gain from the experience. The fact that research is a public practice means it is also a social one. It is best conducted by interacting with others along the way: soliciting feedback at each phase, taking opportunities to present work-in-progress, and benefitting from the advice of others.

A second implication is that you, as the researcher, must be committed to sharing what you are doing and what you are learning in an open and transparent way. This allows all phases of your work to be scrutinized and critiqued. This is what gives your work credibility. The reliability or trustworthiness of your findings depends on your colleagues recognizing that you have used all appropriate methods to maximize the chances that your claims are justified by the data.

A third implication of viewing scientific inquiry as a collective enterprise is the reverse of the second—you must be committed to receiving comments from others. You must treat your colleagues as fair and honest critics even though it might sometimes feel otherwise. You must appreciate their job, which is to remain skeptical while scrutinizing what you have done in considerable detail. To provide the best help to you, they must remain skeptical about your conclusions (when, for example, the data are difficult for them to interpret) until you offer a convincing logical argument based on the information you share. A rather harsh but good-to-remember statement of the role of your friendly critics was voiced by Karl Popper, a well-known twentieth century philosopher of science: “. . . if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can” (Popper, 1968, p. 27).

A final implication of this third descriptor is that, as someone engaged in scientific inquiry, you have no choice but to update your thinking when the data support a different conclusion. This applies to your own data as well as to those of others. When data clearly point to a specific claim, even one that is quite different than you expected, you must reconsider your position. If the outcome is replicated multiple times, you need to adjust your thinking accordingly. Scientific inquiry does not let you pick and choose which data to believe; it mandates that everyone update their thinking when the data warrant an update.

Doing Scientific Inquiry

We define scientific inquiry in an operational sense—what does it mean to do scientific inquiry? What kind of process would satisfy all three descriptors: carefully planning an experience in advance; observing and trying to explain what you see; and, contributing to updating everyone’s thinking about an important phenomenon?

We define scientific inquiry as formulating , testing , and revising hypotheses about phenomena of interest.

Of course, we are not the only ones who define it in this way. The definition for the scientific method posted by the editors of Britannica is: “a researcher develops a hypothesis, tests it through various means, and then modifies the hypothesis on the basis of the outcome of the tests and experiments” (Britannica, n.d.).


Notice how defining scientific inquiry this way satisfies each of the descriptors. “Carefully planning an experience in advance” is exactly what happens when formulating a hypothesis about a phenomenon of interest and thinking about how to test it. “Observing a phenomenon” occurs when testing a hypothesis, and “explaining” what is found is required when revising a hypothesis based on the data. Finally, “updating everyone’s thinking” comes from comparing publicly the original with the revised hypothesis.

Doing scientific inquiry, as we have defined it, underscores the value of accumulating knowledge rather than generating random bits of knowledge. Formulating, testing, and revising hypotheses is an ongoing process, with each revised hypothesis begging for another test, whether by the same researcher or by new researchers. The editors of Britannica signaled this cyclic process by adding the following phrase to their definition of the scientific method: “The modified hypothesis is then retested, further modified, and tested again.” Scientific inquiry creates a process that encourages each study to build on the studies that have gone before. Through collective engagement in this process of building study on top of study, the scientific community works together to update its thinking.

Before exploring more fully the meaning of “formulating, testing, and revising hypotheses,” we need to acknowledge that this is not the only way researchers define research. Some researchers prefer a less formal definition, one that includes more serendipity, less planning, less explanation. You might have come across more open definitions such as “research is finding out about something.” We prefer the tighter hypothesis formulation, testing, and revision definition because we believe it provides a single, coherent map for conducting research that addresses many of the thorny problems educational researchers encounter. We believe it is the most useful orientation toward research and the most helpful to learn as a beginning researcher.

A final clarification of our definition is that it applies equally to qualitative and quantitative research. This is a familiar distinction in education that has generated much discussion. You might think our definition favors quantitative methods over qualitative methods because the language of hypothesis formulation and testing is often associated with quantitative methods. In fact, we do not favor one method over another. In Chap. 4, we will illustrate how our definition fits research using a range of quantitative and qualitative methods.

Exercise 1.4

Look for ways to extend what the field knows in an area that has already received attention by other researchers. Specifically, you can search for a program of research carried out by more experienced researchers that has some revised hypotheses that remain untested. Identify a revised hypothesis that you might like to test.

Unpacking the Terms Formulating, Testing, and Revising Hypotheses

To get a full sense of the definition of scientific inquiry we will use throughout this book, it is helpful to spend a little time with each of the key terms.

We first want to make clear that we use the term “hypothesis” as it is defined in most dictionaries and as it is used in many scientific fields rather than as it is usually defined in educational statistics courses. By “hypothesis,” we do not mean a null hypothesis that is accepted or rejected by statistical analysis. Rather, we use “hypothesis” in the sense conveyed by the following definitions: “An idea or explanation for something that is based on known facts but has not yet been proved” (Cambridge University Press, n.d.), and “An unproved theory, proposition, or supposition, tentatively accepted to explain certain facts and to provide a basis for further investigation or argument” (Agnes & Guralnik, 2008).

We distinguish two parts to “hypotheses.” Hypotheses consist of predictions and rationales . Predictions are statements about what you expect to find when you inquire about something. Rationales are explanations for why you made the predictions you did, why you believe your predictions are correct. So, for us “formulating hypotheses” means making explicit predictions and developing rationales for the predictions.

“Testing hypotheses” means making observations that allow you to assess in what ways your predictions were correct and in what ways they were incorrect. In education research, it is rarely useful to think of your predictions as either right or wrong. Because of the complexity of most issues you will investigate, most predictions will be right in some ways and wrong in others.

By studying the observations you make (data you collect) to test your hypotheses, you can revise your hypotheses to better align with the observations. This means revising your predictions plus revising your rationales to justify your adjusted predictions. Even though you might not run another test, formulating revised hypotheses is an essential part of conducting a research study. Comparing your original and revised hypotheses informs everyone of what you learned by conducting your study. In addition, a revised hypothesis sets the stage for you or someone else to extend your study and accumulate more knowledge of the phenomenon.

We should note that not everyone makes a clear distinction between predictions and rationales as two aspects of hypotheses. In fact, common, non-scientific uses of the word “hypothesis” may limit it to only a prediction or only an explanation (or rationale). We choose to explicitly include both prediction and rationale in our definition of hypothesis, not because we assert this should be the universal definition, but because we want to foreground the importance of both parts acting in concert. Using “hypothesis” to represent both prediction and rationale could hide the two aspects, but we make them explicit because they provide different kinds of information. It is usually easier to make predictions than develop rationales because predictions can be guesses, hunches, or gut feelings about which you have little confidence. Developing a compelling rationale requires careful thought plus reading what other researchers have found plus talking with your colleagues. Often, while you are developing your rationale you will find good reasons to change your predictions. Developing good rationales is the engine that drives scientific inquiry. Rationales are essentially descriptions of how much you know about the phenomenon you are studying. Throughout this guide, we will elaborate on how developing good rationales drives scientific inquiry. For now, we simply note that it can sharpen your predictions and help you to interpret your data as you test your hypotheses.

[Figure: the rationale and the prediction as the two aspects of a hypothesis in scientific inquiry, and the different kinds of information each provides.]

Hypotheses in education research take a variety of forms or types. This is because there are a variety of phenomena that can be investigated. Investigating educational phenomena is sometimes best done using qualitative methods, sometimes using quantitative methods, and most often using mixed methods (e.g., Hay, 2016; Weis et al., 2019a; Weisner, 2005). This means that, given our definition, hypotheses are equally applicable to qualitative and quantitative investigations.

Hypotheses take different forms when they are used to investigate different kinds of phenomena. Two very different activities in education research are conducting experiments and conducting descriptive studies. In an experiment, a hypothesis makes a prediction about anticipated changes, say the changes that occur when a treatment or intervention is applied. You might investigate, for example, how students’ thinking changes during a particular kind of instruction.

A second type of hypothesis, relevant for descriptive research, makes a prediction about what you will find when you investigate and describe the nature of a situation. The goal is to understand a situation as it exists rather than to understand a change from one situation to another. In this case, your prediction is what you expect to observe. Your rationale is the set of reasons for making this prediction; it is your current explanation for why the situation will look like it does.

You will probably read, if you have not already, that some researchers say you do not need a prediction to conduct a descriptive study. We will discuss this point of view in Chap. 2. For now, we simply claim that scientific inquiry, as we have defined it, applies to all kinds of research studies. Descriptive studies, like others, not only benefit from formulating, testing, and revising hypotheses; they require it.

One reason we define research as formulating, testing, and revising hypotheses is that if you think of research in this way you are less likely to go wrong. It is a useful guide for the entire process, as we will describe in detail in the chapters ahead. For example, as you build the rationale for your predictions, you are constructing the theoretical framework for your study (Chap. 3). As you work out the methods you will use to test your hypothesis, every decision you make will be based on asking, “Will this help me formulate or test or revise my hypothesis?” (Chap. 4). As you interpret the results of testing your predictions, you will compare them to what you predicted and examine the differences, focusing on how you must revise your hypotheses (Chap. 5). By anchoring the process to formulating, testing, and revising hypotheses, you will make smart decisions that yield a coherent and well-designed study.

Exercise 1.5

Compare the concept of formulating, testing, and revising hypotheses with the descriptions of scientific inquiry contained in Scientific Research in Education (NRC, 2002). How are they similar or different?

Exercise 1.6

Provide an example to illustrate and emphasize the differences between everyday learning/thinking and scientific inquiry.

Learning from Doing Scientific Inquiry

We noted earlier that a measure of what you have learned by conducting a research study is found in the differences between your original hypothesis and your revised hypothesis based on the data you collected to test it. We will elaborate on this statement in later chapters, but we preview our argument here.

Even before collecting data, scientific inquiry requires cycles of making a prediction, developing a rationale, refining your predictions, reading and studying more to strengthen your rationale, refining your predictions again, and so forth. And, even if you have run through several such cycles, you still will likely find that when you test your prediction you will be partly right and partly wrong. The results will support some parts of your predictions but not others, or the results will “kind of” support your predictions. A critical part of scientific inquiry is making sense of your results by interpreting them against your predictions. Carefully describing what aspects of your data supported your predictions, what aspects did not, and what data fell outside of any predictions is not an easy task, but you cannot learn from your study without doing this analysis.

[Figure: the cycle of making a prediction, developing a rationale, and refining both, repeated multiple times before data are collected.]

Analyzing the matches and mismatches between your predictions and your data allows you to formulate different rationales that would have accounted for more of the data. The best revised rationale is the one that accounts for the most data. Once you have revised your rationales, you can think about the predictions they best justify or explain. It is by comparing your original rationales to your new rationales that you can sort out what you learned from your study.

Suppose your study was an experiment. Maybe you were investigating the effects of a new instructional intervention on students’ learning. Your original rationale was your explanation for why the intervention would change the learning outcomes in a particular way. Your revised rationale explained why the changes that you observed occurred like they did and why your revised predictions are better. Maybe your original rationale focused on the potential of the activities if they were implemented in ideal ways and your revised rationale included the factors that are likely to affect how teachers implement them. By comparing the before and after rationales, you are describing what you learned—what you can explain now that you could not before. Another way of saying this is that you are describing how much more you understand now than before you conducted your study.

Revised predictions based on carefully planned and collected data usually exhibit some of the following features compared with the originals: more precision, more completeness, and broader scope. Revised rationales have more explanatory power and become more complete, more aligned with the new predictions, sharper, and overall more convincing.

Part II. Why Do Educators Do Research?

Doing scientific inquiry is a lot of work. Each phase of the process takes time, and you will often cycle back to improve earlier phases as you engage in later phases. Because of the significant effort required, you should make sure your study is worth it. So, from the beginning, you should think about the purpose of your study. Why do you want to do it? And, because research is a social practice, you should also think about whether the results of your study are likely to be important and significant to the education community.

If you are doing research in the way we have described—as scientific inquiry—then one purpose of your study is to understand, not just to describe or evaluate or report. As we noted earlier, when you formulate hypotheses, you are developing rationales that explain why things might be like they are. In our view, trying to understand and explain is what separates research from other kinds of activities, like evaluating or describing.

One reason understanding is so important is that it allows researchers to see how or why something works like it does. When you see how something works, you are better able to predict how it might work in other contexts, under other conditions. And, because conditions, or contextual factors, matter a lot in education, gaining insights into applying your findings to other contexts increases the contributions of your work and its importance to the broader education community.

Consequently, the purposes of research studies in education often include the more specific aim of identifying and understanding the conditions under which the phenomena being studied work like the observations suggest. A classic example of this kind of study in mathematics education was reported by William Brownell and Harold Moser in 1949. They were trying to establish which method of subtracting whole numbers could be taught most effectively—the regrouping method or the equal additions method. However, they realized that effectiveness might depend on the conditions under which the methods were taught—“meaningfully” versus “mechanically.” So, they designed a study that crossed the two instructional approaches with the two different methods (regrouping and equal additions). Among other results, they found that these conditions did matter. The regrouping method was more effective under the meaningful condition than the mechanical condition, but the same was not true for the equal additions algorithm.

What do education researchers want to understand? In our view, the ultimate goal of education is to offer all students the best possible learning opportunities. So, we believe the ultimate purpose of scientific inquiry in education is to develop understanding that supports the improvement of learning opportunities for all students. We say “ultimate” because there are lots of issues that must be understood to improve learning opportunities for all students. Hypotheses about many aspects of education are connected, ultimately, to students’ learning. For example, formulating and testing a hypothesis that preservice teachers need to engage in particular kinds of activities in their coursework in order to teach particular topics well is, ultimately, connected to improving students’ learning opportunities. So is hypothesizing that school districts often devote relatively few resources to instructional leadership training or hypothesizing that positioning mathematics as a tool students can use to combat social injustice can help students see the relevance of mathematics to their lives.

We do not exclude the importance of research on educational issues more removed from improving students’ learning opportunities, but we do think the argument for their importance will be more difficult to make. If there is no way to imagine a connection between your hypothesis and improving learning opportunities for students, even a distant connection, we recommend you reconsider whether it is an important hypothesis within the education community.

Notice that we said the ultimate goal of education is to offer all students the best possible learning opportunities. For too long, educators have been satisfied with a goal of offering rich learning opportunities for lots of students, sometimes even for just the majority of students, but not necessarily for all students. Evaluations of success often are based on outcomes that show high averages. In other words, if many students have learned something, or even a smaller number have learned a lot, educators may have been satisfied. The problem is that there is usually a pattern in the groups of students who receive lower quality opportunities—students of color and students who live in poor areas, urban and rural. This is not acceptable. Consequently, we emphasize the premise that the purpose of education research is to offer rich learning opportunities to all students.

One way to make sure you will be able to convince others of the importance of your study is to consider investigating some aspect of teachers’ shared instructional problems. Historically, researchers in education have set their own research agendas, regardless of the problems teachers are facing in schools. It is increasingly recognized that teachers have had trouble applying researchers’ findings in their own classrooms. To address this problem, a researcher could partner with a teacher—better yet, a small group of teachers—and talk with them about instructional problems they all share. These discussions can create a rich pool of problems researchers can consider. If researchers pursued one of these problems (preferably alongside teachers), the connection to improving learning opportunities for all students could be direct and immediate. “Grounding a research question in instructional problems that are experienced across multiple teachers’ classrooms helps to ensure that the answer to the question will be of sufficient scope to be relevant and significant beyond the local context” (Cai et al., 2019b, p. 115).

As a beginning researcher, determining the relevance and importance of a research problem is especially challenging. We recommend talking with advisors, other experienced researchers, and peers to test the educational importance of possible research problems and topics of study. You will also learn much more about the issue of research importance when you read Chap. 5.

Exercise 1.7

Identify a problem in education that is closely connected to improving learning opportunities and a problem that has a less close connection. For each problem, write a brief argument (like a logical sequence of if-then statements) that connects the problem to all students’ learning opportunities.

Part III. Conducting Research as a Practice of Failing Productively

Scientific inquiry involves formulating hypotheses about phenomena that are not fully understood—by you or anyone else. Even if you are able to inform your hypotheses with lots of knowledge that has already been accumulated, you are likely to find that your prediction is not entirely accurate. This is normal. Remember, scientific inquiry is a process of constantly updating your thinking. More and better information means revising your thinking, again, and again, and again. Because you never fully understand a complicated phenomenon and your hypotheses never produce completely accurate predictions, it is easy to believe you are somehow failing.

The trick is to fail upward, to fail to predict accurately in ways that inform your next hypothesis so you can make a better prediction. Some of the best-known researchers in education have been open and honest about the many times their predictions were wrong and, based on the results of their studies and those of others, they continuously updated their thinking and changed their hypotheses.

A striking example of publicly revising (actually reversing) hypotheses due to incorrect predictions is found in the work of Lee J. Cronbach, one of the most distinguished educational psychologists of the twentieth century. In 1957, Cronbach delivered his presidential address to the American Psychological Association. Titled “The Two Disciplines of Scientific Psychology,” it proposed a rapprochement between two research approaches—correlational studies that focused on individual differences and experimental studies that focused on instructional treatments controlling for individual differences. (We will examine different research approaches in Chap. 4.) If these approaches could be brought together, reasoned Cronbach (1957), researchers could find interactions between individual characteristics and treatments (aptitude-treatment interactions or ATIs), fitting the best treatments to different individuals.

In 1975, after years of research by many researchers looking for ATIs, Cronbach acknowledged that the evidence for simple, useful ATIs had not been found. Even when trying to find interactions between a few variables that could provide instructional guidance, the analysis, said Cronbach, creates “a hall of mirrors that extends to infinity, tormenting even the boldest investigators and defeating even ambitious designs” (Cronbach, 1975, p. 119).

As he was reflecting back on his work, Cronbach (1986) recommended moving away from documenting instructional effects through statistical inference (an approach he had championed for much of his career) and toward approaches that probe the reasons for these effects, approaches that provide a “full account of events in a time, place, and context” (Cronbach, 1986, p. 104). This is a remarkable change in hypotheses, a change based on data and made fully transparent. Cronbach understood the value of failing productively.

Closer to home, in a less dramatic example, one of us began a line of scientific inquiry into how to prepare elementary preservice teachers to teach early algebra. Teaching early algebra meant engaging elementary students in early forms of algebraic reasoning that should help them transition from arithmetic to algebra. To begin this line of inquiry, a set of activities for preservice teachers was developed. Even though the activities were based on well-supported hypotheses, they largely failed to engage preservice teachers as predicted because of unanticipated challenges the preservice teachers faced. To capitalize on this failure, follow-up studies were conducted, first to better understand elementary preservice teachers’ challenges with preparing to teach early algebra, and then to better support preservice teachers in navigating these challenges. In this example, the initial failure was a necessary step in the researchers’ scientific inquiry and furthered the researchers’ understanding of this issue.

We present another example of failing productively in Chap. 2 . That example emerges from recounting the history of a well-known research program in mathematics education.

Making mistakes is an inherent part of doing scientific research. Conducting a study is rarely a smooth path from beginning to end. We recommend that you keep the following things in mind as you begin a career of conducting research in education.

First, do not get discouraged when you make mistakes; do not fall into the trap of feeling like you are not capable of doing research because you make too many errors.

Second, learn from your mistakes. Do not ignore your mistakes or treat them as errors that you simply need to forget and move past. Mistakes are rich sites for learning—in research just as in other fields of study.

Third, by reflecting on your mistakes, you can learn to make better mistakes, mistakes that inform you about a productive next step. You will not be able to eliminate your mistakes, but you can set a goal of making better and better mistakes.

Exercise 1.8

How does scientific inquiry differ from everyday learning in giving you the tools to fail upward? You may find helpful perspectives on this question in other resources on science and scientific inquiry (e.g., Failure: Why Science Is So Successful by Firestein, 2015).

Exercise 1.9

Use what you have learned in this chapter to write a new definition of scientific inquiry. Compare this definition with the one you wrote before reading this chapter. If you are reading this book as part of a course, compare your definition with your colleagues’ definitions. Develop a consensus definition with everyone in the course.

Part IV. Preview of Chap. 2

Now that you have a good idea of what research is, at least of what we believe research is, the next step is to think about how to actually begin doing research. This means how to begin formulating, testing, and revising hypotheses. As for all phases of scientific inquiry, there are lots of things to think about. Because it is critical to start well, we devote Chap. 2 to getting started with formulating hypotheses.

References

Agnes, M., & Guralnik, D. B. (Eds.). (2008). Hypothesis. In Webster’s new world college dictionary (4th ed.). Wiley.

Britannica. (n.d.). Scientific method. In Encyclopaedia Britannica. Retrieved July 15, 2022, from https://www.britannica.com/science/scientific-method

Brownell, W. A., & Moser, H. E. (1949). Meaningful vs. mechanical learning: A study in grade III subtraction. Duke University Press.

Cai, J., Morris, A., Hohensee, C., Hwang, S., Robison, V., Cirillo, M., Kramer, S. L., & Hiebert, J. (2019b). Posing significant research questions. Journal for Research in Mathematics Education, 50(2), 114–120. https://doi.org/10.5951/jresematheduc.50.2.0114

Cambridge University Press. (n.d.). Hypothesis. In Cambridge dictionary. Retrieved July 15, 2022, from https://dictionary.cambridge.org/us/dictionary/english/hypothesis

Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116–127.

Cronbach, L. J. (1986). Social inquiry by and for earthlings. In D. W. Fiske & R. A. Shweder (Eds.), Metatheory in social science: Pluralisms and subjectivities (pp. 83–107). University of Chicago Press.

Hay, C. M. (Ed.). (2016). Methods that matter: Integrating mixed methods for more effective social science research. University of Chicago Press.

Merriam-Webster. (n.d.). Explain. In Merriam-Webster.com dictionary. Retrieved July 15, 2022, from https://www.merriam-webster.com/dictionary/explain

National Research Council. (2002). Scientific research in education. National Academy Press.

Weis, L., Eisenhart, M., Duncan, G. J., Albro, E., Bueschel, A. C., Cobb, P., Eccles, J., Mendenhall, R., Moss, P., Penuel, W., Ream, R. K., Rumbaut, R. G., Sloane, F., Weisner, T. S., & Wilson, J. (2019a). Mixed methods for studies that address broad and enduring issues in education research. Teachers College Record, 121, 100307.

Weisner, T. S. (Ed.). (2005). Discovering successful pathways in children’s development: Mixed methods in the study of childhood and family life. University of Chicago Press.


Author information

Authors and Affiliations

School of Education, University of Delaware, Newark, DE, USA

James Hiebert, Anne K. Morris & Charles Hohensee

Department of Mathematical Sciences, University of Delaware, Newark, DE, USA

Jinfa Cai & Stephen Hwang


Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2023 The Author(s)

About this chapter

Hiebert, J., Cai, J., Hwang, S., Morris, A.K., Hohensee, C. (2023). What Is Research, and Why Do People Do It?. In: Doing Research: A New Researcher’s Guide. Research in Mathematics Education. Springer, Cham. https://doi.org/10.1007/978-3-031-19078-0_1


Published: 03 December 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-19077-3

Online ISBN: 978-3-031-19078-0



Open access

Published: 27 April 2024

Assessing fragility of statistically significant findings from randomized controlled trials assessing pharmacological therapies for opioid use disorders: a systematic review

  • Leen Naji (ORCID: orcid.org/0000-0003-0994-1109) 1, 2, 3,
  • Brittany Dennis 4, 5,
  • Myanca Rodrigues 2,
  • Monica Bawor 6,
  • Alannah Hillmer 7,
  • Caroul Chawar 8,
  • Eve Deck 9,
  • Andrew Worster 2, 4,
  • James Paul 10,
  • Lehana Thabane 11, 2 &
  • Zainab Samaan 12, 2

Trials, volume 25, Article number: 286 (2024)


Background

The fragility index is a statistical measure of the robustness or “stability” of a statistically significant result. It has been adapted to assess the robustness of statistically significant outcomes from randomized controlled trials. By hypothetically switching some non-responders to responders, for instance, this metric measures how many participants’ outcomes would need to change for a statistically significant finding to become non-significant. The purpose of this study is to assess the fragility index of randomized controlled trials evaluating opioid substitution and antagonist therapies for opioid use disorder. This will provide an indication as to the robustness of trials in the field and the confidence that should be placed in the trials’ outcomes, potentially identifying ways to improve clinical research in the field. This is especially important as opioid use disorder has become a global epidemic, and the incidence of opioid-related fatalities has climbed 500% in the past two decades.

Methods

Six databases were searched from inception to September 25, 2021, for randomized controlled trials evaluating opioid substitution and antagonist therapies for opioid use disorder that met the necessary requirements for fragility index calculation. Specifically, we included all parallel-arm or two-by-two factorial design RCTs that assessed the effectiveness of any opioid substitution and antagonist therapy using a binary primary outcome and reported a statistically significant result. The fragility index of each study was calculated using methods described by Walsh and colleagues. The risk of bias of included studies was assessed using the Revised Cochrane Risk of Bias tool for randomized trials.

Results

Ten studies with a median sample size of 82.5 (interquartile range (IQR) 58–179; range 52–226) were eligible for inclusion. Overall risk of bias was deemed to be low in seven studies, of some concern in two studies, and high in one study. The median fragility index was 7.5 (IQR 4–12; range 1–26).

Conclusions

Our results suggest that changing the outcomes of approximately eight participants would be enough to overturn the conclusions of the majority of trials in opioid use disorder. Future work should focus on maximizing transparency in the reporting of study results by reporting confidence intervals and fragility indices and by emphasizing the clinical relevance of findings.

Trial registration

PROSPERO CRD42013006507. Registered on November 25, 2013.


Introduction

Opioid use disorder (OUD) has become a global epidemic, and the incidence of opioid-related fatality is unparalleled, with rates in North America having climbed 500% in the past two decades [1, 2]. There is a dire need to identify the most effective treatment modality to maintain patient engagement in treatment, mitigate high-risk consumption patterns, and eliminate overdose risk. Numerous studies have aimed to identify the most effective treatment modality for OUD [3, 4, 5]. Unfortunately, this multifaceted disease is complicated by the interplay between neurobiological and social factors, impacting our current body of evidence and clinical decision making. Optimal treatment selection is further challenged by the rising number of pharmacological opioid substitution and antagonist therapies (OSAT) [6]. Despite this growing body of evidence and available therapies, we have yet to arrive at a consensus regarding the best treatment modality, given the substantial variability in research findings and directly conflicting results [6, 7, 8, 9]. More concerning, international clinical practice guidelines rely on out-of-date systematic review evidence to inform guideline development [10]. In fact, these guidelines make strong recommendations based on a fraction of the available evidence, employing trials with restrictive eligibility criteria which fail to reflect the OUD patients commonly seen in clinical practice [10].

A major factor hindering our ability to advance the field of addiction medicine is our failure to apply the necessary critical lens to the growing body of evidence used to inform clinical practice. While distinct concerns exist regarding the external validity of randomized trials in addiction medicine, the robustness of the universally recognized “well-designed” trials remains unknown [10]. The reliability of the results of clinical trials rests not only on the sample size of the study but also on the number of outcome events. In fact, a shift in only a few outcome events could in theory render the findings of a trial null, pushing the traditional hypothesis test back above the standard threshold accepted as “statistical significance.” A metric of this fragility was first introduced in 1990, known formally as the fragility index (FI) [11]. In 2014, it was adapted for use as a tool to assess the robustness of findings from randomized controlled trials (RCTs) [12]. Briefly, the FI determines the minimum number of participants whose outcome would have to change from non-event to event in order for a statistically significant result to become non-significant. Larger FIs indicate more robust findings [11, 13]. Additionally, when the number of study participants lost to follow-up exceeds the FI of a trial, the outcomes of those participants alone could have altered the statistical significance and final conclusions of the study. The FI has been applied across multiple fields, often yielding similar results: the change in a small number of outcome events has been powerful enough to overturn the statistical conclusions of many “well-designed” trials [13].

The concerning state of the OUD literature has left us with guidelines which fail to acknowledge the lack of external validity and actually go so far as to rank the quality of the evidence as good, despite the concerning limitations we have raised [10]. Such alarming practices necessitate vigilance on behalf of methodologists and practitioners to be critical and open to a thorough review of the evidence in the field of addiction medicine [12]. Given the complex nature of OUD treatment and the increasing number of available therapies, concentrated efforts are needed to ensure the reliability and internal validity of the results of clinical trials used to inform guidelines. Application of the FI can serve to provide additional insight into the robustness of the evidence in addiction medicine. The purpose of this study is to assess the fragility of findings of RCTs assessing OSAT for OUD.

Systematic review protocol

We conducted a systematic review of the evidence surrounding OSATs for OUD [5]. The study protocol was registered with PROSPERO a priori (CRD42013006507). We searched Medline, EMBASE, PubMed, PsycINFO, Web of Science, and the Cochrane Library for relevant studies from inception to September 25, 2021. We included all RCTs evaluating the effectiveness of any OSAT for OUD which met the criteria required for FI calculation. Specifically, we included all parallel-arm or two-by-two factorial design RCTs that allocated patients at a 1:1 ratio, assessed the effectiveness of any OSAT using a binary primary or co-primary outcome, and reported this outcome to be statistically significant (p < 0.05).

All titles, abstracts, and full texts were screened for eligibility by two reviewers independently and in duplicate. Any discrepancies between the two reviewers were discussed for consensus, and a third reviewer was called upon when needed.

Data extraction and risk of bias assessment (ROB)

Two reviewers extracted the following data from the included studies, in duplicate and independently, using a pilot-tested Excel data extraction sheet: sample size, whether a sample size calculation was conducted, statistical test used, primary outcome, number of responders and non-responders in each arm, number lost to follow-up, and the p-value. The 2021 Thomson Reuters Journal Impact Factor of the journal publishing each included study was also recorded. The ROB of included studies for the dichotomous outcome used in the FI calculation was assessed using the Revised Cochrane ROB tool for randomized trials [14]. Two reviewers independently assessed the included studies based on the following domains of potential ROB: randomization process, deviations from the intended interventions, missing outcome data, measurement of the outcome, and selection of the reported results.

Statistical analyses

Study characteristics were summarized using descriptive statistics. Means and standard deviations (SD), as well as medians and interquartile ranges (IQR: Q 25 , Q 75 ), were used as measures of central tendency for continuous outcomes with normal and skewed distributions, respectively. Frequencies and percentages were used to summarize categorical variables. The FI was calculated using a publicly available online calculator, following the methods described by Walsh et al. [ 12 , 15 ]. In brief, the number of events and non-events in each treatment arm were entered into a two-by-two contingency table for each trial. An event was added to the treatment arm with the smaller number of events while a non-event was subtracted from the same arm, keeping the overall sample size the same. Each time this was done, the two-sided p -value for Fisher’s exact test was recalculated. The FI was defined as the number of non-events that needed to be switched to events for the result to lose statistical significance (i.e., p ≥ 0.05).
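The iterative procedure above can be sketched in a few lines of standard-library Python. This is our own illustration, not the ClinCalc implementation cited in the text; function names are ours, and the two-sided Fisher p-value follows the common "sum of tables no more probable than the observed one" convention:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum probabilities of all tables with the same margins whose
    hypergeometric probability does not exceed that of the observed table."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    # Integer hypergeometric numerator for a table whose top-left cell is k
    def num(k):
        return comb(col1, k) * comb(n - col1, row1 - k)
    obs = num(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(num(k) for k in range(lo, hi + 1) if num(k) <= obs) / comb(n, row1)

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Number of non-events in the arm with fewer events that must be
    switched to events before the two-sided Fisher p-value reaches alpha."""
    if events_a > events_b:  # operate on the arm with the smaller event count
        events_a, n_a, events_b, n_b = events_b, n_b, events_a, n_a
    fi = 0
    while fisher_two_sided(events_a, n_a - events_a,
                           events_b, n_b - events_b) < alpha:
        events_a += 1  # one non-event becomes an event; sample size unchanged
        fi += 1
        if events_a > n_a:
            raise ValueError("ran out of non-events before losing significance")
    return fi

# Hypothetical trial: 1/10 responders vs 9/10 responders (initial p ~ 0.001)
print(fragility_index(1, 10, 9, 10))  # → 3
```

Note that each switch changes the column totals of the table, so the p-value must be recomputed from scratch at every step rather than extrapolated.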

We intended to conduct a linear regression and Spearman’s rank correlations to assess the association between FI and journal impact factor, study sample size, and number of events. However, we were not powered to do so given the limited number of eligible studies included in this review and thus refrained from conducting any inferential statistics.

We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for reporting (see Supplementary Material ) [ 16 ].

Study selection

Our search yielded 13,463 unique studies, of which 104 were RCTs evaluating OSAT for OUD. Among these, ten studies met the criteria required for FI calculation and were included in our analyses. Please refer to Fig. 1 for the search results and study inclusion flow diagram, and to Table 1 for details on the included studies.

Fig. 1. PRISMA flow diagram delineating study selection

Characteristics of included studies

The included studies were published between 1980 and 2018, in eight different journals with a median impact factor of 8.48 (IQR 6.53–56.27, range 3.77–91.25). Four studies reported a sample size calculation [ 17 , 18 , 19 , 20 ], and only one study specified that reporting guidelines were used [ 21 ]. Treatment retention was the most commonly reported primary outcome ( k = 8). The median sample size of included studies was 82.5 (IQR 58–179, range 52–226).

Overall ROB was deemed to be low in seven studies [ 17 , 19 , 20 , 21 , 22 , 23 , 24 ], to raise some concerns in two studies [ 18 , 25 ], and to be high in one study [ 26 ] due to a high proportion of missing outcome data that was not accounted for in the analyses. We present a breakdown of the ROB assessment of the included studies for the dichotomous outcome of interest in Table 2 .

Fragility index

The median FI of included studies was 7.5 (IQR 4–12; range 1–26). The FI of individual studies is reported in Table 1 . The number of participants lost to follow-up exceeded the FI in two studies [ 23 , 26 ]. A positive trend between the FI and sample size was apparent; however, no clear correlation was appreciated between the FI and journal impact factor or number of events.

This is the first study to evaluate the FI in the field of addiction medicine, and more specifically in OUD trials. Among the ten RCTs evaluating OSAT for OUD, we found that, in some cases, changing the outcome of one or two participants could completely alter the study’s conclusions and render the results statistically non-significant.

We compared our findings to those of Holek et al. , who examined the FI across all reviews published in PubMed between 2014 and 2019 that assessed the distribution of FI values, irrespective of discipline (though none were in addiction medicine) [ 13 ]. Among 24 included reviews with a median sample size of 134 (IQR 82, 207), they found a mean FI of 4 (95% CI 3, 5) [ 13 ]. This is lower than our median FI of 7.5 (IQR 4–12; range 1–26). It is important to note that half of the reviews included in the study by Holek et al. were conducted in surgical disciplines, which are generally subject to more limitations to internal and external validity, as it is often not possible to conceal allocation or to blind participants or operators, and the intervention is operator dependent [ 27 ]. To date, no study has directly applied the FI to the findings of trials in OUD. In the HIV/AIDS literature, however, a population that commonly overlaps with addiction medicine owing to frequently coexisting comorbidities, the median fragility across all trials assessing anti-retroviral therapies ( n = 39) was 6 (IQR = 1, 11) [ 28 ], which is closer to our calculated FI. Among the trials included in that review, only 3 were deemed to be at high risk of bias, whereas 13 and 20 were deemed to be at low and some risk of bias, respectively.

Loss to follow-up plays an important role in the interpretation of the FI. When the number of study participants lost to follow-up exceeds the FI of a trial, the outcomes of these participants could have altered the statistical significance and final conclusions of the study. While in only two of the included studies did the number of participants lost to follow-up exceed the FI [ 23 , 26 ], this metric is less informative in our case, given that the primary outcome assessed by the majority of trials was retention in treatment, rendering loss to follow-up an outcome in itself. In our report, we considered participants to be lost to follow-up if they left the study for reasons that were known and not necessarily indicative of treatment failure, such as factors beyond the participants’ control, including incarceration or transfer to another treatment location.

Findings from our analysis of the literature, as well as the application of the FI to existing clinical trials in the field of addiction medicine, demonstrate significant concerns regarding the robustness of the evidence. This, in conjunction with the large differences between the clinical population of opioid-dependent patients and trial participants inherent in addiction medicine trials, raises larger concerns about a growing body of evidence with deficiencies in both internal and external validity. The findings from this study raise important clinical concerns regarding the applicability of the current evidence to treating patients in the context of the opioid epidemic. Are we recommending the appropriate treatments for patients with OUD based on robust and applicable evidence? Are we completing our due diligence and ensuring clinicians and researchers alike understand the critical issues rampant in the literature, including the fragility of the data and misconceptions of p -values? Are we possibly putting our patients at risk by employing such treatments based on fragile data? These questions cannot be answered until the appropriate re-evaluation of the evidence takes place, employing both pragmatic trial designs and transparent metrics that reflect the reliability and robustness of the findings.

Strengths and limitations

Our study is strengthened by a comprehensive search strategy, rigorous and systematic screening of studies, and the use of an objective measure to gauge the robustness of studies (i.e., the FI). The limitations of this study are inherent in the limitations of the FI itself: it can only be calculated for RCTs with a 1:1 allocation ratio, a parallel arm or two-by-two factorial design, and a dichotomous primary outcome. As a result, 94 RCTs evaluating OSAT for OUD were excluded for not meeting these criteria (Fig. 1 ). Nonetheless, the FI provides a general sense of the robustness of the available studies, and our data reflect studies published across almost four decades in journals of varying impact factor.

Future direction

This study serves as further evidence of the need for a shift away from p -values [ 29 , 30 ]. Although statisticians are increasingly moving away from reliance on statistical significance because of its inability to convey clinical importance [ 31 ], the p -value remains the simplest and most commonly reported metric in manuscripts. p -values provide a simple statistical measure with which to confirm or refute a null hypothesis, by indicating how likely the observed result would be if the null hypothesis were true. An arbitrary cutoff of 5% is traditionally used as the threshold for rejecting the null hypothesis. However, a major drawback of the p -value is that it does not take into account the effect size of the outcome measure, such that a small incremental change that is not clinically meaningful may still be statistically significant in a large enough trial. Conversely, a very large effect size with biological plausibility may not reach statistical significance if the trial is not large enough [ 29 , 30 ]. This is highly problematic given the common misconceptions surrounding the p -value. Increasing emphasis is being placed on transparency in outcome reporting and on the reporting of confidence intervals, which allow the reader to gauge the uncertainty in the evidence and make an informed decision about whether a finding is clinically significant. It has also been recommended that studies report the FI where possible to provide readers with a comprehensible way of gauging the robustness of their findings [ 12 , 13 ]. There is also a drive to make all data publicly available, allowing for replication of study findings as well as pooling of data among databases to generate more robust analyses using larger pragmatic samples [ 32 ].
Together, these efforts aim to increase transparency of research and facilitate data sharing to allow for stronger and more robust evidence to be produced, allowing for advancements in evidence-based medicine and improvements in the quality of care delivered to patients.
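The sample-size dependence of the p -value described above is easy to demonstrate numerically. The sketch below is our own illustration (not drawn from any of the included trials), using a pooled two-proportion z-test as a generic stand-in: an identical, clinically trivial 2-percentage-point difference in response rates is nowhere near significant in a 1,000-patient trial yet emerges as highly "significant" in a 100,000-patient trial.

```python
from math import sqrt, erfc

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value from a pooled z-test comparing two proportions
    (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return erfc(abs(z) / sqrt(2))

# Identical 51% vs 49% response rates at two trial sizes:
print(two_proportion_p(255, 500, 245, 500))          # ~0.53: far from significant
print(two_proportion_p(25500, 50000, 24500, 50000))  # ~2.5e-10: "significant"
```

The effect size is the same in both cases; only the denominator of the standard error changes, which is exactly why significance alone cannot convey clinical importance.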

Our results suggest that changing the outcomes of approximately eight participants would be sufficient to overturn the conclusions of the majority of trials in addiction medicine. Findings from our analysis of the literature and the application of the FI to existing clinical trials in the field demonstrate significant concerns regarding the overall quality of the evidence, and specifically the robustness and stability of trial conclusions. This work raises larger concerns about a growing body of evidence with deficiencies in both internal and external validity. To advance the field of addiction medicine, we must re-evaluate the quality of the evidence and consider employing pragmatic trial designs as well as transparent metrics that reflect the reliability and robustness of findings. Placing emphasis on clinical relevance and reporting the FI along with confidence intervals may provide researchers, clinicians, and guideline developers with a transparent method of assessing the outcomes of clinical trials, ensuring vigilance in decisions regarding the management and treatment of patients with substance use disorders.

Availability of data and materials

All data generated or analyzed during this study are included in this published article (and its supplementary information files).

Abbreviations

IQR: Interquartile range

OUD: Opioid use disorder

OSAT: Opioid substitution and antagonist therapies

RCT: Randomized controlled trial

ROB: Risk of bias

SD: Standard deviation

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

Products - Vital Statistics Rapid Release - Provisional Drug Overdose Data. https://www.cdc.gov/nchs/nvss/vsrr/drug-overdose-data.htm . Accessed April 26, 2020.

Spencer MR, Miniño AM, Warner M. Drug overdose deaths in the United States, 2001–2021. NCHS Data Brief, no 457. Hyattsville, MD: National Center for Health Statistics. 2022. https://doi.org/10.15620/cdc:122556 .

Mattick RP, Breen C, Kimber J, Davoli M. Methadone maintenance therapy versus no opioid replacement therapy for opioid dependence. Cochrane Database Syst Rev. 2009;(3).  https://doi.org/10.1002/14651858.CD002209.PUB2/FULL .

Hedrich D, Alves P, Farrell M, Stöver H, Møller L, Mayet S. The effectiveness of opioid maintenance treatment in prison settings: a systematic review. Addiction. 2012;107(3):501–17. https://doi.org/10.1111/J.1360-0443.2011.03676.X .

Dennis BB, Naji L, Bawor M, et al. The effectiveness of opioid substitution treatments for patients with opioid dependence: a systematic review and multiple treatment comparison protocol. Syst Rev. 2014;3(1):105. https://doi.org/10.1186/2046-4053-3-105 .

Dennis BB, Sanger N, Bawor M, et al. A call for consensus in defining efficacy in clinical trials for opioid addiction: combined results from a systematic review and qualitative study in patients receiving pharmacological assisted therapy for opioid use disorder. Trials. 2020;21(1). https://doi.org/10.1186/s13063-019-3995-y .

British Columbia Centre on Substance Use. (2017). A Guideline for the Clinical Management of Opioid Use Disorder . http://www.bccsu.ca/care-guidance-publications/ . Accessed December 4, 2020.

Kampman  K, Jarvis M. American Society of Addiction Medicine (ASAM) national practice guideline for the use of medications in the treatment of addiction involving opioid use. J Addict Med. 2015;9(5):358–367.

Srivastava A, Wyman J. Methadone treatment for people who use fentanyl: recommendations. 2021. www.metaphi.ca . Accessed November 14, 2023.

Dennis BB, Roshanov PS, Naji L, et al. Opioid substitution and antagonist therapy trials exclude the common addiction patient: a systematic review and analysis of eligibility criteria. Trials. 2015;16(1):1. https://doi.org/10.1186/s13063-015-0942-4 .

Feinstein AR. The unit fragility index: an additional appraisal of “statistical significance” for a contrast of two proportions. J Clin Epidemiol. 1990;43(2):201–9. https://doi.org/10.1016/0895-4356(90)90186-S .

Walsh M, Srinathan SK, McAuley DF, et al. The statistical significance of randomized controlled trial results is frequently fragile: a case for a fragility index. J Clin Epidemiol. 2014;67(6):622–8. https://doi.org/10.1016/j.jclinepi.2013.10.019 .

Holek M, Bdair F, Khan M, et al. Fragility of clinical trials across research fields: a synthesis of methodological reviews. Contemp Clin Trials. 2020;97. doi: https://doi.org/10.1016/j.cct.2020.106151

Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366. doi: https://doi.org/10.1136/bmj.l4898

Kane SP. Fragility Index Calculator. ClinCalc: https://clincalc.com/Stats/FragilityIndex.aspx . Updated July 19, 2018. Accessed October 17, 2023.

Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. https://doi.org/10.1136/bmj.n71 .

Petitjean S, Stohler R, Déglon JJ, et al. Double-blind randomized trial of buprenorphine and methadone in opiate dependence. Drug Alcohol Depend. 2001;62(1):97–104. https://doi.org/10.1016/S0376-8716(00)00163-0 .

Sees KL, Delucchi KL, Masson C, et al. Methadone maintenance vs 180-day psychosocially enriched detoxification for treatment of opioid dependence: a randomized controlled trial. JAMA. 2000;283(10):1303–10. https://doi.org/10.1001/JAMA.283.10.1303 .

Kakko J, Dybrandt Svanborg K, Kreek MJ, Heilig M. 1-year retention and social function after buprenorphine-assisted relapse prevention treatment for heroin dependence in Sweden: a randomised, placebo-controlled trial. Lancet (London, England). 2003;361(9358):662–8. https://doi.org/10.1016/S0140-6736(03)12600-1 .

Oviedo-Joekes E, Brissette S, Marsh DC, et al. Diacetylmorphine versus methadone for the treatment of opioid addiction. N Engl J Med. 2009;361(8):777–86. https://doi.org/10.1056/NEJMoa0810635 .

Hulse GK, Morris N, Arnold-Reed D, Tait RJ. Improving clinical outcomes in treating heroin dependence: randomized, controlled trial of oral or implant naltrexone. Arch Gen Psychiatry. 2009;66(10):1108–15. https://doi.org/10.1001/ARCHGENPSYCHIATRY.2009.130 .

Krupitsky EM, Zvartau EE, Masalov DV, et al. Naltrexone for heroin dependence treatment in St. Petersburg, Russia. J Subst Abuse Treat. 2004;26(4):285–94. https://doi.org/10.1016/j.jsat.2004.02.002 .

Krook AL, Brørs O, Dahlberg J, et al. A placebo-controlled study of high dose buprenorphine in opiate dependents waiting for medication-assisted rehabilitation in Oslo, Norway. Addiction. 2002;97(5):533–42. https://doi.org/10.1046/J.1360-0443.2002.00090.X .

Hartnoll RL, Mitcheson MC, Battersby A, et al. Evaluation of heroin maintenance in controlled trial. Arch Gen Psychiatry. 1980;37(8):877–84. https://doi.org/10.1001/ARCHPSYC.1980.01780210035003 .

Fischer G, Gombas W, Eder H, et al. Buprenorphine versus methadone maintenance for the treatment of opioid dependence. Addiction. 1999;94(9):1337–47. https://doi.org/10.1046/J.1360-0443.1999.94913376.X .

Yancovitz SR, Des Jarlais DC, Peyser NP, et al. A randomized trial of an interim methadone maintenance clinic. Am J Public Health. 1991;81(9):1185–91. https://doi.org/10.2105/AJPH.81.9.1185 .

Demange MK, Fregni F. Limits to clinical trials in surgical areas. Clinics (Sao Paulo). 2011;66(1):159–61. https://doi.org/10.1590/S1807-59322011000100027 .

Wayant C, Meyer C, Gupton R, Som M, Baker D, Vassar M. The fragility index in a cohort of HIV/AIDS randomized controlled trials. J Gen Intern Med. 2019;34(7):1236–43. https://doi.org/10.1007/S11606-019-04928-5 .

Amrhein V, Greenland S, McShane B. Scientists rise up against statistical significance. Nature. 2019;567(7748):305–7. https://doi.org/10.1038/D41586-019-00857-9 .

Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):e124. https://doi.org/10.1371/journal.pmed.0020124 .

Goodman SN. Toward evidence-based medical statistics. 1: the p value fallacy. Ann Intern Med. 1999;130(12):995–1004. https://doi.org/10.7326/0003-4819-130-12-199906150-00008 .

Allison DB, Shiffrin RM, Stodden V. Reproducibility of research: issues and proposed remedies. Proc Natl Acad Sci U S A. 2018;115(11):2561–2. https://doi.org/10.1073/PNAS.1802324115 .

Acknowledgements

The authors received no funding for this work.

Author information

Authors and Affiliations

Department of Family Medicine, David Braley Health Sciences Centre, McMaster University, 100 Main St W, 3rdFloor, Hamilton, ON, L8P 1H6, Canada

Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada

Leen Naji, Myanca Rodrigues, Andrew Worster, Lehana Thabane & Zainab Samaan

Department of Medicine, Montefiore Medical Center, New York, NY, USA

Department of Medicine, McMaster University, Hamilton, ON, Canada

Brittany Dennis & Andrew Worster

Department of Medicine, University of British Columbia, Vancouver, Canada

Brittany Dennis

Department of Medicine, Imperial College Healthcare NHS Trust, London, UK

Monica Bawor

Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, ON, Canada

Alannah Hillmer

Physician Assistant Program, University of Toronto, Toronto, ON, Canada

Caroul Chawar

Department of Family Medicine, Western University, London, ON, Canada

Department of Anesthesia, McMaster University, Hamilton, ON, Canada

Biostatistics Unit, Research Institute at St Joseph’s Healthcare, Hamilton, ON, Canada

Lehana Thabane

Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, ON, Canada

Zainab Samaan

Contributions

LN, BD, MB, LT, and ZS conceived the research question and protocol. LN, BD, MR, and AH designed the search strategy and ran the literature search. LN, BD, MR, AH, CC, and ED contributed to screening studies for eligibility and data extraction. LN and LT analyzed data. All authors contributed equally to the writing and revision of the manuscript. All authors approved the final version of the manuscript.

Corresponding author

Correspondence to Leen Naji .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Naji, L., Dennis, B., Rodrigues, M. et al. Assessing fragility of statistically significant findings from randomized controlled trials assessing pharmacological therapies for opioid use disorders: a systematic review. Trials 25 , 286 (2024). https://doi.org/10.1186/s13063-024-08104-x

Received : 11 December 2022

Accepted : 10 April 2024

Published : 27 April 2024

DOI : https://doi.org/10.1186/s13063-024-08104-x


Keywords

  • Research methods
  • Critical appraisal
  • Systematic review

ISSN: 1745-6215


A Peek Inside the Brains of ‘Super-Agers’

New research explores why some octogenarians have exceptional memories.

By Dana G. Smith

When it comes to aging, we tend to assume that cognition gets worse as we get older. Our thoughts may slow down or become confused, or we may start to forget things, like the name of our high school English teacher or what we meant to buy at the grocery store.

But that’s not the case for everyone.

For a little over a decade, scientists have been studying a subset of people they call “super-agers.” These individuals are age 80 and up, but they have the memory ability of a person 20 to 30 years younger.

Most research on aging and memory focuses on the other side of the equation — people who develop dementia in their later years. But, “if we’re constantly talking about what’s going wrong in aging, it’s not capturing the full spectrum of what’s happening in the older adult population,” said Emily Rogalski, a professor of neurology at the University of Chicago, who published one of the first studies on super-agers in 2012.

A paper published Monday in the Journal of Neuroscience helps shed light on what’s so special about the brains of super-agers. The biggest takeaway, in combination with a companion study that came out last year on the same group of individuals, is that their brains have less atrophy than their peers’ do.

The research was conducted on 119 octogenarians from Spain: 64 super-agers and 55 older adults with normal memory abilities for their age. The participants completed multiple tests assessing their memory, motor and verbal skills; underwent brain scans and blood draws; and answered questions about their lifestyle and behaviors.

The scientists found that the super-agers had more volume in areas of the brain important for memory, most notably the hippocampus and entorhinal cortex. They also had better preserved connectivity between regions in the front of the brain that are involved in cognition. Both the super-agers and the control group showed minimal signs of Alzheimer’s disease in their brains.

“By having two groups that have low levels of Alzheimer’s markers, but striking cognitive differences and striking differences in their brain, then we’re really speaking to a resistance to age-related decline,” said Dr. Bryan Strange, a professor of clinical neuroscience at the Polytechnic University of Madrid, who led the studies.

These findings are backed up by Dr. Rogalski’s research , initially conducted when she was at Northwestern University, which showed that super-agers’ brains looked more like 50- or 60-year-olds’ brains than their 80-year-old peers. When followed over several years, the super-agers’ brains atrophied at a slower rate than average.

No precise numbers exist on how many super-agers there are among us, but Dr. Rogalski said they’re “relatively rare,” noting that “far less than 10 percent” of the people she sees end up meeting the criteria.

But when you meet a super-ager, you know it, Dr. Strange said. “They are really quite energetic people, you can see. Motivated, on the ball, elderly individuals.”

Experts don’t know how someone becomes a super-ager, though there were a few differences in health and lifestyle behaviors between the two groups in the Spanish study. Most notably, the super-agers had slightly better physical health, both in terms of blood pressure and glucose metabolism, and they performed better on a test of mobility . The super-agers didn’t report doing more exercise at their current age than the typical older adults, but they were more active in middle age. They also reported better mental health .

But overall, Dr. Strange said, there were a lot of similarities between the super-agers and the regular agers. “There are a lot of things that are not particularly striking about them,” he said. And, he added, “we see some surprising omissions, things that you would expect to be associated with super-agers that weren’t really there.” For example, there were no differences between the groups in terms of their diets, the amount of sleep they got, their professional backgrounds or their alcohol and tobacco use.

The behaviors of some of the Chicago super-agers were similarly a surprise. Some exercised regularly, but some never had; some stuck to a Mediterranean diet, others subsisted off TV dinners; and a few of them still smoked cigarettes. However, one consistency among the group was that they tended to have strong social relationships , Dr. Rogalski said.

“In an ideal world, you’d find out that, like, all the super-agers, you know, ate six tomatoes every day and that was the key,” said Tessa Harrison, an assistant project scientist at the University of California, Berkeley, who collaborated with Dr. Rogalski on the first Chicago super-ager study.

Instead, Dr. Harrison continued, super-agers probably have “some sort of lucky predisposition or some resistance mechanism in the brain that’s on the molecular level that we don’t understand yet,” possibly related to their genes.

While there isn’t a recipe for becoming a super-ager, scientists do know that, in general , eating healthily, staying physically active, getting enough sleep and maintaining social connections are important for healthy brain aging.

Dana G. Smith is a Times reporter covering personal health, particularly aging and brain health.



J Clin Transl Sci. 2020 Jun;4(3).

Communicating and disseminating research findings to study participants: Formative assessment of participant and researcher expectations and preferences

Cathy l. melvin.

1 College of Medicine, Medical University of South Carolina, Charleston, SC, USA

Jillian Harvey

2 College of Health Professions/Healthcare Leadership & Management, Medical University of South Carolina, Charleston, SC, USA

Tara Pittman

3 South Carolina Clinical & Translational Research Institute (CTSA), Medical University of South Carolina, Charleston, SC, USA

Stephanie Gentilin

Dana Burshell

4 SOGI-SES Add Health Study Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

Teresa Kelechi

5 College of Nursing, Medical University of South Carolina, Charleston, SC, USA

Introduction:

Translating research findings into practice requires understanding how to meet the communication and dissemination needs and preferences of intended audiences, including past research participants (PSPs), who want, but seldom receive, information on research findings during or after participating in research studies. Most researchers want to let others, including PSPs, know about their findings but lack knowledge about how to effectively communicate findings to a lay audience.

Methods:

We designed a two-phase, mixed methods pilot study to understand the experiences, expectations, concerns, preferences, and capacities of researchers and PSPs in two age groups (adolescents/young adults (AYA) and older adults) and to test communication prototypes for sharing, receiving, and using information on research study findings.

Principal Results:

PSPs and researchers agreed that sharing study findings should happen and that doing so could improve participant recruitment and enrollment, promote the use of research findings to improve health and health-care delivery, and build community support for research. Some differences and similarities in communication preferences and message format were identified between PSP groups, reinforcing the best practice of customizing communication channel and messaging. Researchers wanted specific training and/or time and resources to help them prepare messages in formats that meet PSP needs and preferences but were unaware of resources to help them do so.

Conclusions:

Our findings offer insight into how to engage both PSP and researchers in the design and use of strategies to share research findings and highlight the need to develop services and support for researchers as they aim to bridge this translational barrier.

Introduction

Since 2006, the National Institutes of Health Clinical and Translational Science Awards (CTSA) have aimed to advance science and translate knowledge into evidence that, if implemented, helps patients and providers make more informed decisions with the potential to improve health care and health outcomes [ 1 , 2 ]. This aim responded to calls by leaders in the fields of comparative effectiveness research, clinical trials, research ethics, and community engagement to assure that results of clinical trials were made available to participants, suggesting that providing participants with both positive and negative results should be the “ethical norm” [ 1 , 3 ]. Others noted that

on the surface, the concept of providing clinical trial results might seem straightforward but putting such a plan into action will be much more complicated. Communication with patients following participation in a clinical trial represents an important and often overlooked aspect of the patient-physician relationship. Careful exploration of this issue, both from the patient and clinician-researcher perspective, is warranted [ 4 ].

Authors also noted that no systematic approach to operationalizing this “ethical norm” existed and that evidence was lacking to describe either positive or negative outcomes of sharing clinical trial results with study participants and the community [ 4 ]. It was generally assumed, but not supported by research, that sharing would result in better patient–physician/researcher communication, improvement in patient care and satisfaction with care, better patient/participant understanding of clinical trials, and enhanced clinical trial accrual [ 4 ].

More recent literature informs these processes but also raises unresolved concerns about the communication and dissemination of research results. A 2008 narrative review of available data on the effects of communicating aggregate and individual research results showed that

  • research participants want aggregate and clinically significant individual study results made available to them despite the transient distress that communication of results sometimes elicits [ 3 , 5 ]. While differing in their preferences for specific channels of communication, they indicated that not sharing results fostered participant distrust of the health-care system, providers, and researchers [ 6 ] and adversely affected trial participation [ 5 ];
  • investigators recognized their ethical obligation to at least offer to share research findings with recipients and the nonacademic community but differed on whether they should proactively re-contact participants, the type of results to be offered to participants, the need for clinical relevance before disclosure, and the stage at which research results should be offered [ 5 ]. They also reported not being well versed in communication and dissemination strategies known to be effective and not having funding sources to implement proven strategies for sharing with specific audiences [ 5 ];
  • members of the research enterprise noted that while public opinion regarding participation in clinical trials is positive, clinical trial accrual remains low and that the failure to provide information about study results may be one of many factors negatively affecting accrual. They also called for better understanding of physician–researcher and patient attitudes and preferences and posit that development of effective mechanisms to share trial results with study participants should enhance patient–physician communication and improve clinical care and research processes [ 5 ].

A 2010 survey of CTSAs found that while professional and scientific audiences are currently the primary focus for communicating and disseminating research findings, it is equally vital to develop approaches for sharing research findings with other audiences, including individuals who participate in clinical trials [ 1 , 5 ]. Effective communication and dissemination strategies are documented in the literature [ 6 , 7 ], but most are designed to promote adoption of evidence-based interventions and lack applicability to participants overall, especially participants who are members of special populations and underrepresented minorities, who have fewer opportunities to participate in research and whose preferences for receiving research findings are unknown [ 7 ].

Researchers often have limited exposure to methods that offer them guidance in communicating and disseminating study findings in ways likely to improve awareness, adoption, and use of their findings [ 7 ]. Researchers also lack expertise in using communication channels such as traditional journalism platforms, live or face-to-face events such as public festivals, lectures, and panels, and online interactions [ 8 ]. Few strategies provide guidance for researchers about how to develop communications that are patient-centered, contain plain language, create awareness of the influence of findings on participant or population health, and increase the likelihood of enrollment in future studies.

Consequently, researchers often rely on traditional methods (e.g., presentations at scientific meetings and publication of study findings in peer-reviewed journals) despite evidence suggesting their limited reach and/or impact among professional/scientific and/or lay audiences [ 9 , 10 ].

Input from stakeholders can enhance our understanding of how to assure that participants will receive understandable, useful information about research findings and, as appropriate, interpret and use this information to inform their decisions about changing health behaviors, interacting with their health-care providers, enrolling in future research studies, sharing their study experiences with others, or recommending to others that they participate in studies.

Purpose and Goal

This pilot project was undertaken to address issues cited above and in response to expressed concerns of community members in our area about not receiving information on research studies in which they participated. The project design, a two-phase, mixed methods pilot study, was informed by their subsequent participation in a committee of community-academic representatives to determine possible options for improving the communication and dissemination of study results to both study participants and the community at large.

Our goals were to understand the experiences, expectations, concerns, preferences, and capacities of researchers and past research participants (PSP) in two age groups (adolescents/young adults (AYA) aged 15–25 years and older adults aged 50 years or older) and to test communication prototypes for sharing, receiving, and using information on research study findings. Our long-term objectives are to stimulate new, interdisciplinary collaborative research and to develop resources to meet PSP and researcher needs.

This study was conducted in an academic medical center located in southeastern South Carolina. Phase one consisted of surveying PSP and researchers. In phase two, in-person focus groups were conducted among PSP who completed the survey, and one-on-one interviews were conducted among researchers. Participants in the interviews or focus groups responded to a set of questions from a discussion guide developed by the study team and reviewed three prototypes for communicating and disseminating study results, developed by the study team in response to PSP and researcher survey responses: a study results letter, a study results email, and a web-based communication created with MailChimp (Figs. 1–3).

Fig. 1. Prototype 1: study results email prototype. MUSC, Medical University of South Carolina.

Fig. 3. Prototype 3: study results MailChimp prototypes 1 and 2. MUSC, Medical University of South Carolina.

Fig. 2. Prototype 2: study results letter prototype.

PSP and researcher surveys

A 42-item survey questionnaire representing seven domains was developed by a multidisciplinary team of clinicians, researchers, and PSP, who evaluated the questions for content, ease of understanding, usefulness, and comprehensiveness [ 11 ]. Project principal investigators reviewed questions for content and clarity [ 11 ]. The PSP and researcher surveys contained screening and demographic questions to determine participant eligibility and characteristics. The PSP survey assessed prior experience with research, receipt of study information from the research team, intention to participate in future research, and preferences and opinions about receiving information on study findings and next steps. Specific questions for PSP elicited their preferences for communication channels such as phone call, email, social or mass media, and public forum and included channels unique to South Carolina, such as billboards. PSP were asked to rank their preferences and experiences regarding receipt of study results using a Likert scale with the following anchors: “not at all interested” (0), “not very interested” (1), “neutral” (2), “somewhat interested” (3), and “very interested” (4).

The researcher survey contained questions about researcher decisions, plans, and actions regarding communication and dissemination of research results for a recently completed study. Items included knowledge and opinions about how to communicate and disseminate research findings, resources used and needed to develop communication strategies, and awareness and use of dissemination channels, message development, and presentation format.

A research team member administered the survey to PSP and researchers either in person or via phone. Researchers could also complete the survey online through Research Electronic Data Capture (REDCap©).

Focus groups and discussion guide content

The PSP focus group discussion guide contained questions to assess participants’ past experiences with receiving information about research findings; identify participant preferences for receiving research findings whether negative, positive, or equivocal; gather information to improve communication of research results back to participants; assess participant intention to enroll in future research studies, to share their study experiences with others, and to refer others to our institution for study participation; and provide comments and suggestions on prototypes developed for communication and dissemination of study results. Five AYA participated in one focus group, and 11 older adults participated in one focus group. Focus groups were conducted in an off-campus location with convenient parking and at times convenient for participants. Snacks and beverages were provided.

The researcher interview guide was designed to understand researchers’ perspectives on communicating and disseminating research findings to participants; explore researchers’ past experiences, if any, with communicating and disseminating research findings to study participants; document any approaches researchers may have used or intended to use to communicate and disseminate research findings to study participants; assess researcher expectations of benefits associated with sharing findings with participants, as well as perceived and actual barriers to sharing findings; and gather comments and suggestions on prototypes developed for communication and dissemination of study results.

Prototype materials

Three prototypes were presented to focus group participants: (1) a formal letter on hospital letterhead, designed to be delivered by standard mail, describing the purpose and findings of a fictional study and thanking the individual for his/her participation; (2) a text-only email including a brief thank you and a summary of major findings, with a link to a study website for more information; and (3) an email formatted like a newsletter with detailed information on study purpose, methods, and findings, with graphics to help convey results. A mock study website was also shown; it included information about study background, purpose, methods, and results, as well as links to other research and health resources. Prototypes were presented in either paper or PowerPoint format during the focus groups and were explained by a study team member, who then elicited participant input using the focus group guide. Researchers also reviewed and commented on prototype content and format in one-on-one interviews with a study team member.

Protection of Human Subjects

The study protocol (No. Pro00067659) was submitted to and approved by the Institutional Review Board at the Medical University of South Carolina in 2017. PSP (or the caretakers of PSP under age 18) and researchers provided verbal informed consent prior to completing the survey or participating in either a focus group or interview. Participants received a verbal introduction prior to participating in each phase.

Recruitment and Interview Procedures

Past study participants.

A study team member reviewed study participant logs from five recently completed studies at our institution involving AYA or older adults to identify individuals who provided consent for contact regarding future studies. Subsequent PSP recruitment efforts based on these searches were consistent with previous contact preferences recorded in each study participant’s consent indicating desire to be re-contacted. The primary modes of contact were phone/SMS and email.

Efforts to recruit other PSP were made through placement of flyers in frequented public locations such as coffee shops, recreation complexes, and college campuses and through social media, Yammer, and newsletters. ResearchMatch, a web-based recruitment tool, was used to alert its subscribers about the study. Potential participants reached by these methods contacted our study team to learn more about the study, and if interested and pre-screened eligible, volunteered and were consented for the study. PSP completing the survey indicated willingness to share experiences with the study team in a focus group and were re-contacted to participate in focus groups.

Researcher recruitment

Researchers were identified through informal outreach by study investigators and staff, a flyer distributed on campus, use of Yammer and other institutional social media platforms, and internal electronic newsletters. Researchers responding to these recruitment efforts were invited to participate in the researcher survey and/or interview.

Incentives for participation

Researchers and PSP received a $25 gift card for completing the survey and $75 for completing the interview (researcher) or focus group (PSP) (up to $100 per researcher or PSP).

Data tables displaying demographic and other data from the PSP surveys (Table 1) were prepared from the REDCap© database, and responses were reported as the number and percent of respondents choosing each response option.

Past study participant (PSP) characteristics by Adolescents/Young Adults (AYA), Older Adults, and ALL (all participants regardless of age)

Age mean (SD) = 49.7 (18.6).
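The per-option tabulation described above (number and percent of respondents choosing each response option) can be sketched in a few lines of Python. This is a minimal illustration using the survey's 0–4 interest labels with made-up responses; it is not the study's actual REDCap export or analysis code.

```python
from collections import Counter

# Illustrative Likert responses (labels follow the survey's 0-4 interest
# scale; the data themselves are made up, not the study's actual export).
responses = [
    "very interested", "somewhat interested", "very interested",
    "neutral", "not at all interested", "very interested",
    "somewhat interested", "neutral",
]

def tabulate(values):
    """Return {option: (n, percent)} for each response option."""
    counts = Counter(values)
    total = len(values)
    return {option: (n, round(100 * n / total, 1))
            for option, n in counts.items()}

# Print options from most to least frequently chosen, as "n (percent%)".
for option, (n, pct) in sorted(tabulate(responses).items(),
                               key=lambda kv: -kv[1][0]):
    print(f"{option}: {n} ({pct}%)")
```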

Focus group and researcher interview data were recorded (either via audio recording and/or notes taken by research staff) and analyzed via a general inductive qualitative approach, a method appropriate for program evaluation studies and aimed at condensing large amounts of textual data into frameworks that describe the underlying process and experiences under study [ 12 ]. Data were analyzed by our team’s qualitative expert who read the textual data multiple times, developed a coding scheme to identify themes in the textual data, and used group consensus methods with other team members to identify unique, key themes.

Sixty-one of the sixty-five PSP who volunteered to participate in the PSP survey were screened eligible, fifty were consented, and forty-eight completed the survey questionnaire. Of the 48 PSP completing the survey, 15 (32%) were AYA and 33 (68%) were older adults. The mean age of survey respondents was 49.7 years overall, 23.5 years for AYA, and 61.6 years for older adults. Survey respondents were predominantly White, non-Hispanic/Latino, female, and with some college or a college degree (Table 1). The percentage of participants never or rarely needing help with reading/interpreting written materials was above 93% in both groups.

Over 90% of PSP responded that they would participate in another research study, and more than 75% indicated that study participants should know about study results. Most respondents (68.8%) indicated that they did not receive any communication from study staff after they finished a study.

PSP preferences for communication channel, summarized in Table 2, are based on responses to the question “How do you want to receive information?” Both AYA and older adults agreed or completely agreed that they preferred email to other communication channels and that billboards did not apply to them. Older adult preferences for communication channels, as indicated by agreeing or completely agreeing, ranked from highest to lowest as mailed letters/postcards, newsletters, and phone. A majority (over 50%) of older adults completely disagreed or disagreed with texting and social media as options and had only a slight preference for mass media, public forums, and wellness fairs or expos.

Communication preference by group: AYA*, older adult**, and ALL (n = 48)

ALL, total per column.

While AYA preferred email over all other options, they completely disagreed/disagreed with mailed letters/postcards, social media, and mass media options.

When communication formats were ranked by each group and by both groups combined, the order from most to least preferred was: written materials; opportunities to interact with study teams and ask questions; visual charts, graphs, and pictures; and videos, audio recordings, and podcasts.

PSP Focus Groups

PSP want to receive and share information on findings from studies in which they participated. Furthermore, participants stated their desire to share study results across their social networks and highlighted opportunities to share communicated study results with their health-care providers, family members, friends, and other acquaintances with similar medical conditions.

Because of the things I was in a study for, it’s a condition I knew three other people who had the same condition, so as soon as it worked for me, I put the word out, this is great stuff. I would forward the email with the link, this is where you can go to also get in on this study, or I’d also tell them, you know, for me, like the medication. Here’s the medication. Here’s the name of it. Tell your doctor. I would definitely share. I’d just tell everyone without a doubt. Right when I get home, as soon as I walk in the door, and say Renee-that’s my daughter-I’ve got to tell you this.

Communication of study information could happen through several channels including social media, verbal communication, sharing of written documents, and forwarding emails containing a range of content in a range of formats (e.g., reports and pamphlets).

Word of mouth and I have no shame in saying I had head to toe psoriasis, and I used the drug being studied, and so I would just go to people, hey, look. So, if you had it in paper form, like a pamphlet or something, yeah I’d pass it on to them.

PSP prefer clear, simple messaging and highlighted multiple, preferred communication modalities for receiving information on study findings including emails, letters, newsletters, social media, and websites.

The wording is really simple, which I like. It’s to the point and clear. I really like the bullet points, because it’s quick and to the point. I think the [long] paragraphs-you get lost, especially when you are reading on your phone.

They indicated a clear preference for colorful, simple, easy-to-read communication. PSP also expressed some concern about difficulty opening emails with pictures and disliked lengthy written text: “I don’t read long emails. I tend to delete them.”

PSP indicated some confusion about common research language. For example, one participant noted that the word “estimate” suggested the research findings were an approximation: “When I hear those words, I just think you’re guessing, estimate, you know? It sounds like an estimate, not a definite answer.”

Researcher Survey

Twenty-three of thirty-two researchers volunteered to participate in the researcher survey and were screened eligible; two declined to participate, resulting in 19 who provided consent and completed the survey. The mean age of survey respondents was 51.8 years. Respondents were predominantly White, non-Hispanic/Latino, and female, and all held either a professional school degree or a doctoral degree. When asked whether it is important to inform study participants of study results, 94.8% of responding researchers rated it as extremely important or important. Most researchers had disseminated findings to study participants or planned to do so.

Researchers listed a variety of reasons for their rating of the importance of informing study participants of study results including “to promote feelings of inclusion by participants and other community members”, “maintaining participant interest and engagement in the subject study and in research generally”, “allowing participants to benefit somewhat from their participation in research and especially if personal health data are collected”, “increasing transparency and opportunities for learning”, and “helping in understanding the impact of the research on the health issue under study”.

Some researchers view sharing study findings as an “ethical responsibility and/or a tenet of volunteerism for a research study”. For example, “if we (researchers) are obligated to inform participants about anything that comes up during the conduct of the study, we should feel compelled to equally give the results at the end of the study”.

One researcher “thought it a good idea to ask participants if they would like an overview of findings at the end of the study that they could share with others who would like to see the information”.

Two researchers said that sharing research results “depends on the study” and that providing “general findings to the participants” might be “sufficient for a treatment outcome study”.

Researchers indicated that despite their willingness to share study results, they face resource challenges such as a lack of funding and/or staff to support communication and dissemination activities and need assistance in developing these materials. One researcher remarked “I would really like to learn what are (sic) the best ways to share research findings. I am truly ignorant about this other than what I have casually observed. I would enjoy attending a workshop on the topic with suggested templates and communication strategies that work best” and that this survey “reminds me how important this is and it is promising that our CTSA seems to plan to take this on and help researchers with this important study element.”

Another researcher commented on a list of potential types of assistance that could be made available to assist with communicating and disseminating results, that “Training on developing lay friendly messaging is especially critically important and would translate across so many different aspects of what we do, not just dissemination of findings. But I’ve noticed that it is a skill that very few people have, and some people never can seem to develop. For that reason, I find as a principal investigator that I am spending a lot of my time working on these types of materials when I’d really prefer research assistant level folks having the ability to get me 99% of the way there.”

Most researchers indicated that they provided participants with personal tests or assessments taken during the study (60%, n = 6) and final study results (72.7%, n = 8) but not other information, such as recruitment and retention updates, interim updates or results, information on the impact of the study on the health topic or the community, information on other studies, or tips and resources related to the health topic and self-help. Sixty percent (n = 6) of researcher respondents indicated sharing the study team’s planned next steps and information on how the study results would be used.

When asked how they communicated results, phone calls were mentioned most frequently, followed by newsletters, email, webpages, public forums, journal articles, mailed letters or postcards, mass media, wellness fairs/expos, texting, and social media.

Researchers used a variety of communication formats to communicate with study participants. Written descriptions of study findings were most frequently reported followed by visual depictions, opportunities to interact with study staff and ask questions or provide feedback, and videos/audio/podcasts.

Seventy-three percent of researchers reported making efforts to make study findings available to those with low levels of literacy or health literacy or with other possible limitations, such as non-English-speaking populations.

In open-ended responses, most researchers reported wanting to increase their awareness and use of on-campus training and other resources to support communication and dissemination of study results, including how to get resources and budgets to support their use.

Researcher Interviews

One-on-one interviews with researchers identified two themes.

Researchers may struggle to see the utility of communicating small findings

Some researchers indicated hesitancy in communicating preliminary findings, findings from small studies, or highly summarized information. In addition, in comparison to research participants, researchers seemed to place a higher value on specific details of the study.

“I probably wouldn’t put it up [on social media] until the actual manuscript was out with the graphs and the figures, because I think that’s what people ultimately would be interested in.”

Researchers face resource and time limitations in communication and dissemination of study findings

Researchers expressed interest in communicating research results to study participants. However, they highlighted several challenges including difficulties in tracking current email and physical addresses for participants; compliance with literacy and visual impairment regulations; and the number of products already required in research that consume a considerable amount of a research team’s time. Researchers expressed a desire to have additional resources and templates to facilitate sharing study findings. According to one respondent, “For every grant there is (sic) 4-10 papers and 3-5 presentations, already doing 10-20 products.” Researchers do not want to “reinvent the wheel” and would like to pull from existing papers and presentations on how to share with participants and have boilerplate, writing templates, and other logistical information available for their use.

Researchers would also like training in the form of lunch-n-learns, podcasts, or easily accessible online tools on how to develop materials and approaches. Researchers are interested in understanding the “do’s and don’ts” of communicating and disseminating study findings and any regulatory requirements that should be considered when communicating with research participants following a completed study. For example, one researcher asked, “From beginning to end – the do’s and don’ts – are stamps allowed as a direct cost? or can indirect costs include paper for printing newsletters, how about designing a website, a checklist for pulling together a newsletter?”

The purpose of this pilot study was to explore the current experiences, expectations, concerns, preferences, and capacities of PSP, including youth/young adult and older adult populations, and researchers for sharing, receiving, and using information on research study findings. PSP and researchers agreed, as shown in earlier work [ 3 , 5 ], that sharing information with participants upon study completion should be done and had value for both PSP and researchers. As in prior studies [ 3 , 5 ], both groups also agreed that sharing study findings could improve ancillary outcomes such as participant recruitment and enrollment and the use of research findings to improve health and health-care delivery, and could build overall community support for research. In addition, communicating results acknowledges study participants’ contributions to research, a principle rooted in respect for persons, which treats participants not merely as a means to further scientific investigation [ 5 ].

The majority of PSP indicated that they did not receive research findings from studies in which they participated, that they would like to receive such information, and that they preferred specific communication methods, such as email and phone calls, for receiving it. While our sample was small, we did identify preferences for communication channels and message format. Some differences and similarities in these preferences were identified between AYA and older adults, reinforcing the best practice of customizing communication channel and messaging to each specific group. However, the preference for email and the similar rank ordering of messaging formats suggest that some overall communication preferences may apply to most populations of PSP. It remains unclear whether participants prefer individual or aggregate results; the preference may depend on the type of study, for example, individual results of genotyping versus aggregate results of epidemiological studies [ 13 ]. A study by Miller et al. suggests that the impact of receiving aggregate results, whether clinically relevant or not, may equal that of receiving individual results [ 14 ]. Further investigation is warranted to evaluate whether, when, and how researchers should communicate different types of results to study participants, considering the effects of demographics such as age and ethnicity on preferences.

While researchers acknowledged that PSP would like to hear from them regarding research results and that they wanted to meet this expectation, they indicated needing specific training and/or time and resources to provide this information in a way that meets PSP needs and preferences. Costs associated with producing reports of findings were a concern of researchers in our study, similar to findings from a study conducted by Di Blasi and colleagues in which 15% of investigators (8 of 53) indicated that they wanted to avoid extra costs and extra administrative work associated with the conduct of their studies [ 15 ]. In the same study, the major reason for not informing participants about study results was that forty percent of investigators had never considered the option. Researchers were also unaware of resources available on existing platforms at their home institution or elsewhere to help them with communication and dissemination efforts [ 10 ].

Addressing Barriers to Implementation

Information from academic and other organizations on how to best communicate research findings in plain language is available and could be shared with researchers and their teams. The Cochrane Collaborative [ 16 ], the Centers for Disease Control and Prevention [ 17 ], and the Patient-Centered Outcomes Research Institute [ 18 ] have resources to help researchers develop plain language summaries using proven approaches to overcome literacy and other issues that limit participant access to study findings. Some academic institutions have electronic systems in place to confidentially share templated laboratory and other personal study information with participants and, if appropriate, with their health-care providers.

Limitations

Findings from the study are limited by several study and respondent characteristics. The sample was drawn from research records at one university conducting research in a relatively defined geographic area and among two special populations: AYA and older adults. As such, participants were not representative of the general population in the area, of the population of PSP or researchers available in the area, or of the racial and ethnic diversity of potential and/or actual participants in the geographic area. The small number of researcher participants did not represent the pool of researchers at the university, and the studies from which participants were drawn were not representative of the broad range of clinical and translational research undertaken by our institution or within the geographic community it serves. The number of survey and focus group participants was insufficient to allow robust analysis of findings specific to participants’ race, ethnicity, gender, or membership in the target age groups of AYA or older adults. However, these data will inform a future trial with adequate representation from underrepresented and special population groups.

Since all PSP had participated in research, they may have been biased in favor of wanting to know more about study results, and their views may have been shaped, positively or negatively, by the communication and dissemination methods they were exposed to through their participation in these studies.

Conclusions

Our findings provide information from PSP and researchers on their expectations about sharing study findings, their preferences for how findings should be communicated and disseminated, and their need for greater assistance in removing roadblocks to using proven communication and dissemination approaches. This information illustrates the potential to engage both PSP and researchers in designing and using communication and dissemination strategies and materials to share research findings, in efforts to disseminate findings more broadly, and in improving our understanding of how to interpret and communicate research findings for members of special population groups. Several initial prototypes were developed in response to this feedback and shared for review by participants in this study; future research will focus on finalizing and testing specific communication and dissemination prototypes aimed at these special population groups.

Findings from our study support a major goal of the National Center for Advancing Translational Sciences Recruitment Innovation Center: to engage and collaborate with patients and their communities to advance translational science. In response to increased awareness of the importance of sharing results with study participants and the general public, a template for dissemination of research results is available in the Recruitment and Retention Toolbox through the CTSA Trial Innovation Network (TIN: trialinnovationnetwork.org ). We believe that our findings will inform resources for use in special populations through collaborations within the TIN.

Acknowledgment

This pilot project was supported, in part, by the National Center for Advancing Translational Sciences of the NIH under Grant Number UL1 TR001450. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Disclosures

The authors have no conflicts of interest to declare.

Ethical Approval

This study was reviewed, approved, and continuously overseen by the IRB at the Medical University of South Carolina (ID: Pro00067659). All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.

Healthy lifestyle may offset genetics by 60% and add five years to life, study says

Genetics alone can mean a 21% greater risk of early death, research finds, but people can improve their chances

A healthy lifestyle may offset the impact of genetics by more than 60% and add another five years to your life, according to the first study of its kind.

It is well established that some people are genetically predisposed to a shorter lifespan. It is also well known that lifestyle factors, specifically smoking, alcohol consumption, diet and physical activity, can have an impact on longevity.

However, until now there has been no investigation to understand the extent to which a healthy lifestyle may counterbalance genetics.

Findings from several long-term studies suggest a healthy lifestyle could offset the effects of life-shortening genes by 62% and add as much as five years to your life. The results were published in the journal BMJ Evidence-Based Medicine.

“This study elucidates the pivotal role of a healthy lifestyle in mitigating the impact of genetic factors on lifespan reduction,” the researchers concluded. “Public health policies for improving healthy lifestyles would serve as potent complements to conventional healthcare and mitigate the influence of genetic factors on human lifespan.”

The study involved 353,742 people from the UK Biobank and showed that those with a high genetic risk of a shorter life have a 21% increased risk of early death compared with those with a low genetic risk, regardless of their lifestyle.

Meanwhile, people with unhealthy lifestyles have a 78% increased chance of early death, regardless of their genetic risk, researchers from Zhejiang University School of Medicine in China and the University of Edinburgh found.

The study added that having an unhealthy lifestyle and shorter lifespan genes more than doubled the risk of early death compared with people with luckier genes and healthy lifestyles.

However, researchers found that people did appear to have a degree of control over what happened. The genetic risk of a shorter lifespan or premature death may be offset by a favourable lifestyle by about 62%, they found.

They wrote: “Participants with high genetic risk could prolong approximately 5.22 years of life expectancy at age 40 with a favourable lifestyle.”

The “optimal lifestyle combination” for a longer life was found to be “never smoking, regular physical activity, adequate sleep duration and healthy diet”.

The study followed people for 13 years on average, during which time 24,239 deaths occurred. People were grouped into three genetically determined lifespan categories: long (20.1%), intermediate (60.1%) and short (19.8%), and into three lifestyle score categories: favourable (23.1%), intermediate (55.6%) and unfavourable (21.3%).

Researchers used polygenic risk scores, which combine the effects of multiple genetic variants, to arrive at a person’s overall genetic predisposition to a longer or shorter life. A separate lifestyle score covered whether people smoked, drank alcohol or took exercise, as well as their body shape, diet and sleep.
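The scoring approach described above can be sketched in a few lines of Python. This is purely illustrative and not the study’s actual pipeline: the effect sizes and category cutoffs below are hypothetical, and real polygenic risk scores weight thousands to millions of variants using effect sizes from genome-wide association studies.

```python
def polygenic_risk_score(dosages, betas):
    """Weighted sum of risk-allele counts: dosages are per-variant
    allele counts (0, 1, or 2); betas are per-variant effect sizes
    estimated in a genome-wide association study."""
    if len(dosages) != len(betas):
        raise ValueError("need one effect size per variant")
    return sum(d * b for d, b in zip(dosages, betas))

def lifespan_category(score, low_cut, high_cut):
    """Bucket a score into three lifespan groups, mirroring the
    study's long / intermediate / short split; the cutoffs here are
    hypothetical stand-ins for the empirical quantiles."""
    if score <= low_cut:
        return "long"          # low genetic risk of early death
    if score <= high_cut:
        return "intermediate"
    return "short"             # high genetic risk

# Three hypothetical variants: allele counts and effect sizes
score = polygenic_risk_score([2, 0, 1], [0.12, 0.30, 0.05])
print(round(score, 2))                     # 0.29
print(lifespan_category(score, 0.2, 0.5))  # intermediate
```

The published split (20.1% / 60.1% / 19.8%) suggests the study’s cutoffs sat near the 20th and 80th percentiles of the score distribution across all participants, rather than at fixed values like those above.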

Matt Lambert, a senior health information officer at the World Cancer Research Fund, said: “This new research shows that, despite genetic factors, living a healthy lifestyle, including eating a balanced nutritious diet and keeping active, can help us live longer.”

5 key findings about LGBTQ+ Americans

A Pride flag is displayed during the 52nd annual San Francisco Pride parade on June 26, 2022. (Arun Nevader/Getty Images)

Pew Research Center has been tracking Americans’ attitudes toward same-sex marriage, gender identity and other LGBTQ+ issues for more than a decade. In that time, we have also done deep explorations of the experiences of LGBT Americans and of transgender and nonbinary Americans.

As the United States celebrates LGBTQ+ Pride month , here are five key findings about LGBTQ+ Americans from our recent surveys:

A bar chart showing that 12% of young U.S. adults describe themselves as bisexual.

Some 7% of Americans are lesbian, gay or bisexual, according to a Pew Research Center survey of 12,147 U.S. adults conducted in summer 2022. Some 17% of adults younger than 30 identify as lesbian, gay or bisexual, compared with 8% of those ages 30 to 49, 5% of those 50 to 64 and 2% of those 65 and older. Similar shares of men and women identify with any of these terms, as do similar shares of adults across racial and ethnic groups.

Pew Research Center sought to provide an overview of findings on LGBTQ+ Americans. The overview is based on data from Center surveys and analyses conducted from 2019 to 2022, including a 2019 analysis of 2017 survey data from Stanford University. Links to the methodology and questions used can be found in the text and at the bottom of this overview.

More Americans identify as bisexual than as gay or lesbian. Among adults who are lesbian, gay or bisexual, 62% identify as bisexual, while 38% are gay or lesbian, according to the same 2022 survey.

Among Americans who are lesbian, gay or bisexual, the vast majority of women say they are bisexual (79%) while the majority of men say they are gay (57%).

Adults younger than 50 who are lesbian, gay or bisexual are far more likely to identify as bisexual (69%) than as gay or lesbian (31%). The opposite is true among those ages 50 and older: 66% identify as gay or lesbian and 34% as bisexual.

Bisexual adults are far less likely than gay or lesbian adults to be “out” to the important people in their life, according to a 2019 Center analysis of survey data from Stanford University. Only 19% of those who identify as bisexual say all or most of the important people in their life are aware of their sexual orientation. In contrast, 75% of gay or lesbian adults say the same. About one-quarter of bisexual adults (26%) say they are not “out” to any of the important people in their life, compared with 4% of gay or lesbian adults.

A bar chart that shows bisexual adults are far less likely to be ‘out’ to the important people in their life.

One factor that might contribute to bisexual adults being less likely to be “out” is that most (82%) bisexual men and women who are married or living with a partner are in a relationship with someone of the opposite gender, according to a new Center survey.

A bar chart showing that young adults are more likely than older adults to be transgender or nonbinary.

Some 1.6% of U.S. adults are transgender or nonbinary – that is, their gender differs from the sex they were assigned at birth.  Adults under 30 are more likely than older adults to be trans or nonbinary . Some 5.1% of adults younger than 30 are trans or nonbinary, including 2.0% who are trans men or trans women and 3.0% who are nonbinary – that is, they are neither a man nor a woman, or not strictly one or the other. (Due to rounding, subtotals may not add up to the total.) This compares with 1.6% of those ages 30 to 49 and 0.3% of those 50 and older who are trans or nonbinary.
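The rounding caveat in the parenthetical note is easy to reproduce. In this toy sketch the unrounded shares are hypothetical, chosen only to show how subtotals rounded to one decimal place can miss the rounded total:

```python
# Hypothetical unrounded survey shares (percent)
trans, nonbinary = 2.04, 3.04

parts = round(trans, 1) + round(nonbinary, 1)  # 2.0 + 3.0
total = round(trans + nonbinary, 1)            # round(5.08, 1)

print(parts)  # 5.0
print(total)  # 5.1  <- rounded subtotals fall short of the rounded total
```

Each component rounds down while their exact sum rounds up, so the displayed parts add to 5.0 against a displayed total of 5.1.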

The share of U.S. adults who are transgender is particularly high among adults younger than 25. In this age group, 3.1% are trans men or trans women, compared with just 0.5% of those ages 25 to 29. There is no statistically significant difference between these two age groups in the share who are nonbinary.

While a relatively small share of U.S. adults are transgender or nonbinary, many Americans say they know someone who is. More than four-in-ten U.S. adults (44%) say they personally know someone who is trans, and 20% know someone who is nonbinary.

A bar chart that shows more than four-in-ten U.S. adults report knowing a trans person.

About a quarter of U.S. adults (27%) say they have a trans friend, while 13% say they have a co-worker who is trans and 10% say they have a trans family member. About one-in-ten adults (9%) say they know a trans person who is younger than 18.

A 2021 Center survey found that 26% of U.S. adults personally knew someone who goes by gender-neutral pronouns such as “they” instead of “he” or “she,” up from 18% in 2018.

Note: This is an update of a post originally published June 13, 2017. Findings from two surveys were used in this analysis:

  • July 18-Aug. 21, 2022: Survey questions, with responses, and methodology
  • April 10-16, 2023: Survey questions, with responses, and methodology

Anna Brown is a research associate focusing on social and demographic trends research at Pew Research Center.


Copyright 2024 Pew Research Center

