Grad Coach

How To Write A Research Paper

Step-By-Step Tutorial With Examples + FREE Template

By: Derek Jansen (MBA) | Expert Reviewer: Dr Eunice Rautenbach | March 2024

For many students, crafting a strong research paper from scratch can feel like a daunting task – and rightly so! In this post, we’ll unpack what a research paper is, what it needs to do, and how to write one – in three easy steps. 🙂

Overview: Writing A Research Paper

  • What (exactly) is a research paper?
  • How to write a research paper
  • Stage 1: Topic & literature search
  • Stage 2: Structure & outline
  • Stage 3: Iterative writing
  • Key takeaways

Let’s start by asking the most important question: “What is a research paper?”

Simply put, a research paper is a scholarly written work where the writer (that’s you!) answers a specific question (this is called a research question) through evidence-based arguments. Evidence-based is the keyword here. In other words, a research paper is different from an essay or other writing assignments that draw from the writer’s personal opinions or experiences. With a research paper, it’s all about building your arguments based on evidence (we’ll talk more about that evidence a little later).

Now, it’s worth noting that there are many different types of research papers, including analytical papers (the type I just described), argumentative papers, and interpretative papers. Here, we’ll focus on analytical papers, as these are some of the most common – but if you’re keen to learn about other types of research papers, be sure to check out the rest of the blog.

With that basic foundation laid, let’s get down to business and look at how to write a research paper.


Overview: The 3-Stage Process

While there are, of course, many potential approaches you can take to write a research paper, there are typically three stages to the writing process. So, in this tutorial, we’ll present a straightforward three-step process that we use when working with students at Grad Coach.

These three steps are:

  • Finding a research topic and reviewing the existing literature
  • Developing a provisional structure and outline for your paper, and
  • Writing up your initial draft and then refining it iteratively

Let’s dig into each of these.


Step 1: Find a topic and review the literature

As we mentioned earlier, in a research paper, you, as the researcher, will try to answer a question. More specifically, that’s called a research question, and it sets the direction of your entire paper. What’s important to understand though is that you’ll need to answer that research question with the help of high-quality sources – for example, journal articles, government reports, case studies, and so on. We’ll circle back to this in a minute.

The first stage of the research process is deciding on what your research question will be and then reviewing the existing literature (in other words, past studies and papers) to see what they say about that specific research question. In some cases, your professor may provide you with a predetermined research question (or set of questions). However, in many cases, you’ll need to find your own research question within a certain topic area.

Finding a strong research question hinges on identifying a meaningful research gap – in other words, an area that’s lacking in existing research. There’s a lot to unpack here, so if you want to learn more, check out the plain-language explainer video below.

Once you’ve figured out which question (or questions) you’ll attempt to answer in your research paper, you’ll need to do a deep dive into the existing literature – this is called a “literature search”. Again, there are many ways to go about this, but your most likely starting point will be Google Scholar.

If you’re new to Google Scholar, think of it as Google for the academic world. You can start by simply entering a few different keywords that are relevant to your research question and it will then present a host of articles for you to review. What you want to pay close attention to here is the number of citations for each paper – the more citations a paper has, the more credible it is (generally speaking – there are some exceptions, of course).


Ideally, what you’re looking for are well-cited papers that are highly relevant to your topic. That said, keep in mind that citations are a cumulative metric, so older papers will often have more citations than newer papers – just because they’ve been around for longer. So, don’t fixate on this metric in isolation – relevance and recency are also very important.

Beyond Google Scholar, you’ll also definitely want to check out academic databases and aggregators such as ScienceDirect, PubMed, JSTOR and so on. These will often overlap with the results that you find in Google Scholar, but they can also reveal some hidden gems – so, be sure to check them out.

Once you’ve worked your way through all the literature, you’ll want to catalogue all this information in some sort of spreadsheet so that you can easily recall who said what, when and within what context. If you’d like, we’ve got a free literature spreadsheet that helps you do exactly that.
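To make that cataloguing step concrete, here’s a minimal sketch of what such a literature spreadsheet might look like, built with Python’s `csv` module. The column names and entries below are purely illustrative assumptions, not a prescribed format – adapt them to your own field and research question.

```python
import csv

# Illustrative columns for a literature catalogue; adjust to suit your topic.
FIELDS = ["author", "year", "title", "key_finding", "relevance_to_rq"]

# Hypothetical entries – in practice, one row per source you reviewed.
sources = [
    {"author": "Smith", "year": 2021, "title": "Example study A",
     "key_finding": "Summary of the main argument", "relevance_to_rq": "high"},
    {"author": "Jones", "year": 2019, "title": "Example study B",
     "key_finding": "Summary of the main argument", "relevance_to_rq": "medium"},
]

# Write the catalogue out as a spreadsheet-friendly CSV file.
with open("literature_catalogue.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(sources)

# Recall "who said what, when": sort the catalogue by year, newest first.
by_recency = sorted(sources, key=lambda s: s["year"], reverse=True)
print([s["author"] for s in by_recency])  # → ['Smith', 'Jones']
```

The point of keeping this in a structured file (rather than scattered notes) is that you can sort and filter by relevance or recency later, when you’re drafting your literature review.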

Don’t fixate on an article’s citation count in isolation – relevance (to your research question) and recency are also very important.

Step 2: Develop a structure and outline

With your research question pinned down and your literature digested and catalogued, it’s time to move on to planning your actual research paper.

It might sound obvious, but it’s really important to have some sort of rough outline in place before you start writing your paper. So often, we see students eagerly rushing into the writing phase, only to end up with a disjointed research paper that rambles on in multiple directions.

Now, the secret here is to not get caught up in the fine details. Realistically, all you need at this stage is a bullet-point list that describes (in broad strokes) what you’ll discuss and in what order. It’s also useful to remember that you’re not glued to this outline – in all likelihood, you’ll chop and change some sections once you start writing, and that’s perfectly okay. What’s important is that you have some sort of roadmap in place from the start.

You need to have a rough outline in place before you start writing your paper – or you’ll end up with a disjointed research paper that rambles on.

At this stage you might be wondering, “But how should I structure my research paper?”. Well, there’s no one-size-fits-all solution here, but in general, a research paper will consist of a few relatively standardised components:

  • Introduction
  • Literature review
  • Methodology
  • Results and discussion
  • Conclusion

Let’s take a look at each of these.

First up is the introduction section. As the name suggests, the purpose of the introduction is to set the scene for your research paper. There are usually (at least) four ingredients that go into this section – these are the background to the topic, the research problem, the resultant research question, and the justification or rationale. If you’re interested, the video below unpacks the introduction section in more detail.

The next section of your research paper will typically be your literature review. Remember all that literature you worked through earlier? Well, this is where you’ll present your interpretation of all that content. You’ll do this by writing about recent trends, developments, and arguments within the literature – but more specifically, those that are relevant to your research question. The literature review can oftentimes seem a little daunting, even to seasoned researchers, so be sure to check out our extensive collection of literature review content here.

With the introduction and lit review out of the way, the next section of your paper is the research methodology . In a nutshell, the methodology section should describe to your reader what you did (beyond just reviewing the existing literature) to answer your research question. For example, what data did you collect, how did you collect that data, how did you analyse that data and so on? For each choice, you’ll also need to justify why you chose to do it that way, and what the strengths and weaknesses of your approach were.

Now, it’s worth mentioning that for some research papers, this aspect of the project may be a lot simpler. For example, you may only need to draw on secondary sources (in other words, existing data sets). In some cases, you may just be asked to draw your conclusions from the literature search itself (in other words, there may be no data analysis at all). But, if you are required to collect and analyse data, you’ll need to pay a lot of attention to the methodology section. The video below provides an example of what the methodology section might look like.

By this stage of your paper, you will have explained what your research question is, what the existing literature has to say about that question, and how you analysed additional data to try to answer your question. So, the natural next step is to present your analysis of that data. This section is usually called the “results” or “analysis” section and this is where you’ll showcase your findings.

Depending on your school’s requirements, you may need to present and interpret the data in one section – or you might split the presentation and the interpretation into two sections. In the latter case, your “results” section will just describe the data, and the “discussion” is where you’ll interpret that data and explicitly link your analysis back to your research question. If you’re not sure which approach to take, check in with your professor or take a look at past papers to see what the norms are for your programme.

Alright – once you’ve presented and discussed your results, it’s time to wrap it up. This usually takes the form of the “conclusion” section. In the conclusion, you’ll need to highlight the key takeaways from your study and close the loop by explicitly answering your research question. Again, the exact requirements here will vary depending on your programme (and you may not even need a conclusion section at all) – so be sure to check with your professor if you’re unsure.

Step 3: Write and refine

Finally, it’s time to get writing. All too often though, students hit a brick wall right about here… So, how do you avoid this happening to you?

Well, there’s a lot to be said when it comes to writing a research paper (or any sort of academic piece), but we’ll share three practical tips to help you get started.

First and foremost, it’s essential to approach your writing as an iterative process. In other words, you need to start with a really messy first draft and then polish it over multiple rounds of editing. Don’t waste your time trying to write a perfect research paper in one go. Instead, take the pressure off yourself by adopting an iterative approach.

Secondly, it’s important to always lean towards critical writing, rather than descriptive writing. What does this mean? Well, at the simplest level, descriptive writing focuses on the “what”, while critical writing digs into the “so what” – in other words, the implications. If you’re not familiar with these two types of writing, don’t worry! You can find a plain-language explanation here.

Last but not least, you’ll need to get your referencing right. Specifically, you’ll need to provide credible, correctly formatted citations for the statements you make. We see students making referencing mistakes all the time and it costs them dearly. The good news is that you can easily avoid this by using a simple reference manager. If you don’t have one, check out our video about Mendeley, an easy (and free) reference management tool that you can start using today.

Recap: Key Takeaways

We’ve covered a lot of ground here. To recap, the three steps to writing a high-quality research paper are:

  • To choose a research question and review the literature
  • To plan your paper structure and draft an outline
  • To take an iterative approach to writing, focusing on critical writing and strong referencing

Remember, this is just a big-picture overview of the research paper development process and there’s a lot more nuance to unpack. So, be sure to grab a copy of our free research paper template to learn more about how to write a research paper.


Research Paper – Structure, Examples and Writing Guide

Definition:

A research paper is a written document that presents the author’s original research, analysis, and interpretation of a specific topic or issue.

It is typically based on empirical evidence, and may involve qualitative or quantitative research methods, or a combination of both. The purpose of a research paper is to contribute new knowledge or insights to a particular field of study, and to demonstrate the author’s understanding of the existing literature and theories related to the topic.

Structure of Research Paper

The structure of a research paper typically follows a standard format, consisting of several sections that convey specific information about the research study. The following is a detailed explanation of the structure of a research paper:

Title Page

The title page contains the title of the paper, the name(s) of the author(s), and the affiliation(s) of the author(s). It also includes the date of submission and, possibly, the name of the journal or conference where the paper is to be published.

Abstract

The abstract is a brief summary of the research paper, typically ranging from 100 to 250 words. It should include the research question, the methods used, the key findings, and the implications of the results. The abstract should be written in a concise and clear manner to allow readers to quickly grasp the essence of the research.

Introduction

The introduction section of a research paper provides background information about the research problem, the research question, and the research objectives. It also outlines the significance of the research, the research gap that it aims to fill, and the approach taken to address the research question. Finally, the introduction section ends with a clear statement of the research hypothesis or research question.

Literature Review

The literature review section of a research paper provides an overview of the existing literature on the topic of study. It includes a critical analysis and synthesis of the literature, highlighting the key concepts, themes, and debates. The literature review should also demonstrate the research gap and how the current study seeks to address it.

Methods

The methods section of a research paper describes the research design, the sample selection, the data collection and analysis procedures, and the statistical methods used to analyze the data. This section should provide sufficient detail for other researchers to replicate the study.

Results

The results section presents the findings of the research, using tables, graphs, and figures to illustrate the data. The findings should be presented in a clear and concise manner, with reference to the research question and hypothesis.

Discussion

The discussion section of a research paper interprets the findings and discusses their implications for the research question, the literature review, and the field of study. It should also address the limitations of the study and suggest future research directions.

Conclusion

The conclusion section summarizes the main findings of the study, restates the research question and hypothesis, and provides a final reflection on the significance of the research.

References

The references section provides a list of all the sources cited in the paper, following a specific citation style such as APA, MLA, or Chicago.
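To illustrate the kind of consistency a citation style enforces, here is a hedged Python sketch that assembles a simplified APA-style journal reference. This is a deliberately reduced toy helper under stated assumptions – real APA 7th edition formatting also involves italics, issue numbers, DOIs, and many edge cases – so treat it as an illustration, not a substitute for a reference manager.

```python
def apa_reference(authors, year, title, journal, volume, pages):
    """Assemble a simplified APA-style journal reference string.

    NOTE: a reduced sketch – real APA formatting also requires italics
    for the journal name and volume, issue numbers, DOIs, and more.
    """
    # APA joins multiple authors with commas and an ampersand before the last.
    if len(authors) > 1:
        author_str = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        author_str = authors[0]
    return f"{author_str} ({year}). {title}. {journal}, {volume}, {pages}."

# Example using the general shape "Author, A. A. (Year). Title. Journal, Vol, pages."
ref = apa_reference(
    ["Twenge, J. M.", "Campbell, W. K."],
    2019,
    "Associations between screen time and lower psychological well-being",
    "Preventive Medicine Reports",
    15,
    "100918",
)
print(ref)
```

The same source data could be rendered in MLA or Chicago style by swapping the formatting logic – which is exactly why reference managers store sources as structured records rather than pre-formatted strings.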

How to Write a Research Paper

You can write a research paper by following this guide:

  • Choose a Topic: The first step is to select a topic that interests you and is relevant to your field of study. Brainstorm ideas and narrow down to a research question that is specific and researchable.
  • Conduct a Literature Review: The literature review helps you identify the gap in the existing research and provides a basis for your research question. It also helps you to develop a theoretical framework and research hypothesis.
  • Develop a Thesis Statement: The thesis statement is the main argument of your research paper. It should be clear, concise, and specific to your research question.
  • Plan your Research: Develop a research plan that outlines the methods, data sources, and data analysis procedures. This will help you to collect and analyze data effectively.
  • Collect and Analyze Data: Collect data using various methods such as surveys, interviews, observations, or experiments. Analyze data using statistical tools or other qualitative methods.
  • Organize your Paper: Organize your paper into sections such as Introduction, Literature Review, Methods, Results, Discussion, and Conclusion. Ensure that each section is coherent and follows a logical flow.
  • Write your Paper: Start by writing the introduction, followed by the literature review, methods, results, discussion, and conclusion. Ensure that your writing is clear, concise, and follows the required formatting and citation styles.
  • Edit and Proofread your Paper: Review your paper for grammar and spelling errors, and ensure that it is well-structured and easy to read. Ask someone else to review your paper to get feedback and suggestions for improvement.
  • Cite your Sources: Ensure that you properly cite all sources used in your research paper. This is essential for giving credit to the original authors and avoiding plagiarism.

Research Paper Example

Note : The below example research paper is for illustrative purposes only and is not an actual research paper. Actual research papers may have different structures, contents, and formats depending on the field of study, research question, data collection and analysis methods, and other factors. Students should always consult with their professors or supervisors for specific guidelines and expectations for their research papers.

Example research paper for students:

Title: The Impact of Social Media on Mental Health among Young Adults

Abstract: This study aims to investigate the impact of social media use on the mental health of young adults. A literature review was conducted to examine the existing research on the topic. A survey was then administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO (Fear of Missing Out) are significant predictors of mental health problems among young adults.

Introduction: Social media has become an integral part of modern life, particularly among young adults. While social media has many benefits, including increased communication and social connectivity, it has also been associated with negative outcomes, such as addiction, cyberbullying, and mental health problems. This study aims to investigate the impact of social media use on the mental health of young adults.

Literature Review: The literature review highlights the existing research on the impact of social media use on mental health. The review shows that social media use is associated with depression, anxiety, stress, and other mental health problems. The review also identifies the factors that contribute to the negative impact of social media, including social comparison, cyberbullying, and FOMO.

Methods: A survey was administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The survey included questions on social media use, mental health status (measured using the DASS-21), and perceived impact of social media on their mental health. Data were analyzed using descriptive statistics and regression analysis.
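To make the “descriptive statistics and regression analysis” step in this example less abstract, here is a hedged Python sketch with entirely invented numbers (it is not the example study’s actual data or analysis code): descriptive means followed by an ordinary least-squares fit of a stress score against daily social media hours.

```python
import statistics

# Invented illustration: daily social media hours vs. a stress score
# for five hypothetical respondents (not real survey data).
hours  = [1.0, 2.0, 3.0, 4.0, 5.0]
stress = [10,  14,  18,  22,  26]

# Descriptive statistics.
mean_hours = statistics.mean(hours)
mean_stress = statistics.mean(stress)

# Simple linear regression by least squares: stress = intercept + slope * hours.
sxx = sum((x - mean_hours) ** 2 for x in hours)
sxy = sum((x - mean_hours) * (y - mean_stress) for x, y in zip(hours, stress))
slope = sxy / sxx
intercept = mean_stress - slope * mean_hours

print(slope, intercept)  # → 4.0 6.0
```

A real analysis like the one described would typically use a statistics package (and report standard errors and p-values), but the underlying least-squares computation is the same.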

Results: The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO are significant predictors of mental health problems among young adults.

Discussion: The study’s findings suggest that social media use has a negative impact on the mental health of young adults. The study highlights the need for interventions that address the factors contributing to the negative impact of social media, such as social comparison, cyberbullying, and FOMO.

Conclusion: In conclusion, social media use has a significant impact on the mental health of young adults. The study’s findings underscore the need for interventions that promote healthy social media use and address the negative outcomes associated with social media use. Future research can explore the effectiveness of interventions aimed at reducing the negative impact of social media on mental health. Additionally, longitudinal studies can investigate the long-term effects of social media use on mental health.

Limitations: The study has some limitations, including the use of self-report measures and a cross-sectional design. The use of self-report measures may result in biased responses, and a cross-sectional design limits the ability to establish causality.

Implications: The study’s findings have implications for mental health professionals, educators, and policymakers. Mental health professionals can use the findings to develop interventions that address the negative impact of social media use on mental health. Educators can incorporate social media literacy into their curriculum to promote healthy social media use among young adults. Policymakers can use the findings to develop policies that protect young adults from the negative outcomes associated with social media use.

References:

  • Twenge, J. M., & Campbell, W. K. (2019). Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Preventive medicine reports, 15, 100918.
  • Primack, B. A., Shensa, A., Escobar-Viera, C. G., Barrett, E. L., Sidani, J. E., Colditz, J. B., … & James, A. E. (2017). Use of multiple social media platforms and symptoms of depression and anxiety: A nationally-representative study among US young adults. Computers in Human Behavior, 69, 1-9.
  • Van der Meer, T. G., & Verhoeven, J. W. (2017). Social media and its impact on academic performance of students. Journal of Information Technology Education: Research, 16, 383-398.

Appendix: The survey used in this study is provided below.

Social Media and Mental Health Survey

  • How often do you use social media per day?
      ◦ Less than 30 minutes
      ◦ 30 minutes to 1 hour
      ◦ 1 to 2 hours
      ◦ 2 to 4 hours
      ◦ More than 4 hours
  • Which social media platforms do you use?
      ◦ Others (Please specify)
  • How often do you experience the following on social media?
      ◦ Social comparison (comparing yourself to others)
      ◦ Cyberbullying
      ◦ Fear of Missing Out (FOMO)
  • Have you ever experienced any of the following mental health problems in the past month?
  • Do you think social media use has a positive or negative impact on your mental health?
      ◦ Very positive
      ◦ Somewhat positive
      ◦ Somewhat negative
      ◦ Very negative
  • In your opinion, which factors contribute to the negative impact of social media on mental health?
      ◦ Social comparison
  • In your opinion, what interventions could be effective in reducing the negative impact of social media on mental health?
      ◦ Education on healthy social media use
      ◦ Counseling for mental health problems caused by social media
      ◦ Social media detox programs
      ◦ Regulation of social media use
Thank you for your participation!

Applications of Research Paper

Research papers have several applications in various fields, including:

  • Advancing knowledge: Research papers contribute to the advancement of knowledge by generating new insights, theories, and findings that can inform future research and practice. They help to answer important questions, clarify existing knowledge, and identify areas that require further investigation.
  • Informing policy: Research papers can inform policy decisions by providing evidence-based recommendations for policymakers. They can help to identify gaps in current policies, evaluate the effectiveness of interventions, and inform the development of new policies and regulations.
  • Improving practice: Research papers can improve practice by providing evidence-based guidance for professionals in various fields, including medicine, education, business, and psychology. They can inform the development of best practices, guidelines, and standards of care that can improve outcomes for individuals and organizations.
  • Educating students : Research papers are often used as teaching tools in universities and colleges to educate students about research methods, data analysis, and academic writing. They help students to develop critical thinking skills, research skills, and communication skills that are essential for success in many careers.
  • Fostering collaboration: Research papers can foster collaboration among researchers, practitioners, and policymakers by providing a platform for sharing knowledge and ideas. They can facilitate interdisciplinary collaborations and partnerships that can lead to innovative solutions to complex problems.

When to Write Research Paper

Research papers are typically written when a person has completed a research project or when they have conducted a study and have obtained data or findings that they want to share with the academic or professional community. Research papers are usually written in academic settings, such as universities, but they can also be written in professional settings, such as research organizations, government agencies, or private companies.

Here are some common situations where a person might need to write a research paper:

  • For academic purposes: Students in universities and colleges are often required to write research papers as part of their coursework, particularly in the social sciences, natural sciences, and humanities. Writing research papers helps students to develop research skills, critical thinking skills, and academic writing skills.
  • For publication: Researchers often write research papers to publish their findings in academic journals or to present their work at academic conferences. Publishing research papers is an important way to disseminate research findings to the academic community and to establish oneself as an expert in a particular field.
  • To inform policy or practice : Researchers may write research papers to inform policy decisions or to improve practice in various fields. Research findings can be used to inform the development of policies, guidelines, and best practices that can improve outcomes for individuals and organizations.
  • To share new insights or ideas: Researchers may write research papers to share new insights or ideas with the academic or professional community. They may present new theories, propose new research methods, or challenge existing paradigms in their field.

Purpose of Research Paper

The purpose of a research paper is to present the results of a study or investigation in a clear, concise, and structured manner. Research papers are written to communicate new knowledge, ideas, or findings to a specific audience, such as researchers, scholars, practitioners, or policymakers. The primary purposes of a research paper are:

  • To contribute to the body of knowledge : Research papers aim to add new knowledge or insights to a particular field or discipline. They do this by reporting the results of empirical studies, reviewing and synthesizing existing literature, proposing new theories, or providing new perspectives on a topic.
  • To inform or persuade: Research papers are written to inform or persuade the reader about a particular issue, topic, or phenomenon. They present evidence and arguments to support their claims and seek to persuade the reader of the validity of their findings or recommendations.
  • To advance the field: Research papers seek to advance the field or discipline by identifying gaps in knowledge, proposing new research questions or approaches, or challenging existing assumptions or paradigms. They aim to contribute to ongoing debates and discussions within a field and to stimulate further research and inquiry.
  • To demonstrate research skills: Research papers demonstrate the author’s research skills, including their ability to design and conduct a study, collect and analyze data, and interpret and communicate findings. They also demonstrate the author’s ability to critically evaluate existing literature, synthesize information from multiple sources, and write in a clear and structured manner.

Characteristics of Research Paper

Research papers have several characteristics that distinguish them from other forms of academic or professional writing. Here are some common characteristics of research papers:

  • Evidence-based: Research papers are based on empirical evidence, which is collected through rigorous research methods such as experiments, surveys, observations, or interviews. They rely on objective data and facts to support their claims and conclusions.
  • Structured and organized: Research papers have a clear and logical structure, with sections such as introduction, literature review, methods, results, discussion, and conclusion. They are organized in a way that helps the reader to follow the argument and understand the findings.
  • Formal and objective: Research papers are written in a formal and objective tone, with an emphasis on clarity, precision, and accuracy. They avoid subjective language or personal opinions and instead rely on objective data and analysis to support their arguments.
  • Citations and references: Research papers include citations and references to acknowledge the sources of information and ideas used in the paper. They use a specific citation style, such as APA, MLA, or Chicago, to ensure consistency and accuracy.
  • Peer-reviewed: Research papers are often peer-reviewed, which means they are evaluated by other experts in the field before they are published. Peer review ensures that the research is of high quality, meets ethical standards, and contributes to the advancement of knowledge in the field.
  • Objective and unbiased: Research papers strive to be objective and unbiased in their presentation of the findings. They avoid personal biases or preconceptions and instead rely on the data and analysis to draw conclusions.

Advantages of a Research Paper

Research papers have many advantages, both for the individual researcher and for the broader academic and professional community. Here are some advantages of research papers:

  • Contribution to knowledge: Research papers contribute to the body of knowledge in a particular field or discipline. They add new information, insights, and perspectives to existing literature and help advance the understanding of a particular phenomenon or issue.
  • Opportunity for intellectual growth: Research papers provide an opportunity for intellectual growth for the researcher. They require critical thinking, problem-solving, and creativity, which can help develop the researcher’s skills and knowledge.
  • Career advancement: Research papers can help advance the researcher’s career by demonstrating their expertise and contributions to the field. They can also lead to new research opportunities, collaborations, and funding.
  • Academic recognition: Research papers can lead to academic recognition in the form of awards, grants, or invitations to speak at conferences or events. They can also contribute to the researcher’s reputation and standing in the field.
  • Impact on policy and practice: Research papers can have a significant impact on policy and practice. They can inform policy decisions, guide practice, and lead to changes in laws, regulations, or procedures.
  • Advancement of society: Research papers can contribute to the advancement of society by addressing important issues, identifying solutions to problems, and promoting social justice and equality.

Limitations of a Research Paper

Research papers also have some limitations that should be considered when interpreting their findings or implications. Here are some common limitations of research papers:

  • Limited generalizability: Research findings may not be generalizable to other populations, settings, or contexts. Studies often use specific samples or conditions that may not reflect the broader population or real-world situations.
  • Potential for bias: Research papers may be biased due to factors such as sample selection, measurement errors, or researcher biases. It is important to evaluate the quality of the research design and methods used to ensure that the findings are valid and reliable.
  • Ethical concerns: Research papers may raise ethical concerns, such as the use of vulnerable populations or invasive procedures. Researchers must adhere to ethical guidelines and obtain informed consent from participants to ensure that the research is conducted in a responsible and respectful manner.
  • Limitations of methodology: Research papers may be limited by the methodology used to collect and analyze data. For example, certain research methods may not capture the complexity or nuance of a particular phenomenon, or may not be appropriate for certain research questions.
  • Publication bias: Research papers may be subject to publication bias, where positive or significant findings are more likely to be published than negative or non-significant findings. This can skew the overall findings of a particular area of research.
  • Time and resource constraints: Research papers may be limited by time and resource constraints, which can affect the quality and scope of the research. Researchers may not have access to certain data or resources, or may be unable to conduct long-term studies due to practical limitations.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


  • How to write a research paper

Last updated

11 January 2024


With proper planning, knowledge, and framework, completing a research paper can be a fulfilling and exciting experience. 

Though it might initially sound slightly intimidating, this guide will help you embrace the challenge. 

By documenting your findings, you can inspire others and make a difference in your field. Here's how you can make your research paper unique and comprehensive.

  • What is a research paper?

Research papers allow you to demonstrate your knowledge and understanding of a particular topic. These papers are usually lengthier and more detailed than typical essays, requiring deeper insight into the chosen topic.

To write a research paper, you must first choose a topic that interests you and is relevant to the field of study. Once you’ve selected your topic, gathering as many relevant resources as possible, including books, scholarly articles, credible websites, and other academic materials, is essential. You must then read and analyze these sources, summarizing their key points and identifying gaps in the current research.

You can formulate your ideas and opinions once you thoroughly understand the existing research. Getting there might involve conducting original research, gathering data, or analyzing existing data sets. It could also involve presenting an original argument or interpretation of the existing research.

Writing a successful research paper involves presenting your findings clearly and engagingly, which might involve using charts, graphs, or other visual aids to present your data and using concise language to explain your findings. You must also ensure your paper adheres to relevant academic formatting guidelines, including proper citations and references.

Overall, writing a research paper requires a significant amount of time, effort, and attention to detail. However, it is also an enriching experience that allows you to delve deeply into a subject that interests you and contribute to the existing body of knowledge in your chosen field.

  • How long should a research paper be?

Research papers are deep dives into a topic. Therefore, they tend to be longer pieces of work than essays or opinion pieces. 

However, a suitable length depends on the complexity of the topic and your level of expertise. For instance, are you a first-year college student or an experienced professional? 

Also, remember that the best research papers provide valuable information for the benefit of others. Therefore, the quality of information matters most, not necessarily the length. Being concise is valuable.

Following these best practice steps will help keep your process simple and productive:

1. Gain a deep understanding of any expectations

Before diving into your intended topic or beginning the research phase, take some time to orient yourself. If a specific topic has been assigned to you, it’s essential to deeply understand the question and organize your planning and approach in response. Pay attention to the key requirements and ensure you align your writing accordingly. 

This preparation step entails:

Deeply understanding the task or assignment

Being clear about the expected format and length

Familiarizing yourself with the citation and referencing requirements 

Understanding any defined limits for your research contribution

Where applicable, speaking to your professor or research supervisor for further clarification

2. Choose your research topic

Select a research topic that aligns with both your interests and available resources. Ideally, focus on a field where you possess significant experience and analytical skills. In crafting your research paper, it's crucial to go beyond summarizing existing data and contribute fresh insights to the chosen area.

Consider narrowing your focus to a specific aspect of the topic. For example, if exploring the link between technology and mental health, delve into how social media use during the pandemic impacts the well-being of college students. Conducting interviews and surveys with students could provide firsthand data and unique perspectives, adding substantial value to the existing knowledge.

When finalizing your topic, adhere to legal and ethical norms in the relevant area (this ensures the integrity of your research, protects participants' rights, upholds intellectual property standards, and ensures transparency and accountability). Following these principles not only maintains the credibility of your work but also builds trust within your academic or professional community.

For instance, in writing about medical research, consider legal and ethical norms, including patient confidentiality laws and informed consent requirements. Similarly, if analyzing user data on social media platforms, be mindful of data privacy regulations, ensuring compliance with laws governing personal information collection and use. Aligning with legal and ethical standards not only avoids potential issues but also underscores the responsible conduct of your research.

3. Gather preliminary research

Once you’ve landed on your topic, it’s time to explore it further. You’ll want to discover more about available resources and existing research relevant to your assignment at this stage. 

This exploratory phase is vital as you may discover issues with your original idea or realize you have insufficient resources to explore the topic effectively. This key bit of groundwork allows you to redirect your research topic in a different, more feasible, or more relevant direction if necessary. 

Spending ample time at this stage ensures you gather everything you need, learn as much as you can about the topic, and discover gaps where the topic has yet to be sufficiently covered, offering an opportunity to research it further. 

4. Define your research question

To produce a well-structured and focused paper, it is imperative to formulate a clear and precise research question that will guide your work. Your research question must be informed by the existing literature and tailored to the scope and objectives of your project. By refining your focus, you can produce a thoughtful and engaging paper that effectively communicates your ideas to your readers.

5. Write a thesis statement

A thesis statement is a one-to-two-sentence summary of your research paper's main argument or direction. It serves as a guide, summarizing the overall intent of the research paper for you and anyone wanting to know more about the research.

A strong thesis statement is:

Concise and clear: Explain your case in simple sentences (avoid covering multiple ideas). It might help to think of this section as an elevator pitch.

Specific: Ensure that there is no ambiguity in your statement and that your summary covers the points argued in the paper.

Debatable: A thesis statement puts forward a specific argument––it is not merely a statement but a debatable point that can be analyzed and discussed.

Here are three thesis statement examples from different disciplines:

Psychology thesis example: "We're studying adults aged 25-40 to see if taking short breaks for mindfulness can help with stress. Our goal is to find practical ways to manage anxiety better."

Environmental science thesis example: "This research paper looks into how having more city parks might make the air cleaner and keep people healthier. I want to find out if more green space means breathing fewer carcinogens in big cities."

UX research thesis example: "This study focuses on improving mobile banking for older adults using ethnographic research, eye-tracking analysis, and interactive prototyping. We investigate the usefulness of eye-tracking analysis with older individuals, aiming to spark debate and offer fresh perspectives on UX design and digital inclusivity for the aging population."

6. Conduct in-depth research

A research paper doesn’t just include research that you’ve uncovered from other papers and studies, but your fresh insights, too. You will seek to become an expert on your topic, understanding the nuances in the current leading theories. You will analyze existing research and add your own thinking and discoveries.

It's crucial to conduct well-designed research that is rigorous, robust, and based on reliable sources. If a research paper lacks evidence or is biased, it won't benefit the academic community or the general public. Therefore, it's essential to examine the topic thoroughly and further its understanding through high-quality research. That usually means conducting new research. Depending on the area under investigation, you may conduct surveys, interviews, diary studies, or observational research to uncover new insights or bolster current claims.

7. Determine supporting evidence

Not every piece of research you’ve discovered will be relevant to your research paper. Select the most meaningful evidence to include alongside your discoveries. Also include evidence that doesn't support your claims, to avoid exclusion bias and ensure a fair research paper.

8. Write a research paper outline

Before diving in and writing the whole paper, start with an outline. It will help you to see if more research is needed, and it will provide a framework by which to write a more compelling paper. Your supervisor may even request an outline to approve before beginning to write the first draft of the full paper. An outline will include your topic, thesis statement, key headings, short summaries of the research, and your arguments.

9. Write your first draft

Once you feel confident about your outline and sources, it’s time to write your first draft. While penning a long piece of content can be intimidating, if you’ve laid the groundwork, you will have a structure to help you move steadily through each section. To keep up motivation and inspiration, it’s often best to keep the pace quick. Stopping for long periods can interrupt your flow and make jumping back in harder than writing when things are fresh in your mind.

10. Cite your sources correctly

It's always a good practice to give credit where it's due, and the same goes for citing any works that have influenced your paper. Building your arguments on credible references adds value and authenticity to your research. In the formatting guidelines section, you’ll find an overview of different citation styles (MLA, CMOS, or APA), which will help you meet any publishing or academic requirements and strengthen your paper's credibility. It is essential to follow the guidelines provided by your school or your target publication to ensure the accuracy and relevance of your citations.

11. Ensure your work is original

It is crucial to ensure the originality of your paper, as plagiarism can lead to serious consequences. To avoid plagiarism, you should use proper paraphrasing and quoting techniques. Paraphrasing is rewriting a text in your own words while maintaining the original meaning. Quoting involves directly citing the source. Giving credit to the original author or source is essential whenever you borrow their ideas or words. You can also use plagiarism detection tools such as Scribbr or Grammarly to check the originality of your paper. These tools compare your draft writing to a vast database of online sources. If you find any accidental plagiarism, you should correct it immediately by rephrasing or citing the source.

12. Revise, edit, and proofread

One of the essential qualities of excellent writers is their ability to understand the importance of editing and proofreading. Even though it's tempting to call it a day once you've finished your writing, editing your work can significantly improve its quality. It's natural to overlook the weaker areas when you've just finished writing a paper. Therefore, it's best to take a break of a day or two, or even up to a week, to refresh your mind. This way, you can return to your work with a new perspective. After some breathing room, you can spot any inconsistencies, spelling and grammar errors, typos, or missing citations and correct them. 

  • The best research paper format 

The format of your research paper should align with the requirements set forth by your college, school, or target publication. 

There is no one “best” format, per se. Depending on the stated requirements, you may need to include the following elements:

Title page: The title page of a research paper typically includes the title, author's name, and institutional affiliation and may include additional information such as a course name or instructor's name. 

Table of contents: Include a table of contents to make it easy for readers to find specific sections of your paper.

Abstract: The abstract is a summary of the purpose of the paper.

Methods: In this section, describe the research methods used. This may include collecting data, conducting interviews, or doing field research.

Results: Summarize the conclusions you drew from your research in this section.

Discussion: In this section, discuss the implications of your research. Be sure to mention any significant limitations to your approach and suggest areas for further research.

Tables, charts, and illustrations: Use tables, charts, and illustrations to help convey your research findings and make them easier to understand.

Works cited or reference page: Include a works cited or reference page to give credit to the sources that you used to conduct your research.

Bibliography: Provide a list of all the sources you consulted while conducting your research.

Dedication and acknowledgments: Optionally, you may include a dedication and acknowledgments section to thank individuals who helped you with your research.
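For illustration only, here is a minimal sketch of how several of these elements can map onto a LaTeX document skeleton. The title, author placeholders, and the file name `references.bib` are hypothetical, and your school or target publication may require a specific template instead:

```latex
\documentclass[12pt]{article}

% Title page information (hypothetical title and author)
\title{Urban Green Spaces and Air Quality}
\author{Author Name \\ Institutional Affiliation}
\date{\today}

\begin{document}

\maketitle

\begin{abstract}
A brief summary of the purpose, methods, and findings of the paper.
\end{abstract}

\tableofcontents

\section{Introduction}
\section{Methods}
\section{Results}
\section{Discussion}

% Reference list, built from a references.bib file
\bibliographystyle{apalike}
\bibliography{references}

\end{document}
```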

  • General style and formatting guidelines

Formatting your research paper means you can submit it to your college, journal, or other publications in compliance with their criteria.

Research papers tend to follow the American Psychological Association (APA), Modern Language Association (MLA), or Chicago Manual of Style (CMOS) guidelines.

Here’s how each style guide is typically used:

Chicago Manual of Style (CMOS):

CMOS is a versatile style guide used for various types of writing. It's known for its flexibility and use in the humanities. CMOS provides guidelines for citations, formatting, and overall writing style. It allows for both footnotes and in-text citations, giving writers options based on their preferences or publication requirements.

American Psychological Association (APA):

APA is common in the social sciences. It’s hailed for its clarity and emphasis on precision. It has specific rules for citing sources, creating references, and formatting papers. APA style uses in-text citations with an accompanying reference list. It's designed to convey information efficiently and is widely used in academic and scientific writing.

Modern Language Association (MLA):

MLA is widely used in the humanities, especially literature and language studies. It emphasizes the author-page format for in-text citations and provides guidelines for creating a "Works Cited" page. MLA is known for its focus on the author's name and the literary works cited. It’s frequently used in disciplines that prioritize literary analysis and critical thinking.

To confirm you're using the latest style guide, check the official website or publisher's site for updates, consult academic resources, and verify the guide's publication date. Online platforms and educational resources may also provide summaries and alerts about any revisions or additions to the style guide.

Citing sources

When working on your research paper, it's important to cite the sources you used properly. Your citation style will guide you through this process. Generally, citing a source involves two parts: 

First, provide a brief citation in the body of your essay. This is also known as a parenthetical or in-text citation. 

Second, include a full citation in the reference list at the end of your paper. Depending on the style, citations may appear as in-text citations, footnotes, or reference list entries. 

In-text citations typically include the author's surname and the year of publication; in APA style, for example, this looks like (Smith, 2020). 

Footnotes appear at the bottom of each page of your research paper. They may also be summarized within a reference list at the end of the paper. 

A reference list at the end of the document includes all of the research used within the paper. Each entry should list the author, date, paper title, and publisher in the order required by your citation style.
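As one common way to manage this in practice (a sketch, not a requirement of any particular style guide), LaTeX users often keep full citations in a BibTeX file and let `\cite` and `\bibliography` generate the in-text citations and reference list automatically. The author, title, and journal below are invented for illustration:

```latex
% In references.bib -- a hypothetical journal article entry
@article{smith2020parks,
  author  = {Smith, Jane},
  title   = {Urban Parks and Air Quality},
  journal = {Journal of Environmental Studies},
  year    = {2020},
  volume  = {15},
  pages   = {101--120}
}

% In the paper body:
% \cite{smith2020parks} produces the in-text citation,
% and \bibliography{references} builds the full reference list.
```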

10 research paper writing tips

Following some best practices is essential to writing a research paper that contributes to your field of study and creates a positive impact.

These tactics will help you structure your argument effectively and ensure your work benefits others:

Clear and precise language:  Ensure your language is unambiguous. Use academic language appropriately, but keep it simple. Also, provide clear takeaways for your audience.

Effective idea separation:  Organize the vast amount of information and sources in your paper with paragraphs and titles. Create easily digestible sections for your readers to navigate through.

Compelling intro:  Craft an engaging introduction that captures your reader's interest. Hook your audience and motivate them to continue reading.

Thorough revision and editing:  Take the time to review and edit your paper comprehensively. Use tools like Grammarly to detect and correct small, overlooked errors.

Thesis precision:  Develop a clear and concise thesis statement that guides your paper. Ensure that your thesis aligns with your research's overall purpose and contribution.

Logical flow of ideas:  Maintain a logical progression throughout the paper. Use transitions effectively to connect different sections and maintain coherence.

Critical evaluation of sources:  Evaluate and critically assess the relevance and reliability of your sources. Ensure that your research is based on credible and up-to-date information.

Thematic consistency:  Maintain a consistent theme throughout the paper. Ensure that all sections contribute cohesively to the overall argument.

Relevant supporting evidence:  Provide concise and relevant evidence to support your arguments. Avoid unnecessary details that may distract from the main points.

Embrace counterarguments:  Acknowledge and address opposing views to strengthen your position. Show that you have considered alternative arguments in your field.

7 research tips 

If you want your paper to not only be well-written but also contribute to the progress of human knowledge, consider these tips to take your paper to the next level:

Selecting the appropriate topic: The topic you select should align with your area of expertise, comply with the requirements of your project, and have sufficient resources for a comprehensive investigation.

Use academic databases: Academic databases such as PubMed, Google Scholar, and JSTOR offer a wealth of research papers that can help you discover everything you need to know about your chosen topic.

Critically evaluate sources: It is important not to accept research findings at face value. Instead, it is crucial to critically analyze the information to avoid jumping to conclusions or overlooking important details. A well-written research paper requires a critical analysis with thorough reasoning to support claims.

Diversify your sources: Expand your research horizons by exploring a variety of sources beyond the standard databases. Utilize books, conference proceedings, and interviews to gather diverse perspectives and enrich your understanding of the topic.

Take detailed notes: Detailed note-taking is crucial during research and can help you form the outline and body of your paper.

Stay up on trends: Keep abreast of the latest developments in your field by regularly checking for recent publications. Subscribe to newsletters, follow relevant journals, and attend conferences to stay informed about emerging trends and advancements. 

Engage in peer review: Seek feedback from peers or mentors to ensure the rigor and validity of your research. Peer review helps identify potential weaknesses in your methodology and strengthens the overall credibility of your findings.

  • The real-world impact of research papers

Writing a research paper is more than an academic or business exercise. The experience provides an opportunity to explore a subject in-depth, broaden one's understanding, and arrive at meaningful conclusions. With careful planning, dedication, and hard work, writing a research paper can be a fulfilling and enriching experience contributing to advancing knowledge.

How do I publish my research paper? 

Many academics wish to publish their research papers. While challenging, your paper might get traction if it covers new and well-written information. To publish your research paper, find a target publication, thoroughly read their guidelines, format your paper accordingly, and send it to them per their instructions. You may need to include a cover letter, too. After submission, your paper may be peer-reviewed by experts to assess its legitimacy, quality, originality, and methodology. Following review, you will be informed by the publication whether they have accepted or rejected your paper. 

What is a good opening sentence for a research paper? 

Beginning your research paper with a compelling introduction can ensure readers are interested in going further. A relevant quote, a compelling statistic, or a bold argument can start the paper and hook your reader. Remember, though, that the most important aspect of a research paper is the quality of the information, not necessarily your ability to tell a story, so ensure anything you write aligns with your goals.

Research paper vs. a research proposal—what’s the difference?

While some may confuse research papers and proposals, they are different documents. 

A research proposal comes before a research paper. It is a detailed document that outlines an intended area of exploration. It includes the research topic, methodology, timeline, sources, and potential conclusions. Research proposals are often required when seeking approval to conduct research. 

A research paper is a summary of research findings. It follows a structured format to present those findings and construct an argument or conclusion.


Research Guide

Chapter 5: Sections of a Paper

Now that you have identified your research question, have compiled the data you need, and have a clear argument and roadmap, it is time for you to write. In this module, I will briefly explain how to develop the different sections of your research paper. I devote a separate chapter to the empirical section. Please take into account that these are guidelines to follow in the different sections, but you need to adapt them to the specific context of your paper.

5.1 The Abstract

The abstract of a research paper contains the most critical aspects of the paper: your research question, the context (country/population/subjects and period) analyzed, the findings, and the main conclusion. You have about 250 words to attract the attention of the readers. Much of the time (in fact, most of the time), readers will only read the abstract. You need to “sell” your argument and entice them to continue reading. Thus, abstracts require good and direct writing. Use a journalistic style. Go straight to the point.

There are two ways in which an abstract can start:

By introducing what motivates the research question. This is relevant when some context may be needed, or when there is ‘something superior’ motivating your project. Use this strategy with care, as you may confuse the reader, who may have a hard time understanding your research question.

By introducing your research question. This is the best way to attract the attention of your readers, as they can understand the main objective of the paper from the beginning. When the question is clear and straightforward, this is the best method to follow.

Regardless of the path you follow, make sure that the abstract only includes short sentences written in active voice and present tense. Remember: Readers are very impatient. They will only skim the papers. You should make it simple for readers to find all the necessary information.

5.2 The Introduction

The introduction is the most important section of your research paper. Whereas your title and abstract guide readers towards the paper, the introduction should convince them to stay and read the rest of it. This section is your opportunity to state your research question and link it to the bigger issue (why does your research matter?), explain how you will answer it (your empirical methods and the theory behind them), and present your findings and your contribution to the literature on that issue.

I reviewed the “Introduction Formulas” guidelines by Keith Head, David Evans, and Jessica B. Hoel and compiled their ideas in this document, based on what I have seen used in papers in political economy and development economics.

This is not a set of rules, as papers differ depending on the methods and the specific characteristics of the field, but it can work as a guideline. An important takeaway is that the introduction is the section that deserves most of the attention in your paper. You can write it first, but you will need to return to it as you make progress on the rest of the paper. As Keith Head puts it, this back-and-forth exercise is mostly useful to remind you what you are doing in the paper and why.

5.2.1 Outline

What sections are generally included in well-written introductions? Drawing on what these authors suggest, a well-written introduction includes the following:

  • Hook: Motivation, puzzle. (1-2 paragraphs)
  • Research Question: What is the paper doing? (1 paragraph)
  • Antecedents: (optional) How your paper is linked to the bigger issue. Theory. (1-2 paragraphs)
  • Empirical approach: Method X, country Y, dataset Z. (1-2 paragraphs)
  • Detailed results: Don’t make the readers wait. (2-3 paragraphs)
  • Mechanisms, robustness and limitations: (optional) Your results are valid and important (1 paragraph)
  • Value added: Why is your paper important? How is it contributing to the field? (1-3 paragraphs)
  • Roadmap: A convention (1 paragraph)

Now, let’s describe the different sections with more detail.

5.2.1.1 1. The Hook

Your first paragraph(s) should attract the attention of the readers, showing them why your research topic is important. Some attributes here are:

  • Big issue, specific angle: This is the big problem, here is this aspect of the problem (that your research tackles)
  • Big puzzle: There is no single explanation of the problem (you will address that)
  • Major policy implemented: Here is the issue and the policy implemented (you will test if it worked)
  • Controversial debate: some argue X, others argue Y

5.2.1.2 2. Research Question

After the issue has been introduced, you need to clearly state your research question; tell the reader what the paper investigates. Some phrases that may work here are:

  • I (We) focus on
  • This paper asks whether
  • In this paper,
  • Given the gaps in knowledge, this paper
  • This paper investigates

5.2.1.3 3. Antecedents (Optional section)

I marked this section as optional because it is not always included, but it may help to situate the paper within the literature of the field.

However, an important warning needs to be placed here. Remember that the introduction is limited and you need to use it to highlight your work and not someone else’s. So, when the section is included, it is important to:

  • Avoid discussing papers that are not part of the larger narrative surrounding your work
  • Use it to notice the gaps that exist in the current literature and that your paper is covering

In this section, you may also want to include a description of your paper’s theoretical framework and/or a short example that frames your work.

5.2.1.4 4. Empirical Approach

One of the most important sections of the paper, particularly if you are trying to infer causality. Here, you need to explain how you will answer the research question you introduced earlier. This part of the introduction needs to be succinct but clear, indicating your methodology, case selection, and the data used.

5.2.1.5 5. Overview of the Results

Let’s be honest: a large proportion of readers will not read the whole article. They need to understand what you did, how, and what you found in the brief time they allocate to your paper (some eager readers may return to specific sections). So introduce your results early on (another reason you may want to revisit the introduction multiple times). Highlight the most interesting results and link them to the context.

According to David Evans, some authors prefer to alternate: they introduce one empirical strategy, then its results, then the next strategy and its results. This approach may be useful when different empirical methodologies are used.

5.2.1.6 6. Mechanisms, Robustness and Limitations (Optional Section)

If you have ideas about what drives your results (the mechanisms involved), you may want to indicate that here. One current critique of economics (and probably the social sciences in general) has been the strong focus on establishing causation with little regard for the surrounding context (if you want to hear more, see this thread from Dani Rodrik). Agency matters, and if the paper can say something about it (sometimes this goes beyond our research), you should indicate so in the introduction.

You may also want to briefly indicate that your results hold across different specifications or sources of data (these are called robustness checks), and be honest about the limitations of your research. Do not, however, diminish the importance of your project: after stating the limitations, finish the paragraph by restating the importance of your findings.

5.2.1.7 7. Value Added

A very important part of the introduction, these paragraphs show readers (and reviewers) why your work is important. What are the specific contributions of your paper?

This section differs from the antecedents section (3) in that it points out the detailed additions your research makes to the field. The two sections can be connected if that fits your paper, but keep the focus on your paper’s contributions even when discussing related literature; the full literature review should come later in the paper.

5.2.1.8 8. Roadmap

A convention in papers, this section should be kept short and outline the organization of the paper. To make it more useful, you can highlight details that are important in certain sections, but keep it succinct (most readers skip this paragraph altogether).

5.2.2 In summary

The introduction plays a huge role in defining the fate of your paper. Do not waste this opportunity: use the introduction as the North Star guiding your path through the rest of the paper.

5.3 Context (Literature Review)

Do you need a literature review section?

5.4 Conclusion

How to Write a Research Paper

Last Updated: February 18, 2024

This article was co-authored by Chris Hadley, PhD. Chris Hadley, PhD is part of the wikiHow team and works on content strategy and data and analytics. Chris Hadley earned his PhD in Cognitive Psychology from UCLA in 2006. Chris' academic research has been published in numerous scientific journals. There are 14 references cited in this article, which can be found at the bottom of the page.

Whether you’re in a history, literature, or science class, you’ll probably have to write a research paper at some point. It may seem daunting when you’re just starting out, but staying organized and budgeting your time can make the process a breeze. Research your topic, find reliable sources, and come up with a working thesis. Then create an outline and start drafting your paper. Be sure to leave plenty of time to make revisions, as editing is essential if you want to hand in your best work!

Sample Research Papers and Outlines

Researching Your Topic

Step 1 Focus your research on a narrow topic.

  • For instance, you might start with a general subject, like British decorative arts. Then, as you read, you home in on transferware and pottery. Ultimately, you focus on 1 potter in the 1780s who invented a way to mass-produce patterned tableware.

Tip: If you need to analyze a piece of literature, your task is to pull the work apart into literary elements and explain how the author uses those parts to make their point.

Step 2 Search for credible sources online and at a library.

  • Authoritative, credible sources include scholarly articles (especially those other authors reference), government websites, scientific studies, and reputable news bureaus. Additionally, check your sources' dates, and make sure the information you gather is up to date.
  • Evaluate how other scholars have approached your topic. Identify authoritative sources or works that are accepted as the most important accounts of the subject matter. Additionally, look for debates among scholars, and ask yourself who presents the strongest evidence for their case. [3]
  • You’ll most likely need to include a bibliography or works cited page, so keep your sources organized. List your sources, format them according to your assigned style guide (such as MLA or Chicago ), and write 2 or 3 summary sentences below each one. [4]

Step 3 Come up with a preliminary thesis.

  • Imagine you’re a lawyer in a trial and are presenting a case to a jury. Think of your readers as the jurors; your opening statement is your thesis and you’ll present evidence to the jury to make your case.
  • A thesis should be specific rather than vague, such as: “Josiah Spode’s improved formula for bone china enabled the mass production of transfer-printed wares, which expanded the global market for British pottery.”

Drafting Your Essay

Step 1 Create an outline

  • Your outline is your paper’s skeleton. After making the outline, all you’ll need to do is fill in the details.
  • For easy reference, include your sources where they fit into your outline, like this:
    III. Spode vs. Wedgewood on Mass Production
      A. Spode: Perfected chemical formula with aims for fast production and distribution (Travis, 2002, 43)
      B. Wedgewood: Courted high-priced luxury market; lower emphasis on mass production (Himmelweit, 2001, 71)
      C. Therefore: Wedgewood, unlike Spode, delayed the expansion of the pottery market.

Step 2 Present your thesis...

  • For instance, your opening line could be, “Overlooked in the present, manufacturers of British pottery in the eighteenth and nineteenth centuries played crucial roles in England’s Industrial Revolution.”
  • After presenting your thesis, lay out your evidence, like this: “An examination of Spode’s innovative production and distribution techniques will demonstrate the importance of his contributions to the industry and Industrial Revolution at large.”

Tip: Some people prefer to write the introduction first and use it to structure the rest of the paper. However, others like to write the body, then fill in the introduction. Do whichever seems natural to you. If you write the intro first, keep in mind you can tweak it later to reflect your finished paper’s layout.

Step 3 Build your argument in the body paragraphs.

  • After setting the context, you'd include a section on Josiah Spode’s company and what he did to make pottery easier to manufacture and distribute.
  • Next, discuss how targeting middle class consumers increased demand and expanded the pottery industry globally.
  • Then, you could explain how Spode differed from competitors like Wedgewood, who continued to court aristocratic consumers instead of expanding the market to the middle class.
  • The right number of sections or paragraphs depends on your assignment. In general, shoot for 3 to 5, but check your prompt for your assigned length.

Step 4 Address a counterargument to strengthen your case.

  • If you bring up a counterargument, make sure it’s a strong claim that’s worth entertaining, not one that’s weak and easily dismissed.
  • Suppose, for instance, you’re arguing for the benefits of adding fluoride to toothpaste and city water. You could bring up a study that suggested fluoride produced harmful health effects, then explain how its testing methods were flawed.

Step 5 Summarize your argument...

  • Sum up your argument, but don’t simply rewrite your introduction using slightly different wording. To make your conclusion more memorable, you could also connect your thesis to a broader topic or theme to make it more relatable to your reader.
  • For example, if you’ve discussed the role of nationalism in World War I, you could conclude by mentioning nationalism’s reemergence in contemporary foreign affairs.

Revising Your Paper

Step 1 Ensure your paper...

  • This is also a great opportunity to make sure your paper fulfills the parameters of the assignment and answers the prompt!
  • It’s a good idea to put your essay aside for a few hours (or overnight, if you have time). That way, you can start editing it with fresh eyes.

Tip: Try to give yourself at least 2 or 3 days to revise your paper. It may be tempting to simply give your paper a quick read and use the spell-checker to make edits. However, revising your paper properly is more in-depth.

Step 2 Cut out unnecessary words and other fluff.

  • The passive voice, such as “The door was opened by me,” feels hesitant and wordy. On the other hand, the active voice, or “I opened the door,” feels strong and concise.
  • Each word in your paper should do a specific job. Try to avoid including extra words just to fill up blank space on a page or sound fancy.
  • For instance, “The author uses pathos to appeal to readers’ emotions” is better than “The author utilizes pathos to make an appeal to the emotional core of those who read the passage.”

Step 3 Proofread

  • Read your essay out loud to help ensure you catch every error. As you read, check for flow as well and, if necessary, tweak any spots that sound awkward. [13]

Step 4 Ask a friend, relative, or teacher to read your work before you submit it.

  • It’s wise to get feedback from one person who’s familiar with your topic and another who’s not. The person who knows about the topic can help ensure you’ve nailed all the details. The person who’s unfamiliar with the topic can help make sure your writing is clear and easy to understand.

Tips

  • Remember that your topic and thesis should be as specific as possible.
  • Researching, outlining, drafting, and revising are all important steps, so do your best to budget your time wisely. Try to avoid waiting until the last minute to write your paper.


  • ↑ https://writing.wisc.edu/handbook/assignments/planresearchpaper/
  • ↑ https://writingcenter.unc.edu/tips-and-tools/evaluating-print-sources/
  • ↑ https://owl.purdue.edu/owl/research_and_citation/conducting_research/research_overview/index.html
  • ↑ https://poorvucenter.yale.edu/writing/graduate-writing-lab/writing-through-graduate-school/working-sources
  • ↑ https://opentextbc.ca/writingforsuccess/chapter/chapter-5-putting-the-pieces-together-with-a-thesis-statement/
  • ↑ https://owl.purdue.edu/owl/general_writing/the_writing_process/developing_an_outline/index.html
  • ↑ https://writingcenter.unc.edu/tips-and-tools/introductions/
  • ↑ https://academicguides.waldenu.edu/writingcenter/writingprocess/counterarguments
  • ↑ https://writingcenter.fas.harvard.edu/pages/ending-essay-conclusions
  • ↑ https://writingcenter.unc.edu/tips-and-tools/revising-drafts/
  • ↑ https://academicguides.waldenu.edu/formandstyle/writing/scholarlyvoice/activepassive
  • ↑ https://writingcenter.unc.edu/tips-and-tools/editing-and-proofreading/
  • ↑ https://writingcenter.unc.edu/tips-and-tools/reading-aloud/
  • ↑ https://owl.purdue.edu/owl/general_writing/the_writing_process/proofreading/index.html

About This Article

Chris Hadley, PhD

To write a research paper, start by researching your topic at the library, online, or using an academic database. As you conduct your research and take notes, zero in on a specific topic that you want to write about and create a 1-2 sentence thesis to state the focus of your paper. Then, create an outline that includes an introduction, 3 to 5 body paragraphs to present your arguments, and a conclusion to sum up your main points. Once you have your paper's structure organized, draft your paragraphs, focusing on 1 argument per paragraph. Use the information you found through your research to back up your claims and prove your thesis statement. Finally, proofread and revise your content until it's polished and ready to submit. For more information on researching and citing sources, read on!


Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Writing a Research Paper


Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

The Research Paper

There will come a time in most students' careers when they are assigned a research paper. Such an assignment often creates a great deal of unneeded anxiety in the student, which may result in procrastination and a feeling of confusion and inadequacy. This anxiety frequently stems from the fact that many students are unfamiliar and inexperienced with this genre of writing. Never fear—inexperience and unfamiliarity are situations you can change through practice! Writing a research paper is an essential aspect of academics and should not be avoided on account of one's anxiety. In fact, the process of writing a research paper can be one of the more rewarding experiences one may encounter in academics. What is more, many students will continue to do research throughout their careers, which is one of the reasons this topic is so important.

Becoming an experienced researcher and writer in any field or discipline takes a great deal of practice. There are few individuals for whom this process comes naturally. Remember, even the most seasoned academic veterans have had to learn how to write a research paper at some point in their career. Therefore, with diligence, organization, practice, a willingness to learn (and to make mistakes!), and, perhaps most important of all, patience, students will find that they can achieve great things through their research and writing.

The pages in this section cover the following topic areas related to the process of writing a research paper:

  • Genre - This section will provide an overview for understanding the difference between an analytical and argumentative research paper.
  • Choosing a Topic - This section will guide the student through the process of choosing topics, whether the topic be one that is assigned or one that the student chooses themselves.
  • Identifying an Audience - This section will help the student understand the often times confusing topic of audience by offering some basic guidelines for the process.
  • Where Do I Begin - This section concludes the handout by offering several links to resources at Purdue, and also provides an overview of the final stages of writing a research paper.

Writing Research Papers

Research Paper Structure

Whether you are writing a B.S. Degree Research Paper or completing a research report for a Psychology course, it is highly likely that you will need to organize your research paper in accordance with American Psychological Association (APA) guidelines.  Here we discuss the structure of research papers according to APA style.

Major Sections of a Research Paper in APA Style

A complete research paper in APA style that is reporting on experimental research will typically contain a Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections. 1  Many will also contain Figures and Tables and some will have an Appendix or Appendices.  These sections are detailed as follows (for a more in-depth guide, please refer to " How to Write a Research Paper in APA Style ”, a comprehensive guide developed by Prof. Emma Geller). 2

Title Page

What is this paper called and who wrote it? – the first page of the paper; this includes the name of the paper, a “running head”, authors, and institutional affiliation of the authors. The institutional affiliation is usually listed in an Author Note that is placed towards the bottom of the title page. In some cases, the Author Note also contains an acknowledgment of any funding support and of any individuals that assisted with the research project.

Abstract

One-paragraph summary of the entire study – typically no more than 250 words in length (and in many cases it is well shorter than that), the Abstract provides an overview of the study.

Introduction

What is the topic and why is it worth studying? – the first major section of text in the paper, the Introduction commonly describes the topic under investigation, summarizes or discusses relevant prior research (for related details, please see the Writing Literature Reviews section of this website), identifies unresolved issues that the current research will address, and provides an overview of the research that is to be described in greater detail in the sections to follow.

Methods

What did you do? – a section which details how the research was performed. It typically features a description of the participants/subjects that were involved, the study design, the materials that were used, and the study procedure. If there were multiple experiments, then each experiment may require a separate Methods section. A rule of thumb is that the Methods section should be sufficiently detailed for another researcher to duplicate your research.

Results

What did you find? – a section which describes the data that was collected and the results of any statistical tests that were performed. It may also be prefaced by a description of the analysis procedure that was used. If there were multiple experiments, then each experiment may require a separate Results section.

Discussion

What is the significance of your results? – the final major section of text in the paper. The Discussion commonly features a summary of the results that were obtained in the study, describes how those results address the topic under investigation and/or the issues that the research was designed to address, and may expand upon the implications of those findings. Limitations and directions for future research are also commonly addressed.

References

List of articles and any books cited – an alphabetized list of the sources that are cited in the paper (by last name of the first author of each source). Each reference should follow specific APA guidelines regarding author names, dates, article titles, journal titles, journal volume numbers, page numbers, book publishers, publisher locations, websites, and so on (for more information, please see the Citing References in APA Style page of this website).

Tables and Figures

Graphs and data (optional in some cases) – depending on the type of research being performed, there may be Tables and/or Figures (however, in some cases, there may be neither).  In APA style, each Table and each Figure is placed on a separate page and all Tables and Figures are included after the References.   Tables are included first, followed by Figures.   However, for some journals and undergraduate research papers (such as the B.S. Research Paper or Honors Thesis), Tables and Figures may be embedded in the text (depending on the instructor’s or editor’s policies; for more details, see "Deviations from APA Style" below).

Appendix

Supplementary information (optional) – in some cases, additional information that is not critical to understanding the research paper, such as a list of experiment stimuli, details of a secondary analysis, or programming code, is provided. This is often placed in an Appendix.

Variations of Research Papers in APA Style

Although the major sections described above are common to most research papers written in APA style, there are variations on that pattern.  These variations include: 

  • Literature reviews – when a paper is reviewing prior published research and not presenting new empirical research itself (such as in a review article, and particularly a qualitative review), then the authors may forgo any Methods and Results sections. Instead, there is a different structure such as an Introduction section followed by sections for each of the different aspects of the body of research being reviewed, and then perhaps a Discussion section. 
  • Multi-experiment papers – when there are multiple experiments, it is common to follow the Introduction with an Experiment 1 section, itself containing Methods, Results, and Discussion subsections. Then there is an Experiment 2 section with a similar structure, an Experiment 3 section with a similar structure, and so on until all experiments are covered.  Towards the end of the paper there is a General Discussion section followed by References.  Additionally, in multi-experiment papers, it is common for the Results and Discussion subsections for individual experiments to be combined into single “Results and Discussion” sections.

Departures from APA Style

In some cases, official APA style might not be followed (however, be sure to check with your editor, instructor, or other sources before deviating from standards of the Publication Manual of the American Psychological Association).  Such deviations may include:

  • Placement of Tables and Figures  – in some cases, to make reading through the paper easier, Tables and/or Figures are embedded in the text (for example, having a bar graph placed in the relevant Results section). The embedding of Tables and/or Figures in the text is one of the most common deviations from APA style (and is commonly allowed in B.S. Degree Research Papers and Honors Theses; however you should check with your instructor, supervisor, or editor first). 
  • Incomplete research – sometimes a B.S. Degree Research Paper in this department is written about research that is currently being planned or is in progress. In those circumstances, sometimes only an Introduction and Methods section, followed by References, is included (that is, in cases where the research itself has not formally begun).  In other cases, preliminary results are presented and noted as such in the Results section (such as in cases where the study is underway but not complete), and the Discussion section includes caveats about the in-progress nature of the research.  Again, you should check with your instructor, supervisor, or editor first.
  • Class assignments – in some classes in this department, an assignment must be written in APA style but is not exactly a traditional research paper (for instance, a student asked to write about an article that they read, and to write that report in APA style). In that case, the structure of the paper might approximate the typical sections of a research paper in APA style, but not entirely.  You should check with your instructor for further guidelines.

Workshops and Downloadable Resources

  • For in-person discussion of the process of writing research papers, please consider attending this department’s “Writing Research Papers” workshop (for dates and times, please check the undergraduate workshops calendar).

Downloadable Resources

  • How to Write APA Style Research Papers (a comprehensive guide) [ PDF ]
  • Tips for Writing APA Style Research Papers (a brief summary) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – empirical research) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – literature review) [ PDF ]

Further Resources

How-To Videos     

  • Writing Research Paper Videos

APA Journal Article Reporting Guidelines

  • Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report . American Psychologist , 73 (1), 3.
  • Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report . American Psychologist , 73 (1), 26.  

External Resources

  • Formatting APA Style Papers in Microsoft Word
  • How to Write an APA Style Research Paper from Hamilton University
  • WikiHow Guide to Writing APA Research Papers
  • Sample APA Formatted Paper with Comments
  • Sample APA Formatted Paper
  • Tips for Writing a Paper in APA Style

1 VandenBos, G. R. (Ed). (2010). Publication manual of the American Psychological Association (6th ed.) (pp. 41-60).  Washington, DC: American Psychological Association.

2 Geller, E. (2018). How to write an APA-style research report [Instructional materials]. Prepared by S. C. Pan for UCSD Psychology.



Organizing Your Social Sciences Research Paper

5. The Literature Review

A literature review surveys prior research published in books, scholarly articles, and any other sources relevant to a particular issue, area of research, or theory, and by so doing, provides a description, summary, and critical evaluation of these works in relation to the research problem being investigated. Literature reviews are designed to provide an overview of sources you have used in researching a particular topic and to demonstrate to your readers how your research fits within existing scholarship about the topic.

Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper . Fourth edition. Thousand Oaks, CA: SAGE, 2014.

Importance of a Good Literature Review

A literature review may consist of simply a summary of key sources, but in the social sciences, a literature review usually has an organizational pattern and combines both summary and synthesis, often within specific conceptual categories . A summary is a recap of the important information of the source, but a synthesis is a re-organization, or a reshuffling, of that information in a way that informs how you are planning to investigate a research problem. The analytical features of a literature review might:

  • Give a new interpretation of old material or combine new with old interpretations,
  • Trace the intellectual progression of the field, including major debates,
  • Depending on the situation, evaluate the sources and advise the reader on the most pertinent or relevant research, or
  • Usually in the conclusion of a literature review, identify where gaps exist in how a problem has been researched to date.

Given this, the purpose of a literature review is to:

  • Place each work in the context of its contribution to understanding the research problem being studied.
  • Describe the relationship of each work to the others under consideration.
  • Identify new ways to interpret prior research.
  • Reveal any gaps that exist in the literature.
  • Resolve conflicts amongst seemingly contradictory previous studies.
  • Identify areas of prior scholarship to prevent duplication of effort.
  • Point the way in fulfilling a need for additional research.
  • Locate your own research within the context of existing literature [very important].

Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper. 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998; Jesson, Jill. Doing Your Literature Review: Traditional and Systematic Techniques . Los Angeles, CA: SAGE, 2011; Knopf, Jeffrey W. "Doing a Literature Review." PS: Political Science and Politics 39 (January 2006): 127-132; Ridley, Diana. The Literature Review: A Step-by-Step Guide for Students . 2nd ed. Los Angeles, CA: SAGE, 2012.

Types of Literature Reviews

It is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the primary studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally among scholars and become part of the body of epistemological traditions within the field.

In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews. Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are a number of approaches you could adopt depending upon the type of analysis underpinning your study.

Argumentative Review This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews [see below].

Integrative Review Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses or research problems. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication. This is the most common form of review in the social sciences.

Historical Review Few things rest in isolation from historical precedent. Historical literature reviews focus on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review A review does not always focus on what someone said [findings], but on how they came to say it [method of analysis]. Reviewing methods of analysis provides a framework of understanding at different levels [i.e., those of theory, substantive fields, research approaches, and data collection and analysis techniques], and shows how researchers draw upon a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in areas such as ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. This approach also helps highlight ethical issues which you should be aware of and consider as you go through your own study.

Systematic Review This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyze data from the studies that are included in the review. The goal is to deliberately document, critically evaluate, and summarize scientifically all of the research about a clearly defined research problem . Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?" This type of literature review is primarily applied to examining prior research studies in clinical medicine and allied health fields, but it is increasingly being used in the social sciences.

Theoretical Review The purpose of this form is to examine the corpus of theory that has accumulated in regard to an issue, concept, or phenomenon. The theoretical literature review helps to establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and to develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

NOTE: Most often the literature review will incorporate some combination of types. For example, a review that examines literature supporting or refuting an argument, assumption, or philosophical problem related to the research problem will also need to include writing supported by sources that establish the history of these arguments in the literature.

Baumeister, Roy F. and Mark R. Leary. "Writing Narrative Literature Reviews." Review of General Psychology 1 (September 1997): 311-320; Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper . 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998; Kennedy, Mary M. "Defining a Literature." Educational Researcher 36 (April 2007): 139-147; Petticrew, Mark and Helen Roberts. Systematic Reviews in the Social Sciences: A Practical Guide . Malden, MA: Blackwell Publishers, 2006; Torraco, Richard. "Writing Integrative Literature Reviews: Guidelines and Examples." Human Resource Development Review 4 (September 2005): 356-367; Rocco, Tonette S. and Maria S. Plakhotnik. "Literature Reviews, Conceptual Frameworks, and Theoretical Frameworks: Terms, Functions, and Distinctions." Human Resource Development Review 8 (March 2008): 120-130; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016.

Structure and Writing Style

I.  Thinking About Your Literature Review

The structure of a literature review should include the following in support of understanding the research problem :

  • An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review,
  • Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches entirely],
  • An explanation of how each work is similar to and how it varies from the others,
  • Conclusions as to which works make the best argument, are most convincing, and make the greatest contribution to the understanding and development of their area of research.

The critical evaluation of each work should consider :

  • Provenance -- what are the author's credentials? Are the author's arguments supported by evidence [e.g. primary historical material, case studies, narratives, statistics, recent scientific findings]?
  • Methodology -- were the techniques used to identify, gather, and analyze the data appropriate to addressing the research problem? Was the sample size appropriate? Were the results effectively interpreted and reported?
  • Objectivity -- is the author's perspective even-handed or prejudicial? Is contrary data considered or is certain pertinent information ignored to prove the author's point?
  • Persuasiveness -- which of the author's theses are most convincing or least convincing?
  • Validity -- are the author's arguments and conclusions convincing? Does the work ultimately contribute in any significant way to an understanding of the subject?

II.  Development of the Literature Review

Four Basic Stages of Writing

  1. Problem formulation -- which topic or field is being examined and what are its component issues?
  2. Literature search -- finding materials relevant to the subject being explored.
  3. Data evaluation -- determining which literature makes a significant contribution to the understanding of the topic.
  4. Analysis and interpretation -- discussing the findings and conclusions of pertinent literature.

Consider the following issues before writing the literature review:

Clarify If your assignment is not specific about what form your literature review should take, seek clarification from your professor by asking these questions:

  1. Roughly how many sources would be appropriate to include?
  2. What types of sources should I review (books, journal articles, websites; scholarly versus popular sources)?
  3. Should I summarize, synthesize, or critique sources by discussing a common theme or issue?
  4. Should I evaluate the sources in any way beyond evaluating how they relate to understanding the research problem?
  5. Should I provide subheadings and other background information, such as definitions and/or a history?

Find Models Use the exercise of reviewing the literature to examine how authors in your discipline or area of interest have composed their literature review sections. Read them to get a sense of the types of themes you might want to look for in your own research or to identify ways to organize your final review. The bibliography or reference sections of sources you've already read, such as required readings in the course syllabus, are also excellent entry points into your own research.

Narrow the Topic The narrower your topic, the easier it will be to limit the number of sources you need to read in order to obtain a good survey of relevant resources. Your professor will probably not expect you to read everything that's available about the topic, but you'll make the act of reviewing easier if you first limit the scope of the research problem. A good strategy is to begin by searching the USC Libraries Catalog for recent books about the topic and review the tables of contents for chapters that focus on specific issues. You can also review the indexes of books to find references to specific issues that can serve as the focus of your research. For example, a book surveying the history of the Israeli-Palestinian conflict may include a chapter on the role Egypt has played in mediating the conflict, or you can look in the index for the pages where Egypt is mentioned in the text.

Consider Whether Your Sources are Current Some disciplines require that you use information that is as current as possible. This is particularly true in medicine and the sciences, where research becomes obsolete very quickly as new discoveries are made. However, when writing a review in the social sciences, a survey of the history of the literature may be required. In other words, a complete understanding of the research problem requires you to deliberately examine how knowledge and perspectives have changed over time. Sort through other current bibliographies or literature reviews in the field to get a sense of what your discipline expects. You can also use this method to explore what is considered by scholars to be a "hot topic" and what is not.

III.  Ways to Organize Your Literature Review

Chronology of Events If your review follows the chronological method, you could write about the materials according to when they were published. This approach should only be followed if a clear path of research building on previous research can be identified and if these trends follow a clear chronological order of development. For example, a literature review could trace continuing research about the emergence of German economic power after the fall of the Soviet Union.

By Publication Order your sources by publication chronology only if the order demonstrates a more important trend. For instance, you could order a review of literature on environmental studies of brownfields by publication date if the progression revealed, for example, a change in the soil collection practices of the researchers who wrote and/or conducted the studies.

Thematic ["conceptual categories"] A thematic literature review is the most common approach to summarizing prior research in the social and behavioral sciences. Thematic reviews are organized around a topic or issue, rather than the progression of time, although the progression of time may still be incorporated into a thematic review. For example, a review of the Internet’s impact on American presidential politics could focus on the development of online political satire. While the study focuses on one topic, the Internet’s impact on American presidential politics, it could still be organized chronologically, reflecting technological developments in media. The difference in this example between a "chronological" and a "thematic" approach is what is emphasized the most: themes related to the role of the Internet in presidential politics. Note that more authentic thematic reviews tend to break away from chronological order. A review organized in this manner would shift between time periods within each section according to the point being made.

Methodological A methodological approach focuses on the methods utilized by the researcher. For the Internet in American presidential politics project, one methodological approach would be to look at cultural differences between the portrayal of American presidents on American, British, and French websites. Or the review might focus on the fundraising impact of the Internet on a particular political party. A methodological scope will influence either the types of documents in the review or the way in which these documents are discussed.

Other Sections of Your Literature Review Once you've decided on the organizational method for your literature review, the sections you need to include in the paper should be easy to figure out because they arise from your organizational strategy. In other words, a chronological review would have subsections for each vital time period; a thematic review would have subtopics based upon factors that relate to the theme or issue. However, sometimes you may need to add additional sections that are necessary for your study, but do not fit in the organizational strategy of the body. What other sections you include in the body is up to you. However, only include what is necessary for the reader to locate your study within the larger scholarship about the research problem.

Here are examples of other sections, usually in the form of a single paragraph, you may need to include depending on the type of review you write:

  • Current Situation : Information necessary to understand the current topic or focus of the literature review.
  • Sources Used : Describes the methods and resources [e.g., databases] you used to identify the literature you reviewed.
  • History : The chronological progression of the field, the research literature, or an idea that is necessary to understand the literature review, if the body of the literature review is not already a chronology.
  • Selection Methods : Criteria you used to select (and perhaps exclude) sources in your literature review. For instance, you might explain that your review includes only peer-reviewed [i.e., scholarly] sources.
  • Standards : Description of the way in which you present your information.
  • Questions for Further Research : What questions about the field has the review sparked? How will you further your research as a result of the review?

IV.  Writing Your Literature Review

Once you've settled on how to organize your literature review, you're ready to write each section. When writing your review, keep in mind these issues.

Use Evidence A literature review section is, in this sense, just like any other academic research paper. Your interpretation of the available sources must be backed up with evidence [citations] that demonstrates that what you are saying is valid.

Be Selective Select only the most important points in each source to highlight in the review. The type of information you choose to mention should relate directly to the research problem, whether it is thematic, methodological, or chronological. Related items that provide additional information, but that are not key to understanding the research problem, can be included in a list of further readings.

Use Quotes Sparingly Some short quotes are appropriate if you want to emphasize a point, or if what an author stated cannot be easily paraphrased. Sometimes you may need to quote certain terminology that was coined by the author, is not common knowledge, or is taken directly from the study. Do not use extensive quotes as a substitute for using your own words in reviewing the literature.

Summarize and Synthesize Remember to summarize and synthesize your sources within each thematic paragraph as well as throughout the review. Recapitulate important features of a research study, but then synthesize it by rephrasing the study's significance and relating it to your own work and the work of others.

Keep Your Own Voice While the literature review presents others' ideas, your voice [the writer's] should remain front and center. For example, weave references to other sources into what you are writing, but maintain your own voice by starting and ending the paragraph with your own ideas and wording.

Use Caution When Paraphrasing When paraphrasing a source, be sure to represent the author's information or opinions accurately and in your own words. Even when paraphrasing an author’s work, you still must provide a citation to that work.

V.  Common Mistakes to Avoid

These are the most common mistakes made in reviewing social science research literature.

  • Sources in your literature review do not clearly relate to the research problem;
  • You do not take sufficient time to define and identify the most relevant sources to use in the literature review related to the research problem;
  • The review relies exclusively on secondary analytical sources rather than including relevant primary research studies or data;
  • The review uncritically accepts another researcher's findings and interpretations as valid, rather than critically examining all aspects of the research design and analysis;
  • The review does not describe the search procedures that were used in identifying the literature;
  • The review reports isolated statistical results rather than synthesizing them using chi-squared or meta-analytic methods; and,
  • The review only includes research that validates assumptions and does not consider contrary findings and alternative interpretations found in the literature.
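The point about synthesizing statistical results can be made concrete. As an illustration only (the effect sizes and variances below are invented, not taken from any study), a fixed-effect meta-analysis pools results by inverse-variance weighting instead of reporting each study's statistic in isolation:

```python
# Hypothetical illustration of inverse-variance (fixed-effect) pooling:
# rather than listing each study's result separately, combine them.
effects = [0.30, 0.45, 0.20]    # invented effect sizes from three studies
variances = [0.04, 0.09, 0.02]  # their invented sampling variances

weights = [1.0 / v for v in variances]  # more precise studies get more weight
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled effect

print(round(pooled, 3), round(pooled_se, 3))
```

This is only a sketch of the idea; a real synthesis would also test for heterogeneity between studies before settling on a fixed-effect model.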

Cook, Kathleen E. and Elise Murowchick. “Do Literature Review Skills Transfer from One Course to Another?” Psychology Learning and Teaching 13 (March 2014): 3-11; Fink, Arlene. Conducting Research Literature Reviews: From the Internet to Paper . 2nd ed. Thousand Oaks, CA: Sage, 2005; Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998; Jesson, Jill. Doing Your Literature Review: Traditional and Systematic Techniques . London: SAGE, 2011; Literature Review Handout. Online Writing Center. Liberty University; Literature Reviews. The Writing Center. University of North Carolina; Onwuegbuzie, Anthony J. and Rebecca Frels. Seven Steps to a Comprehensive Literature Review: A Multimodal and Cultural Approach . Los Angeles, CA: SAGE, 2016; Ridley, Diana. The Literature Review: A Step-by-Step Guide for Students . 2nd ed. Los Angeles, CA: SAGE, 2012; Randolph, Justus J. “A Guide to Writing the Dissertation Literature Review." Practical Assessment, Research, and Evaluation. vol. 14, June 2009; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016; Taylor, Dena. The Literature Review: A Few Tips On Conducting It. University College Writing Centre. University of Toronto; Writing a Literature Review. Academic Skills Centre. University of Canberra.

Writing Tip

Break Out of Your Disciplinary Box!

Thinking interdisciplinarily about a research problem can be a rewarding exercise in applying new ideas, theories, or concepts to an old problem. For example, what might cultural anthropologists say about the continuing conflict in the Middle East? In what ways might geographers view the need for better distribution of social service agencies in large cities differently from how social workers might study the issue? You don’t want to substitute studies conducted in other fields for a thorough review of core research literature in your discipline. However, particularly in the social sciences, thinking about research problems from multiple vectors is a key strategy for finding new solutions to a problem or gaining a new perspective. Consult with a librarian about identifying research databases in other disciplines; almost every field of study has at least one comprehensive database devoted to indexing its research literature.

Frodeman, Robert. The Oxford Handbook of Interdisciplinarity . New York: Oxford University Press, 2010.

Another Writing Tip

Don't Just Review for Content!

While conducting a review of the literature, maximize the time you devote to writing this part of your paper by thinking broadly about what you should be looking for and evaluating. Review not just what scholars are saying, but how they are saying it. Some questions to ask:

  • How are they organizing their ideas?
  • What methods have they used to study the problem?
  • What theories have been used to explain, predict, or understand their research problem?
  • What sources have they cited to support their conclusions?
  • How have they used non-textual elements [e.g., charts, graphs, figures, etc.] to illustrate key points?

When you begin to write your literature review section, you'll be glad you dug deeper into how the research was designed and constructed because it establishes a means for developing more substantial analysis and interpretation of the research problem.

Hart, Chris. Doing a Literature Review: Releasing the Social Science Research Imagination . Thousand Oaks, CA: Sage Publications, 1998.

Yet Another Writing Tip

When Do I Know I Can Stop Looking and Move On?

Here are several strategies you can utilize to assess whether you've thoroughly reviewed the literature:

  • Look for repeating patterns in the research findings . If the same thing is being said, just by different people, then this likely demonstrates that the research problem has hit a conceptual dead end. At this point consider: Does your study extend current research? Does it forge a new path? Or, does it merely add more of the same thing being said?
  • Look at the sources the authors cite in their work . If you begin to see the same researchers cited again and again, then this is often an indication that no new ideas have been generated to address the research problem.
  • Search Google Scholar to identify who has subsequently cited leading scholars already identified in your literature review. This is called citation tracking, and there are a number of sources that can help you identify who has cited whom, particularly scholars from outside of your discipline. Here again, if the same authors are being cited again and again, this may indicate that no new literature has been written on the topic.

Onwuegbuzie, Anthony J. and Rebecca Frels. Seven Steps to a Comprehensive Literature Review: A Multimodal and Cultural Approach . Los Angeles, CA: Sage, 2016; Sutton, Anthea. Systematic Approaches to a Successful Literature Review . Los Angeles, CA: Sage Publications, 2016.

  • Last Updated: May 30, 2024 9:38 AM
  • URL: https://libguides.usc.edu/writingguide

Cornell University Graduate School

Careers Beyond Academia

Tips for a Memorable 5-Minute Research Presentation


“If you get the first 5 minutes down, you are going to be golden for the rest of your presentation.” These were the words Susi Varvayanis, Executive Director of Careers Beyond Academia, stated at the start of Tips for a Memorable 5-Minute Research Presentation.

To help alleviate the stress and worry of giving a good presentation, here is a summary of the tips she shared. Three parts of a presentation can influence its outcome:

  • You, the speaker
  • Your presentation slides
  • The audience

How do you as the speaker prepare yourself for the best presentation?

  • Be aware of your body language – gestures are important, and they underscore the importance of the message you are conveying. Add a smile! Be enthusiastic and make eye contact with the audience. These contribute to the appearance of confidence as you present.
  • Practice voice modulation – the way you speak conveys a great deal about the information you are presenting. Avoid going too fast. Add pauses as you speak, slow your speech, and emphasize key words.
  • Avoid jargon and acronyms – jargon consists of special words or expressions used by a particular profession or group that are difficult for others to understand, and the same word can carry different connotations for different audiences. So, avoid them! If you can't use jargon, how do you still convey your point? Try a different word, or use an analogy.
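Tools such as the De-Jargonizer (listed in the resources below) automate this check against large frequency corpora. As a toy sketch of the idea only (the vocabulary and example sentence here are invented, not the tool's actual word lists), you can flag any word that falls outside a set of common terms:

```python
# Toy jargon check: flag words absent from a small common-vocabulary set.
# (This vocabulary is invented for illustration; real tools such as the
# De-Jargonizer score words by frequency in large corpora.)
common_words = {
    "the", "a", "of", "in", "we", "show", "that", "cells", "can", "repair",
    "damage", "to", "their", "by", "removing", "parts",
}

def flag_jargon(sentence):
    """Return the words in a sentence that a general audience may not know."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    return [w for w in words if w not in common_words]

sentence = "We show that cells can repair damage by autophagic degradation"
print(flag_jargon(sentence))  # → ['autophagic', 'degradation']
```

Each flagged word is a candidate for a plainer substitute or an analogy, as suggested above.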

What makes for good presentation slides?

  • Good illustrations – use simplified images that convey the information you are presenting. Simple cartoon illustrations make it easy for the audience, regardless of background, to understand and follow your meaning.
  • Data presentation – avoid using Excel defaults. Replace generic titles and labels with easier-to-understand headings that communicate your main point. Also, simplify images by removing unnecessary sections that do not apply to your audience. Most importantly, lead the audience through your work with all its ups and downs.
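As a sketch of what replacing chart defaults can look like when your figures come from code (the data and headline title below are invented for illustration), the same advice applies: strip decorative gridlines and spines, and make the title state the takeaway rather than a generic label:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Invented data standing in for a result you might show on a slide.
years = [2019, 2020, 2021, 2022]
accuracy = [71, 74, 80, 88]

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(years, accuracy, marker="o", linewidth=2)

# Declutter: remove the chart junk that default styling adds.
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.grid(False)

# Lead with the main point, not a generic "Accuracy vs. Year" label.
ax.set_title("Accuracy improved 17 points over four years")
ax.set_xticks(years)
ax.set_ylabel("Accuracy (%)")

fig.savefig("slide_figure.png", dpi=150, bbox_inches="tight")
```

The specific styling calls are just one reasonable way to declutter; the principle is that every remaining element on the slide should serve the audience's understanding.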

How does the audience affect your presentation?

The audience that you have dictates how you present your information. To prepare for your presentation, evaluate your audience. Understand the hook and make them care. Find unifying interests or commonality among the audience. Understand the goals and issues that challenge the audience. Do your images intrigue the audience?

Here is what makes your 5-minute pitch memorable:

  • It is passionate – This comes with understanding what inspires your work. Passion for research leads you to excel, even when you suffer setbacks.
  • It tells a good story – when you have a flow with compelling images, it helps tell a story, saves explanation, and hooks the audience.
  • It gives a ‘why’ – from your presentation, the audience should know why they should care about your work, the implications of your results and how they can apply this information.

Here are some resources that you can explore to help you with a great presentation:

  • Tool to check for jargon: De-Jargonizer (scienceandpublic.com)
  • The difference between ‘what’ and your ‘why’: Know Your Why | Michael Jr. – YouTube
  • Practice your skills: join ComSciCon-NY – in early June; Three-Minute Thesis or business case competitions
  • A guide with many exercises to improve your research communication – Finding Your Research Voice – Cornell University Library Catalog

We would love to hear your own opinions and tips on what you feel gives a good presentation!


Grounding DINO 1.5: Pushing the Boundaries of Open-Set Object Detection

This article reviews the advancements presented in the paper "Grounding DINO 1.5: Advance the 'Edge' of Open-Set Object Detection." We will explore the methodologies introduced, the impact on open-set object detection, and the potential applications and future directions suggested by this research.



In recent years, zero-shot object detection has become a cornerstone of advancements in computer vision. Creating versatile and efficient detectors has been a significant focus on building real-world applications. The introduction of Grounding DINO 1.5 by IDEA Research marks a significant leap forward in this field, particularly in open-set object detection.

We will run the demo using Paperspace GPUs, a platform known for offering high-performance computing resources for various applications. These GPUs are designed to meet the needs of machine learning, deep learning, and data analysis and provide scalable and flexible computing power without needing physical hardware investments.

Paperspace offers a range of GPU options to suit different performance requirements and budgets, allowing users to accelerate their computational workflows efficiently. Additionally, the platform integrates with popular tools and frameworks, making it easier for developers and researchers to deploy, manage, and scale their projects.


What is Grounding DINO?

Grounding DINO, an open-set detector based on DINO, not only achieved state-of-the-art object detection performance but also enabled the integration of multi-level text information through grounded pre-training. Grounding DINO offers several advantages over GLIP (Grounded Language-Image Pre-training). Firstly, its Transformer-based architecture, similar to that of language models, facilitates processing both image and language data.

Grounding DINO Framework

Overall framework of Grounding DINO 1.5 series

The image above shows the overall framework of the Grounding DINO 1.5 series. It retains the dual-encoder-single-decoder structure of Grounding DINO and extends it for both the Pro and Edge models.

Grounding DINO combines concepts from DINO and GLIP. DINO, a transformer-based method, excels in object detection with end-to-end optimization, removing the need for handcrafted modules like Non-Maximum Suppression or NMS. Conversely, GLIP focuses on phrase grounding, linking words or phrases in text to visual elements in images or videos.
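
To make concrete what DINO-style end-to-end detectors remove, here is a minimal sketch of classic greedy Non-Maximum Suppression (a generic illustration, not code from the paper):

```python
import numpy as np

def box_iou(box, boxes):
    # IoU between one box and an array of boxes, format (x1, y1, x2, y2).
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring remaining box and
    # drop every other box that overlaps it above the IoU threshold.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        keep.append(int(best))
        order = rest[box_iou(boxes[best], boxes[rest]) <= iou_thresh]
    return keep
```

DINO's one-to-one matching during training makes this handcrafted post-processing step unnecessary.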

Grounding DINO's architecture consists of an image backbone, a text backbone, a feature enhancer for image-text fusion, a language-guided query selection module, and a cross-modality decoder for refining object boxes. Initially, it extracts image and text features, fuses them, selects queries from image features, and uses these queries in a decoder to predict object boxes and corresponding phrases.
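
The language-guided query selection step can be caricatured in a few lines (a simplified sketch with made-up scoring; the real module operates on fused features inside the network):

```python
import numpy as np

def language_guided_query_selection(image_feats, text_feats, num_queries):
    # Toy version: score every image token by its best dot-product
    # similarity to any text token, then take the top-k tokens as
    # initial decoder queries.
    sim = image_feats @ text_feats.T          # (n_image_tokens, n_text_tokens)
    scores = sim.max(axis=1)                  # best text match per image token
    topk = np.argsort(scores)[::-1][:num_queries]
    return image_feats[topk], topk

# An image token aligned with the text token should be selected first.
image_feats = np.eye(4)
text_feats = np.array([[0.0, 0.0, 0.0, 1.0]])
queries, idx = language_guided_query_selection(image_feats, text_feats, num_queries=2)
```

The selected queries are then refined by the cross-modality decoder into box and phrase predictions.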


What's new in Grounding DINO 1.5?

Grounding DINO 1.5 builds upon the foundation laid by its predecessor, Grounding DINO, which redefined object detection by incorporating linguistic information and framing the task as phrase grounding. This innovative approach leverages large-scale pre-training on diverse datasets and self-training on pseudo-labeled data from an extensive pool of image-text pairs. The result is a model that excels in open-world scenarios due to its robust architecture and semantic richness.

Grounding DINO 1.5 extends these capabilities even further, introducing two specialized models: Grounding DINO 1.5 Pro and Grounding DINO 1.5 Edge. The Pro model enhances detection performance by significantly expanding the model's capacity and dataset size, incorporating advanced architectures like the ViT-L, and generating over 20 million annotated images. In contrast, the Edge model is optimized for edge devices, emphasizing computational efficiency while maintaining high detection quality through high-level image features.

Experimental findings underscore the effectiveness of Grounding DINO 1.5, with the Pro model setting new performance standards and the Edge model showcasing impressive speed and accuracy, rendering it highly suitable for edge computing applications. This article delves into the advancements brought by Grounding DINO 1.5, exploring its methodologies, impact, and potential future directions in the dynamic landscape of open-set object detection, thereby highlighting its practical applications in real-world scenarios.

Grounding DINO 1.5 is pre-trained on Grounding-20M, a dataset of over 20 million grounding images from public sources. Well-developed annotation pipelines and post-processing rules ensure high-quality annotations during training.

Performance Analysis

The figure below shows the model's ability to recognize objects in datasets like COCO and LVIS, which contain many categories. It indicates that Grounding DINO 1.5 Pro significantly outperforms previous versions. Compared to a specific previous model, Grounding DINO 1.5 Pro shows a remarkable improvement.


The model was tested in various real-world scenarios using the ODinW (Object Detection in the Wild) benchmark, which includes 35 datasets covering different applications. Grounding DINO 1.5 Pro achieved significantly improved performance over the previous version of Grounding DINO.


Zero-shot results for Grounding DINO 1.5 Edge on COCO and LVIS are measured in frames per second (FPS) using an A100 GPU, reported in PyTorch speed / TensorRT FP32 speed. FPS on NVIDIA Orin NX is also provided. Grounding DINO 1.5 Edge achieves remarkable performance and also surpasses all other state-of-the-art algorithms (OmDet-Turbo-T 30.3 AP, YOLO-Worldv2-L 32.9 AP, YOLO-Worldv2-M 30.0 AP, YOLO-Worldv2-S 22.7 AP).


Grounding DINO 1.5 Pro and Grounding DINO 1.5 Edge

Grounding DINO 1.5 Pro

Grounding DINO 1.5 Pro builds on the core architecture of Grounding DINO but enhances the model architecture with a larger Vision Transformer (ViT-L) backbone. The ViT-L model is known for its exceptional performance on various tasks, and the transformer-based design aids in optimizing training and inference.

One of the key methodologies Grounding DINO 1.5 Pro adopts is a deep early fusion strategy for feature extraction. This means that language and image features are combined early on using cross-attention mechanisms during the feature extraction process before moving to the decoding phase. This early integration allows for a more thorough fusion of information from both modalities.
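
As a rough illustration of the cross-attention underlying such fusion (toy dimensions and random weights; not the model's actual feature enhancer), text tokens can attend to image tokens like this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, context, d_model, seed=0):
    # Toy single-head cross-attention: `queries` attend to `context`.
    # In a deep early-fusion enhancer this runs in both directions
    # (text -> image and image -> text) inside every fusion layer.
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((queries.shape[-1], d_model))
    Wk = rng.standard_normal((context.shape[-1], d_model))
    Wv = rng.standard_normal((context.shape[-1], d_model))
    Q, K, V = queries @ Wq, context @ Wk, context @ Wv
    weights = softmax(Q @ K.T / np.sqrt(d_model), axis=-1)
    return weights @ V

# 5 text tokens fused with 400 image tokens (e.g. a 20x20 feature map)
text = np.random.default_rng(1).standard_normal((5, 64))
image = np.random.default_rng(2).standard_normal((400, 64))
fused_text = cross_attend(text, image, d_model=64)   # shape (5, 64)
```

Because this mixing happens before decoding, every text token carries image information (and vice versa) by the time queries are selected.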

In their research, the team compared early and late fusion strategies. In early fusion, language and image features are integrated early in the process, leading to higher detection recall and more accurate bounding box predictions. However, this approach can sometimes cause the model to hallucinate, meaning it predicts objects that aren't present in the images.

On the other hand, late fusion keeps language and image features separate until the loss calculation phase, where they are integrated. This approach is generally more robust against hallucinations but tends to result in lower detection recall because aligning vision and language features becomes more challenging when they are only combined at the end.

To maximize the benefits of early fusion while minimizing its drawbacks, Grounding DINO 1.5 Pro retains the early fusion design but incorporates a more comprehensive training sampling strategy. This strategy increases the proportion of negative samples—images without the objects of interest—during training. By doing so, the model learns to distinguish between relevant and irrelevant information better, thereby reducing hallucinations while maintaining high detection recall and accuracy.
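
The sampling idea can be sketched as follows (a toy illustration; the function and parameter names are ours, not the paper's):

```python
import random

def build_training_prompt(image_labels, vocabulary, num_negatives, seed=0):
    # Toy version of a negative-augmented sampling strategy: alongside the
    # categories present in the image (positives), draw categories that are
    # absent (negatives), so the model must learn to reject text phrases
    # that have no matching object.
    rng = random.Random(seed)
    absent = [c for c in vocabulary if c not in image_labels]
    negatives = rng.sample(absent, min(num_negatives, len(absent)))
    prompt = list(image_labels) + negatives
    rng.shuffle(prompt)
    targets = {c: c in image_labels for c in prompt}
    return prompt, targets

vocabulary = ["cat", "dog", "car", "tree", "person"]
prompt, targets = build_training_prompt(["cat"], vocabulary, num_negatives=2)
```

Raising the share of negatives teaches the model that most phrases in a prompt should produce no detection, which is exactly the failure mode behind hallucinations.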

In summary, Grounding DINO 1.5 Pro enhances its prediction capabilities and robustness by combining early fusion with an improved training approach that balances the strengths and weaknesses of early fusion architecture.

Grounding DINO 1.5 Edge

Grounding DINO is a powerful model for detecting objects in images, but it requires a lot of computing power. This makes it challenging to use on small devices with limited resources, like those in cars, medical equipment, or smartphones. These devices need to process images quickly and efficiently in real time. Deploying Grounding DINO on edge devices is highly desirable for many applications, such as autonomous driving, medical image processing, and computational photography.

However, open-set detection models typically require significant computational resources, which edge devices lack. The original Grounding DINO model uses multi-scale image features and a computationally intensive feature enhancer. While this improves detection performance, it is impractical for real-time applications on edge devices.

To address this challenge, the researchers propose an efficient feature enhancer for edge devices. Their approach focuses on using only high-level image features (P5 level) for cross-modality fusion, as lower-level features lack semantic information and increase computational costs. This method significantly reduces the number of tokens processed, cutting the computational load.
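
Assuming the typical pyramid strides of 8, 16, and 32 for the P3-P5 levels (an assumption on our part; the paper states only that P5-level features are fused), the token savings at a 640×640 input are easy to quantify:

```python
def feature_tokens(image_size, strides):
    # Tokens per pyramid level for a square input: one token per
    # feature-map cell, where each level downsamples by its stride.
    return {f"P{i}": (image_size // s) ** 2
            for i, s in zip((3, 4, 5), strides)}

tokens = feature_tokens(640, strides=(8, 16, 32))
multi_scale = sum(tokens.values())  # P3 + P4 + P5 = 6400 + 1600 + 400 = 8400
p5_only = tokens["P5"]              # 400: roughly 5% of the multi-scale tokens
```

Fusing only P5 thus cuts the cross-modality attention cost by an order of magnitude before any other optimization.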

For better integration on edge devices, the model replaces deformable self-attention with vanilla self-attention and introduces a cross-scale feature fusion module to integrate lower-level image features (P3 and P4 levels). This design balances the need for feature enhancement with the necessity for computational efficiency.

In Grounding DINO 1.5 Edge, the original feature enhancer is replaced with this new efficient enhancer, and EfficientViT-L1 is used as the image backbone for rapid multi-scale feature extraction. When deployed on the NVIDIA Orin NX platform, this optimized model achieves an inference speed of over 10 FPS with an input size of 640 × 640. This makes it suitable for real-time applications on edge devices, balancing performance and efficiency.


Object Detection and Paperspace Demo

To run the demo you will need a DeepDataSpace API key; you can request one at https://deepdataspace.com/request_api .

To help you start experimenting with the model, we have added a Jupyter notebook to this article. The notebook walks through three steps: cloning the repository, installing the required packages, and running the code that generates the demo link.


Real-World Application and Concluding Thoughts on Grounding DINO 1.5

  • Autonomous Vehicles:
    • Detecting and recognizing known traffic signs and pedestrians, as well as unfamiliar objects that might appear on the road, ensuring safer navigation.
    • Identifying unexpected obstacles, such as debris or animals, that are not pre-labeled in the training data.
  • Surveillance and Security:
    • Recognizing unauthorized individuals or objects in restricted areas, even if they haven't been seen before.
    • Detecting abandoned objects in public places, such as airports or train stations, that could be potential security threats.
  • Retail and Inventory Management:
    • Identifying and tracking items on store shelves, including new products that may not have been part of the original inventory.
    • Recognizing unusual activities or unfamiliar objects in a store that could indicate shoplifting.
  • Healthcare:
    • Detecting anomalies or unfamiliar patterns in medical scans, such as new types of tumors or rare conditions.
    • Identifying unusual patient behaviors or movements, especially in long-term care or post-surgery recovery.
  • Robotics and Search and Rescue:
    • Enabling robots to operate in dynamic and unstructured environments by recognizing and adapting to new objects or changes in their surroundings.
    • Detecting victims or hazards in disaster-stricken areas where the environment is unpredictable and filled with unfamiliar objects.
  • Wildlife Monitoring and Conservation:
    • Detecting and identifying new or rare species in natural habitats for biodiversity studies and conservation efforts.
    • Monitoring protected areas for unfamiliar human presence or tools that could indicate illegal poaching activities.
  • Manufacturing and Quality Control:
    • Identifying defects or anomalies in products on a production line, including new types of defects not previously encountered.
    • Recognizing and sorting a wide variety of objects to improve efficiency in manufacturing processes.

This article reviewed Grounding DINO 1.5, a model series designed to enhance open-set object detection. The leading model, Grounding DINO 1.5 Pro, has set new benchmarks on the COCO and LVIS zero-shot tests, marking significant progress in detection accuracy and reliability.

Additionally, the Grounding DINO 1.5 Edge model supports real-time object detection across diverse applications, broadening the series' practical applicability.

We hope you have enjoyed reading the article!


  • Original research paper
  • Github Link




Shown are the 28-day risk period rates of the 28 included adverse events of special interest to COVID-19 vaccines following XBB.1.5-containing mRNA vaccine immunization as a fifth dose compared with reference period rates in Danish people aged 65 years and older from October 1, 2023, to January 8, 2024. The 28-day risk period outcome rates following fifth dose vaccination with an XBB.1.5-containing mRNA vaccine were compared with reference period rates from day 43 after the fourth or fifth dose and onward. Individuals could contribute person-time during both the 28-day risk period and the 2 reference periods, while the number of events and person-time from the 2 reference periods were aggregated. Each outcome was studied separately, which is why there may be slight differences in the denominators due to different exclusions. The arrows indicate that the 95% CI exceeds the upper or lower limits on the x-axis. IRR indicates incidence rate ratio; NE, not estimable; and TIA, transient ischemic attack.

eTable. Eligibility Criteria and Outcome and Covariates Definitions

eFigure. Schematic Figure of the Study Design

eReferences

Data Sharing Statement


Andersson NW , Thiesson EM , Hviid A. Adverse Events After XBB.1.5-Containing COVID-19 mRNA Vaccines. JAMA. 2024;331(12):1057–1059. doi:10.1001/jama.2024.1036


Adverse Events After XBB.1.5-Containing COVID-19 mRNA Vaccines

  • 1 Department of Epidemiology Research, Statens Serum Institut, Copenhagen, Denmark

The monovalent Omicron XBB.1.5–containing COVID-19 mRNA vaccines were authorized in the US and Europe for use in autumn and winter 2023-2024. 1 , 2 In Denmark, the XBB.1.5-containing vaccines were recommended as a fifth COVID-19 vaccine dose to individuals aged 65 years and older beginning October 1, 2023. However, data to support safety evaluations are lacking.

We investigated the association between the XBB.1.5-containing vaccine administered as a fifth COVID-19 vaccine dose and the risk of 28 adverse events.

A study cohort of all individuals in Denmark aged 65 and older who had received 4 COVID-19 vaccine doses was established by cross-linking nationwide health care and demography registers on an individual level. The study period was September 15, 2022 (ie, the national rollout date of the fourth dose), to January 8, 2024, and vaccination status was classified in a time-varying manner (the eTable in Supplement 1 provides further details). The 28 adverse events were adapted from prioritized lists of adverse events of special interest to COVID-19 vaccines (eTable in Supplement 1 ). 3 - 5 Each outcome was studied separately and identified as any first hospital contact where an outcome diagnosis was recorded. The diagnosis date served as the event date.

Individuals were followed up from day 43 after the fourth dose (days 29-42 considered a buffer period) until first outcome event while censoring upon emigration, death, receipt of a sixth vaccine dose (as such a dose was not rolled out to the general Danish population during the study period), or end of the study period (eFigure in Supplement 1 ). Outcome rates within the risk period of 28 days following XBB.1.5-containing vaccine administration as a fifth dose were compared with reference period rates from day 43 after a fourth or fifth dose and onward as previously described; the number of events and person-time from the 2 reference periods were aggregated. 5 Individuals could contribute person-time both during the 28-day risk period and the 2 reference periods; individuals not receiving the XBB.1.5-containing vaccine only contributed to reference period person-time. Using Poisson regression, the risk and reference period outcome rates were compared by incidence rate ratios, adjusted for sex, age, region of residence, being at high risk of severe COVID-19, health care worker status, calendar time, and number of comorbidities. Statistical tests were 2-sided and conducted in R (version 4.1.1; R Project for Statistical Computing). A 95% CI that did not cross 1 was defined as statistically significant. Analysis was performed as surveillance activities as part of the advisory tasks of the governmental institution Statens Serum Institut (SSI), which monitors the spread of disease in accordance with §222 of the Danish Health Act, for the Danish Ministry of Health. According to Danish law, national surveillance activities conducted by SSI do not require approval from an ethics committee.

Among the 1 076 531 included individuals (mean [SD] age, 74.7 [7.4] years; 53.8% female), 902 803 received an XBB.1.5-containing vaccine as a fifth dose during follow-up ( Table ).

Receipt of an XBB.1.5-containing vaccine was not associated with a statistically significant increased rate of hospital contacts for any of the 28 different adverse events within 28 days after vaccination compared with the reference period rates ( Figure ). For example, the incidence rate ratio was 0.96 (95% CI, 0.87-1.07) for an ischemic cardiac event, 0.87 (95% CI, 0.79-0.96) for a cerebral infarction, and 0.60 (95% CI, 0.14-2.66) for myocarditis. Some outcomes were very rare during follow-up (eg, cerebral venous thrombosis), resulting in lower statistical precision; however, for 18 of the 28 adverse events examined, the upper bound of the CI was inconsistent with moderate to large increases in relative risk of 1.4 or greater.
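
For intuition, a crude (unadjusted) incidence rate ratio with a Wald 95% CI can be computed from event counts and person-time alone; note that the study's actual estimates are additionally covariate-adjusted via Poisson regression, which this sketch does not reproduce, and the counts below are hypothetical:

```python
import math

def irr_wald(events_risk, persontime_risk, events_ref, persontime_ref):
    # Crude incidence rate ratio (risk-period rate over reference-period
    # rate) with a Wald 95% CI computed on the log scale.
    irr = (events_risk / persontime_risk) / (events_ref / persontime_ref)
    se = math.sqrt(1 / events_risk + 1 / events_ref)
    lo = math.exp(math.log(irr) - 1.96 * se)
    hi = math.exp(math.log(irr) + 1.96 * se)
    return irr, lo, hi

# Hypothetical counts: 50 events in 1000 person-years at risk vs
# 100 events in 2000 reference person-years -> IRR = 1.0
irr, lo, hi = irr_wald(50, 1000, 100, 2000)
```

The width of the CI is driven by the event counts, which is why very rare outcomes such as cerebral venous thrombosis yield low statistical precision.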

In a nationwide cohort of more than 1 million adults aged 65 years and older, no increased risk of 28 adverse events was observed following vaccination with a monovalent XBB.1.5-containing vaccine.

Limitations of this study include potential residual confounding; differences in ascertainment of adverse events between compared periods cannot be excluded, but, in contrast to what was observed, would bias toward increased risks if present. This was mitigated by comparing the 28-day risk period rates following a fifth dose vaccination with an XBB.1.5-containing vaccine with reference period rates from 43 days or more after the fourth and fifth vaccine dose as opposed to never vaccinated period rates. Additionally, analyses were not adjusted for multiple testing, and some results showed lower risk for XBB.1.5-containing vaccines; yet, a time-varying healthy vaccinee effect cannot be excluded. Also, no medical record review of cases was done, but any outcome misclassification would most likely be nondifferential.

Accepted for Publication: January 23, 2024.

Published Online: February 26, 2024. doi:10.1001/jama.2024.1036

Corresponding Author: Niklas Worm Andersson, MD, Department of Epidemiology Research, Statens Serum Institut, Artillerivej 5, Copenhagen S 2300, Denmark ( [email protected] ).

Author Contributions: Dr Andersson had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: All authors.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Andersson.

Critical review of the manuscript for important intellectual content: All authors.

Statistical analysis: Andersson, Thiesson.

Supervision: Hviid.

Conflict of Interest Disclosures: Dr Hviid reported receiving grants from Lundbeck Foundation, Novo Nordisk Foundation, and Danish Medical Research Council and being a scientific advisory board member for VAC4EU. No other disclosures were reported.

Data Sharing Statement: See Supplement 2 .


Data Science Journal


The FAIR Assessment Conundrum: Reflections on Tools and Metrics

  • Leonardo Candela
  • Dario Mangione
  • Gina Pavone

Several tools for assessing FAIRness have been developed. Although their purpose is common, they use different assessment techniques, they are designed to work with diverse research products, and they are applied in specific scientific disciplines. It is thus inevitable that they perform the assessment using different metrics. This paper provides an overview of the actual FAIR assessment tools and metrics landscape to highlight the challenges characterising this task. In particular, 20 relevant FAIR assessment tools and 1180 relevant metrics were identified and analysed concerning (i) the tool’s distinguishing aspects and their trends, (ii) the gaps between the metric intents and the FAIR principles, (iii) the discrepancies between the declared intent of the metrics and the actual aspects assessed, including the most recurring issues, (iv) the technologies used or mentioned the most in the assessment metrics. The findings highlight (a) the distinguishing characteristics of the tools and the emergence of trends over time concerning those characteristics, (b) the identification of gaps at both metric and tool levels, (c) discrepancies observed in 345 metrics between their declared intent and the actual aspects assessed, pointing at several recurring issues, and (d) the variety in the technology used for the assessments, the majority of which can be ascribed to linked data solutions. This work also highlights some open issues that FAIR assessment still needs to address.

  • FAIR assessment tools
  • FAIR assessment metrics

1 Introduction

Wilkinson et al. formulated the FAIR guiding principles to support data producers and publishers in dealing with four fundamental challenges in scientific data management and formal scholarly digital publishing, namely Findability, Accessibility, Interoperability, and Reusability ( Wilkinson et al. 2016 ). The principles were minimally defined to keep, as low as possible, the barrier-to-entry for data producers, publishers, and stewards who wish to make their data holdings FAIR. Moreover, the intent was to formulate principles that apply not only to ‘data’ in the conventional sense but also to the algorithms, tools, and workflows that led to that data. All scholarly digital research objects were expected to benefit from applying these principles since all components of the research process must be available to ensure transparency, reusability and, whenever possible, reproducibility. Later, homologous principles were formulated to deal with specific typologies of research products ( Goble et al. 2020 ; Katz, Gruenpeter & Honeyman 2021 ; Lamprecht et al. 2020 ).

Such principles were well received by several communities and are nowadays in the research agenda of almost any community dealing with research data despite the absence of concrete implementation details ( Jacobsen et al. 2020 ; Mons et al. 2017 ). This situation is producing a proliferation of approaches and initiatives related to their interpretation and concrete implementation ( Mangione, Candela & Castelli 2022 ; Thompson et al. 2020 ). It also requires evaluating the level of FAIRness achieved, which results in a multitude of maturity indicators, metrics, and assessment frameworks, e.g. ( Bahim, Dekkers & Wyns 2019 ; De Miranda Azevedo & Dumontier 2020 ; Krans et al. 2022 ).

Having a clear and up-to-date understanding of FAIR assessment practices and approaches helps in perceiving the differences that characterise them, properly interpreting their results, and eventually envisaging new solutions to overcome the limitations affecting the current landscape. This paper analyses a comprehensive set of FAIR assessment tools, and the metrics these tools use, to highlight the challenges characterising this valuable task. In particular, the study aims (i) to highlight the characteristics and trends of the currently existing tools, and (ii) to identify the relationships between the FAIR principles and the approaches used to assess them in practice, so as to discuss whether the resulting assessment is effective or whether there are gaps to deal with. Responding to these questions requires a comprehensive ensemble of tools and metrics. This ensemble was developed by carefully analysing the literature, information on the web, and the actual implementations of tools and metrics. The resulting data set is openly available (see Data Accessibility Statements).

The rest of the paper is organised as follows. Section 2 discusses the related works, namely the surveys and analysis of FAIR assessment tools performed before this study. Section 3 presents the research questions this study focuses on, and the methodology used to respond to them. Section 4 describes the results of the study. Section 5 critically discusses the results by analysing them and providing insights. Finally, Section 6 concludes the paper by summarising the study’s findings. An appendix mainly containing the tabular representation of the data underlying the findings complements the paper.

2 Related Work

Several comparative studies and surveys on the existing FAIR assessment tools can be found in the literature.

Bahim et al. ( Bahim, Dekkers & Wyns 2019 ) conducted a landscape analysis to define FAIR indicators by assessing the approaches and the metrics developed until 2019. They produced a list of twelve tools: the ANDS-NECTAR-RDS-FAIR data assessment tool, DANS-Fairdat, DANS-Fair enough?, the CSIRO 5-star Data Rating tool, the FAIR Metrics Questionnaire, the Stewardship Maturity Mix, the FAIR Evaluator, the Data Stewardship Wizard, the Checklist for Evaluation of Dataset Fitness for Use, the RDA-SHARC Evaluation, the WMO-Wide Stewardship Maturity Matrix for Climate Data, and the Data Use and Services Maturity Matrix. They also compared the 148 metrics characterising the selected tools, ultimately presenting a classification of the metrics by FAIR principle and, specifically, by five dimensions: ‘Findable’, ‘Accessible’, ‘Interoperable’, ‘Reusable’, and ‘Beyond FAIR’.

Peters-von Gehlen et al. ( 2022 ) widened the FAIR assessment tool list originating from Bahim, Dekkers and Wyns ( 2019 ). Adopting a research data repository’s perspective, they examined the different evaluation results obtained by employing five FAIR evaluation tools to assess the same set of discipline-specific data resources. Their study showed that the evaluation results produced by the selected tools reliably reflected the curation status of the data resources assessed, and that the scores, although consistent on the overall FAIRness level, were more likely to be similar among tools that shared the same manual or automated methodology. They also concluded that, even if manual approaches proved better suited for capturing contextual information, no existing FAIR evaluation tool meets the need for assessing discipline-specific FAIRness, and that hybrid approaches would be a promising solution.

Krans et al. ( 2022 ) classified and described ten assessment tools (selected through online searches in June 2020) to highlight the gaps between the FAIR data practices and the ones currently characterising the field of human risk assessment of microplastics and nanomaterials. The ten tools discussed were: FAIRdat, FAIRenough? (no longer available), ARDC FAIR self-assessment, FAIRshake, SATIFYD, FAIR maturity indicators for nanosafety, FAIR evaluator software, RDA-SHARC Simple Grids, GARDIAN (no longer available), and Data Stewardship Wizard. These tools were classified by type, namely ‘online survey’, ‘(semi-)automated’, ‘offline survey’, and ‘other’, and evaluated using two sets of criteria: developer-centred and user-centred. The first characterised the tools binarily based on their extensibility and degree of maturity; the latter distinguished nine user friendliness dimensions (‘expertise’, ‘guidance’, ‘ease of use’, ‘type of input’, ‘applicability’, ‘time investment’, ‘type of output’, ‘detail’, and ‘improvement’) grouped in three sets (‘prerequisites’, ‘use’, and ‘output’). Their study showed that the instruments based on human judgement could not guarantee the consistency of the results even if used by domain experts. In contrast, the (semi-)automated ones were more objective. Overall, they registered a lack of consensus in the score systems and on how FAIRness should be measured.

Sun et al. ( 2022 ) focused on comparing three automated FAIR evaluation tools (F-UJI, FAIR Evaluator, and FAIR checker) based on three dimensions: ‘usability’, ‘evaluation metrics’, and ‘metric test results’. They highlighted three significant differences among the tools, which heavily influenced the results: the different understanding of data and metadata identifiers, the different extent of information extraction, and the differences in the metrics implementation.

In this paper, we have extended the previous analyses by including more tools and, above all, a concrete study of the metrics that these tools use. The original contribution of the paper consists of a precise analysis of the metrics used for the assessment and of the issues that arise in FAIRness assessment processes. The aim is to examine the various implementation choices and the challenges related to them that emerge in the FAIR assessment process. These implementation choices are in fact necessary for transitioning from the principle level to the factual check level. Such checks are rule-based and depend on the selection of parameters and methods for verification. Our analysis shows the issues associated with the implementation choices that define the current FAIR assessment process.

3 Methodology

We defined the following research questions to drive the study:

  • RQ1. What are the aspects characterising existing tools? What are the trends characterising these aspects?
  • RQ2. Are there any gaps between the FAIR principles coverage and the metrics’ overall coverage emerging from the declared intents?
  • RQ3. Are there discrepancies between the declared intent of the metrics and the actual aspects assessed? What are the most recurring issues?
  • RQ4. Which approaches and technologies are the most cited and used by the metrics implementations for each principle?

To reply to these questions, we identified a suitable ensemble of existing tools and metrics. The starting point was the list provided by FAIRassist ( https://fairassist.org/ ). To achieve an up-to-date ensemble of tools, we enriched the list by referring to Mangione et al. (2022), by snowballing, and, lastly, by web searching. From the overall resulting list, we removed the tools no longer running and those not intended for the assessment of FAIR principles in the strict sense. In particular, the GARDIAN FAIR Metrics, the 5 Star Data Rating Tool, and FAIR enough? were removed because they are no longer running; FAIR-Aware, the Data Stewardship Wizard, Do I-PASS for FAIR, the TRIPLE Training Toolkit, and the CLARIN Metadata Curation Dashboard were removed because they were considered out of scope. Table 1 reports the resulting list of the 20 tools identified and surveyed.

List of FAIR assessment tools analysed.

We used several sources to collect the list of existing metrics. In particular, we carefully analysed the specific websites, papers, and any additional documentation characterising the selected tools, including the source code and information deriving from the use of the tools themselves. For the tools that enable users to define their specific metrics, we considered all documented metrics, except those created by users for testing purposes or written in a language other than English. In the case of metrics structured as questions with multiple answers (not just binary), each answer was considered a different metric as the different checks cannot be put into a single formulation. This approach was necessary for capturing the different degrees of FAIRness that the tool creators conceived. The selection process resulted in a data set of 1180 metrics.
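To make the expansion rule concrete, the following sketch (illustrative only; the question and answers are hypothetical, not taken from the data set) shows how a multiple-answer question yields one metric per answer:

```python
def expand_question(question, answers):
    """Turn a multiple-answer question into one metric per answer,
    since each answer encodes a different check (a different degree
    of FAIRness conceived by the tool creators)."""
    return [f"{question} -> {answer}" for answer in answers]

metrics = expand_question(
    "Which usage licence did you choose?",
    ["Open access (CC0)", "Restricted access", "Embargoed access"],
)
print(len(metrics))  # 3
```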

Some tools associate each metric with specific principles. To observe the distribution of the metrics and the gaps concerning the FAIR principles, we considered the FAIR principle (or the letter of the FAIR acronym) each metric was designed to assess, as declared in the papers describing the tools, but also in the source code and in the results of the assessments performed by the tools themselves.

To analyse the metrics for the identification of discrepancies between the declared intent of the metrics and the actual aspects assessed, we adopted a classification approach based on a close reading of the FAIR principles, assigning one or more principles to each metric. This approach was preferred to one envisaging the development of our list of checks, as any such list would merely constitute an additional FAIR interpretation. This process was applied to both the tools that already had principles associated with the metrics and those that did not. We classified each metric under the FAIR principle we deemed the closest, depending on the metric formulation or implementation. We relied on the metrics implementation source code, when available, to better understand the checks performed. The classification is provided in the accompanying data set and it is summarised in Figure 3 .

The analysis of the approaches and technologies used by the metrics is based on the metric formulation, their source code, and the results of the assessments performed by the tools. With regard to the approaches, we classified the approach of each metric linked to a specific FAIR principle, as declared by the metric authors, following a bottom-up process: we grouped the metrics by specific FAIR principle and then created a taxonomy of approaches based on the ones observed in each group (App. A.5). For the technologies, we annotated each metric with the technologies mentioned in the metric formulation, in the results of the assessments performed by the tools, and as observed through a source code review.

4 Results

This section reports the study findings concerning the tool-related and metric-related research questions. Findings are discussed and analysed in Section 5.

4.1 Assessment tools

Table 1 enumerates the 20 FAIR Assessment tools analysed by reporting their name, URL, and the year the tool was initially proposed.

The tools were analysed through the following characteristics: (i) the target , i.e. the digital object the tool focuses on (e.g., dataset, software); (ii) the methodology , i.e. whether the assessment process is manual or automatic; (iii) the adaptability , i.e. whether the assessment process is fixed or can be adapted (specific methods and metrics can be added); (iv) the discipline-specificity , i.e. whether the assessment method is tailored for a specific discipline (or conceived to be) or discipline-agnostic; (v) the community-specificity , i.e. whether the assessment method is tailored for a specific community (or conceived to be) or community-agnostic; (vi) the provisioning , i.e. whether the tool is made available as-a-service or on-premises.

Table 2 shows the differentiation of the analysed tools based on the identified distinguishing characteristics.

Differentiation of analysed tools based on identified distinguishing characteristics. The term ‘enabled’ signifies that the configuration allows the addition of new metrics, so that individuals can include metrics relevant to their discipline or community. The ‘any dig. obj.*’ value means that a large number of typologies is supported, but with specialised checks for some of them rather than genuine support for ‘any’.

By observing the emergence of the identified characteristics over time, from 2017 to 2023, it is possible to highlight trends in the development of tools created for FAIR assessment purposes. Figure 1 depicts these trends.


FAIR assessment tools trends.

Target. We observed an increasing variety of digital objects (Figure 1. Target), reflecting the growing awareness of the specificities of different research products stemming from the debates that followed the publication of the FAIR guiding principles. However, 45% of the tools deal with datasets. We assigned the label any dig. obj. (any digital object) to the tools that allow the creation of user-defined metrics, but also to those whose checks are generic enough to be applied regardless of the digital object type, e.g. evaluations based on the existence of a persistent identifier such as a DOI and on the use of a generic metadata schema, such as Dublin Core, for describing a digital object. The asterisk that follows the label ‘any dig. obj.’ in Table 2 indicates that, although many types of objects are supported, the tools specifically assess some of them. In particular: (a) AUT deals with datasets, tools, and a combination of them in workflows, and (b) FRO is intended for assessing a specific format of digital objects, namely RO-Crate (Soiland-Reyes et al. 2022), which can package any type of digital object.
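As an illustration of such generic checks, the following sketch (hypothetical helper names and record layout; real tools implement far richer logic, including actually resolving the identifier) tests whether a metadata record carries a DOI-like persistent identifier and a Dublin Core title:

```python
import re

# Syntactic DOI pattern (prefix "10." plus registrant code and suffix);
# an actual assessment tool would also try to resolve the identifier.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def has_persistent_identifier(record):
    """Generic check: does the record expose a DOI-like identifier?"""
    identifier = record.get("identifier", "")
    return bool(DOI_PATTERN.match(identifier))

def has_dc_title(record):
    """Generic check: is a Dublin Core title present and non-empty?"""
    return bool(record.get("dc:title", "").strip())

record = {"identifier": "10.5281/zenodo.1234567", "dc:title": "Example dataset"}
print(has_persistent_identifier(record), has_dc_title(record))  # True True
```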

Methodology. The tools implement three modes of operation: (i) manual , i.e. if the assessment is performed manually by the user; (ii) automatic , i.e. if it does not require user judgement; (iii) hybrid , i.e. a combination of manual and automated approaches. Manual and hybrid approaches were the first implemented, but over time, automatic approaches were preferred due to the high subjectivity characterising the first two methodologies ( Figure 1 . Assessment methodology). Fifty-five per cent of the tools implement automatic assessments. Notable exceptions are MAT (2020 – hybrid) and FES (2023 – manual), assessing the FAIRness of a repository and requiring metrics that include organisational aspects, which are not easily measured and whose automation still poses difficulties.

Adaptability. We distinguished between (i) non-adaptable tools (whose metrics are predefined and cannot be extended) and (ii) adaptable tools (to which user-defined metrics can be added). Only three tools of the ensemble are adaptable, namely FSH, EVL, and ENO. EVA was considered a ‘fixed’ tool, although it supports the implementation of plug-ins that specialise the actual checks performed by a given metric. Despite their limited flexibility, non-adaptable tools have remained the preferred option over time (Figure 1. Assessment method).

Discipline-specific. A further feature is whether a tool is conceived to assess the FAIRness of discipline-specific research outputs or is discipline-agnostic. We grouped three tools as discipline-specific: AUT, CHE, and MAT. While the adaptable tools (FSH, EVL, and ENO) may not currently include discipline-specific metrics, they enable this possibility, as does EVA, since it allows defining custom configurations for the existing assessments. The trend observed is a preference for discipline-agnostic tools (Figure 1. Discipline-specific nature).

Community-specific. Some tools include checks related to community-specific standards (e.g. the OpenAIRE Guidelines) or allow defining community-relevant evaluations. As with discipline-specificity, the adaptable tools (FSH, EVL, and ENO) enable community-specific evaluations, as does EVA. Figure 1. Community-specific nature shows that, in general, community-agnostic solutions were preferred.

Provisioning. The tools are offered following the as-a-service model or as an on-premises application (we included in the latter category the self-assessment questionnaires in PDF format). While on-premises solutions are still being developed (e.g. Python notebooks and libraries), the observed trend is a preference for the as-a-service model (Figure 1. Provisioning).

4.2 Assessment metrics

Existing assessment metrics are analysed to (i) identify gaps between the FAIR principles’ coverage and the metrics’ overall coverage emerging from the declared intents (cf. Section 4.2.1), (ii) highlight discrepancies among metrics intent and observed behaviour concerning FAIR principles and distil the issues leading to the mismatch (cf. Section 4.2.2), and (iii) determine frequent approaches and technologies considered in metrics implementations (cf. Section 4.2.3).

4.2.1 Assessment metrics: gaps with respect to FAIR principles

To identify possible gaps in the FAIR assessment process we observed the distributions of the selected metrics grouped according to the FAIR principle they were designed to assess. Such information was taken from different sources, including the papers describing the tools, other available documentation, the source code, and the use of the tools themselves.

Figure 2 reports the distribution of metrics with respect to the declared target principle, if any, for each tool. Appendix A.1 reports a table with the detailed data. In the left diagram, a metric falls in the F, A, I, or R series when it refers only to Findable, Accessible, Interoperable, or Reusable and not to a numbered/specific principle. The ‘n/a’ series counts the metrics that do not declare a reference to a specific principle or even to a letter of the FAIR acronym. In the right diagram, the metrics are aggregated by class of principles, e.g. the F-related metrics include all the ones that in the left diagram are either F, F1, F2, F3 or F4.
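The aggregation used in the right diagram can be expressed as a simple grouping rule (illustrative code, not drawn from any of the surveyed tools): a declared intent such as ‘F1’ or a bare ‘F’ maps to the F-related class, while metrics with no declared intent fall under ‘n/a’.

```python
from collections import Counter

def principle_class(declared):
    # 'F1', 'F4' or plain 'F' all aggregate into the F-related class;
    # metrics with no declared principle or letter count as 'n/a'.
    if declared and declared[0] in "FAIR":
        return declared[0]
    return "n/a"

declared_intents = ["F1", "F4", "A1.2", "I", "R1.1", None, "F"]
counts = Counter(principle_class(d) for d in declared_intents)
print(counts["F"], counts["n/a"])  # 3 1
```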


FAIR assessment tools’ declared metric intent distribution. In the left diagram, F, A, I, and R series refer to metrics with declared intent Findable, Accessible, Interoperable, and Reusable rather than a numbered/specific principle. The ‘n/a’ series is for metrics that do not declare an intent referring to a specific principle or even to a letter of the FAIR acronym. In the right diagram, the metrics are aggregated by class of principles, e.g. the F-related metrics include all the ones that in the left diagram are either F, F1, F2, F3 or F4.


FAIR assessment tools’ observed metric goal distribution. In the left diagram, metrics are associated with a specific principle, with ‘many’ principles, or with ‘none’. In the right diagram, the metrics associated with a specific principle are aggregated by class of principles, e.g. the R-related metrics include all the ones that in the left diagram are either R1, R1.1, R1.2 or R1.3.

Only 12 tools (CHE, ENO, EVA, EVL, FOO, FRO, FSH, FUJ, MAT, OFA, OPE, RDA) out of 20 identify a specific principle linked to their metrics. The rest either refer only to Findable, Accessible, Interoperable, or Reusable to annotate their metrics (namely, AUT, DAT, FDB, FES, SAG, SAT, SET) or refer to neither specific principles nor letters of the acronym (namely, HFI). Even among the tools that make explicit connections, some metrics remain detached from any particular FAIR principle or acronym letter, as indicated with ‘n/a’ in Figure 2 and Table A.1.

The figures also document that assessment metrics exist for each FAIR principle, but not every principle is equally covered, and not all the tools implement metrics for all the principles.

When focusing on the 12 tools explicitly referring to principles in their metrics’ declared intents and considering the total number of metrics exploited by a given tool to perform the assessment, it is easy to observe that some tools use a larger set of metrics than others. For instance, FSH uses 339 distinct metrics, while MAT uses only 13. The tools having a lower number of metrics tend to overlook some principles.

The distribution of metrics with respect to their target highlights that some kind of check has been conceived for each principle, though in different numbers: these range from the A1.2 minimum (covered by 16 metrics) to the F1 maximum (covered by 76 metrics). It is also worth noting that, for each group of principles linked to a letter of the FAIR acronym, the largest number of metrics is concentrated on the first principle of the group. This is particularly evident for the A group, with 71 metrics focusing on A1 and around 20 for each of the others.

Four principles are considered in some measure by all the tools, namely F1, F2, I1, and R1.1. While the tools use many metrics to assess F1, F2, and I1, only a few are exploited for R1.1.

Four principles receive relatively little emphasis, namely A1.2, F3, A1.1, and A2, with fewer metrics dedicated to their assessment. At the tool level, A1.2, A2, R1.2, and R1.3 remain unexplored by several tools: A1.2 is not assessed at all by four tools out of 12; A2 and R1.2 by three tools out of 12 each; and R1.3 by two tools out of 12.

4.2.2 Assessment metrics: observed behaviours and FAIR principles discrepancies

In addition to the metrics not linked to a specific FAIR principle, we noticed that the implementation of some metrics was misaligned with the principle they were declared to target. The term implementation is used here in a comprehensive sense, covering metrics from both manual and automatic tools. By analysing the implementation of each metric, we assigned to it the FAIR principle or set of principles that sounded closest.

We identified three discrepancy cases: (i) from a FAIR principle to another, (ii) from a letter of the FAIR acronym to a FAIR principle of a different letter of the acronym (e.g. from A to R1.1), and (iii) from any declared or undeclared FAIR principle to a formulation that we consider beyond FAIRness (‘none’ in Figure 3 ).

An example of a metric with a discrepancy from one FAIR principle to another is ‘Data access information is machine readable’ declared for the assessment of A1, but rather attributable to I1. Likewise, the metric ‘Metadata is given in a way major search engines can ingest it for their catalogues (JSON-LD, Dublin Core, RDFa)’, declared for F4, can be rather linked to I1, as it leverages a serialisation point of view.

The metric ‘Which of the usage licenses provided by EASY did you choose in order to comply with the access rights attached to the data? Open access (CC0)’ with only the letter ‘A’ declared is instead a case in which the assessment concerns a different principle (i.e. R1.1).

Regarding the discrepancies from any declared or undeclared FAIR principle to a formulation that we consider beyond FAIRness, examples are the metrics ‘Tutorials for the tool are available on the tools homepage’ and ‘The tool’s compatibility information is provided’.

In addition to the three identified types of discrepancies, we also encountered metrics that were not initially assigned a FAIR principle or a corresponding letter; we nevertheless mapped these metrics to one of the FAIR principles. An example is the metric ‘Available in a standard machine-readable format’, attributable to I1. The beyond-FAIRness case, instead, is indicative of how wide the implementation spectrum of FAIRness assessment can be, to the point of moving particularly far from the formulation of the principles themselves. The metrics we have called ‘beyond FAIRness’ do not necessarily betray the objective of the principles, but they certainly call for technologies or solutions that cannot be strictly considered related to the FAIR principles.

Figure 3 shows the distribution of all the metrics in our sample resulting from the analysis and assignment to FAIR principles activity. Appendix A.2 reports the table with the detailed data.

This figure confirms that (a) all the principles are somehow assessed, (b) few tools assess all the principles (namely, EVA, FSH, OFA, and RDA), (c) a significant number of metrics (136 out of 1180) refer to more than one principle at the same time (the ‘many’), and (d) a significant number of metrics (170 out of 1180) fall outside the FAIR principles altogether (the ‘none’).

Figure 4 depicts the distribution of declared (Figure 2, detailed data in Appendix A.1) and observed (Figure 3, detailed data in Appendix A.2) metric intents with respect to FAIR principles. Apart from the absence of observed intents referring only to one of the overall FAIR areas, the distribution of the metrics’ observed intents highlights the great number of metrics that refer either to many FAIR principles or to none. Concerning the principles, the graph shows a significant growth in the number of metrics assessing F1 (from 76 to 114), F4 (from 39 to 62), A1.1 (from 26 to 56), I1 (from 56 to 113), R1 (from 56 to 79), R1.1 (from 44 to 73), and R1.2 (from 52 to 74). All in all, for 835 of the 1180 metrics analysed, the declared and the observed metric intents correspond (i.e. either (i) the referred principle corresponds, or (ii) the declared intent is F, A, I, or R while the observed intent is a specific principle of the same class). The cases of misalignment are discussed in detail in the remainder of the section.


Comparison of the metrics distributions with regard to their declared and observed intent.

While a declared metric intent is always linked to a single principle – or even to a single letter of the FAIR acronym – we noted that 136 metrics can be related to more than one FAIR principle at once. These correspond to the ‘many’ series in Figure 4, counting the number of times we associated more than one FAIR principle with a metric of a tool (see also Table A.2, column ‘many’).

Figure 5 shows the distribution of these co-occurrences among the FAIR principles we observed (see also Table A.3 in Section A.3).


Co-occurrences among the metrics’ observed FAIR principles, in numbers and percentages.

Such co-occurrences involve all FAIR principles. In some cases, assessment metrics on a specific principle are also considered to be about many diverse principles, notably: (i) metrics dealing with I1 also deal with either F1, F2, A1, I2, I3, or a Reusability-related principle; (ii) metrics dealing with R1.3 also deal with either F2, A1, A1.1, an Interoperability principle, or R1.2. The number of different principles we found co-occurring with I1 hints at the importance given to the machine-readability of metadata, which is a recurrent parameter in the assessments, particularly automated ones, to the point that it can be considered an implementation prerequisite notwithstanding the FAIR guidelines. The fact that R1.3 is the second principle for number of co-occurrences with other principles is an indicator of the role of the communities in shaping actual practices and workflows.
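In outline, co-occurrence counts of this kind can be obtained by tallying unordered principle pairs over the metrics linked to more than one principle (toy data below; the actual counts come from the accompanying data set):

```python
from collections import Counter
from itertools import combinations

# Each entry: the set of FAIR principles observed for one metric.
observed = [
    {"F2", "R1"},
    {"I1", "R1.3"},
    {"F2", "R1", "R1.3"},
]

pair_counts = Counter()
for principles in observed:
    # sort so that each unordered pair has a single canonical form
    for pair in combinations(sorted(principles), 2):
        pair_counts[pair] += 1

print(pair_counts[("F2", "R1")])  # 2
```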

In some cases, there is a significant number of co-occurrences between two principles, e.g. we observed that many metrics deal with both F2 and R1 (36) or I1 and R1.3 (35). The co-occurrences between F2 and R1 are strictly connected to the formulation of the principles and symptomatic of a missing clear demarcation between the two. The case of metrics with both I1 and R1.3 is ascribable to the overlapping of the ubiquitous machine-readable requirement and the actual implementation of machine-readable solutions by communities of practice.

We also observed metrics that we could not link to any FAIR principle ( Figure 3 ‘none’ series) because of the parameters used in the assessment. Examples of metrics we considered not matching any FAIR principle include (a) those focusing on the openness of the object since ‘FAIR is not equal to open’ ( Mons et al. 2017 ), (b) those focusing on the downloadability of the object, (c) those focusing on the long-term availability of the object since A2 only requires that the metadata of the object must be preserved, (d) those relying on the concept of data or metadata validity, e.g. a metric verifying that the contact information given is valid, (e) those focusing on trustworthiness (for repositories), and (f) those focusing on multilingualism of the digital object.

To identify the discrepancies between declared intents and observed behaviour, we considered a metric misaligned when the FAIR principle declared differed from the one we observed. In addition, all the metrics counted as ‘none’ in Figure 3 are discrepancies, since they involve concepts beyond FAIRness assessment. For the metrics referring only to a letter of the FAIR acronym, we based the misalignments on the discordance with the letter of the observed FAIR principle. Concerning the metrics linked to more than one FAIR principle, we considered as discrepancies only the cases where the declared principle or letter does not match any of the observed possibilities. Figure 6 documents these discrepancies (detailed data are in Appendix A.4).
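The rules just described can be summarised as a small decision function (an illustrative sketch; our actual classification was performed manually):

```python
def is_misaligned(declared, observed):
    """declared: a specific principle ('A1'), a bare acronym letter ('A'),
    or None; observed: the set of principles we assigned (an empty set
    means a 'beyond FAIRness' metric)."""
    if not observed:
        # 'none' metrics always count as discrepancies
        return True
    if declared is None:
        # no declared intent: nothing to contradict
        return False
    if len(declared) == 1 and declared in "FAIR":
        # letter-only intent: misaligned if no observed principle
        # shares that letter of the acronym
        return all(p[0] != declared for p in observed)
    # specific principle declared: a match with any observed principle
    # (the 'many' case) is enough to avoid a discrepancy
    return declared not in observed

print(is_misaligned("A", {"R1.1"}))       # True
print(is_misaligned("F4", {"I1", "F4"}))  # False
```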


Discrepancies between the metrics’ declared and observed FAIR principles, in numbers and percentages.

When looking at the observed intent, including the metrics in the ‘many’ column, all FAIR principles are in the codomain of the mismatches, except for A1.2 and A2 (in fact, there are no columns for them in Figure 6). Moreover, misaligned metrics for Findability and Accessibility are always relocated to other letters of the acronym, implying a higher tendency towards confusion in the assessment of the F and A principles.

While it is possible to observe misalignments in metric implementations that we linked to more than one principle, no such cases involve accessibility-oriented declared metrics. For metrics pertaining to the other FAIR areas, there are a few cases, mainly involving findability and metrics with no declared intent. No metrics that we could not link to a FAIR principle were found among those declared for F4, A1.2, and I2, indicating that checks on indexing (F4), authentication and authorisation (A1.2), and use of vocabularies (I2) tend to be less ambiguous.

Concerning metrics with findability-oriented declared intents, we did not observe misalignments with any findability principle. Still, we found misalignments with accessibility, interoperability, and reusability principles, including metrics that can be linked to more than one principle and metrics that we could not associate with any principle. Accessibility-related misalignments concern A1 (9), with references to the use of standard protocols to access metadata and to the resolvability of an identifier, and A1.1 (12), because of references to free accessibility to a digital object. Interoperability-related misalignments concern I1 (23) and are linked to references to machine-readability (e.g. the presence of machine-readable metadata, such as the JSON-LD format, or structured metadata in general) and semantic resources (e.g. the use of controlled vocabularies or knowledge representation languages like RDF). Reusability-related misalignments concern R1 (4), because of references to metadata that cannot be easily linked to a findability aim (e.g. the size of a digital object), R1.2 (7), as we observed references to versioning and provenance information, and R1.3 (3), for references to community standards (e.g. community-accepted terminologies). Concerning the findability-oriented metrics we classified as ‘many’ (18), we observed that they intertwine concepts pertaining to A1, A1.1, I1, I2, or R1.3. The metrics we could not link to any principle (19) include references to parameters such as free downloadability and the existence of a landing page.

Concerning metrics with accessibility-oriented declared intents, we did not observe misalignments with an accessibility principle. There is one misalignment with F2, regarding the existence of a title associated with a digital object, and a few with I1 (5), because of references to machine-readability (e.g. machine-readable access information) and semantic artefacts (e.g. controlled vocabularies for access terms). The majority of misalignments are observed with reusability: we found metrics involving R1 (9), with references to metadata elements related to access conditions (e.g. dc:rights) and to the current status of a digital object (e.g. owl:deprecated), R1.1 (2), because of mentions of the presence of a licence (e.g. a Creative Commons licence), and R1.2 (2), since there are references to versioning information (e.g. whether metadata on versioning is provided). There are also metrics (43) that we could not link to any principle, which refer to parameters such as the availability of tutorials, long-term preservation of digital objects, and free downloadability.

Concerning metrics with interoperability-oriented declared intents, mismatches concern F1 (11), with references to the use of identifiers (e.g. URI), A1 (2), because of references to the resolvability of a metadata element identifier, I1 (5), for checks limited to the scope of I1 even if declared to assess I2 or I3 (e.g. metadata represented in an RDF serialisation), and I3 (4), because of checks only aimed at verifying that other semantic resources are used even if declared to assess I2. We also observed metrics declared to assess I2 (2) linked to multiple principles; they intertwine aspects pertaining to F2, I1, and R1.3. Except for those declared for I2, there are 20 interoperability-oriented metrics that we could not link to any principle (e.g. citing the availability of source code in the case of software).

Concerning metrics with reusability-oriented declared intents, mismatches regard F4 (1), because of a reference to software hosted in a repository, I1 (6), with references to machine-readability, specific semantic artefacts (e.g. Schema.org), or lists of formats, and I3 (1), as there is a reference to ontology elements defined through a property restriction or an equivalent class; however, they mainly involve reusability principles. Looking at reusability-to-reusability mismatches: (i) for R1-declared metrics, we observed mismatches with R1.1 (2) concerning licences, R1.2 (2), because of references to provenance information, and R1.3 (3), since there are references to community-specific or domain-specific semantic artefacts (e.g. the Human Phenotype Ontology); (ii) for R1.1-declared metrics, there are mismatches concerning R1 (3), since there are references to access rights metadata elements (e.g. cc:morePermissions); (iii) for R1.2-declared metrics, we observed mismatches concerning R1 (1) and R1.1 (1), because of references to contact and licence information respectively; (iv) for R1.3-declared metrics, mismatches concern R1.1 (2), since there are references to licences. Only in the case of one R1.2-declared metric did we observe a link with more than one FAIR principle, F2 and R1, because of references to citation information. The reusability-declared metrics we could not link to any principle (40) concern references such as the availability of helpdesk support or the existence of a rationale among the documentation provided for a digital object.

Concerning the metrics whose intent was not declared (80), we observed that 40% (32) are linked to at least one principle, while the remaining 60% (48) are beyond FAIRness. In this set we found metrics concerning F4 (10), e.g. verifying whether a software source code is in a registry; I1 (1), a metric verifying the availability of a standard machine-readable format; R1 (2), e.g. a reference to terms of service; R1.1 (4), because of references to licences; and R1.2 (2), e.g. a metric verifying whether all the steps to reproduce the data are provided. Some metrics can be linked to more than one principle (13); these intertwine aspects pertaining to F2, F3, I1, I2, I3, R1, and R1.2. An example is a reference to citation information, which can be linked to F2 and R1.

4.2.3 Assessment metrics: approaches and technologies

Having observed that assessment metrics have been proposed for each FAIR principle, it is important to understand how these metrics have been formulated in practice in terms of approaches and technologies with respect to the specific principles they target.

Analysing the metrics explicitly having one of the FAIR principles as the target of their declared intent (cf. Section 4.2.1), it emerged that some (101 out of 677) are simply implemented by repeating the principle formulation or part of it. These metrics offer no help or indication for the specific assessment task, which remains as generic and open to diverse interpretations as the principle formulation itself. The rest of the implementations are summarised in Appendix A.5, together with concrete examples, to offer an overview of the wealth of approaches proposed for implementing FAIR assessment rules. These approaches include identifier-centred ones (e.g. checking whether the identifier is compliant with a given format, belongs to a list of controlled values, or can be successfully resolved), metadata-element-centred ones (e.g. verifying the presence of a specific metadata element), metadata-value-centred ones (e.g. verifying whether a specific value or string is used for compiling a given metadata element), and service-based ones (e.g. checking whether an object can be found by a search engine or a registry). All approaches involve more than one FAIR area, except for: (a) policy-centred approaches (i.e. looking for the existence of a policy regarding identifier persistency), used only for F1, (b) documentation-centred approaches (i.e. a URL to a document describing the required assessment feature), used only for A1.1, A1.2, and A2 verifications, (c) service-centred approaches (i.e. the presence of a given feature in a registry or a repository), used only for F4, and (d) metadata-schema-centred approaches (i.e. verifying that a schema, rather than a single element of it, is used), used for R1.3.
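Two of the recurring approach families can be sketched as follows (hypothetical record layout and element names; actual implementations vary per tool):

```python
def element_present(record, element):
    """Metadata-element-centred check: is the element present at all
    and filled with a non-empty value?"""
    return element in record and record[element] not in (None, "")

def value_in_controlled_list(record, element, allowed):
    """Metadata-value-centred check: is the element compiled with a
    value from an accepted controlled list?"""
    return record.get(element) in allowed

record = {"dc:rights": "CC-BY-4.0"}
print(element_present(record, "dc:rights"))                                 # True
print(value_in_controlled_list(record, "dc:rights", {"CC0", "CC-BY-4.0"}))  # True
```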

The most prevalent approaches are those based on the label of the metadata element employed to describe an object, and those based on an identifier (assigned to the object or identifying a metadata element). The former are used for assessing 14 out of 15 principles (the exception being A2), while the latter are applied in the assessment of 13 out of 15 principles (excluding F4 and A2).

By analysing the metrics and, when possible, their implementations, we identified 535 metrics mentioning or using technologies for the specific assessment purpose, four of which refer only to the generic use of linked data. Of the 535 metrics, 174 are declared to assess findability, 92 accessibility, 120 interoperability, and 147 reusability, while two are not explicitly linked with any FAIR principle or area. Overall, these metrics refer to 215 distinct technologies (the term ‘technology’ is used here in its broadest sense, covering very diverse typologies ranging from (meta)data formats to standards, semantic technologies, protocols, and services). This count excludes a generic reference, made by one metric, to the IANA media types, which alone number 2,007. The selected technologies can be categorised as (i) application programming interfaces (referred to by 19 metrics), (ii) formats (91 metrics), (iii) identifiers (184 metrics), (iv) software libraries (22 metrics), (v) licences (two metrics), (vi) semantic artefacts (291 metrics), (vii) protocols (29 metrics), (viii) query languages (five metrics), (ix) registries (28 metrics), (x) repositories (14 metrics), and (xi) search engines (five metrics). When reading the number of metrics per technology class, it should be noted that each metric can mention or use one or more technologies.

Figure 7 depicts how these technologies are exploited across the principles using the metric’s declared intent for classifying the technology.

Figure 7: Technology types per declared metric intent.

The technologies most cited or used in the metrics or their implementations are semantic artefacts and identifiers. In particular, Dublin Core and Schema.org are the most mentioned, followed by standards related to knowledge representation languages (Web Ontology Language and Resource Description Framework) and ontologies (Ontology Metadata Vocabulary and Metadata for Ontology Description). The most cited identifier is the uniform resource locator (URL), followed by uniform resource identifiers (even if, technically, all URLs are URIs) and, among persistent identifiers, digital object identifiers (DOIs).

Semantic artefacts are among the most cited for findability assessments (e.g. Dublin Core, Schema.org, Web Ontology Language, Metadata for Ontology Description, Ontology Metadata Vocabulary, Friend of a Friend, and Vann), followed by identifiers (URL, DOI, URI).

Identifiers are the most cited technologies for accessibility assessments (URL, URI, Handle, DOI, InChi key), followed by protocols (HTTP, OAI-PMH), semantic artefacts (Web Ontology Language, Dublin Core), and formats (XML).

The most mentioned technologies for interoperability assessments are semantic artefacts (Ontology Metadata Vocabulary, Dublin Core, Friend of a Friend, Web Ontology Language) and formats (JSON-LD, XML, RDF/XML, Turtle), followed by identifiers (URI, DOI, Handle).

For reusability assessments, besides Dublin Core, Schema.org, Metadata for Ontology Description (MOD), the DataCite metadata schema, and Open Graph, semantic artefacts specific to provenance (Provenance Ontology and Provenance, Authoring and Versioning) and to licensing (Creative Commons Rights Expression Language) also figure prominently. Identifiers (URLs) and formats (XML) are also among the most used technologies for reusability purposes.

Ultimately, HTTP-based and linked data technologies are the most used technologies in the metrics, whether considering all metrics at once or focusing on a single dimension of the FAIR principles.

5 Discussion

The current state of FAIR assessment practices is characterised by several issues, linked to the way the assessment is performed at both the tool and the metric level. In the remainder of this section, we critically discuss what emerged in Section 4 concerning assessment tools and assessment metrics.

5.1 Assessment tools

The variety of the tools and their characteristics discussed in Section 4.1 demonstrates the various flavours of solutions that can be envisaged for FAIR assessment. This variety is due to several factors, namely (a) the willingness to assess diverse objects (from any digital object to software), (b) the need to rely on automatic, manual, or hybrid approaches, and (c) the necessity to respond to specific settings, either through adaptability or by being natively designed to be discipline- or community-specific. This denotes a certain discretion in the interpretation and application of the principles themselves, in addition to producing different results and scores for the same product ( Krans et al. 2022 ). In other words, the aspirational formulation of the FAIR principles is hard to reconcile with precise measurement.

The characteristics of the tools and their assessment approaches impact assessment tasks and results. Manual assessment tools rely on the assessor’s knowledge, so they do not typically need to be as specific as automated ones; e.g. when citing a specific technology expected to be exploited to implement a principle, they do not have to clarify how that technology is expected to be used, thus catering for diverse interpretations by different assessors. Manual assessment practices therefore tend to be subjective, making it challenging to achieve unanimous consensus on results. Automatic assessment tools require that (meta)data be machine-readable and only apparently solve the subjectivity issue. While it is true that automatic assessments have to rely on a defined and granular process, which leaves no space for interpretation, every automated tool actually proposes its own FAIRness implementation by defining that granular process itself, especially tools that do not allow the creation and integration of user-defined metrics. Consequently, the assessment process is objective, but the results are still subjective, biased by the specific interpretation of the FAIR principles implemented by the tool developer.

Although the trends observed for tool characteristics in Section 4.1 seem to suggest some tendencies (namely, in recent years more automatic tools than manual ones were developed, more non-adaptable tools than adaptable ones were released, and discipline-agnostic and community-agnostic tools were emerging over the others), it is almost impossible to establish whether tools with these characteristics actually serve the needs of communities better than others. The specific nature of FAIRness assessment is likely to promote the development of tools where community-specific FAIR implementation choices can be easily and immediately channelled into assessment pipelines, regardless of the tool’s design decisions on methodology, adaptability, etc.

5.2 Assessment metrics

The following three subsections retrace the analysis of assessment metrics discussed in the subsections of Section 4.2 and reflect on the findings. In particular, they elaborate on, respectively, the gaps between the declared intents of the metrics and the FAIR principles, the discrepancies between declared intents and observed behaviours, and the set of technologies cited for assessing FAIRness.

5.2.1 Assessment approaches: gaps with respect to FAIR principles

The results reported in Section 4.2.1 highlighted the apparently comprehensive coverage of the proposed metrics with respect to the principles, the fuzziness of some metrics, and the variety of metric implementations for assessing the same principle.

Regarding the coverage , metrics exist to assess every principle, while the number of metrics per principle and per tool varies depending on the characteristics of the principle and of the tool; this does not guarantee that all principles are equally assessed. Some principles are multifaceted by formulation, which might lead to many metrics for assessing them. This is the case for F1, which requires uniqueness and persistence of identifiers; the number of metrics dedicated to assessing it was the highest we found (Table A.3). However, F1 also has the multifaceted ‘ (Meta)data ’ formulation, which occurs in many other principles without leading to a proliferation of assessment metrics. R1.1 is similar to F1, since it covers the (meta)data aspect as well as the accessibility and intelligibility of the licence, yet this has not caused a proliferation of metrics. In contrast with these two principles, which are explicitly assessed by all the tools declaring an association between metrics and principles (together with F2 and I1), there are multifaceted principles, like A1.2 and R1.2, that were not explicitly assessed by some tools, specifically automatic tools, which probably face issues in assessing them programmatically. This diversity of approaches for assessing the same principle further demonstrates the gaps between the principles and their many implementations, thus making any attempt to assess FAIRness in absolute terms almost impossible and meaningless .

Regarding the fuzziness , we observed metrics that either replicate or rephrase the principle itself, thus remaining as generic as the principles are. The effectiveness of these metrics is limited even in the case of manual assessment tools: in practice, using these metrics, the actual assessment check remains hidden either in the assessor’s understanding or in the tool implementation.

Regarding the variety of implementations , every implementation of a metric inevitably comes with implementation choices that impact the scope of cases passing the assessment check. In fact, it is not feasible to implement metrics capturing all the real-world cases that can be considered suitable for a positive assessment of a given principle. Consequently, even if ‘FAIR is not equal to RDF, Linked Data, or the Semantic Web’ ( Mons et al. 2017 ), linked data technologies are understandably among the main solutions adopted for implementing assessment metrics. However, the reuse of common implementations across tools is neither promoted nor facilitated; FAIR Implementation Profiles (FIPs) ( Schultes et al. 2020 ) and metadata templates ( Musen et al. 2022 ) could facilitate this by identifying sets of community standards and requirements to be then exploited by various tools. The availability of ‘implementation profiles’ could also help to deal with the principles requiring ‘rich metadata’ (namely F2 and R1), whose dedicated metrics seem quite poor for both discoverability and reusability aspects.

5.2.2 Assessment metrics: observed behaviours and FAIR principles discrepancies

The results reported in Section 4.2.2 revealed 345 misaligned metrics ( Figure 6 , Table A.4). Overall, we found metrics that seemed very discretionary and did not immediately adhere to the FAIR principles, injecting into assessment pipelines checks that go beyond FAIRness . Although these misalignments result from our reading of the FAIR principles, they reveal the following recurring issues, characterising metric implementations that realise surprising or unexpected interpretations of aspects of the FAIR principles.

Access rights. Checks verifying the existence of access rights or access condition metadata are used for assessing accessibility, in particular the A1 principle. This is problematic because (a) the accessibility principles focus on something different, e.g. the protocols used and the long-term availability of (meta)data, and (b) such checks overlook the equal treatment A1 envisages for both data and metadata.

Long-term preservation. This is used to assess digital objects rather than just metadata (as requested by A2). In particular, long-term preservation-oriented metrics were proposed for assessing accessibility and reusability (R1.3), thus introducing an extensive interpretation of principles that require (domain- and community-oriented) standardised ways of accessing the metadata.

Openness and free downloadability. These recur among the metrics and are also used contextually for assessing adherence to community standards (R1.3). When used alone, openness-related metrics are employed for assessing reusability, while free-download-related metrics are used for assessing findability and accessibility (in particular A1.1). Strictly speaking, it has already been clarified that none of the FAIR principles require data to be ‘open’ or ‘free’ ( Mons et al. 2017 ). Nonetheless, there is a tendency to give a positive, or more positive, assessment when the object is open. While this is in line with the general intentions of the principles (increasing the reusability and re-use of data and other research products), it may be at odds with the need to protect certain types of data (e.g. sensitive data, commercial data, etc.).

Machine-readability. This metadata characteristic is found in metrics assessing findability (F2, F4), accessibility (A1), and reusability (R1.3). As the FAIR principles were conceived to lower the barriers of data discovery and reuse for both humans and machines, machine-readability is at the very core of the requirements for the FAIRification of a research object. While it is understandably emphasised across the assessment metrics, the concept is frequently used as an additional assessment parameter in metrics assessing principles other than those defined for interoperability.

Resolvability of identifiers. This aspect characterises metrics assessing findability (specifically F1, F2, and F3) and interoperability (I2). While resolvability is widely associated with persistent and unique identifiers and is indeed a desirable characteristic, we argue that it is not inherently connected to the identifier itself; URNs are a typical example. In the context of the FAIR principles, resolvability should be considered an aspect of accessibility, specifically of A1, which concerns retrievability through an identifier and the use of a standardised communication protocol.

Validity. Metadata or information validity is used for assessing findability, accessibility, interoperability (specifically I3), and reusability (in particular R1), i.e. FAIR aspects that call for ‘rich’ metadata or metadata suitable for a certain scope. However, although metadata are indeed expected to be ‘valid’ to play their envisaged role, FAIR in reality advocates and requires a plurality of metadata to facilitate the exploitation of objects in a wider variety of contexts, without tackling data quality issues.

Versions. The availability of version information or different versions of a digital object is used for assessing findability and accessibility (specifically the A2 principle).
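The distinction drawn above between an identifier being valid and being resolvable can be sketched as follows. The regular expressions are simplified approximations, used only to illustrate why a syntactic check and a resolvability check are separate concerns (a URN passes the former but, lacking a standardised resolution protocol, fails the latter).

```python
import re
import urllib.request

# Simplified, illustrative patterns -- not full RFC-grade validators.
URN_PATTERN = re.compile(r"^urn:[a-z0-9][a-z0-9-]{0,31}:.+$", re.IGNORECASE)
URL_PATTERN = re.compile(r"^https?://\S+$")

def is_valid_identifier(identifier: str) -> bool:
    """Syntactic check only: is this a plausible URN or HTTP(S) URL?"""
    return bool(URN_PATTERN.match(identifier) or URL_PATTERN.match(identifier))

def is_resolvable(identifier: str, timeout: float = 10.0) -> bool:
    """Resolvability check: only meaningful for identifiers bound to a
    standardised communication protocol such as HTTP (cf. principle A1)."""
    if not URL_PATTERN.match(identifier):
        return False  # e.g. a valid URN carries no resolution mechanism
    try:
        with urllib.request.urlopen(identifier, timeout=timeout) as response:
            return response.status < 400
    except OSError:
        return False
```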

5.2.3 Assessment metrics: approaches and technologies

The fact that the vast majority of approaches encompass more than one FAIR area (Section 4.2.3) indicates an assessment that is inherently metadata-oriented: it is indeed the metadata, rather than the object itself, that are used in the verifications. This also explains why metrics developed for data assessment tools are applicable to evaluating any digital object.

Challenges arise when evaluating accessibility principles (namely, A1.1, A1.2, and A2), which are the only ones for which an approach based on the availability of documentation pertaining to an assessment criterion (e.g. a metadata retention policy) is found. This approach further highlights the persistent obstacles in developing automated solutions that address all the FAIR principles comprehensively.

The results reported in Section 4.2.3 about the technologies referred to in metric implementations suggest an evident gap between the willingness to provide communities with FAIR assessment tools and metrics and the specific decisions and needs characterising the processes of FAIRification and FAIRness assessment in community settings. There is no single technology that is globally considered suitable for implementing any of the FAIR principles , and, by the formulation of the principles, each community is entitled to pick any technology it deems suitable for implementing a FAIR principle. The fact that some tools cater for injecting community-specific assessment metrics into their assessment pipelines aims to compensate for this gap, but brings the risk of ‘implicit knowledge’: when a given technology is a de-facto standard in a context or for a community, it is likely that this technology is taken for granted and disappears from the assessment practices produced by the community itself.

5.3 FAIR assessment prospects

The findings and discussions reported so far allow us to envisage some potential enhancements that might make future FAIR assessments more effective. It is desirable for forthcoming FAIR assessment tools to (a) make the assessment process as automatic as possible, (b) make the assessment process specification openly available, including details on the metrics exploited, (c) allow assessors to inject context-specific assessment specifications and metrics, and (d) provide assessors with concrete suggestions (possibly AI-based) aimed at augmenting the FAIRness of the assessed objects. All in all, assessment tools should help counter the perception that FAIRness is a ‘yes’ or ‘no’ feature; every FAIR assessment exercise or FAIRness indicator associated with an object should always be accompanied by context-related documentation clarifying the settings that led to it.
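A minimal sketch of such a tool design, covering the injection of context-specific metrics and the context-documented (rather than yes/no) report. All names here are hypothetical; real tools expose far richer metric and profile models.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Metric:
    name: str
    principle: str               # FAIR principle the metric targets, e.g. "F1"
    check: Callable[[dict], bool]

@dataclass
class AssessmentPipeline:
    context: str                 # documentation of the community setting
    metrics: list = field(default_factory=list)

    def register(self, metric: Metric) -> None:
        """Inject a community-specific metric into the pipeline."""
        self.metrics.append(metric)

    def assess(self, record: dict) -> dict:
        """Return per-metric results together with the context that produced
        them, rather than a bare FAIR/not-FAIR verdict."""
        return {
            "context": self.context,
            "results": {m.name: m.check(record) for m in self.metrics},
        }

# Usage: a community registers its own licence check against its profile.
pipeline = AssessmentPipeline(context="marine-data community profile v1")
pipeline.register(Metric(
    name="licence-present",
    principle="R1.1",
    check=lambda r: bool(r.get("license")),
))
report = pipeline.assess({"license": "CC-BY-4.0"})
```

Keeping the context string attached to every report is the point of the sketch: the same record assessed under a different community profile may legitimately yield different results.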

It is also desirable to gradually reduce the need for FAIR assessment tools by developing data production and publication pipelines that are FAIR ‘ by design ’. Although any such pipeline will indeed implement a specific interpretation of the FAIR principles, the one deemed suitable for the specific context, it will certainly result in a new generation of datasets, and more generally resources, that are born with a flavour of FAIRness. These datasets should be accompanied by metadata clarifying the specification implemented by the pipeline to make them FAIR (this was already envisaged in R1.2). The richer and wider in scope the specification driving the FAIR by design pipelines, the larger the set of contexts benefitting from the FAIRification. Data Management Plans might play a crucial role ( David et al. 2023 ; Salazar et al. 2023 ; Specht et al. 2023 ) in promoting the development of documented FAIR by design management pipelines. The FIP2DMP pipeline can be used to automatically inform Data Management Plans about the decisions taken by a community regarding the use of FAIR Enabling Resources ( Hettne et al. 2023 ). This can facilitate easier adoption of community standards by the members of that community and promote FAIR by design data management practices.

In the development of FAIR by design pipelines, community involvement is pivotal. Indeed, it is within each community that the requirements for a FAIR implementation profile to be followed can be established. Since it is ultimately the end-user who verifies the FAIRness of a digital object, particularly in terms of reusability, it is essential for each community to foster initiatives that define actual FAIR implementations through a bottom-up process, aiming to achieve an informed consensus on machine-actionable specifics. An example in this direction is NASA, which, as a community, has committed to establishing interpretative boundaries and actions to achieve and measure the FAIRness of their research products in the context of their data infrastructures ( SMD Data Repository Standards and Guidelines Working Group 2024 ).

Community-tailored FAIR by design pipelines would, on one hand, overcome the constraints of a top-down defined FAIRness, which may not suit the broad spectrum of existing scenarios. One of these constraints is exemplified by the number of technologies that a rule-based assessment tool ought to incorporate. While a community may establish reference technologies, it is far more challenging for a checklist to suffice for the needs of diverse communities. On the other hand, community-tailored FAIR by design pipelines can aid in establishing a concept of minimum requirements for absolute FAIRness, derived from the intersection of different specifications, or, on the contrary, in proving its unfeasibility.

Instead of attempting to devise a tool for a generic FAIR assessment within a rule-based control context, which cannot cover the different scenarios in which research outputs are produced, it may be more advantageous to focus on community-specific assessment tools. Even in this scenario, the modularity of the tools and the granularity of the assessments performed would be essential for creating an adaptable instrument that changes with the ever-evolving technologies and standards.

For examining the FAIRness of an object from a broad standpoint, large language models (LLMs) could serve as an initial benchmark for a preliminary FAIR evaluation. Such an approach would have the advantage of not being bound to a rule-based verification, since the model would be based on a comprehensive training set, allowing it to identify a wide range of possibilities, while managing to provide a consistent and close interpretation of the FAIR principles through different scenarios.

6 Conclusion

This study analysed 20 FAIR assessment tools and their 1180 related metrics to answer four research questions and develop a comprehensive and up-to-date view of FAIR assessment.

The tools were analysed along six axes (assessment unit, assessment methodology, adaptability, discipline specificity, community specificity, and provisioning mode), highlighting the emergence of trends over time: the increasing variety in the assessment units and the preference for automatic assessment methodologies, non-adaptable assessment methods, discipline and community generality, and the as-a-Service provisioning model. The inherent subjectivity in interpreting and applying the FAIR principles leads to a spectrum of assessment solutions, underscoring the challenge of reconciling the aspirational nature of the FAIR principles with precise measurement. Manual assessment practices fail to yield consistent results for the very reason that makes them a valuable resource: they facilitate adaptability to the variety of assessment contexts by avoiding extensional formulations. Automated tools, although objective in their processes, are not immune to subjectivity as they reflect the biases and interpretations of their developers. This is particularly evident in tools that do not support user-defined metrics, which could otherwise allow for a more nuanced FAIR assessment.

The metrics were analysed with respect to their coverage of the FAIR principles, the discrepancies between the declared intent of the metrics and the aspects actually assessed, and the approaches and technologies employed for the assessment. This revealed gaps, discrepancies, and high heterogeneity between the existing metrics and the principles. This was to be expected, given the difference in intent between a set of principles that are aspirational by design, oriented to allow many different approaches to rendering the target items FAIR, and metrics called on to assess concrete implementations of the FAIR principles in practice. The principles do not represent a standard to adhere to ( Mons et al. 2017 ), and some of them are multifaceted, while metrics have to be implemented either by making decisions on principle implementations, so as to make the assessment useful, or by remaining at the same level of genericity as the principle, thus leaving room for interpretation by the assessor and exposing the assessment to personal biases. Multifaceted principles are not uniformly assessed, with tools, especially automated ones, struggling to evaluate them programmatically. Accessibility principles, in particular, are not consistently addressed. The controls envisaged for assessing FAIRness also encompass aspects that extend beyond the original intentions of the principles’ authors. Concepts such as open, free, and valid are in fact employed within the context of FAIR assessment, reflecting a shifting awareness of the interconnected yet distinct issues associated with data management practices. Just as closed digital objects can be FAIR, data and metadata that are not valid may comply with the principles as well, depending on the context in which they were produced. The diversity of assessment approaches for the same principle and the absence of a universally accepted technology for implementing the FAIR principles, reflecting the diverse needs and preferences of scientific communities, further highlight the variability in interpretation, ultimately rendering absolute assessments of FAIRness impractical and, arguably, nonsensical.

Forthcoming FAIR assessment tools should include among their features the possibility of implementing new checks and should allow user-defined assessment profiles. The ‘publication’ of metrics would allow the development of a repository or registry of FAIR assessment implementations, fostering their peer review and their reuse or repurposing by different assessment tools, ultimately enabling and promoting awareness of the available solutions without depending on a specific tool. The recently proposed FAIR Cookbook (Life Science) ( Rocca-Serra et al. 2023 ) goes in this direction. In addition, the need for assessment tools will likely diminish if FAIR-by-design data production and publication pipelines are developed, leading to FAIR-born items. Of course, FAIR-born items are not universally FAIR; they are simply compliant with the specific implementation choices decided by the data-publishing community in their FAIR-by-design pipeline. Rather than trying to define a FAIRness that fits all purposes, shifting the focus from generic FAIR assessment solutions to community-specific ones would bring better results in the long run. A bottom-up approach would yield greater benefits, both short-term and long-term, as it would enable the immediate production of results informed by the specific needs of each community, thus ensuring immediate reusability. Furthermore, it would facilitate the identification of commonalities, thereby allowing for a shared definition of a broader FAIRness. LLMs could bring advantages to FAIR assessment processes by untying them from rule-based constraints and by ensuring a consistent interpretation of the FAIR principles amidst the variety characterising scientific settings and outputs.

All in all, we argue that FAIRness is a valuable concept, yet FAIR is by design far from being a standard or a concrete specification whose compliance can be univocally assessed and measured. The FAIR principles were proposed to guide data producers and publishers; thus, FAIRness assessment tools are expected to help these key players identify possible limitations in their data management practices with respect to good data management and stewardship.

Data Accessibility Statements

The data that support the findings of this study are openly available on Zenodo at https://doi.org/10.5281/zenodo.10082195 .

Additional File

The additional file for this article can be found as follows:

Appendixes A.1 to A.5. DOI: https://doi.org/10.5334/dsj-2024-033.s1

Funding Statement

Funded by: European Union’s Horizon 2020 and Horizon Europe research and innovation programmes.

Acknowledgements

We warmly thank D. Castelli (CNR-ISTI) for her valuable support and the many helpful comments she gave during the preparation of the manuscript. We sincerely thank the anonymous reviewers for their valuable feedback.

Funding information

This work has received funding from the European Union’s Horizon 2020 and Horizon Europe research and innovation programmes under the Blue Cloud project (grant agreement No. 862409), the Blue-Cloud 2026 project (grant agreement No. 101094227), the Skills4EOSC project (grant agreement No. 101058527), and the SoBigData-PlusPlus project (grant agreement No. 871042).

Competing Interests

The authors have no competing interests to declare.

Author Contributions

  • LC: Conceptualization, Funding acquisition, Methodology, Supervision, Validation, Visualization, Writing.
  • DM: Data curation, Formal Analysis, Investigation, Writing.
  • GP: Data curation, Formal Analysis, Investigation, Writing.

References

Aguilar Gómez, F 2022 FAIR EVA (Evaluator, Validator & Advisor). Spanish National Research Council. DOI: https://doi.org/10.20350/DIGITALCSIC/14559

Amdouni, E, Bouazzouni, S and Jonquet, C 2022 O’FAIRe: Ontology FAIRness Evaluator in the AgroPortal Semantic Resource Repository. In: Groth, P, et al. (eds.), The Semantic Web: ESWC 2022 Satellite Events . Cham: Springer International Publishing (Lecture Notes in Computer Science). pp. 89–94. DOI: https://doi.org/10.1007/978-3-031-11609-4_17  

Ammar, A, et al. 2020 A semi-automated workflow for fair maturity indicators in the life sciences. Nanomaterials , 10(10): 2068. DOI: https://doi.org/10.3390/nano10102068  

Bahim, C, et al. 2020 The FAIR Data Maturity Model: An approach to harmonise FAIR Assessments. Data Science Journal , 19: 41. DOI: https://doi.org/10.5334/dsj-2020-041  

Bahim, C, Dekkers, M and Wyns, B 2019 Results of an Analysis of Existing FAIR Assessment Tools . RDA Report. DOI: https://doi.org/10.15497/rda00035  

Bonello, J, Cachia, E and Alfino, N 2022 AutoFAIR-A portal for automating FAIR assessments for bioinformatics resources. Biochimica et Biophysica Acta (BBA) – Gene Regulatory Mechanisms , 1865(1): 194767. DOI: https://doi.org/10.1016/j.bbagrm.2021.194767  

Clarke, D J B, et al. 2019 FAIRshake: Toolkit to Evaluate the FAIRness of Research Digital Resources. Cell Systems , 9(5): 417–421. DOI: https://doi.org/10.1016/j.cels.2019.09.011  

Czerniak, A, et al. 2021 Lightweight FAIR assessment in the OpenAIRE Validator. In: Open Science Fair 2021 . Available at: https://pub.uni-bielefeld.de/record/2958070 .  

David, R, et al. 2023 Umbrella Data Management Plans to integrate FAIR Data: Lessons from the ISIDORe and BY-COVID Consortia for Pandemic Preparedness. Data Science Journal , 22: 35. DOI: https://doi.org/10.5334/dsj-2023-035  

d’Aquin, M, et al. 2023 FAIREST: A framework for assessing research repositories. Data Intelligence , 5(1): 202–241. DOI: https://doi.org/10.1162/dint_a_00159  

De Miranda Azevedo, R and Dumontier, M 2020 Considerations for the conduction and interpretation of FAIRness evaluations. Data Intelligence , 2(1–2): 285–292. DOI: https://doi.org/10.1162/dint_a_00051

Devaraju, A and Huber, R 2020 F-UJI – An automated FAIR Data Assessment tool. Zenodo . DOI: https://doi.org/10.5281/ZENODO.4063720  

Gaignard, A, et al. 2023 FAIR-Checker: Supporting digital resource findability and reuse with Knowledge Graphs and Semantic Web standards. Journal of Biomedical Semantics , 14(1): 7. DOI: https://doi.org/10.1186/s13326-023-00289-5  

Garijo, D, Corcho, O and Poveda-Villalòn, M 2021 FOOPS!: An ontology pitfall scanner for the FAIR Principles. [Posters, Demos, and Industry Tracks]. In: International Semantic Web Conference (ISWC) 2021.  

Gehlen, K P, et al. 2022 Recommendations for discipline-specific FAIRness Evaluation derived from applying an ensemble of evaluation tools. Data Science Journal , 21: 7. DOI: https://doi.org/10.5334/dsj-2022-007  

Goble, C, et al. 2020 FAIR Computational Workflows. Data Intelligence , 2(1–2): 108–121. DOI: https://doi.org/10.1162/dint_a_00033  

González, E, Benítez, A and Garijo, D 2022 FAIROs: Towards FAIR Assessment in research objects. Lecture Notes in Computer Science, vol 13541 In: Silvello, G, et al. (eds.), Linking Theory and Practice of Digital Libraries . Cham: Springer International Publishing. pp. 68–80. DOI: https://doi.org/10.1007/978-3-031-16802-4_6  

Hettne, K M, et al. 2023 FIP2DMP: Linking data management plans with FAIR implementation profiles. FAIR Connect , 1(1): 23–27. DOI: https://doi.org/10.3233/FC-221515  

Jacobsen, A, et al. 2020 FAIR Principles: Interpretations and implementation considerations. Data Intelligence , 2(1–2): 10–29. DOI: https://doi.org/10.1162/dint_r_00024  

Katz, D S, Gruenpeter, M and Honeyman, T 2021 Taking a fresh look at FAIR for research software. Patterns , 2(3): 100222. DOI: https://doi.org/10.1016/j.patter.2021.100222  

Krans, N A, et al. 2022 FAIR assessment tools: evaluating use and performance. NanoImpact , 27: 100402. DOI: https://doi.org/10.1016/j.impact.2022.100402  

Lamprecht, A L, et al. 2020 Towards FAIR principles for research software. Data Science , 3(1): 37–59. DOI: https://doi.org/10.3233/DS-190026  

Mangione, D, Candela, L and Castelli, D 2022 A taxonomy of tools and approaches for FAIRification, In: 18th Italian Research Conference on Digital Libraries. IRCDL, Padua, Italy, 2022.  

Matentzoglu, N, et al. 2018 MIRO: guidelines for minimum information for the reporting of an ontology. Journal of Biomedical Semantics , 9(1):. 6. DOI: https://doi.org/10.1186/s13326-017-0172-7  

Mons, B, et al. 2017 Cloudy, increasingly FAIR; revisiting the FAIR Data guiding principles for the European Open Science Cloud. Information Services & Use , 37(1): 49–56. DOI: https://doi.org/10.3233/ISU-170824  

Musen, M A, O’Connor, M J, Schultes, E, et al. 2022 Modeling community standards for metadata as templates makes data FAIR. Sci Data , 9: 696. DOI: https://doi.org/10.1038/s41597-022-01815-3  

Rocca-Serra, P, et al. 2023 The FAIR Cookbook – The essential resource for and by FAIR doers. Scientific Data , 10(1): 292. DOI: https://doi.org/10.1038/s41597-023-02166-3  

Salazar, A, et al. 2023 How research data management plans can help in harmonizing open science and approaches in the digital economy. Chemistry – A European Journal , 29(9): e202202720. DOI: https://doi.org/10.1002/chem.202202720  

Schultes, E, Magagna, B, Hettne, K M, Pergl, R, Suchánek, M and Kuhn, T. 2020 Reusable FAIR implementation profiles as accelerators of FAIR convergence. In: Grossmann, G and Ram, S (eds.), Advances in Conceptual Modeling . ER 2020, Lecture Notes in Computer Science, Vol. 12584. Cham: Springer. DOI: https://doi.org/10.1007/978-3-030-65847-2_13  

SMD Data Repository Standards and Guidelines Working Group 2024 How to make NASA Science Data more FAIR . Available at: https://docs.google.com/document/d/1ELb2c7ajYywt8_pzHsNq2a352YjgzixmDh5KP4WfY9s/edit?usp=sharing .  

Soiland-Reyes, S, et al. 2022 Packaging research artefacts with RO-Crate. Data Science , 5(2): 97–138. DOI: https://doi.org/10.3233/DS-210053  

Specht, A, et al. 2023 The Value of a data and digital object management plan (D(DO)MP) in fostering sharing practices in a multidisciplinary multinational project. Data Science Journal , 22: 38. DOI: https://doi.org/10.5334/dsj-2023-038  

Sun, C, Emonet, V and Dumontier, M 2022 A comprehensive comparison of automated FAIRness Evaluation Tools, In: Semantic Web Applications and Tools for Health Care and Life Sciences. 13th International Conference on Semantic Web Applications and Tools for Health Care and Life Sciences. Leiden, Netherlands (Virtual Event) on 10th–14th January 2022, pp. 44–53.  

Thompson, M, et al. 2020 Making FAIR easy with FAIR Tools: From creolization to convergence. Data Intelligence , 2(1–2): 87–95. DOI: https://doi.org/10.1162/dint_a_00031  

Wilkinson, M D, et al. 2016 The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data , 3(1): 160018. DOI: https://doi.org/10.1038/sdata.2016.18  

Wilkinson, M D, et al. 2019 Evaluating FAIR maturity through a scalable, automated, community-governed framework. Scientific Data , 6(1): 174. DOI: https://doi.org/10.1038/s41597-019-0184-5  

World record reduction in photon emission

Recently, a team of chemists, mathematicians, physicists and nano-engineers at the University of Twente in the Netherlands developed a device that controls the emission of photons with unprecedented precision. This technology could lead to more efficient miniature light sources, sensitive sensors, and stable quantum bits for quantum computing.

The screen is the part of your smartphone that consumes the most energy. Reducing the unwanted energy that escapes from the screen would extend battery life; imagine a smartphone that only needs to be charged once a week. To achieve that kind of efficiency, however, photons must be emitted in a far more controlled manner.

MINT-toolbox

The researchers developed the ‘MINT-toolbox’: a set of tools from the scientific disciplines of Mathematics, Informatics, Natural Sciences and Technology. Among them were advanced chemical tools, the most important being polymer brushes: tiny chemical chains that hold the photon sources in place. First author Schulz explains: “The polymer brushes are grafted in solution from pore-surfaces inside a so-called photonic crystal made from silicon. Quite a tricky experiment! So we were very excited when we saw in separate X-ray imaging studies that the photon sources were sitting at the right positions on top of the brushes.”

World record

By adding nanophotonic tools, the team demonstrated that the spontaneous emission of excited light sources is inhibited by nearly a factor of fifty. In this situation, a light source remains excited nearly fifty times longer than usual. The measured spectrum matches the theoretical one very well, as calculated with advanced mathematical tools. Second author Kozoň: “The theory predicts zero light since it pertains to a fictitious infinitely extended crystal. In our real finite crystal, the emitted light is non-zero, but so small it’s a new world record!”
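As a back-of-the-envelope illustration (the factor of fifty is the only number taken from the article; the free-space lifetime below is an assumed order of magnitude, not a measured value), inhibiting the spontaneous-emission rate by a factor F stretches the excited-state lifetime by the same factor, since lifetime and decay rate are inverses:

```python
# Sketch under stated assumptions: spontaneous-emission rate Gamma and
# excited-state lifetime tau are inverses (tau = 1 / Gamma), so dividing the
# rate by an inhibition factor F stretches the lifetime F-fold.

def inhibited_lifetime(tau_free_ns: float, inhibition_factor: float) -> float:
    """Lifetime (ns) after the emission rate is divided by `inhibition_factor`."""
    gamma_free = 1.0 / tau_free_ns               # free-space decay rate, 1/ns
    gamma_inhibited = gamma_free / inhibition_factor
    return 1.0 / gamma_inhibited

# Assumed, illustrative free-space lifetime; lead-sulfide quantum dots emit
# slowly, on the order of a microsecond.
tau_free_ns = 1000.0
print(inhibited_lifetime(tau_free_ns, 50.0))     # roughly 50,000 ns, i.e. 50 us
```

The same arithmetic works in reverse: measuring how much longer an emitter stays excited is one way to quantify the inhibition factor.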

Efficient light sources

The new results promise a new era for efficient miniature lasers and light sources, and for qubits in photonic circuits with strongly reduced perturbations from elusive vacuum fluctuations. Willem Vos is ebullient: “Our multi-toolbox offers opportunities for completely new applications that profit from strongly stabilised excited states. These are central to photochemistry and could become sensitive chemical nanosensors.”

The research was done by Andreas Schulz, Marek Kozoň, Jurriaan Huskens, Julius Vancso and Willem Vos from the University of Twente. Andreas is a PhD student in the COPS, MNF, MTP and SPC chairs, Marek is a theoretician and mathematician who recently graduated from the COPS and MACS chairs (now with Pixel Photonics GmbH, a quantum detector company in Germany), Jurriaan is professor of MNF, Julius professor (emeritus) of MTP & SPC, and Willem professor of COPS.

The research was funded by NWO Echo CW contract 712.012.003, and by NWO-FOM (Marek), and by NWO-TTW Perspectief program “Freeform scattering optics (FFSO)” (P15-36).  

The paper entitled “Strongly inhibited spontaneous emission of PbS quantum dots covalently bound to 3D silicon photonic band gap crystals” appears in the Journal of Physical Chemistry C, published by the American Chemical Society (ACS). The paper is available online.

DOI:  10.1021/acs.jpcc.4c01541


Why the Pandemic Probably Started in a Lab, in 5 Key Points


By Alina Chan

Dr. Chan is a molecular biologist at the Broad Institute of M.I.T. and Harvard, and a co-author of “Viral: The Search for the Origin of Covid-19.”

This article has been updated to reflect news developments.

On Monday, Dr. Anthony Fauci returned to the halls of Congress and testified before the House subcommittee investigating the Covid-19 pandemic. He was questioned about several topics related to the government’s handling of Covid-19, including how the National Institute of Allergy and Infectious Diseases, which he directed until retiring in 2022, supported risky virus work at a Chinese institute whose research may have caused the pandemic.

For more than four years, reflexive partisan politics have derailed the search for the truth about a catastrophe that has touched us all. It has been estimated that at least 25 million people around the world have died because of Covid-19, with over a million of those deaths in the United States.

Although how the pandemic started has been hotly debated, a growing volume of evidence — gleaned from public records released under the Freedom of Information Act, digital sleuthing through online databases, scientific papers analyzing the virus and its spread, and leaks from within the U.S. government — suggests that the pandemic most likely occurred because a virus escaped from a research lab in Wuhan, China. If so, it would be the most costly accident in the history of science.

Here’s what we now know:

1 The SARS-like virus that caused the pandemic emerged in Wuhan, the city where the world’s foremost research lab for SARS-like viruses is located.

  • At the Wuhan Institute of Virology, a team of scientists had been hunting for SARS-like viruses for over a decade, led by Shi Zhengli.
  • Their research showed that the viruses most similar to SARS‑CoV‑2, the virus that caused the pandemic, circulate in bats that live roughly 1,000 miles away from Wuhan. Scientists from Dr. Shi’s team traveled repeatedly to Yunnan province to collect these viruses and had expanded their search to Southeast Asia. Bats in other parts of China have not been found to carry viruses that are as closely related to SARS-CoV-2.

[Map sequence: The closest known relatives to SARS-CoV-2 were found in southwestern China (a mine in Yunnan province) and in a cave in Laos. There are hundreds of large cities in China and Southeast Asia. The pandemic started roughly 1,000 miles away, in Wuhan, home to the world’s foremost SARS-like virus research lab. Sources: Sarah Temmam et al., Nature; SimpleMaps. Note: Cities shown have a population of at least 200,000.]

  • Even at hot spots where these viruses exist naturally near the cave bats of southwestern China and Southeast Asia, the scientists argued, as recently as 2019, that bat coronavirus spillover into humans is rare.
  • When the Covid-19 outbreak was detected, Dr. Shi initially wondered if the novel coronavirus had come from her laboratory, saying she had never expected such an outbreak to occur in Wuhan.
  • The SARS‑CoV‑2 virus is exceptionally contagious and can jump from species to species like wildfire. Yet it left no known trace of infection at its source or anywhere along what would have been a thousand-mile journey before emerging in Wuhan.

2 The year before the outbreak, the Wuhan institute, working with U.S. partners, had proposed creating viruses with SARS‑CoV‑2’s defining feature.

  • Dr. Shi’s group was fascinated by how coronaviruses jump from species to species. To find viruses, they took samples from bats and other animals, as well as from sick people living near animals carrying these viruses or associated with the wildlife trade. Much of this work was conducted in partnership with the EcoHealth Alliance, a U.S.-based scientific organization that, since 2002, has been awarded over $80 million in federal funding to research the risks of emerging infectious diseases.
  • The laboratory pursued risky research that resulted in viruses becoming more infectious: Coronaviruses were grown from samples from infected animals and genetically reconstructed and recombined to create new viruses unknown in nature. These new viruses were passed through cells from bats, pigs, primates and humans and were used to infect civets and humanized mice (mice modified with human genes). In essence, this process forced these viruses to adapt to new host species, and the viruses with mutations that allowed them to thrive emerged as victors.
  • By 2019, Dr. Shi’s group had published a database describing more than 22,000 collected wildlife samples. But external access was shut off in the fall of 2019, and the database was not shared with American collaborators even after the pandemic started, when such a rich virus collection would have been most useful in tracking the origin of SARS‑CoV‑2. It remains unclear whether the Wuhan institute possessed a precursor of the pandemic virus.
  • In 2021, The Intercept published a leaked 2018 grant proposal for a research project named Defuse, which had been written as a collaboration between EcoHealth, the Wuhan institute and Ralph Baric at the University of North Carolina, who had been on the cutting edge of coronavirus research for years. The proposal described plans to create viruses strikingly similar to SARS‑CoV‑2.
  • Coronaviruses bear their name because their surface is studded with protein spikes, like a spiky crown, which they use to enter animal cells. The Defuse project proposed to search for and create SARS-like viruses carrying spikes with a unique feature: a furin cleavage site — the same feature that enhances SARS‑CoV‑2’s infectiousness in humans, making it capable of causing a pandemic. Defuse was never funded by the United States. However, in his testimony on Monday, Dr. Fauci explained that the Wuhan institute would not need to rely on U.S. funding to pursue research independently.

[Graphic: The Wuhan lab ran risky experiments to learn about how SARS-like viruses might infect humans. 1. Collect SARS-like viruses from bats and other wild animals, as well as from people exposed to them. 2. Identify high-risk viruses by screening for spike proteins that facilitate infection of human cells. 3. Create new coronaviruses by inserting spike proteins or other features that could make the viruses more infectious in humans; in Defuse, the scientists proposed to add a furin cleavage site to the spike protein. 4. Infect human cells, civets and humanized mice with the new coronaviruses, to determine how dangerous they might be.]

  • While it’s possible that the furin cleavage site could have evolved naturally (as seen in some distantly related coronaviruses), out of the hundreds of SARS-like viruses cataloged by scientists, SARS‑CoV‑2 is the only one known to possess a furin cleavage site in its spike. And the genetic data suggest that the virus had only recently gained the furin cleavage site before it started the pandemic.
  • Ultimately, a never-before-seen SARS-like virus with a newly introduced furin cleavage site, matching the description in the Wuhan institute’s Defuse proposal, caused an outbreak in Wuhan less than two years after the proposal was drafted.
  • When the Wuhan scientists published their seminal paper about Covid-19 as the pandemic roared to life in 2020, they did not mention the virus’s furin cleavage site — a feature they should have been on the lookout for, according to their own grant proposal, and a feature quickly recognized by other scientists.
  • Worse still, as the pandemic raged, their American collaborators failed to publicly reveal the existence of the Defuse proposal. The president of EcoHealth, Peter Daszak, recently admitted to Congress that he doesn’t know about virus samples collected by the Wuhan institute after 2015 and never asked the lab’s scientists if they had started the work described in Defuse. In May, citing failures in EcoHealth’s monitoring of risky experiments conducted at the Wuhan lab, the Biden administration suspended all federal funding for the organization and Dr. Daszak, and initiated proceedings to bar them from receiving future grants. In his testimony on Monday, Dr. Fauci said that he supported the decision to suspend and bar EcoHealth.
  • Separately, Dr. Baric described the competitive dynamic between his research group and the institute when he told Congress that the Wuhan scientists would probably not have shared their most interesting newly discovered viruses with him. Documents and email correspondence between the institute and Dr. Baric are still being withheld from the public while their release is fiercely contested in litigation.
  • In the end, American partners very likely knew of only a fraction of the research done in Wuhan. According to U.S. intelligence sources, some of the institute’s virus research was classified or conducted with or on behalf of the Chinese military. In the congressional hearing on Monday, Dr. Fauci repeatedly acknowledged the lack of visibility into experiments conducted at the Wuhan institute, saying, “None of us can know everything that’s going on in China, or in Wuhan, or what have you. And that’s the reason why — I say today, and I’ve said at the T.I.,” referring to his transcribed interview with the subcommittee, “I keep an open mind as to what the origin is.”

3 The Wuhan lab pursued this type of work under low biosafety conditions that could not have contained an airborne virus as infectious as SARS‑CoV‑2.

  • Labs working with live viruses generally operate at one of four biosafety levels (known in ascending order of stringency as BSL-1, 2, 3 and 4) that describe the work practices that are considered sufficiently safe depending on the characteristics of each pathogen. The Wuhan institute’s scientists worked with SARS-like viruses under inappropriately low biosafety conditions .

[Illustration: In the United States, virologists generally use stricter Biosafety Level 3 protocols when working with SARS-like viruses. Biosafety cabinets prevent viral particles from escaping; personal respirators provide a second layer of defense against breathing in the virus; gloves prevent skin contact; and disposable wraparound gowns cover much of the rest of the body. The Wuhan lab had been regularly working with SARS-like viruses under Biosafety Level 2 conditions, which could not prevent a highly infectious virus like SARS-CoV-2 from escaping: some work is done in the open air, masks are not required, and less protective equipment provides more opportunities for contamination. Note: Biosafety levels are not internationally standardized, and some countries use more permissive protocols than others.]

  • In one experiment, Dr. Shi’s group genetically engineered an unexpectedly deadly SARS-like virus (not closely related to SARS‑CoV‑2) that exhibited a 10,000-fold increase in the quantity of virus in the lungs and brains of humanized mice. Wuhan institute scientists handled these live viruses at low biosafety levels, including BSL-2.
  • Even the much more stringent containment at BSL-3 cannot fully prevent SARS‑CoV‑2 from escaping. Two years into the pandemic, the virus infected a scientist in a BSL-3 laboratory in Taiwan, which was, at the time, a zero-Covid country. The scientist had been vaccinated and was tested only after losing the sense of smell. By then, more than 100 close contacts had been exposed. Human error is a source of exposure even at the highest biosafety levels, and the risks are much greater for scientists working with infectious pathogens at low biosafety.
  • An early draft of the Defuse proposal stated that the Wuhan lab would do their virus work at BSL-2 to make it “highly cost-effective.” Dr. Baric added a note to the draft highlighting the importance of using BSL-3 to contain SARS-like viruses that could infect human cells, writing that “U.S. researchers will likely freak out.” Years later, after SARS‑CoV‑2 had killed millions, Dr. Baric wrote to Dr. Daszak: “I have no doubt that they followed state determined rules and did the work under BSL-2. Yes China has the right to set their own policy. You believe this was appropriate containment if you want but don’t expect me to believe it. Moreover, don’t insult my intelligence by trying to feed me this load of BS.”
  • SARS‑CoV‑2 is a stealthy virus that transmits effectively through the air, causes a range of symptoms similar to those of other common respiratory diseases and can be spread by infected people before symptoms even appear. If the virus had escaped from a BSL-2 laboratory in 2019, the leak most likely would have gone undetected until too late.
  • One alarming detail — leaked to The Wall Street Journal and confirmed by current and former U.S. government officials — is that scientists on Dr. Shi’s team fell ill with Covid-like symptoms in the fall of 2019. One of the scientists had been named in the Defuse proposal as the person in charge of virus discovery work. The scientists denied having been sick.

4 The hypothesis that Covid-19 came from an animal at the Huanan Seafood Market in Wuhan is not supported by strong evidence.

  • In December 2019, Chinese investigators assumed the outbreak had started at a centrally located market frequented by thousands of visitors daily. This bias in their search for early cases meant that cases unlinked to or located far away from the market would very likely have been missed. To make things worse, the Chinese authorities blocked the reporting of early cases not linked to the market and, claiming biosafety precautions, ordered the destruction of patient samples on January 3, 2020, making it nearly impossible to see the complete picture of the earliest Covid-19 cases. Information about dozens of early cases from November and December 2019 remains inaccessible.
  • A pair of papers published in Science in 2022 made the best case for SARS‑CoV‑2 having emerged naturally from human-animal contact at the Wuhan market by focusing on a map of the early cases and asserting that the virus had jumped from animals into humans twice at the market in 2019. More recently, the two papers have been countered by other virologists and scientists who convincingly demonstrate that the available market evidence does not distinguish between a human superspreader event and a natural spillover at the market.
  • Furthermore, the existing genetic and early case data show that all known Covid-19 cases probably stem from a single introduction of SARS‑CoV‑2 into people, and the outbreak at the Wuhan market probably happened after the virus had already been circulating in humans.

[Chart: An analysis of SARS-CoV-2’s evolutionary tree shows how the virus evolved as it started to spread through humans, from the viruses closest to bat coronaviruses toward variants with more mutations. The viruses that infected people linked to the market were most likely not the earliest form of the virus that started the pandemic. Source: Lv et al., Virus Evolution (2024), as reproduced by Jesse Bloom.]

  • Not a single infected animal has ever been confirmed at the market or in its supply chain. Without good evidence that the pandemic started at the Huanan Seafood Market, the fact that the virus emerged in Wuhan points squarely at its unique SARS-like virus laboratory.

5 Key evidence that would be expected if the virus had emerged from the wildlife trade is still missing.

[Graphic: In previous outbreaks of coronaviruses, scientists were able to demonstrate natural origin by collecting multiple pieces of evidence linking infected humans to infected animals: infected animals found; earliest known cases exposed to live animals; antibody evidence of animals and animal traders having been infected; ancestral variants of the virus found in animals; and documented trade of host animals between the area where bats carry closely related viruses and the outbreak site. For SARS-CoV-2, these same key pieces of evidence are still missing, more than four years after the virus emerged.]

  • Despite the intense search trained on the animal trade and people linked to the market, investigators have not reported finding any animals infected with SARS‑CoV‑2 that had not been infected by humans. Yet, infected animal sources and other connective pieces of evidence were found for the earlier SARS and MERS outbreaks as quickly as within a few days, despite the less advanced viral forensic technologies of two decades ago.
  • Even though Wuhan is the home base of virus hunters with world-leading expertise in tracking novel SARS-like viruses, investigators have either failed to collect or report key evidence that would be expected if Covid-19 emerged from the wildlife trade. For example, investigators have not determined that the earliest known cases had exposure to intermediate host animals before falling ill. No antibody evidence shows that animal traders in Wuhan are regularly exposed to SARS-like viruses, as would be expected in such situations.
  • With today’s technology, scientists can detect how respiratory viruses — including SARS, MERS and the flu — circulate in animals while making repeated attempts to jump across species. Thankfully, these variants usually fail to transmit well after crossing over to a new species and tend to die off after a small number of infections. In contrast, virologists and other scientists agree that SARS‑CoV‑2 required little to no adaptation to spread rapidly in humans and other animals. The virus appears to have succeeded in causing a pandemic upon its only detected jump into humans.

The pandemic could have been caused by any of hundreds of virus species, at any of tens of thousands of wildlife markets, in any of thousands of cities, and in any year. But it was a SARS-like coronavirus with a unique furin cleavage site that emerged in Wuhan, less than two years after scientists, sometimes working under inadequate biosafety conditions, proposed collecting and creating viruses of that same design.

While several natural spillover scenarios remain plausible, and we still don’t know enough about the full extent of virus research conducted at the Wuhan institute by Dr. Shi’s team and other researchers, a laboratory accident is the most parsimonious explanation of how the pandemic began.

Given what we now know, investigators should follow their strongest leads and subpoena all exchanges between the Wuhan scientists and their international partners, including unpublished research proposals, manuscripts, data and commercial orders. In particular, exchanges from 2018 and 2019 — the critical two years before the emergence of Covid-19 — are very likely to be illuminating (and require no cooperation from the Chinese government to acquire), yet they remain beyond the public’s view more than four years after the pandemic began.

Whether the pandemic started on a lab bench or in a market stall, it is undeniable that U.S. federal funding helped to build an unprecedented collection of SARS-like viruses at the Wuhan institute, as well as contributing to research that enhanced them. Advocates and funders of the institute’s research, including Dr. Fauci, should cooperate with the investigation to help identify and close the loopholes that allowed such dangerous work to occur. The world must not continue to bear the intolerable risks of research with the potential to cause pandemics.

A successful investigation of the pandemic’s root cause would have the power to break a decades-long scientific impasse on pathogen research safety, determining how governments will spend billions of dollars to prevent future pandemics. A credible investigation would also deter future acts of negligence and deceit by demonstrating that it is indeed possible to be held accountable for causing a viral pandemic. Last but not least, people of all nations need to see their leaders — and especially, their scientists — heading the charge to find out what caused this world-shaking event. Restoring public trust in science and government leadership requires it.

A thorough investigation by the U.S. government could unearth more evidence while spurring whistleblowers to find their courage and seek their moment of opportunity. It would also show the world that U.S. leaders and scientists are not afraid of what the truth behind the pandemic may be.


Alina Chan ( @ayjchan ) is a molecular biologist at the Broad Institute of M.I.T. and Harvard, and a co-author of “ Viral : The Search for the Origin of Covid-19.” She was a member of the Pathogens Project , which the Bulletin of the Atomic Scientists organized to generate new thinking on responsible, high-risk pathogen research.


Impact of U.S. Labor Productivity Losses from Extreme Heat


Stephie Fried

Gregory Casey

Matthew Gibson


FRBSF Economic Letter 2024-14 | May 28, 2024

Extreme heat decreases labor productivity in sectors like construction, where much work occurs outdoors. Because construction is an important component of investment, lost productivity today will slow how much capital is built up for future use and thus can have long-lasting impacts on overall economic outcomes. Combining estimates of lost labor productivity due to extreme heat with a model of economic growth suggests that, by the year 2200, extreme heat will reduce the U.S. capital stock by 5.4% and annual consumption by 1.8%.

Extreme heat makes it more difficult to perform physical labor. In the United States, this is particularly relevant for agriculture, mining, and construction, where a substantial share of production takes place outdoors. Data from the Bureau of Economic Analysis show that, of these three sectors, construction contributes the most to economic output, which suggests that the impact from lost labor activity due to extreme heat will largely be driven by the effects on the construction sector.

The labor productivity losses in construction today could have long-lasting effects on the U.S. economy because construction is important for investment. Investment is the purchase of capital in the form of goods or services. Thus, if extreme heat lowers investment today, then it will slow the accumulation of capital for future use and have long-lasting impacts on economic outcomes.

In this Economic Letter, based on Casey, Fried, and Gibson (2024), we combine economic theory with findings from the climate science literature to project the future economic impacts of U.S. labor productivity losses from extreme heat. We find that future increases in days of extreme heat can be expected to reduce the amount of accumulated capital by approximately 5.4% in 2200 and reduce annual consumption by approximately 1.8%.

Extreme heat and worker productivity

When a person works on a physically intensive task, the body must release heat to maintain a safe internal temperature. If it is not possible to release enough heat, the person can suffer from heat stress. Scientists use wet bulb globe temperature (WBGT), which incorporates the ambient air temperature, humidity, wind speed, and solar irradiance, to determine when people are at risk of heat stress. Rising temperatures increase the risk of heat stress for workers in settings without climate control, such as those who work outdoors.

Worker safety organizations, such as the Occupational Safety and Health Administration, as well as the U.S. military, provide guidelines for how much effort individuals can safely exert under different climate conditions. Dunne, Stouffer, and John (2013) analyze these guidelines and find that they are consistent across organizations. For “heavy work” that is characteristic of construction and agriculture, the guidelines suggest that heat stress becomes a concern at a WBGT of 25 degrees Celsius (°C), equivalent to 77 degrees Fahrenheit (°F), and that it is not safe to do any work outdoors when WBGT is above 33°C (91°F).
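As a rough illustration, the two guideline thresholds above can be turned into a simple work-capacity schedule. The 25°C and 33°C cutoffs come directly from the text; the linear decline between them is an assumption made for illustration, not the exact guideline schedule or the response function estimated by Dunne et al. (2013).

```python
def heavy_work_capacity(wbgt_c: float) -> float:
    """Fraction of heavy outdoor work that can be done safely at a
    given wet bulb globe temperature (degrees Celsius).

    Thresholds follow the guidelines described in the text: heat
    stress becomes a concern at 25 C, and no outdoor work is safe
    above 33 C. The linear ramp between the two thresholds is an
    illustrative assumption.
    """
    if wbgt_c <= 25.0:
        return 1.0   # full capacity: below the heat-stress threshold
    if wbgt_c >= 33.0:
        return 0.0   # unsafe to perform any outdoor work
    return (33.0 - wbgt_c) / (33.0 - 25.0)  # assumed linear decline

# A 29 C WBGT day sits halfway between the two thresholds
print(heavy_work_capacity(29.0))  # 0.5
```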

Figure 1 projects the future vulnerability to heat stress for an outdoor worker in the United States, measured in days above certain WBGT thresholds. To construct the figure, we use projections of future weather conditions at the county level from Rasmussen, Meinshausen, and Kopp (2016). The projections are based on a scenario that assumes no large-scale efforts to limit carbon emissions. To aggregate these projections to the national level, we take a weighted average across counties, where the weights are fixed over time and determined by the current level of outdoor employment in each county.
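The aggregation step can be sketched in a few lines. The county names, day counts, and employment figures below are entirely hypothetical; only the method, a weighted average with fixed outdoor-employment weights, follows the description above.

```python
# Hypothetical county-level inputs: projected days per year above a
# WBGT threshold, and current outdoor employment used as fixed weights.
county_days = {"A": 120, "B": 60, "C": 10}
outdoor_employment = {"A": 50_000, "B": 30_000, "C": 20_000}

def national_exposure(days: dict, weights: dict) -> float:
    """Employment-weighted national average of county exposure days."""
    total = sum(weights.values())
    return sum(days[c] * weights[c] / total for c in days)

print(national_exposure(county_days, outdoor_employment))  # 80.0
```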

Figure 1 Projected number of days above WBGT thresholds


The results suggest that future changes in climate will increase exposure to extreme heat for outdoor workers in the United States. The number of days above 25°C for these workers rises substantially between 2020 and 2100, from 22 days to 80 days per year. The number of days above 33°C increases from near zero to almost seven.

Why construction?

To understand how labor productivity losses from extreme heat could affect the economy, Figure 2 divides U.S. economic output from 1950 to 2019 into five sectors: services, manufacturing, construction, mining, and agriculture.

Figure 2 Composition of U.S. economic output


Services (light blue line) and manufacturing (yellow line) play the largest role in the U.S. economy, but they are unlikely to be highly affected by heat. This is because work in these sectors is largely performed indoors and U.S. businesses generally have access to air conditioning (Nath 2022). On the other hand, agriculture, construction, and mining are more likely to entail outdoor work. Among these outdoor sectors, construction makes up the largest share of overall U.S. output. The construction share (dark blue line) has been relatively constant over time, equal to approximately 4%. In contrast, the share of agriculture (green line) has fallen over time and equaled less than 0.2% of output in 2019. The share of the mining sector (red line) has been consistently less than 1%. Projecting these trends into the future, we expect that construction is likely to determine the overall vulnerability of U.S. production to extreme heat.

These results do not imply that the impact of extreme heat on U.S. agriculture and mining is unimportant. Extreme heat’s impact on U.S. agricultural productivity could affect food prices around the world, which could have a disproportionate effect on low-income individuals in the United States and in developing countries. Moreover, these impacts could have negative consequences for U.S. workers in agriculture and their local communities. Relative to the larger share of construction, however, agriculture and mining are not as likely to drive national outcomes.

Consumption versus investment

Economic output can be used for consumption or for investment. Consumption refers to households’ purchases of goods and services, such as food or haircuts, that increase well-being today. Investment refers to purchases of goods and services that are used to produce output in the future, and thus increase well-being in the future. This includes spending by businesses on things like factories and software, as well as the purchase of homes by households. The distinction between consumption and investment matters because a decrease in consumption reduces well-being today but has no impact on future economic outcomes. In contrast, a decrease in investment has no impact on well-being today, but it reduces the accumulation of capital, making it harder to produce both consumption and investment goods and services in the future.

Figure 3 shows the contribution of the five sectors from Figure 2 to consumption and investment. The construction sector is an important component of U.S. investment, accounting for over 20% of investment value-added. Thus, a decrease in construction productivity from extreme heat would reduce investment and thereby have a long-lasting impact on the economy.

Figure 3 Composition of U.S. consumption and investment


The future consequences of increases in extreme heat

To determine the impact of labor productivity losses from extreme heat, we build and simulate an economic model designed to study the impact of sectoral productivity on macroeconomic outcomes. Dunne et al. (2013) provide estimates of how WBGT affects labor productivity in outdoor work. We combine these estimates with the future paths of WBGT shown in Figure 1 to project future changes in productivity in the outdoor sectors. We then feed these reductions in outdoor productivity into our model.

Figure 4 shows the impact of extreme heat on the capital stock in our model. The capital stock is the value of accumulated investment, an important determinant of an economy’s ability to produce output. We compare the size of the capital stock under the scenario depicted in Figure 1 to the size of the capital stock when there is no change in extreme heat exposure after 2019. We find that future increases in extreme heat would lower the capital stock by about 1.4% in 2100 and by 5.4% in 2200. The lower capital stock reduces the economy’s ability to produce output, which in turn reduces consumption. Thus, we find that extreme heat reduces annual consumption by 0.5% in 2100 and 1.8% in 2200.

Figure 4 Impact of extreme heat on capital accumulation


The WBGT paths in Figure 1 and our results in Figure 4 correspond to the most likely climate outcome given a particular path of carbon emissions. However, there is considerable uncertainty over these climate outcomes. As a result, some economists argue that it is important to consider the consequences of other less likely but still possible outcomes (Weitzman 2009). To do so, we simulated the economic effects of an alternative outcome with only a 5% likelihood that retains our given path of carbon emissions but has a larger increase in the number of extreme heat days. For example, in that outcome, the number of days with WBGT greater than 25°C increases from 22 in 2020 to 125 in 2100, as opposed to from 22 to 80 as assumed in our main analysis. That outcome would lead to considerably larger consequences from extreme heat, reducing capital accumulation by 18% in 2200 and consumption by 7%.

Some caveats are in order when interpreting the magnitudes from our analysis. We abstract from some ways that companies could adapt to extreme heat, such as relocating production to cooler parts of the United States or shifting work hours to cooler parts of the day. Additionally, while our focus is on the overall consequences of extreme heat on U.S. labor productivity, the effects could vary across income groups and regions of the country. One could also consider the effects of extreme heat in other countries. For example, the impacts are likely to be larger in developing countries, where agriculture is a bigger fraction of output and where work in manufacturing and services is less likely to take place in climate-controlled environments. Finally, the increases in extreme heat days that we study could be paired with decreases in extreme cold days, which could in turn have different implications for labor productivity.

This Letter studies the impact of extreme heat on long-run economic outcomes in the United States. Extreme heat is most likely to affect economic outcomes through the construction sector for two reasons. First, construction makes up a larger share of economic output than other vulnerable sectors, like agriculture. Second, decreases in construction productivity slow capital accumulation and therefore have long-lasting effects on macroeconomic outcomes. Our findings suggest that, under a scenario with no large-scale efforts to reduce carbon emissions, future increases in extreme heat would reduce the capital stock by 5.4% and annual consumption by 1.8% by the year 2200.

Casey, Gregory, Stephie Fried, and Matthew Gibson. 2022. “ Understanding Climate Damages: Consumption versus Investment .” FRB San Francisco Working Paper 2022-21.

Dunne, John P., Ronald J. Stouffer, and Jasmin G. John. 2013. “Reductions in Labour Capacity from Heat Stress under Climate Warming.” Nature Climate Change 3(6), pp. 563–566.

Nath, Ishan B. 2022. “ Climate Change, the Food Problem, and the Challenge of Adaptation through Sectoral Reallocation .” National Bureau of Economic Research Working Paper 27297.

Rasmussen, D.J., Malte Meinshausen, and Robert E. Kopp. 2016. “Probability-Weighted Ensembles of U.S. County-Level Climate Projections for Climate Risk Analysis.” Journal of Applied Meteorology and Climatology 55(10), pp. 2301–2322.

Weitzman, Martin L. 2009. “On Modeling and Interpreting the Economics of Catastrophic Climate Change.” Review of Economics and Statistics 91(1), pp. 1–19.

Opinions expressed in FRBSF Economic Letter do not necessarily reflect the views of the management of the Federal Reserve Bank of San Francisco or of the Board of Governors of the Federal Reserve System. This publication is edited by Anita Todd and Karen Barnes. Permission to reprint portions of articles or whole articles must be obtained in writing. Please send editorial comments and requests for reprint permission to [email protected]
