Research methods--quantitative, qualitative, and more: overview.

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods: how to choose and use them, and the support and resources available at UC Berkeley.

As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the 'how' for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge... Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data used to study it, by the technology available, and more. This guide is an introduction; if you don't see what you need here, contact your subject librarian and/or check whether there's a library research guide that will answer your question.

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. An online guide to this one-stop collection is available, and some helpful links are below:

  • SAGE Research Methods
  • Little Green Books (Quantitative Methods)
  • Little Blue Books (Qualitative Methods)
  • Dictionaries and Encyclopedias
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace--a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! Check out its pages on each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab: Supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad: A simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF): Provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management: A UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data, and Resources for Students
  • Wiley StatsRef for background information on statistics topics
  • Survey Documentation and Analysis (SDA), a program for easy web-based analysis of survey data

Consultants

  • D-Lab/Data Science Discovery Consultants: Request help with your research project from peer consultants.
  • Research Data Management (RDM) consulting: Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services: Advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB/CPHS: Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships): OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects: Sponsored Projects works with researchers applying for major external grants.
  • Last Updated: Apr 25, 2024 11:09 AM
  • URL: https://guides.lib.berkeley.edu/researchmethods

What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes. Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Other interesting articles
  • Frequently asked questions about research design

Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

Step 1: Consider your aims and approach

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative approach: used to explore ideas, experiences, and meanings in depth. Quantitative approach: used to measure and describe frequencies, averages, and correlations, and to test relationships between variables.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.
Type of design | Purpose and characteristics
Experimental | Tests cause-and-effect relationships by manipulating variables under controlled conditions.
Quasi-experimental | Tests cause-and-effect relationships without full random assignment.
Correlational | Measures variables and describes the relationships between them.
Descriptive | Measures variables and describes their characteristics, frequencies, and trends.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

Type of design | Purpose and characteristics
Grounded theory | Builds theory inductively from systematically collected and analyzed data.
Phenomenology | Investigates how participants experience a particular phenomenon.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you'll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalize your results to the population as a whole.

Probability sampling | Every member of the population has a chance of being selected, via random selection.
Non-probability sampling | Individuals are selected based on non-random criteria, such as convenience or availability.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
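As a rough illustration, the difference between the two approaches can be sketched in Python. The population of student ID numbers here is invented purely for demonstration:

```python
import random

# Hypothetical population: 1,000 student ID numbers (invented for illustration)
population = list(range(1, 1001))

random.seed(42)  # fixed seed so the example is reproducible

# Probability sampling (simple random sample): every member has an equal
# chance of selection, so results can be generalized statistically
probability_sample = random.sample(population, k=100)

# Non-probability (convenience) sampling: take whoever is easiest to reach,
# e.g. the first 100 IDs -- cheaper, but prone to selection bias
convenience_sample = population[:100]

print(len(probability_sample), len(convenience_sample))  # 100 100
```

The convenience sample systematically excludes everyone outside the first hundred IDs, which is exactly the kind of bias the surrounding text warns about.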

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study , your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question .

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews .

Questionnaires | Lists of written questions that respondents answer on paper or online.
Interviews | Questions asked and answered orally, one-on-one or in a focus group.

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.


Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field | Examples of data collection methods
Media & communication | Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology | Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education | Using tests or assignments to collect data on knowledge and skills
Physical sciences | Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.


Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you'll use these methods to collect data that's consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.


For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method , you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method , it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method , how will you avoid research bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability ).

Step 6: Decide on your data analysis strategies

On its own, raw data can't answer your research question. The last step of designing your research is planning how you'll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)
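These three summaries can be computed with nothing more than the Python standard library. The test scores below are invented for illustration:

```python
from collections import Counter
from statistics import mean, stdev

scores = [72, 85, 85, 90, 64, 78, 85, 90, 72, 88]  # invented test scores

distribution = Counter(scores)   # frequency of each score
average = mean(scores)           # central tendency: the mean score
spread = stdev(scores)           # variability: sample standard deviation

print(distribution[85])  # 3 -- a score of 85 occurs three times
print(average)           # 80.9
print(round(spread, 1))  # 8.9
```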

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
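As a small sketch of what a correlation test measures, Pearson's r (the core statistic behind many correlation tests) can be computed by hand. The paired data below, hours studied versus exam score, are invented for illustration:

```python
from statistics import mean

# Invented paired data: hours studied vs. exam score (illustration only)
hours  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 60, 68, 70, 75, 80]

def pearson_r(x, y):
    """Pearson correlation: strength and direction of a linear association."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(hours, scores)
print(round(r, 2))  # close to +1: a strong positive association
```

A value near +1 or -1 indicates a strong linear relationship; a value near 0 indicates little or no linear association.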

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

Approach | Characteristics
Thematic analysis | Identifies and interprets patterns of meaning (themes) across the data.
Discourse analysis | Examines how language is used in communication and in its social context.

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.
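A minimal sketch of this idea in Python. The scale items and scoring rule here are hypothetical, not an established instrument:

```python
def social_anxiety_score(item_ratings):
    """Composite score for a hypothetical self-report scale.

    Each item is rated 1-5 (higher = more anxious); the composite is the
    mean rating, turning an abstract concept into a single measurable number.
    """
    if not all(1 <= r <= 5 for r in item_ratings):
        raise ValueError("each rating must be between 1 and 5")
    return sum(item_ratings) / len(item_ratings)

# Four hypothetical questionnaire items for one respondent
print(social_anxiety_score([4, 3, 5, 4]))  # 4.0
```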

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, November 20). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved July 2, 2024, from https://www.scribbr.com/methodology/research-design/



Research Methods | Definition, Types, Examples

Research methods are specific procedures for collecting and analysing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs quantitative : Will your data take the form of words or numbers?
  • Primary vs secondary : Will you collect original data yourself, or will you use data that have already been collected by someone else?
  • Descriptive vs experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyse the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analysing data
  • Examples of data analysis methods
  • Frequently asked questions about methodology

Data are the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .


You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs secondary data

Primary data are any original information that you collect for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary data are information that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesise existing knowledge, analyse historical trends, or identify patterns on a large scale, secondary data might be a better choice.


Descriptive vs experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.



Research methods for collecting data

Research method | Primary or secondary? | Qualitative or quantitative? | When to use
Experiment | Primary | Quantitative | To test cause-and-effect relationships.
Survey | Primary | Quantitative | To understand general characteristics of a population.
Interview/focus group | Primary | Qualitative | To gain more in-depth understanding of a topic.
Observation | Primary | Either | To understand how something occurs in its natural setting.
Literature review | Secondary | Either | To situate your research in an existing body of work, or to evaluate trends within a research topic.
Case study | Either | Either | To gain an in-depth understanding of a specific group or context, or when you don't have the resources for a large study.

Your data analysis methods will depend on the type of data you collect and how you prepare them for analysis.

Data can often be analysed both quantitatively and qualitatively. For example, survey responses could be analysed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
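For instance, open-ended survey responses that have been coded into categories (invented here) can be tallied for a quantitative view of the same material:

```python
from collections import Counter

# Invented survey responses, already coded into categories by the researcher
responses = ["positive", "negative", "positive", "neutral",
             "positive", "negative", "positive", "neutral"]

# Quantitative reading: frequency and share of each response category
freqs = Counter(responses)
shares = {cat: n / len(responses) for cat, n in freqs.items()}

print(freqs["positive"])   # 4
print(shares["positive"])  # 0.5
```

A qualitative reading of the same responses would instead examine what respondents meant by each answer, rather than how often each category occurs.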

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that were collected:

  • From open-ended survey and interview questions, literature reviews, case studies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that were collected either:

  • During an experiment.
  • Using probability sampling methods .

Because the data are collected and analysed in a statistically valid way, the results of quantitative analysis can be easily standardised and shared among researchers.

Research methods for analysing data

Research method | Qualitative or quantitative? | When to use
Statistical analysis | Quantitative | To analyse data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
Meta-analysis | Quantitative | To statistically analyse the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
Thematic analysis | Qualitative | To analyse data collected from interviews, focus groups, or textual sources; to understand general themes in the data and how they are communicated.
Content analysis | Either | To analyse large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Choosing the Right Research Methodology: A Guide for Researchers


Choosing an optimal research methodology is crucial for the success of any research project. The methodology you select will determine the type of data you collect, how you collect it, and how you analyse it. Understanding the different types of research methods available, along with their strengths and weaknesses, is thus imperative for making an informed decision.

Understanding different research methods:

There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey-based. Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research, and action research. Each method can be chosen and adapted depending on the research hypotheses and objectives.

Qualitative vs quantitative research:

When deciding on a research methodology, one of the key factors to consider is whether your research will be qualitative or quantitative. Qualitative research is used to understand people’s experiences, concepts, thoughts, or behaviours. Quantitative research, by contrast, deals with numbers, graphs, and charts, and is used to test or confirm hypotheses, assumptions, and theories.

Qualitative research methodology:

Qualitative research is often used to examine issues that are not well understood, and to gather additional insights on these topics. Qualitative research methods include open-ended survey questions, observations of behaviours described through words, and reviews of literature that has explored similar theories and ideas. These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis. 

Quantitative research methodology:

The goal of quantitative research is to test hypotheses, confirm assumptions and theories, and determine cause-and-effect relationships. Quantitative research methods include experiments, close-ended survey questions, and countable and numbered observations. Data analysis for quantitative research relies heavily on statistical methods.

Analysing qualitative vs quantitative data:

The methods used for data analysis also differ for qualitative and quantitative research. As mentioned earlier, quantitative data is generally analysed using statistical methods and does not leave much room for speculation. It is more structured and follows a predetermined plan. In quantitative research, the researcher starts with a hypothesis and uses statistical methods to test it. By contrast, qualitative data analysis identifies patterns and themes within the data rather than providing statistical measures of it. It is an iterative process, in which the researcher goes back and forth, gauging the larger implications of the data from different perspectives and revising the analysis as required.
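As a minimal sketch of the statistical side of this contrast, a quantitative comparison of two groups can be reduced to a single test statistic. The scores below are hypothetical, and Welch's two-sample t statistic is used purely as an example of a predetermined analysis plan:

```python
import math
import statistics

# Hypothetical test scores from two groups (illustrative data only)
group_a = [72, 85, 78, 90, 88, 76, 81, 84]
group_b = [65, 70, 74, 68, 72, 66, 71, 69]

def welch_t(a, b):
    """Welch's two-sample t statistic: difference in means scaled by the standard error."""
    m1, m2 = statistics.mean(a), statistics.mean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)  # sample variances
    return (m1 - m2) / math.sqrt(v1 / len(a) + v2 / len(b))

t = welch_t(group_a, group_b)
print(round(t, 2))  # a large |t| suggests the group means genuinely differ
```

The statistic would then be compared against a t distribution to obtain a p-value; the point of the sketch is only that the quantitative workflow is fixed in advance, with no room for reinterpreting the data mid-analysis.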

When to use qualitative vs quantitative research:

The choice between qualitative and quantitative research will depend on the gap that the research project aims to address, and specific objectives of the study. If the goal is to establish facts about a subject or topic, quantitative research is an appropriate choice. However, if the goal is to understand people’s experiences or perspectives, qualitative research may be more suitable. 

Conclusion:

In conclusion, an understanding of the different research methods available, their applicability, advantages, and disadvantages is essential for making an informed decision on the best methodology for your project. If you need any additional guidance on which research methodology to opt for, you can head over to Elsevier Author Services (EAS). EAS experts will guide you throughout the process and help you choose the perfect methodology for your research goals.


Library Research Guide for Folklore and Mythology


What is Research?


Research Theories and Methods


Research  is the systematic investigation of a subject, topic, or question. 

Data  is the information gathered during research.

Fieldwork  is the collection of data in its natural environment.

A white paper is a report or guide that synthesizes a complex topic or question and the state of information and ideas about it.

Scholarship  is, broadly, the activity of a scholar. More specifically though, the term refers to the writings of scholars which result from their research. The scholarship of a field or discipline is the books, articles, etc. which have been written on the field or discipline, or on a specific subject, topic, or question in the field or discipline.

What is a theory?

A  theory  is the conceptual basis of a subject or area of study. It is the ideas which underlie how something is understood and the framework within which it is studied.  

What is a method?

A  method  is the process or tool used to collect data.

There are three method types: qualitative, quantitative, and historical. Some research also uses mixed methods.

Qualitative research  is interested in the specific. It studies things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them, endeavoring to understand human behavior from the perspective of the individual.

Qualitative methods  collect data through observation. Qualitative methods include text analysis, interviews, focus groups, observation, record keeping, ethnographic research, and case study research.

Qualitative data is descriptive. Qualitative data cannot be precisely measured and is, rather, analyzed for patterns and themes using coding. Qualitative data includes narratives, recordings, photographs, oral histories, etc.
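A first pass at the coding described above often amounts to tallying which themes recur across excerpts. The participant labels and theme codes below are hypothetical, and `collections.Counter` is just a convenient stdlib way to sketch the tally:

```python
from collections import Counter

# Hypothetical codes assigned to interview excerpts during qualitative coding:
# (participant, theme) pairs produced by a researcher reading the transcripts
coded_excerpts = [
    ("P1", "belonging"), ("P1", "workload"), ("P2", "belonging"),
    ("P2", "mentorship"), ("P3", "workload"), ("P3", "belonging"),
]

# Count how often each theme was applied, ordered by frequency
theme_counts = Counter(code for _, code in coded_excerpts)
print(theme_counts.most_common())
# → [('belonging', 3), ('workload', 2), ('mentorship', 1)]
```

The frequencies are only a starting point; the interpretive work of qualitative analysis lies in what the recurring themes mean, not in the counts themselves.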

Quantitative research  is interested in the general. It studies general laws of behavior and phenomena across different settings and contexts. This type of research endeavors to form conclusions about social phenomena, collecting data to test a theory and ultimately support or reject it.

Quantitative methods  collect data through measuring. Quantitative methods include experiments, surveys, questionnaires, statistical modeling, social networks, and demography.

Quantitative data  is numerical and statistical. It is data that can either be counted or compared on a numeric scale. Quantitative data includes statistical information. 

Historical research  is interested in the past. It reviews and interprets existing data to describe, explain, and understand past actions or events.

Historical methods  collect and analyze existing data. Historical methods include text analysis, cultural analysis, visual analysis, and archival research.

Historical data  is data which was created in the past. Historical data includes scholarship, records, artifacts.  

A methodology  is the rationale for the research approach and the methods used. It is based upon the theories underlying the field or discipline of the research.

Library of Congress YouTube Feed: Folklore

The American Folklife Center at the Library of Congress produces videos about the practice of folklore, featuring interviews with a variety of folklorists about their careers, methods, fieldwork experiences, and the implications and applications of their work.


Research Design: Qualitative, Quantitative, and Mixed Methods Approaches

John W. Creswell 2014, fourth edition


Research Design: Quantitative, Qualitative, Mixed Methods, Arts-Based, and Community-Based Participatory Research Approaches

Patricia Leavy 2017


Literary Studies

Literary Studies, also called Literary Criticism, is the study of the written works of cultures, societies, groups, and individuals. Literary Studies examines the place of literature in society, and explores how we conceptualize and describe the world and ourselves.  

Literary Theories

There are a number of different theories about literature, why and how it is created. These theories influence how a work of literature is analyzed, interpreted, and understood. Literary Studies most often uses the method of textual analysis.


Linguistic Studies

Linguistics is the study of languages and their structures. Linguistic Studies examines how language is created and constructed, how it functions and is learned, and how we conceptualize and structure our world through our words.   

Language Theories

There are different theories about the creation and purpose of language. Some theories state that language is the result of the nature of society, while others emphasize the role of humans in constructing meaning. Linguistic Studies use methods such as textual analysis, ethnographic research, statistical modeling.


History Studies

History is the study of events, and their related ideas, individuals, and objects. History Studies examines how moments in time are connected, and how we make sense of things that happen.

Historiography  is the study of how historians have interpreted and written about historical events, in essence, how they perceive history itself. Traditionally, a historiography was a name for a history, literally a specific "writing of history".  

History Theories

There are many different theories about if and how events are related to one another, and these theories have influenced how history has been written about over the centuries. History Studies use methods such as textual analysis and archival research.

A related theory to history theories is Memory Theory , which considers how collective and individual memory is created and preserved. Memory Studies examines the ways in which events are recorded and remembered, or, alternatively, forgotten, and how we choose to create and remember (or forget) our past.


Anthropological Studies

Anthropology is the study of human societies, their behaviors and cultures. Anthropological Studies examine how societies are formed and function, and the many aspects which form our identities.

Social Anthropology  examines human behavior. Sometimes this sub-field is combined with Cultural Anthropology as Sociocultural Anthropology.

Cultural Anthropology  examines the cultures, or various beliefs and practices, of societies. Sometimes this sub-field is combined with Social Anthropology as Sociocultural Anthropology.

Physical Anthropology , also called Biological Anthropology, examines the biology of humans and how they interact with their environment.

Linguistic Anthropology  examines the place of language in shaping social life.

Archaeology  examines the material culture, or the objects, of humans. It is considered a sub-field of Anthropology in the United States, and a sub-field of History in other parts of the world.  

Ethnography is the study of a specific society using the methods of observation and immersion, or talking and living with individuals in order to understand them.   

Anthropological Theories

There is a long tradition of theories about how societies organize themselves and how they function. These theories determine how cultural beliefs and practices are understood, in essence, how we understand ourselves and others. Anthropology Studies use methods such as interviews, focus groups, observation, ethnographic research, and record keeping, as well as textual analysis and archival research.


Sociological Studies

Sociology is the study of societies, their behaviors, relationships, and interactions. It examines social order and social changes, trying to understand how and why we organize ourselves and relate to one another.

Historical Sociology   is the study of the behaviors and organization of societies of the past.   

Sociological Theories

There are different theories about how societies are structured and why they act the way they do. Sociological Studies often use the methods of surveys, experiments, ethnographic research, and textual analysis.

Sociological theories are theories about how the mechanics of societies function, whereas  Social Theory  encompasses more broadly theories which explain how societies think and act.


Geography Studies

Geography is the study of land, inhabitants, and natural phenomena. It examines the relationship between humans and their environment, and helps us to understand our relationship with the world. 

Human Geography  examines humans and their communities, and their relationships with place, space, and environment.

Physical Geography  examines the processes and patterns of environments, such as their atmosphere, hydrosphere, biosphere, and geosphere.

Cartography  is both the study of and the science and art of map-making. It reveals how we view and conceptualize the world and our relationship to it and to others.   

Geography Theories

There are a number of theories as to the relationship between humans and their environments, many of which are shared with the fields of Anthropology and Sociology. Geography Studies use a variety of research methods, including interviews, surveys, observation, and GIS or spatial analysis.


Cultural Studies is the study and analysis of culture. It is a cross-disciplinary field which examines the various aspects of a society, in order to understand how we form our identities. 

Culture  is the ideas, behaviors, customs, and objects of a region, society, group, or individual. 

Material culture  comprises the physical objects of a culture, such as tools, domestic objects, religious objects, and works of art.

Cultural Theories

Cultural theories draw upon theories in a variety of fields, including literary theories, semiotics, history theories, anthropological theories, social theories, museum studies, art history, and media studies. Cultural theories influence how we analyze and interpret the culture of societies. Cultural Studies tends to use methods such as interviews, observation, ethnographic research, record keeping, archival research, textual analysis, visual analysis.


Folklore Studies, also known as Folkloristics, is the study of the expressions of culture, particularly the practices and products of a society. Folklore Studies examines the things we make to understand how they make us.

Folklore  has been traditionally considered, narrowly, as the oral tales of a society. More broadly, the term refers to all aspects of a culture – beliefs, traditions, norms, behaviors, language, literature, jokes, music, art, foodways, tools, objects, etc.  

Folklore Theories

A number of theories have emerged over the years about how societies create themselves, and these theories influence how we view and understand the things which societies create. Folklore Studies use methods such as interviews, focus groups, observation, ethnographic research, and record keeping, as well as textual analysis, visual analysis and archival research.


Arts Studies

The arts are a range of disciplines which study, create, and engage with human expression. The arts include:

  • Architecture -- Design
  • Visual Arts -- Drawing, Painting, Illustration, Sculpting, Ceramics, Photography, Film
  • Literary Arts -- Fiction, Drama, Poetry, Creative Writing, Storytelling
  • Performance Arts -- Music, Dance, Theatre
  • Textile Arts -- Fashion
  • Craft -- Weaving, Woodwork, Paperwork, Glasswork, Jewelry-making
  • Culinary Arts -- Cooking, Baking, Chocolate-making, Brewing, Wine-making
  • Art History and Criticism

The arts are a collection of areas of studies which combine technical skills and creativity to produce objects which convey human experience.

Architecture  is the study and design of structures. It examines both the utilitarian and the sociological aspects of space, and the relationship between constructed space and humans. 

Art History  is the study and analysis of visual arts. 

Musicology  is the study and analysis of music.

Performance   is the study and practice of art in time and space.

Film & Media Studies  is the study of art which employs technologies.   

Art Theories

There are as many theories about the arts as there are areas of arts. These theories affect how we understand the identity and the agency of the artist, the meaning of the art, and the relationship between the art and society. Arts fields often employ textual and visual analysis research methods, as well as observation and experimentation. 


Folklorists study people's lives, and are thus responsible for preserving and protecting culture. As professionals and researchers, they have a responsibility to the field to uphold standards of behavior and work. Finally, folklorists interact with individuals and are responsible for upholding human rights. Though there is little direct legislation governing folklore studies, there are numerous laws concerning human rights and information, as well as professional standards in the field of cultural heritage preservation.

Legislation

The codes of ethics and standards which govern folklore studies have been developed over time from a number of authorities.  

1948    United Nations, Universal Declaration of Human Rights

1948    American Anthropological Association, Resolution on Freedom of Publication

1971    American Anthropological Association, Principles of Professional Responsibility Statement of Ethics

1976    American Folklife Preservation Act (P.L. 94-201)

American Folklife Center established at the Library of Congress and given duty to preserve American folklife

1985    UNESCO, Protection of Expressions of Folklore Against Illicit Exploitation and Other Prejudicial Actions

1988    American Folklore Society, Statement of Ethics

1988    National Association for the Practice of Anthropology, Ethical Guidelines for Practitioners

1989    UNESCO, Recommendation on the Safeguarding of Traditional Culture and Folklore

1998    American Anthropological Association, Code of Ethics

2003    UNESCO, Convention for the Safeguarding of the Intangible Cultural Heritage


Handbook of Research Ethics and Scientific Integrity

Ron Iphofen, editor 2020


The Ethics of Research with Human Subjects

David B. Resnik 2018


The Ethics of Cultural Heritage

Tracy Ireland & John Schofield 2014


Critical Ethnography

D. Soyini Madison 2005


Ethics in Ethnography

Margaret D. LeCompte & Jean J. Schensul 2015


The Ethics of Social Research

Joan E. Sieber, editor 1982


Research Ethics for Human Geography

Helen F. Wilson & Jonathan Darling, editors 2021


The Ethics of Cultural Studies

Joanna Zylinska 2005


Museum Collection Ethics

Steven Miller 2020


Theorizing Folklore from the Margins

Solimar Otero & Mintzi Auanda Martínez-Rivera, editors 2021


Except where otherwise noted, this work is subject to a Creative Commons Attribution 4.0 International License , which allows anyone to share and adapt our material as long as proper attribution is given. For details and exceptions, see the Harvard Library Copyright Policy ©2021 Presidents and Fellows of Harvard College.

15 Types of Research Methods

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Research methods refer to the strategies, tools, and techniques used to gather and analyze data in a structured way in order to answer a research question or investigate a hypothesis (Hammond & Wellington, 2020).

Generally, we place research methods into two categories: quantitative and qualitative. Each has its own strengths and weaknesses, which we can summarize as:

  • Quantitative research can achieve generalizability through scrupulous statistical analysis applied to large sample sizes.
  • Qualitative research achieves deep, detailed, and nuanced accounts of specific case studies, which are not generalizable.

Some researchers, with the aim of making the most of both quantitative and qualitative research, employ mixed methods, applying both types of research method in a single study, such as by conducting a statistical survey alongside in-depth interviews to add context to the quantitative findings.

Below, I’ll outline 15 common research methods, and include pros, cons, and examples of each .

Types of Research Methods

Research methods can be broadly categorized into two types: quantitative and qualitative.

  • Quantitative methods involve systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques, providing measurable evidence about a specific concept or phenomenon (Schweigert, 2021). The strengths of this approach include its ability to produce reliable results that can be generalized to a larger population, although it can lack depth and detail.
  • Qualitative methods encompass techniques that are designed to provide a deep understanding of a complex issue, often in a specific context, through collection of non-numerical data (Tracy, 2019). This approach often provides rich, detailed insights but can be time-consuming and its findings may not be generalizable.

These can be further broken down into a range of specific research methods and designs:

Primarily Quantitative Methods:
  • Experimental Research
  • Surveys and Questionnaires
  • Longitudinal Studies
  • Cross-Sectional Studies
  • Correlational Research
  • Causal-Comparative Research
  • Meta-Analysis
  • Quasi-Experimental Design

Primarily Qualitative Methods:
  • Case Study
  • Ethnography
  • Phenomenology
  • Historical Research
  • Content Analysis
  • Grounded Theory
  • Action Research
  • Observational Research

Combining the two approaches, mixed methods research blends elements of both qualitative and quantitative research methods, providing a more comprehensive understanding of the research problem. We can further break these down into:

  • Sequential Explanatory Design (QUAN→QUAL): This methodology involves conducting quantitative analysis first, then supplementing it with a qualitative study.
  • Sequential Exploratory Design (QUAL→QUAN): This methodology goes in the other direction, starting with qualitative analysis and ending with quantitative analysis.

Let’s explore some methods and designs from both quantitative and qualitative traditions, starting with qualitative research methods.

Qualitative Research Methods

Qualitative research methods allow for the exploration of phenomena in their natural settings, providing detailed, descriptive responses and insights into individuals’ experiences and perceptions (Howitt, 2019).

These methods are useful when a detailed understanding of a phenomenon is sought.

1. Ethnographic Research

Ethnographic research emerged out of anthropological research, where anthropologists would enter into a setting for a sustained period of time, getting to know a cultural group and taking detailed observations.

Ethnographers would sometimes even act as participants in the group or culture, which many scholars argue is a weakness because it is a step away from achieving objectivity (Stokes & Wall, 2017).

In fact, in its most extreme version, ethnographers even conduct research on themselves, in a fascinating methodology called autoethnography.

The purpose is to understand the culture, social structure, and the behaviors of the group under study. It is often useful when researchers seek to understand shared cultural meanings and practices in their natural settings.

However, it can be time-consuming and may reflect researcher biases due to the immersion approach.

Pros of Ethnographic Research | Cons of Ethnographic Research
1. Provides deep cultural insights | 1. Time-consuming
2. Contextually relevant findings | 2. Potential researcher bias
3. Explores dynamic social processes | 3. May

Example of Ethnography

Liquidated: An Ethnography of Wall Street  by Karen Ho involves an anthropologist who embeds herself with Wall Street firms to study the culture of Wall Street bankers and how this culture affects the broader economy and world.

2. Phenomenological Research

Phenomenological research is a qualitative method focused on the study of individual experiences from the participant’s perspective (Tracy, 2019).

It focuses specifically on people’s experiences in relation to a specific social phenomenon ( see here for examples of social phenomena ).

This method is valuable when the goal is to understand how individuals perceive, experience, and make meaning of particular phenomena. However, because it is subjective and dependent on participants’ self-reports, findings may not be generalizable, and are highly reliant on self-reported ‘thoughts and feelings’.

Pros of Phenomenological Research | Cons of Phenomenological Research
1. Provides rich, detailed data | 1. Limited generalizability
2. Highlights personal experience and perceptions | 2. Data collection can be time-consuming
3. Allows exploration of complex phenomena | 3. Requires highly skilled researchers

Example of Phenomenological Research

A phenomenological approach to experiences with technology by Sebnem Cilesiz represents a good starting point for formulating a phenomenological study. With its focus on the ‘essence of experience’, this piece presents the methodological, reliability, validity, and data analysis techniques that phenomenologists use to explain how people experience technology in their everyday lives.

3. Historical Research

Historical research is a qualitative method involving the examination of past events to draw conclusions about the present or make predictions about the future (Stokes & Wall, 2017).

As you might expect, it’s common in the research branches of history departments in universities.

This approach is useful in studies that seek to understand the past to interpret present events or trends. However, it relies heavily on the availability and reliability of source materials, which may be limited.

Common data sources include cultural artifacts from both material and non-material culture , which are then examined, compared, contrasted, and contextualized to test hypotheses and generate theories.

Pros of Historical Research | Cons of Historical Research
1. | 1. Dependent on available sources
2. Can help understand current events or trends | 2. Potential bias in source materials
3. Allows the study of change over time | 3. Difficult to replicate

Example of Historical Research

A historical research example might be a study examining the evolution of gender roles over the last century. This research might involve the analysis of historical newspapers, advertisements, letters, and company documents, as well as sociocultural contexts.

4. Content Analysis

Content analysis is a research method that involves systematic and objective coding and interpreting of text or media to identify patterns, themes, ideologies, or biases (Schweigert, 2021).

A content analysis is useful in analyzing communication patterns, helping to reveal how texts such as newspapers, films, political speeches, and other types of ‘content’ contain narratives and biases.

However, interpretations can be very subjective, which often requires scholars to engage in practices such as cross-comparing their coding with peers or external researchers.

Content analysis can be further broken down into other specific methodologies such as semiotic analysis, multimodal analysis, and discourse analysis.

Pros of Content Analysis | Cons of Content Analysis
1. Unobtrusive data collection | 1. Lacks contextual information
2. Allows for large sample analysis | 2. Potential coder bias
3. Replicable and reliable if done properly | 3. May overlook nuances

Example of Content Analysis

How is Islam Portrayed in Western Media?  by Poorebrahim and Zarei (2013) employs a type of content analysis called critical discourse analysis (common in poststructuralist and critical theory research ). This study combs through a corpus of Western media texts to explore the language forms that are used in relation to Islam and Muslims, finding that they are overly stereotyped, which may reflect anti-Islam bias or a failure to understand the Islamic world.

5. Grounded Theory Research

Grounded theory involves developing a theory  during and after  data collection rather than beforehand.

This is in contrast to most academic research studies, which start with a hypothesis or theory and then test it through a study, where we might have a null hypothesis (disproving the theory) and an alternative hypothesis (supporting the theory).

Grounded theory is useful because it keeps an open mind to what the data might reveal. However, it can be time-consuming and requires rigorous data analysis (Tracy, 2019).

Pros of Grounded Theory Research | Cons of Grounded Theory Research
1. Helps with theory development | 1. Time-consuming
2. Rigorous data analysis | 2. Requires iterative data collection and analysis
3. Can fill gaps in existing theories | 3. Requires skilled researchers

Grounded Theory Example

Developing a Leadership Identity  by Komives et al. (2005) employs a grounded theory approach to develop a theory from the data rather than testing a hypothesis. The researchers studied the leadership identity of 13 college students taking on leadership roles. Based on their interviews, the researchers theorized that the students’ leadership identities shifted from a hierarchical view of leadership to one that embraced leadership as a collaborative concept.

6. Action Research

Action research is an approach which aims to solve real-world problems and bring about change within a setting. The study is designed to solve a specific problem – or in other words, to take action (Patten, 2017).

This approach can involve mixed methods, but is generally qualitative because it usually involves the study of a specific case study wherein the researcher works, e.g. a teacher studying their own classroom practice to seek ways they can improve.

Action research is very common in fields like education and nursing where practitioners identify areas for improvement then implement a study in order to find paths forward.

Pros of Action Research | Cons of Action Research
1. Addresses real-world problems and seeks to find solutions. | 1. It is time-consuming and often hard to fit into a practitioner’s already busy schedule.
2. Integrates research and action in an action-research cycle. | 2. Requires collaboration between researcher, practitioner, and research participants.
3. Can bring about positive change in isolated instances, such as in a school or nursery setting. | 3. Complexity of managing dual roles (where the researcher is also often the practitioner).

Action Research Example

Using Digital Sandbox Gaming to Improve Creativity Within Boys’ Writing   by Ellison and Drew was a research study one of my research students completed in his own classroom under my supervision. He implemented a digital game-based approach to literacy teaching with boys and interviewed his students to see if the use of games as stimuli for storytelling helped draw them into the learning experience.

7. Natural Observational Research

Observational research can also be quantitative (see: experimental research), but in naturalistic settings for the social sciences, researchers tend to employ qualitative data collection methods like interviews and field notes to observe people in their day-to-day environments.

This approach involves the observation and detailed recording of behaviors in their natural settings (Howitt, 2019). It can provide rich, in-depth information, but the researcher’s presence might influence behavior.

While observational research has some overlaps with ethnography (especially in regard to data collection techniques), it tends not to be as sustained as ethnography, e.g. a researcher might do 5 observations, every second Monday, as opposed to being embedded in an environment.

Pros of Qualitative Observational Research | Cons of Qualitative Observational Research
1. Captures behavior in natural settings, allowing for interesting insights into authentic behaviors. | 1. Researcher’s presence may influence behavior.
2. Can provide rich, detailed data through the researcher’s vignettes. | 2. Can be time-consuming.
3. Non-invasive, because researchers want to observe natural activities rather than interfere with research participants. | 3. Requires skilled and trained observers.

Observational Research Example

A researcher might use qualitative observational research to study the behaviors and interactions of children at a playground. The researcher would document the behaviors observed, such as the types of games played, levels of cooperation , and instances of conflict.

8. Case Study Research

Case study research is a qualitative method that involves a deep and thorough investigation of a single individual, group, or event in order to explore facets of that phenomenon that cannot be captured using other methods (Stokes & Wall, 2017).

Case study research is especially valuable in providing contextualized insights into specific issues, facilitating the application of abstract theories to real-world situations (Patten, 2017).

However, findings from a case study may not be generalizable due to the specific context and the limited number of cases studied (Walliman, 2021).

Pros of Case Study Research | Cons of Case Study Research
1. Provides detailed insights | 1. Limited generalizability
2. Facilitates the study of complex phenomena | 2. Can be time-consuming
3. Can test or generate theories | 3. Subject to observer bias

See More: Case Study Advantages and Disadvantages

Example of a Case Study

Scholars conduct a detailed exploration of the implementation of a new teaching method within a classroom setting. The study focuses on how the teacher and students adapt to the new method, the challenges encountered, and the outcomes on student performance and engagement. While the study provides specific and detailed insights into the teaching method in that classroom, it cannot be generalized to other classrooms, as statistical significance has not been established through this qualitative approach.

Quantitative Research Methods

Quantitative research methods involve the systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques (Pajo, 2022). The focus is on gathering numerical data and generalizing it across groups of people or to explain a particular phenomenon.

9. Experimental Research

Experimental research is a quantitative method where researchers manipulate one variable to determine its effect on another (Walliman, 2021).

This is common, for example, in high-school science labs, where students are asked to introduce a variable into a setting in order to examine its effect.

This type of research is useful in situations where researchers want to determine causal relationships between variables. However, experimental conditions may not reflect real-world conditions.

Pros of Experimental Research | Cons of Experimental Research
1. Allows for determination of causality. | 1. Might not reflect real-world conditions.
2. Allows for the study of phenomena in highly controlled environments to minimize research contamination. | 2. Can be costly and time-consuming to create a controlled environment.
3. Can be replicated so other researchers can test and verify the results. | 3. Ethical concerns need to be addressed, as the research directly manipulates variables.

Example of Experimental Research

A researcher may conduct an experiment to determine the effects of a new educational approach on student learning outcomes. Students would be randomly assigned to either the control group (traditional teaching method) or the experimental group (new educational approach).
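Random assignment like this can be sketched in a few lines of code. The helper below is illustrative only (the function name, student labels, and group sizes are invented for this example): it shuffles a copy of the participant list and splits it down the middle.

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split participants into control and experimental groups.

    Illustrative helper (not from the article): shuffles a copy of the
    participant list and splits it at the midpoint.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "control": shuffled[:midpoint],       # e.g. traditional teaching method
        "experimental": shuffled[midpoint:],  # e.g. new educational approach
    }

students = [f"student_{i}" for i in range(1, 21)]
groups = randomly_assign(students, seed=42)
print(len(groups["control"]), len(groups["experimental"]))  # 10 10
```

Seeding the generator makes the assignment reproducible for auditing; in a real trial, the assignment sequence would also be concealed from the people enrolling participants.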

10. Surveys and Questionnaires

Surveys and questionnaires are quantitative methods that involve asking research participants structured and predefined questions to collect data about their attitudes, beliefs, behaviors, or characteristics (Patten, 2017).

Surveys are beneficial for collecting data from large samples, but they depend heavily on the honesty and accuracy of respondents.

They tend to be seen as more authoritative than their qualitative counterparts, semi-structured interviews, because the data is quantifiable (e.g. a questionnaire where responses are given on a scale from 1 to 10 allows researchers to calculate and compare statistical means and variations across sub-populations in the study).

Pros of Surveys and Questionnaires | Cons of Surveys and Questionnaires
1. Data can be gathered from larger samples than is possible in qualitative research. | 1. There is heavy dependence on respondent honesty.
2. The data is quantifiable, allowing for comparison across subpopulations. | 2. There is limited depth of response as opposed to qualitative approaches.
3. Can be cost-effective and time-efficient. | 3. Static, with no flexibility to explore responses (unlike semi-structured or unstructured interviewing).

Example of a Survey Study

A company might use a survey to gather data about employee job satisfaction across its offices worldwide. Employees would be asked to rate various aspects of their job satisfaction on a Likert scale. While this method provides a broad overview, it may lack the depth of understanding possible with other methods (Stokes & Wall, 2017).
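As a sketch of how such ratings might be summarized, the snippet below computes the mean and standard deviation of hypothetical 1-to-10 satisfaction ratings per office (the office names and figures are invented for illustration):

```python
from statistics import mean, stdev

# Hypothetical job-satisfaction ratings on a 1-10 scale, grouped by office.
responses = {
    "London": [7, 8, 6, 9, 7],
    "Tokyo": [5, 6, 6, 7, 5],
    "New York": [8, 9, 7, 8, 9],
}

# Summarize each sub-population so offices can be compared directly.
for office, ratings in responses.items():
    print(f"{office}: mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")
```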

11. Longitudinal Studies

Longitudinal studies involve repeated observations of the same variables over extended periods (Howitt, 2019). These studies are valuable for tracking development and change but can be costly and time-consuming.

With multiple data points collected over extended periods, it’s possible to examine continuous changes within things like population dynamics or consumer behavior. This makes a detailed analysis of change possible.

[Figure: a longitudinal study collects data from the same sample over time, so researchers can examine how variables change over time.]

Perhaps the most relatable example of a longitudinal study is a national census, which is taken on the same day every few years, to gather comparative demographic data that can show how a nation is changing over time.

While longitudinal studies are commonly quantitative, there are also qualitative ones, such as the famous 7 Up study from the UK, which has followed 14 individuals every 7 years to explore their development over their lives.

Pros of Longitudinal Studies | Cons of Longitudinal Studies
1. Tracks changes over time, allowing for comparison of past to present events. | 1. Almost by definition time-consuming, because time needs to pass between each data collection session.
2. Can identify sequences of events, though causality is often harder to determine. | 2. High risk of participant dropout over time as participants move on with their lives.

Example of a Longitudinal Study

A national census, taken every few years, uses surveys to develop longitudinal data, which is then compared and analyzed to present accurate trends over time. Trends a census can reveal include changes in religiosity, values and attitudes on social issues, and much more.

12. Cross-Sectional Studies

Cross-sectional studies are a quantitative research method that involves analyzing data from a population at a specific point in time (Patten, 2017). They provide a snapshot of a situation but cannot determine causality.

This design is used to measure and compare the prevalence of certain characteristics or outcomes in different groups within the sampled population.

[Figure: a cross-sectional study collects data at a single point in time, allowing comparison of groups within the sample.]

The major advantage of cross-sectional design is its ability to measure a wide range of variables simultaneously without needing to follow up with participants over time.

However, cross-sectional studies do have limitations . This design can only show if there are associations or correlations between different variables, but cannot prove cause and effect relationships, temporal sequence, changes, and trends over time.

Pros of Cross-Sectional Studies | Cons of Cross-Sectional Studies
1. Quick and inexpensive, with no long-term commitment required. | 1. Cannot determine causality, because it is a simple snapshot with no time delay between data collection points.
2. Good for descriptive analyses. | 2. Does not allow researchers to follow up with research participants.

Example of a Cross-Sectional Study

Our longitudinal study example of a national census also happens to contain cross-sectional design. One census is cross-sectional, displaying only data from one point in time. But when a census is taken once every few years, it becomes longitudinal, and so long as the data collection technique remains unchanged, identification of changes will be achievable, adding another time dimension on top of a basic cross-sectional study.

13. Correlational Research

Correlational research is a quantitative method that seeks to determine if and to what degree a relationship exists between two or more quantifiable variables (Schweigert, 2021).

This approach provides a fast and easy way to form initial hypotheses based on positive or  negative correlation trends  that can be observed within a dataset.

While correlational research can reveal relationships between variables, it cannot establish causality.

Methods used for data analysis may include statistical correlations such as Pearson’s or Spearman’s.
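To illustrate, here is a minimal pure-Python sketch of both coefficients (the study-hours and GPA figures are invented; this Spearman version assumes no tied values, whereas real implementations average the ranks of ties):

```python
from statistics import mean

def pearson(x, y):
    """Pearson's r: strength of the linear association between two lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def spearman(x, y):
    """Spearman's rho: Pearson's r computed on ranks (assumes no ties)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

hours = [2, 4, 6, 8, 10]          # hypothetical weekly study hours
gpa = [2.1, 2.8, 3.0, 3.4, 3.9]   # hypothetical end-of-semester GPAs
print(round(pearson(hours, gpa), 3))
print(round(spearman(hours, gpa), 3))
```

In practice, library routines such as SciPy's `pearsonr` and `spearmanr` would be used instead of hand-rolled versions.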

Pros of Correlational Research | Cons of Correlational Research
1. Reveals relationships between variables | 1. Cannot determine causality
2. Can use existing data | 2. May be
3. Can guide further experimental research | 3. Correlation may be coincidental

Example of Correlational Research

A team of researchers is interested in studying the relationship between the amount of time students spend studying and their academic performance. They gather data from a high school, measuring the number of hours each student studies per week and their grade point averages (GPAs) at the end of the semester. Upon analyzing the data, they find a positive correlation, suggesting that students who spend more time studying tend to have higher GPAs.

14. Quasi-Experimental Design Research

Quasi-experimental design research is a quantitative research method that is similar to experimental design but lacks the element of random assignment to treatment or control.

Instead, quasi-experimental designs typically rely on certain other methods to control for extraneous variables.

The term ‘quasi-experimental’ implies that the experiment resembles a true experiment, but it is not exactly the same because it doesn’t meet all the criteria for a ‘true’ experiment, specifically in terms of control and random assignment.

Quasi-experimental design is useful when researchers want to study a causal hypothesis or relationship, but practical or ethical considerations prevent them from manipulating variables and randomly assigning participants to conditions.

Pros | Cons
1. It’s more feasible to implement than true experiments. | 1. Without random assignment, it’s harder to rule out confounding variables.
2. It can be conducted in real-world settings, making the findings more applicable to the real world. | 2. The lack of random assignment may threaten the validity of the study.
3. Useful when it’s unethical or impossible to manipulate the independent variable or randomly assign participants. | 3. It’s more difficult to establish a cause-effect relationship due to the potential for confounding variables.

Example of Quasi-Experimental Design

A researcher wants to study the impact of a new math tutoring program on student performance. However, ethical and practical constraints prevent random assignment to the “tutoring” and “no tutoring” groups. Instead, the researcher compares students who chose to receive tutoring (experimental group) to similar students who did not choose to receive tutoring (control group), controlling for other variables like grade level and previous math performance.
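A naive version of that matching step can be sketched as follows (all names, grades, and scores are invented; real studies typically use more sophisticated techniques such as propensity-score matching):

```python
# Hypothetical records: (name, grade_level, prior_math_score, tutored)
students = [
    ("Ana", 9, 72, True), ("Ben", 9, 70, False), ("Cam", 9, 90, False),
    ("Dee", 10, 85, True), ("Eli", 10, 84, False), ("Fay", 10, 60, False),
]

def match_controls(records):
    """Pair each tutored student with the untutored student in the same
    grade whose prior score is closest (a naive nearest-neighbour match)."""
    tutored = [r for r in records if r[3]]
    controls = [r for r in records if not r[3]]
    pairs = []
    available = list(controls)
    for t in tutored:
        candidates = [c for c in available if c[1] == t[1]]
        if not candidates:
            continue  # no same-grade control left to match
        best = min(candidates, key=lambda c: abs(c[2] - t[2]))
        available.remove(best)  # each control is used at most once
        pairs.append((t[0], best[0]))
    return pairs

print(match_controls(students))  # [('Ana', 'Ben'), ('Dee', 'Eli')]
```

Matching on observed covariates like this reduces, but never eliminates, the risk of confounding from variables the researcher did not measure.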

Related: Examples and Types of Random Assignment in Research

15. Meta-Analysis Research

Meta-analysis statistically combines the results of multiple studies on a specific topic to yield a more precise estimate of the effect size. It’s the gold standard of secondary research.

Meta-analysis is particularly useful when there are numerous studies on a topic, and there is a need to integrate the findings to draw more reliable conclusions.

Some meta-analyses can identify flaws or gaps in a corpus of research, which can be highly influential in academic research, despite the lack of primary data collection.

However, they tend only to be feasible when there is a sizable corpus of high-quality and reliable studies into a phenomenon.

Pros | Cons
Increased Statistical Power: By combining data from multiple studies, meta-analysis increases the statistical power to detect effects. | Publication Bias: Studies with null or negative findings are less likely to be published, leading to an overestimation of effect sizes.
Greater Precision: It provides more precise estimates of effect sizes by reducing the influence of random error. | Quality of Studies: The quality of a meta-analysis depends on the quality of the studies included.
Resolving Discrepancies: Meta-analysis can help resolve disagreements between different studies on a topic. | Heterogeneity: Differences in study design, sample, or procedures can introduce heterogeneity, complicating interpretation of results.

Example of a Meta-Analysis

The power of feedback revisited (Wisniewski, Zierer & Hattie, 2020) is a meta-analysis that examines 435 empirical studies on the effects of feedback on student learning. They use a random-effects model to ascertain whether there is a clear effect size across the literature. The authors find that feedback tends to impact cognitive and motor skill outcomes but has less of an effect on motivational and behavioral outcomes.
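As a simplified illustration of how a meta-analysis pools results, the sketch below computes a fixed-effect inverse-variance weighted average (the effect sizes and variances are invented; a random-effects model, as used by Wisniewski et al., additionally estimates between-study variance before weighting):

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: weight each study's effect
    size by 1/variance and return the weighted mean and its variance.

    Simplified sketch only; a random-effects model would first estimate
    between-study variance and add it to each study's variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    return pooled, 1.0 / total

# Hypothetical effect sizes (Cohen's d) and variances from three studies.
effects = [0.40, 0.55, 0.30]
variances = [0.04, 0.02, 0.05]
est, var = pooled_effect(effects, variances)
print(round(est, 3), round(var, 4))  # 0.458 0.0105
```

The intuition is that more precise studies (smaller variance) pull the pooled estimate more strongly toward their result.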

Choosing a research method requires a lot of consideration regarding what you want to achieve, your research paradigm, and the methodology that is most valuable for what you are studying. There are multiple types of research methods, many of which I haven’t been able to present here. Generally, it’s recommended that you work with an experienced researcher or research supervisor to identify a suitable research method for your study at hand.

Hammond, M., & Wellington, J. (2020). Research methods: The key concepts . New York: Routledge.

Howitt, D. (2019). Introduction to qualitative research methods in psychology . London: Pearson UK.

Pajo, B. (2022). Introduction to research methods: A hands-on approach . New York: Sage Publications.

Patten, M. L. (2017). Understanding research methods: An overview of the essentials . New York: Sage.

Schweigert, W. A. (2021). Research methods in psychology: A handbook . Los Angeles: Waveland Press.

Stokes, P., & Wall, T. (2017). Research methods . New York: Bloomsbury Publishing.

Tracy, S. J. (2019). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact . London: John Wiley & Sons.

Walliman, N. (2021). Research methods: The basics. London: Routledge.

Chris Drew (PhD)



BMC Med Res Methodol

A tutorial on methodological studies: the what, when, how and why

Lawrence mbuagbaw.

1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada

2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada

3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Daeria O. Lawson

Livia Puljak

4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia

David B. Allison

5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA

Lehana Thabane

6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada

7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada

8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada

Associated Data

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1: Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as pre-cursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
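As a rough illustration of these two approaches, the sketch below draws a simple random sample and a stratified random sample from a hypothetical list of article records (the group labels and sizes are invented for illustration, not taken from any of the cited studies):

```python
import random

# Hypothetical sampling frame: article records tagged by group
# (labels and sizes are invented for illustration).
articles = [{"id": i, "group": "cochrane" if i % 10 == 0 else "non_cochrane"}
            for i in range(1, 1001)]

random.seed(2020)  # fixed seed so the sample is reproducible

# Simple random sample: every report has the same selection probability,
# so the smaller group may end up underrepresented.
simple_sample = random.sample(articles, 100)

# Stratified random sample: draw equally from each group to support comparisons.
def stratified_sample(records, key, n_per_stratum):
    strata = {}
    for r in records:
        strata.setdefault(r[key], []).append(r)
    return {g: random.sample(members, n_per_stratum)
            for g, members in strata.items()}

strat = stratified_sample(articles, "group", 50)
```

Fixing the random seed also makes the selection process transparent and reproducible, in line with the advice on appraisal given later in this paper.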

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can search by journal name within your database. Even though one could search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as filters that narrow the search to a certain period or to study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How should I appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These include biases related to selection, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
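A minimal sketch of such a confidence-interval-based calculation, using the standard normal-approximation formula for a single proportion (the expected proportion and margin below are illustrative assumptions, not values from El Dib et al.):

```python
from statistics import NormalDist
import math

def n_for_proportion(p, margin, conf=0.95):
    """Research reports needed to estimate a proportion p to within
    +/- margin at the given confidence level (normal approximation)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # two-sided critical value
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Assuming ~50% of trials report the item, estimated to +/- 5 points:
print(n_for_proportion(0.5, 0.05))  # -> 385
```

Using p = 0.5 is a conservative default: it maximizes p(1 − p), and hence the required number of articles.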

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles from a specific source (e.g. the Cochrane Library) may share reporting standards. Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
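As a rough illustration of why this matters, the sketch below applies the classic design-effect formula, DEFF = 1 + (m - 1) * ICC, to show how much a naive standard error understates uncertainty when articles are clustered within journals (all numbers are hypothetical; a full analysis would use one of the regression approaches cited above):

```python
import math

def design_effect(m, icc):
    """Variance inflation from sampling m articles per cluster (journal)
    with intra-cluster correlation icc: DEFF = 1 + (m - 1) * icc."""
    return 1 + (m - 1) * icc

def naive_se(p, n):
    """Standard error of a proportion, ignoring clustering."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical numbers: 300 articles drawn as 20 per journal from
# 15 journals, with an intra-cluster correlation of 0.10.
p, n, m, icc = 0.4, 300, 20, 0.10
se_naive = naive_se(p, n)
se_adjusted = se_naive * math.sqrt(design_effect(m, icc))
print(round(se_naive, 4), round(se_adjusted, 4))
```

Here the cluster-adjusted standard error is roughly 1.7 times the naive one, so confidence intervals computed without accounting for clustering would be unduly narrow.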

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances as machine learning and natural language processing technologies support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
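Agreement between duplicate extractors is often summarized with a chance-corrected statistic such as Cohen’s kappa; a minimal sketch, with invented extraction codes:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two extractors' categorical codes."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # observed agreement
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance, from each extractor's marginal frequencies
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented codes: two extractors judging whether 10 trials reported
# allocation concealment.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "yes", "no", "no", "yes", "yes", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))  # -> 0.58
```

Disagreements flagged this way can then be resolved by discussion or a third extractor before analysis.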

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

  • Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
  • Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
  • Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others have found no such difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]
  • Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 – 67 ].
  • Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
  • Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
  • Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
  • Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, restricting to high JIF journals may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years, as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
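As an illustration of adjustment by stratification, the sketch below computes a Mantel-Haenszel pooled odds ratio (one common alternative to regression adjustment) across strata of a hypothetical confounder such as journal endorsement of guidelines; all counts are invented:

```python
def mantel_haenszel_or(strata):
    """Pooled odds ratio across strata; each stratum is a 2x2 table
    (a, b, c, d) = (exposed with outcome, exposed without,
                    unexposed with outcome, unexposed without)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Invented counts: funded vs unfunded trials with complete reporting,
# stratified by whether the journal endorses reporting guidelines.
strata = [
    (30, 10, 20, 20),  # endorsing journals
    (15, 15, 5, 15),   # non-endorsing journals
]
print(mantel_haenszel_or(strata))  # -> 3.0
```

In this toy example the stratum-specific odds ratios happen to be identical, so the pooled estimate equals both; when they differ, the Mantel-Haenszel estimate is a weighted average of the stratum-specific odds ratios.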

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate, justified and randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

  • 1. What is the aim?

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

  • 2. What is the design?

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
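Such a test of equal proportions can be sketched as a two-proportion z-test (the counts below are invented for illustration, not Tricco et al.’s data):

```python
from statistics import NormalDist
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test of H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # common proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented counts: 60/100 non-Cochrane vs 40/100 Cochrane reviews
# reporting positive conclusions.
z, p = two_proportion_z(60, 100, 40, 100)
print(round(z, 2), round(p, 4))  # z is about 2.83, p < 0.01
```

A chi-squared test on the corresponding 2x2 table would give an equivalent result; with small counts, an exact test would be preferable.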

  • 3. What is the sampling strategy?

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.

  • 4. What is the unit of analysis?

Many methodological studies use a research report (e.g. the full manuscript or the abstract of a study) as the unit of analysis, in which case inferences can be made at the study level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries, etc.

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig. 2.

Fig. 2. A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Acknowledgements

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials
EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe
GRADE: Grading of Recommendations, Assessment, Development and Evaluations
PICOT: Participants, Intervention, Comparison, Outcome, Timeframe
PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses
SWAR: Studies Within a Review
SWAT: Studies Within a Trial

Authors’ contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Funding

This work did not receive any dedicated funding.

Availability of data and materials

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Types of Study Design


Introduction

Study designs are frameworks used in medical research to gather data and explore a specific research question.

Choosing an appropriate study design is one of many essential considerations before conducting research to minimise bias and yield valid results.

This guide provides a summary of study designs commonly used in medical research, their characteristics, advantages and disadvantages.

Case-report and case-series

A case report is a detailed description of a patient’s medical history, diagnosis, treatment, and outcome. A case report typically documents unusual or rare cases or reports new or unexpected clinical findings.

A case series is a similar study that involves a group of patients sharing a similar disease or condition. A case series involves a comprehensive review of medical records for each patient to identify common features or disease patterns. Case series help researchers better understand a disease’s presentation, diagnosis, and treatment.

While a case report focuses on a single patient, a case series involves a group of patients to provide a broader perspective on a specific disease. Both case reports and case series are important tools for understanding rare or unusual diseases.

Advantages of case series and case reports include:

  • Able to describe rare or poorly understood conditions or diseases
  • Helpful in generating hypotheses and identifying patterns or trends in patient populations
  • Can be conducted relatively quickly and at a lower cost compared to other research designs

Disadvantages of case series and case reports include:

  • Prone to selection bias, meaning that the patients included in the series may not be representative of the general population
  • They lack a control group, which makes it difficult to draw conclusions about the effectiveness of different treatments or interventions
  • They are descriptive and cannot establish causality or control for confounding factors

Cross-sectional study

A cross-sectional study aims to measure the prevalence or frequency of a disease in a population at a specific point in time. In other words, it provides a “snapshot” of the population at a single moment in time.

Cross-sectional studies differ from other study designs in that they collect data on the exposure and the outcome of interest at the same time from a sample of individuals in the population. This type of data is used to investigate the distribution of health-related conditions and behaviours in different populations, which is especially useful for guiding the development of public health interventions.

Example of a cross-sectional study

A cross-sectional study might investigate the prevalence of hypertension (the outcome) in a sample of adults in a particular region. The researchers would measure blood pressure levels in each participant and gather information on other factors that could influence blood pressure, such as age, sex, weight, and lifestyle habits (exposure).
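
The prevalence estimate from such a study is a simple proportion. A sketch with invented blood pressure readings and one commonly used diagnostic threshold (both are assumptions for illustration) might be:

```python
# Hypothetical cross-sectional data: blood pressure measured once per participant
participants = [
    {"id": 1, "systolic": 150, "diastolic": 95},
    {"id": 2, "systolic": 118, "diastolic": 76},
    {"id": 3, "systolic": 142, "diastolic": 92},
    {"id": 4, "systolic": 125, "diastolic": 82},
    {"id": 5, "systolic": 160, "diastolic": 100},
]

# One common working definition of hypertension: systolic >= 140 or diastolic >= 90 mmHg
cases = [p for p in participants if p["systolic"] >= 140 or p["diastolic"] >= 90]
prevalence = len(cases) / len(participants)
print(f"Point prevalence: {prevalence:.0%}")
```

A real analysis would then cross-tabulate prevalence against the exposure variables (age, sex, weight, lifestyle) collected at the same time.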

Advantages of cross-sectional studies include:

  • Relatively quick and inexpensive to conduct compared to other study designs, such as cohort or case-control studies
  • They can provide a snapshot of the prevalence and distribution of a particular health condition in a population
  • They can help to identify patterns and associations between exposure and outcome variables, which can be used to generate hypotheses for further research

Disadvantages of cross-sectional studies include:

  • They cannot establish causality, as they do not follow participants over time and cannot determine the temporal sequence between exposure and outcome
  • Prone to selection bias, as the sample may not represent the entire population being studied
  • They cannot account for confounding variables, which may affect the relationship between the exposure and outcome of interest

Case-control study

A case-control study compares people who have developed a disease of interest (cases) with people who have not developed the disease (controls) to identify potential risk factors associated with the disease.

Once cases and controls have been identified, researchers then collect information about related risk factors, such as age, sex, lifestyle factors, or environmental exposures, from each individual. By comparing the prevalence of risk factors between the cases and the controls, researchers can determine the association between the risk factors and the disease.

Example of a case-control study

A case-control study design might involve comparing a group of individuals with lung cancer (cases) to a group of individuals without lung cancer (controls) to assess the association between smoking (risk factor) and the development of lung cancer.
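
Because case-control studies sample on the outcome rather than the exposure, the usual measure of association is the odds ratio rather than the relative risk. A sketch with invented counts for the smoking and lung cancer example:

```python
from math import log, exp, sqrt

# Hypothetical 2x2 table for a case-control study of smoking and lung cancer
#              cases  controls
smokers     = [80,    30]
non_smokers = [20,    70]

a, b = smokers       # exposed cases, exposed controls
c, d = non_smokers   # unexposed cases, unexposed controls

# Odds ratio: odds of exposure among cases vs odds among controls
odds_ratio = (a * d) / (b * c)

# 95% confidence interval on the log-odds scale (Woolf's method)
se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = exp(log(odds_ratio) - 1.96 * se)
ci_high = exp(log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

An odds ratio above 1 with a confidence interval excluding 1 would suggest an association between the risk factor and the disease, though not causation.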

Advantages of case-control studies include:

  • Useful for studying rare diseases, as they allow researchers to selectively recruit cases with the disease of interest
  • Useful for investigating potential risk factors for a disease, as the researchers can collect data on many different factors from both cases and controls
  • Can be helpful in situations where it is not ethical or practical to manipulate exposure levels or randomise study participants

Disadvantages of case-control studies include:

  • Prone to selection bias, as the controls may not be representative of the general population or may have different underlying risk factors than the cases
  • Cannot establish causality, as they can only identify associations between factors and disease
  • May be limited by the availability of suitable controls, as finding appropriate controls who have similar characteristics to the cases can be challenging

Cohort study

A cohort study follows a group of individuals (a cohort) over time to investigate the relationship between an exposure or risk factor and a particular outcome or health condition. Cohort studies can be further classified into prospective or retrospective cohort studies.

Prospective cohort study

A prospective cohort study is a study in which the researchers select a group of individuals who do not have a particular disease or outcome of interest at the start of the study.

They then follow this cohort over time to track the number of patients who develop the outcome. At the start of the study, information on exposure(s) of interest is also collected.

Example of a prospective cohort study

A prospective cohort study might follow a group of individuals who have never smoked and measure their exposure to tobacco smoke over time to investigate the relationship between smoking and lung cancer.

Retrospective cohort study

In contrast, a retrospective cohort study is a study in which the researchers select a group of individuals who have already been exposed to something (e.g. smoking) and look back in time (for example, through patient charts) to see if they developed the outcome (e.g. lung cancer).

The key difference in retrospective cohort studies is that data on exposure and outcome are collected after the outcome has occurred.

Example of a retrospective cohort study

A retrospective cohort study might look at the medical records of smokers and see if they developed a particular adverse event such as lung cancer.

Advantages of cohort studies include:

  • Generally considered to be the most appropriate study design for investigating the temporal relationship between exposure and outcome
  • Can provide estimates of incidence and relative risk, which are useful for quantifying the strength of the association between exposure and outcome
  • Can be used to investigate multiple outcomes or endpoints associated with a particular exposure, which can help to identify unexpected effects or outcomes
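
The incidence and relative risk mentioned above can be computed directly from cohort counts. A sketch with invented numbers for the smoking example:

```python
# Hypothetical cohort counts: outcome events observed over the follow-up period
exposed_events, exposed_total = 30, 1000      # e.g. smokers who develop lung cancer
unexposed_events, unexposed_total = 10, 1000  # e.g. never-smokers who develop it

incidence_exposed = exposed_events / exposed_total
incidence_unexposed = unexposed_events / unexposed_total

# Relative risk: ratio of the two incidences (computed from the raw counts
# to avoid floating-point rounding in the intermediate divisions)
relative_risk = (exposed_events * unexposed_total) / (unexposed_events * exposed_total)

print(f"Incidence (exposed): {incidence_exposed:.1%}")
print(f"Incidence (unexposed): {incidence_unexposed:.1%}")
print(f"Relative risk: {relative_risk:.1f}")
```

A relative risk of 3 here would mean the exposed group developed the outcome at three times the rate of the unexposed group over the follow-up period.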

Disadvantages of cohort studies include:

  • Can be expensive and time-consuming to conduct, particularly for long-term follow-up
  • May suffer from selection bias, as the sample may not be representative of the entire population being studied
  • May suffer from attrition bias, as participants may drop out or be lost to follow-up over time

Meta-analysis

A meta-analysis is a type of study that involves extracting outcome data from all relevant studies in the literature and combining the results of multiple studies to produce an overall estimate of the effect size of an intervention or exposure.

Meta-analysis is often conducted alongside a systematic review and can be considered a study of studies. By doing this, researchers provide a more comprehensive and reliable estimate of the overall effect size and its confidence interval (a measure of precision).

Meta-analyses can be conducted for a wide range of research questions, including evaluating the effectiveness of medical interventions, identifying risk factors for disease, or assessing the accuracy of diagnostic tests. They are particularly useful when the results of individual studies are inconsistent or when the sample sizes of individual studies are small, as a meta-analysis can provide a more precise estimate of the true effect size.
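
The overall estimate described above is typically an inverse-variance weighted average of the individual study effects. A minimal fixed-effect sketch, using invented effect estimates and standard errors (real meta-analyses usually also fit random-effects models and assess heterogeneity):

```python
from math import sqrt

# Hypothetical study results: effect estimates (e.g. log odds ratios) and
# their standard errors from three independent trials
studies = [
    {"effect": -0.40, "se": 0.20},
    {"effect": -0.25, "se": 0.15},
    {"effect": -0.55, "se": 0.30},
]

# Inverse-variance (fixed-effect) pooling: weight each study by 1/se^2,
# so more precise studies contribute more to the pooled estimate
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```

Note how the pooled standard error is smaller than any individual study's standard error, which is the "increased statistical power" listed among the advantages below.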

When conducting a meta-analysis, researchers must carefully assess the risk of bias in each study to enhance the validity of the meta-analysis. Many aspects of research studies are prone to bias, such as the methodology and the reporting of results. Where studies exhibit a high risk of bias, authors may opt to exclude the study from the analysis or perform a subgroup or sensitivity analysis.

Advantages of a meta-analysis include:

  • Combine the results of multiple studies, resulting in a larger sample size and increased statistical power, to provide a more comprehensive and precise estimate of the effect size of an intervention or outcome
  • Can help to identify sources of heterogeneity or variability in the results of individual studies by exploring the influence of different study characteristics or subgroups
  • Can help to resolve conflicting results or controversies in the literature by providing a more robust estimate of the effect size

Disadvantages of a meta-analysis include:

  • Susceptible to publication bias, where studies with statistically significant or positive results are more likely to be published than studies with nonsignificant or negative results. This bias can lead to an overestimation of the treatment effect in a meta-analysis
  • May not be appropriate if the studies included are too heterogeneous, as this can make it difficult to draw meaningful conclusions from the pooled results
  • Depend on the quality and completeness of the data available from the individual studies and may be limited by the lack of data on certain outcomes or subgroups

Ecological study

An ecological study assesses the relationship between outcome and exposure at a population level or among groups of people rather than studying individuals directly.

The main goal of an ecological study is to observe and analyse patterns or trends at the population level and to identify potential associations or correlations between environmental factors or exposures and health outcomes.

Ecological studies focus on collecting data on population health outcomes, such as disease or mortality rates, and environmental factors or exposures, such as air pollution, temperature, or socioeconomic status.

Example of an ecological study

An ecological study might be used when comparing smoking rates and lung cancer incidence across different countries.
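
In an analysis of this kind, the correlation is computed across countries (the aggregate units), not across individuals. A sketch with invented country-level figures:

```python
# Hypothetical country-level (aggregate) data: smoking prevalence (%) and
# lung cancer incidence per 100,000 -- illustrative values only
smoking = [15.0, 22.0, 30.0, 18.0, 27.0]
cancer = [25.0, 38.0, 52.0, 30.0, 45.0]

n = len(smoking)
mean_x = sum(smoking) / n
mean_y = sum(cancer) / n

# Pearson correlation across countries, not across individuals
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(smoking, cancer))
var_x = sum((x - mean_x) ** 2 for x in smoking)
var_y = sum((y - mean_y) ** 2 for y in cancer)
r = cov / (var_x * var_y) ** 0.5
print(f"Population-level correlation: r = {r:.3f}")
```

Even a strong population-level correlation here would not license conclusions about individuals, which is exactly the ecological fallacy discussed below.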

Advantages of an ecological study include:

  • Provide insights into how social, economic, and environmental factors may impact health outcomes in real-world settings, which can inform public health policies and interventions
  • Cost-effective and efficient, often using existing data or readily available data, such as data from national or regional databases

Disadvantages of an ecological study include:

  • The ecological fallacy occurs when conclusions about individual-level associations are drawn from population-level differences
  • Because ecological studies rely on population-level (i.e. aggregate) rather than individual-level data, they cannot establish causal relationships between exposures and outcomes, as they do not account for differences or confounders at the individual level

Randomised controlled trial

A randomised controlled trial (RCT) is an important study design commonly used in medical research to determine the effectiveness of a treatment or intervention. It is considered the gold standard in research design because it allows researchers to draw cause-and-effect conclusions about the effects of an intervention.

In an RCT, participants are randomly assigned to two or more groups. One group receives the intervention being tested, such as a new drug or a specific medical procedure. In contrast, the other group is a control group and receives either no intervention or a placebo.

Randomisation ensures that each participant has an equal chance of being assigned to either group, thereby minimising selection bias. To reduce bias further, an RCT often uses a technique called blinding, in which study participants, researchers, or analysts are kept unaware of participant assignment during the study. The participants are then followed over time, and outcome measures are collected and compared to determine if there is any statistically significant difference between the intervention and control groups.
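
Simple 1:1 randomisation can be sketched in a few lines. This is an illustration only; real trials use dedicated randomisation systems, often with stratification or permuted blocks, and the participant IDs here are invented:

```python
import random

random.seed(7)  # fixed seed so the allocation is reproducible for the example

# Hypothetical participant identifiers
participants = [f"P{i:03d}" for i in range(1, 21)]

# Shuffle the list, then split it in half to get two balanced arms
shuffled = participants[:]
random.shuffle(shuffled)
intervention = sorted(shuffled[:10])
control = sorted(shuffled[10:])

print("Intervention:", intervention)
print("Control:", control)
```

Shuffling then splitting guarantees equal arm sizes while still giving every participant an equal chance of either assignment.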

Example of a randomised controlled trial

An RCT might be employed to evaluate the effectiveness of a new smoking cessation program in helping individuals quit smoking compared to the existing standard of care.

Advantages of an RCT include:

  • Considered the most reliable study design for establishing causal relationships between interventions and outcomes and determining the effectiveness of interventions
  • Randomisation of participants to intervention and control groups ensures that the groups are similar at the outset, reducing the risk of selection bias and enhancing internal validity
  • Using a control group allows researchers to compare with the group that received the intervention while controlling for confounding factors

Disadvantages of an RCT include:

  • Can raise ethical concerns; for example, it may be considered unethical to withhold an intervention from a control group, especially if the intervention is known to be effective
  • Can be expensive and time-consuming to conduct, requiring resources for participant recruitment, randomisation, data collection, and analysis
  • Often have strict inclusion and exclusion criteria, which may limit the generalisability of the findings to broader populations
  • May not always be feasible or practical for certain research questions, especially in rare diseases or when studying long-term outcomes

Dr Chris Jefferies


Grad Coach

How To Choose Your Research Methodology

Qualitative vs quantitative vs mixed methods.

By: Derek Jansen (MBA). Expert Reviewed By: Dr Eunice Rautenbach | June 2021

Without a doubt, one of the most common questions we receive at Grad Coach is “How do I choose the right methodology for my research?”. It’s easy to see why – with so many options on the research design table, it’s easy to get intimidated, especially with all the complex lingo!

In this post, we’ll explain the three overarching types of research – qualitative, quantitative and mixed methods – and how you can go about choosing the best methodological approach for your research.

Overview: Choosing Your Methodology

Understanding the options – Qualitative research – Quantitative research – Mixed methods-based research

Choosing a research methodology – Nature of the research – Research area norms – Practicalities


1. Understanding the options

Before we jump into the question of how to choose a research methodology, it’s useful to take a step back to understand the three overarching types of research – qualitative, quantitative and mixed methods-based research. Each of these options takes a different methodological approach.

Qualitative research utilises data that is not numbers-based. In other words, qualitative research focuses on words, descriptions, concepts or ideas, while quantitative research makes use of numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers” to measure differences between variables and the relationships between them.

Importantly, qualitative research methods are typically used to explore and gain a deeper understanding of the complexity of a situation – to draw a rich picture. In contrast to this, quantitative methods are usually used to confirm or test hypotheses. In other words, they have distinctly different purposes. The comparison below highlights a few of the key differences between qualitative and quantitative research – you can learn more about the differences here.

Qualitative research:

  • Uses an inductive approach
  • Is used to build theories
  • Takes a subjective approach
  • Adopts an open and flexible approach
  • The researcher is close to the respondents
  • Interviews and focus groups are often used to collect word-based data
  • Generally draws on small sample sizes
  • Uses qualitative data analysis techniques (e.g. content analysis, thematic analysis)

Quantitative research:

  • Uses a deductive approach
  • Is used to test theories
  • Takes an objective approach
  • Adopts a closed, highly planned approach
  • The researcher is removed from the respondents
  • Surveys or laboratory equipment are often used to collect number-based data
  • Generally requires large sample sizes
  • Uses statistical analysis techniques to make sense of the data

Mixed methods-based research, as you’d expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data. Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that model empirically.

In other words, while qualitative and quantitative methods (and the philosophies that underpin them) are completely different, they are not at odds with each other. It’s not a competition of qualitative vs quantitative. On the contrary, they can be used together to develop a high-quality piece of research. Of course, this is easier said than done, so we usually recommend that first-time researchers stick to a single approach, unless the nature of their study truly warrants a mixed-methods approach.

The key takeaway here, and the reason we started by looking at the three options, is that it’s important to understand that each methodological approach has a different purpose – for example, to explore and understand situations (qualitative), to test and measure (quantitative) or to do both. They’re not simply alternative tools for the same job. 

Right – now that we’ve got that out of the way, let’s look at how you can go about choosing the right methodology for your research.


2. How to choose a research methodology

To choose the right research methodology for your dissertation or thesis, you need to consider three important factors. Based on these three factors, you can decide on your overarching approach – qualitative, quantitative or mixed methods. Once you’ve made that decision, you can flesh out the finer details of your methodology, such as the sampling, data collection methods and analysis techniques (we discuss these separately in other posts).

The three factors you need to consider are:

  • The nature of your research aims, objectives and research questions
  • The methodological approaches taken in the existing literature
  • Practicalities and constraints

Let’s take a look at each of these.

Factor #1: The nature of your research

As I mentioned earlier, each type of research (and therefore, research methodology), whether qualitative, quantitative or mixed, has a different purpose and helps solve a different type of question. So, it’s logical that the key deciding factor in terms of which research methodology you adopt is the nature of your research aims, objectives and research questions.

But, what types of research exist?

Broadly speaking, research can fall into one of three categories:

  • Exploratory – getting a better understanding of an issue and potentially developing a theory regarding it
  • Confirmatory – confirming a potential theory or hypothesis by testing it empirically
  • A mix of both – building a potential theory or hypothesis and then testing it

As a rule of thumb, exploratory research tends to adopt a qualitative approach, whereas confirmatory research tends to use quantitative methods. This isn’t set in stone, but it’s a very useful heuristic. Naturally then, research that combines a mix of both, or is seeking to develop a theory from the ground up and then test that theory, would utilise a mixed-methods approach.


Let’s look at an example in action.

If your research aims were to understand the perspectives of war veterans regarding certain political matters, you’d likely adopt a qualitative methodology, making use of interviews to collect data and one or more qualitative data analysis methods to make sense of the data.

If, on the other hand, your research aims involved testing a set of hypotheses regarding the link between political leaning and income levels, you’d likely adopt a quantitative methodology, using numbers-based data from a survey to measure the links between variables and/or constructs.

So, the first (and most important) thing you need to consider when deciding which methodological approach to use for your research project is the nature of your research aims, objectives and research questions. Specifically, you need to assess whether your research leans in an exploratory or confirmatory direction or involves a mix of both.

The importance of achieving solid alignment between these three factors and your methodology can’t be overstated. If they’re misaligned, you’re going to be forcing a square peg into a round hole. In other words, you’ll be using the wrong tool for the job, and your research will become a disjointed mess.

If your research is a mix of both exploratory and confirmatory, but you have a tight word count limit, you may need to consider trimming down the scope a little and focusing on one or the other. One methodology executed well has a far better chance of earning marks than a poorly executed mixed methods approach. So, don’t try to be a hero, unless there is a very strong underpinning logic.


Factor #2: The disciplinary norms

Choosing the right methodology for your research also involves looking at the approaches used by other researchers in the field, and studies with similar research aims and objectives to yours. Oftentimes, within a discipline, there is a common methodological approach (or set of approaches) used in studies. While this doesn’t mean you should follow the herd “just because”, you should at least consider these approaches and evaluate their merit within your context.

A major benefit of reviewing the research methodologies used by similar studies in your field is that you can often piggyback on the data collection techniques that other (more experienced) researchers have developed. For example, if you’re undertaking a quantitative study, you can often find tried and tested survey scales with high Cronbach’s alphas. These are usually included in the appendices of journal articles, so you don’t even have to contact the original authors. By using these, you’ll save a lot of time and ensure that your study stands on the proverbial “shoulders of giants” by using high-quality measurement instruments.
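
Cronbach's alpha itself is straightforward to compute from item-level responses. A sketch with invented Likert-scale data (in practice you would use a statistics package rather than hand-rolling this):

```python
# Hypothetical Likert responses (rows = respondents, columns = scale items)
responses = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
]

k = len(responses[0])  # number of items in the scale

def variance(values):
    """Sample variance (n - 1 denominator)."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

item_variances = [variance([row[i] for row in responses]) for i in range(k)]
total_scores = [sum(row) for row in responses]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / variance(total_scores))
print(f"Cronbach's alpha: {alpha:.2f}")
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency, which is why published scales with high alphas are worth reusing.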

Of course, when reviewing existing literature, keep point #1 front of mind. In other words, your methodology needs to align with your research aims, objectives and questions. Don’t fall into the trap of adopting the methodological “norm” of other studies just because it’s popular. Only adopt that which is relevant to your research.

Factor #3: Practicalities

When choosing a research methodology, there will always be a tension between doing what’s theoretically best (i.e., the most scientifically rigorous research design) and doing what’s practical, given your constraints. This is the nature of doing research, and there are always trade-offs, as with anything else.

But what constraints, you ask?

When you’re evaluating your methodological options, you need to consider the following constraints:

  • Data access
  • Time
  • Money
  • Equipment and software
  • Your knowledge and skills

Let’s look at each of these.

Constraint #1: Data access

The first practical constraint you need to consider is your access to data. If you’re going to be undertaking primary research, you need to think critically about the sample of respondents you realistically have access to. For example, if you plan to use in-person interviews, you need to ask yourself how many people you’ll need to interview, whether they’ll be agreeable to being interviewed, where they’re located, and so on.

If you’re wanting to undertake a quantitative approach using surveys to collect data, you’ll need to consider how many responses you’ll require to achieve statistically significant results. For many statistical tests, a sample of a few hundred respondents is typically needed to develop convincing conclusions.
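
The "few hundred respondents" figure comes from standard sample-size formulas. For example, to estimate a proportion to within plus or minus 5 percentage points at 95% confidence, under the usual normal approximation (all parameter values below are the conventional conservative choices, not from this post):

```python
from math import ceil

z = 1.96       # z-score for a 95% confidence level
p = 0.5        # assumed proportion; 0.5 gives the most conservative (largest) n
margin = 0.05  # desired margin of error: +/- 5 percentage points

# Standard sample-size formula for estimating a single proportion
n = ceil(z**2 * p * (1 - p) / margin**2)
print(f"Required sample size: {n}")
```

Tightening the margin to 3 points, or planning subgroup comparisons, pushes the requirement considerably higher, which is why survey access matters so much at the planning stage.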

So, think carefully about what data you’ll need access to, how much data you’ll need and how you’ll collect it. The last thing you want is to spend a huge amount of time on your research only to find that you can’t get access to the required data.

Constraint #2: Time

The next constraint is time. If you’re undertaking research as part of a PhD, you may have a fairly open-ended time limit, but this is unlikely to be the case for undergrad and Masters-level projects. So, pay attention to your timeline, as the data collection and analysis components of different methodologies have a major impact on time requirements. Also, keep in mind that these stages of the research often take a lot longer than originally anticipated.

Another practical implication of time limits is that it will directly impact which time horizon you can use – i.e. longitudinal vs cross-sectional. For example, if you’ve got a 6-month limit for your entire research project, it’s quite unlikely that you’ll be able to adopt a longitudinal time horizon.

Constraint #3: Money

As with so many things, money is another important constraint you’ll need to consider when deciding on your research methodology. While some research designs will cost near zero to execute, others may require a substantial budget.

Some of the costs that may arise include:

  • Software costs – e.g. survey hosting services, analysis software, etc.
  • Promotion costs – e.g. advertising a survey to attract respondents
  • Incentive costs – e.g. providing a prize or cash payment incentive to attract respondents
  • Equipment rental costs – e.g. recording equipment, lab equipment, etc.
  • Travel costs
  • Food & beverages

These are just a handful of costs that can creep into your research budget. Like most projects, the actual costs tend to be higher than the estimates, so be sure to err on the conservative side and expect the unexpected. It’s critically important that you’re honest with yourself about these costs, or you could end up getting stuck midway through your project because you’ve run out of money.


Constraint #4: Equipment & software

Another practical consideration is the hardware and/or software you’ll need in order to undertake your research. Of course, this variable will depend on the type of data you’re collecting and analysing. For example, you may need lab equipment to analyse substances, or you may need specific analysis software to analyse statistical data. So, be sure to think about what hardware and/or software you’ll need for each potential methodological approach, and whether you have access to these.

Constraint #5: Your knowledge and skillset

The final practical constraint is a big one. Naturally, the research process involves a lot of learning and development along the way, so you will accrue knowledge and skills as you progress. However, when considering your methodological options, you should still consider your current position on the ladder.

Some of the questions you should ask yourself are:

  • Am I more of a “numbers person” or a “words person”?
  • How much do I know about the analysis methods I’ll potentially use (e.g. statistical analysis)?
  • How much do I know about the software and/or hardware that I’ll potentially use?
  • How excited am I to learn new research skills and gain new knowledge?
  • How much time do I have to learn the things I need to learn?

Answering these questions honestly will provide you with another set of criteria against which you can evaluate the research methodology options you’ve shortlisted.

So, as you can see, there is a wide range of practicalities and constraints that you need to take into account when you’re deciding on a research methodology. These practicalities create a tension between the “ideal” methodology and the methodology that you can realistically pull off. This is perfectly normal, and it’s your job to find the option that presents the best set of trade-offs.

Recap: Choosing a methodology

In this post, we’ve discussed how to go about choosing a research methodology. The three major deciding factors we looked at were:

  • The nature of the research: exploratory, confirmatory, or a combination of the two
  • Research area norms
  • Practical constraints: access to data, time, money, equipment and software, and your knowledge and skillset

If you’d like a helping hand with your research methodology, check out our 1-on-1 research coaching service, or book a free consultation with a friendly Grad Coach.


What is Research: Definition, Methods, Types & Examples


The search for knowledge is closely linked to the object of study: the reconstruction of the facts that explain an observed event, which at first sight may be considered a problem. It is very human to seek answers and satisfy our curiosity. Let’s talk about research.

Content Index

  • What is Research?
  • What are the characteristics of research?
  • Comparative analysis chart
  • Qualitative methods
  • Quantitative methods
  • 8 tips for conducting accurate research

Research is the careful consideration of study regarding a particular concern or research problem using scientific methods. According to the American sociologist Earl Robert Babbie, “research is a systematic inquiry to describe, explain, predict, and control the observed phenomenon. It involves inductive and deductive methods.”

Inductive methods analyze an observed event, while deductive methods verify the observed event. Inductive approaches are associated with qualitative research, and deductive methods are more commonly associated with quantitative analysis.

Research is conducted with a purpose to:

  • Identify potential and new customers
  • Understand existing customers
  • Set pragmatic goals
  • Develop productive market strategies
  • Address business challenges
  • Put together a business expansion plan
  • Identify new business opportunities
What are the characteristics of research?

  • Good research follows a systematic approach to capture accurate data. Researchers need to practice ethics and a code of conduct while making observations or drawing conclusions.
  • The analysis is based on logical reasoning and involves both inductive and deductive methods.
  • Real-time data and knowledge are derived from actual observations in natural settings.
  • There is an in-depth analysis of all data collected so that there are no anomalies associated with it.
  • It creates a path for generating new questions. Existing data helps create more research opportunities.
  • It is analytical and uses all available data so that there is no ambiguity in inference.
  • Accuracy is one of the most critical aspects of research. The information must be accurate and correct. For example, laboratories provide a controlled environment to collect data. Accuracy is measured in the instruments used, the calibration of instruments or tools, and the experiment’s final result.

What is the purpose of research?

There are three main purposes:

  • Exploratory: As the name suggests, researchers conduct exploratory studies to explore a group of questions. The answers and analytics may not offer a conclusion to the perceived problem. It is undertaken to handle new problem areas that haven’t been explored before. This exploratory data analysis process lays the foundation for more conclusive data collection and analysis.


  • Descriptive: It focuses on expanding knowledge of current issues through a process of data collection. Descriptive research describes the behavior of a sample population, and only one variable is required to conduct the study. The three primary purposes of descriptive studies are describing, explaining, and validating the findings. An example is a study conducted to determine whether top-level management leaders in the 21st century possess the moral right to receive a considerable sum of money from company profits.


  • Explanatory: Causal research or explanatory research is conducted to understand the impact of specific changes in existing standard procedures. Running experiments is the most popular form. For example, a study that is conducted to understand the effect of rebranding on customer loyalty.

Here is a comparative analysis chart for a better understanding:

                     Exploratory                       Descriptive                       Explanatory
Approach used        Unstructured                      Structured                        Highly structured
Conducted through    Asking questions                  Asking questions                  By using hypotheses
Time                 Early stages of decision making   Later stages of decision making   Later stages of decision making

It begins by asking the right questions and choosing an appropriate method to investigate the problem. After collecting answers to your questions, you can analyze the findings or observations to draw reasonable conclusions.

When it comes to customers and market studies, the more thorough your questions, the better the analysis. You get essential insights into brand perception and product needs by thoroughly collecting customer data through surveys and questionnaires. You can use this data to make smart decisions about your marketing strategies to position your business effectively.

To make sense of your study and get insights faster, it helps to use a research repository as a single source of truth in your organization and manage your research data in one centralized data repository.

Types of research methods and Examples


Research methods are broadly classified as Qualitative and Quantitative.

Both methods have distinctive properties and data collection methods.

Qualitative research is a method that collects data using conversational methods, usually open-ended questions. The responses collected are essentially non-numerical. This method helps a researcher understand what participants think and why they think in a particular way.

Types of qualitative methods include:

  • One-to-one Interview
  • Focus Groups
  • Ethnographic studies
  • Text Analysis

Quantitative methods deal with numbers and measurable forms. They investigate events or data in a systematic way, answering questions about the relationships between measurable variables in order to explain, predict, or control a phenomenon.

Types of quantitative methods include:

  • Survey research
  • Descriptive research
  • Correlational research


Remember, research is only valuable and useful when it is valid, accurate, and reliable. Incorrect results can lead to customer churn and a decrease in sales.

It is essential to ensure that your data is:

  • Valid – founded, logical, rigorous, and impartial.
  • Accurate – free of errors and including required details.
  • Reliable – other people who investigate in the same way can produce similar results.
  • Timely – current and collected within an appropriate time frame.
  • Complete – includes all the data you need to support your business decisions.

8 tips for conducting accurate research

  • Identify the main trends and issues, opportunities, and problems you observe. Write a sentence describing each one.
  • Keep track of the frequency with which each of the main findings appears.
  • Make a list of your findings from the most common to the least common.
  • Evaluate the strengths, weaknesses, opportunities, and threats identified in a SWOT analysis.
  • Prepare conclusions and recommendations about your study.
  • Act on your strategies.
  • Look for gaps in the information, and consider doing additional inquiry if necessary.
  • Plan to review the results and consider efficient methods to analyze and interpret them.

Review your goals before making any conclusions about your study. Consider how the process you followed and the data you gathered help answer your questions, and ask yourself whether what your analysis revealed makes it easier to identify your conclusions and recommendations.


Research Guide: Research Methods


Past presentations

Prof. D. Walwyn: 2017

  • Lecture 1: Introduction
  • Lecture 2: Research Proposal
  • Lecture 3: Research Design
  • Lecture 4: Data Gathering and Analysis
  • Article: Singh & Walwyn

Dr. A. Nyika: 2017

  • Mixed Methods
  • Qualitative Research Tools

Prof. C.M.E. McCrindle: 2017

  • Quantitative Research Tools

Prof. D. Walwyn: 2016

  • Lecture 2: Research Design
  • Lecture 3: Mixed Methods

Prof. J. Burnett: 2009

  • Qualitative Data Collection Methods


Methodology

Research methodology can be understood as a way to systematically solve or answer the research problem: essentially, the study of how research is done in a scientific manner. Through the methodology, we examine the various steps that are generally adopted by a researcher in studying a research problem, and the logic underlying them. The selection of the research method is crucial for what conclusions you can draw about a phenomenon: it affects what you can say about the causes of, and the factors influencing, the phenomenon.

Research methods

Research methods refer to the tools that one uses to do research. These can be qualitative, quantitative, or mixed. Quantitative methods examine numerical data and often require statistical tools to analyse the data collected. This allows for the measurement of variables, and relationships between them can then be established. This type of data can be represented using graphs and tables. Qualitative data is non-numerical and focuses on establishing patterns. Mixed methods combine qualitative and quantitative research methods, and allow for the explanation of unexpected results.


  • Determine what kind of knowledge you are trying to uncover (is it subjective or objective? experimental or interpretive?).
  • Let the literature be your guide: a thorough literature review is the best starting point for choosing your methods, because evaluating previous researchers' efforts can suggest a direction for answering your own research question.
  • Align your chosen methodology with your research questions, aims and objectives (in other words, make sure your research questions and objectives can be answered through your chosen methodology).
  • The authenticity of your research depends upon the validity of the research data, the reliability of the measures taken to amass the data, and the time taken to conduct the analysis, so it is essential to ensure continuity throughout the research process.
  • It is also important to choose a research method which is within the limits of what the researcher can do. Time, money, feasibility, ethics and the ability to measure the phenomenon correctly are examples of issues constraining the research.
  • When confused, ask! Do not be afraid to lean on the expertise of your supervisor, departmental research specialists etc. They are all there to help you.


Recommended Mixed Methods research sources


  • Advances in Mixed Methods Research by Prof J. W. Creswell
  • Last Updated: Jul 2, 2024 7:20 AM
  • URL: https://library.up.ac.za/c.php?g=485435

Pfeiffer Library

Research Methodologies

What are research methods?

Research methods are different from research methodologies because they are the ways in which you will collect the data for your research project.  The best method for your project largely depends on your topic, the type of data you will need, and the people or items from which you will be collecting data.  The sections below cover quantitative, qualitative, and mixed research methods.

Quantitative research methods

  • Closed-ended questionnaires/surveys: These types of questionnaires or surveys are like "multiple choice" tests, where participants must select from a list of premade answers.  According to the content of the question, they must select the one that they agree with the most.  This approach is the simplest form of quantitative research because the data is easy to combine and quantify.
  • Structured interviews: These are a common research method in market research because the data can be quantified.  They are strictly designed for little "wiggle room" in the interview process so that the data will not be skewed.  You can conduct structured interviews in-person, online, or over the phone (Dawson, 2019).

Constructing Questionnaires

When constructing your questions for a survey or questionnaire, there are things you can do to ensure that your questions are accurate and easy to understand (Dawson, 2019):

  • Keep the questions brief and simple.
  • Eliminate any potential bias from your questions.  Make sure that they do not word things in a way that favors one perspective over another.
  • If your topic is very sensitive, you may want to ask indirect questions rather than direct ones.  This prevents participants from being intimidated and becoming unwilling to share their true responses.
  • If you are using a closed-ended question, try to offer every possible answer that a participant could give to that question.
  • Do not ask questions that assume something of the participant.  The question "How often do you exercise?" assumes that the participant exercises (when they may not), so you would want to include a question that asks if they exercise at all before asking them how often.
  • Try to keep the questionnaire as short as possible.  The longer a questionnaire takes, the more likely the participant will not complete it or will be too tired to give truthful answers.
  • Promise confidentiality to your participants at the beginning of the questionnaire.

Quantitative Research Measures

When you are considering a quantitative approach to your research, you need to identify what types of measures you will use in your study.  This will determine what type of numbers you will be using to collect your data.  There are four levels of measurement:

  • Nominal: These are numbers where the order does not matter.  They aim to identify separate information.  One example is collecting zip codes from research participants.  The order of the numbers does not matter, but the series of numbers in each zip code indicates different information (Adamson and Prion, 2013).
  • Ordinal: Also known as rankings because the order of these numbers matters.  This is when items are given a specific rank according to specific criteria.  A common example of ordinal measurement is a ranking-based questionnaire, where participants are asked to rank items from least favorite to most favorite.  Another common example is a pain scale, where a patient is asked to rank their pain on a scale from 1 to 10 (Adamson and Prion, 2013).
  • Interval: This is when the data are ordered and the distance between the numbers matters to the researcher (Adamson and Prion, 2013).  The distance between each number is the same, but there is no true zero point.  An example of interval data is test grades.
  • Ratio: This is when the data are ordered and have a consistent distance between numbers, but also have a "zero point."  This means that there could be a measurement of zero of whatever you are measuring in your study (Adamson and Prion, 2013).  An example of ratio data is the height of something, because the "zero point" remains constant in all measurements; the height of something could also be zero.
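As an illustrative sketch (the example data here are ours, not from the cited sources), the level of measurement determines which summary statistics are meaningful:

```python
import statistics

# Ordinal: pain ratings on a 1-10 scale -- order matters, so the median is meaningful
pain_ratings = [2, 3, 3, 5, 7, 9]
print(statistics.median(pain_ratings))  # 4.0

# Interval: temperatures in Celsius -- differences are meaningful, ratios are not
# (20 C is not "twice as hot" as 10 C, because 0 C is not a true zero)
temperatures = [10.0, 15.0, 20.0]
print(statistics.mean(temperatures))  # 15.0

# Ratio: heights in cm -- a true zero point makes ratios meaningful
heights = [150.0, 180.0]
print(heights[1] / heights[0])  # 1.2, i.e. one person is 20% taller
```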

Qualitative research methods

Focus Groups

This is when a select group of people gather to talk about a particular topic.  They can also be called discussion groups or group interviews (Dawson, 2019).  They are usually led by a moderator who helps guide the discussion and asks certain questions.  It is critical that the moderator allows everyone in the group a chance to speak so that no one dominates the discussion.  The data gathered from focus groups tend to be thoughts, opinions, and perspectives about an issue.

Advantages of Focus Groups

  • Only requires one meeting to get different types of responses.
  • Less researcher bias due to participants being able to speak openly.
  • Helps participants overcome insecurities or fears about a topic.
  • The researcher can also consider the impact of participant interaction.

Disadvantages of Focus Groups

  • Participants may feel uncomfortable speaking in front of an audience, especially if the topic is sensitive or controversial.
  • Since participation is voluntary, not every participant may contribute equally to the discussion.
  • Participants may impact what others say or think.
  • A researcher may feel intimidated by running a focus group on their own.
  • A researcher may need extra funds/resources to provide a safe space to host the focus group.
  • Because the data is collective, it may be difficult to determine a participant's individual thoughts about the research topic.

Observation

There are two ways to conduct research observations:

  • Direct Observation: The researcher observes a participant in an environment.  The researcher often takes notes or uses technology to gather data, such as a voice recorder or video camera.  The researcher does not interact or interfere with the participants.  This approach is often used in psychology and health studies (Dawson, 2019).
  • Participant Observation:  The researcher interacts directly with the participants to get a better understanding of the research topic.  This is a common research method when trying to understand another culture or community.  It is important to decide if you will conduct a covert (participants do not know they are part of the research) or overt (participants know the researcher is observing them) observation because it can be unethical in some situations (Dawson, 2019).

Open-Ended Questionnaires

These types of questionnaires are the opposite of "multiple choice" questionnaires because the answer boxes are left open for the participant to complete.  This means that participants can write short or extended answers to the questions.  Upon gathering the responses, researchers will often "quantify" the data by organizing the responses into different categories.  This can be time-consuming because the researcher needs to read all responses carefully.
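A minimal sketch of that "quantifying" step, assuming a simple keyword-based coding scheme (the categories, keywords, and responses below are hypothetical, purely for illustration; real coding is usually done by careful reading, not keywords alone):

```python
from collections import Counter

# Hypothetical open-ended responses to "Why did you enroll in this course?"
responses = [
    "I wanted a better job",
    "career change",
    "I love the subject",
    "interested in the topic",
    "my employer required it",
]

# Assumed coding scheme: map each category to keywords that signal it
codes = {
    "career": ["job", "career", "employer"],
    "interest": ["love", "interested"],
}

def code_response(text: str) -> str:
    """Assign the first category whose keywords appear in the response."""
    lowered = text.lower()
    for category, keywords in codes.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "other"

counts = Counter(code_response(r) for r in responses)
print(counts)  # Counter({'career': 3, 'interest': 2})
```

Once the responses are coded into categories like this, the counts can be analyzed with the same techniques used for closed-ended data.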

Semi-structured Interviews

This is the most common type of interview where researchers aim to get specific information so they can compare it to other interview data.  This requires asking the same questions for each interview, but keeping their responses flexible.  This means including follow-up questions if a subject answers a certain way.  Interview schedules are commonly used to aid the interviewers, which list topics or questions that will be discussed at each interview (Dawson, 2019).

Theoretical Analysis

Often used for nonhuman research, theoretical analysis is a qualitative approach where the researcher applies a theoretical framework to analyze something about their topic.  A theoretical framework gives the researcher a specific "lens" through which to view the topic and think about it critically.  It also serves as context to guide the entire study.  This is a popular research method for analyzing works of literature, films, and other forms of media.  You can implement more than one theoretical framework with this method, as many theories complement one another.

Common theoretical frameworks for qualitative research are (Grant and Osanloo, 2014):

  • Behavioral theory
  • Change theory
  • Cognitive theory
  • Content analysis
  • Cross-sectional analysis
  • Developmental theory
  • Feminist theory
  • Gender theory
  • Marxist theory
  • Queer theory
  • Systems theory
  • Transformational theory

Unstructured Interviews

These are in-depth interviews where the researcher tries to understand an interviewee's perspective on a situation or issue.  They are sometimes called life history interviews.  It is important not to bombard the interviewee with too many questions so they can freely disclose their thoughts (Dawson, 2019).

Mixed method approach

  • Open-ended and closed-ended questionnaires: This approach means implementing elements of both questionnaire types into your data collection.  Participants may answer some questions with premade answers and write their own answers to other questions.  The advantage of this method is that you benefit from both types of data collection and get a broader understanding of your participants.  However, you must think carefully about how you will analyze this data to arrive at a conclusion.

Other mixed method approaches that incorporate quantitative and qualitative research methods depend heavily on the research topic.  It is strongly recommended that you collaborate with your academic advisor before finalizing a mixed method approach.

Selecting the best research method

How do you determine which research method would be best for your proposal?  This heavily depends on your research objective.  According to Dawson (2019), there are several questions to ask yourself when determining the best research method for your project:

  • Are you good with numbers and mathematics?
  • Would you be interested in conducting interviews with human subjects?
  • Would you enjoy creating a questionnaire for participants to complete?
  • Do you prefer written communication or face-to-face interaction?
  • What skills or experiences do you have that might help you with your research?  Do you have any experiences from past research projects that can help with this one?
  • How much time do you have to complete the research?  Some methods take longer to collect data than others.
  • What is your budget?  Do you have adequate funding to conduct the research in the method you want?
  • How much data do you need?  Some research topics need only a small amount of data while others may need significantly larger amounts.
  • What is the purpose of your research? This can provide a good indicator as to what research method will be most appropriate.
  • Last Updated: Aug 2, 2022 2:36 PM
  • URL: https://library.tiffin.edu/researchmethodologies

Introduction to Research Methods in Psychology

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Emily is a board-certified science editor who has worked with top digital publishing brands like Voices for Biodiversity, Study.com, GoodTherapy, Vox, and Verywell.


There are several different research methods in psychology , each of which can help researchers learn more about the way people think, feel, and behave. If you're a psychology student or just want to know the types of research in psychology, here are the main ones as well as how they work.

Three Main Types of Research in Psychology


Psychology research can usually be classified as one of three major types.

1. Causal or Experimental Research

When most people think of scientific experimentation, research on cause and effect is most often brought to mind. Experiments on causal relationships investigate the effect of one or more variables on one or more outcome variables. This type of research also determines if one variable causes another variable to occur or change.

An example of this type of research in psychology would be changing the length of a specific mental health treatment and measuring the effect on study participants.

2. Descriptive Research

Descriptive research seeks to depict what already exists in a group or population. Types of psychology research utilizing this method include:

  • Case studies
  • Observational studies

An example of this psychology research method would be an opinion poll to determine which presidential candidate people plan to vote for in the next election. Descriptive studies don't try to measure the effect of a variable; they seek only to describe it.

3. Relational or Correlational Research

A study that investigates the connection between two or more variables is considered relational research. The variables compared are generally already present in the group or population.

For example, a study that looks at the proportion of males and females that would purchase either a classical CD or a jazz CD would be studying the relationship between gender and music preference.

Theory vs. Hypothesis in Psychology Research

People often confuse the terms theory and hypothesis or are not quite sure of the distinctions between the two concepts. If you're a psychology student, it's essential to understand what each term means, how they differ, and how they're used in psychology research.

A theory is a well-established principle that has been developed to explain some aspect of the natural world. A theory arises from repeated observation and testing and incorporates facts, laws, predictions, and tested hypotheses that are widely accepted.

A hypothesis is a specific, testable prediction about what you expect to happen in your study. For example, an experiment designed to look at the relationship between study habits and test anxiety might have a hypothesis that states, "We predict that students with better study habits will suffer less test anxiety." Unless your study is exploratory in nature, your hypothesis should always explain what you expect to happen during the course of your experiment or research.

While the terms are sometimes used interchangeably in everyday use, the difference between a theory and a hypothesis is important when studying experimental design.

Some other important distinctions to note include:

  • A theory predicts events in general terms, while a hypothesis makes a specific prediction about a specified set of circumstances.
  • A theory has been extensively tested and is generally accepted, while a hypothesis is a speculative guess that has yet to be tested.

The Effect of Time on Research Methods in Psychology

There are two types of time dimensions that can be used in designing a research study:

  • Cross-sectional research takes place at a single point in time. All tests, measures, or variables are administered to participants on one occasion. This type of research seeks to gather data on present conditions instead of looking at the effects of a variable over a period of time.
  • Longitudinal research is a study that takes place over a period of time. Data is first collected at the beginning of the study, and may then be gathered repeatedly throughout the length of the study. Some longitudinal studies may occur over a short period of time, such as a few days, while others may take place over a period of months, years, or even decades.

The effects of aging are often investigated using longitudinal research.

Causal Relationships Between Psychology Research Variables

What do we mean when we talk about a “relationship” between variables? In psychological research, we're referring to a connection between two or more factors that we can measure or systematically vary.

One of the most important distinctions to make when discussing the relationship between variables is the meaning of causation.

A causal relationship exists when one variable causes a change in another variable. These types of relationships are investigated through experimental research to determine whether changes in one variable actually result in changes in another.

Correlational Relationships Between Psychology Research Variables

A correlation is the measurement of the relationship between two variables. These variables already occur in the group or population and are not controlled by the experimenter.

  • A positive correlation is a direct relationship where, as the amount of one variable increases, the amount of a second variable also increases.
  • In a negative correlation, as the amount of one variable goes up, the levels of another variable go down.

In both types of correlation, there is no evidence or proof that changes in one variable cause changes in the other variable. A correlation simply indicates that there is a relationship between the two variables.

The most important concept is that correlation does not equal causation. Many popular media sources make the mistake of assuming that simply because two variables are related, a causal relationship exists.


By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461),
  • Daeria O. Lawson,
  • Livia Puljak,
  • David B. Allison &
  • Lehana Thabane

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

Peer Review reports

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
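The simple versus stratified sampling strategies described above can be sketched in a few lines of Python. The sampling frame, group labels, and sample sizes below are invented for illustration:

```python
import random

# Hypothetical sampling frame: article IDs labelled by review type.
# Non-Cochrane reviews vastly outnumber Cochrane reviews, so a simple
# random sample would under-represent the smaller group.
frame = ([("Cochrane", f"id{i}") for i in range(50)]
         + [("non-Cochrane", f"id{i}") for i in range(50, 1000)])

rng = random.Random(2020)  # fixed seed so the draw is reproducible

# Simple random sample of 100 research reports.
simple = rng.sample(frame, k=100)

# Stratified sample: 50 reports drawn from each group, as in a
# Cochrane vs. non-Cochrane comparison.
cochrane = [r for r in frame if r[0] == "Cochrane"]
other = [r for r in frame if r[0] == "non-Cochrane"]
stratified = rng.sample(cochrane, k=50) + rng.sample(other, k=50)

print(sum(1 for g, _ in simple if g == "Cochrane"), "Cochrane reports in the simple sample")
print(sum(1 for g, _ in stratified if g == "Cochrane"), "Cochrane reports in the stratified sample")
```

The stratified draw guarantees equal group sizes for the comparison, at the cost of no longer giving every report in the frame the same selection probability.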

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged, but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there were 15 protocols for methodological reviews as of 21 July 2020. This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, since few journals publish study protocols and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
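As an illustration of a confidence-interval approach, the Python sketch below computes the number of articles needed to estimate a proportion within a given margin of error using the standard normal approximation. The planning values are invented and are not taken from El Dib et al.:

```python
import math

def n_for_proportion(p, margin, z=1.96):
    """Articles needed to estimate a proportion p within +/- margin
    at 95% confidence (z = 1.96), using the normal approximation."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Hypothetical planning values: we expect roughly 30% of trials to
# report the item of interest and want the estimate within +/- 5 points.
n = n_for_proportion(p=0.30, margin=0.05)
print(f"required sample size: {n} articles")  # 323
```

When no planning value for p is available, p = 0.5 gives the most conservative (largest) sample size.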

Q: What should I call my study?

A: Other terms that have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or used “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of these meanings of “systematic” may be true for methodological studies, so the term could be misleading. “Meta-epidemiological study” is ideal for indexing but not very informative, as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is in line with the approaches used in many methodological studies [ 9 ] and is indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of these scenarios.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [ 45 ].
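One simple way to see why ignoring clustering yields unduly narrow confidence intervals is the design effect, 1 + (m - 1) * ICC, for clusters of size m with intra-cluster correlation ICC. The scenario and all numbers below are invented for illustration:

```python
import math

# Hypothetical scenario: 300 articles sampled from 30 journals
# (10 articles per journal), estimating the proportion that adhere
# to a reporting guideline. Articles within a journal are correlated.
n_articles, per_journal, icc = 300, 10, 0.10
p = 0.60  # observed adherence proportion (illustrative)

# Naive standard error, treating all 300 articles as independent.
se_naive = math.sqrt(p * (1 - p) / n_articles)

# The design effect inflates the variance when clustering is present.
deff = 1 + (per_journal - 1) * icc
se_clustered = se_naive * math.sqrt(deff)

print(f"naive SE:         {se_naive:.4f}")
print(f"cluster-aware SE: {se_clustered:.4f} (design effect = {deff:.2f})")
```

Even a modest intra-cluster correlation of 0.10 nearly doubles the variance here, so confidence intervals computed under the naive analysis would be markedly too narrow.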

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances as machine learning and natural language processing technologies are developed to support researchers with screening and data extraction [ 47 , 48 ]. Even so, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others do not [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, the requirements of scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to extrapolate to all journals after analyzing only high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
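To make the confounding problem concrete, the sketch below uses entirely hypothetical counts (not data from any of the cited studies) to show how stratifying on a confounder, here journal endorsement of a reporting guideline, can reverse a crude association between funding source and complete reporting:

```python
# Hypothetical counts of articles with complete reporting, by funding source,
# stratified by whether the journal endorses the reporting guideline.
# Each entry: (number with complete reporting, total number of articles).
strata = {
    "endorsing": {"industry": (45, 50), "non_industry": (80, 100)},
    "non_endorsing": {"industry": (10, 50), "non_industry": (5, 25)},
}

def proportion(complete, total):
    return complete / total

# Crude (pooled) comparison ignores the confounder.
ind = [strata[s]["industry"] for s in strata]
non = [strata[s]["non_industry"] for s in strata]
crude_ind = sum(c for c, _ in ind) / sum(t for _, t in ind)
crude_non = sum(c for c, _ in non) / sum(t for _, t in non)

# Stratum-specific comparisons condition on the confounder.
for name, s in strata.items():
    diff = proportion(*s["industry"]) - proportion(*s["non_industry"])
    print(f"{name}: difference = {diff:+.2f}")
print(f"crude difference = {crude_ind - crude_non:+.2f}")
```

With these invented counts the crude comparison suggests industry-funded articles are reported worse (−0.13), while within each endorsement stratum they are reported equally well or better, illustrating why restriction, matching, or adjustment is needed.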

With regard to external validity, researchers conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be made explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. Investigators must also ensure that their sample truly represents the target population, either by (a) conducting a comprehensive and exhaustive search, or (b) using an appropriate, justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact depends on the number of journals the investigators are willing to include, the year of impact, and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently to methodological studies, especially since in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

To inform discussions about methodological studies and the development of guidance on what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may focus on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction matters is a randomized trial with no blinding. Such a study (depending on the nature of the intervention) would be at risk of performance bias; however, if the authors report that their study was not blinded, they have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Biases related to the choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may assess quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate the reporting of features of interest that are not part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and to examine the factors associated with the choice of methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the types of methodological studies described above, other types may exist that are not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
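The descriptive summaries named above, counts (percent), means (SD), and medians (IQR), can be sketched with the standard library alone; the data below are invented for illustration, not taken from any cited study:

```python
import statistics

# Hypothetical data: whether each of 20 reviews reported research
# recommendations, and the number of included trials per review.
reported = [True] * 13 + [False] * 7
n_trials = [4, 7, 9, 12, 3, 5, 8, 10, 6, 11, 2, 14, 5, 7, 9, 4, 6, 13, 8, 10]

# Count (percent)
count = sum(reported)
percent = 100 * count / len(reported)

# Mean (standard deviation)
mean = statistics.mean(n_trials)
sd = statistics.stdev(n_trials)

# Median (interquartile range), from the quartile cut points
q1, q2, q3 = statistics.quantiles(n_trials, n=4)

print(f"Reported recommendations: {count}/{len(reported)} ({percent:.0f}%)")
print(f"Trials per review: mean {mean:.1f} (SD {sd:.1f}), median {q2} (IQR {q1}-{q3})")
```

Reporting counts with percentages and pairing each location measure with its spread (mean with SD, median with IQR) matches the conventions described in the text.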

Methodological studies that are analytical

Some methodological studies are analytical, wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease” [ 89 ]. In the case of methodological studies, all of these investigations are possible. For example, Kosa et al. investigated the association between agreement in the primary outcome from trial registry to published manuscript and study covariates, and found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews reporting positive results are equal [ 90 ].
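A comparison of two proportions like the one described above is, in effect, a two-proportion z-test. The sketch below uses invented counts (not Tricco et al.'s data) to show the mechanics:

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test of H0: p1 == p2, using the pooled normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 60/100 non-Cochrane vs 35/100 Cochrane reviews
# with positive conclusion statements.
z, p = two_proportion_z(60, 100, 35, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value would lead to rejecting the null hypothesis that the two proportions are equal; with large samples or many comparisons, effect sizes should be reported alongside p-values.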

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a certain topic. Systematic sampling can also be used when random sampling is challenging to implement.
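As a minimal sketch of the random and systematic sampling strategies above, assuming a hypothetical sampling frame of record identifiers:

```python
import random

# Hypothetical sampling frame: 500 record identifiers.
records = [f"REC{i:05d}" for i in range(1, 501)]

rng = random.Random(42)  # fixed seed so the sample is reproducible

# Simple random sample of 50 reports.
random_sample = rng.sample(records, k=50)

# Systematic sample: every k-th record after a random start within the
# first interval, which also yields 50 reports.
k = len(records) // 50  # sampling interval = 10
start = rng.randrange(k)
systematic_sample = records[start::k]

print(len(random_sample), len(systematic_sample))
```

Fixing the seed makes the selection reproducible, which supports the transparency and reproducibility goals discussed earlier; the choice between random and systematic sampling should be stated and justified in the protocol.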

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. the full manuscript of a study, or the abstract portion of a study) as the unit of analysis, in which case inferences are made at the study level. Both published and unpublished research-related reports can be studied, including articles, conference abstracts, registry entries, etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Figure 2. A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

References

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M, editor. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Download references

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7

Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7

Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288


2.2 Research Methods

Learning Objectives

By the end of this section, you should be able to:

  • Recall the six steps of the scientific method.
  • Differentiate between four kinds of research methods: surveys, field research, experiments, and secondary data analysis.
  • Explain the appropriateness of specific research approaches for specific topics.

Sociologists examine the social world, see a problem or interesting pattern, and set out to study it. They use research methods to design a study. Planning the research design is a key step in any sociological study. Sociologists generally choose from widely used methods of social investigation: primary source data collection, such as surveys, participant observation, ethnography, case studies, unobtrusive observation, and experiments, or secondary data analysis, the use of existing sources. Every research method comes with pluses and minuses, and the topic of study strongly influences which method or methods are put to use. When you are conducting research, think about the best way to gather or obtain knowledge about your topic, and think of yourself as an architect: an architect needs a blueprint to build a house, and as a sociologist your blueprint is your research design, including your data collection method.

When entering a particular social environment, a researcher must be careful. There are times to remain anonymous and times to be overt. There are times to conduct interviews and times to simply observe. Some participants need to be thoroughly informed; others should not know they are being observed. A researcher wouldn’t stroll into a crime-ridden neighborhood at midnight, calling out, “Any gang members around?”

Making sociologists’ presence invisible is not always realistic for other reasons. That option is not available to a researcher studying prison behaviors, early education, or the Ku Klux Klan. Researchers can’t just stroll into prisons, kindergarten classrooms, or Klan meetings and unobtrusively observe behaviors without attracting attention. In situations like these, other methods are needed. Researchers choose methods that best suit their study topics, protect research participants or subjects, and that fit with their overall approaches to research.

As a research method, a survey collects data from subjects who respond to a series of questions about behaviors and opinions, often in the form of a questionnaire or an interview. The survey is one of the most widely used scientific research methods. The standard survey format allows individuals a level of anonymity in which they can express personal ideas.

At some point, most people in the United States respond to some type of survey. The 2020 U.S. Census is an excellent example of a large-scale survey intended to gather sociological data. Since 1790, the United States has conducted a census to gather demographic data about its residents; the first census consisted of six questions. Currently, the Census is received by residents of the United States and five U.S. territories and consists of twelve questions.

Not all surveys are considered sociological research, however, and many surveys people commonly encounter focus on identifying marketing needs and strategies rather than testing a hypothesis or contributing to social science knowledge. Questions such as, “How many hot dogs do you eat in a month?” or “Were the staff helpful?” are not usually designed as scientific research. The Nielsen Ratings determine the popularity of television programming through scientific market research. However, polls conducted by television programs such as American Idol or So You Think You Can Dance cannot be generalized, because they are administered to an unrepresentative population, a specific show’s audience. You might receive polls through your cell phones or emails, from grocery stores, restaurants, and retail stores. They often provide you incentives for completing the survey.

Sociologists conduct surveys under controlled conditions for specific purposes. Surveys gather different types of information from people. While surveys are not great at capturing the ways people really behave in social situations, they are a great method for discovering how people feel, think, and act—or at least how they say they feel, think, and act. Surveys can track preferences for presidential candidates or reported individual behaviors (such as sleeping, driving, or texting habits) or information such as employment status, income, and education levels.

A survey targets a specific population, people who are the focus of a study, such as college athletes, international students, or teenagers living with type 1 (juvenile-onset) diabetes. Most researchers choose to survey a small sector of the population, or a sample, a manageable number of subjects who represent a larger population. The success of a study depends on how well a population is represented by the sample. In a random sample, every person in a population has the same chance of being chosen for the study. As a result, a Gallup Poll, if conducted as a nationwide random sampling, should be able to provide an accurate estimate of public opinion whether it contacts 2,000 or 10,000 people.
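
The defining property of a random sample, that every member of the population has an equal chance of selection, can be sketched in a few lines. The roster below is hypothetical, and this is only an illustration of simple random sampling, not of how Gallup actually draws its samples:

```python
import random

# Hypothetical population: a numbered roster of 10,000 people.
population = list(range(10_000))

random.seed(42)  # fixed seed so the draw below is reproducible

# A simple random sample of 2,000 drawn without replacement:
# every person on the roster has the same chance of being chosen.
sample = random.sample(population, k=2_000)

print(len(sample), len(set(sample)))  # 2,000 people, no one drawn twice
```

Because the draw is random rather than self-selected (as in an American Idol poll), the sample's makeup tends to mirror the population's.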

After selecting subjects, the researcher develops a specific plan to ask questions and record responses. It is important to inform subjects of the nature and purpose of the survey up front. If they agree to participate, researchers thank subjects and offer them a chance to see the results of the study if they are interested. The researcher presents the subjects with an instrument, which is a means of gathering the information.

A common instrument is a questionnaire. Subjects often answer a series of closed-ended questions. The researcher might ask yes-or-no or multiple-choice questions, allowing subjects to choose possible responses to each question. This kind of questionnaire collects quantitative data—data in numerical form that can be counted and statistically analyzed. Just count up the number of “yes” and “no” responses or correct answers, and chart them into percentages.
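
Counting up closed-ended responses and charting them into percentages is simple enough to sketch; the answers below are made up for illustration:

```python
from collections import Counter

# Hypothetical responses to one yes-or-no survey question.
responses = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]

counts = Counter(responses)  # tally each answer: 5 "yes", 3 "no"
percentages = {answer: 100 * n / len(responses) for answer, n in counts.items()}

print(percentages)  # {'yes': 62.5, 'no': 37.5}
```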

Questionnaires can also ask more complex questions with more complex answers—beyond “yes,” “no,” or checkbox options. These types of inquiries use open-ended questions that require short essay responses. Participants willing to take the time to write those answers might convey personal religious beliefs, political views, goals, or morals. An open-ended question might ask, “How do you plan to use your college education?” The answers are subjective and vary from person to person.

Some topics, such as internal thought processes, are impossible to observe directly and are difficult to discuss honestly in a public forum. People are more likely to share honest answers if they can respond to questions anonymously. This type of personal explanation is qualitative data—conveyed through words. Qualitative information is harder to organize and tabulate. The researcher will end up with a wide range of responses, some of which may be surprising. The benefit of written opinions, though, is the wealth of in-depth material that they provide.

An interview is a one-on-one conversation between the researcher and the subject, and it is a way of conducting surveys on a topic. However, participants are free to respond as they wish, without being limited by predetermined choices. In the back-and-forth conversation of an interview, a researcher can ask for clarification, spend more time on a subtopic, or ask additional questions. In an interview, a subject will ideally feel free to open up and answer questions that are often complex. There are no right or wrong answers. The subject might not even know how to answer the questions honestly.

Questions such as “How does society’s view of alcohol consumption influence your decision whether or not to take your first sip of alcohol?” or “Did you feel that the divorce of your parents would put a social stigma on your family?” involve so many factors that the answers are difficult to categorize. A researcher needs to avoid steering or prompting the subject to respond in a specific way; otherwise, the results will prove to be unreliable. The researcher will also benefit from gaining a subject’s trust, from empathizing or commiserating with a subject, and from listening without judgment.

Surveys often collect both quantitative and qualitative data. For example, a researcher interviewing people who are incarcerated might receive quantitative data, such as demographics (race, age, sex), that can be analyzed statistically; the researcher might discover that 20 percent of incarcerated people are above the age of 50. The researcher might also collect qualitative data, such as why people take advantage of educational opportunities during their sentence and other explanatory information.

The survey can be carried out online, over the phone, by mail, or face-to-face. When researchers collect data outside a laboratory, library, or workplace setting, they are conducting field research, which is our next topic.

Field Research

The work of sociology rarely happens in limited, confined spaces. Rather, sociologists go out into the world. They meet subjects where they live, work, and play. Field research refers to gathering primary data from a natural environment. To conduct field research, the sociologist must be willing to step into new environments and observe, participate, or experience those worlds. In field work, the sociologists, rather than the subjects, are the ones out of their element.

The researcher interacts with or observes people and gathers data along the way. The key point in field research is that it takes place in the subject’s natural environment, whether it’s a coffee shop or tribal village, a homeless shelter or the DMV, a hospital, airport, mall, or beach resort.

While field research often begins in a specific setting, the study’s purpose is to observe specific behaviors in that setting. Field work is optimal for observing how people think and behave. It seeks to understand why they behave that way. However, researchers may struggle to narrow down cause and effect when there are so many variables floating around in a natural environment. And while field research looks for correlation, its small sample size does not allow for establishing a causal relationship between two variables. Indeed, much of the data gathered in sociology do not identify a cause and effect but a correlation.

Sociology in the Real World

Beyoncé and Lady Gaga as Sociological Subjects

Sociologists have studied Lady Gaga and Beyoncé and their impact on music, movies, social media, fan participation, and social equality. In their studies, researchers have used several research methods including secondary analysis, participant observation, and surveys from concert participants.

In their study, Click, Lee & Holiday (2013) interviewed 45 Lady Gaga fans who utilized social media to communicate with the artist. These fans viewed Lady Gaga as a mirror of themselves and a source of inspiration. Like her, they embrace not being a part of mainstream culture. Many of Lady Gaga’s fans are members of the LGBTQ community. They see the song “Born This Way” as a rallying cry and answer her calls for “Paws Up” with a physical expression of solidarity—outstretched arms and fingers bent and curled to resemble monster claws.

Sascha Buchanan (2019) made use of participant observation to study the relationship between two fan groups, that of Beyoncé and that of Rihanna. She observed award shows sponsored by iHeartRadio, MTV EMA, and BET that pit one group against another as they competed for Best Fan Army, Biggest Fans, and FANdemonium. Buchanan argues that the media thus sustains a myth of rivalry between the two most commercially successful Black women vocal artists.

Participant Observation

In 2000, a comic writer named Rodney Rothman wanted an insider’s view of white-collar work. He slipped into the sterile, high-rise offices of a New York “dot com” agency. Every day for two weeks, he pretended to work there. His main purpose was simply to see whether anyone would notice him or challenge his presence. No one did. The receptionist greeted him. The employees smiled and said good morning. Rothman was accepted as part of the team. He even went so far as to claim a desk, inform the receptionist of his whereabouts, and attend a meeting. He published an article about his experience in The New Yorker called “My Fake Job” (2000). Later, he was discredited for allegedly fabricating some details of the story and The New Yorker issued an apology. However, Rothman’s entertaining article still offered fascinating descriptions of the inside workings of a “dot com” company and exemplified the lengths to which a writer, or a sociologist, will go to uncover material.

Rothman had conducted a form of study called participant observation, in which researchers join people and participate in a group’s routine activities for the purpose of observing them within that context. This method lets researchers experience a specific aspect of social life. A researcher might go to great lengths to get a firsthand look into a trend, institution, or behavior. A researcher might work as a waitress in a diner, experience homelessness for several weeks, or ride along with police officers as they patrol their regular beat. Often, these researchers try to blend in seamlessly with the population they study, and they may not disclose their true identity or purpose if they feel it would compromise the results of their research.

At the beginning of a field study, researchers might have a question: “What really goes on in the kitchen of the most popular diner on campus?” or “What is it like to be homeless?” Participant observation is a useful method if the researcher wants to explore a certain environment from the inside.

Field researchers simply want to observe and learn. In such a setting, the researcher will be alert and open minded to whatever happens, recording all observations accurately. Soon, as patterns emerge, questions will become more specific, observations will lead to hypotheses, and hypotheses will guide the researcher in analyzing data and generating results.

In a study of small towns in the United States conducted by sociological researchers Robert S. Lynd and Helen Merrell Lynd, the team altered their purpose as they gathered data. They initially planned to focus their study on the role of religion in U.S. towns. As they gathered observations, they realized that the effect of industrialization and urbanization was the more relevant topic of this social group. The Lynds did not change their methods, but they revised the purpose of their study.

This shaped the structure of Middletown: A Study in Modern American Culture, their published results (Lynd & Lynd, 1929).

The Lynds were upfront about their mission. The townspeople of Muncie, Indiana, knew why the researchers were in their midst. But some sociologists prefer not to alert people to their presence. The main advantage of covert participant observation is that it allows the researcher access to authentic, natural behaviors of a group’s members. The challenge, however, is gaining access to a setting without disrupting the pattern of others’ behavior. Becoming an inside member of a group, organization, or subculture takes time and effort. Researchers must pretend to be something they are not. The process could involve role playing, making contacts, networking, or applying for a job.

Once inside a group, some researchers spend months or even years pretending to be one of the people they are observing. However, as observers, they cannot get too involved. They must keep their purpose in mind and apply the sociological perspective. That way, they illuminate social patterns that are often unrecognized. Because information gathered during participant observation is mostly qualitative, rather than quantitative, the end results are often descriptive or interpretive. The researcher might present findings in an article or book and describe what he or she witnessed and experienced.

This type of research is what journalist Barbara Ehrenreich conducted for her book Nickel and Dimed. One day over lunch with her editor, Ehrenreich mentioned an idea. How can people exist on minimum-wage work? How do low-income workers get by? she wondered. Someone should do a study. To her surprise, her editor responded, Why don’t you do it?

That’s how Ehrenreich found herself joining the ranks of the working class. For several months, she left her comfortable home and lived and worked among people who lacked, for the most part, higher education and marketable job skills. Undercover, she applied for and worked minimum wage jobs as a waitress, a cleaning woman, a nursing home aide, and a retail chain employee. During her participant observation, she used only her income from those jobs to pay for food, clothing, transportation, and shelter.

She discovered the obvious, that it’s almost impossible to get by on minimum wage work. She also experienced and observed attitudes many middle and upper-class people never think about. She witnessed firsthand the treatment of working class employees. She saw the extreme measures people take to make ends meet and to survive. She described fellow employees who held two or three jobs, worked seven days a week, lived in cars, could not pay to treat chronic health conditions, got randomly fired, submitted to drug tests, and moved in and out of homeless shelters. She brought aspects of that life to light, describing difficult working conditions and the poor treatment that low-wage workers suffer.

The book she wrote upon her return to her real life as a well-paid writer has been widely read and used in many college classrooms.

Ethnography

Ethnography is the immersion of the researcher in the natural setting of an entire social community to observe and experience their everyday life and culture. The heart of an ethnographic study focuses on how subjects view their own social standing and how they understand themselves in relation to a social group.

An ethnographic study might observe, for example, a small U.S. fishing town, an Inuit community, a village in Thailand, a Buddhist monastery, a private boarding school, or an amusement park. These places all have borders. People live, work, study, or vacation within those borders. People are there for a certain reason and therefore behave in certain ways and respect certain cultural norms. An ethnographer would commit to spending a determined amount of time studying every aspect of the chosen place, taking in as much as possible.

A sociologist studying a tribe in the Amazon might watch the way villagers go about their daily lives and then write a paper about it. To observe a spiritual retreat center, an ethnographer might sign up for a retreat and attend as a guest for an extended stay, observe and record data, and collate the material into results.

Institutional Ethnography

Institutional ethnography is an extension of basic ethnographic research principles that focuses intentionally on everyday concrete social relationships. Developed by Canadian sociologist Dorothy E. Smith (1990), institutional ethnography is often considered a feminist-inspired approach to social analysis and primarily considers women’s experiences within male-dominated societies and power structures. Smith’s work is seen to challenge sociology’s exclusion of women, both academically and in the study of women’s lives (Fenstermaker, n.d.).

Historically, social science research tended to objectify women and ignore their experiences except as viewed from the male perspective. Modern feminists note that describing women, and other marginalized groups, as subordinates helps those in authority maintain their own dominant positions (Social Sciences and Humanities Research Council of Canada, n.d.). Smith’s three major works explored what she called “the conceptual practices of power” and are still considered seminal works in feminist theory and ethnography (Fenstermaker, n.d.).

Sociological Research

The Making of Middletown: A Study in Modern U.S. Culture

In 1924, a young married couple named Robert and Helen Lynd undertook an unprecedented ethnography: to apply sociological methods to the study of one U.S. city in order to discover what “ordinary” people in the United States did and believed. Choosing Muncie, Indiana (population about 30,000) as their subject, they moved to the small town and lived there for eighteen months.

Ethnographers had been examining other cultures for decades—groups considered minorities or outsiders—like gangs, immigrants, and the poor. But no one had studied the so-called average American.

Recording interviews and using surveys to gather data, the Lynds objectively described what they observed. Researching existing sources, they compared Muncie in 1890 to the Muncie they observed in 1924. Most Muncie adults, they found, had grown up on farms but now lived in homes inside the city. As a result, the Lynds focused their study on the impact of industrialization and urbanization.

They observed that Muncie was divided into business and working class groups. They defined business class as dealing with abstract concepts and symbols, while working class people used tools to create concrete objects. The two classes led different lives with different goals and hopes. However, the Lynds observed, mass production offered both classes the same amenities. Like wealthy families, the working class was now able to own radios, cars, washing machines, telephones, vacuum cleaners, and refrigerators. This was an emerging material reality of the 1920s.

As the Lynds worked, they divided their manuscript into six chapters: Getting a Living, Making a Home, Training the Young, Using Leisure, Engaging in Religious Practices, and Engaging in Community Activities.

When the study was completed, the Lynds encountered a big problem. The Rockefeller Foundation, which had commissioned the book, claimed it was useless and refused to publish it. The Lynds asked if they could seek a publisher themselves.

Middletown: A Study in Modern American Culture was not only published in 1929 but also became an instant bestseller, a status unheard of for a sociological study. The book sold out six printings in its first year of publication, and has never gone out of print (Caplow, Hicks, & Wattenberg, 2000).

Nothing like it had ever been done before. Middletown was reviewed on the front page of the New York Times. Readers in the 1920s and 1930s identified with the citizens of Muncie, Indiana, but they were equally fascinated by the sociological methods and the use of scientific data to define ordinary people in the United States. The book was proof that social data was important—and interesting—to the U.S. public.

Sometimes a researcher wants to study one specific person or event. A case study is an in-depth analysis of a single event, situation, or individual. To conduct a case study, a researcher examines existing sources like documents and archival records, conducts interviews, engages in direct observation and even participant observation, if possible.

Researchers might use this method to study a single case of a foster child, drug lord, cancer patient, criminal, or rape victim. However, a major criticism of the case study as a method is that while offering depth on a topic, it does not provide enough evidence to form a generalized conclusion. In other words, it is difficult to make universal claims based on just one person, since one person does not verify a pattern. This is why most sociologists do not use case studies as a primary research method.

However, case studies are useful when the single case is unique. In these instances, a single case study can contribute tremendous insight. For example, a feral child, also called “wild child,” is one who grows up isolated from human beings. Feral children grow up without social contact and language, which are elements crucial to a “civilized” child’s development. These children mimic the behaviors and movements of animals, and often invent their own language. There are only about one hundred cases of “feral children” in the world.

As you may imagine, a feral child is a subject of great interest to researchers. Feral children provide unique information about child development because they have grown up outside of the parameters of “normal” growth and nurturing. And since there are very few feral children, the case study is the most appropriate method for researchers to use in studying the subject.

At age three, a Ukrainian girl named Oxana Malaya suffered severe parental neglect. She lived in a shed with dogs, and she ate raw meat and scraps. Five years later, a neighbor called authorities and reported seeing a girl who ran on all fours, barking. Officials brought Oxana into society, where she was cared for and taught some human behaviors, but she never became fully socialized. She has been designated as unable to support herself and now lives in a mental institution (Grice 2011). Case studies like this offer a way for sociologists to collect data that may not be obtained by any other method.

Experiments

You have probably tested some of your own personal social theories. “If I study at night and review in the morning, I’ll improve my retention skills.” Or, “If I stop drinking soda, I’ll feel better.” Cause and effect. If this, then that. When you test the theory, your results either support or fail to support your hypothesis.

One way researchers test social theories is by conducting an experiment, meaning they investigate relationships to test a hypothesis—a scientific approach.

There are two main types of experiments: lab-based experiments and natural or field experiments. In a lab setting, the research can be controlled so that more data can be recorded in a limited amount of time. In a natural or field-based experiment, the time it takes to gather the data cannot be controlled but the information might be considered more accurate since it was collected without interference or intervention by the researcher.

As a research method, either type of sociological experiment is useful for testing if-then statements: if a particular thing happens (cause), then another particular thing will result (effect). To set up a lab-based experiment, sociologists create artificial situations that allow them to manipulate variables.

Classically, the sociologist selects a set of people with similar characteristics, such as age, class, race, or education. Those people are divided into two groups. One is the experimental group and the other is the control group. The experimental group is exposed to the independent variable(s) and the control group is not. To test the benefits of tutoring, for example, the sociologist might provide tutoring to the experimental group of students but not to the control group. Then both groups would be tested for differences in performance to see if tutoring had an effect on the experimental group of students. As you can imagine, in a case like this, the researcher would not want to jeopardize the accomplishments of either group of students, so the setting would be somewhat artificial. The test would not count toward a grade on a student’s permanent record, for example.
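As a rough illustration (not part of any actual study), the control-versus-experimental comparison described above can be sketched in a few lines of Python. The test scores below are invented for the example:

```python
# Minimal sketch of a two-group experimental design (hypothetical scores).
# The experimental group receives the independent variable (tutoring);
# the control group does not. We then compare mean test performance.

from statistics import mean

control = [68, 72, 75, 70, 74, 69]        # no tutoring
experimental = [74, 78, 80, 73, 79, 76]   # received tutoring

def mean_difference(treated, untreated):
    """Effect estimate: difference in group means (treated minus control)."""
    return mean(treated) - mean(untreated)

effect = mean_difference(experimental, control)
print(f"Control mean: {mean(control):.1f}")
print(f"Experimental mean: {mean(experimental):.1f}")
print(f"Estimated tutoring effect: {effect:.1f} points")
```

A real analysis would go further and apply a significance test (such as a two-sample t-test) to judge whether the difference could plausibly be due to chance.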

And if a researcher told the students they would be observed as part of a study on measuring the effectiveness of tutoring, the students might not behave naturally. This is called the Hawthorne effect—which occurs when people change their behavior because they know they are being watched as part of a study. The Hawthorne effect is unavoidable in some research studies because sociologists have to make the purpose of the study known. Subjects must be aware that they are being observed, and a certain amount of artificiality may result (Sonnenfeld 1985).

A real-life example will help illustrate the process. In 1971, Frances Heussenstamm, a sociology professor at California State University at Los Angeles, had a theory about police prejudice. To test her theory, she conducted research. She chose fifteen students from three ethnic backgrounds: Black, White, and Hispanic. She chose students who routinely drove to and from campus along Los Angeles freeway routes, and who had had perfect driving records for longer than a year.

Next, she placed a Black Panther bumper sticker on each car. That sticker, a representation of a social value, was the independent variable. In the 1970s, the Black Panthers were a revolutionary group actively fighting racism. Heussenstamm asked the students to follow their normal driving patterns. She wanted to see whether seeming support for the Black Panthers would change how these good drivers were treated by the police patrolling the highways. The dependent variable would be the number of traffic stops/citations.

The first arrest, for an incorrect lane change, was made two hours after the experiment began. One participant was pulled over three times in three days. He quit the study. After seventeen days, the fifteen drivers had collected a total of thirty-three traffic citations. The research was halted. The funding to pay traffic fines had run out, and so had the enthusiasm of the participants (Heussenstamm, 1971).

Secondary Data Analysis

While sociologists often engage in original research studies, they also contribute knowledge to the discipline through secondary data analysis. Secondary data are not gathered firsthand from primary sources but consist of the already completed work of other researchers or data collected by an agency or organization. Sociologists might study works written by historians, economists, teachers, or early sociologists. They might search through periodicals, newspapers, or magazines, or organizational data from any period in history.

Using available information not only saves time and money but can also add depth to a study. Sociologists often interpret findings in a new way, a way that was not part of an author’s original purpose or intention. To study how women were encouraged to act and behave in the 1960s, for example, a researcher might watch movies, television shows, and situation comedies from that period. Or to research changes in behavior and attitudes due to the emergence of television in the late 1950s and early 1960s, a sociologist would rely on new interpretations of secondary data. Decades from now, researchers will most likely conduct similar studies on the advent of mobile phones, the Internet, or social media.

Social scientists also learn by analyzing the research of a variety of agencies. Governmental departments and global groups, like the U.S. Bureau of Labor Statistics or the World Health Organization (WHO), publish studies with findings that are useful to sociologists. A public statistic like the foreclosure rate might be useful for studying the effects of a recession. A racial demographic profile might be compared with data on education funding to examine the resources accessible to different groups.

One of the advantages of secondary data like old movies or WHO statistics is that they are nonreactive (or unobtrusive), meaning that using them does not involve direct contact with subjects and will not alter or influence people’s behaviors. Unlike studies requiring direct contact with people, using previously published data does not require entering a population or taking on the investment and risks inherent in that research process.

Using available data does have its challenges. Public records are not always easy to access. A researcher will need to do some legwork to track them down and gain access to records. To guide the search through a vast library of materials and avoid wasting time reading unrelated sources, sociologists employ content analysis, applying a systematic approach to record and value information gleaned from secondary data as it relates to the study at hand.
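To make content-analysis coding concrete, here is a minimal, hypothetical Python sketch. The codebook categories and keywords are invented for illustration; a real study would define and validate its coding scheme in advance:

```python
# Minimal content-analysis sketch (hypothetical coding scheme).
# Each document excerpt is coded by counting occurrences of keywords
# that stand in for categories the researcher has defined in advance.

from collections import Counter

CODEBOOK = {
    "domestic": {"home", "kitchen", "family"},
    "work": {"office", "career", "job"},
}

def code_text(text: str) -> Counter:
    """Tally how many words in the text fall under each category."""
    words = text.lower().split()
    tally = Counter()
    for category, keywords in CODEBOOK.items():
        tally[category] = sum(1 for w in words if w.strip(".,") in keywords)
    return tally

excerpt = "She left the office early to cook at home for the family."
print(code_text(excerpt))
```

Tallies like these, aggregated across many sources, are what let a researcher track how often certain themes appear in a period's media.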

Also, in some cases, there is no way to verify the accuracy of existing data. It is easy to count how many drunk drivers, for example, are pulled over by the police. But how many are not? While it’s possible to discover the percentage of teenage students who drop out of high school, it might be more challenging to determine the number who return to school or get their GED later.

Another problem arises when data are unavailable in the exact form needed or do not survey the topic from the precise angle the researcher seeks. For example, the average salaries paid to professors at a public school are public record. But these figures do not necessarily reveal how long it took each professor to reach that salary range, what their educational backgrounds are, or how long they’ve been teaching.

When conducting content analysis, it is important to consider the date of publication of an existing source and to take into account attitudes and common cultural ideals that may have influenced the research. For example, when Robert S. Lynd and Helen Merrell Lynd gathered research for their Middletown studies in the 1920s, attitudes and cultural norms were vastly different from those of today. Beliefs about gender roles, race, education, and work have changed significantly since then. At the time, the study’s purpose was to reveal insights about small U.S. communities. Today, it is an illustration of 1920s attitudes and values.


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/introduction-sociology-3e/pages/1-introduction
  • Authors: Tonja R. Conerly, Kathleen Holmes, Asha Lal Tamang
  • Publisher/website: OpenStax
  • Book title: Introduction to Sociology 3e
  • Publication date: Jun 3, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/introduction-sociology-3e/pages/1-introduction
  • Section URL: https://openstax.org/books/introduction-sociology-3e/pages/2-2-research-methods

© Jan 18, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


Historical Research – Types, Methods and Examples

Historical Research

Definition:

Historical research is the process of investigating and studying past events, people, and societies using a variety of sources and methods. This type of research aims to reconstruct and interpret the past based on the available evidence.

Types of Historical Research

There are several types of historical research, including:

Descriptive Research

This type of historical research focuses on describing events, people, or cultures in detail. It can involve examining artifacts, documents, or other sources of information to create a detailed account of what happened or existed.

Analytical Research

This type of historical research aims to explain why events unfolded, or why people and cultures developed, in the way they did. It involves analyzing data to identify patterns, causes, and effects, and making interpretations based on this analysis.

Comparative Research

This type of historical research involves comparing two or more events, people, or cultures to identify similarities and differences. This can help researchers understand the unique characteristics of each and how they interacted with each other.

Interpretive Research

This type of historical research focuses on interpreting the meaning of past events, people, or cultures. It can involve analyzing cultural symbols, beliefs, and practices to understand their significance in a particular historical context.

Quantitative Research

This type of historical research involves using statistical methods to analyze historical data. It can involve examining demographic information, economic indicators, or other quantitative data to identify patterns and trends.

Qualitative Research

This type of historical research involves examining non-numerical data such as personal accounts, letters, or diaries. It can provide insights into the experiences and perspectives of individuals during a particular historical period.

Data Collection Methods

Data Collection Methods are as follows:

  • Archival research : This involves analyzing documents and records that have been preserved over time, such as government records, diaries, letters, newspapers, and photographs. Archival research is often conducted in libraries, archives, and museums.
  • Oral history : This involves conducting interviews with individuals who have lived through a particular historical period or event. Oral history can provide a unique perspective on past events and can help to fill gaps in the historical record.
  • Artifact analysis: This involves examining physical objects from the past, such as tools, clothing, and artwork, to gain insights into past cultures and practices.
  • Secondary sources: This involves analyzing published works, such as books, articles, and academic papers, that discuss past events and cultures. Secondary sources can provide context and insights into the historical period being studied.
  • Statistical analysis : This involves analyzing numerical data from the past, such as census records or economic data, to identify patterns and trends.
  • Fieldwork : This involves conducting on-site research in a particular location, such as visiting a historical site or conducting ethnographic research in a particular community. Fieldwork can provide a firsthand understanding of the culture and environment being studied.
  • Content analysis: This involves analyzing the content of media from the past, such as films, television programs, and advertisements, to gain insights into cultural attitudes and beliefs.

Data Analysis Methods

  • Content analysis : This involves analyzing the content of written or visual material, such as books, newspapers, or photographs, to identify patterns and themes. Content analysis can be used to identify changes in cultural values and beliefs over time.
  • Textual analysis : This involves analyzing written texts, such as letters or diaries, to understand the experiences and perspectives of individuals during a particular historical period. Textual analysis can provide insights into how people lived and thought in the past.
  • Discourse analysis : This involves analyzing how language is used to construct meaning and power relations in a particular historical period. Discourse analysis can help to identify how social and political ideologies were constructed and maintained over time.
  • Statistical analysis: This involves using statistical methods to analyze numerical data, such as census records or economic data, to identify patterns and trends. Statistical analysis can help to identify changes in population demographics, economic conditions, and other factors over time.
  • Comparative analysis : This involves comparing data from two or more historical periods or events to identify similarities and differences. Comparative analysis can help to identify patterns and trends that may not be apparent from analyzing data from a single historical period.
  • Qualitative analysis: This involves analyzing non-numerical data, such as oral history interviews or ethnographic field notes, to identify themes and patterns. Qualitative analysis can provide a rich understanding of the experiences and perspectives of individuals in the past.
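As a small illustration of the statistical-analysis entry above, the following Python sketch computes decade-over-decade growth from census-style population counts. The numbers are invented for the example:

```python
# Minimal sketch of statistical trend analysis on historical data
# (hypothetical census-style population counts by decade).

population = {1900: 76_000, 1910: 92_000, 1920: 106_000, 1930: 123_000}

def decade_growth(series: dict[int, int]) -> dict[int, float]:
    """Percent change between consecutive decades, keyed by the later year."""
    years = sorted(series)
    return {
        later: round(100 * (series[later] - series[earlier]) / series[earlier], 1)
        for earlier, later in zip(years, years[1:])
    }

print(decade_growth(population))  # percent growth per decade
```

Simple derived series like this are often the starting point for identifying the long-term demographic or economic trends the methods above describe.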

Historical Research Methodology

Here are the general steps involved in historical research methodology:

  • Define the research question: Start by identifying a research question that you want to answer through your historical research. This question should be focused, specific, and relevant to your research goals.
  • Review the literature: Conduct a review of the existing literature on the topic of your research question. This can involve reading books, articles, and academic papers to gain a thorough understanding of the existing research.
  • Develop a research design : Develop a research design that outlines the methods you will use to collect and analyze data. This design should be based on the research question and should be feasible given the resources and time available.
  • Collect data: Use the methods outlined in your research design to collect data on past events, people, and cultures. This can involve archival research, oral history interviews, artifact analysis, and other data collection methods.
  • Analyze data : Analyze the data you have collected using the methods outlined in your research design. This can involve content analysis, textual analysis, statistical analysis, and other data analysis methods.
  • Interpret findings : Use the results of your data analysis to draw meaningful insights and conclusions related to your research question. These insights should be grounded in the data and should be relevant to the research goals.
  • Communicate results: Communicate your findings through a research report, academic paper, or other means. This should be done in a clear, concise, and well-organized manner, with appropriate citations and references to the literature.

Applications of Historical Research

Historical research has a wide range of applications in various fields, including:

  • Education : Historical research can be used to develop curriculum materials that reflect a more accurate and inclusive representation of history. It can also be used to provide students with a deeper understanding of past events and cultures.
  • Museums : Historical research is used to develop exhibits, programs, and other materials for museums. It can provide a more accurate and engaging presentation of historical events and artifacts.
  • Public policy : Historical research is used to inform public policy decisions by providing insights into the historical context of current issues. It can also be used to evaluate the effectiveness of past policies and programs.
  • Business : Historical research can be used by businesses to understand the evolution of their industry and to identify trends that may affect their future success. It can also be used to develop marketing strategies that resonate with customers’ historical interests and values.
  • Law : Historical research is used in legal proceedings to provide evidence and context for cases involving historical events or practices. It can also be used to inform the development of new laws and policies.
  • Genealogy : Historical research can be used by individuals to trace their family history and to understand their ancestral roots.
  • Cultural preservation : Historical research is used to preserve cultural heritage by documenting and interpreting past events, practices, and traditions. It can also be used to identify and preserve historical landmarks and artifacts.

Examples of Historical Research

Examples of Historical Research are as follows:

  • Examining the history of race relations in the United States: Historical research could be used to explore the historical roots of racial inequality and injustice in the United States. This could help inform current efforts to address systemic racism and promote social justice.
  • Tracing the evolution of political ideologies: Historical research could be used to study the development of political ideologies over time. This could help to contextualize current political debates and provide insights into the origins and evolution of political beliefs and values.
  • Analyzing the impact of technology on society : Historical research could be used to explore the impact of technology on society over time. This could include examining the impact of previous technological revolutions (such as the industrial revolution) on society, as well as studying the current impact of emerging technologies on society and the environment.
  • Documenting the history of marginalized communities : Historical research could be used to document the history of marginalized communities (such as LGBTQ+ communities or indigenous communities). This could help to preserve cultural heritage, promote social justice, and promote a more inclusive understanding of history.

Purpose of Historical Research

The purpose of historical research is to study the past in order to gain a better understanding of the present and to inform future decision-making. Some specific purposes of historical research include:

  • To understand the origins of current events, practices, and institutions : Historical research can be used to explore the historical roots of current events, practices, and institutions. By understanding how things developed over time, we can gain a better understanding of the present.
  • To develop a more accurate and inclusive understanding of history : Historical research can be used to correct inaccuracies and biases in historical narratives. By exploring different perspectives and sources of information, we can develop a more complete and nuanced understanding of history.
  • To inform decision-making: Historical research can be used to inform decision-making in various fields, including education, public policy, business, and law. By understanding the historical context of current issues, we can make more informed decisions about how to address them.
  • To preserve cultural heritage : Historical research can be used to document and preserve cultural heritage, including traditions, practices, and artifacts. By understanding the historical significance of these cultural elements, we can work to preserve them for future generations.
  • To stimulate curiosity and critical thinking: Historical research can be used to stimulate curiosity and critical thinking about the past. By exploring different historical perspectives and interpretations, we can develop a more critical and reflective approach to understanding history and its relevance to the present.

When to use Historical Research

Historical research can be useful in a variety of contexts. Here are some examples of when historical research might be particularly appropriate:

  • When examining the historical roots of current events: Historical research can be used to explore the historical roots of current events, practices, and institutions. By understanding how things developed over time, we can gain a better understanding of the present.
  • When examining the historical context of a particular topic : Historical research can be used to explore the historical context of a particular topic, such as a social issue, political debate, or scientific development. By understanding the historical context, we can gain a more nuanced understanding of the topic and its significance.
  • When exploring the evolution of a particular field or discipline : Historical research can be used to explore the evolution of a particular field or discipline, such as medicine, law, or art. By understanding the historical development of the field, we can gain a better understanding of its current state and future directions.
  • When examining the impact of past events on current society : Historical research can be used to examine the impact of past events (such as wars, revolutions, or social movements) on current society. By understanding the historical context and impact of these events, we can gain insights into current social and political issues.
  • When studying the cultural heritage of a particular community or group : Historical research can be used to document and preserve the cultural heritage of a particular community or group. By understanding the historical significance of cultural practices, traditions, and artifacts, we can work to preserve them for future generations.

Characteristics of Historical Research

The following are some characteristics of historical research:

  • Focus on the past : Historical research focuses on events, people, and phenomena of the past. It seeks to understand how things developed over time and how they relate to current events.
  • Reliance on primary sources: Historical research relies on primary sources such as letters, diaries, newspapers, government documents, and other artifacts from the period being studied. These sources provide firsthand accounts of events and can help researchers gain a more accurate understanding of the past.
  • Interpretation of data : Historical research involves interpretation of data from primary sources. Researchers analyze and interpret data to draw conclusions about the past.
  • Use of multiple sources: Historical research often involves using multiple sources of data to gain a more complete understanding of the past. By examining a range of sources, researchers can cross-reference information and validate their findings.
  • Importance of context: Historical research emphasizes the importance of context. Researchers analyze the historical context in which events occurred and consider how that context influenced people’s actions and decisions.
  • Subjectivity : Historical research is inherently subjective, as researchers interpret data and draw conclusions based on their own perspectives and biases. Researchers must be aware of their own biases and strive for objectivity in their analysis.
  • Importance of historical significance: Historical research emphasizes the importance of historical significance. Researchers consider the historical significance of events, people, and phenomena and their impact on the present and future.
  • Use of qualitative methods : Historical research often uses qualitative methods such as content analysis, discourse analysis, and narrative analysis to analyze data and draw conclusions about the past.

Advantages of Historical Research

There are several advantages to historical research:

  • Provides a deeper understanding of the past : Historical research can provide a more comprehensive understanding of past events and how they have shaped current social, political, and economic conditions. This can help individuals and organizations make informed decisions about the future.
  • Helps preserve cultural heritage: Historical research can be used to document and preserve cultural heritage. By studying the history of a particular culture, researchers can gain insights into the cultural practices and beliefs that have shaped that culture over time.
  • Provides insights into long-term trends : Historical research can provide insights into long-term trends and patterns. By studying historical data over time, researchers can identify patterns and trends that may be difficult to discern from short-term data.
  • Facilitates the development of hypotheses: Historical research can facilitate the development of hypotheses about how past events have influenced current conditions. These hypotheses can be tested using other research methods, such as experiments or surveys.
  • Helps identify root causes of social problems : Historical research can help identify the root causes of social problems. By studying the historical context in which these problems developed, researchers can gain a better understanding of how they emerged and what factors may have contributed to their development.
  • Provides a source of inspiration: Historical research can provide a source of inspiration for individuals and organizations seeking to address current social, political, and economic challenges. By studying the accomplishments and struggles of past generations, researchers can gain insights into how to address current challenges.

Limitations of Historical Research

Some Limitations of Historical Research are as follows:

  • Reliance on incomplete or biased data: Historical research is often limited by the availability and quality of data. Many primary sources have been lost, destroyed, or are inaccessible, making it difficult to get a complete picture of historical events. Additionally, some primary sources may be biased or represent only one perspective on an event.
  • Difficulty in generalizing findings: Historical research is often specific to a particular time and place and may not be easily generalized to other contexts. This makes it difficult to draw broad conclusions about human behavior or social phenomena.
  • Lack of control over variables : Historical research often lacks control over variables. Researchers cannot manipulate or control historical events, making it difficult to establish cause-and-effect relationships.
  • Subjectivity of interpretation : Historical research is often subjective because researchers must interpret data and draw conclusions based on their own biases and perspectives. Different researchers may interpret the same data differently, leading to different conclusions.
  • Limited ability to test hypotheses: Historical research is often limited in its ability to test hypotheses. Because the events being studied have already occurred, researchers cannot manipulate variables or conduct experiments to test their hypotheses.
  • Lack of objectivity: Historical research is often subjective, and researchers must be aware of their own biases and strive for objectivity in their analysis. However, it can be difficult to maintain objectivity when studying events that are emotionally charged or controversial.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Human Subjects Office

Medical Terms in Lay Language

Please use these descriptions in place of medical jargon in consent documents, recruitment materials and other study documents. Note: These terms are not the only acceptable plain language alternatives for these vocabulary words.

This glossary of terms is derived from a list copyrighted by the University of Kentucky, Office of Research Integrity (1990).

For clinical research-specific definitions, see also the Clinical Research Glossary developed by the Multi-Regional Clinical Trials (MRCT) Center of Brigham and Women’s Hospital and Harvard, and the Clinical Data Interchange Standards Consortium (CDISC).

Alternative Lay Language for Medical Terms for use in Informed Consent Documents


ABDOMEN/ABDOMINAL: body cavity below diaphragm that contains stomach, intestines, liver and other organs
ABSORB: take up fluids, take in
ACIDOSIS: condition when blood contains more acid than normal
ACUITY: clearness, keenness, esp. of vision and airways
ACUTE: new, recent, sudden, urgent
ADENOPATHY: swollen lymph nodes (glands)
ADJUVANT: helpful, assisting, aiding, supportive
ADJUVANT TREATMENT: added treatment (usually to a standard treatment)
ANTIBIOTIC: drug that kills bacteria and other germs
ANTIMICROBIAL: drug that kills bacteria and other germs
ANTIRETROVIRAL: drug that works against the growth of certain viruses
ADVERSE EFFECT: side effect, bad reaction, unwanted response
ALLERGIC REACTION: rash, hives, swelling, trouble breathing
AMBULATE/AMBULATION/AMBULATORY: walk, able to walk
ANAPHYLAXIS: serious, potentially life-threatening allergic reaction
ANEMIA: decreased red blood cells; low red cell blood count
ANESTHETIC: a drug or agent used to decrease the feeling of pain, or eliminate the feeling of pain by putting you to sleep
ANGINA: pain resulting from not enough blood flowing to the heart
ANGINA PECTORIS: pain resulting from not enough blood flowing to the heart
ANOREXIA: disorder in which person will not eat; lack of appetite
ANTECUBITAL: related to the inner side of the forearm
ANTIBODY: protein made in the body in response to foreign substance
ANTICONVULSANT: drug used to prevent seizures
ANTILIPEMIC: a drug that lowers fat levels in the blood
ANTITUSSIVE: a drug used to relieve coughing
ARRHYTHMIA: abnormal heartbeat; any change from the normal heartbeat
ASPIRATION: fluid entering the lungs, such as after vomiting
ASSAY: lab test
ASSESS: to learn about, measure, evaluate, look at
ASTHMA: lung disease associated with tightening of air passages, making breathing difficult
ASYMPTOMATIC: without symptoms
AXILLA: armpit

BENIGN: not malignant, without serious consequences
BID: twice a day
BINDING/BOUND: carried by, to make stick together, transported
BIOAVAILABILITY: the extent to which a drug or other substance becomes available to the body
BLOOD PROFILE: series of blood tests
BOLUS: a large amount given all at once
BONE MASS: the amount of calcium and other minerals in a given amount of bone
BRADYARRHYTHMIAS: slow, irregular heartbeats
BRADYCARDIA: slow heartbeat
BRONCHOSPASM: breathing distress caused by narrowing of the airways

CARCINOGENIC: cancer-causing
CARCINOMA: type of cancer
CARDIAC: related to the heart
CARDIOVERSION: return to normal heartbeat by electric shock
CATHETER: a tube for withdrawing or giving fluids
CATHETER (indwelling epidural): a tube placed near the spinal cord and used for anesthesia during surgery
CENTRAL NERVOUS SYSTEM (CNS): brain and spinal cord
CEREBRAL TRAUMA: damage to the brain
CESSATION: stopping
CHD: coronary heart disease
CHEMOTHERAPY: treatment of disease, usually cancer, by chemical agents
CHRONIC: continuing for a long time, ongoing
CLINICAL: pertaining to medical care
CLINICAL TRIAL: an experiment involving human subjects
COMA: unconscious state
COMPLETE RESPONSE: total disappearance of disease
CONGENITAL: present before birth
CONJUNCTIVITIS: redness and irritation of the thin membrane that covers the eye
CONSOLIDATION PHASE: treatment phase intended to make a remission permanent (follows induction phase)
CONTROLLED TRIAL: research study in which the experimental treatment or procedure is compared to a standard (control) treatment or procedure
COOPERATIVE GROUP: association of multiple institutions to perform clinical trials
CORONARY: related to the blood vessels that supply the heart, or to the heart itself
CT SCAN (CAT): computerized series of x-rays (computerized tomography)
CULTURE: test for infection, or for organisms that could cause infection
CUMULATIVE: added together from the beginning
CUTANEOUS: relating to the skin
CVA: stroke (cerebrovascular accident)

DERMATOLOGIC pertaining to the skin DIASTOLIC lower number in a blood pressure reading DISTAL toward the end, away from the center of the body DIURETIC "water pill" or drug that causes increase in urination DOPPLER device using sound waves to diagnose or test DOUBLE BLIND study in which neither investigators nor subjects know what drug or treatment the subject is receiving DYSFUNCTION state of improper function DYSPLASIA abnormal cells

ECHOCARDIOGRAM sound wave test of the heart EDEMA excess fluid collecting in tissue EEG electric brain wave tracing (electroencephalogram) EFFICACY effectiveness ELECTROCARDIOGRAM electrical tracing of the heartbeat (ECG or EKG) ELECTROLYTE IMBALANCE an imbalance of minerals in the blood EMESIS vomiting EMPIRIC based on experience ENDOSCOPIC EXAMINATION viewing an internal part of the body with a lighted tube ENTERAL by way of the intestines EPIDURAL outside the spinal cord ERADICATE get rid of (such as disease) EVALUATED, ASSESSED examined for a medical condition EXPEDITED REVIEW rapid review of a protocol by the IRB Chair without full committee approval, permitted with certain low-risk research studies EXTERNAL outside the body EXTRAVASATE to leak outside of a planned area, such as out of a blood vessel

FDA U.S. Food and Drug Administration, the branch of federal government that approves new drugs FIBROUS having many fibers, such as scar tissue FIBRILLATION irregular beat of the heart or other muscle

GENERAL ANESTHESIA pain prevention by giving drugs to cause loss of consciousness, as during surgery GESTATIONAL pertaining to pregnancy

HEMATOCRIT amount of red blood cells in the blood HEMATOMA a bruise, a black and blue mark HEMODYNAMIC MEASURING blood flow HEMOLYSIS breakdown in red blood cells HEPARIN LOCK needle placed in the arm with blood thinner to keep the blood from clotting HEPATOMA cancer or tumor of the liver HERITABLE DISEASE can be transmitted to one’s offspring, resulting in damage to future children HISTOPATHOLOGIC pertaining to the disease status of body tissues or cells HOLTER MONITOR a portable machine for recording heart beats HYPERCALCEMIA high blood calcium level HYPERKALEMIA high blood potassium level HYPERNATREMIA high blood sodium level HYPERTENSION high blood pressure HYPOCALCEMIA low blood calcium level HYPOKALEMIA low blood potassium level HYPONATREMIA low blood sodium level HYPOTENSION low blood pressure HYPOXEMIA a decrease of oxygen in the blood HYPOXIA a decrease of oxygen reaching body tissues HYSTERECTOMY surgical removal of the uterus, ovaries (female sex glands), or both uterus and ovaries

IATROGENIC caused by a physician or by treatment IDE investigational device exemption, the license to test an unapproved new medical device IDIOPATHIC of unknown cause IMMUNITY defense against, protection from IMMUNOGLOBIN a protein that makes antibodies IMMUNOSUPPRESSIVE drug which works against the body's immune (protective) response, often used in transplantation and diseases caused by immune system malfunction IMMUNOTHERAPY giving of drugs to help the body's immune (protective) system; usually used to destroy cancer cells IMPAIRED FUNCTION abnormal function IMPLANTED placed in the body IND investigational new drug, the license to test an unapproved new drug INDUCTION PHASE beginning phase or stage of a treatment INDURATION hardening INDWELLING remaining in a given location, such as a catheter INFARCT death of tissue due to lack of blood supply INFECTIOUS DISEASE transmitted from one person to the next INFLAMMATION swelling that is generally painful, red, and warm INFUSION slow injection of a substance into the body, usually into the blood by means of a catheter INGESTION eating; taking by mouth INTERFERON drug which acts against viruses; antiviral agent INTERMITTENT occurring (regularly or irregularly) between two time points; repeatedly stopping, then starting again INTERNAL within the body INTERIOR inside of the body INTRAMUSCULAR into the muscle; within the muscle INTRAPERITONEAL into the abdominal cavity INTRATHECAL into the spinal fluid INTRAVENOUS (IV) through the vein INTRAVESICAL in the bladder INTUBATE the placement of a tube into the airway INVASIVE PROCEDURE puncturing, opening, or cutting the skin INVESTIGATIONAL NEW DRUG (IND) a new drug that has not been approved by the FDA INVESTIGATIONAL METHOD a treatment method which has not been proven to be beneficial or has not been accepted as standard care ISCHEMIA decreased oxygen in a tissue (usually because of decreased blood flow)

LAPAROTOMY surgical procedure in which an incision is made in the abdominal wall to enable a doctor to look at the organs inside LESION wound or injury; a diseased patch of skin LETHARGY sleepiness, tiredness LEUKOPENIA low white blood cell count LIPID fat LIPID CONTENT fat content in the blood LIPID PROFILE (PANEL) fat and cholesterol levels in the blood LOCAL ANESTHESIA creation of insensitivity to pain in a small, local area of the body, usually by injection of numbing drugs LOCALIZED restricted to one area, limited to one area LUMEN the cavity of an organ or tube (e.g., blood vessel) LYMPHANGIOGRAPHY an x-ray of the lymph nodes or tissues after injecting dye into lymph vessels (e.g., in feet) LYMPHOCYTE a type of white blood cell important in immunity (protection) against infection LYMPHOMA a cancer of the lymph nodes (or tissues)

MALAISE a vague feeling of bodily discomfort, feeling badly MALFUNCTION condition in which something is not functioning properly MALIGNANCY cancer or other progressively enlarging and spreading tumor, usually fatal if not successfully treated MEDULLOBLASTOMA a type of brain tumor MEGALOBLASTOSIS change in red blood cells METABOLIZE process of breaking down substances in the cells to obtain energy METASTASIS spread of cancer cells from one part of the body to another METRONIDAZOLE drug used to treat infections caused by parasites (invading organisms that take up living in the body) or other causes of anaerobic infection (not requiring oxygen to survive) MI myocardial infarction, heart attack MINIMAL slight MINIMIZE reduce as much as possible MONITOR check on; keep track of; watch carefully MOBILITY ease of movement MORBIDITY undesired result or complication MORTALITY death MOTILITY the ability to move MRI magnetic resonance imaging, diagnostic pictures of the inside of the body, created using magnetic rather than x-ray energy MUCOSA, MUCOUS MEMBRANE moist lining of digestive, respiratory, reproductive, and urinary tracts MYALGIA muscle aches MYOCARDIAL pertaining to the heart muscle MYOCARDIAL INFARCTION heart attack

NASOGASTRIC TUBE placed in the nose, reaching to the stomach NCI the National Cancer Institute NECROSIS death of tissue NEOPLASIA/NEOPLASM tumor, may be benign or malignant NEUROBLASTOMA a cancer of nerve tissue NEUROLOGICAL pertaining to the nervous system NEUTROPENIA decrease in the main part of the white blood cells NIH the National Institutes of Health NONINVASIVE not breaking, cutting, or entering the skin NOSOCOMIAL acquired in the hospital

OCCLUSION closing; blockage; obstruction ONCOLOGY the study of tumors or cancer OPHTHALMIC pertaining to the eye OPTIMAL best, most favorable or desirable ORAL ADMINISTRATION by mouth ORTHOPEDIC pertaining to the bones OSTEOPETROSIS rare bone disorder characterized by dense bone OSTEOPOROSIS weakening of the bones through loss of bone density OVARIES female sex glands

PARENTERAL given by injection PATENCY condition of being open PATHOGENESIS development of a disease or unhealthy condition PERCUTANEOUS through the skin PERIPHERAL not central PER OS (PO) by mouth PHARMACOKINETICS the study of the way the body absorbs, distributes, and gets rid of a drug PHASE I first phase of study of a new drug in humans to determine action, safety, and proper dosing PHASE II second phase of study of a new drug in humans, intended to gather information about safety and effectiveness of the drug for certain uses PHASE III large-scale studies to confirm and expand information on safety and effectiveness of new drug for certain uses, and to study common side effects PHASE IV studies done after the drug is approved by the FDA, especially to compare it to standard care or to try it for new uses PHLEBITIS irritation or inflammation of the vein PLACEBO an inactive substance; a pill/liquid that contains no medicine PLACEBO EFFECT improvement seen with giving subjects a placebo, though it contains no active drug/treatment PLATELETS small particles in the blood that help with clotting POTENTIAL possible POTENTIATE increase or multiply the effect of a drug or toxin (poison) by giving another drug or toxin at the same time (sometimes an unintentional result) POTENTIATOR an agent that helps another agent work better PRENATAL before birth PROPHYLAXIS a drug given to prevent disease or infection PRN as needed PROGNOSIS outlook, probable outcomes PRONE lying on the stomach PROSPECTIVE STUDY following patients forward in time PROSTHESIS artificial part, most often limbs, such as arms or legs PROTOCOL plan of study PROXIMAL closer to the center of the body, away from the end PULMONARY pertaining to the lungs

QD every day; daily QID four times a day

RADIATION THERAPY x-ray or cobalt treatment RANDOM by chance (like the flip of a coin) RANDOMIZATION chance selection RBC red blood cell RECOMBINANT formation of new combinations of genes RECONSTITUTION putting back together the original parts or elements RECUR happen again REFRACTORY not responding to treatment REGENERATION re-growth of a structure or of lost tissue REGIMEN pattern of giving treatment RELAPSE the return of a disease REMISSION disappearance of evidence of cancer or other disease RENAL pertaining to the kidneys REPLICABLE possible to duplicate RESECT remove or cut out surgically RETROSPECTIVE STUDY looking back over past experience

SARCOMA a type of cancer SEDATIVE a drug to calm or make less anxious SEMINOMA a type of testicular cancer (found in the male sex glands) SEQUENTIALLY in a row, in order SOMNOLENCE sleepiness SPIROMETER an instrument to measure the amount of air taken into and exhaled from the lungs STAGING an evaluation of the extent of the disease STANDARD OF CARE a treatment plan that the majority of the medical community would accept as appropriate STENOSIS narrowing of a duct, tube, or one of the blood vessels in the heart STOMATITIS mouth sores, inflammation of the mouth STRATIFY arrange in groups for analysis of results (e.g., stratify by age, sex, etc.) STUPOR stunned state in which it is difficult to get a response or the attention of the subject SUBCLAVIAN under the collarbone SUBCUTANEOUS under the skin SUPINE lying on the back SUPPORTIVE CARE general medical care aimed at symptoms, not intended to improve or cure underlying disease SYMPTOMATIC having symptoms SYNDROME a condition characterized by a set of symptoms SYSTOLIC top number in blood pressure; pressure during active contraction of the heart

TERATOGENIC capable of causing malformations in a fetus (developing baby still inside the mother’s body) TESTES/TESTICLES male sex glands THROMBOSIS clotting THROMBUS blood clot TID three times a day TITRATION a method for deciding on the strength of a drug or solution; gradually increasing the dose T-LYMPHOCYTES type of white blood cells TOPICAL on the surface TOPICAL ANESTHETIC applied to a certain area of the skin and reducing pain only in the area to which applied TOXICITY side effects or undesirable effects of a drug or treatment TRANSDERMAL through the skin TRANSIENTLY temporarily TRAUMA injury; wound TREADMILL walking machine used to test heart function

UPTAKE absorbing and taking in of a substance by living tissue

VALVULOPLASTY plastic repair of a valve, especially a heart valve VARICES enlarged veins VASOSPASM narrowing of the blood vessels VECTOR a carrier that can transmit disease-causing microorganisms (germs and viruses) VENIPUNCTURE needle stick, blood draw, entering the skin with a needle VERTICAL TRANSMISSION spread of disease from mother to child (before, during, or shortly after birth)

WBC white blood cell


I Am Public Health: Xiaowen Sun


July 1, 2024  | Erin Bluvas,  [email protected]

Xiaowen Sun discovered her love of biostatistics and public health during her master’s program. She had already studied mathematics for her bachelor's degree, and graduate school taught Sun new ways to apply what she had learned.

Originally from China, Sun grew up in Zibo – a city famous for its BBQ. The Zibo BBQ Association tallied more than 1,270 BBQ restaurants at last count, and the city hosts hundreds of thousands of hungry customers at its local food markets during seasonal festivals.

Attending Shandong University of Technology for her undergraduate studies was not a big leap for Sun, but her next step would take her across the world: at the University of Missouri in the United States, she enrolled in a master’s program focused on statistics.


“My fascination with public health and biostatistics began during my master’s studies, where I was first introduced to statistical methods and their applications in real-world problems,” she says. “I was particularly drawn to survival analysis due to its critical role in medical research and public health.”

Her coursework and research projects led Sun to discover the potential of machine learning and deep learning to revolutionize data analysis. She was intrigued by the abilities of these methods to work with large-volume data sets that were high dimensional and non-linear. The application of these methods to help solve complex health care challenges cemented her commitment to the field.

Sun chose the Arnold School’s Ph.D. in Biostatistics program to elevate her analytical skills in clinical research. The curriculum was the perfect fit for her interests, and the Department of Epidemiology and Biostatistics offered numerous opportunities to be involved in varied projects led by enthusiastic faculty.


“The program’s focus on hands-on projects and real-world applications was highly appealing, and its reputation and the strong network of alumni also played a crucial role in my decision,” Sun says. “I knew that USC's biostatistics program would provide me with the skills, knowledge and connections necessary to advance my career in clinical research and public health.”

She found a mentor in her dissertation advisor, biostatistics professor Jiajia Zhang .

“Under her guidance, I have gained a deep understanding of advanced statistical methodologies and their applications in public health research,” Sun says. “She taught me how to approach complex data problems with a meticulous and analytical mindset, ensuring precision and accuracy in my work. Moreover, Dr. Zhang has provided invaluable career advice, helping me to set and achieve my professional goals.”

As a graduate research assistant with the South Carolina SmartState Center for Healthcare Quality , Sun amassed the research experience she was looking for by contributing to collaborative projects. She also spent a summer interning at Novartis with the pharmaceutical company’s immunology department.

Since last fall, Sun has been working at the MD Anderson Cancer Center at the University of Texas as a research biostatistician. She will wrap up her dissertation research over the next several months and plans to graduate later this year.

“My degree from USC has equipped me with a comprehensive understanding of biostatistics and its applications in clinical research,” Sun says. “The advanced coursework and hands-on projects have significantly enhanced my analytical skills, enabling me to tackle complex data challenges effectively.”


Study Biostatistics

Biostatisticians improve public health by unraveling the true story hidden within increasingly complex data. Our graduates hold positions in universities, research institutes, governmental agencies such as the CDC, NIH, and state health departments, and in private industry, including pharmaceutical companies and even financial institutions.

Challenge the conventional. Create the exceptional. No Limits.

Psychological and Brain Sciences

CLAS PBS professor receives NIH grant to research how to improve lifeguard training using virtual reality


Cathleen Moore , professor in the Department of Psychological and Brain Sciences in the College of Liberal Arts and Sciences and Starch Faculty Fellow, received a grant from the National Institutes of Health for $413,267 to study how lifeguard training can be improved using virtual reality. 

Moore and her team will research the limitations and impact of attention and perception on lifeguarding. They will use virtual reality to test various methods of training. 

Understanding these limitations will allow for the development of better safety training and increased injury prevention, Moore said.

“The basic problem is that the surveillance component of lifeguarding requires that lifeguards actively monitor a complex and constantly changing scene for poorly specified critical events,” Moore said.  

“For example, they have to notice if one swimmer goes under for too long while other swimmers are simultaneously going under for variable durations. The attentional and perceptual demands of the task are enormous but are rarely considered when identifying safety vulnerabilities at aquatics facilities.” 

Moore co-leads the Visual Perception Research Group at the University of Iowa where she researches the strengths and limitations of human perception. She also received a UI Injury Prevention Research Center pilot grant in 2022 for studying a swimming pool lifeguarding environment.  

Moore focuses on how perception interacts with cognitive processing and how this can affect our experiencing of the physical world. For lifeguarding, this means finding what causes something to stick out about the observed environment, how that is then processed, and how or why action or inaction follows.  

Assuming a lifeguard will notice all critical events if they are paying attention is misguided, Moore said. The environment is complex and simply “focusing” won’t guarantee active processing and reactions. Carefully focusing can even be the cause of missing incidents elsewhere, she added.  

 Studying lifeguarding scenarios can be challenging because complex environments and unique critical events are hard to control and keep standardized. There is also the issue of studying events that endanger human lives.  

Virtual reality will allow for precise control over the environment and what events are being studied with easy replicability, Moore said. The research team can introduce specific “critical events” at controlled times to see how perception is impacted. 

“Given our simplified controlled environment, we can compare what simulated lifeguards are doing differently than real-life lifeguards,” Moore said. “Then, we can test specific impacts of cognitive and perceptual limitations in a simulated lifeguarding task. This will allow us to identify what vulnerabilities are greatest and what kinds of mitigating factors can be introduced to reduce surveillance failures.”   

Once research is complete, the hope is to make the training system available to the public for future lifeguard training. In the lab, the team uses commercially available VR equipment, so the system will be accessible to other organizations once it's available.

Moore told the Injury Prevention Research Center the team will apply for longer-term funding in the future to test alternative training programs for local pools. 

“We hope to develop customized environments that simulate real pools. For example, the City Park pool in Iowa City or parts of the water park at Adventure Land in Des Moines,” Moore said.



  • Open access
  • Published: 29 June 2024

Effect of cross-platform gene-expression, computational methods on breast cancer subtyping in PALOMA-2 and PALLET studies

  • Maggie Chon U. Cheang 1 ,
  • Mothaffar Rimawi   ORCID: orcid.org/0000-0002-4284-5656 2 ,
  • Stephen Johnston 3 ,
  • Samuel A. Jacobs 4 ,
  • Judith Bliss 3 ,
  • Katherine Pogue-Geile 4 ,
  • Lucy Kilburn 3 ,
  • Zhou Zhu 5 ,
  • Eugene F. Schuster 3 ,
  • Hui Xiao 3 ,
  • Lisa Swaim 5 ,
  • Shibing Deng 5 ,
  • Dongrui R. Lu 5 ,
  • Eric Gauthier   ORCID: orcid.org/0000-0002-0160-7571 5 ,
  • Jennifer Tursi 6 ,
  • Dennis J. Slamon 7 ,
  • Hope S. Rugo   ORCID: orcid.org/0000-0001-6710-4814 8 ,
  • Richard S. Finn   ORCID: orcid.org/0000-0003-2494-2126 7 &
  • Yuan Liu 5  

npj Breast Cancer volume 10, Article number: 54 (2024)


  • Breast cancer
  • Predictive markers

Intrinsic breast cancer molecular subtyping (IBCMS) provides significant prognostic information for patients with breast cancer and helps determine treatment. This study compared IBCMS methods on various gene-expression platforms in the PALOMA-2 and PALLET trials. PALOMA-2 tumor samples were profiled using EdgeSeq and NanoString and subtyped with AIMS, PAM50, and research-use-only (ruo)Prosigna. PALLET tumor biopsies were profiled using mRNA sequencing and subtyped with AIMS and PAM50. In PALOMA-2 (n = 222), 54% agreement was observed between results from AIMS and the gold-standard ruoProsigna, with AIMS assigning 67% of basal-like samples to HER2-enriched. In PALLET (n = 224), 69% agreement was observed between results from PAM50 and AIMS. Different IBCMS methods may lead to different results and could misguide treatment selection; hence, a standardized clinical PAM50 assay and computational approach should be used.

Trial number: NCT01740427


Introduction

Breast cancer diagnosis and decisions regarding treatment are largely based on clinicopathologic variables such as histologic subtype, nodal status, tumor size and grade, and biomarkers such as estrogen receptor (ER) and human epidermal growth factor receptor 2 (HER2), which are suboptimal biomarkers for predicting disease outcome of targeted therapies and emerging treatments 1 , 2 . Using global gene-expression profiling, breast cancers can be molecularly classified into five intrinsic subtypes: luminal A (LumA), luminal B (LumB), HER2 enriched (HER2-E), basal-like (BL), and normal-like (NL) 3 , 4 , although these subtypes do not represent distinct disease entities but, rather, exist on a continuum. These subtypes are associated with significantly different prognoses, incidence rates between races, and survival benefits achieved from endocrine, and HER2-targeted therapies 5 , 6 , 7 , 8 , 9 . Molecular classification of breast cancers into intrinsic subtypes has been embraced in the medical community because of its importance for both clinical decision-making and the development of new breast cancer treatments 10 .

The PAM50 classifier, an optimally selected, minimized, 50-gene-based subtype predictor, was developed from 189 prototypical samples representing the five intrinsic subtypes to capture the major intrinsic subtypes in a general patient population in relative proportions 9 , 11 . The clinical value of the PAM50 classifier was validated on independent cohorts of samples 9 , 11 . The final PAM50 algorithm consists of centroids constructed as described by Parker et al. 9 , 12 . Tumor subtype classification is assigned based on the nearest of the five centroids, with distances calculated using Spearman’s rank correlation 9 . The PAM50 classifier has become the widely accepted gold standard for intrinsic subtyping. A clinical-grade, standardized version of this test is the Prosigna® Breast Cancer Prognostic Gene Signature Assay (Veracyte, Inc.), which also includes a numeric score that integrates the intrinsic subtype information with tumor size. This score has been shown to indicate the probability of cancer recurrence during the next 10 years for patients with hormone receptor-positive (HR+)/HER2-negative (HER2−) early breast cancer 13 , 14 .
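The classification rule described above — assign each tumor to the nearest of five subtype centroids, with "nearness" measured by Spearman's rank correlation — can be illustrated with a short sketch. Note the centroid vectors and subtype labels below are random placeholders for illustration only, not the published PAM50 centroids, and the simple ranking ignores tie handling used in production implementations.

```python
import numpy as np

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks.
    # Simple ordinal ranks are used here; real implementations
    # (e.g., scipy.stats.spearmanr) average tied ranks.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def nearest_centroid_subtype(sample, centroids):
    """Assign the subtype whose centroid correlates best (Spearman)
    with the sample's 50-gene expression profile."""
    scores = {name: spearman(sample, c) for name, c in centroids.items()}
    return max(scores, key=scores.get), scores

# Toy example with random placeholder centroids (NOT the real PAM50 ones):
rng = np.random.default_rng(0)
subtypes = ["LumA", "LumB", "HER2-E", "Basal-like", "Normal-like"]
centroids = {s: rng.normal(size=50) for s in subtypes}
sample = centroids["LumB"] + rng.normal(scale=0.05, size=50)  # near LumB
label, scores = nearest_centroid_subtype(sample, centroids)
```

Because the rule depends only on rank correlation, it is insensitive to monotone transformations of a sample's expression values, which is part of why platform and normalization choices still matter mainly through their effect on gene-level rank order.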

Contemporary breast cancer trials and clinical studies are often focused on molecular subgroups as defined by HR+ and HER2+ status for inclusion criteria. However, with improving sequencing technologies, it is easier to obtain high-dimensional gene-expression data using RNAseq techniques for formalin-fixed paraffin-embedded (FFPE) samples collected from clinical trials. The publicly available PAM50 classifier algorithm and associated intrinsic subtype centroids (i.e., average profiles) were developed to capture the major intrinsic subtypes in a general population of patients with breast cancer in relative proportions. The clinicopathologic distribution of the study cohort should be carefully considered and normalized. For example, it should be determined if the study cohort has mainly ER-positive (ER+) breast cancer or triple-negative breast cancer. Furthermore, the technology platform should be calibrated for gene-expression profiling 15 . The absolute intrinsic molecular subtyping (AIMS) algorithm was originally trained to recapitulate the intrinsic subtype classification by the PAM50 algorithm and was claimed to have 77% agreement in testing 16 . This bioinformatic approach was suggested to accurately assign subtypes to individual patients regardless of normalization procedures used, relative frequencies of ER+ tumors or subtypes, or other clinicopathologic patient attributes. However, the accuracy of AIMS in predicting intrinsic subtyping by PAM50, as originally developed, has not been cross-validated by an independent study.

The aim of this analysis was to determine if there were any discrepancies between PAM50 and AIMS in regard to intrinsic subtyping by performing head-to-head comparisons of these different next-generation sequencing technologies using the same sample sets. Data were collected using tumor samples from postmenopausal women with ER+/HER2− breast cancer included in two randomized trials, PALOMA-2 and PALLET. The predictive value of the intrinsic subtypes, as assessed by the different methodological approaches, for progression-free survival (PFS) treatment benefit of palbociclib was also evaluated.

A total of 666 patients were enrolled in PALOMA-2; of these, 455 patients had HTG gene-expression data, and 222 patients had both ruoProsigna-PAM50 and HTG-AIMS data. Baseline demographics and disease characteristics were similar between patients in the overall cohort in PALOMA-2 and those with biomarker data (Supplementary Table 1 ). An overall 54% agreement rate was observed between the ruoProsigna-PAM50 and HTG-AIMS methods. In total, 46% of samples (56/121) assigned as LumB subtype by the gold standard ruoProsigna-PAM50 were assigned as LumA by HTG-AIMS, and 67% (6/9) of those assigned as BL by ruoProsigna‑PAM50 were assigned as HER2-E by HTG-AIMS (Table 1 ). Cohen’s kappa statistic of agreement was 0.30 ( P  < 0.0001), indicating a fair agreement between the two computational subtyping methods that were generated from their respective gene-expression profiles. Since the clinical-grade Prosigna does not provide an NL subtype, we assigned the NL in AIMS to its closest subtype, LumA, to calculate a kappa statistic.
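The agreement figures reported here (overall percent agreement and Cohen's kappa) are standard statistics computable from paired subtype calls. The sketch below is illustrative only, not the study's analysis code; the label values are hypothetical.

```python
import numpy as np

def agreement_stats(labels_a, labels_b):
    """Observed percent agreement and Cohen's kappa (chance-corrected
    agreement) between two sets of categorical calls on the same samples."""
    cats = sorted(set(labels_a) | set(labels_b))
    idx = {c: i for i, c in enumerate(cats)}
    n = len(labels_a)
    table = np.zeros((len(cats), len(cats)))  # confusion table: rater A rows, rater B columns
    for a, b in zip(labels_a, labels_b):
        table[idx[a], idx[b]] += 1
    po = np.trace(table) / n                     # observed agreement
    pe = table.sum(1) @ table.sum(0) / n ** 2    # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    return po, kappa
```

On the conventional Landis–Koch scale, a kappa of about 0.30, as observed between ruoProsigna-PAM50 and HTG-AIMS, falls in the "fair" range, even though raw percent agreement can look considerably higher when a few subtypes dominate.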

In PALOMA-2, the ruoProsigna-PAM50 identified 54.5% of samples (121/222) as LumB, while the HTG-AIMS and HTG-PAM50.sgPct methods identified 50.3% and 48.6% of the samples as LumA, respectively (Fig. 1a). HER2-E and BL subtyping also differed between methods (ruoProsigna-PAM50, 9.0% and 4.1%; HTG-PAM50.sgPct, 5.7% and 7.5%; HTG-AIMS, 18.7% and 0.4%, respectively). Among the methods that tested for the NL subtype, HTG-PAM50.sgPct identified 10.3% as NL, whereas HTG-AIMS identified 0.9% as NL. Although the agreement between intrinsic subtyping methods was fair, the prognostic nature of the subtyping was conserved and consistent across the methods, particularly for PAM50 subtyping results on HTG and NanoString data. PFS by ruoProsigna-PAM50-derived subtype showed that palbociclib plus letrozole versus letrozole alone conferred a greater benefit for all patients regardless of LumA, LumB, HER2-E, or BL subtype; however, the sample size for the BL subtype was small, limiting the interpretation of the finding (treatment interaction P = 0.28; likelihood ratio test) (Fig. 1b). Hazard ratios and 95% CIs for PFS by breast cancer subtype and subtyping method are shown in Fig. 1c. These observations were corroborated by the survival analysis of HTG-PAM50.sgPct data. Regardless of the subtyping method used, palbociclib plus letrozole improved PFS compared with letrozole alone.

figure 1

a Pie charts of subtype distributions by HTG-AIMS, HTG-PAM50.sgPct, and ruoProsigna-PAM50 subtyping methods among total available samples. HTG-AIMS and HTG-PAM50.sgPct were applied on the entire cohort of 455 samples; only 222 samples were available for ruoProsigna-PAM50. b Kaplan–Meier curves of median PFS by subtype and treatment in PALOMA-2 with ruoProsigna-PAM50, the gold standard. c Hazard ratios and 95% CIs for PFS by breast cancer subtype and subtyping method in PALOMA-2; for the HTG-AIMS analysis, six patients were not included (four NL and two BL). *Data from Finn et al. 17 . LET letrozole, N number of patients in the analysis group, n number of patients, NA not available, PAL palbociclib, PBO placebo.

Comparison of PAM50 intrinsic subtyping with AIMS on HTG data

PAM50 subtyping methods with the HTG panel on the PALOMA-2 patient samples are shown in Supplementary Table 2 . As described in the Methods, proper normalization should be performed before applying the original PAM50 method on sample subgroups. As shown in Supplementary Table 2 , applying the PAM50 method 9 (without normalization), namely HTG-PAM50, tended to classify samples equally to each subtype even though PALOMA-2 included only patients with ER+/HER2− disease. After proper normalization, HTG-PAM50.sgPct improved the intrinsic subtyping assignments by reassigning many of the patients to LumA or LumB from other subtypes. HTG-AIMS classified patients into primarily LumA, LumB, and HER2-E subtypes. When comparing HTG-PAM50 and HTG-PAM50.sgPct with HTG-AIMS, the highest subtype agreement was observed with the LumA subtype (Supplementary Tables 3 and 4 ).

Molecular subtyping of PALLET samples was performed using different methods with the RNAseq gene-expression data, including RNAseq-AIMS and RNAseq-PAM50.sgMd.TC. In PALLET, 224 patients had RNAseq data at baseline, and a 69% agreement between the RNAseq-AIMS and RNAseq-PAM50.sgMd.TC computational approaches was observed. Only 4% of samples that were assigned LumB by RNAseq-PAM50.sgMd.TC were assigned as LumA by RNAseq-AIMS, but 17% and 16% of samples that were assigned as LumA were classified as LumB or NL by RNAseq-AIMS, respectively (Table 2). Distributions of subtypes identified by additional subtyping methods are shown in Supplementary Table 5.

In PALLET, the RNAseq-AIMS method classified 44.2% of samples as LumA, whereas RNAseq-PAM50.sgMd.TC classified 69.6% of samples as LumA, with an overall reduction in the percentage of samples classified as other subtypes (i.e., LumB, HER2-E, and NL; Fig. 2a ). An equal percentage of samples was classified as BL between the two subtyping methods (1.8%). Odds ratios and 95% CIs for non-CCCA by breast cancer subtype in PALLET are shown in Fig. 2b . Overall, percentages of patients with non-CCCA were significantly lower in the palbociclib arm using both the RNAseq-AIMS and RNAseq-PAM50.sgMd.TC subtyping methods. The individual subtypes classified with RNAseq-AIMS and RNAseq-PAM50.sgMd.TC also favored the palbociclib arm; the difference was significant for LumB patients subtyped with the RNAseq-AIMS method and LumA patients subtyped with the RNAseq-PAM50.sgMd.TC method but did not reach significance for other subtype groups owing to the small numbers of patients.

figure 2

a Pie chart of subtype distributions by the RNAseq-AIMS and RNAseq-PAM50.sgMd.TC subtyping methods for PALLET samples. b Odds ratios and 95% CIs for non-CCCA by breast cancer RNAseq-AIMS and RNAseq-PAM50.sgMd.TC subtype in PALLET. LET letrozole, n number of patients in the subtype treatment group, PAL palbociclib.

Evaluation of the heterogeneity of intrinsic subtyping assignments

In examining the largest and second-largest distance to PAM50 intrinsic subtyping centroids in the gold-standard ruoProsigna-PAM50 assay, the distances and correlations between the two closest centroids were very close for some samples (Fig. 3a ). In particular, most of those close pairs were LumA‒LumB, LumB‒HER2-E, or HER2-E–BL. This suggests the potential for misclassification of samples with vague boundaries between LumA and LumB, LumB and HER2-E, and HER2-E and BL. On the other hand, it is unlikely that a LumA sample would be misclassified as HER2-E or BL, or vice versa, by PAM50.
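
The idea of flagging samples whose two best centroid correlations are nearly tied can be sketched as follows. This is a minimal illustration, not the Prosigna algorithm: the 4-gene centroids and the sample vector are invented values, and the 0.1 margin mirrors the threshold discussed in the text.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def classify_with_margin(sample, centroids, margin=0.1):
    """Return (closest subtype, second-closest subtype, borderline flag).

    A sample is flagged borderline when the correlations to its two
    closest centroids differ by less than `margin`.
    """
    corrs = sorted(((pearson(sample, c), name) for name, c in centroids.items()),
                   reverse=True)
    (r1, best), (r2, second) = corrs[0], corrs[1]
    return best, second, (r1 - r2) < margin

# Toy 4-gene centroids (hypothetical values, for illustration only)
centroids = {
    "LumA":  [2.0, 1.5, -1.0, -2.0],
    "LumB":  [1.5, 2.0, -0.5, -1.5],
    "HER2E": [-1.0, 0.5, 2.0, -0.5],
    "Basal": [-2.0, -1.5, -0.5, 2.0],
}
sample = [1.8, 1.9, -0.8, -1.8]  # strongly luminal, near the LumA-LumB boundary
best, second, borderline = classify_with_margin(sample, centroids)
```

On this toy sample the two luminal centroids correlate almost equally, so the call is flagged borderline, matching the LumA‒LumB ambiguity described above.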

figure 3

a Plot of the largest two correlations between sample expression data and ruoProsigna subtype centroids. The solid line is the equal line, and the dashed line is 0.1 from the equal line. Each symbol is a sample. Symbol “AB” means the largest correlation coefficient is with LumA and the second largest correlation is with LumB. The same pattern follows for the remaining symbol combinations. b PFS in the palbociclib plus letrozole group between patients with clear-defined subtypes (solid lines) versus patients with discord subtypes (dashed lines) based on agreement between the HTG-AIMS and ruoProsigna-PAM50 methods. In the figure on the left, the subtype is based on AIMS; in the figure on the right, the subtype is based on PAM50. LET letrozole, NonLum nonluminal, PAL palbociclib.

For the 222 patients in PALOMA-2 with both HTG-AIMS and ruoProsigna-PAM50 subtyping results, the subtype agreement was 54% (κ = 0.3). Patients whose subtypes agreed between the two methods were defined as subtype clear-defined ( n  = 119), and the remaining patients were subtype borderline-defined or discord. Figure 3b shows PFS among patients in the palbociclib arm with subtypes defined by HTG-AIMS (left panel) and by ruoProsigna-PAM50 (right panel). In the AIMS subtype, the clear-defined subtypes (solid lines) are highly prognostic, with LumA having better PFS than LumB, and HER2-E having the worst. However, in patients with subtype discord (dashed lines), PFS in patients with HER2-E was similar to PFS in patients with clear-defined LumB subtype. This suggests that AIMS did not separate LumB and HER2-E subtypes well, with discord HER2-E more like clear-defined LumB, and discord LumB more like clear-defined HER2-E. The same data plotted with ruoProsigna-PAM50 subtypes (Fig. 3b , right panel) showed that PFS in the discord LumA and clear-defined LumB subtypes were similar, which suggests that PAM50 separated LumA and LumB poorly, with discord LumA more like LumB and discord LumB more like LumA. The discord nonluminal subtype was very similar to clear-defined nonluminal, indicating that the ruoProsigna-PAM50‒defined nonluminal subtype was consistently assigned.
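
The agreement statistic quoted above (54%, κ = 0.3) is Cohen's kappa, which discounts the agreement expected by chance from the two methods' marginal subtype frequencies. A minimal sketch on toy paired calls (the six example labels are invented):

```python
from collections import Counter

def cohens_kappa(calls_a, calls_b):
    """Cohen's kappa for agreement between two raters' categorical calls."""
    assert len(calls_a) == len(calls_b)
    n = len(calls_a)
    observed = sum(a == b for a, b in zip(calls_a, calls_b)) / n
    freq_a, freq_b = Counter(calls_a), Counter(calls_b)
    # Chance agreement: product of the two raters' marginal frequencies
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

aims  = ["LumA", "LumA", "LumB", "HER2E", "LumB", "LumA"]
pam50 = ["LumA", "LumB", "LumB", "HER2E", "LumA", "LumA"]
kappa = cohens_kappa(aims, pam50)
```

Here raw agreement is 4/6 but chance agreement is 14/36, giving κ ≈ 0.45; the same discounting is why 54% raw agreement in PALOMA-2 corresponds to a modest κ of 0.3.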

Furthermore, a numerical experiment was conducted by switching the subtype from the closest centroid to the second-closest centroid if the correlation distance was within 0.1. About 22.5% of patients ( n  = 50/222) had their subtype switched to the second-closest centroid. All switches occurred between adjacent subtypes when ordered as LumA‒LumB‒HER2-E‒BL (Supplementary Table 6 ). The switched subtypes were still highly prognostic, and the treatment benefit of palbociclib was maintained in all subtypes (data not shown).

Discussion

This study reports the first head-to-head comparison of breast cancer subtyping methods and the clinical association of their differences with survival outcomes in randomized trials. Intrinsic subtyping classification is based on the concept of assigning a tumor to the subtype with the highest similarity to a defined molecular profile, thus representing a spectrum of similarities. This molecular continuum can be clearly seen in Fig. 4 , which shows that there is overlap between the different intrinsic subtypes. In this study, we found a considerable lack of agreement (less than 70%) between the different subtyping methods for classifying ER+/HER2− tumors. Using gene-expression data derived from RNAseq in patients with operable breast cancer from PALLET, there was a 69% agreement in assigning molecular subtypes between the RNAseq-AIMS and RNAseq-PAM50.sgMd.TC methods. Using the HTG-derived gene-expression data from patients with advanced breast cancer in PALOMA-2, there was a 54% agreement in assigning molecular subtypes between the HTG-AIMS and ruoProsigna-PAM50 classifying methods. In addition, among the 119 patients in PALOMA-2 in which the two classifying methods were in agreement (clear-defined subtypes), both the HTG-AIMS and ruoProsigna-PAM50 classifying methods were highly prognostic for PFS among the subtypes; however, for the 103 patients in which the two classifying methods did not agree (discord subtypes), the HTG-AIMS and ruoProsigna-PAM50 classifying methods were poorly prognostic for PFS. These findings highlight the limitations in clearly assigning a tumor to a specific molecular subtype and indicate that caution should be taken when evaluating the molecular subtype of tumors.

figure 4

Heatmap of cross-classification of different breast cancer molecular intrinsic subtypes according to PAM50 bioclassifier centroids and their proposed sensitivities to CDK4/6 inhibitors.

While there was a lack of agreement in assigning molecular subtypes between methods, the subtyping methods did provide prognostic information relevant to treatment with palbociclib, consistent with previous studies. A 2020 analysis by Finn et al. presented PFS data by AIMS subtyping in PALOMA-2 17 . The PFS benefit of palbociclib plus letrozole over letrozole plus placebo was greatest among luminal subtypes and reduced in the HER2-like subtype, although observations in the HER2-like subtype were limited by the small sample size (19% of patients). Similar AIMS subtyping results were observed in PALOMA-3, with the majority of patients being assigned a LumA subtype (44.0%), followed by LumB (30.8%), HER2-E (20.9%), NL (2.6%), and BL (1.7%) 18 . A benefit of palbociclib plus fulvestrant versus placebo plus fulvestrant was also observed regardless of the luminal subtype. However, patients with the LumA subtype had longer PFS than patients with the LumB subtype. In the current study, when patients in PALOMA-2 were subtyped by the HTG-AIMS method, palbociclib plus letrozole versus placebo plus letrozole also demonstrated a significant PFS benefit in patients with LumA and LumB subtypes. When the PALOMA-2 samples were reanalyzed with the ruoProsigna-PAM50 classifier, results also demonstrated that palbociclib provided a PFS benefit in all patients regardless of the luminal subtype, and similarly benefited the HER2-E subtype; however, the benefit of palbociclib plus letrozole was marginal in patients with the BL subtype. These findings align with those reported in an analysis of 1303 tumor samples from patients with HR+/HER2− disease across the MONALEESA-2, MONALEESA-3, and MONALEESA-7 trials, which demonstrated that the cyclin-dependent kinases 4 and 6 (CDK4/6) inhibitor ribociclib plus endocrine therapy benefited patients with tumors characterized by all molecular subtypes except BL 19 .
Of note, in this subtyping analysis of tumor samples across the MONALEESA trials, intrinsic subtyping was not assessed with the HTG-PAM50, RNAseq-PAM50, or ruoProsigna-PAM50 methods. This consistency between studies nonetheless suggests that intrinsic subtyping robustly identifies the BL subtype as resistant to CDK4/6 inhibitors, a finding that could inform future biomarker trial design when evaluating the efficacy of CDK4/6 inhibitors alone or in combination with other experimental drugs in the early breast cancer setting. In addition, the benefit of palbociclib for the HER2-E subtype demonstrated in the PALOMA-2 and PALLET cohorts in this study and in the PALOMA-2 cohort by Finn et al., and the benefit of ribociclib for the HER2-E subtype shown in the MONALEESA trials, suggests that there could be a role for CDK4/6 inhibitors in patients with ER+/HER2+ breast cancer, which is also consistent with preclinical data 20 .

This analysis shows the importance of using validated methods, as the determination of intrinsic subtypes varied between methods, and not all tumors had a clearly defined subtype, with some having an intrinsic subtype at the boundary of two subtypes. However, despite the lack of agreement between the methods used, the prognostic value of PAM50 subtyping prevailed. AIMS resulted in a better classification of LumA versus LumB but separated LumB and HER2-E poorly. Prosigna PAM50 defined HER2-E clearly but did not provide as clear a distinction between LumA and LumB. Based on this analysis of intrinsic subtypes, palbociclib plus endocrine therapy should be considered for all patients with ER+/HER2− metastatic breast cancer; the value of alternative treatment for patients with BL tumors warrants future evaluation. A standardized clinical intrinsic subtyping assay and bioinformatics approach, such as the PAM50 classifier, should be used in clinical practice because discrepancies in gene-expression platforms and algorithms may lead to different results and could misdirect treatment decisions.

Patient population

PALOMA-2 (NCT01740427) was a double-blind, randomized, phase 3 study, in which 666 postmenopausal women with ER+/HER2− advanced breast cancer with no prior treatment for advanced disease were randomly assigned 2:1 to receive palbociclib plus letrozole or placebo plus letrozole 21 . The primary endpoint was investigator-assessed PFS. The details of PALOMA-2 have been previously published 21 ; results demonstrated significantly longer PFS with palbociclib plus letrozole compared with placebo plus letrozole.

PALLET was a randomized, multicenter, phase 2 neoadjuvant trial 22 . Postmenopausal women with unilateral, operable, ER+/HER2− tumors that were ≥2 cm as observed by ultrasound with no evidence of metastatic disease were randomly assigned in a 3:2:2:2 ratio (A:B:C:D) to 1 of 4 treatment groups. Group A received letrozole alone for 14 weeks; group B received letrozole for 2 weeks followed by palbociclib plus letrozole for 12 weeks, for a total duration of 14 weeks; group C received palbociclib for 2 weeks followed by palbociclib plus letrozole for 12 weeks, for a total duration of 14 weeks; and group D received palbociclib plus letrozole for 14 weeks. The parallel 4-group design allowed the role of each drug in the suppression of the proliferation marker Ki-67 to be evaluated both alone and in combination. Ki-67 was centrally assessed. The main results of PALLET have been previously published 22 ; briefly, adding palbociclib to letrozole significantly enhanced the suppression of malignant cell proliferation (Ki-67) in people with primary ER+ breast cancer but did not increase the clinical response rate over 14 weeks.

PALOMA-2 was approved by the WCG institutional review board in accordance with the International Council on Harmonization Good Clinical Practice guidelines and the provisions of the Declaration of Helsinki 21 . PALLET was approved by the National Research Ethics Service Committee London—Fulham in accordance with the International Ethical Guidelines for Biomedical Research Involving Human Subjects 22 . For both trials, all patients provided written informed consent and an independent data and safety monitoring committee met every 6 months to review safety data and perform the interim analysis.

Tissue samples

In PALOMA-2, submission of FFPE tumor samples was mandatory, as previously described 17 . Patients consented to the evaluation of biomarkers associated with sensitivity and/or resistance to palbociclib plus letrozole per the study protocol. In PALLET, core-cut biopsies and trial-specific blood samples were obtained at baseline (after randomization), 2 weeks (before the start of the second drug for groups B and C), and at 14 weeks or at the discontinuation of study therapy (within 48 h of the last dose of study treatment) 22 .

Gene-expression profiling

In PALOMA-2, gene-expression profiling assays were only performed on samples from patients who had consented to their use. Analyses of gene expression (RNA) were performed with the EdgeSeq Oncology Biomarker Panel (HTG Molecular Diagnostics, Inc), as previously reported 17 . RNA expression levels of 2549 gene targets in FFPE tissues were quantified with targeted capture sequencing. The first section of breast cancer FFPE tissue was stained with hematoxylin and eosin (H&E). The tumor cell content and tissue necrosis were assessed by a board-certified pathologist, and the number of malignant cells as a proportion of all cells (i.e., malignant plus normal cells in the tissue section) was used to estimate tumor content. The acceptance criterion for analysis of tumors was established at >70% tumor content, and the percentage of necrotic tissue within the total tissue area was used to determine necrosis. The necrosis acceptance criterion for analysis was established at <20% necrosis. If the tumor content was <70% or if necrosis was ≥20%, macrodissection was performed on the tissue sections per standard laboratory processes and manufacturer protocols. Sequencing was performed on an Illumina NextSeq 500 sequencer (Illumina, Inc.). For normalization, probe counts were transformed into log2 counts per million. Expression values were quantile normalized. HTG Molecular Diagnostics, Inc., was blinded to patient information and clinical outcomes.
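
The two normalization steps named above (log2 counts per million, then quantile normalization) can be sketched in a few lines. This is an illustrative reimplementation, not HTG's pipeline; the 0.5 prior count used to avoid log2(0) is an assumption.

```python
from math import log2

def log2_cpm(counts, prior=0.5):
    """log2 counts-per-million for one sample's raw probe counts.

    The small prior count avoiding log2(0) is an assumption, not a
    documented choice of the HTG pipeline.
    """
    total = sum(counts) + 1.0
    return [log2((c + prior) / total * 1e6) for c in counts]

def quantile_normalize(samples):
    """Quantile normalization: force every sample onto the same empirical
    distribution, namely the per-rank mean across samples."""
    n_genes = len(samples[0])
    ranked = [sorted(s) for s in samples]
    reference = [sum(r[i] for r in ranked) / len(samples) for i in range(n_genes)]
    out = []
    for s in samples:
        order = sorted(range(n_genes), key=lambda g: s[g])  # rank within sample
        norm = [0.0] * n_genes
        for rank, gene in enumerate(order):
            norm[gene] = reference[rank]
        out.append(norm)
    return out
```

After quantile normalization, every sample shares the same set of values; only the per-gene ranks differ between samples.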

In PALOMA-2, the research-use-only (RUO) PAM50 NanoString Breast Cancer Prognostic Gene Signature Assay using the NanoString nCounter Dx Analysis System was validated and implemented at HistoGeneX, Belgium. NanoString confirmed that the kit components and instructions for use were identical between the RUO PAM50 NanoString Breast Cancer Prognostic Gene Signature Assay kit and the investigational-use-only-labeled PAM50 NanoString Breast Cancer Prognostic Gene Signature Assay kit. FFPE breast tumor tissue blocks were submitted to HistoGeneX, and the FFPE tumor blocks were sectioned for ≤10 tissue slides of 5 µm each. A certified pathologist reviewed and evaluated the prepared H&E slides to confirm the area of breast carcinoma and tumor surface area were suitable for PAM50 testing before sample processing. If it was confirmed that no tumor was present, sample processing was canceled, and results were not reported for that sample. After macrodissection of the tumor area, RNA was extracted, and an elution volume of 30 µL was used for analysis. nCounter gene expression analysis was performed at HistoGeneX in batches of ≤10 samples along with a duplicate control sample. Raw data reporter code count files were transferred to NanoString for analysis using a software module with the same normalization and algorithm used for the investigational-use-only PAM50 NanoString Breast Cancer Prognostic Gene Signature Assay, which reports the intrinsic subtyping according to the PAM50 gene expression algorithm. In addition, a 300-ng aliquot in a volume of 12 µL was submitted to NanoString for further analysis. NanoString transferred the PAM50 results to HistoGeneX and Pfizer for statistical analyses. Both HistoGeneX and NanoString were blinded to patient information and clinical outcomes.

In PALLET, RNA sequencing (RNAseq) of baseline samples was performed on fresh frozen biopsies for 224 patients (letrozole only, n  = 77; letrozole plus palbociclib, n  = 147). Transcriptome RNAseq was performed using total RNA. Strand-specific, poly-A+ RNAseq libraries for sequencing on the Illumina platform were prepared as previously described 23 . At the ligation step, Illumina unique dual barcode adapters (Cat# 20022370) were ligated onto samples. Libraries were amplified in 50-µL reactions containing 150 pmol of P1.1 (5’-AATGATACGGCGACCACCGAGA) and P3 (5’-CAAGCAGAAGACGGCATACGAGA) primers and the Kapa HiFi HotStart Library Amplification kit (Cat# kk2612, Roche Sequencing and Life Science). The following PCR conditions were used: incubation at 95 °C for 45 s; followed by 13–15 cycles of 95 °C for 15 s, 60 °C for 30 s, and 72 °C for 1 min; and 1 cycle at 72 °C for 5 min. The amplified libraries were purified with 1.4× AMPure XP beads and eluted into 50 µL of H2O. Libraries were quality controlled on a fragment analyzer using a DNA7500 kit (5067–1506, Agilent Technologies), and library yields were determined based on a range of 200–800 bp. Libraries were pooled in equimolar ratios and sequenced on the Illumina platform. Of 427 samples, 34 were sequenced on a HiSeq 2000/2500 instrument to generate 2 × 100-bp reads, and the remaining samples were sequenced on a NovaSeq 6000 instrument using the S4 reagent kit (300 cycles) to generate 2 × 150-bp paired-end reads. An average of 82 million reads per sample were generated. The raw reads of the RNAseq data were aligned to the human genome GRCh38 with gene annotation GENCODE v22 using STAR (v2.5.3a). The read count (i.e., the number of reads mapped to each gene) was produced using HTSeq (v0.12.4). 
The RNAseq gene expression was evaluated by the upper quartile fragments per kilobase of transcript per million mapped reads, which normalized the read count by dividing it by the gene length and the 75th percentile read count of protein-coding genes for the sample.
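
The upper-quartile normalization described above can be sketched as follows. This is an illustrative reimplementation under stated assumptions: gene lengths are taken in kilobases, the upper-quartile count is expressed in millions, and the nearest-rank percentile rule stands in for whatever interpolation the actual pipeline used.

```python
def fpkm_uq(counts, lengths_bp, is_coding):
    """Upper-quartile FPKM sketch: each read count is divided by the gene
    length (in kb) and by the 75th-percentile read count of protein-coding
    genes (in millions). Nearest-rank percentile is an assumption."""
    coding = sorted(c for c, keep in zip(counts, is_coding) if keep)
    uq = coding[int(0.75 * (len(coding) - 1))]  # 75th percentile of coding counts
    return [c / (l / 1e3) / (uq / 1e6) for c, l in zip(counts, lengths_bp)]

# Toy example: four protein-coding genes with invented counts and lengths
counts, lengths = [100, 200, 300, 400], [1000, 1000, 2000, 1000]
vals = fpkm_uq(counts, lengths, [True] * 4)
```

Note that the gene of length 2000 bp (count 300) ends up with a lower normalized value than the equally expressed 1000-bp gene, which is the length correction the text describes.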

Because the EdgeSeq Oncology platform has not been used to profile diverse, large reference tumor sets, and because the PALOMA-2 study only included patients with ER+ disease, the widely used PAM50 classification scheme was not feasible. Instead, the single sample predictor algorithm AIMS (referred to as HTG-AIMS in this paper) used a set of binary rules to compare expression measurements for pairs of genes to classify tumors into intrinsic subtypes for each patient 16 , 17 . Because only 42 of the 100 binary rules could be applied based on genes in the EdgeSeq Oncology BM panel, classification performance was assessed by downsampling The Cancer Genome Atlas (TCGA) data from genome-wide to the EdgeSeq Oncology panel subset. Furthermore, the impact of using 42 rules on the agreement between AIMS and PAM50 is minimal, since the AIMS subtypes derived from the 42 rules are highly consistent with those derived from the full 100 rules. Using all genes versus EdgeSeq Oncology panel genes only, the agreement between the AIMS subtypes and those classified by PAM50 was 77% vs 76%, respectively 17 . For PALLET, AIMS was applied to the whole transcriptomic data as described and referred to as RNAseq-AIMS 16 .
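
The single-sample, rule-based character of AIMS can be illustrated with a toy classifier. The three gene-pair rules below are invented for illustration; the real AIMS model uses 100 trained rules combined with naive-Bayes weighting rather than a simple majority vote.

```python
# Each rule fires when the first gene's expression exceeds the second's
# WITHIN one sample, so no cohort-level normalization is needed.
RULES = [
    ("ESR1", "MKI67", "LumA"),   # high ER relative to proliferation
    ("MKI67", "ESR1", "LumB"),
    ("ERBB2", "ESR1", "HER2E"),
]

def aims_like_call(expr):
    """Majority vote over the gene-pair rules that fire for this sample."""
    votes = {}
    for hi, lo, subtype in RULES:
        if expr[hi] > expr[lo]:
            votes[subtype] = votes.get(subtype, 0) + 1
    return max(votes, key=votes.get) if votes else "unclassified"

call = aims_like_call({"ESR1": 9.2, "MKI67": 4.1, "ERBB2": 5.0})
```

Because every comparison is internal to the sample, such a classifier is insensitive to platform scale, which is why AIMS could be applied to the EdgeSeq panel without the reference-cohort normalization that PAM50 requires.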

A summary of the PAM50 algorithm used in PALOMA-2 provided by NanoString is as follows, and results from this algorithm are referred to as ruoProsigna-PAM50 in this paper 24 . The NanoString RUO PAM50 algorithm is a 50-gene signature measuring the gene expression profile of each sample that allows for the classification of breast cancer into four biologically distinct subtypes (LumA, LumB, HER2-E, and BL) 9 . Quality control thresholds are determined using the geometric mean of eight housekeeping genes (HK Geomean), six positive controls, and eight negative controls to ensure RNA quality, and all samples must pass quality-control thresholds for results to be reported. Signal normalization is performed using the eight housekeeping genes. The algorithm proceeds in three steps. The first step involves scaling using two sets of scaling factors to bring the housekeeping and reference sample expression values into the scale necessary for the next step. The second step calculates the Pearson correlation between the observed scaled expression for the PAM50 genes and a centroid for each of the four subtypes, resulting in a set of four correlation values for each sample. The final step is to identify the subtype correlation with the greatest value and set that subtype as the subtype call for that sample.
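
The three steps can be sketched as follows. This is an illustration only: the housekeeping genes listed, the 3-gene centroids, and the sample are all invented, and simple mean-centering stands in for the assay's proprietary scaling factors and trained 50-gene centroids.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

HOUSEKEEPING = ["ACTB", "GUSB", "MRPL19", "PSMC4"]  # illustrative subset (assumption)

def call_subtype(log_expr, centroids):
    """Three-step sketch: (1) center the sample on its housekeeping-gene
    mean (a log-scale stand-in for the assay's scaling step); (2) correlate
    the scaled PAM50 profile with each subtype centroid; (3) return the
    subtype with the largest correlation."""
    hk = sum(log_expr[g] for g in HOUSEKEEPING) / len(HOUSEKEEPING)
    genes = sorted(next(iter(centroids.values())))
    profile = [log_expr[g] - hk for g in genes]
    return max(centroids,
               key=lambda s: pearson(profile, [centroids[s][g] for g in genes]))

# Toy 3-gene centroids (hypothetical values, for illustration only)
centroids = {
    "LumA":  {"ESR1": 2.0, "MKI67": -1.0, "ERBB2": -1.0},
    "HER2E": {"ESR1": -1.0, "MKI67": 0.5, "ERBB2": 2.0},
}
sample = {"ACTB": 10, "GUSB": 10, "MRPL19": 10, "PSMC4": 10,
          "ESR1": 12, "MKI67": 8, "ERBB2": 8}
call = call_subtype(sample, centroids)
```

The ER-high, ERBB2-low toy sample correlates best with the luminal centroid, mirroring step 3 of the algorithm.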

Intrinsic subtypes of the PALOMA-2 cohort were identified based on the HTG data using the PAM50 classifier 9 with subgroup-specific gene percentile centering (sgPct) as suggested by Zhao et al. 15 . Data were available for 49 of the 50 genes used for the PAM50 classifier. Subtyping results associated with this method are referred to as HTG-PAM50.sgPct in this paper (Supplementary Fig. 1 ). Each gene in the HTG data was centered on a specific percentile, where the percentile was determined from the RNAseq data of the ER+/HER2− subgroup of the TCGA breast cancer cohort. In the TCGA cohort, we took the percentile of the ER+/HER2− subgroup where the expression value corresponded to the global median expression value of the entire cohort for each gene. Then, in the PALOMA-2 cohort, the TCGA-derived subgroup percentile was assigned to each corresponding gene, and the gene expression was centered by subtracting the value at this percentile. The PAM50 classifier was then applied to the sgPct-centered HTG data to obtain the intrinsic subtypes. Technical calibration was not performed on the HTG data because no samples had been profiled on both the HTG and microarray platforms. Tumor samples from consenting patients in the PALOMA-2 trial were subtyped using the validated RUO PAM50 assay (ruoProsigna-PAM50); results were compared with published subtype results using AIMS on the EdgeSeq Oncology Biomarker Panel (HTG-AIMS; HTG Molecular Diagnostics®, Tucson, AZ, USA; Supplementary Fig. 1 ) 17 .
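
The per-gene sgPct centering just described can be sketched for a single gene. This is an illustrative reading of the procedure as summarized in the text, not the published implementation; the nearest-rank percentile rule and all example values are assumptions.

```python
def value_at_percentile(sorted_vals, p):
    """Nearest-rank value at fraction p (interpolation scheme is an assumption)."""
    idx = min(int(p * len(sorted_vals)), len(sorted_vals) - 1)
    return sorted_vals[idx]

def sgpct_center(gene_trial, gene_tcga_subgroup, tcga_global_median):
    """Subgroup-specific percentile centering for one gene:
    (1) find the percentile of the TCGA ER+/HER2- subgroup at which
    expression equals the whole-cohort global median, then
    (2) subtract the trial cohort's value at that same percentile."""
    ref = sorted(gene_tcga_subgroup)
    p = sum(v <= tcga_global_median for v in ref) / len(ref)
    anchor = value_at_percentile(sorted(gene_trial), p)
    return [x - anchor for x in gene_trial]

# Toy example: the global median (6) falls at the 50th percentile of the
# ER+/HER2- subgroup, so the trial data are centered on their own median.
centered = sgpct_center([2, 0, 6, 4], [4, 6, 8, 10], tcga_global_median=6)
```

The point of anchoring on a subgroup percentile rather than the trial's own median is that an all-ER+ cohort like PALOMA-2 is shifted relative to the mixed population on which the PAM50 centroids were trained.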

In PALLET, PAM50 subtyping was performed on data normalized with subgroup-specific gene centering and microarray-RNAseq calibration, which is labeled as RNAseq-PAM50.sgMd.TC. Because the publicly available PAM50 classifier was developed and validated for breast cancer subtype determination based on microarray data 9 , additional steps should be performed to calibrate the technical bias between RNAseq and microarray platforms when applying the PAM50 classifier to PALLET RNAseq data. RNAseq-PAM50.sgMd.TC includes a two-step calibration: subgroup-specific gene median centering normalization to correct clinical group bias and technical calibration for RNAseq to correct platform bias, as shown in Supplementary Fig. 1 . Each gene in the PALLET RNAseq data was centered to the median of the ER+/HER2− subgroup of the TCGA cohort by subtracting the differences between the PALLET median and the TCGA subgroup median from PALLET gene expression 5 . The subgroup-median-normalized RNAseq data were scaled to pretrained microarray-to-RNAseq technical calibration factors to correct the RNAseq bias to microarray before intrinsic subtype classification by the PAM50 classifier 5 , 9 .

Additional methods included the publicly available PAM50 classifier applied directly on PALLET RNAseq data (RNAseq-PAM50), the publicly available PAM50 classifier with subgroup-specific percentile gene centering (RNAseq-PAM50.sgPct) 15 , and the publicly available PAM50 classifier with subgroup-specific median gene centering (RNAseq-PAM50.sgMd).

We report the NL subtype as in the original classifier. The Prosigna assay does not include an NL subtype and assigns would-be NL samples to LumA; therefore, no NL calls were reported for ruoProsigna-PAM50. The research PAM50 and AIMS classifiers do include an NL subtype, so NL is reported for those methods.

Statistical analyses

In PALOMA-2, PFS was estimated using the Kaplan–Meier method, hazard ratios were calculated using Cox proportional hazard models, and 1-sided P values were calculated by the log-rank test. The agreements between intrinsic subtypes defined by the computational methods were compared by kappa statistics, and percentages were reported as descriptive statistics.
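
The Kaplan–Meier estimate named above is the product-limit estimator; a minimal sketch on invented follow-up times (the five toy patients below are illustrative only, and dedicated survival libraries would be used in practice):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times  -- follow-up time for each patient
    events -- True if progression was observed, False if censored
    Returns a list of (event time, estimated survival) pairs.
    """
    curve, surv = [], 1.0
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)  # events at t
        n = sum(1 for ti in times if ti >= t)                          # still at risk
        if d:
            surv *= 1 - d / n       # survival drops only at event times
            curve.append((t, surv))
    return curve

# Toy data: five patients, censored observations marked False
times  = [6, 6, 6, 7, 10]
events = [True, True, False, True, False]
curve = kaplan_meier(times, events)
```

Censored patients reduce the at-risk count without triggering a drop, which is how the method handles the incomplete follow-up inherent in a PFS analysis.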

In PALLET, patients with breast cancer with Ki-67 ≤ 2.7% after 14 weeks were classified as achieving complete cell cycle arrest (CCCA), and patients with Ki-67 > 2.7% were classified as not achieving CCCA (non-CCCA), which suggested resistance to treatment 22 . Odds ratios of non-CCCA and 95% CIs were estimated by Fisher’s exact test for each subtype.
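
For a 2 × 2 table of treatment arm versus non-CCCA status, the odds ratio and two-sided Fisher exact p-value can be computed directly from hypergeometric probabilities. A stdlib-only sketch on an invented table (real analyses would use an established statistics package):

```python
from math import comb

def odds_ratio(a, b, c, d):
    """OR for the 2x2 table [[a, b], [c, d]]:
    rows = treatment arm, columns = non-CCCA yes/no."""
    return (a * d) / (b * c)

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value: sum the hypergeometric
    probabilities of all tables with the same margins that are no more
    likely than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Toy table: 3/4 non-CCCA in one arm vs 1/4 in the other (invented counts)
or_val = odds_ratio(3, 1, 1, 3)
p_val = fisher_exact_two_sided(3, 1, 1, 3)
```

With only four patients per arm the odds ratio of 9 is nowhere near significant (p ≈ 0.49), echoing why several per-subtype comparisons in PALLET did not reach significance despite favorable point estimates.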

Data availability

Upon request, and subject to review, Pfizer will provide the data that support the findings of this study. Subject to certain criteria, conditions, and exceptions, Pfizer may also provide access to the related individual de-identified participant data. See https://www.pfizer.com/science/clinical-trials/trial-data-and-results for more information.

Code availability

The PAM50 and AIMS code is publicly available. All steps of the data analyses and processing are described in the Methods.

Rugo, H. et al. Endocrine therapy for hormone receptor-positive metastatic breast cancer: American Society of Clinical Oncology guideline. J. Clin. Oncol. 34 , 3069–3103 (2016).

Thomssen, C., Balic, M., Harbeck, N. & Gnant, M. St. Gallen/Vienna 2021: a brief summary of the consensus discussion on customizing therapies for women with early breast cancer. Breast Care (Basel) 16 , 135–143 (2021).

Perou, C. M. et al. Molecular portraits of human breast tumours. Nature 406 , 747–752 (2000).

Sorlie, T. et al. Gene expression patterns of breast carcinomas distinguish tumor subclasses with clinical implications. Proc. Natl. Acad. Sci. USA 98 , 10869–10874 (2001).

Carey, L. A. et al. Molecular heterogeneity and response to neoadjuvant human epidermal growth factor receptor 2 targeting in CALGB 40601, a randomized phase III trial of paclitaxel plus trastuzumab with or without lapatinib. J. Clin. Oncol. 34 , 542–549 (2016).

Llombart-Cussac, A. et al. HER2-enriched subtype as a predictor of pathological complete response following trastuzumab and lapatinib without chemotherapy in early-stage HER2-positive breast cancer (PAMELA): an open-label, single-group, multicentre, phase 2 trial. Lancet Oncol. 18 , 545–554 (2017).

Acheampong, T., Kehm, R. D., Terry, M. B., Argov, E. L. & Tehranifar, P. Incidence trends of breast cancer molecular subtypes by age and race/ethnicity in the US from 2010 to 2016. JAMA Netw. Open 3 , e2013226 (2020).

Sestak, I. et al. Comparison of the performance of 6 prognostic signatures for estrogen receptor-positive breast cancer: a secondary analysis of a randomized clinical trial. JAMA Oncol. 4 , 545–553 (2018).

Parker, J. S. et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 27 , 1160–1167 (2009).

Russnes, H. G., Lingjaerde, O. C., Borresen-Dale, A. L. & Caldas, C. Breast cancer molecular stratification: from intrinsic subtypes to integrative clusters. Am. J. Pathol. 187 , 2152–2162 (2017).

Nielsen, T. O. et al. A comparison of PAM50 intrinsic subtyping with immunohistochemistry and clinical prognostic factors in tamoxifen-treated estrogen receptor-positive breast cancer. Clin. Cancer Res. 16 , 5222–5232 (2010).

Tibshirani, R., Hastie, T., Narasimhan, B. & Chu, G. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc. Natl. Acad. Sci. USA 99 , 6567–6572 (2002).

Dowsett, M. et al. Comparison of PAM50 risk of recurrence score with oncotype DX and IHC4 for predicting risk of distant recurrence after endocrine therapy. J. Clin. Oncol. 31 , 2783–2790 (2013).

Gnant, M. et al. Predicting distant recurrence in receptor-positive breast cancer patients with limited clinicopathological risk: using the PAM50 risk of recurrence score in 1478 postmenopausal patients of the ABCSG-8 trial treated with adjuvant endocrine therapy alone. Ann. Oncol. 25 , 339–345 (2014).

Zhao, X., Rodland, E. A., Tibshirani, R. & Plevritis, S. Molecular subtyping for clinically defined breast cancer subgroups. Breast Cancer Res. 17 , 29 (2015).

Paquet, E. R. & Hallett, M. T. Absolute assignment of breast cancer intrinsic molecular subtype. J. Natl. Cancer Inst. 107 , 357 (2015).

Finn, R. S. et al. Biomarker analyses of response to cyclin-dependent kinase 4/6 inhibition and endocrine therapy in women with treatment-naive metastatic breast cancer. Clin. Cancer Res. 26 , 110–121 (2020).

Turner, N. C. et al. Cyclin E1 expression and palbociclib efficacy in previously treated hormone receptor-positive metastatic breast cancer. J. Clin. Oncol. 37 , 1169–1178 (2019).

Prat, A. et al. Correlative biomarker analysis of intrinsic subtypes and efficacy across the MONALEESA phase III studies. J. Clin. Oncol. 39 , 1458–1467 (2021).

Finn, R. S. et al. PD 0332991, a selective cyclin D kinase 4/6 inhibitor, preferentially inhibits proliferation of luminal estrogen receptor-positive human breast cancer cell lines in vitro. Breast Cancer Res. 11 , R77 (2009).

Finn, R. S. et al. Palbociclib and letrozole in advanced breast cancer. N. Engl. J. Med. 375 , 1925–1936 (2016).

Johnston, S. et al. Randomized phase II study evaluating palbociclib in addition to letrozole as neoadjuvant therapy in estrogen receptor-positive early breast cancer: PALLET trial. J. Clin. Oncol. 37 , 178–189 (2019).

Rokita, J. L. et al. Genomic profiling of childhood tumor patient-derived xenograft models to enable rational clinical trial design. Cell Rep. 29 , 1675–1689.e1679 (2019).

Wallden, B. et al. Development and verification of the PAM50-based Prosigna breast cancer gene signature assay. BMC Med. Genom. 8 , 54 (2015).

Acknowledgements

This study was funded by Pfizer Inc. Editorial support was provided by Jill Shults, PhD, of ICON (Blue Bell, PA, USA) and Oxford PharmaGenesis, Inc, (Newtown, PA, USA), which was funded by Pfizer Inc. Previous presentation information: poster presentation at the 2021 San Antonio Breast Cancer Symposium (SABCS); December 7–10, 2021; San Antonio, TX, USA.

Author information

Authors and affiliations

The Institute of Cancer Research, Sutton, UK

Maggie Chon U. Cheang

Baylor College of Medicine, Houston, TX, USA

Mothaffar Rimawi

The Institute of Cancer Research, London, UK

Stephen Johnston, Judith Bliss, Lucy Kilburn, Eugene F. Schuster & Hui Xiao

NSABP Foundation, Pittsburgh, PA, USA

Samuel A. Jacobs & Katherine Pogue-Geile

Pfizer Inc, La Jolla, CA, USA

Zhou Zhu, Lisa Swaim, Shibing Deng, Dongrui R. Lu, Eric Gauthier & Yuan Liu

Pfizer Srl, Milan, Italy

Jennifer Tursi

David Geffen School of Medicine, University of California Los Angeles, Santa Monica, CA, USA

Dennis J. Slamon & Richard S. Finn

University of California San Francisco Helen Diller Family Comprehensive Cancer Center, San Francisco, CA, USA

Hope S. Rugo

Contributions

M.C.U.C. and Y.L. conceived, designed, and supervised the study. All authors participated in data curation and investigation. M.C.U.C., Y.L., H.X., E.F.S., and S.D. developed the methodology, performed formal analyses, and wrote the original draft. L.K. and J.B. were project administrators. Resources were obtained by Pfizer, M.C.U.C., and Y.L. Software was maintained by H.X., E.F.S., and S.D. All authors reviewed, edited, approved, and are accountable for the final manuscript.

Corresponding author

Correspondence to Maggie Chon U. Cheang .

Ethics declarations

Competing interests

M.C.U.C. reports royalties and intellectual property rights/patent holder as a coinventor of PAM50 Bioclassifier, consulting fees for Veracyte, and research funding from NanoString Technologies. M.R. reports consulting fees from Macrogenics, Seagen, AstraZeneca, and Novartis, and research funding from Pfizer. S.A.J. reports consulting fees from Eli Lilly, Novartis, Pfizer, and Eisai, fees for non-CME services from AstraZeneca and Eisai, and contracted research from Eli Lilly, Pfizer, and AstraZeneca. J.B. reports research grants from AstraZeneca, Merck Sharp & Dohme, Puma Biotechnology, Clovis Oncology, Pfizer, Janssen-Cilag, Novartis, Roche, and Eli Lilly. D.J.S. has served as a consultant/advisor for Pfizer Inc, Eli Lilly, and Novartis, has received research funding from Pfizer Inc and Seagen, has received consulting fees from Pfizer Inc and Novartis and travel support from Pfizer Inc and Novartis, participated on a data safety monitoring board or advisory board for Eli Lilly and Novartis, is a board member of BioMarin, and is a stockholder in Amgen, BioMarin, 1200 Pharma, Merck, Pfizer Inc, Seagen, Vertex, and TORL Biotherapeutics. H.S.R. reports sponsored research to her institution from Pfizer Inc, Merck, Novartis, Eli Lilly, Roche, Daiichi-Sankyo, Seattle Genetics, Macrogenics, Sermonix, Boehringer Ingelheim, Polyphor, AstraZeneca, Ayala, and Gilead and honoraria from PUMA, Samsung, and Mylan. R.S.F. has received consulting fees from Pfizer, Bayer, Novartis, Bristol Myers Squibb, and Merck, as well as other research funding from Pfizer, and honoraria from Bayer, Pfizer, Bristol Myers Squibb, Novartis, Eisai, and Eli Lilly. L.S., S.D., D.R.L., E.G., and Y.L. are employees of and stockholders in Pfizer Inc. Z.Z. and J.T. are former employees of and own stock in Pfizer Inc. S.A.J., K.P.-G., L.K., E.F.S., and H.X. have no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Affiliations at the time of the study: Zhou Zhu, Jennifer Tursi.

Supplementary information

  • Supplemental material
  • Related manuscript file

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Cheang, M.C.U., Rimawi, M., Johnston, S. et al. Effect of cross-platform gene-expression, computational methods on breast cancer subtyping in PALOMA-2 and PALLET studies. npj Breast Cancer 10 , 54 (2024). https://doi.org/10.1038/s41523-024-00658-y


Received : 02 August 2023

Accepted : 14 June 2024

Published : 29 June 2024

DOI : https://doi.org/10.1038/s41523-024-00658-y




Ethnographic Research


Ethnography is a qualitative method for collecting data that is often used in the social and behavioral sciences. Data are collected through observations and interviews and then used to draw conclusions about how societies and individuals function. Ethnographers observe life as it happens rather than trying to manipulate it in a lab. Because life is unpredictable, ethnographers often find it challenging to pin down their projects in a protocol for the Board to review. Nevertheless, the Board needs a clear explanation of a study in order to approve it. Helping the Board understand the parameters of the study, the situations in which participants will be contacted and will participate, and the risks involved allows the Board to approve studies where some flexibility is needed.

The following sections generalize typical situations in an ethnographic study. Your study may not fit these models exactly, so please contact our staff if you have questions about what is appropriate. The Board expects you to interact with your participants in a way that is natural, polite, and culturally appropriate. Discuss the cultural context and how it shapes your methodology, demonstrating that you are aware of your participants' particular needs and sensitive to the way they navigate their world.

Interviews and observations are common methods for data collection in an ethnographic study; please see Interviews and Observations for more information. 

As an ethnographer becomes integrated into a community, he or she will talk to many people in order to become familiar with their way of life and to refine the research ideas. Not everyone an ethnographer interacts with is necessarily a participant in the research study. Participation depends on the type of information collected and how the data are recorded. If you are recording information that is specific to a person and about that person's experiences and opinions, and if that information can be identified with a specific person (whether anonymous or not), that person becomes a participant in the study. For example, talking to an individual on the bus about general bus policies and atmosphere would not make the conversation part of the human subjects aspect of your research; talking to that same individual about their specific experiences as a passenger and recording that information in your notes qualifies that individual as a participant. The level of consent information you should provide, and how it should be documented, depends on whether you gather identifying information about the person and on the potential for harm. Understanding when a person becomes a participant will help you understand when you should obtain consent and when an interaction can be treated as just a casual conversation. For specific examples of when a casual conversation becomes an interview, please see Interviews.

Ethnographers are often involved with their participants on a very intimate level and can collect sensitive data about them, so it is important to recognize areas and situations that may be risky for participants and to develop procedures for reducing risk. Participants in ethnographic studies may be at risk of legal, social, economic, psychological, and physical harms. A well-designed consent process is an easy way to reduce risk in a study. For participants where consent has limitations (i.e., children, prisoners, other vulnerable participants), additional requirements may be imposed to facilitate the consent process, such as providing a minor with an assent form and obtaining parental consent (though it may be necessary to modify this process so that it is culturally appropriate). Some participants may be highly sensitive to risk because of who they are and the situation in which they live, and you may need to make additional accommodations where the potential for harm is high. A participant's potential for harm often doesn't end when your interaction is over; protecting the materials you collect will continue to protect your participants from harm. Loss of confidentiality is a risk participants may face when participating in an ethnographic study; in some cases, participants may not be interested in keeping their information confidential, but it is important to maintain a clear dialogue with them so that they understand the implications of sharing their data with you. Identifying the needs of your participants and modifying your approach to accommodate those needs will help protect participants from incurring harm as a result of participating in your study.

Before you include participants in your study, you will need to identify who is eligible to participate. In ethnographic studies it is often important to integrate into the community and tap into its network in order to identify potential participants. You may use word-of-mouth methods to reach your participants, or more formal methods such as advertisements, flyers, emails, and phone calls (please include samples of your recruitment materials with your study). When you describe your procedures in your protocol, it is important to include information about how you will navigate the community you will study and access eligible participants.

The consent process begins as soon as you share information about the study, so it is important that when you contact participants, you provide them with accurate information about participating in the study. Participants should know early in the process that you are a researcher and that you are asking them to participate in a study, and you shouldn't provide information that is misleading or inappropriately enticing. For further guidance on recruiting participants, see Participant Recruitment.

The consent process outlined in the Basic Consent section describes the baseline expectation for obtaining consent from participants, as described in the federal regulations. However, this scenario does not fit every research study, nor is it adequate for providing informed consent to all participants, and there is some flexibility in modifying the informed consent process. The Oral Consent section describes how to conduct an oral consent procedure, which modifies the consent process to accommodate participants for whom presenting a written consent form would be inappropriate. If you feel it is necessary to provide your participants with a modified informed consent process, it is important that you give a complete and accurate description of the process and justify why it is necessary and will provide the best informed consent opportunity for your participants. Including information about cultural norms, language issues, and other important factors will help the Board understand your population and why it is necessary to approach it in the manner you propose. As you develop your procedure, consider not only the informed consent meeting but also the recruitment process and how you will document consent.

  • Participant Recruitment
  • Oral Consent
  • Consent Templates
  • Observations
  • Risk-Sensitive Populations
