
Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions


Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.


Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

  • Experimental – you manipulate an independent variable and measure its effect on a dependent variable, with participants randomly assigned to groups
  • Quasi-experimental – like an experiment, but without random assignment to groups
  • Correlational – you measure variables and describe the relationships between them, without manipulating anything
  • Descriptive – you describe the characteristics or distribution of variables in a population

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

  • Grounded theory – you collect rich data on a topic and inductively develop a theory grounded in that data
  • Phenomenology – you investigate a phenomenon by describing and interpreting participants’ lived experiences of it

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

  • Probability sampling – every member of the population has a chance of being randomly selected, so results can be statistically generalised
  • Non-probability sampling – participants are selected on non-random criteria such as convenience, so generalisation is more limited

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
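As a sketch of what probability sampling looks like in practice, the snippet below draws a simple random sample, the most basic probability method, where every member has an equal chance of selection. The population of 500 students is hypothetical.

```python
import random

# Simple random sampling: every member of the (hypothetical) population
# has an equal chance of selection.
population = [f"student_{i}" for i in range(1, 501)]  # 500 students

random.seed(42)  # fixed seed so the sketch is reproducible
sample = random.sample(population, k=100)  # draw 100 without replacement

print(len(sample))       # 100
print(len(set(sample)))  # 100 — no duplicates, since sampling is without replacement
```

Because `random.sample` draws without replacement, no individual can appear twice; in a real study you would also need a complete sampling frame (a list of everyone in the population) for this to be valid.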

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

  • Questionnaires – respondents answer a fixed list of written questions, on paper or online
  • Interviews – a researcher asks questions verbally, either one-to-one or in a group, with the option to follow up on answers

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.


Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
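As a minimal sketch of operationalisation, the snippet below turns the abstract concept “satisfaction” into a measurable indicator: the mean of several 1–5 rating-scale items. The item wordings and scores are hypothetical.

```python
# Operationalising "satisfaction" as the mean of three hypothetical
# 1-5 rating-scale items (a simple composite indicator).
items = {
    "I would recommend this service to others": 4,
    "The service met my needs": 5,
    "I am happy with the outcome": 4,
}

satisfaction_score = sum(items.values()) / len(items)  # composite indicator
print(round(satisfaction_score, 2))  # 4.33
```

In a real study, the items and scale would come from a validated instrument rather than being invented ad hoc, which is exactly why adapting established questionnaires is often preferable.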

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.


For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)
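The three summaries above can be computed directly with Python’s standard library; the test scores below are hypothetical.

```python
import statistics
from collections import Counter

scores = [7, 8, 8, 9, 10, 10, 10, 11, 12, 15]  # hypothetical test scores

distribution = Counter(scores)    # frequency of each score
mean = statistics.mean(scores)    # central tendency
sd = statistics.stdev(scores)     # variability (sample standard deviation)

print(distribution[10])  # 3 — the score 10 occurs three times
print(mean)              # 10
print(round(sd, 2))      # 2.31
```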

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

  • Thematic analysis – you code the data and identify recurring themes across it
  • Discourse analysis – you examine how language is used and how meaning is constructed in social context

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article


McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 18 September 2024, from https://www.scribbr.co.uk/research-methods/research-design/


Organizing Your Social Sciences Research Paper


Introduction

Before beginning your paper, you need to decide how you plan to design the study .

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether the hypotheses are supported or refuted.

The research design is usually incorporated into the introduction of your paper . You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods . The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle: initially an exploratory stance is adopted, in which an understanding of the problem is developed and plans are made for some form of interventionary strategy. The intervention is then carried out [the "action" in action research], during which pertinent observations are collected in various forms. New interventional strategies are carried out, and this cyclic process repeats until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What don't these studies tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can apply only to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • In a causal relationship, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by the same or similar characteristics relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
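
The rate-based measures available in an open cohort can be illustrated with a short sketch. The figures below are hypothetical, chosen only to show how an incidence rate (new cases per unit of person-time at risk) is calculated:

```python
# Incidence rate in an open cohort: new cases per unit of person-time at risk.
# All figures here are hypothetical, for illustration only.

def incidence_rate(new_cases, person_years):
    """Return the incidence rate per 1,000 person-years."""
    return new_cases / person_years * 1000

# Suppose 18 new cases were observed over 4,500 person-years of follow-up.
rate = incidence_rate(18, 4500)
print(f"Incidence rate: {rate:.1f} per 1,000 person-years")  # 4.0
```

Because entry and exit dates vary by individual in an open cohort, the denominator is accumulated person-time rather than a fixed head count.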

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and groups selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or among people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
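
The prevalence estimate that a cross-sectional snapshot yields can be sketched as follows. The survey counts are hypothetical, and the interval uses a simple normal approximation:

```python
import math

def prevalence(cases, sample_size, z=1.96):
    """Point prevalence with a normal-approximation 95% confidence interval."""
    p = cases / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)
    return p, (p - z * se, p + z * se)

# Hypothetical snapshot survey: 120 of 1,000 respondents report the outcome.
p, (lo, hi) = prevalence(120, 1000)
print(f"Prevalence: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

Note that this describes the outcome at one moment only; a different time frame could yield a different estimate, which is exactly the limitation noted above.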

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.
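
The random assignment that distinguishes a true experiment can be sketched in a few lines. The participant labels and the fixed seed below are illustrative only:

```python
import random

def randomize(participants, seed=42):
    """Randomly assign participants to treatment and control arms.

    A fixed seed is used here only so the illustration is reproducible;
    a real trial would use a pre-registered randomization procedure.
    """
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomize([f"P{i:02d}" for i in range(1, 21)])
print("Treatment group:", treatment)
print("Control group:  ", control)
```

Only the treatment arm then receives the independent variable, and both arms are measured on the same dependent variable, as described above.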

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp, 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when research problems are at a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; studies provide insight but not final answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment, as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of "field notes" that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field  [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to achieve representativeness.
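
The measurement of change across two or more time periods described above amounts to comparing repeated measurements on the same subjects. The subjects and scores below are hypothetical, purely to illustrate the calculation:

```python
# Panel data: the same three subjects measured at two waves (hypothetical scores).
wave1 = {"A": 10, "B": 14, "C": 9}
wave2 = {"A": 13, "B": 15, "C": 8}

# Within-subject change between the two time periods.
change = {subject: wave2[subject] - wave1[subject] for subject in wave1}
mean_change = sum(change.values()) / len(change)

print(change)       # {'A': 3, 'B': 1, 'C': -1}
print(mean_change)  # 1.0
```

Because each subject serves as their own baseline, the design can describe the direction and magnitude of change in a way a single cross-sectional snapshot cannot.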

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of the findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to difficult to interpret and/or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
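
The pooling of effects across studies, together with a simple check on heterogeneity, can be sketched with an inverse-variance (fixed-effect) model. The three studies and their variances below are hypothetical:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate, its standard error,
    and Cochran's Q statistic as a simple measure of heterogeneity."""
    weights = [1 / v for v in variances]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / total
    q = sum(w * (e - est) ** 2 for w, e in zip(weights, effects))
    return est, math.sqrt(1 / total), q

# Three hypothetical studies: effect sizes with their within-study variances.
est, se, q = pooled_effect([0.30, 0.50, 0.40], [0.04, 0.02, 0.08])
print(f"Pooled effect: {est:.3f} (SE {se:.3f}), Q = {q:.3f}")
# Pooled effect: 0.429 (SE 0.107), Q = 0.679
```

A large Q relative to its degrees of freedom would signal the kind of heterogeneity that, as noted above, makes a valid synopsis of results harder to justify.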

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

A mixed-methods design integrates qualitative and quantitative approaches within a single study, combining the depth of narrative and non-textual information with the precision of numeric data to investigate a research problem more completely than either approach alone.

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakorri, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhanga, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data can be low because observing behaviors over and over again is a time-consuming task, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refine concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has virtually unlimited options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only way to approach representativeness is to use a sample large enough to capture a significant portion of the population; in that case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.
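The serial logic described in the bullets above — analyze each sample before drawing the next, and refine as you go — can be sketched as a simple stopping rule: keep sampling in batches until the estimate is precise enough. This is an illustrative sketch only, with a hypothetical data source and threshold, not a formal sequential analysis procedure:

```python
import random
import statistics

def sequential_sample(draw, batch_size=10, max_batches=50, se_target=0.5):
    """Draw batches of observations until the standard error of the mean
    falls below se_target (or max_batches is exhausted)."""
    data = []
    for _ in range(max_batches):
        data.extend(draw() for _ in range(batch_size))
        se = statistics.stdev(data) / len(data) ** 0.5
        if se < se_target:
            break  # the estimate is precise enough; stop sampling
    return statistics.mean(data), se, len(data)

random.seed(1)  # reproducible illustration
# Hypothetical measurement process: normally distributed scores (mean 100, sd 15)
mean, se, n = sequential_sample(lambda: random.gauss(100, 15), se_target=2.0)
print(f"mean={mean:.1f}, se={se:.2f}, n={n}")
```

The point of the sketch is the loop structure: each batch's results are known before the next batch is drawn, which is what allows the continuous improvement the bullets describe.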

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research . Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Ivankova, Nataliya V. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses, recognize and avoid hidden problems in prior studies, and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible coverage when analyzing and interpreting research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process. Examples include conference presentations and proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and Master's theses.
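The "strict, explicit, and pre-determined methods" mentioned above are often operationalized as a screening filter applied to every candidate study against pre-registered inclusion criteria. A minimal hypothetical sketch (the study records and criteria here are invented for illustration):

```python
# Hypothetical candidate studies and pre-registered inclusion criteria
studies = [
    {"id": "S1", "year": 2015, "peer_reviewed": True,  "n": 120},
    {"id": "S2", "year": 2001, "peer_reviewed": True,  "n": 45},
    {"id": "S3", "year": 2019, "peer_reviewed": False, "n": 300},
    {"id": "S4", "year": 2021, "peer_reviewed": True,  "n": 80},
]

def meets_criteria(study):
    """Pre-determined criteria: published 2010 or later, peer reviewed, n >= 50."""
    return study["year"] >= 2010 and study["peer_reviewed"] and study["n"] >= 50

included = [s["id"] for s in studies if meets_criteria(s)]
excluded = [s["id"] for s in studies if not meets_criteria(s)]
print("included:", included)  # S1 and S4 pass all three criteria
print("excluded:", excluded)  # S2 is too old and too small; S3 is not peer reviewed
```

Making the criteria explicit and applying them mechanically, as here, is precisely what reduces the selection bias that a narrative review is prone to.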

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods .  David A. Buchanan and Alan Bryman, editors. ( Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research."  Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.

  • Last Updated: Sep 17, 2024 10:59 AM
  • URL: https://libguides.usc.edu/writingguide


Research Design 101

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023


Overview: Research Design 101

What is research design?

  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental .

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.
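To make the smartphone-addiction survey example above concrete, descriptive analysis often amounts to nothing more than summarizing the collected ratings — no variables are manipulated, the numbers simply describe the sample. A minimal sketch with hypothetical 1–5 agreement ratings:

```python
import statistics

# Hypothetical 1-5 agreement ratings for "I feel anxious without my phone"
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

mean_rating = statistics.mean(ratings)
median_rating = statistics.median(ratings)
agree_share = sum(r >= 4 for r in ratings) / len(ratings)  # share rating 4 or 5

print(f"mean={mean_rating:.1f}, median={median_rating}, agree={agree_share:.0%}")
# → mean=3.9, median=4.0, agree=70%
```

Summaries like these describe how widespread the issue may be in the sample, which is exactly the scope of a descriptive design.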

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).
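The exercise-and-health example above boils down to computing a correlation coefficient between the two variables. A minimal sketch with hypothetical data (the values are invented; a real study would follow this with a significance test, e.g. `scipy.stats.pearsonr`):

```python
import math

# Hypothetical data: weekly exercise sessions vs. resting heart rate (bpm)
exercise = [0, 1, 2, 3, 4, 5, 6]
heart_rate = [78, 74, 75, 70, 68, 66, 63]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(exercise, heart_rate)
print(f"r = {r:.2f}")  # strongly negative: more exercise, lower resting heart rate
```

Note what the coefficient does and does not tell you: it quantifies how strongly the two variables move together, but because nothing was manipulated it says nothing about causation.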

Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while holding other variables constant, and measure the resulting outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
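Random assignment, as described above, is mechanically simple to implement: shuffle the participant pool and deal it into equally sized groups, so every participant has the same chance of landing in any condition. A hypothetical sketch using the fertiliser example (participant IDs and group labels are invented):

```python
import random

def randomly_assign(participants, conditions):
    """Shuffle participants, then deal them round-robin into the conditions."""
    pool = list(participants)
    random.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

random.seed(42)  # for a reproducible assignment
groups = randomly_assign(range(1, 31), ["fertiliser_A", "fertiliser_B", "control"])
for condition, members in groups.items():
    print(condition, sorted(members))
```

Because assignment is driven purely by the shuffle, any pre-existing differences between participants are spread evenly across conditions on average — which is exactly why random assignment reduces bias and confounding.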

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.
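Thematic analysis itself is interpretive, but once interview excerpts have been coded, tallying which themes recur across participants (the "commonalities" mentioned above) is mechanical. A hypothetical sketch, with theme codes and transcripts invented for illustration:

```python
from collections import Counter

# Hypothetical theme codes assigned to excerpts from three survivor interviews
coded_interviews = [
    ["gratitude", "fear_of_recurrence", "changed_priorities"],
    ["gratitude", "changed_priorities", "body_image"],
    ["fear_of_recurrence", "gratitude"],
]

theme_counts = Counter(code for interview in coded_interviews for code in interview)
for theme, count in theme_counts.most_common():
    print(f"{theme}: coded in {count} interview excerpt(s)")
```

The interpretive work — deciding what counts as a theme and assigning codes — still rests with the researcher; the tally simply makes the resulting patterns visible.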

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .
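The saturation logic described above — keep collecting data until no new information emerges — can be sketched as a simple loop over successive interview rounds (the codes here are hypothetical, echoing the chronic-pain example):

```python
# Hypothetical coping-mechanism codes extracted from successive interview rounds
rounds = [
    {"cbt", "distraction"},
    {"cbt", "herbal_remedies"},
    {"distraction", "herbal_remedies"},  # nothing new here
    {"cbt"},
]

known_codes = set()
for i, codes in enumerate(rounds, start=1):
    new = codes - known_codes
    known_codes |= codes
    if not new:
        print(f"Saturation reached after round {i}: {sorted(known_codes)}")
        break
    print(f"Round {i} added new codes: {sorted(new)}")
```

The loop stops at the first round that contributes no new codes — the data-collection analogue of the saturation point, after which theory-building can begin from the accumulated codes.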

Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design , multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design , a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services.


Pfeiffer Library

Research Methodologies

Research Design, External Validity, Internal Validity, and Threats to Validity


According to Jenkins-Smith et al. (2017), a research design is the set of steps you take to collect and analyze your research data.  In other words, it is the general plan for addressing your research topic or question.  You can also think of it as the combination of your research methodology and your research method.  Your research design should include the following: 

  • A clear research question
  • Theoretical frameworks you will use to analyze your data
  • Key concepts
  • Your hypothesis/hypotheses
  • Independent and dependent variables (if applicable)
  • Strengths and weaknesses of your chosen design

There are two types of research designs:

  • Experimental design: This design is like a standard science lab experiment because the researcher controls as many variables as they can and assigns research subjects to groups.  The researcher manipulates the experimental treatment and gives it to one group.  The other group receives the unmanipulated treatment (or no treatment at all), and the researcher examines the effect of the treatment in each group (the dependent variable).  This design can have more than two groups depending on your study requirements.
  • Observational design: This is when the researcher has no control over the independent variable and which research participants get exposed to it.  Depending on your research topic, this is the only design you can use.  This is a more natural approach to a study because you are not controlling the experimental treatment.  You are allowing the variable to occur on its own without your interference.  Weather experiments are a great example of observational design because the researcher has no control over the weather and how it changes.
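The key difference between the two designs can be sketched in code. Below is a minimal toy simulation of the experimental case, with invented subject IDs and outcome scores (none of these numbers come from the guide itself):

```python
import random

random.seed(0)  # fixed seed so the toy example is reproducible

# Experimental design: the researcher assigns subjects to groups at random
# and controls which group receives the manipulated treatment.
subjects = list(range(20))          # hypothetical subject IDs
random.shuffle(subjects)
treatment_group = subjects[:10]     # receives the manipulated treatment
control_group = subjects[10:]       # receives no treatment

# Invented outcome scores: treated subjects happen to score 10 points higher.
scores = {s: 75 + 10 * (s in treatment_group) for s in range(20)}

def group_mean(group):
    """Mean outcome (dependent variable) for one group."""
    return sum(scores[s] for s in group) / len(group)

effect = group_mean(treatment_group) - group_mean(control_group)
print(effect)  # → 10.0 in this toy data
```

In an observational design, by contrast, `treatment_group` would be determined by something outside the researcher's control (whoever happened to be exposed to the variable), not by `random.shuffle`.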

When considering your research design, you will also need to consider your study's validity and any potential threats to its validity.  There are two types of validity: external and internal validity.  Each type demonstrates a degree of accuracy and thoughtfulness in a study and they contribute to a study's reliability.  Information about external and internal validity is included below.

External validity is the degree to which you can generalize the findings of your research study. It concerns whether the findings are applicable to other settings (Jenkins-Smith, 2017). In many cases, the external validity of a study is strongly linked to the sample population. For example, if you studied a group of twenty-five-year-old American males, you could potentially generalize your findings to all twenty-five-year-old American males. External validity is also the ability for someone else to replicate your study and achieve the same results (Jenkins-Smith, 2017). If someone replicates your exact study and gets different results, then your study may have weak external validity.

Questions to ask when assessing external validity:

  • Do my conclusions apply to other studies?
  • If someone were to replicate my study, would they get the same results?
  • Are my findings generalizable to a certain population?

Internal validity is the degree to which a researcher can conclude a causal relationship between their independent variable and their dependent variable. It is a way to verify the study's findings because it draws a relationship between the variables (Jenkins-Smith, 2017). In other words, it concerns the actual factors that result in the study's outcome (Singh, 2007). According to Singh (2007), internal validity can be placed into four subcategories:

  • Face validity: This confirms that the measure accurately reflects the research question.
  • Content validity: This assesses the measurement technique's compatibility with other literature on the topic.  It determines how well the tool used to gather data measures the item or concept that the researcher is interested in.
  • Criterion validity: This demonstrates the accuracy of a study by comparing it to a similar study.
  • Construct validity: This measures the appropriateness of the conclusions drawn from a study.

According to Jenkins-Smith (2017), there are several threats that may impact the internal and external validity of a study:

Threats to External Validity

  • Interaction with testing: Any testing done before the actual experiment may decrease participants' sensitivity to the actual treatment.
  • Sample misrepresentation: A population sample that is unrepresentative of the entire population.
  • Selection bias: Researchers may have bias towards selecting certain subjects to participate in the study who may be more or less sensitive to the experimental treatment.
  • Environment: If the study was conducted in a lab setting, the findings may not be able to transfer to a more natural setting.

Threats to Internal Validity

  • Unplanned events that occur during the experiment that affect the results.
  • Changes to the participants during the experiment, such as fatigue, aging, etc.
  • Selection bias: When research subjects are not selected randomly.
  • If participants drop out of the study without completing it.
  • Changing the way the data is collected or measured during the study.
  • Last Updated: Aug 2, 2022 2:36 PM
  • URL: https://library.tiffin.edu/researchmethodologies


How to choose your study design

Affiliation

  • 1 Department of Medicine, Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia.
  • PMID: 32479703
  • DOI: 10.1111/jpc.14929

Research designs are broadly divided into observational studies (i.e. cross-sectional, case-control and cohort studies) and experimental studies (randomised controlled trials, RCTs). Each design has a specific role, and each has both advantages and disadvantages. Moreover, while the typical RCT is a parallel group design, there are now many variants to consider. It is important that both researchers and paediatricians are aware of the role of each study design, their respective pros and cons, and the inherent risk of bias with each design. While there are numerous quantitative study designs available to researchers, the final choice is dictated by two key factors. First, by the specific research question. That is, if the question is one of 'prevalence' (disease burden), then the ideal is a cross-sectional study; if it is a question of 'harm', a case-control study; prognosis, a cohort study; and therapy, an RCT. Second, by what resources are available to you. This includes budget, time, feasibility regarding patient numbers, and research expertise. All these factors will severely limit the choice. While paediatricians would like to see more RCTs, these require a huge amount of resources, and in many situations will be unethical (e.g. a potentially harmful intervention) or impractical (e.g. rare diseases). This paper gives a brief overview of the common study types; those embarking on such studies will need far more comprehensive, detailed sources of information.
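The abstract's question-to-design rule can be summarized as a small lookup. This is only an illustrative sketch; the mapping is transcribed from the text above, and the function name is made up:

```python
# Assumed mapping, transcribed from the abstract:
# research question type -> ideal study design.
DESIGN_FOR_QUESTION = {
    "prevalence": "cross-sectional study",
    "harm": "case-control study",
    "prognosis": "cohort study",
    "therapy": "randomised controlled trial",
}

def ideal_design(question_type: str) -> str:
    """Return the ideal study design for a question type (hypothetical helper)."""
    return DESIGN_FOR_QUESTION.get(question_type.lower(), "unknown question type")

print(ideal_design("therapy"))  # → randomised controlled trial
```

As the abstract notes, this is only the first factor; available resources (budget, time, patient numbers, expertise) constrain the final choice.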

Keywords: experimental studies; observational studies; research method.

© 2020 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).



Research Method


Research Design – Types, Methods and Examples

Research Design

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction : This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods : This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results : This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion : This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References : This section lists the sources cited in the research design.

Example of Research Design

An Example of Research Design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach : The research approach will be quantitative as it involves collecting numerical data to test the hypothesis.
  • Research design : The research design will be a quasi-experimental, pretest-posttest control group design.
  • Sample : The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection : The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis : The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups.
  • Limitations : The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
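The data-analysis step of this example (comparing mean scores of the two groups) could be sketched as follows. The scores are invented, and Welch's t statistic is one common way to compare two group means; an actual study might use dedicated statistical software or a different test:

```python
from math import sqrt
from statistics import mean, stdev

# Invented end-of-year performance scores for the two groups in the example.
experimental = [72, 68, 75, 70, 74, 69, 73, 71]  # heavier social media use
control      = [78, 80, 76, 82, 79, 81, 77, 83]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(experimental, control)
print(round(t, 2))  # → -6.53; a large |t| suggests a real difference in means
```

A t statistic this far from zero would ordinarily be checked against a critical value (or converted to a p-value) before concluding that the difference is significant.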

How to Write Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis : Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan : If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods : Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns : Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.
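Step 3 above (the sampling plan) can be illustrated with a simple random sample. This is a hedged sketch, assuming a hypothetical sampling frame of 1,000 student IDs and a target sample of 100:

```python
import random

random.seed(42)  # fixed seed so the selection itself is reproducible

frame = list(range(1, 1001))           # sampling frame: one ID per student
sample = random.sample(frame, k=100)   # simple random sample, no replacement

print(len(sample), len(set(sample)))   # → 100 100 (all selections unique)
```

Other sampling plans (stratified, cluster, convenience) would replace the `random.sample` call with a different selection rule, but the plan should always state the frame, the method, and the target size.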

When to Write Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education : Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences : In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business : Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering : In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach : A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability : A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.

Research Design Vs Research Methodology

  • Research design: The plan and structure for conducting research that outlines the procedures to be followed to collect and analyze data. Research methodology: The set of principles, techniques, and tools used to carry out the research plan and achieve research objectives.
  • Research design: Describes the overall approach and strategy used to conduct research, including the type of data to be collected, the sources of data, and the methods for collecting and analyzing data. Research methodology: Refers to the techniques and methods used to gather, analyze, and interpret data, including sampling techniques, data collection methods, and data analysis techniques.
  • Research design: Helps to ensure that the research is conducted in a systematic, rigorous, and valid way, so that the results are reliable and can be used to make sound conclusions. Research methodology: Includes a set of procedures and tools that enable researchers to collect and analyze data in a consistent and valid manner, regardless of the research design used.
  • Research design: Common research designs include experimental, quasi-experimental, correlational, and descriptive studies. Research methodology: Common research methodologies include qualitative, quantitative, and mixed-methods approaches.
  • Research design: Determines the overall structure of the research project and sets the stage for the selection of appropriate research methodologies. Research methodology: Guides the researcher in selecting the most appropriate research methods based on the research question, research design, and other contextual factors.
  • Research design: Helps to ensure that the research project is feasible, relevant, and ethical. Research methodology: Helps to ensure that the data collected is accurate, valid, and reliable, and that the research findings can be interpreted and generalized to the population of interest.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Sacred Heart University Library

Organizing Academic Research Papers: Types of Research Designs


Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy that you choose to integrate the different components of the study in a coherent and logical way, thereby, ensuring you will effectively address the research problem; it constitutes the blueprint for the collection, measurement, and analysis of data. Note that your research problem determines the type of design you can use, not the other way around!

General Structure and Writing Style

The designs covered include: action research design, case study design, causal design, cohort design, cross-sectional design, descriptive design, experimental design, exploratory design, historical design, longitudinal design, observational design, philosophical design, and sequential design.

Kirshenblatt-Gimblett, Barbara. Part 1, What Is Research Design? The Context of Design. Performance Studies Methods Course syllabus . New York University, Spring 2006; Trochim, William M.K. Research Methods Knowledge Base . 2006.

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem as unambiguously as possible. In social sciences research, obtaining evidence relevant to the research problem generally entails specifying the type of evidence needed to test a theory, to evaluate a program, or to accurately describe a phenomenon. However, researchers can often begin their investigations far too early, before they have thought critically about what information is required to answer the study's research questions. Without attending to these design issues beforehand, the conclusions drawn risk being weak and unconvincing and, consequently, will fail to adequately address the overall research problem.

Given this, the length and complexity of research designs can vary considerably, but any sound design will do the following things:

  • Identify the research problem clearly and justify its selection,
  • Review previously published literature associated with the problem area,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem selected,
  • Effectively describe the data which will be necessary for an adequate test of the hypotheses and explain how such data will be obtained, and
  • Describe the methods of analysis which will be applied to the data in determining whether or not the hypotheses are true or false.

Kirshenblatt-Gimblett, Barbara. Part 1, What Is Research Design? The Context of Design. Performance Studies Methods Course syllabus . New York University, Spring 2006.

Definition and Purpose

The essentials of action research design follow a characteristic cycle: initially an exploratory stance is adopted, in which an understanding of the problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out (the "action" in action research), during which time pertinent observations are collected in various forms. New interventional strategies are carried out, and the cyclic process repeats until a sufficient understanding of (or an implementable solution for) the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • A collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research rather than testing theories.
  • When practitioners use action research it has the potential to increase the amount they learn consciously from their experience. The action research cycle can also be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to practice.
  • There are no hidden controls or preemption of direction by the researcher.

What don't these studies tell you?

  • It is harder to do than conducting conventional studies because the researcher takes on responsibilities for encouraging change as well as for research.
  • Action research is much harder to write up because you probably can’t use a standard format to report your findings effectively.
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action (e.g. change) and research (e.g. understanding) is time-consuming and complex to conduct.

Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about a phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and extension of methods.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • The intense exposure to study of the case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Stake, Robert E. The Art of Case Study Research. Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods. Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association--a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order--to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness--a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs help researchers understand why the world works the way it does by testing a causal link between variables and eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • For a causal relationship, the cause must precede the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and therefore to establish which variable is the actual cause and which is the actual effect.

Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed.  Thousand Oaks, CA: Pine Forge Press, 2007; Causal Research Design: Experimentation. Anonymous SlideShare Presentation ; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base . 2006.

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population who are united by some commonality or similarity relevant to the research problem being investigated. Using a quantitative framework, a cohort study notes statistical occurrence within this specialized subgroup rather than within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized controlled study may be unethical; for example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies on cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Because of the lack of randomization in the cohort design, its internal validity is lower than that of study designs where the researcher randomly assigns participants.
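
The rate-based data mentioned above for open cohorts can be shown with a short worked example. This is a hypothetical sketch (the follow-up times and outcome flags are invented): because entry and exit dates differ per person, the denominator is person-time rather than a simple head count.

```python
# Hypothetical open-cohort data: each subject contributes a different amount
# of observed time, so risk is expressed per unit of person-time.
followup_years = [4.0, 2.5, 6.0, 1.0, 5.5, 3.0]  # years each subject was observed
developed_outcome = [0, 1, 0, 0, 1, 0]           # 1 = outcome occurred

person_years = sum(followup_years)               # 22.0 person-years at risk
cases = sum(developed_outcome)                   # 2 cases

incidence_rate = cases / person_years            # cases per person-year
print(f"{incidence_rate:.3f} cases per person-year")          # 0.091
print(f"{incidence_rate * 1000:.1f} per 1,000 person-years")  # 90.9
```

In a closed cohort, by contrast, the denominator could simply be the fixed number of participants at entry.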

Healy, Patricia and Declan Devane. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Levin, Kate Ann. “Study Design IV: Cohort Studies.” Evidence-Based Dentistry 7 (2006): 51-52; Study Design 101. Himmelfarb Health Sciences Library, George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and groups selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or among a variety of people, subjects, or phenomena rather than change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike the experimental design where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • Provide only a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
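
The prevalence estimate mentioned above amounts to a snapshot proportion with a margin of error. A minimal sketch, using invented counts and a normal-approximation confidence interval:

```python
import math

# Hypothetical cross-sectional survey: the counts are invented for illustration.
n = 1200      # respondents sampled at one point in time
cases = 180   # respondents who have the outcome at that moment

prevalence = cases / n                             # point-in-time prevalence
se = math.sqrt(prevalence * (1 - prevalence) / n)  # normal-approximation SE
ci_low, ci_high = prevalence - 1.96 * se, prevalence + 1.96 * se

print(f"prevalence = {prevalence:.1%}")            # 15.0%
print(f"95% CI     = ({ci_low:.1%}, {ci_high:.1%})")
```

The interval quantifies sampling uncertainty at that single moment; it says nothing about how prevalence changes over time, which is exactly the design's limitation.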

Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods. Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design, Application, Strengths and Weaknesses of Cross-Sectional Studies . Healthknowledge, 2009. Cross-Sectional Study . Wikipedia.

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject.
  • Descriptive research is often used as a precursor to more quantitative research designs, the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If their limitations are understood, descriptive studies can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999;  McNabb, Connie. Descriptive Research Methodologies . Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design , September 26, 2008. Explorable.com website.

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental Research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “what causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter subject behaviors or responses.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • It is difficult to apply ethnographic and other qualitative methods to experimentally designed research studies.
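
The classic design described above (control, randomization, and manipulation) can be sketched in a few lines. This is a hypothetical simulation, not data from any real experiment; the true effect size (+3) and the noise level are invented numbers.

```python
import random
import statistics

# Hypothetical simulation: randomly assign subjects, administer the
# manipulation to the experimental group only, then measure both groups
# on the same dependent variable.
random.seed(0)
subjects = list(range(100))
random.shuffle(subjects)                        # randomization
experimental, control = subjects[:50], subjects[50:]

def measure(subject_id, treated):
    baseline = random.gauss(50, 5)              # dependent-variable baseline
    return baseline + (3 if treated else 0)     # manipulation adds the effect

treated_scores = [measure(s, True) for s in experimental]
control_scores = [measure(s, False) for s in control]

effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"estimated treatment effect = {effect:.1f}")  # near the true value of 3
```

Because assignment is random, the difference in group means is an unbiased estimate of the causal effect, which is the core logic the bullet points above describe.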

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs . School of Psychology, University of New England, 2000; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Trochim, William M.K. Experimental Design . Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research . Slideshare presentation.

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to. The focus is on gaining insights and familiarity for later investigation or undertaken when problems are in a preliminary stage of investigation.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings and concerns.
  • A well-grounded picture of the situation being studied.
  • Generation of new ideas and assumptions; development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • Exploratory studies help establish research priorities.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value in decision-making.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research . Wikipedia.

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute your hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as logs, diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access.
  • Original authors bring their own perspectives and biases to the interpretation of past events, and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation; therefore, gaps need to be acknowledged.

Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

A longitudinal study follows the same sample over time and makes repeated observations. With longitudinal surveys, for example, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study and is sometimes referred to as a panel study.

  • Longitudinal data allow the analysis of duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research to explain fluctuations in the data.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need for a large sample size and accurate sampling to reach representativeness.
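
The repeated-measures logic above can be made concrete with a toy panel. This is a hypothetical sketch (the subject IDs and scores are invented): because the same subjects are measured at two waves, change is computed within subject, which a single cross-sectional snapshot cannot do.

```python
# Hypothetical two-wave panel study: the same four subjects measured twice.
wave1 = {"s01": 62, "s02": 71, "s03": 55, "s04": 68}
wave2 = {"s01": 67, "s02": 70, "s03": 61, "s04": 74}

# Within-subject change from one period to the next.
change = {sid: wave2[sid] - wave1[sid] for sid in wave1}
mean_change = sum(change.values()) / len(change)

print(change)        # {'s01': 5, 's02': -1, 's03': 6, 's04': 6}
print(mean_change)   # 4.0
```

A cross-sectional design could report each wave's average, but only the panel structure reveals that one subject (s02) moved against the overall trend.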

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study . Wikipedia.

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe (data is emergent rather than pre-existing).
  • The researcher is able to collect a depth of information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observational research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is time-consuming and difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility of determining "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is studied is altered to some degree by the very presence of the researcher, thereby skewing to some degree any data collected (the observer effect).

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology, California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods. Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Rosenbaum, Paul R. Design of Observational Studies. New York: Springer, 2010.

Understood more as a broad approach to examining a research problem than as a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, what do knowledge and understanding depend upon, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Chapter 4, Research Methodology and Design . Unisa Institutional Repository (UnisaIR), University of South Africa;  Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, D.C.: Falmer Press, 1994; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential research is carried out in a deliberate, staged approach [i.e., serially], in which one sample is collected and analyzed before the next is drawn, with each stage building upon the previous one until enough data have been gathered to address the research problem.

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method. Useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time-consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed.
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses a sample size large enough to represent a significant portion of the entire population. In that case, moving on to study a second or subsequent sample can be difficult.
  • Because the sampling technique is not randomized, the design cannot be used to create conclusions and interpretations that pertain to an entire population. Generalizability from findings is limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.
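
The serial logic above (analyze each sample before deciding whether to draw another) can be sketched as a simple stopping rule. This is a hypothetical illustration: the data, batch size, and stopping threshold are all invented.

```python
import random

# Hypothetical sequential sampling: draw a batch, update the running estimate,
# and stop once two successive estimates agree closely (or a cap is reached).
random.seed(1)
sample = []
estimates = []
for batch in range(1, 21):                     # cap at 20 batches
    sample += [random.gauss(10, 2) for _ in range(25)]
    estimates.append(sum(sample) / len(sample))
    if len(estimates) >= 2 and abs(estimates[-1] - estimates[-2]) < 0.05:
        break                                  # estimate has stabilized

print(f"stopped after {batch} batches, n = {len(sample)}, "
      f"mean = {estimates[-1]:.2f}")
```

The key property is the one named in the bullets: each stage's results are known before the next sample is taken, so the design itself decides when enough data have been collected.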

Betensky, Rebecca. Harvard University, Course Lecture Note Slides; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research. Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Ivankova, Nataliya V. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Bovaird, James A. and Kevin A. Kupzyk. “Sequential Design.” In Encyclopedia of Research Design. Neil J. Salkind, ed. Thousand Oaks, CA: Sage, 2010; Sequential Analysis. Wikipedia.

  • Last Updated: Jul 18, 2023 11:58 AM
  • URL: https://library.sacredheart.edu/c.php?g=29803
Research Methods Guide: Research Design & Method


FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal and on what subjects (and whom) you want to study. Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus. To answer these questions, you need to decide how to collect your data. The most frequently used methods include:

  • Observation / Participant Observation
  • Surveys
  • Interviews
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will differ in quality and quantity. For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is also something to consider. Observation and interview methods (qualitative approaches) help you collect richer information, but they take time. A survey helps you collect more data quickly, yet it may lack detail. You will therefore need to weigh the time you have for research against the strengths and weaknesses of each approach (e.g., qualitative vs. quantitative).

  • Last Updated: Aug 21, 2023 10:42 AM

Library Tutorials: Study Design 101


Study Design General Information

What is Study Design?

Clinical medical studies vary in design, scope, rigor, and function. Types of study design include: meta-analyses, systematic reviews, practice guidelines, randomized controlled trials, cohort studies, case control studies, and case reports. An understanding of the different types of clinical medical studies and how they relate is crucial when practicing evidence-based medicine (EBM).

Study Design Tutorials

Visit Himmelfarb Library's complete Study Design 101 guide to learn more.


  • Last Updated: Aug 21, 2024 1:28 PM
  • URL: https://guides.himmelfarb.gwu.edu/tutorials


Frequently asked questions

What do I need to include in my research design?

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test's results predict (in the future) or correspond with (in the present) an established criterion measure.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.

Content validity shows you how accurately a test or other measurement method taps into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)
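The referral mechanism behind snowball sampling can be sketched in a few lines of Python. The contact network and seed participant below are invented for illustration; the point is that who ends up in the sample depends entirely on whom the current participants happen to know, which is why the method is non-random.

```python
# Hypothetical contact network: each person maps to the people they could refer.
contacts = {
    "A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
    "D": ["F"], "E": [], "F": ["A"],
}

def snowball_sample(seeds, waves):
    """Recruit participants wave by wave via referrals from current members."""
    sample = list(seeds)
    current = list(seeds)
    for _ in range(waves):
        next_wave = []
        for person in current:
            for referral in contacts.get(person, []):
                if referral not in sample:   # each person joins at most once
                    sample.append(referral)
                    next_wave.append(referral)
        current = next_wave
    return sample

print(snowball_sample(["A"], waves=2))  # → ['A', 'B', 'C', 'D', 'E']
```

Note that "F" is never reached in two waves: members of the population with no ties to the seed's network have zero chance of inclusion, which is exactly the sampling bias described above.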

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data.
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
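As a rough illustration of that difference, here is a minimal Python sketch with an invented population of students: the stratified version draws randomly within each subgroup, while the quota version simply takes the first units it encounters until each quota is filled.

```python
import random

random.seed(0)

# Hypothetical population with a known subgroup (stratum) for each unit.
population = [{"id": i, "year": random.choice(["first", "second"])}
              for i in range(100)]

def stratified_sample(pop, per_stratum):
    """Probability sampling: a random draw from each subgroup."""
    sample = []
    for stratum in sorted({p["year"] for p in pop}):
        units = [p for p in pop if p["year"] == stratum]
        sample += random.sample(units, per_stratum)  # random within stratum
    return sample

def quota_sample(pop, per_stratum):
    """Non-probability sampling: fill each quota with whoever comes first."""
    sample = []
    for stratum in sorted({p["year"] for p in pop}):
        units = [p for p in pop if p["year"] == stratum]
        sample += units[:per_stratum]  # non-random: first available units
    return sample

print(len(stratified_sample(population, 5)), len(quota_sample(population, 5)))
```

Both samples end up the same size and mirror the subgroup structure, but only the stratified one gives every unit in a stratum an equal chance of selection.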

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves recruiting whoever happens to be available, which means that not everyone has an equal chance of being selected, depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference with or manipulation of the research subjects, and no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , which also include face validity , content validity , and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing well-constructed, high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor then decides either to reject the manuscript and send it back to the author, or to send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodological approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or when the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. It typically builds on exploratory work that has established the main aspects of a problem, and its findings can in turn serve as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
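A minimal sketch of that screening-and-resolving pass, on an invented dataset, might look like this in Python. The plausibility bounds and the choice to drop rather than impute are assumptions you would tailor to your own data:

```python
# Hypothetical raw records with typical "dirty data" problems.
raw = [
    {"id": 1, "weight_kg": 70.2},
    {"id": 2, "weight_kg": None},    # missing value
    {"id": 3, "weight_kg": 68.4},
    {"id": 3, "weight_kg": 68.4},    # duplicate entry
    {"id": 4, "weight_kg": 6840.0},  # likely data-entry error (outlier)
]

def clean(records, lower=30.0, upper=300.0):
    """Screen for duplicates, missing values, and implausible outliers."""
    seen, cleaned = set(), []
    for row in records:
        if row["id"] in seen:
            continue                              # drop duplicate IDs
        if row["weight_kg"] is None:
            continue                              # drop (or impute) missing values
        if not lower <= row["weight_kg"] <= upper:
            continue                              # drop implausible outliers
        seen.add(row["id"])
        cleaned.append(row)
    return cleaned

print([r["id"] for r in clean(raw)])  # → [1, 3]
```

In practice you would log each exclusion and its reason, so the cleaning decisions stay transparent and reproducible.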

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.
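To make the slope-versus-correlation distinction concrete, here's a minimal Python sketch (the data and helper functions are made up for illustration) showing two datasets that both have a correlation coefficient of about 1 but very different regression slopes:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def slope(x, y):
    """Least-squares slope of the regression line of y on x."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

x = [1, 2, 3, 4, 5]
steep = [10, 20, 30, 40, 50]          # y rises 10 units per unit of x
shallow = [1.1, 1.2, 1.3, 1.4, 1.5]   # y rises only 0.1 units per unit of x

print(pearson_r(x, steep), pearson_r(x, shallow))  # both ≈ 1.0
print(slope(x, steep), slope(x, shallow))          # 10.0 vs ≈ 0.1
```

Both datasets fit their lines perfectly, so their correlation coefficients are identical; only the regression analysis reveals the different slopes.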

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data is from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

A research design is a strategy for answering your research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize the bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to the false cause fallacy .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
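As a rough illustration, the following Python sketch (with a hypothetical true weight, error size, and bias) simulates both error types and shows why only random error averages out:

```python
import random

random.seed(42)
true_value = 70.0  # hypothetical true weight in kg

# Random error: readings scatter around the true value in both directions
random_readings = [true_value + random.gauss(0, 0.5) for _ in range(10_000)]

# Systematic error: a miscalibrated scale adds a constant +2 kg to every reading
biased_readings = [r + 2.0 for r in random_readings]

mean_random = sum(random_readings) / len(random_readings)
mean_biased = sum(biased_readings) / len(biased_readings)
# mean_random ≈ 70.0: random errors cancel out over many measurements
# mean_biased ≈ 72.0: the systematic bias persists, no matter the sample size
```

No matter how many measurements the simulated scale takes, the +2 kg bias never cancels; only the random scatter does.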

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a bar graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
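Here's one minimal way this could be sketched in Python using the standard library's `random` module (the sample size, participant numbers, and seed are hypothetical):

```python
import random

random.seed(2024)  # hypothetical seed, for reproducibility only
participants = list(range(1, 21))  # unique numbers 1-20 for a sample of 20

random.shuffle(participants)  # randomize the order of the numbered members

# Split the shuffled list evenly into the two groups
half = len(participants) // 2
control_group = participants[:half]
experimental_group = participants[half:]
```

Every participant ends up in exactly one group, and because the split follows a random shuffle, each has an equal chance of landing in either group.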

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable .
  • When it’s taken into account, the statistical correlation between the independent and dependent variables is lower than when it isn’t considered.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing your population size by your target sample size.
  • Choose every k th member of the population as your sample.
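The three steps above can be sketched in Python like this (the population, target sample size, and random start are made up for illustration):

```python
import random

random.seed(3)
population = [f"member_{i}" for i in range(1, 1001)]  # listed population, N = 1,000
target_n = 100

k = len(population) // target_n   # interval k = 1000 / 100 = 10
start = random.randrange(k)       # random starting point within the first interval
sample = population[start::k]     # select every k-th member from the start
```

Choosing a random starting point within the first interval keeps the selection probabilistic while preserving the regular spacing.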

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.
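A quick Python sketch of that multiplication, crossing the example's own two characteristics:

```python
from itertools import product

location = ["urban", "rural", "suburban"]
marital_status = ["single", "divorced", "widowed", "married", "partnered"]

# Every participant belongs to exactly one (location, marital status) pair
strata = list(product(location, marital_status))
print(len(strata))  # 3 x 5 = 15 subgroups
```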

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
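As a rough sketch in Python, assuming a made-up population of 300 members and proportionate 10% simple random sampling within each stratum:

```python
import random

random.seed(5)
# Hypothetical population: 300 members, each tagged with a stratum
population = [(i, random.choice(["urban", "rural", "suburban"])) for i in range(300)]

# Step 1: divide subjects into strata based on the characteristic they share
strata = {}
for member, stratum in population:
    strata.setdefault(stratum, []).append(member)

# Step 2: randomly sample within each stratum (here, 10% per stratum)
sample = []
for members in strata.values():
    sample.extend(random.sample(members, max(1, len(members) // 10)))
```

Because every stratum contributes to the sample, no subgroup can be missed entirely, which is the main advantage over a single simple random draw.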

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.
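A minimal Python sketch of single-stage versus double-stage selection, assuming a hypothetical population of 20 clusters (schools) of 50 students each:

```python
import random

random.seed(11)
# Hypothetical population grouped into 20 clusters of 50 students
clusters = {f"school_{i}": [f"s{i}_{j}" for j in range(50)] for i in range(20)}

chosen = random.sample(sorted(clusters), 4)  # first stage: randomly select clusters

# Single-stage: collect data from every unit in the selected clusters
single_stage = [unit for c in chosen for unit in clusters[c]]

# Double-stage: randomly sample units from within the selected clusters
double_stage = [unit for c in chosen for unit in random.sample(clusters[c], 10)]
```

Multi-stage sampling would simply repeat the within-cluster sampling step at further levels (e.g., district, then school, then classroom).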

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have a clear rank order but the intervals between response options can’t be assumed to be even.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.
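For illustration, here's how combined scoring might look in Python, with hypothetical responses to a 5-item scale scored from 1 (strongly disagree) to 5 (strongly agree):

```python
# Hypothetical responses from one participant to a 5-item Likert scale
responses = {
    "item_1": 4,
    "item_2": 5,
    "item_3": 3,
    "item_4": 4,
    "item_5": 4,
}

total_score = sum(responses.values())      # overall scale score: 20 of a possible 25
mean_score = total_score / len(responses)  # 4.0 on the 1-5 response scale
```

It is this combined total (or mean) score, not the individual item responses, that is sometimes treated as interval data.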

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
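As a toy illustration (the data are made up), this Python sketch estimates by simulation how likely an observed pattern, 62 heads in 100 coin flips, would be if the null hypothesis of a fair coin were true:

```python
import random

random.seed(7)
# Hypothetical observation: 62 heads in 100 coin flips
observed_heads, n_flips, n_sims = 62, 100, 10_000

# Simulate the null hypothesis (a fair coin, p = 0.5) many times
at_least_as_extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / n_sims
# A small p-value means 62 or more heads would rarely arise by chance alone
```

A formal test would use an exact binomial or normal-approximation calculation, but the simulation conveys the same logic: how often could chance alone produce a result at least this extreme?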

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .
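The distinction can be made concrete with a small simulation; the heights below are invented for illustration:

```python
import random
import statistics

random.seed(0)
# A hypothetical population of 100,000 heights (cm).
population = [random.gauss(170, 10) for _ in range(100_000)]

parameter = statistics.mean(population)   # population mean: a parameter
sample = random.sample(population, k=100)
statistic = statistics.mean(sample)       # sample mean: a statistic

sampling_error = statistic - parameter    # difference between the two
```

For a sample of 100 from this population, the sampling error of the mean is typically on the order of 1 cm (the standard error, 10 / sqrt(100)).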

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity: selection bias, history, experimenter effect, Hawthorne effect, testing effect, aptitude–treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study vs. cross-sectional study:

  • Longitudinal study: repeated observations; observes the same sample multiple times; follows changes in participants over time.
  • Cross-sectional study: observations at a single point in time; observes different samples (a “cross-section”) of the population; provides a snapshot of society at a given point.

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).
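A minimal sketch of how one might tell these types apart programmatically; the `classify` helper is hypothetical, written only for illustration:

```python
# Hypothetical helper: classify a list of observed values by variable type.
def classify(values):
    if all(isinstance(v, str) for v in values):
        return "categorical"
    if all(isinstance(v, int) for v in values):
        return "quantitative (discrete)"   # counts
    return "quantitative (continuous)"     # measurable amounts

print(classify(["oat", "corn", "rice"]))  # categorical (cereal brands)
print(classify([3, 1, 4]))                # quantitative (discrete)
print(classify([61.2, 58.9, 60.0]))       # quantitative (continuous)
```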

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables.

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Indian J Anaesth. 2016 Sep;60(9)

Types of studies and research design

Mukul Chandra Kapoor

Department of Anesthesiology, Max Smart Super Specialty Hospital, New Delhi, India

Medical research has evolved from individually described expert opinions and techniques to scientifically designed, methodology-based studies. Evidence-based medicine (EBM) was established to re-evaluate medical facts and remove various myths in clinical practice. Research methodology is now protocol based, with predefined steps. Studies are classified based on the method of collection and evaluation of data. Clinical study methodology must now comply with strict ethical, moral, truth, and transparency standards, ensuring that no conflict of interest is involved. A medical research pyramid has been designed to grade the quality of evidence and help physicians determine the value of the research. Randomised controlled trials (RCTs) have become the gold standard for quality research. EBM now places systematic reviews and meta-analyses at a level higher than RCTs, to overcome deficiencies in randomised trials due to errors in methodology and analyses.

INTRODUCTION

Expert opinion, experience, and authoritarian judgement were the norm in clinical medical practice. At scientific meetings, one often heard senior professionals emphatically expressing ‘In my experience, … what I have said is correct!’ In 1981, articles published by Sackett et al. introduced ‘critical appraisal’, as they felt a need to teach methods of understanding scientific literature and its application at the bedside.[ 1 ] To improve clinical outcomes, clinical expertise must be complemented by the best external evidence.[ 2 ] Conversely, without clinical expertise, good external evidence may be used inappropriately [ Figure 1 ]. Practice gets outdated, if not updated with current evidence, depriving the clientele of the best available therapy.

Figure 1: Triad of evidence-based medicine

EVIDENCE-BASED MEDICINE

In 1971, in his book ‘Effectiveness and Efficiency’, Archibald Cochrane highlighted the lack of reliable evidence behind many accepted health-care interventions.[ 3 ] This triggered re-evaluation of many established ‘supposed’ scientific facts and awakened physicians to the need for evidence in medicine. Evidence-based medicine (EBM) thus evolved, which was defined as ‘the conscientious, explicit and judicious use of the current best evidence in making decisions about the care of individual patients.’[ 2 ]

The goal of EBM was scientific endowment to achieve consistency, efficiency, effectiveness, quality, safety, reduction in dilemma and limitation of idiosyncrasies in clinical practice.[ 4 ] EBM required the physician to diligently assess the therapy, make clinical adjustments using the best available external evidence, ensure awareness of current research and discover clinical pathways to ensure best patient outcomes.[ 5 ]

With widespread internet use, a phenomenally large number of publications, training and media resources are available, but determining the quality of this literature is difficult for a busy physician. Abstracts are available freely on the internet, but full-text articles require a subscription. To complicate issues, contradictory studies are published, making decision-making difficult.[ 6 ] Publication bias, especially against negative studies, makes matters worse.

In 1993, the Cochrane Collaboration was founded by Ian Chalmers and others to create and disseminate up-to-date review of randomised controlled trials (RCTs) to help health-care professionals make informed decisions.[ 7 ] In 1995, the American College of Physicians and the British Medical Journal Publishing Group collaborated to publish the journal ‘Evidence-based medicine’, leading to the evolution of EBM in all spheres of medicine.

MEDICAL RESEARCH

Medical research needs to be conducted to increase knowledge about the human species, its social/natural environment and to combat disease/infirmity in humans. Research should be conducted in a manner conducive to and consistent with dignity and well-being of the participant; in a professional and transparent manner; and ensuring minimal risk.[ 8 ] Research thus must be subjected to careful evaluation at all stages, i.e., research design/experimentation; results and their implications; the objective of the research sought; anticipated benefits/dangers; potential uses/abuses of the experiment and its results; and on ensuring the safety of human life. Table 1 lists the principles any research should follow.[ 8 ]

Table 1: General principles of medical research

Types of study design

Medical research is classified into primary and secondary research. Clinical/experimental studies are performed in primary research, whereas secondary research consolidates available studies as reviews, systematic reviews and meta-analyses. Three main areas in primary research are basic medical research, clinical research and epidemiological research [ Figure 2 ]. Basic research includes fundamental research in fields shown in Figure 2 . In almost all studies, at least one independent variable is varied, whereas the effects on the dependent variables are investigated. Clinical studies include observational studies and interventional studies and are subclassified as in Figure 2 .

Figure 2: Classification of types of medical research

Interventional clinical study is performed with the purpose of studying or demonstrating clinical or pharmacological properties of drugs/devices, their side effects and to establish their efficacy or safety. They also include studies in which surgical, physical or psychotherapeutic procedures are examined.[ 9 ] Studies on drugs/devices are subject to legal and ethical requirements including the Drug Controller General India (DCGI) directives. They require the approval of DCGI recognized Ethics Committee and must be performed in accordance with the rules of ‘Good Clinical Practice’.[ 10 ] Further details are available under ‘Methodology for research II’ section in this issue of IJA. In 2004, the World Health Organization advised registration of all clinical trials in a public registry. In India, the Clinical Trials Registry of India was launched in 2007 ( www.ctri.nic.in ). The International Committee of Medical Journal Editors (ICMJE) mandates its member journals to publish only registered trials.[ 11 ]

Observational clinical study is a study in which knowledge from treatment of persons with drugs is analysed using epidemiological methods. In these studies, the diagnosis, treatment and monitoring are performed exclusively according to medical practice and not according to a specified study protocol.[ 9 ] They are subclassified as per Figure 2 .

Epidemiological studies have two basic approaches, the interventional and observational. Clinicians are more familiar with interventional research, whereas epidemiologists usually perform observational research.

Interventional studies are experimental in character and are subdivided into field and group studies, for example, iodine supplementation of cooking salt to prevent hypothyroidism. Many interventions are unsuitable for RCTs, as the exposure may be harmful to the subjects.

Observational studies can be subdivided into cohort, case–control, cross-sectional and ecological studies.

  • Cohort studies are suited to detecting connections between exposure and development of disease. They are normally prospective studies of two healthy groups of subjects observed over time, in which one group is exposed to a specific substance whereas the other is not, so that the occurrence of the disease can be compared between the two groups. Cohort studies can also be retrospective.
  • Case–control studies are retrospective analyses that compare prior exposure to a factor between a group with a disease (cases) and a group without it (controls). The incidence rate cannot be calculated, and there is also a risk of selection bias and faulty recall.

Secondary research

Narrative review

An expert senior author writes about a particular field, condition or treatment, including an overview, fortified by their own experience, in a narrative format. Its limitation is that one cannot tell whether the recommendations are based on the author's clinical experience or the available literature, or why some studies were given more emphasis than others. It can be biased, with selective citation of reports that reinforce the author's views of a topic.[ 12 ]

Systematic review

Systematic reviews methodically and comprehensively identify studies focused on a specified topic, appraise their methodology, summate the results, identify key findings and reasons for differences across studies, and cite limitations of current knowledge.[ 13 ] They adhere to reproducible methods and recommended guidelines.[ 14 ] The methods used to compile data are explicit and transparent, allowing the reader to gauge the quality of the review and the potential for bias.[ 15 ]

A systematic review can be presented in text or graphic form. In graphic form, data of different trials can be plotted with the point estimate and 95% confidence interval for each study, presented on an individual line. A properly conducted systematic review presents the best available research evidence for a focused clinical question. The review team may obtain information, not available in the original reports, from the primary authors. This ensures that findings are consistent and generalisable across populations, environments, therapies and groups.[ 12 ] A systematic review attempts to reduce bias in the identification and selection of studies for review, using a comprehensive search strategy and specifying inclusion criteria. The strength of a systematic review lies in the transparency of each phase and in highlighting the merits of each decision made while compiling information.

Meta-analysis

A review team compiles aggregate-level data from each primary study, and in some cases, data are solicited from each of the primary studies.[ 16 , 17 ] Although difficult to perform, individual patient meta-analyses offer advantages over aggregate-level analyses.[ 18 ] These mathematically pooled results are referred to as a meta-analysis. Combining data from well-conducted primary studies provides a precise estimate of the “true effect.”[ 19 ] Pooling the samples of individual studies increases the overall sample size, enhances the power of the statistical analysis, narrows the confidence interval and thereby improves statistical value.
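As a rough sketch of the pooling idea, a fixed-effect inverse-variance meta-analysis weights each study's effect estimate by the inverse of its squared standard error; the three studies and their numbers below are invented for illustration:

```python
import math

# Hypothetical effect estimates (e.g. mean differences) and standard errors
# from three toy primary studies -- illustrative numbers only.
effects = [0.30, 0.45, 0.25]
std_errors = [0.15, 0.20, 0.10]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)  # ≈ 0.29
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled estimate.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

Note that `pooled_se` is smaller than any single study's standard error, which is the sense in which pooling narrows the confidence interval and improves statistical power.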

The structured process of Cochrane Collaboration systematic reviews has contributed to the improvement of their quality. For a meta-analysis to be definitive, the primary RCTs should have been conducted methodically. When the existing studies have important scientific and methodological limitations, such as small sample sizes, the systematic review may identify where gaps exist in the available literature.[ 20 ] RCTs and systematic reviews of several randomised trials are less likely to mislead us, and thereby help judge whether an intervention is better.[ 2 ] Practice guidelines supported by large RCTs and meta-analyses are considered the ‘gold standard’ in EBM. This issue of IJA is accompanied by an editorial on the importance of EBM in research and practice (Guyatt and Sriganesh).[ 21 ] The EBM pyramid grading the value of different types of research studies is shown in Figure 3 .

Figure 3: The evidence-based medicine pyramid

In the last decade, a number of studies and guidelines brought about path-breaking changes in anaesthesiology and critical care. Some guidelines such as the ‘Surviving Sepsis Guidelines-2004’[ 22 ] were later found to be flawed and biased. A number of large RCTs were rejected as their findings were erroneous. Another classic example is that of ENIGMA-I (Evaluation of Nitrous oxide In the Gas Mixture for Anaesthesia)[ 23 ] which implicated nitrous oxide for poor outcomes, but ENIGMA-II[ 24 , 25 ] conducted later, by the same investigators, declared it as safe. The rise and fall of the ‘tight glucose control’ regimen was similar.[ 26 ]

Although RCTs are considered ‘gold standard’ in research, their status is at crossroads today. RCTs have conflicting interests and thus must be evaluated with careful scrutiny. EBM can promote evidence reflected in RCTs and meta-analyses. However, it cannot promulgate evidence not reflected in RCTs. Flawed RCTs and meta-analyses may bring forth erroneous recommendations. EBM thus should not be restricted to RCTs and meta-analyses but must involve tracking down the best external evidence to answer our clinical questions.

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.


This ACIP GRADE handbook provides guidance to the ACIP workgroups on how to use the GRADE approach for assessing the certainty of evidence.

As described in section 4, authors at the protocol stage may decide that both RCTs and NRS need to be considered, and both types of evidence are retrieved and evaluated. Once the search is complete, the evidence is organized by study design as either randomized or non-randomized. The GRADE certainty of the RCTs should be evaluated first. After assessing each outcome separately, if there is high certainty in the body of evidence coming from RCTs, there is no need to further evaluate or use the NRS to complement or replace the RCTs. If the certainty of evidence from NRS is higher than RCTs, they can be considered as replacement evidence, especially if the NRS have low concerns with indirectness and imprecision. Reviewers might consider using NRS to complement evidence if RCTs do not provide data on populations of interest, or if the NRS studies provide evidence for possible effect modification. Figure 9 provides a visual representation of when NRS may be needed to support evidence from RCTs.

Figure 9: Flow chart depicting when to integrate RCTs and NRS in the evidence synthesis

When high certainty evidence for an outcome is not available in the RCT body of evidence, NRS can be used. There are two scenarios in which this may occur 1 :

  • When evidence from RCTs has low or very low certainty, NRS could help increase the overall certainty in the results. The NRS should be evaluated and if the certainty in the evidence is equal to or better than the certainty level of the RCTs, both types of evidence can be used in the decision-making process.
  • If an RCT was conducted in men and the target population in the research question was women, NRS may be used to make judgements about the certainty in these results. If the NRS shows the intervention has the same effect in both men and women, then the NRSs can be used to complement the RCT. Conversely, if the studies had shown that there was a notable difference in men and women, the overall certainty in the RCT evidence may need to be downgraded.
  • When the RCT evidence does not provide enough information about baseline risk of the control event, NRS may be used. For example, if the PICO question specified children between the ages of 12 and 15 as the target population, however the RCT evidence only provided baseline risk for children under the age of 5, NRS could be used to provide the control event rate for the target age group. The NRS could provide evidence that shows the baseline risk varies between populations or supports the evidence from the RCT.

When either of the two scenarios that result in the use of NRS occurs, there are three ways in which the evidence can interact with the RCTs (Figure 10): 2

  • Complementary NRS: The NRS can provide information on whether the intervention works similarly in different populations or if there are differential baseline risks between populations. Therefore, when the RCT evidence is indirect, NRS can be used to complement and contextualize as seen in the examples above.
  • Sequential NRS: When evidence from RCTs is not sufficient, NRS can help by providing additional information. For example, NRS could provide information on long-term outcomes for patients involved in short-term RCTs. Additionally, when RCTs use surrogate outcomes, the NRS could help determine if the surrogate is relevant to patient-important outcomes.
  • Replacement NRS: When the NRS is assessed and the results have a higher level of certainty than the body of evidence from RCTs, the NRS may replace the RCTs. In spite of the lack of randomization, if the NRS is more direct and has better certainty, then decision-makers can consider the NRS as the best available evidence.

Figure 10. Steps that systematic review authors might follow when considering NRS evidence (adapted)

Figure 10 provides an overview of the steps taken when deciding whether to use NRS in addition to evidence from RCTs. When presenting both NRS and RCTs for an outcome in a systematic review, the results can be presented separately as a narrative synthesis, in separate meta-analyses as a quantitative synthesis, or as a combination of the two. 1

  • Cuello-Garcia CA, Santesso N, Morgan RL, et al. GRADE guidance 24: optimizing the integration of randomized and non-randomized studies of interventions in evidence syntheses and health guidelines. J Clin Epidemiol. 2022;142:200-208. doi:10.1016/j.jclinepi.2021.11.026
  • Schünemann HJ, Tugwell P, Reeves BC, et al. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Research Synthesis Methods. 2013;4(1):49-62. doi:10.1002/jrsm.1078


What Helps Children and Young People to Disclose their Experience of Sexual Abuse and What Gets in the Way? A Systematic Scoping Review

  • Open access
  • Published: 18 September 2024


  • Lynne McPherson   ORCID: orcid.org/0000-0002-3356-2216 1 ,
  • Kathomi Gatwiri   ORCID: orcid.org/0000-0002-7794-6481 1 ,
  • Anne Graham   ORCID: orcid.org/0000-0002-9308-8536 1 ,
  • Darlene Rotumah   ORCID: orcid.org/0000-0002-2346-7856 2 ,
  • Kelly Hand   ORCID: orcid.org/0009-0008-1269-8983 1 ,
  • Corina Modderman   ORCID: orcid.org/0000-0002-7375-5047 3 ,
  • Jaime Chubb 4 &
  • Samara James 1  

Global research on the prevalence of child sexual abuse suggests that it is a significant ongoing public health concern. A recent Australian study, for example, revealed that more than one in three girls and almost one in five boys reported experiencing sexual abuse before the age of 18. Self-reported rates of abuse, however, far exceed official figures, suggesting that large numbers of children who experience sexual abuse do not come to the attention of relevant authorities. Whether and how those children have tried to tell their stories remains unclear.

The goal of the review was to explore scholarly literature to determine what was known about what enables or constrains children to disclose their experience of sexual abuse.

A systematic scoping review was undertaken to better understand the current state of knowledge in the scholarly literature on child sexual abuse disclosure. Thirty-two scholarly publications were included for analysis following a rigorous process of sourcing articles from five databases and systematically screening them based on transparent inclusion and exclusion criteria. Ecological systems and trauma-informed theoretical paradigms underpinned an inductive thematic analysis of the included manuscripts.

Three multi-dimensional themes were identified from the thirty-two publications. These themes were: factors enabling disclosure are multifaceted; barriers to disclosure include a complex interplay of individual, familial, contextual and cultural issues; and Indigenous victims and survivors, male survivors, and survivors with a minoritised cultural background may face additional barriers to disclosing their experiences of abuse.

Conclusions

The literature suggests that a greater understanding of the barriers to disclosure now exists. Further research that supports a deeper understanding of the complex interplay of enablers of and barriers to disclosure across diverse populations is needed. In particular, future research should privilege the voices of victims and survivors of child sexual abuse, mobilising their lived experiences to co-create improved practice and policy.


Introduction

The prevalence of child sexual abuse is a matter of critical interest for researchers, policymakers and practitioners working with children, young people and their families. Worldwide estimates of child sexual abuse (CSA) prevalence are alarming, with an average of 18–20% of females and 8–10% of males reporting experiences of abuse (Pereda et al., 2009 ). A recent study in the USA, drawing from a sample of 2639 respondents aged 18–28, concluded that the overall prevalence rate of child sexual abuse was 21.7%. For females, this rate was found to be 31.6% and for males, 10.8% (Finkelhor et al., 2024 ). In Australia, the recently published Australian Child Maltreatment Study collected nationally representative data on rates of abuse and neglect and found that 37.3% of girls and 18.8% of boys had experienced child sexual abuse (Matthews et al., 2023 ).

Self-reported rates of abuse far exceed official figures, suggesting that large numbers of children who experience sexual abuse do not come to the attention of relevant authorities. One indicator of the discrepancy between official statistics and survivors' reports is that offender conviction rates appear to be far lower than reported abuse. One study, for example, found that police did not lay charges in more than half of 659 cases where child sexual abuse was reported to them (Christensen et al., 2016 ). The two reasons provided were, first, insufficient evidence and, second, that aspects of the child’s disclosure, particularly its timing and detail, were inadequate for successful prosecution. In another example, a study based on an analysis of administrative data over a fourteen-year timeframe found that only one in five reported child sexual abuse matters proceeded further than the initial investigation phase. In this study, only 12% of reported offences resulted in a conviction, with the authors claiming that their findings were consistent with other studies (Cashmore et al., 2020 ). Further research to enable a better understanding of “how these (prosecution) decisions are made, over and above the characteristics of the complainant, suspect and type of offence” was recommended (Cashmore et al., 2020 , p. 93).

In another example, a meta-analysis that combined estimates of child sexual abuse prevalence across 217 studies and compared these rates with official data from sources such as the police and child protection found that analyses based on the self-reports of victims and survivors revealed prevalence rates up to 30 times greater than official reports (Stoltenborgh et al., 2015 ), indicating a sizeable gap between survivors' self-reported experiences of child sexual abuse and the rates recorded by official authorities. Such a sizeable gap suggests that research examining the process of disclosure is much needed, with a focus on what factors enable and constrain children and young people from talking about the abuse they have experienced.

Child sexual abuse disclosure is theorised as a multifaceted, iterative and contextualised phenomenon that interacts directly or indirectly with a range of ecological variables. Both ecological (Bronfenbrenner, 1979 ) and trauma theories (Alaggia et al., 2019 ) consider a child victim within their context by attending to the micro-, meso- and macro-level issues faced by children who have experienced the trauma of child sexual abuse.

In summary, the rationale for this review emerges from the child sexual abuse research literature, which reports very high prevalence rates of abuse, particularly where research participants are offered anonymity as young adults to recall their experiences (Finkelhor et al., 2024 ; Matthews et al., 2023 ). Evidence of these high rates of abuse, drawn from research, is not matched by official administrative data published in government reports, with research reporting child sexual abuse prevalence up to 30 times greater than official statistics from relevant authorities (Stoltenborgh et al., 2015 ).

These issues raise serious and urgent questions about how children and young people who have experienced child sexual abuse are listened to, heard and responded to. Children and young people may raise their concerns in attempts to tell, only to meet with barriers that prevent them from feeling supported and safe.

This scoping review aimed to address that gap by examining the literature reporting on disclosures of child sexual abuse, considering the question: What do we know about what influences or enables children and young people to disclose their experience of child sexual abuse, and what are the barriers to disclosure?

A Systematic Scoping Review

Using the framework developed by Arksey and O’Malley ( 2005 ), a systematic scoping review was undertaken to identify the available research literature on the disclosure of child sexual abuse. To clarify the use of the term ‘systematic’ in the context of a scoping review, we adopted a methodologically sound process for searching the literature to scope the current state of knowledge concerning child sexual abuse disclosure (Alaggia et al., 2019 ). The purpose of this review was to map the literature on child sexual abuse disclosure, identify key concepts that hinder or enable disclosure, and highlight gaps in the research. Scoping studies are particularly well-suited for complex topics, as they provide valuable insights for policymakers, practitioners, and future research (McPherson et al., 2019 ). Mapping the literature involved a five-stage sequential process: developing a research question; systematically identifying potentially relevant studies; screening and selecting relevant studies based on identified inclusion and exclusion criteria; charting the data; and collating, summarising and reporting the results (Arksey & O’Malley, 2005 , p. 8). This five-stage approach emphasises the importance of building a credible critique when investigating a largely unexplored topic (Munn et al., 2018 ).

Theoretical Frame

This review took a multi-theoretical approach. Drawing on ecological systems and trauma-informed theoretical paradigms provided a robust framework for understanding the complex barriers to disclosing childhood sexual abuse. By integrating these two theories, we gained an understanding of how, at the individual level, trauma symptoms like shame, guilt, and fear can inhibit disclosure and, additionally, how relational dynamics (microsystems) and broader systemic and societal factors at the exo-system and macrosystem levels can either support or hinder disclosure. CSA disclosure is often not a one-off event but rather a dynamic process, reflecting the trauma of the abuse, that may take place over time and can include incidents of retraction where survivors recant their stories (Alaggia et al., 2019 ). This phenomenon was first theorised by Roland Summit in 1983 and was revisited some decades later as child victims of abuse were reported to ‘accommodate’ abuse to the extent that disclosure was often delayed, conflicted and ultimately retracted (McPherson et al., 2017 ).

An ecological framework (Bronfenbrenner, 1979 ) considers a child contextually by taking into account the “ontogenic, micro-system, exo-system and macro-system” layers that inform childhood experiences (Alaggia, 2010 , p. 36). At the micro level, family dynamics can obstruct disclosure due to concerns about not being believed or feelings of loyalty to the abuser. In a different study, Alaggia ( 2004 ) points out that although children disclose in many different ways, the closer the familial relationship between the child and the perpetrator, the more difficult disclosure becomes. CSA disclosure within a mesosystem encompasses the interactions among different components of the microsystem, such as churches, schools, and neighbourhoods, which can impede the disclosure process. In such interacting systems, the child who discloses can be placed in a liminal position, on the boundaries of the systems in which the family is situated, leading to demands for “compromise” for the “purposes of damage limitation” (Gardner, 2012 , p. 102). The exosystem, encompassing broader social systems like social services, can introduce complexities in the disclosure process due to inadequate reporting structures, limited interagency collaboration, limited resources for investigating child sexual abuse claims, and weak frameworks of support for children who disclose. Gardner ( 2012 , p. 105) refers to these as “anxiety-provoking institutional dilemmas” wherein institutions respond with procedures that contain anxiety rather than through a trauma-informed practice of prioritising safety to reduce the risk of re-traumatisation. The macro-system encompasses the societal norms, laws, and policies that influence the stigma and cultural taboos around child sexual abuse, potentially affecting how authorities or the adults in a child’s life respond to disclosures of child sexual abuse.
Child sexual abuse disclosure is, therefore, a multifaceted, iterative and contextualised phenomenon that interacts directly or indirectly across all these ecological variables (Alaggia et al., 2019 ).

Five-Phased Approach

Phase one: developing the research question.

The following research question framed the systematic scoping review:

What do we know about what influences or enables children and young people to disclose their experience of child sexual abuse, and what are the barriers to disclosure?

Phase Two: The Framework for Systematically Identifying Relevant Studies

A search strategy aiming to identify peer-reviewed literature was developed. With the support of a research librarian, five electronic databases (InfoRMIT; Psychology and Behavioural Sciences Collection; APA PsycInfo; Academic Search Premier; ProQuest) were searched using a combination of carefully selected keywords: Child*, youth, AND Sexual Abuse OR Sexual Assault AND Disclosure OR Telling OR Sharing AND Barriers OR Hindrance OR Facilitators OR Enablers . Searches covered the period 2013 to July 2023.

Inclusion criteria The search was restricted to peer-reviewed academic journal articles published in English between 2013 and 2023. Articles that focused on what helped or hindered disclosure, and so contributed to a better understanding of children’s experience of disclosing, were included. The inclusion criteria encompassed both articles about children and young people (aged under 18) and articles about adults with lived experience of child sexual abuse who were recalling their experiences of disclosure.

Exclusion criteria Articles were excluded if published before 2013, were not published in a peer-reviewed scholarly journal or did not address the research question. Therefore, articles reporting rates and prevalence, prevention literature (unless it addressed responses to disclosure), diagnostic tools, practice frameworks, and legislative requirements were excluded. Non-English articles were also excluded due to the resources required for translation.

Grey literature was excluded due to quality, reliability, and publication bias concerns. Additionally, challenges in standardising and accessing globally available grey literature made it difficult to ensure evidence-based verification and reproducibility in the review (Mahood, 2014 ). Only peer-reviewed scholarly articles were included to maintain a systematic and transparent methodology.

Phase Three: Selection of Relevant Studies and Charting of the Data

Two researchers applied the inclusion and exclusion criteria to all the citations that the search strategy identified, continually reflecting on search strategies and methodological choices at each stage of sifting, charting and sorting (Arksey & O’Malley, 2005 ). Initial searches from the databases, with the date, source and language criteria applied, provided a list of 1625 publications. Titles were screened for broad relevance to the research question and to remove duplicates, with 1532 articles excluded. A review of abstracts was then undertaken for the remaining 93 articles, which led to a further 24 articles being removed.

Full-text articles (n = 69) were retrieved for the remaining articles. Authors 1 and 4 examined these articles independently to decide whether they met the inclusion criteria. Author 2 resolved disagreements, resulting in 32 articles being included in the scoping review and carried forward to thematic analysis. See Fig.  1 for the PRISMA flow chart of the screening process.

Figure 1: PRISMA flow chart (Moher et al., 2009 )

Phases 4 and 5: Collating and Analysing the Results

Two researchers (Researchers 1 and 2) reviewed the selected thirty-two articles using Braun and Clarke’s ( 2021 ) ‘reflexive thematic analysis’ framework to code and identify emerging themes in the data. The six-phase process includes 1) data familiarisation and writing familiarisation notes; 2) systematic data coding; 3) generating initial themes from coded and collated data; 4) developing and reviewing themes; 5) refining, defining and naming themes; and 6) writing the report (Braun & Clarke, 2021 ).

As part of phase one, two researchers familiarised themselves with the data using a ‘descriptive-analytical’ method to consistently describe and categorise the key findings relevant to the research question, which formed the basis of the analysis (Arksey & O'Malley, 2005 ). Through this process, the researchers mapped the types, locations and key findings of included studies. The final set of 32 publications was collated and presented as a first-level analysis in Table  1 . There was no attempt to ‘weigh’ or assess the quality of each study, as this is not the purpose of a scoping review, which seeks to present an overview of the material reviewed and, consequently, enable the identification of gaps in the existing literature (Arksey & O'Malley, 2005 , p. 17).

In phases 2 and 3, the two researchers began reviewing and generating initial codes to “identify and make sense of patterns of meaning across a dataset” (Braun & Clarke, 2021 , p. 331) before organising the data thematically in Microsoft Excel. In phases 4 and 5, the researchers continued to refine and develop themes, drawing on their reflexive qualitative skills as analytic resources. The themes were reviewed carefully, together and independently, by the broader research team to evolve the analysis, an “analytic process involving immersion in the data, reading, reflecting, questioning, imagining, wondering, writing, retreating, returning” (Braun & Clarke, 2021 , p. 332).

Results and Thematic Discussion

The researchers undertook reflexive consultation together and independently to enhance the overall research process. This critical process involved two researchers screening, charting, and collating data. By incorporating this reflexive consultative approach, the researchers ensured they continually reflected on search strategies and methodological choices. This method is not linear but iterative and requires the researchers to engage with each stage of the scoping review reflexively (Arksey & O’Malley, 2005 ).

The researchers “made sense of” the data by summarising and interpreting key themes, patterns, and gaps using various frameworks, including ‘descriptive-analytical’ (Arksey & O’Malley, 2005 ) and ‘reflexive thematic analysis’ (Braun & Clarke, 2021 ) approaches. Preliminary themes and findings were then developed, reported and refined with the broader research team of eight academic researchers and practitioners as subject matter experts to gather their insights, perspectives, and feedback on the preliminary findings. Using ‘reflexive thematic analysis’ to gather these insights, perspectives, and feedback, the researchers enhanced and evolved understandings of child sexual abuse disclosure (Braun & Clarke, 2021 ). This ‘consultation exercise’ is supported by other researchers who have recognised the value of consultation in enriching and confirming research outcomes (Oliver, 2001 ).

Following the research team's engagement with the ‘reflexive thematic analysis’ process in the analysis phase, the researchers continued to workshop emergent themes concerning the research question and theoretical framework. Three core themes were identified in the analyses of the 32 articles: (i) Factors enabling disclosure are multifaceted; (ii) Barriers to disclosure include a complex interplay of individual, familial, contextual and cultural issues; (iii) Indigenous victims and survivors, male survivors and survivors with a minoritised cultural background may face additional barriers to disclosing their experiences of abuse.

A summary of the multifaceted barriers and enablers impacting the disclosure of child sexual abuse across various domains is presented below in Table  2 .

Within each theme, these factors are discussed below using a social-ecological and reflexive critical theoretical lens.

Factors Enabling Disclosure are Multifaceted

While most research in this review identified barriers to disclosure, some enabling influences were also identified. Disclosure is conceptualised as a process rather than a one-time event (Tat & Ozturk, 2019 ) that can be affected by personal (individual), interpersonal (mutual or related) and societal (socio-political) factors (Easton et al., 2014 ; Ullman, 2023 ). For example, strong personal factors that influence disclosure may be the desire to protect oneself and prevent further abuse, to seek support, clarification and validation, to unburden oneself, to seek justice, and to document the abuse (Easton et al., 2014 ; Kasstan, 2022 ; Lusky-Weisrose et al., 2022 ; Ullman, 2023 ). Often, the likelihood of disclosing increases with age (Wallis & Woodworth, 2020 ).

A trusted and supportive individual, such as a parent, friend, teacher, or counsellor, is a significant interpersonal factor that encourages disclosure. The perception of protectiveness and safety from ‘trusted adults’ is crucial, particularly from mothers, who are often recipients of disclosure (Russell & Higgins, 2023 ). According to Rakovec-Felser and Vidovič ( 2016 ), this is especially important for female child victims of sexual abuse. These researchers found that those with safe and supportive mothers needed about nine months to disclose the abuse, whereas those without such support took approximately 6.9 years to disclose.

Having safe or ‘trusted adults’ also appeared in other research as an enabler of what helps children to ‘tell’ or disclose instances of abuse or CSA-related concerns (Russell & Higgins, 2023 ). However, an important finding was that disclosures to ‘trusted adults’ primarily occurred when the perpetrator was also an adult. In instances when the perpetrators of CSA were peers, children and young people were less likely to ‘tell’ adults, professionals, or organisations and more likely to ‘tell’ a friend (Russell & Higgins, 2023 ).

Societal or environmental factors that enable disclosure were linked to ‘memorable life events’ by Allnock ( 2017 ). These events are significant moments that can change one's life, which Allnock ( 2017 ) calls ‘turning points’: critical moments where survivors feel motivated to disclose their experiences. Turning points could occur accidentally following discussions, conversations, or watching television programs where sexual abuse appeared as a theme, enabling awareness of abusive behaviours and acting as a catalyst to tell (Allnock, 2017 ). Turning points could also represent the escalation of the offender’s behaviour, survivors becoming aware of other victims, or interventions by police investigations or child protection that may also ‘help others’ (Ullman, 2023 ).

Barriers to Disclosure Include a Complex Interplay of Individual, Familial, Contextual and Cultural Issues

Reflecting previous research, barriers to disclosure were found to outweigh facilitators of disclosure and tend to be multifaceted (Collin-Vézina et al., 2015 ; Easton et al., 2014 ). Barriers involve a complex interplay of individual, familial, contextual, and cultural issues, with age and gender predictive of delayed disclosure for younger children and adolescents (Sivagurunathan et al., 2019 ).

Multiple studies identified barriers across three broad domains. Personal (internal) barriers may include not identifying the experience as sexual abuse, internal emotions such as shame, self-blame, fear and hopelessness (Collin-Vézina et al., 2015 ; Devgun et al., 2021 ; Easton, 2013 ), or ‘the normality/ambiguity of the situation’ (Wager, 2015 ). Young children, particularly preschoolers, often have specific fears and barriers to telling or disclosing even when asked by professionals, as they might not understand the purpose of the interview, the crime they have been victim to, or the consequences of disclosing (Magnusson et al., 2017 ). Interpersonal barriers, including dynamics with the perpetrator, the relationship between the perpetrator and family, and the fear of consequences or negative self-representation, were found to impact disclosure significantly (Allnock, 2017 ; Collin-Vézina et al., 2015 ; Devgun et al., 2021 ; Easton, 2013 ; Gemara & Katz, 2023 ; Gruenfeld et al., 2017 ; Halvorsen et al., 2020 ; Wager, 2015 ).

Social or environmental barriers, including limited social networks and a lack of opportunities or access to safe adults to disclose to, can also lead to disclosures being downplayed or ignored by those who received them, often reinforcing internalised victim-blaming (Collin-Vézina et al., 2015 ). These barriers may include social and cultural norms related to sex, misconceptions and stereotypes about child sexual abuse survivors and perpetrators, and a lack of viable services to respond to disclosures (Collin-Vézina et al., 2015 ; Devgun et al., 2021 ; Easton, 2013 ; Mooney, 2021 ). In fact, according to Easton ( 2013 ) and Marmor ( 2023 ), many survivors who disclosed their experiences of CSA were unable to receive help despite their disclosures. In some cases, the mishandling of disclosures by law enforcement officers, child protection specialists, medical staff, and mental health professionals created further barriers to disclosing, arising from a sense of hopelessness (Pacheco et al., 2023 ; Wager, 2015 ). Furthermore, a range of context-specific issues were identified in the literature as barriers to disclosure. These included the impact of colonisation, cultural issues, and gender, which are discussed below.

Indigenous Victims and Survivors, Male Survivors and Survivors with a Minoritised Cultural Background May Face Additional Barriers to Disclosing their Experiences of Abuse

Some authors highlighted the ongoing legacy of colonial violence as a personal and structural barrier to the disclosure of child sexual abuse (Braithwaite, 2018 ; Tolliday, 2016 ). For Australian First Nations Peoples who were victims and survivors, “child sexual abuse in Aboriginal and Torres Strait Islander communities is a complex issue that cannot be understood in isolation from the ongoing impacts of colonial invasion, genocide, assimilation, institutionalised racism, and severe socio-economic deprivation. Service responses to child sexual abuse are often experienced as racist, culturally, financially, and/or geographically inaccessible” (Funston, 2013 , p. 381). Consistent with these findings, Tolliday ( 2016 ) examines historical efforts to address sexual safety for Aboriginal and Torres Strait Islander women and children, concluding that these problems cannot be resolved unless the underlying trauma experienced by First Nations Peoples is attended to. An additional barrier for Australian First Nations Peoples may be a level of mistrust in authorities such as police and child protection services, who were found to be involved in the forced removal of Aboriginal and Torres Strait Islander children from their families (Human Rights & Equal Opportunity Commission, 1997 ).

In investigating delayed disclosure, Braithwaite ( 2018 ) found that for rural Alaskan Native survivors, the impact of colonisation may be a significant barrier to survivors disclosing abuse. The inability to trust authorities directly results from colonisation and systemic, intergenerational poverty, where disclosing abuse may negatively impact already impoverished families.

Cultural and Racial Issues

In reporting on these issues, it is important not to present child sexual abuse as an inherently racial, religious, or cultural concern. As Taylor and Norma ( 2013 ) argue, the interpersonal barriers that women of culturally or racially diverse backgrounds in Australia face in disclosing childhood sexual abuse have often been described as “cultural”, but they reflect a “familial culture” rather than an aspect of ethnic culture: barriers to reporting sexual abuse arise from wanting to protect family and community from shame, stigma, or loss of dignity in a society where a community as a whole can be racially and culturally vilified for the actions of a few offenders.

Researchers have found that such “familial culture” barriers were experienced by many survivors in other highly racialised contexts. For example, in South Africa, the desire of families to preserve the dignity of the family and avoid shame in the community may have inhibited children from wanting to disclose sexual abuse, consequently prioritising the reputation of the family over disclosure (Ramphabana et al., 2019 ). Likewise, in East Asian communities in Canada, the concern that such a negative incident can ruin the family’s and the victim’s reputation and damage relationships with other community members can also dissuade disclosure by children and reporting by their families (Roberts et al., 2016 ). When living within cultural norms that promote self-scrutiny, children feel responsible for their actions and may blame themselves for the abuse or for the impacts of disclosing (Roberts et al., 2016 ).

Fear of family disruption or breaking up the family, including placement in foster care or involvement with the criminal justice system (Allnock, 2017 ), was also mentioned as a barrier to disclosure. This was found particularly in contexts where perpetrators contribute financially to the family or are the breadwinners upon whom the children rely for survival. These fears may be compounded within cultures with strong patriarchal values, where male dominance over women and children is normalised or socially accepted. This has been witnessed in East Asian communities in Canada, which are greatly influenced by Confucian philosophy and patriarchal lineage and where societal and familial harmony is expected to outweigh personal needs. Taken together, this could contribute significantly to the low reporting rate of child sexual abuse among Asian children, which is disproportionate to that of Caucasian children in Canada (Roberts et al., 2016 ). Other factors for low disclosure are linked to fears of condemnation or the desire to protect parents, family, and community from reprisal, including, in extreme circumstances, fear of ostracisation, death threats, honour killings (Marmor, 2023 ), physical violence, the risk of being disowned by family or expelled from school, discrimination, isolation from social networks, and emotional abuse within the community (Obong'o et al., 2020 ). For already vulnerable, minoritised communities, this creates a double layer of vulnerability in broader society.

How a community views sex can also make it difficult for children, families, and communities to identify and disclose child sexual abuse, particularly in sexually conservative, religious-cultural contexts where sex may be taboo, stigmatising, or disrespectful to discuss with children (Ramphabana et al., 2019 ). In a study from Zimbabwe, stigma and discrimination arising from being labelled as having sexually transmitted diseases or having lost their virginity were expressed as fears inhibiting disclosure (Obong'o et al., 2020 ). There are also religious prohibitions against reporting sexual abuse or violence to secular authorities (Marmor, 2023 ), as this would tarnish the religious community’s image in secular contexts. This suggests that the emphasis on purity culture, the silencing of discussions of sexuality, diminished reporting due to fear of the influence of secular values, and reliance on disclosing to religious authority figures rather than professionals act as religious and cultural barriers to reporting child sexual abuse (Lusky-Weisrose et al., 2022 ). Combined, these factors reduce survivors’ ability to identify and disclose child sexual abuse, compound institutional barriers, and add layers of possible isolation in cultural contexts that also serve as social protection for minoritised groups.

Gender Issues

The role of gender in child sexual abuse disclosure was identified as a noteworthy barrier. Researchers highlight the difference in disclosure patterns of male child sexual abuse survivors, whose disclosures tend to be delayed for years or even decades compared to those of female survivors, and some male survivors were found to have lower rates of ever disclosing the abuse (Easton, 2013 ; Easton et al., 2014 ). Like many survivors of child sexual abuse, male survivors feared not being believed, justifiably so, as historically there was a lack of awareness of the existence of male child sexual abuse, despite researchers finding that approximately 15% of adult men report being sexually abused during childhood (Easton et al., 2014 ). The mass media coverage of institutional abuse scandals, such as those involving the Catholic Church, the Boy Scouts of America, and Penn State University, has now raised public awareness of the sexual abuse of boys and of how the impacts of child sexual abuse, such as deep-seated rage, shame, spiritual distress, and stigma (Easton, 2013 ), have influenced delayed or non-disclosure.

Gendered societal norms also strongly influence individual, group, and societal ideas and behaviours towards male sexual abuse (Sivagurunathan et al., 2019 ). These include, notably, ideas of male gender identity, masculinity, and masculine norms such as winning, emotional control, homophobia, and self-reliance, including negative attitudes towards victimhood and help-seeking. Additionally, as boys are often sexually abused by other males, many survivors fear the stigma of being labelled homosexual (Easton, 2013 ; Easton et al., 2014 ). Some survivors who self-identified as gay or bisexual also feared that others would use their abuse to explain their sexual orientation, saying it “made me gay” (Easton, 2013 ; Easton et al., 2014 ). Other survivors also questioned their sexual orientation due to their abuse experiences, blamed themselves, or feared being seen by others as having unconsciously invited the abuse (Sivagurunathan et al., 2019 ).

External barriers to disclosure were also identified in relation to child protection workers, law enforcement, and clinicians (Easton, 2013 ), as well as religious institutions, such as churches and mosques, which were also found to have obstructed the identification and treatment of child sexual abuse in males due to societal attitudes about sex and the stigma of child sexual abuse. Additionally, there is a double standard in how the sexual abuse of men is framed in mainstream media, in a society that tends to glorify the sexual abuse of male children as sexual initiation or sexual prowess if the perpetrator is an older woman. These double standards may, in turn, result in further reluctance among male child sexual abuse survivors to disclose such experiences (Sivagurunathan et al., 2019 ), which is part of the reason why the helpfulness of responses to child sexual abuse disclosure across a male survivor’s lifespan is mixed (Easton, 2013 ). Combined, these factors link to larger societal issues around gendered social expectations and how they impact child sexual abuse disclosure. If hegemonic masculinity and conformity to traditional gendered roles lead male survivors to delay disclosure or not disclose at all, a question arises concerning the child sexual abuse experiences of transgender and gender-diverse people, who are disproportionately affected by prejudice-motivated discrimination and violence.

Implications for Policy, Practice and Further Research

Thirty-two manuscripts were reviewed to respond to the question: What is known about what influences or enables children and young people to disclose their experience of child sexual abuse, and what are the barriers to disclosure?

This review found that a significant enabler for disclosure is the presence of a safe relationship. This finding is consistent with emerging knowledge about the impact of trauma, which suggests that children may first choose to disclose to a friend or person they trust. Another clear finding in the literature is that disclosure should not be conceptualised as a single event at a point in time. Disclosure is seen as multifaceted, contextual and likely to be iterative, taking place over time. This raises critical questions about the extent to which legislative, policy and practice frameworks are sensitised to this finding.

These findings should contribute to the design of policies that support practices enabling children to experience safe spaces and relationships within which they may feel able to disclose, in their own time, the abuse that they have experienced. Services designed to engage and support all children and young people, including schools, sports and recreation facilities, should give attention to strategies that promote a sense of safety for their child participants. These services should be accompanied by clearly articulated policies to support children and young people through the process of disclosure. In addition, services designed to respond to child victims, such as statutory child protection and police, must be designed with children in mind. In practice, adult-centric forensic models of interviewing used by police and child protection may be premised on a single contact with the child. This approach may not match the child’s need to reveal details of their experience over time in what we know to be an often iterative process. All children’s services should become familiar with the behavioural indicators through which some children, particularly younger children, may disclose instead of using words.

The research gaps identified here point to priorities for future research. Critical questions are raised, for example, by the lack of studies on diverse cohorts, including LGBTQIA+ survivors, Indigenous survivors, and survivors living with a disability. While the existing research addresses the facilitators of disclosure to some extent, the literature reporting on barriers to disclosure is considerably larger. Policymakers, practitioners and researchers need a more in-depth understanding of these obstacles, including the broader social and sociocultural barriers.

Further research that hears from a diverse cohort of survivors about their experiences of disclosing child sexual abuse is urgently needed. Overall, this review highlights the need to advance the understanding of the processes of child sexual abuse disclosure across diverse cohorts and contexts to improve service systems’ capacity to listen, hear, and respond appropriately to children and young people.


Limitations of the Study

Several methodological limitations apply to this analysis. The review is unlikely to have identified all relevant literature, given the scope of the databases searched and the likelihood that not all contemporary search terms were used, which may limit its comprehensiveness. The research question sought information about disclosures of child sexual abuse; however, many practice responses to disclosure are likely to be unpublished in scholarly journals. As grey literature was excluded, potentially valuable insights from reports, theses, conference papers, and other non-peer-reviewed sources were not considered. This limitation is compounded by the inherent difficulty of drawing generalisable conclusions from scoping reviews, which encompass a variety of methodologies, populations, and contexts.

Another limitation is that only articles published in English were included, potentially resulting in the exclusion of crucial studies published in other languages. Additionally, the reliance on peer-reviewed journals may introduce publication bias, as studies with significant or positive results from the UK or North America are more likely to be published. There is also the possibility of subjective bias, as the identification and interpretation of themes depend on the researchers' perspectives.

Furthermore, as it is not within the remit of a scoping review to assess the quality of included studies, findings from lower-quality studies are considered alongside those from higher-quality studies without differentiation. However, the decision to include only scholarly literature that has undergone independent double-blind peer review was made to reduce the risks of poor quality and publication bias.

Conclusion

Rather than simply being a one-off event, the disclosure of child sexual abuse is often a complex and ongoing process (Alaggia et al., 2019). More is known about barriers than enablers to disclosure, with barriers dominating the published literature sourced in this review. It is evident that, for children and young people, talking about the abuse they have endured can be overwhelmingly challenging at personal, interpersonal, and broader societal levels.

This review also raised critical questions about how service systems respond when children and young people begin to disclose, particularly the extent to which policies and systems are designed to reflect children’s best interests.

Adults noticing when children and young people are distressed helps victims and survivors to disclose, as does creating trusting relationships to provide opportunities to tell their stories (Russell et al., 2023 ). To whom children elect to disclose is an important question, with recent research suggesting that when children and young people feel unsafe, they are more likely to tell a friend than an adult (Russell et al., 2023 ). Research is urgently required to develop a more robust understanding of the enablers of disclosure across diverse populations. This research needs to privilege the voices of victims and survivors with lived and living experiences of child sexual abuse.

References

*Denotes a reference included in this scoping review

Alaggia, R. (2004). Many ways of telling: Expanding conceptualizations of child sexual abuse disclosure. Child Abuse & Neglect, 28 (11), 1213–1227. https://doi.org/10.1016/j.chiabu.2004.03.016

Alaggia, R. (2010). An ecological analysis of child sexual abuse disclosure: Considerations for child and adolescent mental health. Journal of the Canadian Academy of Child and Adolescent Psychiatry, 19 (1), 32–39.

Alaggia, R., Collin-Vézina, D., & Lateef, R. (2019). Facilitators and barriers to child sexual abuse (CSA) disclosures: A research update (2000–2016). Trauma, Violence, & Abuse, 20 (2), 260–283. https://doi.org/10.1177/1524838017697312

*Allnock, D. S. (2017). Memorable life events and disclosure of child sexual abuse: Possibilities and challenges across diverse contexts. Families, Relationships and Societies, 6 (2), 185–200. https://doi.org/10.1332/204674317X14866455118142

Arksey, H., & O’Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8 (1), 19–32. https://doi.org/10.1080/1364557032000119616

*Braithwaite, J. (2018). Colonized silence: Confronting the colonial link in rural Alaska native survivors’ non-disclosure of child sexual abuse. Journal of Child Sexual Abuse, 27 (6), 589–611. https://doi.org/10.1080/10538712.2018.1491914

Braun, V., & Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18 (3), 328–352. https://doi.org/10.1080/14780887.2020.1769238

Bronfenbrenner, U. (1979). Reality and research in the ecology of human development. Proceedings of the American Philosophical Society, 119 (6), 439–469.

Cashmore, J., Taylor, A., & Parkinson, P. (2020). Fourteen-year trends in the criminal justice response to child sexual abuse reports in New South Wales. Child Maltreatment, 25 (1), 85–95. https://doi.org/10.1177/1077559519853042

Christensen, L. S., Sharman, S. J., & Powell, M. B. (2016). Identifying the characteristics of child sexual abuse cases associated with the child or child’s parents withdrawing the complaint. Child Abuse & Neglect, 57 , 53–60. https://doi.org/10.1016/j.chiabu.2016.05.004

*Collin-Vézina, D., De La Sablonnière-Griffin, M., Palmer, A. M., & Milne, L. (2015). A preliminary mapping of individual, relational, and social factors that impede disclosure of childhood sexual abuse. Child Abuse & Neglect, 43 , 123–134. https://doi.org/10.1016/j.chiabu.2015.03.010

*Devgun, M., Roopesh, B. N., & Seshadri, S. (2021). Breaking the silence: Development of a qualitative measure for inquiry of child sexual abuse (CSA) awareness and perceived barriers to CSA disclosure. Asian Journal of Psychiatry, 57 , 102558–102558. https://doi.org/10.1016/j.ajp.2021.102558

*Easton, S. D. (2013). Disclosure of child sexual abuse among adult male survivors. Clinical Social Work Journal, 41 (4), 344–355. https://doi.org/10.1007/s10615-012-0420-3

*Easton, S. D., Saltzman, L. Y., & Willis, D. G. (2014). “Would you tell under circumstances like that?”: Barriers to disclosure of child sexual abuse for men. Psychology of Men & Masculinity, 15 (4), 460–469. https://doi.org/10.1037/a0034223

Finkelhor, D., Turner, H., & Colburn, D. (2024). The prevalence of child sexual abuse with online sexual abuse added. Child Abuse & Neglect, 149 , 106634. https://doi.org/10.1016/j.chiabu.2024.106634

*Funston, L. (2013). Aboriginal and Torres Strait Islander worldviews and cultural safety transforming sexual assault service provision for children and young people. International Journal of Environmental Research and Public Health, 10 (9), 3818–3833. https://doi.org/10.3390/ijerph10093818

Gardner, F. (2012). Defensive processes and deception: An analysis of the response of the institutional church to disclosures of child sexual abuse. British Journal of Psychotherapy, 28 (1), 98–109. https://doi.org/10.1111/j.1752-0118.2011.01255.x

Gemara, N., & Katz, C. (2023). “It was really hard for me to tell”: The gap between the child’s difficulty in disclosing sexual abuse, and their perception of the disclosure recipient’s response. Journal of Interpersonal Violence, 38 (1–2), 2068–2091. https://doi.org/10.1177/08862605221099949

*Gruenfeld, E., Willis, D. G., & Easton, S. D. (2017). “A very steep climb”: Therapists’ perspectives on barriers to disclosure of child sexual abuse experiences for men. Journal of Child Sexual Abuse, 26 (6), 731–751. https://doi.org/10.1080/10538712.2017.1332704

*Halvorsen, J. E., Tvedt Solberg, E., & Hjelen Stige, S. (2020). “To say it out loud is to kill your own childhood”: An exploration of the first person perspective of barriers to disclosing child sexual abuse. Children and Youth Services Review, 113 , 104999. https://doi.org/10.1016/j.childyouth.2020.104999

Human Rights and Equal Opportunity Commission. (1997). Bringing them home: Inquiry into the separation of indigenous children from their families . Human Rights and Equal Opportunity Commission.

*Kasstan, B. (2022). Everyone’s accountable? Peer sexual abuse in religious schools, digital revelations, and denominational contests over protection. Religions (Basel), 13 (6), 556. https://doi.org/10.3390/rel13060556

*Lusky-Weisrose, E., Fleishman, T., & Tener, D. (2022). “A little bit of light dispels a lot of darkness”: Online disclosure of child sexual abuse by authority figures in the Ultraorthodox Jewish community in Israel. Journal of Interpersonal Violence, 37 , NP17758–NP17783. https://doi.org/10.1177/08862605211028370

*Magnusson, M., Ernberg, E., & Landström, S. (2017). Preschoolers’ disclosures of child sexual abuse: Examining corroborated cases from Swedish courts. Child Abuse & Neglect, 70 , 199–209. https://doi.org/10.1016/j.chiabu.2017.05.018

Mahood, Q., Eerd, D. V., & Irvin, E. (2014). Searching for grey literature for systematic reviews: Challenges and benefits. Research Synthesis Methods, 5 (3), 221–234. https://doi.org/10.1002/jrsm.1106

*Marmor, A. (2023). “I never said anything. I didn’t tell anyone. What would I tell?” Adults’ perspectives on disclosing childhood sibling sexual behavior and abuse in the Orthodox Jewish communities. Journal of Interpersonal Violence, 38 , 10839–10864. https://doi.org/10.1177/08862605231175906

Mathews, B., Pacella, R., Scott, J. G., Finkelhor, D., Meinck, F., Higgins, D. J., Erskine, H. E., Thomas, H. J., Lawrence, D. M., Haslam, D. M., Malacova, E., & Dunne, M. P. (2023). The prevalence of child maltreatment in Australia: Findings from a national survey. Medical Journal of Australia, 218 (S6), S13–S18. https://doi.org/10.5694/mja2.51873

McPherson, L., Gatwiri, K., Cameron, N., & Parmenter, N. (2019). The evidence base for therapeutic group care: A systematic scoping review. Centre for Excellence in Therapeutic Care. https://www.cetc.org.au/the-evidence-base-for-therapeutic-group-care-a-systematic-scoping-review-research-brief/

McPherson, L., Long, M., Nicholson, M., Cameron, N., Atkins, P., & Morris, M. E. (2017). Secrecy surrounding the physical abuse of child athletes in Australia. Australian Social Work, 70 (1), 42–53. https://doi.org/10.1080/0312407X.2016.1142589

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6 (7), e1000097. https://doi.org/10.1371/journal.pmed.1000097

*Mooney, J. (2021). How adults tell: a study of adults’ experiences of disclosure to child protection social work services. Child Abuse Review, 30 (3), 193–209. https://doi.org/10.1002/car.2677

Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18 (1), 143–143. https://doi.org/10.1186/s12874-018-0611-x

*Obong’o, C. O., Patel, S. N., Cain, M., Kasese, C., Mupambireyi, Z., Bangani, Z., Pichon, L. C., & Miller, K. S. (2020). Suffering whether you tell or don’t tell: Perceived re-victimization as a barrier to disclosing child sexual abuse in Zimbabwe. Journal of Child Sexual Abuse, 29 (8), 944–964. https://doi.org/10.1080/10538712.2020.1832176

Oliver, S. (2001). Making research more useful: Integrating different perspectives and different methods. In S. Oliver & G. Peersman (Eds.), Using research for effective health promotion (pp. 167–179). Open University Press.

*Pacheco, E. L. M., Buenaventura, A. E., & Miles, G. M. (2023). “She was willing to send me there”: Intrafamilial child sexual abuse, exploitation and trafficking of boys. Child Abuse & Neglect, 142 (Pt 2), 105849–105849. https://doi.org/10.1016/j.chiabu.2022.105849

Pereda, N., Guilera, G., Forns, M., & Gómez-Benito, J. (2009). The international epidemiology of child sexual abuse: A continuation of Finkelhor (1994). Child Abuse & Neglect, 33 (6), 331–342. https://doi.org/10.1016/j.chiabu.2008.07.007

*Rakovec-Felser, Z., & Vidovič, L. (2016). Maternal perceptions of and responses to child sexual abuse. Zdravstveno Varstvo, 55 (2), 1–7. https://doi.org/10.1515/sjph-2016-0017

*Ramphabana, L. B., Rapholo, S. F., & Makhubele, J. C. (2019). The influence of socio-cultural practices amongst Vhavenda towards the disclosure of child sexual abuse: Implications for practice. Gender & Behaviour, 17 (4), 13948–13961.

*Roberts, K. P., Qi, H., & Zhang, H. H. (2016). Challenges facing East Asian immigrant children in sexual abuse cases. Canadian Psychology / Psychologie canadienne, 57 (4), 300–307. https://doi.org/10.1037/cap0000066

*Romano, E., Moorman, J., Ressel, M., & Lyons, J. (2019). Men with childhood sexual abuse histories: Disclosure experiences and links with mental health. Child Abuse & Neglect, 89 , 212–224. https://doi.org/10.1016/j.chiabu.2018.12.010

*Rothman, E. F., Bazzi, A. R., & Bair-Merritt, M. (2015). “I’ll do whatever as long as you keep telling me that I’m important”: a case study illustrating the link between adolescent dating violence and sex trafficking victimization. The Journal of Applied Research on Children . https://doi.org/10.58464/2155-5834.1238

*Russell, D. H., & Higgins, D. J. (2023). Friends and safeguarding: Young people’s views about safety and to whom they would share safety concerns. Child Abuse Review, 32 (3), e2825. https://doi.org/10.1002/car.2825

*Sivagurunathan, M., Orchard, T., MacDermid, J. C., & Evans, M. (2019). Barriers and facilitators affecting self-disclosure among male survivors of child sexual abuse: The service providers’ perspective. Child Abuse & Neglect, 88 , 455–465. https://doi.org/10.1016/j.chiabu.2018.08.015

Stoltenborgh, M., Bakermans-Kranenburg, M. J., Alink, L. R. A., & van Ijzendoorn, M. H. (2015). The prevalence of child maltreatment across the globe: Review of a series of meta-analyses. Child Abuse Review, 24 (1), 37–50. https://doi.org/10.1002/car.2353

Summit, R. C. (1983). The child sexual abuse accommodation syndrome. Child Abuse & Neglect, 7 (2), 177–193. https://doi.org/10.1016/0145-2134(83)90070-4

*Swain, S. (2015). Giving voice to narratives of institutional sex abuse. The Australian Feminist Law Journal, 41 (2), 289–304. https://doi.org/10.1080/13200968.2015.1077554

*Tat, M. C., & Ozturk, A. (2019). Ecological system model approach to self-disclosure process in child sexual abuse. Current Approaches to Psychiatry, 11 (3), 363–386. https://doi.org/10.18863/pgy.455511

*Taylor, S. C., & Norma, C. (2013). The ties that bind: Family barriers for adult women seeking to report childhood sexual assault in Australia. Women’s Studies International Forum, 37 , 114–124. https://doi.org/10.1016/j.wsif.2012.11.004

*Tolliday, D. (2016). “Until we talk about everything, everything we talk about is just whistling into the wind”: An interview with Pam Greer and Sigrid (‘Sig’) Herring. Sexual Abuse in Australia and New Zealand, 7 (1), 70–80.

*Ullman, S. E. (2023). Facilitators of sexual assault disclosure: A dyadic study of female survivors and their informal supports. Journal of Child Sexual Abuse, 32 (5), 615–636. https://doi.org/10.1080/10538712.2023.2217812

*Wager, N. M. (2015). Understanding children’s non-disclosure of child sexual assault: Implications for assisting parents and teachers to become effective guardians. Safer Communities, 14 (1), 16–26. https://doi.org/10.1108/SC-03-2015-0009

*Wallis, C. R. D., & Woodworth, M. D. (2020). Child sexual abuse: An examination of individual and abuse characteristics that may impact delays of disclosure. Child Abuse & Neglect, 107 , 104604–104604. https://doi.org/10.1016/j.chiabu.2020.104604

*Weiss, K. G. (2013). “You just don’t report that kind of stuff”: Investigating teens’ ambivalence toward peer-perpetrated, unwanted sexual incidents. Violence and Victims, 28 (2), 288–302. https://doi.org/10.1891/0886-6708.11-061

Acknowledgements

This project was funded by the National Centre for Action on Child Sexual Abuse. The findings and views reported within are those of the authors.

Open Access funding enabled and organized by CAUL and its Member Institutions.

Author information

Authors and Affiliations

Faculty of Health, Centre for Children and Young People, Southern Cross University, Locked Mail Bag 4, Coolangatta, QLD, 4225, Australia

Lynne McPherson, Kathomi Gatwiri, Anne Graham, Kelly Hand & Samara James

Gnibi College of Australian Indigenous Peoples, Southern Cross University, Lismore, Australia

Darlene Rotumah

Department of Rural Allied Health, La Trobe University, Shepparton, Australia

Corina Modderman

Centre Against Violence, Wangaratta, Australia

Jaime Chubb

Corresponding author

Correspondence to Lynne McPherson .

Ethics declarations

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

McPherson, L., Gatwiri, K., Graham, A. et al. What Helps Children and Young People to Disclose their Experience of Sexual Abuse and What Gets in the Way? A Systematic Scoping Review. Child Youth Care Forum (2024). https://doi.org/10.1007/s10566-024-09825-5

Accepted : 08 September 2024

Published : 18 September 2024

DOI : https://doi.org/10.1007/s10566-024-09825-5


Keywords

  • Sexual abuse
  • Child abuse
  • Systematic scoping review


    Background Global research has found that prevalence rates of child sexual abuse suggest that this is a significant ongoing public health concern. A recent Australian study, for example, revealed that more than three girls and almost one in five boys reported experiencing sexual abuse before the age of 18. Self-reported rates of abuse, however, far exceed official figures, suggesting that ...