Research Design – Types, Methods and Examples

Research Design

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction: This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods: This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results: This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion: This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References: This section lists the sources cited in the research design.

Example of Research Design

An example of a research design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach: The research approach will be quantitative, as it involves collecting numerical data to test the hypothesis.
  • Research design: The study will use a quasi-experimental, pretest-posttest control group design.
  • Sample: The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection: The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis: The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups (a brief sketch of such a comparison follows this list).
  • Limitations: The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
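To make the data analysis step above concrete, here is a minimal sketch of comparing the two groups' mean scores with an independent-samples t-test. The scores, group sizes, and variable names are invented purely for demonstration; they are not data from the study described above.

```python
# Illustrative sketch: comparing mean academic-performance scores of two groups.
# The scores below are made-up placeholder values.
from scipy import stats

experimental_scores = [72, 68, 75, 80, 66, 71, 78, 74, 69, 73]  # heavy social media users
control_scores = [70, 74, 79, 82, 77, 75, 81, 76, 78, 80]       # comparison group

# Independent-samples t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(experimental_scores, control_scores)

print(f"Mean (experimental): {sum(experimental_scores) / len(experimental_scores):.1f}")
print(f"Mean (control):      {sum(control_scores) / len(control_scores):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value below the chosen significance level (commonly 0.05) would indicate a
# significant difference in academic performance between the two groups.
```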

How to Write Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis: Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan: If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods: Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns: Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education : Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences : In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business : Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering : In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach : A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability : A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.

Research Design Vs Research Methodology

Research Design | Research Methodology
The plan and structure for conducting research that outlines the procedures to be followed to collect and analyze data. | The set of principles, techniques, and tools used to carry out the research plan and achieve research objectives.
Describes the overall approach and strategy used to conduct research, including the type of data to be collected, the sources of data, and the methods for collecting and analyzing data. | Refers to the techniques and methods used to gather, analyze and interpret data, including sampling techniques, data collection methods, and data analysis techniques.
Helps to ensure that the research is conducted in a systematic, rigorous, and valid way, so that the results are reliable and can be used to make sound conclusions. | Includes a set of procedures and tools that enable researchers to collect and analyze data in a consistent and valid manner, regardless of the research design used.
Common research designs include experimental, quasi-experimental, correlational, and descriptive studies. | Common research methodologies include qualitative, quantitative, and mixed-methods approaches.
Determines the overall structure of the research project and sets the stage for the selection of appropriate research methodologies. | Guides the researcher in selecting the most appropriate research methods based on the research question, research design, and other contextual factors.
Helps to ensure that the research project is feasible, relevant, and ethical. | Helps to ensure that the data collected is accurate, valid, and reliable, and that the research findings can be interpreted and generalized to the population of interest.


Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental . 

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
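To make this concrete, here is a minimal sketch of how such a correlational analysis might be run, assuming exercise frequency and one health indicator (resting heart rate) were recorded per participant. The values and variable names are illustrative, not real study data.

```python
# Illustrative sketch: measuring the strength and direction of a relationship
# between exercise frequency and a health indicator (placeholder values).
from scipy import stats

exercise_sessions_per_week = [0, 1, 2, 2, 3, 4, 4, 5, 6, 7]
resting_heart_rate_bpm = [78, 76, 74, 75, 70, 68, 69, 64, 62, 60]

# Pearson's r ranges from -1 (perfect negative) to +1 (perfect positive).
r, p_value = stats.pearsonr(exercise_sessions_per_week, resting_heart_rate_bpm)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A strongly negative r here would suggest that more frequent exercise tends to
# accompany a lower resting heart rate - a relationship, not proof of causation.
```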

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions , and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which will look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables . With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling other variables, and measure the resulting effect on the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
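As a rough sketch of how the group comparison in this example could be analysed, the snippet below runs a one-way ANOVA across three fertiliser conditions. The growth measurements are invented purely for illustration.

```python
# Illustrative sketch: does mean plant growth (cm) differ across fertiliser groups?
# All measurements below are placeholder values.
from scipy import stats

growth_fertiliser_a = [12.1, 13.4, 11.8, 12.9, 13.0]
growth_fertiliser_b = [14.2, 15.1, 13.8, 14.7, 15.0]
growth_no_fertiliser = [9.8, 10.2, 9.5, 10.0, 10.4]

# One-way ANOVA tests whether at least one group mean differs from the others.
f_stat, p_value = stats.f_oneway(growth_fertiliser_a, growth_fertiliser_b, growth_no_fertiliser)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the fertiliser conditions did not all produce the same
# mean growth; post hoc comparisons would identify which groups differ.
```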

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
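A minimal sketch of random assignment (as opposed to random sampling) is shown below, assuming a simple two-group design; the participant IDs are placeholders.

```python
# Illustrative sketch: randomly assigning recruited participants to two conditions,
# so each participant has an equal chance of landing in either group.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 already-recruited participants

random.shuffle(participants)
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]  # e.g., receives the intervention
control_group = participants[midpoint:]    # e.g., receives no intervention / placebo

print("Treatment group:", treatment_group)
print("Control group:  ", control_group)
```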

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological , grounded theory , ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.




Research Design: What it is, Elements & Types


Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

The research design specifies the type of research (experimental, survey research, correlational, semi-experimental, review) and its sub-type (e.g., a specific experimental design, research problem, or descriptive case study).

A research design covers three main aspects:

  • Data collection
  • Measurement
  • Data analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approaches: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of Research Design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as simple random sampling, stratified random sampling, or convenience sampling (a short sketch contrasting two of these appears after this list).
  • Choose your data collection methods: Decide on the data collection methods , such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis , content analysis, or discourse analysis, and plan how to interpret the results.
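The following is a minimal sketch contrasting simple random sampling with stratified random sampling, two of the options mentioned in the list above. The population, the strata, and the sample size are hypothetical.

```python
# Illustrative sketch: simple random vs. stratified random sampling.
# The population and the "school" strata below are hypothetical.
import random

population = [{"id": i, "school": "A" if i < 600 else "B"} for i in range(1000)]
sample_size = 100

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=sample_size)

# Stratified random sampling: sample each stratum in proportion to its size,
# so both schools are represented in the expected proportions.
stratified_sample = []
for school in ("A", "B"):
    stratum = [person for person in population if person["school"] == school]
    k = round(sample_size * len(stratum) / len(population))
    stratified_sample.extend(random.sample(stratum, k))

print(len(simple_sample), len(stratified_sample))  # 100 100
```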

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.

Research Design Elements

Impactful research usually minimizes bias in the data and increases trust in the accuracy of the collected data. A design that produces the smallest margin of error in experimental research is generally considered the desired outcome. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing research
  • The method applied for analyzing collected details
  • Type of research methodology
  • Probable objections to research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success. Successful research studies provide insights that are accurate and unbiased. You’ll need to create a survey that meets all of the main characteristics of a design. There are four key characteristics:


  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The results projected in the research should be free from research bias and neutral. Understand opinions about the final evaluated scores and conclusions from multiple individuals and consider those who agree with the results.
  • Reliability: With regularly conducted research, the researcher expects similar results every time. You’ll only be able to reach the desired results if your design is reliable. Your plan should indicate how to form research questions to ensure the standard of results.
  • Validity: There are multiple measuring tools available. However, the only correct measuring tools are those which help a researcher in gauging results according to the objective of the research. The  questionnaire  developed from this design will then be valid.
  • Generalization:  The outcome of your design should apply to a population and not just a restricted sample . A generalized method implies that your survey can be conducted on any part of a population with similar accuracy.

The above factors affect how respondents answer the research questions, so a good design should balance all of these characteristics.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research determines relationships between collected data and observations without relying on mathematical calculations. Rather than using statistical methods to prove or disprove theories about a naturally occurring phenomenon, researchers rely on qualitative observation methods to understand “why” a particular phenomenon exists and “what” respondents have to say about it.

Quantitative research

Quantitative research is used where statistical conclusions are essential for collecting actionable insights. Numbers provide a better perspective for making critical business decisions. Quantitative research methods are necessary for the growth of any organization, because insights drawn from complex numerical data and analysis prove to be highly effective when making decisions about the business’s future.

Qualitative Research vs Quantitative Research

Here is a chart that highlights the major differences between qualitative and quantitative research:

Qualitative Research | Quantitative Research
Focus on explaining and understanding experiences and perspectives. | Focus on quantifying and measuring phenomena.
Use of non-numerical data, such as words, images, and observations. | Use of numerical data, such as statistics and surveys.
Usually uses small sample sizes. | Usually uses larger sample sizes.
Typically emphasizes in-depth exploration and interpretation. | Typically emphasizes precision and objectivity.
Data analysis involves interpretation and narrative analysis. | Data analysis involves statistical analysis and hypothesis testing.
Results are presented descriptively. | Results are presented numerically and statistically.

In summary, qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research is more focused on objective data and statistical analysis.

You can further break down the types of research design into five categories:


1. Descriptive: In a descriptive design, a researcher is solely interested in describing the situation or case under study. It is a theory-based design method created by gathering, analyzing, and presenting collected data. This allows a researcher to provide insights into the why and how of the research. Descriptive design helps others better understand the need for the research. If the problem statement is not clear, you can conduct exploratory research instead.

2. Experimental: Experimental research establishes a relationship between the cause and effect of a situation. It is a causal research design where one observes the impact caused by the independent variable on the dependent variable. For example, one monitors the influence of an independent variable such as a price on a dependent variable such as customer satisfaction or brand loyalty. It is an efficient research method as it contributes to solving a problem.

The independent variable is manipulated to monitor the change it has on the dependent variable. Social sciences often use this design to observe human behavior by analyzing two groups. Researchers can have participants change their actions and study how the people around them react to understand social psychology better.

3. Correlational research: Correlational research is a non-experimental research technique. It helps researchers establish a relationship between two closely connected variables. No assumptions are made while evaluating the relationship between the two variables, and statistical analysis techniques are used to calculate the relationship between them. This type of research requires data on two different variables.

A correlation coefficient determines the correlation between two variables whose values range between -1 and +1. If the correlation coefficient is towards +1, it indicates a positive relationship between the variables, and -1 means a negative relationship between the two variables. 

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts of the research:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research: Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the what, how, and why of the research questions.

Benefits of Research Design

There are several benefits of having a well-designed research plan, including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: Research design helps to minimize the risk of bias and to control extraneous variables, ensuring the validity and reliability of the results.
  • Improved data collection: Research design helps to ensure that the proper data is collected, and that it is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed study helps ensure the results are communicated clearly and effectively to the research team and external stakeholders.
  • Efficient use of resources: Research design helps to ensure that resources are used efficiently, reducing the risk of waste and maximizing the impact of the research.

A well-designed research plan is essential for successful research, providing clear and meaningful insights and ensuring that resources are used effectively.

QuestionPro offers a comprehensive solution for researchers looking to conduct research. With its user-friendly interface, robust data collection and analysis tools, and the ability to integrate results from multiple sources, QuestionPro provides a versatile platform for designing and executing research projects.

Our robust suite of research tools provides you with all you need to derive research results. Our online survey platform includes custom point-and-click logic and advanced question types. Uncover the insights that matter the most.


What is Research Design? Understand Types of Research Design, with Examples

Have you been wondering “ what is research design ?” or “what are some research design examples ?” Are you unsure about the research design elements or which of the different types of research design best suit your study? Don’t worry! In this article, we’ve got you covered!   


What is research design?  


A research design is the plan or framework used to conduct a research study. It involves outlining the overall approach and methods that will be used to collect and analyze data in order to answer research questions or test hypotheses. A well-designed research study should have a clear and well-defined research question, a detailed plan for collecting data, and a method for analyzing and interpreting the results. A well-thought-out research design addresses all these features.  

Research design elements  

Research design elements include the following:  

  • Clear purpose: The research question or hypothesis must be clearly defined and focused.  
  • Sampling: This includes decisions about sample size, sampling method, and criteria for inclusion or exclusion. The approach varies for different research design types .  
  • Data collection: This research design element involves the process of gathering data or information from the study participants or sources. It includes decisions about what data to collect, how to collect it, and the tools or instruments that will be used.  
  • Data analysis: All research design types require analysis and interpretation of the data collected. This research design element includes decisions about the statistical tests or methods that will be used to analyze the data, as well as any potential confounding variables or biases that may need to be addressed.  
  • Type of research methodology: This includes decisions about the overall approach for the study.  
  • Time frame: An important research design element is the time frame, which includes decisions about the duration of the study, the timeline for data collection and analysis, and follow-up periods.  
  • Ethical considerations: The research design must include decisions about ethical considerations such as informed consent, confidentiality, and participant protection.  
  • Resources: A good research design takes into account decisions about the budget, staffing, and other resources needed to carry out the study.  

The elements of research design should be carefully planned and executed to ensure the validity and reliability of the study findings. Let’s go deeper into the concepts of research design .    


Characteristics of research design  

Some basic characteristics of research design are common to different research design types . These characteristics of research design are as follows:  

  • Neutrality: Right from the study assumptions to setting up the study, a neutral stance must be maintained, free of pre-conceived notions. The researcher’s expectations or beliefs should not color the findings or interpretation of the findings. Accordingly, a good research design should address potential sources of bias and confounding factors to be able to yield unbiased and neutral results.
  • Reliability: Reliability is one of the characteristics of research design that refers to consistency in measurement over repeated measures and fewer random errors. A reliable research design must allow for results to be consistent, with few errors due to chance.
  • Validity: Validity refers to the minimization of nonrandom (systematic) errors. A good research design must employ measurement tools that ensure the validity of the results.
  • Generalizability: The outcome of the research design should be applicable to a larger population and not just a small sample. A generalized method means the study can be conducted on any part of a population with similar accuracy.
  • Flexibility: A research design should allow for changes to be made to the research plan as needed, based on the data collected and the outcomes of the study.

A well-planned research design is critical for conducting a scientifically rigorous study that will generate neutral, reliable, valid, and generalizable results. At the same time, it should allow some level of flexibility.  

Different types of research design  

A research design is essential to systematically investigate, understand, and interpret phenomena of interest. Let’s look at different types of research design and research design examples .  

Broadly, research design types can be divided into qualitative and quantitative research.  

Qualitative research is subjective and exploratory. It determines relationships between collected data and observations. It is usually carried out through interviews with open-ended questions, observations that are described in words, etc.  

Quantitative research is objective and employs statistical approaches. It establishes the cause-and-effect relationship among variables using different statistical and computational methods. This type of research is usually done using surveys and experiments.  

Qualitative research vs. Quantitative research  

   
Qualitative Research | Quantitative Research
Deals with subjective aspects, e.g., experiences, beliefs, perspectives, and concepts. | Measures different types of variables and describes frequencies, averages, correlations, etc.
Deals with non-numerical data, such as words, images, and observations. | Tests hypotheses about relationships between variables; results are presented numerically and statistically.
Data are collected via direct observations, interviews, focus groups, and naturally occurring data. Methods for conducting qualitative research include grounded theory, thematic analysis, and discourse analysis. | Quantitative research design is empirical. Data collection methods include experiments, surveys, and observations expressed in numbers. The research design categories under this are descriptive, experimental, correlational, diagnostic, and explanatory.
Data analysis involves interpretation and narrative analysis. | Data analysis involves statistical analysis and hypothesis testing.
The reasoning used to synthesize data is inductive. | The reasoning used to synthesize data is deductive.
Typically used in fields such as sociology, linguistics, and anthropology. | Typically used in fields such as economics, ecology, statistics, and medicine.
Example: Focus group discussions with women farmers about climate change perception. | Example: Testing the effectiveness of a new treatment for insomnia.

Qualitative research design types and qualitative research design examples  

The following will familiarize you with the research design categories in qualitative research:  

  • Grounded theory: This design is used to investigate research questions that have not previously been studied in depth. Also referred to as exploratory design, it creates sequential guidelines, offers strategies for inquiry, and makes data collection and analysis more efficient in qualitative research.

Example: A researcher wants to study how people adopt a certain app. The researcher collects data through interviews and then analyzes the data to look for patterns. These patterns are used to develop a theory about how people adopt that app.  

  • Thematic analysis: This design is used to identify, analyze, and interpret recurring themes or patterns within qualitative data.

Example: A researcher examines an interview transcript to identify common themes, say, topics or patterns emerging repeatedly.  

  • Discourse analysis: This research design examines how language is used within its social context in the data gathered for qualitative research.

Example: Identifying ideological frameworks and viewpoints of writers of a series of policies.  

Quantitative research design types and quantitative research design examples  

Note the following research design categories in quantitative research:  

  • Descriptive research design: This quantitative research design is applied where the aim is to identify characteristics, frequencies, trends, and categories. It often does not begin with a hypothesis; its basis is a description of an identified variable. This research design type describes the “what,” “when,” “where,” or “how” of phenomena (but not the “why”).

Example: A study on the different income levels of people who use nutritional supplements regularly.  
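Descriptive analyses of this kind usually begin with simple frequency counts. The short Python sketch below uses an entirely hypothetical set of income brackets for supplement users to show one minimal way of tabulating frequencies and percentages.

```python
# Minimal descriptive-analysis sketch using hypothetical income brackets
# reported by supplement users (illustrative data only).
from collections import Counter

income_brackets = ["low", "middle", "middle", "high", "middle", "low", "high", "middle"]

frequencies = Counter(income_brackets)   # count of respondents per bracket
total = sum(frequencies.values())

for bracket, count in frequencies.most_common():
    print(f"{bracket}: {count} respondents ({count / total:.0%})")
```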

  • Correlational research design: Correlation reflects the strength and/or direction of the relationship among variables. The direction of a correlation can be positive or negative. Correlational research design helps researchers establish a relationship between two variables without the researcher controlling any of them.

Example: An example of correlational research design could be studying the correlation between time spent watching crime shows and aggressive behavior in teenagers.
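As a rough illustration of how such a relationship might be quantified, the sketch below uses made-up values for hours of crime shows watched and aggression scores; SciPy's `pearsonr` returns both the strength and direction of the correlation and a p value. Note that the design only establishes association, not causation.

```python
# Illustrative correlational analysis with hypothetical data: weekly hours of
# crime shows watched and a (made-up) aggression questionnaire score.
from scipy.stats import pearsonr

hours_watched    = [2, 5, 1, 8, 4, 7, 3, 6]
aggression_score = [10, 14, 8, 20, 12, 18, 11, 15]

r, p_value = pearsonr(hours_watched, aggression_score)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # the sign of r gives direction, |r| gives strength
```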

  • Diagnostic research design: In diagnostic design, the researcher aims to understand the underlying cause of a specific topic or phenomenon (usually an area of improvement) and find the most effective solution. In simpler terms, a researcher seeks an accurate “diagnosis” of a problem and identifies a solution.

Example: A researcher analyzing customer feedback and reviews to identify areas where an app can be improved.

  • Explanatory research design: In explanatory research design, a researcher uses their ideas and thoughts on a topic to explore their theories in more depth. This design is used to explore a phenomenon when limited information is available. It can help increase current understanding of unexplored aspects of a subject. It is thus a kind of “starting point” for future research.

Example: Formulating hypotheses to guide future studies on delaying school start times for better mental health in teenagers.

  • Causal research design: This can be considered a type of explanatory research. Causal research design seeks to define a cause and effect in its data. The researcher does not use a randomly assigned control group but relies on naturally occurring or pre-existing groupings. Importantly, the researcher does not manipulate the independent variable.

Example: Comparing school dropout levels and possible bullying events.

  • Experimental research design: This research design is used to study causal relationships. One or more independent variables are manipulated, and their effect on one or more dependent variables is measured.

Example: Determining the efficacy of a new vaccine plan for influenza.
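To make the logic of random assignment concrete, here is a minimal, purely illustrative sketch: hypothetical participants are shuffled into vaccine and placebo groups, and made-up infection counts are compared with a chi-square test (one of several tests that could be used, depending on the design).

```python
# Illustrative experimental-design sketch: random assignment plus a simple
# comparison of outcomes. All participant IDs and counts are hypothetical.
import random
from scipy.stats import chi2_contingency

participants = [f"p{i}" for i in range(200)]
random.shuffle(participants)                                  # randomisation reduces selection bias
vaccine_group, placebo_group = participants[:100], participants[100:]

# Hypothetical outcome counts per group: [infected, not infected]
outcomes = [[4, 96],    # vaccine group
            [15, 85]]   # placebo group
chi2, p, dof, expected = chi2_contingency(outcomes)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```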

Benefits of research design  

There are numerous benefits of research design. These are as follows:

  • Clear direction: Among the benefits of research design, the main one is providing direction to the research and guiding the choice of clear objectives, which help the researcher to focus on the specific research questions or hypotheses they want to investigate.
  • Control: Through a proper research design, researchers can control variables, identify potential confounding factors, and use randomization to minimize bias and increase the reliability of their findings.
  • Replication: Research designs provide the opportunity for replication. This helps to confirm the findings of a study and ensures that the results are not due to chance or other factors. Thus, a well-chosen research design also eliminates bias and errors.  
  • Validity: A research design ensures the validity of the research, i.e., whether the results truly reflect the phenomenon being investigated.  
  • Reliability: Benefits of research design also include reducing inaccuracies and ensuring the reliability of the research (i.e., consistency of the research results over time, across different samples, and under different conditions).  
  • Efficiency: A strong research design helps increase the efficiency of the research process. Researchers can use a variety of designs to investigate their research questions, choose the most appropriate research design for their study, and use statistical analysis to make the most of their data. By effectively describing the data necessary for an adequate test of the hypotheses and explaining how such data will be obtained, research design saves a researcher’s time.   

Overall, an appropriately chosen and executed research design helps researchers to conduct high-quality research, draw meaningful conclusions, and contribute to the advancement of knowledge in their field.


Frequently Asked Questions (FAQ) on Research Design

Q: What are the main types of research design?

Broadly speaking, there are two basic types of research design – qualitative and quantitative research. Qualitative research is subjective and exploratory; it determines relationships between collected data and observations. It is usually carried out through interviews with open-ended questions, observations that are described in words, etc. Quantitative research, on the other hand, is more objective and employs statistical approaches. It establishes the cause-and-effect relationship among variables using different statistical and computational methods. This type of research design is usually done using surveys and experiments.

Q: How do I choose the appropriate research design for my study?

Choosing the appropriate research design for your study requires careful consideration of various factors. Start by clarifying your research objectives and the type of data you need to collect. Determine whether your study is exploratory, descriptive, or experimental in nature. Consider the availability of resources, time constraints, and the feasibility of implementing the different research designs. Review existing literature to identify similar studies and their research designs, which can serve as a guide. Ultimately, the chosen research design should align with your research questions, provide the necessary data to answer them, and be feasible given your own specific requirements/constraints.

Q: Can research design be modified during the course of a study?

Yes, research design can be modified during the course of a study based on emerging insights, practical constraints, or unforeseen circumstances. Research is an iterative process and, as new data is collected and analyzed, it may become necessary to adjust or refine the research design. However, any modifications should be made judiciously and with careful consideration of their impact on the study’s integrity and validity. It is advisable to document any changes made to the research design, along with a clear rationale for the modifications, in order to maintain transparency and allow for proper interpretation of the results.

Q: How can I ensure the validity and reliability of my research design?

Validity refers to the accuracy and meaningfulness of your study’s findings, while reliability relates to the consistency and stability of the measurements or observations. To enhance validity, carefully define your research variables, use established measurement scales or protocols, and collect data through appropriate methods. Consider conducting a pilot study to identify and address any potential issues before full implementation. To enhance reliability, use standardized procedures, conduct inter-rater or test-retest reliability checks, and employ appropriate statistical techniques for data analysis. It is also essential to document and report your methodology clearly, allowing for replication and scrutiny by other researchers.
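To make the inter-rater check mentioned above concrete, the hedged sketch below computes Cohen's kappa for two hypothetical raters who coded the same ten interview excerpts; scikit-learn's `cohen_kappa_score` is one readily available implementation, and the labels shown are invented for illustration.

```python
# Inter-rater reliability via Cohen's kappa for two hypothetical raters.
from sklearn.metrics import cohen_kappa_score

rater_a = ["theme1", "theme2", "theme1", "theme3", "theme2",
           "theme1", "theme2", "theme3", "theme1", "theme2"]
rater_b = ["theme1", "theme2", "theme1", "theme2", "theme2",
           "theme1", "theme2", "theme3", "theme1", "theme3"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values closer to 1 indicate stronger agreement
```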




Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions


Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.


Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.


With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include grounded theory and phenomenology. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.


Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
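As a small, hedged illustration of the difference, the sketch below draws a simple random sample from a hypothetical sampling frame of 5,000 student IDs; a convenience (non-probability) sample, by contrast, simply takes whichever cases are easiest to reach.

```python
# Minimal probability-sampling sketch with a hypothetical sampling frame.
import random

population = [f"student_{i}" for i in range(5000)]   # hypothetical sampling frame
random.seed(42)                                       # reproducible draw for the example

simple_random_sample = random.sample(population, k=100)   # every unit has an equal chance of selection
convenience_sample   = population[:100]                    # non-probability: just the first 100 on the list

print(simple_random_sample[:5])
print(convenience_sample[:5])
```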

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.


Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.


Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

  • Media & communication: Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
  • Psychology: Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
  • Education: Using tests or assignments to collect data on knowledge and skills
  • Physical sciences: Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
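As a rough sketch of what operationalisation can look like in practice, the function below scores a fuzzy concept (here, "satisfaction") as the mean of five hypothetical Likert items; the concept, the items, and the 1-to-5 scale are all assumptions made purely for illustration.

```python
# Illustrative operationalisation: a "satisfaction" score defined as the mean
# of five hypothetical Likert items (1 = strongly disagree, 5 = strongly agree).
def satisfaction_score(responses: list[int]) -> float:
    """Return one participant's mean rating across the Likert items."""
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("Each response must be on the 1-5 scale")
    return sum(responses) / len(responses)

print(satisfaction_score([4, 5, 3, 4, 4]))  # -> 4.0
```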

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.


For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
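One internal-consistency check often run on pilot data is Cronbach's alpha. The sketch below computes it by hand for a small hypothetical pilot matrix (rows are participants, columns are questionnaire items); the data are invented, and the commonly cited 0.7 threshold is only a rule of thumb.

```python
# Hand-rolled Cronbach's alpha for a hypothetical pilot dataset
# (rows = participants, columns = questionnaire items).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of participants' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

pilot = np.array([[4, 5, 4],
                  [3, 3, 4],
                  [5, 5, 5],
                  [2, 3, 2],
                  [4, 4, 3]])
print(f"alpha = {cronbach_alpha(pilot):.2f}")  # values around 0.7 or above are often read as acceptable
```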

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.
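As one small, hedged example of safeguarding sensitive data, the sketch below replaces raw identifiers with salted hashes before analysis; the identifiers and the salt are made up, and a real project would also need secure storage of any re-identification key and of the raw files themselves.

```python
# Illustrative pseudonymisation of participant identifiers (hypothetical values).
import hashlib

def pseudonymise(identifier: str, salt: str = "project-specific-salt") -> str:
    """Return a short, non-reversible code standing in for the raw identifier."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:10]

raw_ids = ["alice@example.com", "bob@example.com"]
print([pseudonymise(i) for i in raw_ids])
```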

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
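For a concrete (and entirely hypothetical) example, the standard-library sketch below reports the distribution, central tendency, and variability of a small set of made-up test scores.

```python
# Descriptive statistics for a hypothetical set of test scores,
# using only the Python standard library.
import statistics
from collections import Counter

scores = [72, 85, 90, 65, 85, 78, 92, 85, 70, 88]

print("distribution:", Counter(scores))                            # frequency of each score
print("mean:", statistics.mean(scores))                            # central tendency
print("standard deviation:", round(statistics.stdev(scores), 2))   # variability
```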

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
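As a minimal illustration of a comparison test, the sketch below runs an independent-samples t test on two hypothetical groups of outcome scores; which test is appropriate in a real study depends on the design considerations listed above.

```python
# Independent-samples t test on hypothetical outcome scores for two groups.
from scipy.stats import ttest_ind

group_a = [23, 27, 31, 22, 26, 30, 28]
group_b = [19, 24, 21, 18, 22, 20, 23]

t_stat, p_value = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```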

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .


There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.
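Qualitative analysis is interpretive rather than computational, but simple tooling can support it. The sketch below, using invented codes and excerpts, shows one small housekeeping step in thematic analysis: tallying how often each code was applied so that candidate themes can be reviewed.

```python
# Tallying hypothetical codes applied to interview excerpts during thematic analysis.
from collections import Counter

coded_excerpts = [
    {"excerpt_id": 1, "codes": ["workload", "support"]},
    {"excerpt_id": 2, "codes": ["workload"]},
    {"excerpt_id": 3, "codes": ["support", "autonomy"]},
    {"excerpt_id": 4, "codes": ["workload", "autonomy"]},
]

code_counts = Counter(code for excerpt in coded_excerpts for code in excerpt["codes"])
print(code_counts.most_common())  # the most frequently applied codes are candidates for further theme development
```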

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


What Is a Research Design? | Definition, Types & Guide


  • Introduction
  • Parts of a research design
  • Types of research methodology in qualitative research
  • Narrative research designs
  • Phenomenological research designs
  • Grounded theory research designs
  • Ethnographic research designs
  • Case study research design
  • Important reminders when designing a research study

A research design in qualitative research is a critical framework that guides the methodological approach to studying complex social phenomena. Qualitative research designs determine how data is collected, analyzed, and interpreted, ensuring that the research captures participants' nuanced and subjective perspectives. Research designs also recognize ethical considerations and involve informed consent, ensuring confidentiality, and handling sensitive topics with the utmost respect and care. These considerations are crucial in qualitative research and other contexts where participants may share personal or sensitive information. A research design should convey coherence as it is essential for producing high-quality qualitative research, often following a recursive and evolving process.


Theoretical concepts and research question

The first step in creating a research design is identifying the main theoretical concepts. To identify these concepts, a researcher should ask which theoretical keywords are implicit in the investigation. The next step is to develop a research question using these theoretical concepts. This can be done by identifying the relationship of interest among the concepts that catch the focus of the investigation. The question should address aspects of the topic that need more knowledge, shed light on new information, and specify which aspects should be prioritized before others. This step is essential in identifying which participants to include or which data collection methods to use. Research questions also put into practice the conceptual framework and make the initial theoretical concepts more explicit. Once the research question has been established, the main objectives of the research can be specified. For example, these objectives may involve identifying shared experiences around a phenomenon or evaluating perceptions of a new treatment.

Methodology

After identifying the theoretical concepts, research question, and objectives, the next step is to determine the methodology that will be implemented. This is the lifeline of a research design and should be coherent with the objectives and questions of the study. The methodology will determine how data is collected, analyzed, and presented. Popular qualitative research methodologies include case studies, ethnography , grounded theory , phenomenology, and narrative research . Each methodology is tailored to specific research questions and facilitates the collection of rich, detailed data. For example, a narrative approach may focus on only one individual and their story, while phenomenology seeks to understand participants' lived common experiences. Qualitative research designs differ significantly from quantitative research, which often involves experimental research, correlational designs, or variance analysis to test hypotheses about relationships between two variables, a dependent variable and an independent variable while controlling for confounding variables.


Literature review

After the methodology is identified, conducting a thorough literature review is integral to the research design. This review identifies gaps in knowledge, positioning the new study within the larger academic dialogue and underlining its contribution and relevance. Meta-analysis, a form of secondary research, can be particularly useful in synthesizing findings from multiple studies to provide a clear picture of the research landscape.

Data collection

The sampling method in qualitative research is designed to delve deeply into specific phenomena rather than to generalize findings across a broader population. The data collection methods—whether interviews, focus groups, observations, or document analysis—should align with the chosen methodology, ethical considerations, and other factors such as sample size. In some cases, repeated measures may be collected to observe changes over time.

Data analysis

Analysis in qualitative research typically involves methods such as coding and thematic analysis to distill patterns from the collected data. This process delineates how the research results will be systematically derived from the data. It is recommended that the researcher ensures that the final interpretations are coherent with the observations and analyses, making clear connections between the data and the conclusions drawn. Reporting should be narrative-rich, offering a comprehensive view of the context and findings.

Overall, a coherent qualitative research design that incorporates these elements facilitates a study that not only adds theoretical and practical value to the field but also adheres to high quality. This methodological thoroughness is essential for achieving significant, insightful findings. Examples of well-executed research designs can be valuable references for other researchers conducting qualitative or quantitative investigations. An effective research design is critical for producing robust and impactful research outcomes.

Each qualitative research design is unique, diverse, and meticulously tailored to answer specific research questions, meet distinct objectives, and explore the unique nature of the phenomenon under investigation. The methodology is the wider framework that a research design follows. Each methodology in a research design consists of methods, tools, or techniques that compile data and analyze it following a specific approach.

The methods enable researchers to collect data effectively across individuals, different groups, or observations, ensuring they are aligned with the research design. The following list includes the most commonly used methodologies employed in qualitative research designs, highlighting how they serve different purposes and utilize distinct methods to gather and analyze data.


The narrative approach in research focuses on the collection and detailed examination of life stories, personal experiences, or narratives to gain insights into individuals' lives as told from their perspectives. It involves constructing a cohesive story out of the diverse experiences shared by participants, often using chronological accounts. It seeks to understand human experience and social phenomena through the form and content of the stories. These can include spontaneous narrations such as memoirs or diaries from participants or diaries solicited by the researcher. Narration helps construct the identity of an individual or a group and can rationalize, persuade, argue, entertain, confront, or make sense of an event or tragedy. To conduct a narrative investigation, it is recommended that researchers follow these steps:

Identify if the research question fits the narrative approach. Its methods are best employed when a researcher wants to learn about the lifestyle and life experience of a single participant or a small number of individuals.

Select the best-suited participants for the research design and spend time compiling their stories using different methods such as observations, diaries, interviewing their family members, or compiling related secondary sources.

Compile the information related to the stories. Narrative researchers collect data based on participants' stories concerning their personal experiences, for example about their workplace or homes, their racial or ethnic culture, and the historical context in which the stories occur.

Analyze the participant stories and "restory" them within a coherent framework. This involves collecting the stories, analyzing them based on key elements such as time, place, plot, and scene, and then rewriting them in a chronological sequence (Ollerenshaw & Creswell, 2000). The framework may also include elements such as a predicament, conflict, or struggle; a protagonist; and a sequence with implicit causality, where the predicament is somehow resolved (Carter, 1993).

Collaborate with participants by actively involving them in the research. Both the researcher and the participant negotiate the meaning of their stories, adding a credibility check to the analysis (Creswell & Miller, 2000).

A narrative investigation includes collecting a large amount of data from the participants and the researcher needs to understand the context of the individual's life. A keen eye is needed to collect particular stories that capture the individual experiences. Active collaboration with the participant is necessary, and researchers need to discuss and reflect on their own beliefs and backgrounds. Multiple questions could arise in the collection, analysis, and storytelling of individual stories that need to be addressed, such as: Whose story is it? Who can tell it? Who can change it? Which version is compelling? What happens when narratives compete? In a community, what do the stories do among them? (Pinnegar & Daynes, 2006).


A research design based on phenomenology aims to understand the essence of the lived experiences of a group of people regarding a particular concept or phenomenon. Researchers gather deep insights from individuals who have experienced the phenomenon, striving to describe "what" they experienced and "how" they experienced it. This approach to a research design typically involves detailed interviews and aims to reach a deep existential understanding. The purpose is to reduce individual experiences to a description of the universal essence or understanding the phenomenon's nature (van Manen, 1990). In phenomenology, the following steps are usually followed:

Identify a phenomenon of interest . For example, the phenomenon might be anger, professionalism in the workplace, or what it means to be a fighter.

Recognize and specify the philosophical assumptions of phenomenology , for example, one could reflect on the nature of objective reality and individual experiences.

Collect data from individuals who have experienced the phenomenon . This typically involves conducting in-depth interviews, including multiple sessions with each participant. Additionally, other forms of data may be collected using several methods, such as observations, diaries, art, poetry, music, recorded conversations, written responses, or other secondary sources.

Ask participants two general questions that encompass the phenomenon and how the participant experienced it (Moustakas, 1994). For example, what have you experienced in this phenomenon? And what contexts or situations have typically influenced your experiences within the phenomenon? Other open-ended questions may also be asked, but these two questions particularly focus on collecting research data that will lead to a textural description and a structural description of the experiences, and ultimately provide an understanding of the common experiences of the participants.

Review data from the questions posed to participants . It is recommended that researchers review the answers and highlight "significant statements," phrases, or quotes that explain how participants experienced the phenomenon. The researcher can then develop meaningful clusters from these significant statements into patterns or key elements shared across participants.

Write a textual description of what the participants experienced based on the answers and themes of the two main questions. The answers are also used to write about the characteristics and describe the context that influenced the way the participants experienced the phenomenon, called imaginative variation or structural description. Researchers should also write about their own experiences and context or situations that influenced them.

Write a composite description from the structural and textural description that presents the "essence" of the phenomenon, called the essential and invariant structure.

A phenomenological approach to a research design includes the strict and careful selection of participants in the study where bracketing personal experiences can be difficult to implement. The researcher decides how and in which way their knowledge will be introduced. It also involves some understanding and identification of the broader philosophical assumptions.


Grounded theory is used in a research design when the goal is to inductively develop a theory "grounded" in data that has been systematically gathered and analyzed. Starting from the data collection, researchers identify characteristics, patterns, themes, and relationships, gradually forming a theoretical framework that explains relevant processes, actions, or interactions grounded in the observed reality. A grounded theory study goes beyond descriptions and its objective is to generate a theory, an abstract analytical scheme of a process. Developing a theory doesn't come "out of nothing" but it is constructed and based on clear data collection. We suggest the following steps to follow a grounded theory approach in a research design:

Determine if grounded theory is the best for your research problem . Grounded theory is a good design when a theory is not already available to explain a process.

Develop questions that aim to understand how individuals experienced or enacted the process (e.g., What was the process? How did it unfold?). Data collection and analysis occur in tandem, so that researchers can ask more detailed questions that shape further analysis, such as: What was the focal point of the process (central phenomenon)? What influenced or caused this phenomenon to occur (causal conditions)? What strategies were employed during the process? What effect did it have (consequences)?

Gather relevant data about the topic in question . Data gathering involves questions that are usually asked in interviews, although other forms of data can also be collected, such as observations, documents, and audio-visual materials from different groups.

Carry out the analysis in stages . Grounded theory analysis begins with open coding, where the researcher forms codes that inductively emerge from the data (rather than preconceived categories). Researchers can thus identify specific properties and dimensions relevant to their research question.

Assemble the data in new ways and proceed to axial coding . Axial coding involves using a coding paradigm or logic diagram, such as a visual model, to systematically analyze the data. Begin by identifying a central phenomenon, which is the main category or focus of the research problem. Next, explore the causal conditions, which are the categories of factors that influence the phenomenon. Specify the strategies, which are the actions or interactions associated with the phenomenon. Then, identify the context and intervening conditions—both narrow and broad factors that affect the strategies. Finally, delineate the consequences, which are the outcomes or results of employing the strategies.

Use selective coding to construct a "storyline" that links the categories together. Alternatively, the researcher may formulate propositions or theory-driven questions that specify predicted relationships among these categories.

Develop and visually present a matrix that clarifies the social, historical, and economic conditions influencing the central phenomenon. This optional step encourages viewing the model from the narrowest to the broadest perspective.

Write a substantive-level theory that is closely related to a specific problem or population. This step is optional but provides a focused theoretical framework that can later be tested with quantitative data to explore its generalizability to a broader sample.

Allow theory to emerge through the memo-writing process, where ideas about the theory evolve continuously throughout the stages of open, axial, and selective coding.

The researcher should initially set aside any preconceived theoretical ideas to allow for the emergence of analytical and substantive theories. This is a systematic research approach, particularly when following the methodological steps outlined by Strauss and Corbin (1990). For those seeking more flexibility in their research process, the approach suggested by Charmaz (2006) might be preferable.

One of the challenges when using this method in a research design is determining when categories are sufficiently saturated and when the theory is detailed enough. To achieve saturation, discriminant sampling may be employed, where additional information is gathered from individuals similar to those initially interviewed to verify the applicability of the theory to these new participants. Ultimately, its goal is to develop a theory that comprehensively describes the central phenomenon, causal conditions, strategies, context, and consequences.


Ethnographic research design

An ethnographic approach in research design involves the extended observation and data collection of a group or community. The researcher immerses themselves in the setting, often living within the community for long periods. During this time, they collect data by observing and recording behaviours, conversations, and rituals to understand the group's social dynamics and cultural norms. We suggest following these steps for ethnographic methods in a research design:

Assess whether ethnography is the best approach for the research design and questions. It's suitable if the goal is to describe how a cultural group functions and to delve into their beliefs, language, behaviours, and issues like power, resistance, and domination, particularly if there is limited literature due to the group’s marginal status or unfamiliarity to mainstream society.

Identify and select a cultural group for your research design. Choose one that has a long history together, forming distinct languages, behaviours, and attitudes. This group often might be marginalized within society.

Choose cultural themes or issues to examine within the group. Analyze interactions in everyday settings to identify pervasive patterns such as life cycles, events, and overarching cultural themes. Culture is inferred from the group members' words, actions, and the tension between their actual and expected behaviours, as well as the artifacts they use.

Conduct fieldwork to gather detailed information about the group’s living and working environments. Visit the site, respect the daily lives of the members, and collect a diverse range of materials, considering ethical aspects such as respect and reciprocity.

Compile and analyze cultural data to develop a set of descriptive and thematic insights. Begin with a detailed description of the group based on observations of specific events or activities over time. Then, conduct a thematic analysis to identify patterns or themes that illustrate how the group functions and lives. The final output should be a comprehensive cultural portrait that integrates both the participants (emic) and the researcher’s (etic) perspectives, potentially advocating for the group’s needs or suggesting societal changes to better accommodate them.

Researchers engaging in ethnography need a solid understanding of cultural anthropology and the dynamics of sociocultural systems, which are commonly explored in ethnographic research. The data collection phase is notably extensive, requiring prolonged periods in the field. Ethnographers often employ a literary, quasi-narrative style in their narratives, which can pose challenges for those accustomed to more conventional social science writing methods.

Another potential issue is the risk of researchers "going native," where they become overly assimilated into the community under study, potentially jeopardizing the objectivity and completion of their research. It's crucial for researchers to be aware of their impact on the communities and environments they are studying.

The case study approach in a research design focuses on a detailed examination of a single case or a small number of cases. Cases can be individuals, groups, organizations, or events. Case studies are particularly useful for research designs that aim to understand complex issues in real-life contexts. The aim is to provide a thorough description and contextual analysis of the cases under investigation. We suggest following these steps in a case study design:

Assess if a case study approach suits your research questions . This approach works well when you have distinct cases with defined boundaries and aim to deeply understand these cases or compare multiple cases.

Choose your case or cases. These could involve individuals, groups, programs, events, or activities. Decide whether an individual or collective, multi-site or single-site case study is most appropriate, focusing on specific cases or themes (Stake, 1995; Yin, 2003).

Gather data extensively from diverse sources . Collect information through archival records, interviews, direct and participant observations, and physical artifacts (Yin, 2003).

Analyze the data holistically or in focused segments . Provide a comprehensive overview of the entire case or concentrate on specific aspects. Start with a detailed description including the history of the case and its chronological events then narrow down to key themes. The aim is to delve into the case's complexity rather than generalize findings.

Interpret and report the significance of the case in the final phase . Explain what insights were gained, whether about the subject of the case in an instrumental study or an unusual situation in an intrinsic study (Lincoln & Guba, 1985).

The investigator must carefully select the case or cases to study, recognizing that multiple potential cases could illustrate a chosen topic or issue. This selection process involves deciding whether to focus on a single case for deeper analysis or multiple cases, which may provide broader insights but less depth per case. Each choice requires a well-justified rationale for the selected cases. Researchers face the challenge of defining the boundaries of a case, such as its temporal scope and the events and processes involved. This decision in a research design is crucial as it affects the depth and value of the information presented in the study, and therefore should be planned to ensure a comprehensive portrayal of the case.


Qualitative and quantitative research designs are distinct in their approach to data collection and data analysis. Unlike quantitative research, which focuses on numerical data and statistical analysis, qualitative research prioritizes understanding the depth and richness of human experiences, behaviours, and interactions.

Qualitative methods in a research design must have internal coherence, meaning that all elements of the research project—research question, data collection, data analysis, findings, and theory—are well-aligned and consistent with each other. This coherence is especially crucial in inductive qualitative research, where the research process often follows a recursive and evolving path. Ensuring that each component of the research design fits seamlessly with the others enhances the clarity and impact of the study, making the research findings more robust and compelling. Whether it is a descriptive, explanatory, diagnostic, or correlational research design, coherence is an important element in both qualitative and quantitative research.

Finally, a good research design ensures that the research is conducted ethically and considers the well-being and rights of participants when managing collected data. The research design guides researchers in providing a clear rationale for their methodologies, which is crucial for justifying the research objectives to the scientific community. A thorough research design also contributes to the body of knowledge, enabling researchers to build upon past research studies and explore new dimensions within their fields. At the core of the design, there is a clear articulation of the research objectives. These objectives should be aligned with the underlying concepts being investigated, offering a concise method to answer the research questions and guiding the direction of the study with proper qualitative methods.

Carter, K. (1993). The place of a story in the study of teaching and teacher education. Educational Researcher, 22(1), 5-12, 18.

Charmaz, K. (2006). Constructing grounded theory. London: Sage.

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 124-130.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.

Ollerenshaw, J. A., & Creswell, J. W. (2000, April). Data analysis in narrative research: A comparison of two "restorying" approaches. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

van Manen, M. (1990). Researching lived experience: Human science for an action sensitive pedagogy. Ontario, Canada: University of Western Ontario.

Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage.

Perspect Clin Res. 2018 Oct-Dec; 9(4).

Study designs: Part 1 – An overview and classification

Priya Ranganathan

Department of Anaesthesiology, Tata Memorial Centre, Mumbai, Maharashtra, India

Rakesh Aggarwal

Department of Gastroenterology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, Uttar Pradesh, India

There are several types of research study designs, each with its inherent strengths and flaws. The study design used to answer a particular research question depends on the nature of the question and the availability of resources. In this article, which is the first part of a series on “study designs,” we provide an overview of research study designs and their classification. The subsequent articles will focus on individual designs.

INTRODUCTION

Research study design is a framework, or the set of methods and procedures used to collect and analyze data on variables specified in a particular research problem.

Research study designs are of many types, each with its advantages and limitations. The type of study design used to answer a particular research question is determined by the nature of question, the goal of research, and the availability of resources. Since the design of a study can affect the validity of its results, it is important to understand the different types of study designs and their strengths and limitations.

There are some terms that are used frequently while classifying study designs which are described in the following sections.

Variables

A variable represents a measurable attribute that varies across study units, for example, individual participants in a study, or at times even when measured in an individual person over time. Some examples of variables include age, sex, weight, height, health status, alive/dead, diseased/healthy, annual income, smoking yes/no, and treated/untreated.

Exposure (or intervention) and outcome variables

A large proportion of research studies assess the relationship between two variables. Here, the question is whether one variable is associated with or responsible for change in the value of the other variable. Exposure (or intervention) refers to the risk factor whose effect is being studied. It is also referred to as the independent or the predictor variable. The outcome (or predicted or dependent) variable develops as a consequence of the exposure (or intervention). Typically, the term “exposure” is used when the “causative” variable is naturally determined (as in observational studies – examples include age, sex, smoking, and educational status), and the term “intervention” is preferred where the researcher assigns some or all participants to receive a particular treatment for the purpose of the study (experimental studies – e.g., administration of a drug). If a drug had been started in some individuals but not in the others, before the study started, this counts as exposure, and not as intervention – since the drug was not started specifically for the study.

Observational versus interventional (or experimental) studies

Observational studies are those where the researcher is documenting a naturally occurring relationship between the exposure and the outcome that he/she is studying. The researcher does not do any active intervention in any individual, and the exposure has already been decided naturally or by some other factor. For example, looking at the incidence of lung cancer in smokers versus nonsmokers, or comparing the antenatal dietary habits of mothers of normal-birth-weight and low-birth-weight babies. In these studies, the investigator did not play any role in determining the smoking or dietary habit in individuals.

For an exposure to determine the outcome, it must precede the latter. Any variable that occurs simultaneously with or following the outcome cannot be causative, and hence is not considered as an “exposure.”

Observational studies can be either descriptive (nonanalytical) or analytical (inferential) – this is discussed later in this article.

Interventional studies are experiments where the researcher actively performs an intervention in some or all members of a group of participants. This intervention could take many forms – for example, administration of a drug or vaccine, performance of a diagnostic or therapeutic procedure, and introduction of an educational tool. For example, a study could randomly assign persons to receive aspirin or placebo for a specific duration and assess the effect on the risk of developing cerebrovascular events.

Descriptive versus analytical studies

Descriptive (or nonanalytical) studies, as the name suggests, merely try to describe the data on one or more characteristics of a group of individuals. These do not try to answer questions or establish relationships between variables. Examples include case reports, case series, and cross-sectional surveys (note that cross-sectional surveys may be analytical studies as well – this will be discussed in the next article in this series). For instance, a survey of dietary habits among pregnant women or a case series of patients with an unusual reaction to a drug would both be descriptive studies.

Analytical studies attempt to test a hypothesis and establish causal relationships between variables. In these studies, the researcher assesses the effect of an exposure (or intervention) on an outcome. As described earlier, analytical studies can be observational (if the exposure is naturally determined) or interventional (if the researcher actively administers the intervention).

Directionality of study designs

Based on the direction of inquiry, study designs may be classified as forward-direction or backward-direction. In forward-direction studies, the researcher starts with determining the exposure to a risk factor and then assesses whether the outcome occurs at a future time point. This design is known as a cohort study. For example, a researcher can follow a group of smokers and a group of nonsmokers to determine the incidence of lung cancer in each. In backward-direction studies, the researcher begins by determining whether the outcome is present (cases vs. noncases [also called controls]) and then traces the presence of prior exposure to a risk factor. These are known as case–control studies. For example, a researcher identifies a group of normal-weight babies and a group of low-birth weight babies and then asks the mothers about their dietary habits during the index pregnancy.
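To make the two directions of inquiry concrete, here is a minimal Python sketch (not from the article; all counts are invented) showing the measure each design naturally yields: a forward-direction cohort study compares incidence between exposed and unexposed groups (relative risk), while a backward-direction case–control study compares the odds of prior exposure between cases and controls (odds ratio).

```python
# Illustrative only: invented counts, not data from the article.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Forward direction (cohort): compare incidence in exposed vs. unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Backward direction (case-control): compare odds of prior exposure in cases vs. controls."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Cohort example: follow 1,000 smokers and 1,000 nonsmokers and count lung cancers.
print(relative_risk(exposed_cases=30, exposed_total=1000,
                    unexposed_cases=3, unexposed_total=1000))   # 10.0

# Case-control example: 100 low-birth-weight babies (cases) and 100 controls,
# then ask mothers about a dietary exposure during the index pregnancy.
print(odds_ratio(cases_exposed=60, cases_unexposed=40,
                 controls_exposed=30, controls_unexposed=70))   # 3.5
```

Note that a case–control study cannot estimate incidence directly, because the investigator fixes the numbers of cases and controls rather than following a population forward in time.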

Prospective versus retrospective study designs

The terms “prospective” and “retrospective” refer to the timing of the research in relation to the development of the outcome. In retrospective studies, the outcome of interest has already occurred (or not occurred – e.g., in controls) in each individual by the time s/he is enrolled, and the data are collected either from records or by asking participants to recall exposures. There is no follow-up of participants. By contrast, in prospective studies, the outcome (and sometimes even the exposure or intervention) has not occurred when the study starts and participants are followed up over a period of time to determine the occurrence of outcomes. Typically, most cohort studies are prospective studies (though there may be retrospective cohorts), whereas case–control studies are retrospective studies. An interventional study has to be, by definition, a prospective study since the investigator determines the exposure for each study participant and then follows them to observe outcomes.

The terms “prospective” versus “retrospective” studies can be confusing. Let us think of an investigator who starts a case–control study. To him/her, the process of enrolling cases and controls over a period of several months appears prospective. Hence, the use of these terms is best avoided. Or, at the very least, one must be clear that the terms relate to work flow for each individual study participant, and not to the study as a whole.

Classification of study designs

Figure 1 depicts a simple classification of research study designs. The Centre for Evidence-based Medicine has put forward a useful three-point algorithm which can help determine the design of a research study from its methods section:[ 1 ]

[Figure 1: Classification of research study designs]

  • Does the study describe the characteristics of a sample or does it attempt to analyze (or draw inferences about) the relationship between two variables? – If no, then it is a descriptive study, and if yes, it is an analytical (inferential) study
  • If analytical, did the investigator determine the exposure? – If no, it is an observational study, and if yes, it is an experimental study
  • If observational, when was the outcome determined? – at the start of the study (case–control study), at the end of a period of follow-up (cohort study), or simultaneously (cross-sectional study); see the sketch below.
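As a rough illustration, the three questions above can be encoded as a small Python function. This is only a sketch of the classification logic described in the text; the argument names and labels paraphrase the article and are not part of the CEBM tool itself.

```python
# A sketch of the three-point classification described above; wording is paraphrased.

def classify_study(analyzes_relationship: bool,
                   investigator_determined_exposure: bool = False,
                   outcome_determined: str = "simultaneously") -> str:
    """Classify a study design from three questions about its methods section."""
    if not analyzes_relationship:
        return "descriptive study"
    if investigator_determined_exposure:
        return "experimental (interventional) study"
    # Observational analytical study: timing of outcome determination decides the design.
    timing_to_design = {
        "at the start": "case-control study",
        "after follow-up": "cohort study",
        "simultaneously": "cross-sectional study",
    }
    return timing_to_design[outcome_determined]

# Example: an analytical study where the exposure was not assigned by the investigator
# and the outcome was determined at the end of a follow-up period.
print(classify_study(analyzes_relationship=True,
                     investigator_determined_exposure=False,
                     outcome_determined="after follow-up"))   # cohort study
```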

In the next few pieces in the series, we will discuss various study designs in greater detail.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

What is Research Design? Characteristics, Types, Process, & Examples


Ever felt like a hamster on a research wheel, spinning fast with a million questions but getting nowhere? You've got your topic and you're brimming with curiosity, but what next? This guide to research design will help you navigate your project like a pro, uncovering answers and avoiding dead ends. You'll learn what research design means, the features of a good research design, its elements, and more.

What is Research Design?

Research design is the structure of research methods and techniques selected to conduct a study. It matches the methods to the subject and sets the study up for success. Defining a research topic clarifies the type of research (experimental, survey, correlational, semi-experimental, review) and its sub-type (for example, a specific experimental design or a descriptive case study).

A research design covers three main areas:

1. Data Collection

2. Measurement

3. Data Analysis

Elements of Research Design 

Now that you know what research design is, it is important to know its elements and components. Impactful research minimises bias and enhances data accuracy. Designs with minimal error margins are ideal. Key elements include:

1. Accurate purpose statement

2. Techniques for data collection and analysis

3. Methods for data analysis

4. Type of research methodology

5. Probable objections to research

6. Research settings

7. Timeline

8. Measurement of analysis


Characteristics of Research Design

Research design has several key characteristics that contribute to the validity, reliability, and overall success of a research study. To fully understand what research design is, it is important to know these characteristics:

1. Reliability

A reliable research design ensures that each study’s results are accurate and can be replicated. This means that if the research is conducted again under the same conditions, it should yield similar results.

2. Validity

A valid research design uses appropriate measuring tools to gauge the results according to the research objective. This ensures that the data collected and the conclusions drawn are relevant and accurately reflect the phenomenon being studied.

3. Neutrality

A neutral research design ensures that the assumptions made at the beginning of the research are free from bias. This means that the data collected throughout the research is based on these unbiased assumptions.

4. Generalizability

A good research design draws an outcome that can be applied to a large set of people and is not limited to the sample size or the research group.

Research Design Process

What is research design? A good research design helps you conduct a study that produces fair, trustworthy, and useful results, while leaving a bit of wiggle room for changes along the way. Here's a step-by-step breakdown, with examples, to make the process work even better.

1. Consider Aims and Approaches

Define the research questions and objectives, and establish the theoretical framework and methodology.

2. Choose a Type of Research Design

Select the suitable research design, such as experimental, correlational, survey, case study, or ethnographic, according to the research questions and objectives.

3. Identify Population and Sampling Method

Determine the target population and sample size, and select the sampling method, like random, stratified random sampling, or convenience sampling.

4. Choose Data Collection Methods

Decide on the data collection methods, such as surveys, interviews, observations, or experiments, and choose the appropriate instruments for data collection.

5. Plan Data Collection Procedures

Create a plan for data collection, detailing the timeframe, location, and personnel involved, while ensuring ethical considerations are met.

6. Decide on Data Analysis Strategies

Select the appropriate data analysis techniques, like statistical analysis, content analysis, or discourse analysis, and plan the interpretation of the results.

What are the Types of Research Design?

A researcher must grasp the various types to decide which model to use for a study. Research designs can be broadly classified into quantitative and qualitative.

Qualitative Research

Qualitative research explores concepts, experiences, and perspectives through non-numerical data such as interviews, observations, and open-ended responses. Rather than relying on statistical calculations, this research method answers "why" a phenomenon occurs and examines respondents' perspectives in depth.

Quantitative Research

Quantitative research is essential when statistical conclusions are needed to gather actionable insights. Numbers provide clarity for critical decisions, and insights drawn from complex numerical data can guide an organization's future choices.

Qualitative Research vs Quantitative Research

While researching, it is important to know the difference between qualitative and quantitative research. Here's a quick comparison of the two:


  • Data type: Qualitative research uses non-numerical data such as words, images, and sounds; quantitative research uses numerical data that can be measured and expressed in numerical terms.
  • Purpose: Qualitative research aims to understand concepts, thoughts, or experiences; quantitative research aims to test hypotheses, identify patterns, and make predictions.
  • Data collection: Common qualitative methods include interviews with open-ended questions, observations described in words, and literature reviews; common quantitative methods include surveys with closed-ended questions, experiments, and observations recorded as numbers.
  • Data analysis: Qualitative data is analyzed using grounded theory or thematic analysis; quantitative data is analyzed using statistical methods.
  • Outcome: Qualitative research produces rich, detailed descriptions of the phenomenon being studied and uncovers new insights and meanings; quantitative research produces objective, empirical data that can be measured.

The research types can be further divided into 5 categories:

1. Descriptive Research

Descriptive research design focuses on detailing a situation or case. It's a theory-driven method that involves gathering, analysing, and presenting data. This approach offers insights into the reasons and mechanisms behind a research subject, enhancing understanding of the research's importance. When the problem statement is unclear, exploratory research can be conducted.

2. Experimental Research

Experimental research design investigates cause-and-effect relationships. It’s a causal design where the impact of an independent variable on a dependent variable is observed. For example, the effect of price on customer satisfaction. This method efficiently addresses problems by manipulating independent variables to see their effect on dependent variables. Often used in social sciences, it involves analysing human behaviour by studying changes in one group's actions and their impact on another group.

3. Correlational Research

Correlational research design is a non-experimental technique that identifies relationships between closely linked variables. It uses statistical analysis to estimate these relationships without the researcher manipulating either variable, and it requires at least two variables measured on the same group. A correlation coefficient between -1 and +1 indicates the strength and direction of the relationship, with values near +1 showing a strong positive correlation and values near -1 a strong negative correlation.
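As a brief illustration, the following Python sketch computes a correlation coefficient for a small set of invented values (hours of exercise and a stress score); the variables and numbers are hypothetical.

```python
# A minimal sketch with invented data, showing how a correlation coefficient is read.
import numpy as np

hours_of_exercise = np.array([0, 1, 2, 3, 4, 5, 6])
stress_score      = np.array([9, 8, 8, 6, 5, 4, 3])

r = np.corrcoef(hours_of_exercise, stress_score)[0, 1]
print(round(r, 2))  # close to -1: a strong negative correlation

# Values near +1 indicate a strong positive relationship, values near -1 a strong
# negative one, and values near 0 little or no linear relationship. Correlation
# alone does not establish causation.
```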

4. Diagnostic Research

Diagnostic research design aims to identify the underlying causes of specific issues. This method delves into factors creating problematic situations and has three phases: 

  • Issue inception
  • Issue diagnosis
  • Issue resolution

5. Explanatory Research

Explanatory research design builds on a researcher’s ideas to explore theories further. It seeks to explain the unexplored aspects of a subject, addressing the what, how, and why of research questions.

Benefits of Research Design

After learning what research design is and how the process works, it is important to know the key benefits of a well-structured research design:

1. Minimises Risk of Errors: A good research design minimises the risk of errors and reduces inaccuracy. It ensures that the study is carried out in the right direction and that all the team members are on the same page.

2. Efficient Use of Resources: It facilitates a concrete research plan for the efficient use of time and resources. It helps the researcher better complete all the tasks, even with limited resources.

3. Provides Direction: The purpose of the research design is to enable the researcher to proceed in the right direction without deviating from the tasks. It helps to identify the major and minor tasks of the study.

4. Ensures Validity and Reliability: A well-designed study enhances the validity and reliability of the findings and allows other researchers to replicate it. The main advantage of a good research design is that it provides accuracy, reliability, consistency, and legitimacy to the research.

5. Facilitates Problem-Solving: A researcher can easily frame the objectives of the research work based on the design of experiments (research design). A good research design helps the researcher find the best solution for the research problems.

6. Better Documentation: It helps in better documentation of the various activities while the project work is going on.

That's it! You've explored the answers to "what is research design?" Remember, it's not just about picking a fancy method; it's about choosing the right tool to answer your questions. By carefully considering your goals and resources, you can design a research plan that gathers reliable information and helps you reach clear conclusions.

Frequently Asked Questions

  • What are the key components of a research design?
  • How can I choose the best research design for my study?
  • What are some common pitfalls in research design, and how can they be avoided?
  • How does research design impact the validity and reliability of a study?
  • What ethical considerations should be taken into account in research design?


Basic Research Design

What is Research Design?

  • Definition of Research Design : A procedure for generating answers to questions, crucial in determining the reliability and relevance of research outcomes.
  • Importance of Strong Designs : Strong designs lead to answers that are accurate and close to their targets, while weak designs may result in misleading or irrelevant outcomes.
  • Criteria for Assessing Design Strength : Evaluating a design’s strength involves understanding the research question and how the design will yield reliable empirical information.

The Four Elements of Research Design (Blair et al., 2023)

[Diagram: the MIDA framework – Model (M), Inquiry (I), Data strategy (D), and Answer strategy (A)]

  • The MIDA Framework : Research designs consist of four interconnected elements – Model (M), Inquiry (I), Data strategy (D), and Answer strategy (A), collectively referred to as MIDA.
  • Theoretical Side (M and I): This encompasses the researcher’s beliefs about the world (Model) and the target of inference or the primary question to be answered (Inquiry).
  • Empirical Side (D and A): This includes the strategies for collecting (Data strategy) and analyzing or summarizing information (Answer strategy).
  • Interplay between Theoretical and Empirical Sides : The theoretical side sets the research challenges, while the empirical side represents the researcher’s responses to these challenges.
  • Relation among MIDA Components: The diagram above shows how the four elements of a design are interconnected and how they relate to both real-world and simulated quantities.
  • Parallelism in Design Representation: The illustration highlights two key parallelisms in research design – between actual and simulated processes, and between the theoretical (M, I) and empirical (D, A) sides.
  • Importance of Simulated Processes: The parallelism between actual and simulated processes is crucial for understanding and evaluating research designs.
  • Balancing Theoretical and Empirical Aspects : Effective research design requires a balance between theoretical considerations (models and inquiries) and empirical methodologies (data and answer strategies).

Research Design Principles (Blair et al., 2023)

  • Integration of Components: Designs are effective not merely due to their individual components but how these components work together.
  • Focus on Entire Design: Assessing a design requires examining how each part, such as the question, estimator, and sampling method, fits into the overall design.
  • Importance of Diagnosis: The evaluation of a design’s strength lies in diagnosing the whole design, not just its parts.
  • Strong Design Characteristics: Designs with parallel theoretical and empirical aspects tend to be stronger.
  • The M:I:D:A Analogy: Effective designs often align data strategies with models and answer strategies with inquiries.
  • Flexibility in Models: Good designs should perform well even under varying world scenarios, not just under expected conditions.
  • Broadening Model Scope: Designers should consider a wide range of models, assessing the design’s effectiveness across these.
  • Robustness of Inquiries and Strategies: Inquiries should yield answers and strategies should be applicable regardless of variations in real-world events.
  • Diagnosis Across Models: It’s important to understand for which models a design excels and for which it falters.
  • Specificity of Purpose: A design is deemed good when it aligns with a specific purpose or goal.
  • Balancing Multiple Criteria: Designs should balance scientific precision, logistical constraints, policy goals, and ethical considerations.
  • Diverse Goals and Assessments: Different designs may be optimal for different goals; the purpose dictates the design evaluation.
  • Early Planning Benefits: Designing early allows for learning and improving design properties before data collection.
  • Avoiding Post-Hoc Regrets: Early design helps avoid regrets related to data collection or question formulation.
  • Iterative Improvement: The process of declaration, diagnosis, and redesign improves designs, ideally done before data collection.
  • Adaptability to Changes: Designs should be flexible to adapt to unforeseen circumstances or new information.
  • Expanding or Contracting Feasibility: The scope of feasible designs may change due to various practical factors.
  • Continual Redesign: The principle advocates for ongoing design modification, even post research completion, for robustness and response to criticism.
  • Improvement Through Sharing: Sharing designs via a formalized declaration makes it easier for others to understand and critique.
  • Enhancing Scientific Communication: Well-documented designs facilitate better communication and justification of research decisions.
  • Building a Design Library: The idea is to contribute designs to a shared library, allowing others to learn from and build upon existing work.

The Basics of Social Science Research Designs (Panke, 2018)

Deductive and Inductive Research

Inductive research:

  • Starting Point: Begins with empirical observations or exploratory studies.
  • Development of Hypotheses: Hypotheses are formulated after initial empirical analysis.
  • Case Study Analysis: Involves conducting explorative case studies and analyzing dynamics at play.
  • Generalization of Findings: Insights are then generalized across multiple cases to verify their applicability.
  • Application: Suitable for novel phenomena or where existing theories are not easily applicable.
  • Example Cases: Exploring new events like Donald Trump’s 2016 nomination or Russia’s annexation of Crimea in 2014.

Deductive research:

  • Theory-Based: Starts with existing theories to develop scientific answers to research questions.
  • Hypothesis Development: Hypotheses are specified and then empirically examined.
  • Empirical Examination: Involves a thorough empirical analysis of hypotheses using sound methods.
  • Theory Refinement: Results can refine existing theories or contribute to new theoretical insights.
  • Application: Preferred when existing theories relate to the research question.
  • Example Projects: Usually explanatory projects asking ‘why’ questions to uncover relationships.

Explanatory and Interpretative Research Designs


  • Definition: Explanatory research aims to explain the relationships between variables, often addressing ‘why’ questions. It is primarily concerned with identifying cause-and-effect dynamics and is typically quantitative in nature. The goal is to test hypotheses derived from theories and to establish patterns that can predict future occurrences.
  • Definition: Interpretative research focuses on understanding the deeper meaning or underlying context of social phenomena. It often addresses ‘how is this possible’ questions, seeking to comprehend how certain outcomes or behaviors are produced within specific contexts. This type of research is usually qualitative and prioritizes individual experiences and perceptions.
  • Explanatory Research: Poses ‘why’ questions to explore causal relationships and understand what factors influence certain outcomes.
  • Interpretative Research: Asks ‘how is this possible’ questions to delve into the processes and meanings behind social phenomena.
  • Explanatory Research: Relies on established theories to form hypotheses about causal relationships between variables. These theories are then tested through empirical research.
  • Interpretative Research: Uses theories to provide a framework for understanding the social context and meanings. The focus is on constitutive relationships rather than causal ones.
  • Explanatory Research: Often involves studying multiple cases to allow for comparison and generalization. It seeks patterns across different scenarios.
  • Interpretative Research: Typically concentrates on single case studies, providing an in-depth understanding of that particular case without necessarily aiming for generalization.
  • Explanatory Research: Aims to produce findings that can be generalized to other similar cases or populations. It seeks universal or broad patterns.
  • Interpretative Research: Offers detailed insights specific to a single case or context. These findings are not necessarily intended to be generalized but to provide a deep understanding of the particular case.

Qualitative, Quantitative, and Mixed-method Projects

  • Definition: Qualitative research is exploratory and aims to understand human behavior, beliefs, feelings, and experiences. It involves collecting non-numerical data, often through interviews, focus groups, or textual analysis. This method is ideal for gaining in-depth insights into specific phenomena.
  • Example in Education: A qualitative study might involve conducting in-depth interviews with teachers to explore their experiences and challenges with remote teaching during the pandemic. This research would aim to understand the nuances of their experiences, challenges, and adaptations in a detailed and descriptive manner.
  • Definition: Quantitative research seeks to quantify data and generalize results from a sample to the population of interest. It involves measurable, numerical data and often uses statistical methods for analysis. This approach is suitable for testing hypotheses or examining relationships between variables.
  • Example in Education: A quantitative study could involve surveying a large number of students to determine the correlation between the amount of time spent on homework and their academic achievement. This would involve collecting numerical data (hours of homework, grades) and applying statistical analysis to examine relationships or differences.
  • Definition: Mixed-method research combines both qualitative and quantitative approaches, providing a more comprehensive understanding of the research problem. It allows for the exploration of complex research questions by integrating numerical data analysis with detailed narrative data.
  • Example in Education: A mixed-method study might investigate the impact of a new teaching method. The research could start with quantitative methods, like administering standardized tests to measure learning outcomes, followed by qualitative methods, such as conducting focus groups with students and teachers to understand their perceptions and experiences with the new teaching method. This combination provides both statistical results and in-depth understanding.
  • Research Questions: What kind of information is needed to answer the questions? Qualitative for “how” and “why”, quantitative for “how many” or “how much”, and mixed methods for a comprehensive understanding of both the breadth and depth of a phenomenon.
  • Nature of the Study: Is the study aiming to explore a new area (qualitative), confirm hypotheses (quantitative), or achieve both (mixed-method)?
  • Resources Available: Time, funding, and expertise available can influence the choice. Qualitative research can be more time-consuming, while quantitative research may require specific statistical skills.
  • Data Sources: Availability and type of data also guide the methodology. Existing numerical data might lean towards quantitative, while studies requiring personal experiences or opinions might be qualitative.

References:

Blair, G., Coppock, A., & Humphreys, M. (2023). Research Design in the Social Sciences: Declaration, Diagnosis, and Redesign. Princeton University Press.

Panke, D. (2018). Research design & method selection: Making good choices in the social sciences. Research Design & Method Selection, 1-368.

The Four Types of Research Design — Everything You Need to Know

Jenny Romanchuk

Updated: July 23, 2024

Published: January 18, 2023

When you conduct research, you need to have a clear idea of what you want to achieve and how to accomplish it. A good research design enables you to collect accurate and reliable data to draw valid conclusions.


In this blog post, we'll outline the key features of the four common types of research design with real-life examples from UnderArmor, Carmex, and more. Then, you can easily choose the right approach for your project.

Table of Contents

  • What is research design?
  • The four types of research design
  • Research design examples

Research design is the process of planning and executing a study to answer specific questions. This process allows you to test hypotheses in the business or scientific fields.

Research design involves choosing the right methodology, selecting the most appropriate data collection methods, and devising a plan (or framework) for analyzing the data. In short, a good research design helps us to structure our research.

Marketers use different types of research design when conducting research.

There are four common types of research design — descriptive, correlational, experimental, and diagnostic designs. Let’s take a look at each in more detail.

Researchers use different designs to accomplish different research objectives. Here, we'll discuss how to choose the right type, the benefits of each, and use cases.

Research can also be classified as quantitative or qualitative at a higher level. Some experiments exhibit both qualitative and quantitative characteristics.


Experimental

An experimental design is used when the researcher wants to examine how variables interact with each other. The researcher manipulates one variable (the independent variable) and observes the effect on another variable (the dependent variable).

In other words, the researcher wants to test a causal relationship between two or more variables.

In marketing, an example of experimental research would be comparing the effects of a television commercial versus an online advertisement conducted in a controlled environment (e.g. a lab). The objective of the research is to test which advertisement gets more attention among people of different age groups, gender, etc.

Another example is a study of the effect of music on productivity. A researcher assigns participants to one of two groups – those who listen to music while working and those who don't – and measures their productivity.

The main benefit of an experimental design is that it allows the researcher to draw causal relationships between variables.

One limitation: This research requires a great deal of control over the environment and participants, making it difficult to replicate in the real world. In addition, it’s quite costly.

Best for: Testing a cause-and-effect relationship (i.e., the effect of an independent variable on a dependent variable).
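For illustration, here is a minimal Python sketch of how the music-and-productivity experiment described above might be analyzed, using an independent-samples t-test to compare the two randomly assigned groups. The productivity scores are invented.

```python
# A hedged sketch with invented data: comparing two experimental groups.
from scipy import stats

music_group    = [42, 39, 45, 47, 40, 44, 46, 41]   # tasks completed per day
no_music_group = [38, 35, 40, 37, 36, 39, 41, 34]

t_stat, p_value = stats.ttest_ind(music_group, no_music_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value (commonly below 0.05) would suggest the difference in mean
# productivity between the two randomly assigned groups is unlikely to be due
# to chance alone.
```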

Correlational

A correlational design examines the relationship between two or more variables without intervening in the process.

Correlational design allows the analyst to observe natural relationships between variables. This results in data being more reflective of real-world situations.

For example, marketers can use correlational design to examine the relationship between brand loyalty and customer satisfaction. In particular, the researcher would look for patterns or trends in the data to see if there is a relationship between these two entities.

Similarly, you can study the relationship between physical activity and mental health. The analyst here would ask participants to complete surveys about their physical activity levels and mental health status. Data would show how the two variables are related.

Best for: Understanding the extent to which two or more variables are associated with each other in the real world.

Descriptive

Descriptive research refers to a systematic process of observing and describing what a subject does without influencing them.

Methods include surveys, interviews, case studies, and observations. Descriptive research aims to gather an in-depth understanding of a phenomenon and answers when/what/where.

SaaS companies use descriptive design to understand how customers interact with specific features. Findings can be used to spot patterns and roadblocks.

For instance, product managers can use screen recordings by Hotjar to observe in-app user behavior. This way, the team can precisely understand what is happening at a certain stage of the user journey and act accordingly.

Brand24, a social listening tool, tripled its sign-up conversion rate from 2.56% to 7.42%, thanks to locating friction points in the sign-up form through screen recordings.


Carma Laboratories worked with research company MMR to measure customers’ reactions to the lip-care company’s packaging and product . The goal was to find the cause of low sales for a recently launched line extension in Europe.

The team moderated a live, online focus group. Participants were shown product samples, while AI-based natural language processing identified key themes in customer feedback.

This helped uncover key reasons for poor performance and guided changes in packaging.


Research Design: Definition, Types, Characteristics & Study Examples


A research design is the blueprint for any study. It's the plan that outlines how the research will be carried out. A study design usually includes the methods of data collection, the type of data to be gathered, and how it will be analyzed. Research designs help ensure the study is reliable, valid, and can answer the research question.

Behind every groundbreaking discovery and innovation lies well-designed research. Whether you're investigating a new technology or exploring a social phenomenon, a solid research design is key to achieving reliable results. But what exactly does it mean, and how do you create an effective one? Read on and find out:

  • Detailed definition
  • Types of research study designs
  • How to write a research design
  • Useful examples.

Whether you're a seasoned researcher or just getting started, understanding the core principles will help you conduct better studies and make more meaningful contributions.

What Is a Research Design: Definition

Research design is an overall study plan outlining a specific approach to investigating a research question. It covers particular methods and strategies for collecting, measuring, and analyzing data. Students are required to build a study design either as an individual task or as a separate chapter in a research paper, thesis, or dissertation.

Before designing a research project, you need to consider a series of aspects of your future study:

  • Research aims: What research objectives do you want to accomplish with your study? What approach will you take to get there? Will you use a quantitative, qualitative, or mixed methods approach?
  • Type of data: Will you gather new data (primary research) or rely on existing data (secondary research) to answer your research question?
  • Sampling methods: How will you pick participants? What criteria will you use to ensure your sample is representative of the population?
  • Data collection methods: What tools or instruments will you use to gather data (e.g., a survey, interview, or observation)?
  • Measurement: What metrics will you use to capture and quantify data?
  • Data analysis: What statistical or qualitative techniques will you use to make sense of your findings?

By using a well-designed research plan, you can make sure your findings are solid and can be generalized to a larger group.

Research design example

You are going to investigate the effectiveness of a mindfulness-based intervention for reducing stress and anxiety among college students. You decide to organize an experiment to explore the impact. Participants are randomly assigned to either an intervention group or a control group, and you conduct pre- and post-intervention assessments using self-report measures of stress and anxiety.
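A minimal Python sketch of this example, assuming invented participant numbers and change scores, shows how random assignment and a simple between-group comparison of pre-to-post changes might look in practice.

```python
# Illustrative only: hypothetical participants and anxiety change scores.
import random
from scipy import stats

random.seed(42)
participants = list(range(1, 41))             # 40 hypothetical students
random.shuffle(participants)
intervention_group = participants[:20]         # mindfulness program
control_group      = participants[20:]         # no intervention

# Hypothetical change scores (post minus pre anxiety); negative means anxiety dropped.
intervention_change = [-6, -4, -5, -3, -7, -2, -5, -4, -6, -3,
                       -5, -4, -2, -6, -3, -5, -4, -7, -3, -4]
control_change      = [-1,  0, -2,  1, -1,  0, -2, -1,  0, -1,
                        1, -2,  0, -1,  0, -1, -2,  0, -1,  1]

t_stat, p_value = stats.ttest_ind(intervention_change, control_change)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```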

What Makes a Good Study Design? 

To design a research study that works, you need to carefully think things through. Make sure your strategy is tailored to your research topic and watch out for potential biases. Your procedures should be flexible enough to accommodate changes that may arise during the course of research. 

A good research design should be:

  • Clear and methodologically sound
  • Feasible and realistic
  • Knowledge-driven.

By following these guidelines, you'll set yourself up for success and be able to produce reliable results.

Research Study Design Structure

A structured research design provides a clear and organized plan for carrying out a study. It helps researchers to stay on track and ensure that the study stays within the bounds of acceptable time, resources, and funding.

A typical design includes 5 main components:

  • Research question(s): Central research topic(s) or issue(s).
  • Sampling strategy: Method for selecting participants or subjects.
  • Data collection techniques: Tools or instruments for retrieving data.
  • Data analysis approaches: Techniques for interpreting and scrutinizing assembled data.
  • Ethical considerations: Principles for protecting human subjects (e.g., obtaining a written consent, ensuring confidentiality guarantees).

Research Design Essential Characteristics

Creating a sound research design lays a firm foundation for your exploration. The cost of making a mistake is too high, and this is not something scholars can afford, especially when financial resources or a considerable amount of time are invested. Choose the wrong strategy, and you risk undermining your whole study and wasting resources. 

To avoid any unpleasant surprises, make sure your study conforms to the key characteristics. Here are some core features of research designs:

  • Reliability: Reliability is the stability of your measures or instruments over time. A reliable research design is one that can be reproduced in the same way and deliver consistent outcomes. It should also provide accurate representations of actual conditions and guarantee data quality.
  • Validity: For a study to be valid, it must measure what it claims to measure. This means that methodological approaches should be carefully considered and aligned with the main research question(s).
  • Generalizability: Generalizability means that your insights can be applied outside the scope of the study. When making inferences, researchers must take into account determinants such as sample size, sampling technique, and context.
  • Neutrality: A study model should be free from personal or cognitive biases to ensure an impartial investigation of a research topic. Steer clear of favoring any particular group or outcome.

Key Concepts in Research Design

Now let’s discuss the fundamental principles that underpin study designs in research. This will help you develop a strong framework and make sure all the puzzle pieces fit together.

Primary concepts

An independent variable is hypothesized to have an impact on a dependent variable. Researchers record the alterations in the dependent variable caused by manipulations in the independent variable.

An extraneous (confounding) variable is an uncontrolled factor that may affect a dependent variable in a study.

Control means researchers hold all variables constant except for the independent variable, so that changes can be attributed to it rather than to other factors.

A hypothesis is an educated guess about a causal relationship between 2 or more variables.
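To tie these terms together, here is a toy Python simulation (purely illustrative, with a made-up data-generating model): study hours act as the independent variable, test score as the dependent variable, and sleep is an extraneous variable held constant as a control.

```python
# Toy simulation only; the model and numbers are invented for illustration.
import random

random.seed(0)

def test_score(study_hours, sleep_hours):
    # Hypothetical data-generating model: score depends on both variables plus noise.
    return 50 + 5 * study_hours + 2 * sleep_hours + random.gauss(0, 2)

# Hypothesis: more study hours raise test scores.
# Control: hold sleep constant at 8 hours so changes can be attributed to study time.
for study_hours in (0, 2, 4, 6):
    print(study_hours, round(test_score(study_hours, sleep_hours=8), 1))
```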

Types of Approaches to Research Design

Study frameworks can fall into 2 major categories depending on the approach to compiling data you opt for. The 2 main types of study designs in research are qualitative and quantitative research. Both approaches have their unique strengths and weaknesses, and can be utilized based on the nature of information you are dealing with. 

Quantitative Research  

Quantitative study is focused on establishing empirical relationships between variables and collecting numerical data. It involves using statistics, surveys, and experiments to measure the effects of certain phenomena. This research design type looks at hard evidence and provides measurements that can be analyzed using statistical techniques. 

Qualitative Research 

Qualitative approach is used to examine the behavior, attitudes, and perceptions of individuals in a given environment. This type of study design relies on unstructured data retrieved through interviews, open-ended questions and observational methods. 


Types of Research Designs & Examples

Choosing a research design may be tough, especially for first-timers. A great way to get started is to pick the design that best fits your objectives. There are 4 types of research designs you can opt for to carry out your investigation:

  • Experimental
  • Correlational
  • Descriptive
  • Diagnostic/explanatory.

For more advanced studies, you can even combine several types. Mixed-methods research may come in handy when exploring complex phenomena that cannot be adequately captured by one method alone.

Below we will go through each type and offer you examples of study designs to assist you with selection.

1. Experimental

In experimental research design, scientists manipulate one or more independent variables and control other factors in order to observe their effect on a dependent variable. This type of research design is used for experiments where the goal is to determine a causal relationship. 

Its core characteristics include:

  • Randomization
  • Manipulation
  • Replication.

Example: A pharmaceutical company wants to test a new drug to investigate its effectiveness in treating a specific medical condition. Researchers would randomly assign participants to either a control group (receiving a placebo) or an experimental group (receiving the new drug), while controlling other variables (e.g., age, medical history) so that the results can be attributed to the drug.

2. Correlational

Correlational studies are used to examine existing relationships between variables. In this type of design, you don't manipulate any variables; researchers focus on observing and measuring naturally occurring relationships.

Correlational studies encompass such features: 

  • Data collection from natural settings
  • No intervention by the researcher
  • Observation over time.

Example: A research team wants to examine the relationship between academic performance and extracurricular activities. They would observe students' performance in courses and measure how much time the students spend engaging in extracurricular activities.

3. Descriptive 

Descriptive research design is all about describing a particular population or phenomenon without any intervention. This study design is especially helpful when we're not sure about something and want to understand it better.

Descriptive studies are characterized by such features:

  • Random and convenience sampling
  • Observation
  • No intervention.

Example: A psychologist wants to understand how parents' behavior affects their child's self-concept. They would observe interactions between children and their parents in a natural setting. The gathered information gives an overview of the situation and helps to recognize patterns.

4. Diagnostic

Diagnostic or explanatory research is used to determine the cause of an existing problem or a chronic symptom. Unlike other types of design, here scientists try to understand why something is happening. 

Among essential hallmarks of explanatory studies are: 

  • Testing hypotheses and theories
  • Examining existing data
  • Comparative analysis.

Example: A public health specialist wants to identify the cause of an outbreak of a water-borne disease in a certain area. They would inspect water samples and records and compare them with data from similar outbreaks in other areas. This helps uncover the reasons behind the outbreak.

How to Design a Research Study: Step-by-Step Process

When designing your research don't just jump into it. It's important to take the time and do things right in order to attain accurate findings. Follow these simple steps on how to design a study to get the most out of your project.

1. Determine Your Aims 

The first step in the research design process is figuring out what you want to achieve. This involves identifying your research question, goals, and the specific objectives you want to accomplish. Think about whether you want to explore a specific issue or develop a new theory. Setting your aims from the get-go will help you stay focused and ensure that your study is driven by purpose. 

Once you are clear about your goals, you need to decide on the main approach. Will you use qualitative or quantitative methods? Or perhaps a mixture of both?

2. Select a Type of Research Design

Choosing a suitable design requires considering multiple factors, such as your research question, data collection methods, and resources. There are various research design types, each with its own advantages and limitations. Think about the kind of data that would be most useful to address your questions. Ultimately, a well-devised strategy should help you gather accurate data to achieve your objectives.

3. Define Your Population and Sampling Methods

To design a research project, it is essential to establish your target population and parameters for selecting participants. First, identify a cohort of individuals who share common characteristics and possess relevant experiences. 

For instance, if you are researching the impact of social media on mental health, your population could be young adults aged 18-25 who use social media frequently.

With your population in mind, you can now choose an optimal sampling method. Sampling is basically the process of narrowing down your target group to only those individuals who will participate in your study. At this point, you need to decide on whether you want to randomly choose the participants (probability sampling) or set out any selection criteria (non-probability sampling). 

Example: To examine the influence of social media on mental well-being, we divide the whole population into smaller subgroups using stratified random sampling. Then, we randomly pick participants from each subgroup to make sure that the findings also hold for the broader group of young adults.
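A minimal pandas sketch of this stratified sampling step might look like the following; the data frame, column names, and strata are hypothetical.

```python
# Illustrative stratified random sampling with a hypothetical population table.
import pandas as pd

population = pd.DataFrame({
    "participant_id": range(1, 13),
    "age_group": ["18-19", "18-19", "18-19", "18-19",
                  "20-22", "20-22", "20-22", "20-22",
                  "23-25", "23-25", "23-25", "23-25"],
})

# Draw 50% from each age-group stratum so every subgroup is represented.
sample = (
    population
    .groupby("age_group", group_keys=False)
    .sample(frac=0.5, random_state=42)
)
print(sample)
```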

4. Decide on Your Data Collection Methods

When devising your study, it is also important to consider how you will retrieve data.  Depending on the type of design you are using, you may deploy diverse methods. Below you can see various data collection techniques suited for different research designs. 

Data collection methods in various studies:

  • Experimental designs: experiments, controlled trials
  • Correlational designs: surveys, observations
  • Descriptive designs: direct observation, video recordings, field notes
  • Diagnostic/explanatory designs: medical or psychological tests, screening, clinical interviews

Additionally, if you plan on integrating existing data sources like medical records or publicly available datasets, you want to mention this as well. 

5. Arrange Your Data Collection Process

Your data collection process should also be meticulously thought out. This stage involves scheduling interviews, arranging questionnaires and preparing all the necessary tools for collecting information from participants. Detail how long your study will take and what procedures will be followed for recording and analyzing the data. 

State which variables will be studied and what measures or scales will be used when assessing each variable.

Measures and scales 

Measures and scales are tools used to quantify variables in research. A measure is any method used to collect data on a variable, while a scale is a set of items or questions used to measure a particular construct or concept. Different types of scales include nominal, ordinal, interval, and ratio, each of which has distinct properties.

Operationalization 

When working with abstract information that needs to be quantified, researchers often operationalize the variable by defining it in concrete terms that can be measured or observed. This allows the abstract concept to be studied systematically and rigorously. 

Operationalization in study design example

If studying the concept of happiness, researchers might operationalize it by using a scale that measures positive affect or life satisfaction. This allows us to quantify happiness and inspect its relationship with other variables, such as income or social support.
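As a small illustration of operationalization, the Python sketch below (with invented Likert-item responses and incomes) turns the abstract concept of happiness into a measurable life-satisfaction score and then relates it to income.

```python
# Hypothetical items and data, for illustration only.
import numpy as np

# Each row is a respondent; columns are five 1-7 Likert items from a
# hypothetical life-satisfaction scale.
items = np.array([
    [6, 5, 6, 7, 6],
    [3, 4, 2, 3, 3],
    [5, 5, 6, 5, 4],
    [2, 3, 2, 1, 2],
    [7, 6, 7, 6, 7],
])
annual_income = np.array([52_000, 31_000, 45_000, 24_000, 68_000])

# Operationalization: the abstract concept becomes a measurable score.
satisfaction_score = items.sum(axis=1)

r = np.corrcoef(satisfaction_score, annual_income)[0, 1]
print(round(r, 2))  # strength of the association in this toy sample
```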

Remember that research design should be flexible enough to adjust for any unforeseen developments. Even with rigorous preparation, you may still face unexpected challenges during your project. That’s why you need to work out contingency plans when designing research.

6. Choose Data Analysis Techniques

It’s impossible to design research without mentioning how you are going to scrutinize data. To select a proper method, take into account the type of data you are dealing with and how many variables you need to analyze. 

Qualitative data may require thematic analysis or content analysis.

Quantitative data, on the other hand, could be processed with more sophisticated statistical analysis approaches such as regression analysis, factor analysis or descriptive statistics.
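For example, a quantitative analysis plan might combine descriptive statistics with a simple linear regression, as in the following Python sketch (the homework-hours and exam-score data are invented).

```python
# Illustrative only: invented data for two common quantitative techniques.
import numpy as np
from scipy import stats

homework_hours = np.array([1, 2, 2, 3, 4, 5, 6, 7])
exam_score     = np.array([55, 60, 58, 65, 70, 72, 78, 85])

# Descriptive statistics summarize the sample.
print("mean score:", exam_score.mean(), "std dev:", round(exam_score.std(ddof=1), 2))

# Simple linear regression estimates how scores change with homework hours.
result = stats.linregress(homework_hours, exam_score)
print(f"slope = {result.slope:.2f}, r^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.4f}")
```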

Finally, don’t forget about ethical considerations. Opt for those methods that minimize harm to participants and protect their rights.

Research Design Checklist

Having a checklist in front of you will help you design your research flawlessly.

  • I clearly defined my research question and its significance.
  • I considered crucial factors such as the nature of my study, type of required data and available resources to choose a suitable design.
  • A sample size is sufficient to provide statistically significant results.
  • My data collection methods are reliable and valid.
  • Analysis methods are appropriate for the type of data I will be gathering.
  • My research design protects the rights and privacy of my participants.
  • I created a realistic timeline for research, including deadlines for data collection, analysis, and write-up.
  • I considered funding sources and potential limitations.

Bottom Line on Research Design & Study Types

Designing a research project involves making countless decisions that can affect the quality of your work. By planning out each step and selecting the best methods for data collection and analysis, you can ensure that your project is conducted professionally.

We hope this article has helped you to better understand the research design process. If you have any questions or comments, ping us in the comments section below.


FAQ About Research Study Designs

1. What is a study design?

Study design, also called research design, is the overall plan for a project, including its purpose, methodology, and data collection and analysis techniques. A good design ensures that your project is conducted in an organized and ethical manner. It also provides clear guidelines for replicating or extending the study in the future.

2. What is the purpose of a research design?

The purpose of a research design is to provide a structure and framework for your project. By outlining your methodology, data collection techniques, and analysis methods in advance, you can ensure that your project will be conducted effectively.

3. What is the importance of research designs?

Research designs are critical to the success of any research project for several reasons. Specifically, study designs grant:

  • Clear direction for all stages of a study
  • Validity and reliability of findings
  • Roadmap for replication or further extension
  • Accurate results by controlling for potential bias
  • Comparison between studies by providing consistent guidelines.

By following an established plan, researchers can be sure that their projects are organized, ethical, and reliable.

4. What are the 4 types of study designs?

There are generally 4 types of study designs commonly used in research:

  • Experimental studies: investigate cause-and-effect relationships by manipulating the independent variable.
  • Correlational studies: examine relationships between 2 or more variables without intervening.
  • Descriptive studies: describe the characteristics of a population or phenomenon without making any inferences about cause and effect.
  • Explanatory studies: intended to explain causal relationships.




Research Methods Guide: Research Design & Method


FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal and on what subjects (and whom) you want to study. Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus. To answer these questions, you need to make a decision about how to collect your data. The most frequently used methods include:

  • Observation / Participant Observation
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider. An observation or interview method (a so-called qualitative approach) helps you collect richer information, but it takes time. Using a survey helps you collect more data quickly, yet it may lack details. So, you will need to consider the time you have for research and the balance between strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).


Frequently asked questions

What is a research design?

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.


Content validity shows you how accurately a test or other measurement method taps  into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
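To make the contrast concrete, here is a minimal Python sketch (standard library only, with an entirely hypothetical population list) of how a stratified sample draws randomly within every subgroup while a quota sample simply fills each quota non-randomly:

```python
import random

random.seed(0)

# Hypothetical population of 1,000 people, each tagged with a subgroup label.
population = [{"id": i, "group": random.choice(["urban", "rural", "suburban"])}
              for i in range(1000)]
groups = {"urban", "rural", "suburban"}

def stratified_sample(pop, per_group):
    """Probability sampling: draw a random sample from every subgroup."""
    sample = []
    for group in groups:
        stratum = [p for p in pop if p["group"] == group]
        sample += random.sample(stratum, per_group)
    return sample

def quota_sample(pop, per_group):
    """Non-probability sampling: take whoever comes first until each quota is full."""
    counts = {g: 0 for g in groups}
    sample = []
    for person in pop:                      # e.g., order of arrival -- not random
        if counts[person["group"]] < per_group:
            sample.append(person)
            counts[person["group"]] += 1
    return sample

print(len(stratified_sample(population, 30)), len(quota_sample(population, 30)))
```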

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
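As an illustration, the sketch below uses simulated (hypothetical) scale scores to show how convergent and discriminant validity are commonly checked with correlations; all variable names and values are assumptions made for the example only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical scores: a new anxiety scale, an established anxiety scale
# (should converge), and an unrelated extraversion scale (should diverge).
established = rng.normal(50, 10, 200)
new_scale = established + rng.normal(0, 5, 200)   # overlaps with the same construct
extraversion = rng.normal(50, 10, 200)            # a distinct construct

convergent_r = np.corrcoef(new_scale, established)[0, 1]
discriminant_r = np.corrcoef(new_scale, extraversion)[0, 1]

print(f"Convergent validity check (expect high):  r = {convergent_r:.2f}")
print(f"Discriminant validity check (expect low): r = {discriminant_r:.2f}")
```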

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , which includes construct validity, face validity , and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. Structured interviews are often quantitative in nature, and they are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process involves the following steps: 

  • First, the author submits the manuscript to the editor.
  • Next, the editor screens the manuscript and decides whether to reject it and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Then, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Exploratory research is often one of the first stages in the research process, serving as a jumping-off point for future research, while explanatory research is used to investigate how or why a phenomenon occurs.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.
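As a rough illustration, the sketch below uses pandas on a small, hypothetical survey dataset to show typical cleaning steps (removing duplicates, standardizing formats, and handling missing or impossible values); the column names and validity range are assumptions for the example:

```python
import pandas as pd

# Hypothetical raw survey data with typical "dirty" issues.
raw = pd.DataFrame({
    "participant": [1, 2, 2, 3, 4, 5],
    "age": [25, 32, 32, 230, None, 41],              # impossible and missing values
    "country": ["US", "us", "us", "UK", "UK", "U.K."],
})

clean = raw.drop_duplicates(subset="participant")                                    # remove duplicate rows
clean = clean.assign(country=clean["country"].str.upper().replace({"U.K.": "UK"}))  # standardize formats
clean.loc[~clean["age"].between(18, 100), "age"] = float("nan")                     # treat invalid ages as missing
clean = clean.dropna(subset=["age"])                                                # drop rows missing a key value

print(clean)
```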

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data is from a random or representative sample
  • You expect a linear relationship between the two variables
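For illustration, the sketch below (assuming SciPy is installed) computes Pearson's r on simulated data that meets the assumptions above, and shows that rescaling a variable changes the slope of the fitted line without changing the correlation coefficient; all variable names and values are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hours_studied = rng.uniform(0, 10, 100)                       # hypothetical predictor
exam_score = 55 + 4 * hours_studied + rng.normal(0, 6, 100)   # roughly linear outcome

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(f"Pearson's r = {r:.2f}, p = {p_value:.4f}")

# The slope is a separate quantity: rescaling the outcome changes the slope
# of the fitted line but leaves the correlation coefficient unchanged.
slope_original = np.polyfit(hours_studied, exam_score, 1)[0]
slope_rescaled = np.polyfit(hours_studied, exam_score / 10, 1)[0]
print(f"Slopes: {slope_original:.2f} vs {slope_rescaled:.2f} (same r)")
```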

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomly between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize bias from order effects.

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable bring about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to false cause fallacy .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.
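A small simulation can make this concrete. The sketch below (using NumPy, with hypothetical measurement values) shows random errors largely averaging out across many measurements, while a systematic offset shifts the mean away from the true value:

```python
import numpy as np

rng = np.random.default_rng(7)
true_weight = 70.0                                         # the value we are trying to measure (kg)

random_only = true_weight + rng.normal(0, 0.5, 1000)       # scatter in both directions
with_bias = true_weight + 1.2 + rng.normal(0, 0.5, 1000)   # miscalibrated scale

print(f"True value:                 {true_weight:.2f}")
print(f"Mean with random error:     {random_only.mean():.2f}  (errors largely cancel out)")
print(f"Mean with systematic error: {with_bias.mean():.2f}  (shifted away from the truth)")
```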

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.
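As a minimal illustration, the following Python sketch enumerates the conditions of a hypothetical 2 × 3 factorial design by crossing every level of one independent variable with every level of the other:

```python
from itertools import product

# Hypothetical 2 x 3 factorial design: two independent variables with 2 and 3 levels.
caffeine = ["placebo", "caffeine"]
sleep = ["4 hours", "6 hours", "8 hours"]

conditions = list(product(caffeine, sleep))
for i, condition in enumerate(conditions, start=1):
    print(f"Condition {i}: {condition}")

print(f"Total conditions: {len(conditions)}")   # 2 x 3 = 6
```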

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affects the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
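For example, here is a minimal Python sketch of random assignment that uses the standard library's shuffle as the "lottery"; the participant labels and group sizes are hypothetical:

```python
import random

random.seed(2024)                                     # reproducible example only

participants = [f"P{i:02d}" for i in range(1, 21)]    # 20 hypothetical participants
random.shuffle(participants)                          # the "lottery" step

half = len(participants) // 2
control_group = participants[:half]
experimental_group = participants[half:]

print("Control group:     ", control_group)
print("Experimental group:", experimental_group)
```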

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
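As an illustration of statistical control, the sketch below (assuming statsmodels and pandas are installed, and using simulated data with hypothetical variable names) compares a regression that ignores the control variable with one that includes it:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: motivation (control variable) influences both study hours
# (independent variable) and exam score (dependent variable).
motivation = rng.normal(0, 1, n)
hours = 5 + 2 * motivation + rng.normal(0, 1, n)
score = 60 + 3 * hours + 4 * motivation + rng.normal(0, 5, n)

df = pd.DataFrame({"score": score, "hours": hours, "motivation": motivation})

naive = smf.ols("score ~ hours", data=df).fit()                    # control variable omitted
controlled = smf.ols("score ~ hours + motivation", data=df).fit()  # control variable included

print(f"Effect of hours, ignoring motivation:        {naive.params['hours']:.2f}")
print(f"Effect of hours, controlling for motivation: {controlled.params['hours']:.2f}")
```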

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable.
  • When it’s taken into account statistically, the direct relationship between the independent and dependent variables weakens, because part of the effect passes through the mediator.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing your population size by your target sample size.
  • Choose every k th member of the population as your sample.
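A minimal Python sketch of the three steps above, using a hypothetical list of 500 people and a target sample size of 50, might look like this:

```python
import random

random.seed(3)

population = [f"person_{i}" for i in range(1, 501)]   # hypothetical ordered list of 500 people
target_sample_size = 50

k = len(population) // target_sample_size             # sampling interval: 500 / 50 = 10
start = random.randint(0, k - 1)                      # random starting point within the first interval
sample = population[start::k]                         # every k-th member from the start point

print(f"Interval k = {k}, sample size = {len(sample)}")
```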

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
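For illustration, the following sketch (standard library only, with hypothetical schools as clusters) contrasts single-stage cluster sampling, which keeps every unit in the selected clusters, with double-stage cluster sampling, which samples units within them:

```python
import random

random.seed(11)

# Hypothetical population grouped into clusters (e.g., 20 schools with 40 students each).
clusters = {f"school_{c}": [f"student_{c}_{i}" for i in range(40)] for c in range(1, 21)}

selected = random.sample(list(clusters), 5)            # randomly pick 5 clusters

# Single-stage: include every unit within the selected clusters.
single_stage = [unit for name in selected for unit in clusters[name]]

# Double-stage: randomly sample units from within each selected cluster.
double_stage = [unit for name in selected for unit in random.sample(clusters[name], 10)]

print(len(single_stage), len(double_stage))            # 200 vs 50 units
```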

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity . However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey  is an example of simple random sampling . In order to collect detailed data on the population of the US, the Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
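A minimal sketch of simple random sampling in Python, assuming you already have a complete (hypothetical) sampling frame, is essentially one call to random.sample, so every unit has an equal chance of selection:

```python
import random

random.seed(5)

# Hypothetical sampling frame: a complete list of the population.
sampling_frame = [f"household_{i}" for i in range(1, 10001)]

sample = random.sample(sampling_frame, 100)   # every household has an equal chance of selection
print(sample[:5])
```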

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with 5 or 7 possible responses, to capture their degree of agreement.

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
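As a small illustration, the sketch below (using SciPy on simulated data with hypothetical group means) runs an independent-samples t test, one common form of hypothesis test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Hypothetical experiment: exam scores for a treatment group and a control group.
treatment = rng.normal(78, 8, 40)
control = rng.normal(74, 8, 40)

# Null hypothesis: the two groups have the same mean score.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value means a difference at least this large would be unlikely
# if the null hypothesis were true.
```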

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.
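A minimal sketch of statistical control, assuming the statsmodels library and an invented dataset: the potential confounder is entered as a covariate, so the coefficient of the independent variable is estimated while holding the confounder constant.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: does weekly exercise relate to blood pressure,
# with age as a potential confounder?
df = pd.DataFrame({
    "blood_pressure": [130, 125, 140, 118, 135, 122, 128, 138],
    "exercise_hours": [2, 4, 1, 5, 1, 4, 3, 0],
    "age":            [50, 45, 60, 40, 58, 42, 48, 62],
})

# Statistical control: enter the confounder (age) as a covariate so the
# exercise coefficient is adjusted for differences in age
model = smf.ols("blood_pressure ~ exercise_hours + age", data=df).fit()
print(model.params)
```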

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes. You can include more than one independent variable and more than one dependent variable in a study, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .
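The sketch below (a hypothetical sampling frame, assuming pandas) illustrates two of these methods, simple random sampling and stratified sampling:

```python
import pandas as pd

# Hypothetical sampling frame: 1,000 students with a faculty stratum
population = pd.DataFrame({
    "student_id": range(1000),
    "faculty": ["arts", "science", "engineering", "business"] * 250,
})

# Simple random sampling: every student has an equal chance of selection
srs = population.sample(n=100, random_state=42)

# Stratified sampling: draw 10% from each faculty so all strata are represented
stratified = (
    population.groupby("faculty", group_keys=False)
    .sample(frac=0.10, random_state=42)
)

print(len(srs), stratified["faculty"].value_counts().to_dict())
```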

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .
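A toy numeric illustration (simulated data, not from the source): the population mean is the parameter, the mean of a random sample is the statistic, and their difference is the sampling error.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Simulated population of 100,000 heights in cm; purely illustrative
population = rng.normal(loc=170, scale=10, size=100_000)
parameter = population.mean()  # population parameter

# Draw a random sample and compute the corresponding sample statistic
sample = rng.choice(population, size=200, replace=False)
statistic = sample.mean()

print(f"parameter = {parameter:.2f} cm, statistic = {statistic:.2f} cm, "
      f"sampling error = {statistic - parameter:.2f} cm")
```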

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study vs. cross-sectional study:

  • A longitudinal study makes repeated observations; a cross-sectional study makes observations at a single point in time.
  • A longitudinal study observes the same sample multiple times; a cross-sectional study observes a different sample (a “cross-section”) of the population.
  • A longitudinal study follows changes in participants over time; a cross-sectional study provides a snapshot of society at a given point in time.

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels
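For the last point, a minimal Python sketch (hypothetical participant IDs, invented for illustration) shows simple random assignment to one treatment group and one control group:

```python
import random

# Hypothetical participant IDs (P01 ... P20)
participants = [f"P{i:02d}" for i in range(1, 21)]

# Randomly assign half to the treatment group and half to the control group
random.seed(7)  # fixed seed so the illustration is reproducible
random.shuffle(participants)
treatment_group = sorted(participants[:10])
control_group = sorted(participants[10:])

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```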

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Sacred Heart University Library

Organizing Academic Research Papers: Types of Research Designs


Introduction

Before beginning your paper, you need to decide how you plan to design the study .

The research design refers to the overall strategy that you choose to integrate the different components of the study in a coherent and logical way, thereby ensuring you will effectively address the research problem; it constitutes the blueprint for the collection, measurement, and analysis of data. Note that your research problem determines the type of design you can use, not the other way around!

General Structure and Writing Style

Action research design, case study design, causal design, cohort design, cross-sectional design, descriptive design, experimental design, exploratory design, historical design, longitudinal design, observational design, philosophical design, sequential design.

Kirshenblatt-Gimblett, Barbara. Part 1, What Is Research Design? The Context of Design. Performance Studies Methods Course syllabus . New York University, Spring 2006; Trochim, William M.K. Research Methods Knowledge Base . 2006.

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem as unambiguously as possible. In social sciences research, obtaining evidence relevant to the research problem generally entails specifying the type of evidence needed to test a theory, to evaluate a program, or to accurately describe a phenomenon. However, researchers often begin their investigations far too early, before they have thought critically about what information is required to answer the study's research questions. Without attending to these design issues beforehand, the conclusions drawn risk being weak and unconvincing and, consequently, will fail to adequately address the overall research problem.

 Given this, the length and complexity of research designs can vary considerably, but any sound design will do the following things:

  • Identify the research problem clearly and justify its selection,
  • Review previously published literature associated with the problem area,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem selected,
  • Effectively describe the data which will be necessary for an adequate test of the hypotheses and explain how such data will be obtained, and
  • Describe the methods of analysis which will be applied to the data in determining whether or not the hypotheses are true or false.

Kirshenblatt-Gimblett, Barbara. Part 1, What Is Research Design? The Context of Design. Performance Studies Methods Course syllabus . New York University, Spring 2006.

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out (the action in Action Research), during which time pertinent observations are collected in various forms. The new interventional strategies are carried out, and the cyclic process repeats, continuing until a sufficient understanding of (or implementable solution for) the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • A collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research rather than testing theories.
  • When practitioners use action research it has the potential to increase the amount they learn consciously from their experience. The action research cycle can also be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to practice.
  • There are no hidden controls or preemption of direction by the researcher.

What don't these studies tell you?

  • It is harder to do than conducting conventional studies because the researcher takes on responsibilities for encouraging change as well as for research.
  • Action research is much harder to write up because you probably can’t use a standard format to report your findings effectively.
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action (e.g. change) and research (e.g. understanding) is time-consuming and complex to conduct.

Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about a phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and extension of methods.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • The intense exposure to study of the case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem, then your interpretation of the findings can only apply to that particular case.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association--a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order--to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness--a relationship between two variables that is not due to variation in a third variable.
  • Causal research designs help researchers understand why the world works the way it does by establishing a causal link between variables and eliminating other possible explanations.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and therefore to establish which variable is the actual cause and which is the  actual effect.

Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed.  Thousand Oaks, CA: Pine Forge Press, 2007; Causal Research Design: Experimentation. Anonymous SlideShare Presentation ; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base . 2006.

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population which the subject or representative member comes from, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by the same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined, therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate based data, such as, incidence rates and variants thereof.
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors  often relies on cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Because of the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36;  Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Study Design 101 . Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study . Wikipedia.

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and groups selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or among a variety of people, subjects, or phenomena rather than change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike the experimental design where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate prevalence of an outcome of interest because the sample is usually taken from the whole population.
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • Provide only a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.

Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods. Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design, Application, Strengths and Weaknesses of Cross-Sectional Studies . Healthknowledge, 2009. Cross-Sectional Study . Wikipedia.

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject.
  • Descriptive research is often used as a precursor to more quantitative research designs, the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999;  McNabb, Connie. Descriptive Research Methodologies . Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design , September 26, 2008. Explorable.com website.

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental Research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “what causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter subject behaviors or responses.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed research studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs . School of Psychology, University of New England, 2000; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Trochim, William M.K. Experimental Design . Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research . Slideshare presentation.

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to. The focus is on gaining insights and familiarity for later investigation or undertaken when problems are in a preliminary stage of investigation.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings and concerns.
  • A well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions, development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • Exploratory studies help establish research priorities.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits an ability to make definitive conclusions about the findings.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value in decision-making.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research . Wikipedia.

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute your hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as logs, diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation; therefore, gaps need to be acknowledged.

Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

A longitudinal study follows the same sample over time and makes repeated observations. With longitudinal surveys, for example, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study and is sometimes referred to as a panel study.

  • Longitudinal data allow the analysis of duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research to explain fluctuations in the data.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study . Wikipedia.

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe (data is emergent rather than pre-existing).
  • The researcher is able to collect a depth of information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observational research designs account for the complexity of group behaviors.
  • Reliability of data is low because seeing behaviors occur over and over again may be a time consuming task and difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is studied is altered to some degree by the very presence of the researcher, therefore skewing to some degree any data collected (the Heisenberg uncertainty principle).

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010.

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what do knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refine concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Chapter 4, Research Methodology and Design . Unisa Institutional Repository (UnisaIR), University of South Africa;  Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, D.C.: Falmer Press, 1994; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be done during the initial parts of the study to correct and hone the research method. Useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce extensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed.
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses a sample size large enough to capture a substantial portion of the entire population. In this case, moving on to study a second or more samples can be difficult.
  • Because the sampling technique is not randomized, the design cannot be used to create conclusions and interpretations that pertain to an entire population. Generalizability from findings is limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Rebecca Betensky, Harvard University, Course Lecture Note slides; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research . Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Nataliya V. Ivankova. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Bovaird, James A. and Kevin A. Kupzyk. “Sequential Design.” In Encyclopedia of Research Design . Neil J. Salkind, ed. Thousand Oaks, CA: Sage, 2010; Sequential Analysis . Wikipedia.



Research Design Canvas


  • Rashina Hoda   ORCID: orcid.org/0000-0001-5147-8096 2  

This chapter focuses on describing how to go about designing a research project. First, we will learn about framing the research study as a project . Then, we will be introduced to the research design canvas . While the details are specific to socio-technical grounded theory (STGT) studies, the research design canvas or template can be used for research project design in general. Next, we will learn about the 10 elements of the research design canvas, including the forming of the research team , identifying the domain and actors , selecting the phenomenon and topic to investigate, carefully assessing research ethics and considering the research values , formulating the guiding research questions , acknowledging the team’s research philosophy , deciding on the initial research protocols including data, techniques, and tools, and listing the desirable research impact . The chapter concludes by describing a pilot study to apply and refine the above elements of the research project design.




Hoda, R. (2024). Research Design Canvas. In: Qualitative Research with Socio-Technical Grounded Theory. Springer, Cham. https://doi.org/10.1007/978-3-031-60533-8_4


Managing “socially admitted” patients in hospital: a qualitative study of health care providers’ perceptions


Background: Emergency departments are a last resort for some socially vulnerable patients without an acute medical illness (colloquially known as “socially admitted” patients), resulting in their occupation of hospital beds typically designated for patients requiring acute medical care. In this study, we aimed to explore the perceptions of health care providers regarding patients admitted as “social admissions.”

Methods: This qualitative study was informed by grounded theory and involved semistructured interviews at a Nova Scotia tertiary care centre. From October 2022 to July 2023, we interviewed eligible participants, including any health care clinician or administrator who worked directly with “socially admitted” patients. Virtual or in-person individual interviews were audio-recorded and transcribed, then independently and iteratively coded. We mapped themes on the 5 domains of the Quintuple Aim conceptual framework.

Results: We interviewed 20 nurses, physicians, administrators, and social workers. Most identified as female ( n = 11) and White ( n = 13), and were in their mid to late career ( n = 13). We categorized 9 themes into 5 domains: patient experience (patient description, provision of care); care team well-being (moral distress, hierarchy of care); health equity (stigma and missed opportunities, prejudices); cost of care (wait-lists and scarcity of alternatives); and population health (factors leading to vulnerability, system changes). Participants described experiences caring for “socially admitted” patients, perceptions and assumptions underlying “social” presentations, system barriers to care delivery, and suggestions of potential solutions.

Interpretation: Health care providers viewed “socially admitted” patients as needing enhanced care but identified individual, institutional, and system challenges that impeded its realization. Examining perceptions of the people who care for “socially admitted” patients offers insights to guide clinicians and policy-makers in caring for socially vulnerable patients.

See related editorial at www.cmaj.ca/lookup/doi/10.1503/cmaj.240577

Emergency departments have become a destination of last resort for some patients who are made vulnerable by social circumstances, resulting in their occupying hospital beds typically designated for people with acute medical issues. 1 “Social admission” is a colloquial, nondiagnostic label used to describe a person for whom no acute medical issues are recognized to be contributing to their seeking health care. However, many health care providers understand that patients who are admitted for social reasons face challenges such as a breakdown of care supports or an inability of the patient or family to cope with the demands of living at home. 2 These patients often have lengthy stays in emergency departments or hospital wards, and frequently encounter barriers (e.g., housing or home support) delaying safe discharge from hospital. The colloquial terms “failure to cope,” “acopia,” “orphan patient,” or “home care impossible,” among others, are sometimes used to refer to these patients. 3 – 5 Such terminology can be stigmatizing because it indicates a value judgment that patients require admission solely on “social” grounds, sometimes failing to account for underlying medical complexity. 6

The “social admission” phenomenon is an under-researched area in health care. These patients, often categorized by health care providers as not being acutely ill, experience in-hospital death rates as high as 22.2%–34.9%. 7 , 8 Explanations may include under-triaging in the emergency department owing to poor recognition of atypical clinical presentations and delays in timely assessments. 5 Patients may be misdiagnosed or develop acute illness during their hospital stay. In 2 international studies, by the end of hospitalization, an admission diagnosis of “acopia” was no longer the discharge diagnosis in 88%–92.5% of cases. 7 , 9 Diagnoses of falls, delirium, and mobility problems were common, but sepsis was initially undiagnosed in almost one-third of these patients. 7 This raises questions about health care providers’ awareness of atypical presentations and decision-making for “social” presentations, which often require a nuanced understanding of both medical and social care needs.

Health care providers face challenges providing high-quality care to this patient population across Canada 1 , 10 and internationally. 1 , 4 , 10 – 13 “Social admissions” may account for as many as 1 in 10 patients (0.57%–9.3%) presenting to the emergency department and 1 in 25 admissions to hospital, with increasing prevalence with age. 14 A survey from Wales showed that 51.8% of hospital physicians consider that they frequently care for these patients, encountering them several times per week. 15

Since “social admission” is a nondiagnostic label, its definition varies across regions and health care systems, meaning no guidelines exist to standardize approaches to meet medical or social care needs. Qualitative data evaluating how health care providers perceive and care for these patients are lacking. Therefore, we aimed to explore the perceptions of health care providers regarding patients admitted as “social admissions.”

Study design

This qualitative study was informed by constructivist grounded theory, which uses inductive analysis of data collected from participants to generate new theories. 16 , 17 We conducted semistructured interviews with clinicians and health care administrators between October 2022 and July 2023. Given that little is known about “social admissions,” grounded theory was best suited to our objective to generate an explanatory theory about this phenomenon. 17

The research team included qualitative methods experts, geriatric medicine specialists, clinician scientists, primary care and emergency department clinicians, and members with administrative leadership roles. We also included nursing students, medical students, and internal medicine residents of diverse backgrounds.

We reported this study using the Consolidated Criteria for Reporting Qualitative Research Checklist (Appendix 1, available at www.cmaj.ca/lookup/doi/10.1503/cmaj.231430/tab-related-content ). 18

Setting and participants

Studying “social admissions” can be challenging because of the variability in terminology and admission policies across different jurisdictions. 19 The Orphan Patient Policy is a standardized “social admission” pathway used at the Queen Elizabeth II Health Sciences Centre, a tertiary care centre in Halifax, Nova Scotia. Halifax is the provincial capital and the largest city in the Atlantic region of Canada. In Nova Scotia, health care is provided through a publicly funded health care system.

Since March 2012, any patient, regardless of age or living situation, can be admitted to the Queen Elizabeth II Health Sciences Centre under the Orphan Patient Policy if they have undergone a medical assessment by a physician in the emergency department, are determined to have no acute or new medical conditions, and have been seen by a social worker or discharge planning nurse to exhaust all home care options. Inability to return home includes situations of homelessness, unavailable community supports, or waiting for transitions to long-term care. These patients are admitted to the first available inpatient bed, based on a rotating roster of all hospital admission services (e.g., medicine, psychiatry, surgery, subspecialty medicine or surgery, and hospitalist). The admitting service and its allied health care team become responsible for the patient’s care and disposition, with the expectation that discharge planning is the primary issue. Although these patients are locally called “orphan patients,” we use the terminology “social admission” throughout this paper.
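As a minimal sketch only (hypothetical function and parameter names, not the hospital's actual decision logic), the three admission criteria described above amount to a simple conjunction:

```python
def meets_orphan_patient_criteria(assessed_by_ed_physician: bool,
                                  has_acute_or_new_medical_condition: bool,
                                  home_care_options_exhausted: bool) -> bool:
    """Hypothetical paraphrase of the admission criteria described above."""
    return (assessed_by_ed_physician
            and not has_acute_or_new_medical_condition
            and home_care_options_exhausted)

# Example: medically assessed, no acute or new condition found, no viable home care.
print(meets_orphan_patient_criteria(True, False, True))  # True
```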

Eligible participants included any clinical provider or administrator who worked directly with “socially admitted” patients. To identify potential participants for our study, we held initial interviews with hospital nursing bed flow managers who are responsible for administering the Orphan Patient Policy.

To recruit participants, we used snowball sampling: we emailed each health care provider or department that had been recommended by the initial interviewees (i.e., the nursing bed flow managers), and those suggested by study participants during their interviews or by key knowledge users with whom we shared preliminary findings (see Data analysis). Preliminary analyses also informed recruitment, and we used purposive and theoretical sampling 20 , 21 to ensure that the perspectives of multiple health care professionals within the “social admission” care pathway were included, with the aim of data saturation. We approached several departments and individuals who declined to participate or did not respond to our requests for interviews. These included recreation therapy, physiotherapy, occupational therapy, some administrative positions, and several subspecialty medicine divisions.

Data collection

The interview guide (Appendix 2a, available at www.cmaj.ca/lookup/doi/10.1503/cmaj.231430/tab-related-content ) was based on our literature review of “social admissions” 14 and informed by our chart reviews of more than 350 “social admissions” in Nova Scotia (unpublished data, 2021). The entire research team gave input on the interview guide through several iterative processes: multiple meetings to develop the guide, a pilot test with non-author colleagues, and a meeting after all interviewers had conducted at least 1 interview to discuss whether the guide was robust enough to elicit the information we were seeking. We revised the interview guide wording for clarity and understanding, and we added 2 major questions (interview guide questions 7 and 8) and several prompting questions.

Experienced qualitative researchers (C.S. and E.G.M.) provided training. We held 2 group and 1 individual interactive training and practice sessions, which provided methodological context, and practical approaches and techniques in qualitative interviewing. One research team member (J.C.M., L.E., G.A., or M.K.) administered individual interviews. Interviews occurred virtually (via Microsoft Teams) or in person in quiet rooms on hospital wards or participants’ offices. After interviews were completed, we contacted participants by email to provide self-identified demographic data. The survey was voluntary and anonymous, and participants selected from predefined categories or supplied free text for sex, gender, ethnicity, role, and profession (Appendix 2b).

Interviews were audio-recorded and transcribed verbatim. For additional rigour and contextualization during analysis, interviewers kept detailed field notes of their reflections during the interviews.

Data analysis

Data collection and analysis occurred simultaneously. All participants were invited to review their transcripts before analysis (1 participant opted to). We used Dedoose software for data coding and organization.

Two team members independently coded interview transcripts using an inductive approach. 16 , 17 Throughout the initial coding process, the coders (J.C.M., C.S., G.A., and M.K.) met regularly to refine, merge and expand codes, come to consensus about any disagreements and interpretations, add context to certain transcripts with their field notes from the interviews, and identify additional participants suggested by the participants. Using constant comparative and selective coding processes, 16 , 17 we generated categories and subcategories to form themes to reflect participants’ perspectives on “social admissions.”
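As an illustration only (the excerpts and codes below are invented, not study data, and this is not the Dedoose interface), the movement from independently coded excerpts to categories supporting themes can be sketched as follows:

```python
from collections import defaultdict

# Invented (participant, excerpt, code) triples for illustration only.
coded_excerpts = [
    ("HC101", "they're always the last ones we round on", "last in the queue"),
    ("HC102", "we just can't arrange the supports they actually need", "feeling helpless"),
    ("HC103", "the label follows them from the paramedic handover", "label propagation"),
    ("HC104", "another service should really be looking after them", "care belongs elsewhere"),
]

# Analyst-defined grouping of codes into categories, refined through
# constant comparison as new excerpts are coded.
code_to_category = {
    "last in the queue": "provision of care",
    "feeling helpless": "moral distress",
    "label propagation": "stigma and missed opportunities",
    "care belongs elsewhere": "hierarchy of care",
}

# Collect excerpts under each category to see which are well supported and
# where further (theoretical) sampling might be needed.
by_category = defaultdict(list)
for participant, excerpt, code in coded_excerpts:
    by_category[code_to_category[code]].append((participant, excerpt))

for category, excerpts in sorted(by_category.items()):
    print(f"{category}: {len(excerpts)} excerpt(s)")
```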

We used several strategies to ensure rigour and trustworthiness throughout the research process. As per the grounded theory approach, we incorporated reflexivity into our analytic process and acknowledged our dual roles as researchers and health care providers delivering care. Most members of the research team were affiliated with the research site and possessed an in-depth understanding of the local context and providers involved in “social admission” care. This intimate understanding enabled us to add context to the findings. However, we also challenged our preconceptions and biases by recruiting participants with diverse experiences and perspectives, and scheduling regular meetings among research team members to triangulate findings with our internal chart review, knowledge user feedback, and data analysis. 22

We put participant narratives at the forefront by presenting the data (from preliminary interviews and after completion of interviews) to engaged key knowledge users within our hospital and university network (e.g., experienced researchers, clinicians, social workers, and administrators) in a variety of settings (e.g., individual communications, small group sessions, or internal department presentations). The knowledge users provided feedback and suggested further participants. The data were also triangulated with findings from our recent literature review. 14

After data saturation was achieved, we mapped our findings on the Quintuple Aim conceptual framework at the suggestion of a knowledge user and as per consensus with the research group. 23 , 24 This framework, a well-known approach to optimizing health system performance, adequately organized and contextualized our findings. It defines 5 fundamental domains (definitions in Appendix 1) for transforming health care: enhance patient experience, better population health, optimize cost of care, improve care team well-being, and advance health equity. 23 , 24

Ethics approval

Nova Scotia Health granted institutional research ethics approval (REB no. 1027628).

We conducted 20 interviews (9 in person and 11 virtual) among hospital administrators and clinicians ( Table 1 ). Clinicians were nurses (charge, discharge planning, and inpatient), physicians (residents and staff physicians), and social workers, representing the following services: emergency department, internal medicine, medical subspecialties (cardiology, neurology, and geriatric medicine), psychiatry, hospitalist, and surgical specialties (orthopedics, general surgery, cardiovascular surgery, and vascular surgery). Administrators included nursing bed managers and directors of hospital divisions and long-term care. The mean interview length was 38 (range 16–76) minutes.


Table 1: Demographic information of hospital administrators and clinicians who were interviewed

We categorized 9 themes into each of the 5 domains of the Quintuple Aim framework as shown in Figure 1 : patient experience (patient description, provision of care); care team well-being (moral distress, hierarchy of care); health equity (stigma and missed opportunities, prejudices); cost of care (wait-lists and scarcity of alternatives); and population health (factors leading to vulnerability, system changes for addressing “social admissions”). Additional illustrative quotations are presented in Appendix 3, available at www.cmaj.ca/lookup/doi/10.1503/cmaj.231430/tab-related-content .
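For readers who want the theme-to-domain mapping in one place, the following restates it as a simple lookup table (paraphrased from the text above; Figure 1 remains the authoritative version):

```python
# Theme-to-domain mapping as described in the Results; illustrative restatement only.
quintuple_aim_themes = {
    "patient experience": ["patient description", "provision of care"],
    "care team well-being": ["moral distress", "hierarchy of care"],
    "health equity": ["stigma and missed opportunities", "prejudices"],
    "cost of care": ["wait-lists and scarcity of alternatives"],
    "population health": [
        "factors leading to vulnerability",
        "system changes for addressing 'social admissions'",
    ],
}

# Sanity check: 9 themes across the 5 domains, as reported in the Results.
assert sum(len(themes) for themes in quintuple_aim_themes.values()) == 9
```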


Figure 1: Domains (in the circle) and themes (outside the circle) using the Quintuple Aim framework. 23 , 24

Patient experience

Participants’ description of patients.

Participants provided diverse descriptions of these patients ( Table 2 ). One cited financial precarity as a key problem faced by these patients. Another highlighted recurrent health care system interactions as being important. Some mentioned these patients had a mix of medical, mental health, and social problems. Most equated “social admissions” with older patients or those who were cognitively impaired. Some deemed them the most frail, vulnerable, or complex cases. Few considered that “socially admitted” patients had no medical conditions involved (Appendix 3) or that the medical conditions could wholly be managed at a primary care level.

Table 2: Descriptions and illustrative quotations of the patient description and provision of care themes in the patient experience domain

Provision of care

Participants described “socially admitted” patients as receiving passive and hands-off care, contrasting this with active approaches for medical and surgical cases. Participants reported that patients, especially those who were older or confused, often received limited attention and workup, leaving their needs unaddressed ( Table 2 ). The approach to care was characterized by patients being left in their beds, being the last person rounded on by the care team, and not being chosen to participate in rehabilitative programs or exercises. In short, these patients’ care needs were the last in the queue of nursing and physician priorities. Beyond direct provision of care, participants identified that hospital programs (e.g., recreation therapy) benefitting these patients had been discontinued or under-resourced (Appendix 3). Almost all clinical participants considered their ward was not the place to care for these patients.

Care team well-being

Moral distress.

Health care providers described their roles as acute care or sub-specialized experts but said they felt helpless when they were unable to provide care for “socially admitted” patients, who often had complex, unrecognized, or chronic health issues. They often stated that better care should be offered yet described challenges when caring for “socially admitted” patients. These included a lack of appropriate training, struggles to arrange suitable care, and resistance when attempting to involve other services, allied health care, or social work, leading to delays in appropriate management ( Table 3 ). As articulated by 1 participant (HC605): “I think that’s a lot to ask of different providers who may not have that skill set. So, sometimes I think it does cause, you know, moral distress and challenge for people sometimes, which then gets perhaps articulated as being ‘they shouldn’t be here.’” Many reported feeling negative toward the policy and labelling of these patients, and acknowledged it was used primarily to communicate with other health care providers. One participant suggested the policy prevented blame on clinicians for “admitting this [patient]” (HC840).

Table 3: Descriptions and illustrative quotations of the moral distress and hierarchy of care themes in the care team well-being domain

Hierarchy of care

Participants highlighted a hierarchy in health care, prioritizing acute care patients over “social admissions.” One participant reflected on how hospitals rely on pathways with these patients not fitting into a clear “slot,” representing individuals not well differentiated, individuals with complexity, or individuals with issues that are not specialty specific. Consequently, “social admissions” were passed down the hierarchy, from physicians to residents, and sometimes to nursing assistants, implying they were less worthy of routine medical attention ( Table 3 ).

Health equity

Stigma and missed opportunities.

The term “social admission” led to incorrect assumptions about medical needs and cognitive abilities. Beliefs about behaviours were noted by several participants. These assumptions were propagated as early as handovers from paramedics to emergency nursing teams ( Table 4 ). Participants highlighted instances where these patients were not medically stable and emphasized that social stressors did not exempt patients from becoming medically ill during the admission. The label was reported to be an impediment to opportunities to look for underlying treatable medical issues, compounded by the need to make timely decisions because of pressures to free up beds.

Table 4: Descriptions and illustrative quotations of the stigma and missed opportunities, and prejudices themes in the health equity domain

Ageist beliefs underpinned assumptions about capacity, especially for older “socially admitted” patients. Some participants recognized that these patients could not effectively advocate for themselves, and others pointed out that older patients were often assumed to be cognitively or functionally impaired, and decisions were made without them. Participants provided examples of premature capacity determinations made without proper medical evaluation or consultation ( Table 4 ). One participant described the invisibility of these patients, especially for women and minorities, and another noted how the care of “socially admitted” patients is undermined by negative attitudes similar to those encountered by individuals with substance use disorders (Appendix 3).

Cost of care

Wait-lists and scarcity of alternatives.

Inadequate community support often resulted in emergency department visits and hospital admissions, with the perception that hospitals are the safest place. Participants noted lengthy wait-lists for community services like home care, physiotherapy, or occupational therapy, which led to deconditioning ( Table 5 ). The transition to long-term care was described as “abysmal,” leaving patients in challenging situations for extended periods. Admissions were a “last resort” after all other options were exhausted, with patients and families struggling to access necessary care. The lack of alternatives contributed to participants’ distress when caring for “socially admitted” patients (Appendix 3).

Table 5: Description and illustrative quotations of the wait-list and scarcity of alternatives theme in the cost of care domain

Population health

Factors leading to vulnerability.

Participants identified many issues that were associated with the “social admission” label, particularly for patients with cognitive impairment ( Table 6 ). These included physical barriers (e.g., inaccessible homes), homelessness, and financial challenges. Social isolation left individuals unsupported, managing alone until emergencies, such as falls, catalyzed hospital admission. The inability to advocate for oneself was also a common observation.

Table 6: Descriptions and illustrative quotations of factors leading to vulnerability and system changes for addressing “social admission” themes in the population health domain

System changes for addressing “social admissions”

Participants identified systemic barriers that they considered disadvantaged “socially admitted” patients. Participants were concerned that the health care system is currently in crisis (e.g., with a lack of primary care and home support), and emergency departments cannot function as intended, causing the acute care system to become the community system or “the [inter]mediate pathway between community and long-term care” ( Table 6 ). Some called for specialized seniors’ care teams to address the unique needs of older adults. Participants emphasized the importance of understanding these patients’ situations holistically, with a multidisciplinary approach to assess medical history, social factors, and available resources; several examples of ideal approaches were shared. The system’s focus on individuals with higher functioning left “socially admitted” patients underserved, with emphases on services that are “organized from a provider lens, not from a patient-need lens” (HC605).

Interpretation

We sought to understand how health care providers perceive patients labelled as “socially admitted” in hospital, and we identified 9 key themes across the Quintuple Aim framework. 23 , 24 The themes in the patient experience domain highlighted inconsistent definitions and passive care approaches for these patients, who are often seen as low priority in hospital. Under the care team well-being domain, themes of moral distress and hierarchy of care showed the challenges and dilemmas faced by health care providers. Issues of stigma (e.g., “they have dementia”), prejudices (e.g., ageism), wait-lists, and scarcity of alternatives underscored systemic challenges under the health equity and cost of care domains. Finally, factors leading to vulnerability and potential system changes were described by participants as ways to better the health of this population.

Our findings highlight the potential adverse effects on care when patients are labelled as “socially admitted” (or as “orphan patients” in the study hospital), such as incorrect assumptions about medical needs and cognitive abilities, which impede opportunities to look for treatable medical issues. Despite a “social admission” pathway ostensibly designed to ensure there are no acute or new medical issues, patients were still perceived as having “multiple comorbidities” or being “the most frail … the most complex” ( Table 2 ). This finding is in keeping with the results of a case–control study (in London, Ontario), in which medical comorbidity played a minimal role in the label of a “failure to cope” admission among adults aged 70 years or older. Instead, recent failed discharge from hospital was significantly associated with a “social admission” label, leading the authors to suggest blame was an important part of the use of this label in a system that prizes efficiency. 3 This supports the viewpoint that it is more a system’s failure to cope than the patient’s. 10

Our findings also demonstrate possible negative impacts on health care providers not addressed in previous research. Although similar patient populations (“failure to thrive” or “failure to cope”) in British Columbia 25 and Ontario, 3 and “acopia” admissions in the United Kingdom and Australia, 7 , 9 have been researched, these studies did not consider the insights of providers directly caring for these patients. We highlight some structures (e.g., propagation of the label early in care) or cultures (e.g., ageism) in our health care systems, leading to system and individual tensions in caring for “socially admitted” patients, especially in the context of few readily available alternatives. We observed that participants frequently reported feeling conflicted defining, prioritizing, and managing this patient population, yet unequivocally considered these patients deserved care — albeit care delivered by someone else. This latter finding contrasts with a survey of physicians in Wales in which two-thirds (62.7%) considered that patients labelled as “social admissions/acopia” were a burden on national health resources, with 44.8% admitting to feeling that these patients were a burden on their time. 15

Despite considering that “socially admitted” patients were deserving of care, our participants recounted how care was passed down to less-senior members of the health care team. This pattern of downgrading care can lead to situations in which “socially admitted” patients are looked after by team members who possess minimal experience recognizing evolving medical presentations or lack the authority to advocate strongly for clinical reassessments when needed. The implication that the care of “social admissions” should be delegated to others reflects an implicit attitude of hierarchy and detachment from the needs associated with this patient population. Being unable to provide the care that is warranted, while at the same time believing that the needed care is beneath the care they provide, is in keeping with the cognitive dissonance literature in medicine (i.e., holding 2 or more inconsistent beliefs or behaving in a way that is inconsistent with core beliefs). 26 Cognitive dissonance can trigger negative emotions and subsequent defensive reactions resulting in fault finding in others (e.g., blaming “social admissions”), reinforced commitment to wrong actions (e.g., propagating labels), and overlooked medical errors, 26 , 27 offering some explanations for understanding how stigma and hierarchies of care can lead to missed acute medical illnesses (e.g., sepsis, malignancy, and strokes) in previous “social admission” populations. 5 , 7 , 9

Existing literature indicates that “social admission” labelling may harm patients. 14 Our findings suggest that the use of this label appears to have little benefit for the health care providers who care for this patient population. Moreover, no evidence exists to date that “social admissions” labelling or pathways help the health care system. Therefore, re-evaluating an approach to caring for “socially admitted” patients is imperative, and this may include abandoning the nondiagnostic label.

Better support for this patient population may be achieved through enhanced policies that propose feasible solutions to support these patients. To achieve this, further steps are required to define “social admissions,” and to highlight the importance and scope of the issues surrounding the patient population captured under this label. 28 However, we found inconsistencies in how “social admissions” are described, which adds to the challenge in developing effective policies for these patients, and in comparing similar presentations across Canada. 29 Developing a consistent definition for “social admissions” may also prompt clinical specialties to claim responsibility for this population, as champions are key to raising issues for prioritization in health care. 30

“Social admissions” can be considered a “wicked problem” with no single easy solution. 31 A previously proposed ecological approach can guide clinicians in managing “social” presentations. 2 , 32 Participants in our study made suggestions about community- and institutional-level solutions such as home care and primary care teams that support social integration, more multidisciplinary care teams in and out of the hospital, and “geriatrizing” acute care. These suggestions reflect many of the same calls for action made by previous scholars and advocates, 33 , 34 and are similar to solutions proposed by the National Institute on Ageing’s “Ageing in the Right Place” report. 35 Scholars in France have proposed a societal-level solution involving the procedural and financial restructuring of ultraspecialized medicine, coupled with a revival of historic values combining medicine and social work to address the needs of an increasingly frail and socially complex population. 36

Limitations

Our study was conducted in a single tertiary health centre in Nova Scotia, where “socially admitted” patients are admitted under an institution-specific Orphan Patient Policy, which likely limits the generalizability of our findings. Our participants were mainly White and female, which also limits the generalizability to other settings across the country and internationally. Furthermore, the participant sample did not include recreational therapists, volunteers, physiotherapists, or occupational therapists. In the study centre, recreation and volunteer programs had been discontinued or reduced following the COVID-19 pandemic, and there were no occupational therapists or physiotherapists specifically assigned to this patient population. Another limitation of our study is that some interviewers had prior acquaintance with the participants they interviewed. This familiarity may have introduced bias into the data collection and interpretation, although this should be balanced with constructivist grounded theory’s emphasis on researchers as co-participants in the research process.

Our research draws attention to health care providers’ challenges in managing care for “socially admitted” patients, and to perceptions regarding “social” presentations, perceived system barriers and resource shortages, and some potential solutions for better patient care. Overall, no consensus emerged as to what constitutes a “social admission” (who are the patients labelled as “socially admitted”?) or ownership for “social admissions” (who cares for these patients?), and participants reported inconsistencies in care delivered for such patients (how to care for “socially admitted” patients). To improve the patient experience and alleviate the moral distress of staff who care for “socially admitted” patients in hospital, the inherent structures of our health care system, such as hierarchies and stigmatization, should be reformed to better address the needs of patients with increasingly complex social problems who present to hospitals.

Competing interests: Jasmine Mah receives scholarships supporting her PhD research from the Department of Medicine at Dalhousie University, Dalhousie Medical Research Foundation, Dr. Patrick Madore Traineeship, and the Pierre Elliott Trudeau Foundation. Kenneth Rockwood has asserted copyright of the Clinical Frailty Scale through Dalhousie University’s Industry, Liaison, and Innovation Office. In addition to academic and hospital appointments, Kenneth Rockwood is cofounder of Ardea Outcomes, which (as DGI Clinical) in the last 3 years has contracts with pharmaceutical and device manufacturers (Danone, Hollister, INmune, Novartis, Takeda) on individualized outcome measurement. In 2020, he attended an advisory board meeting with Nutricia on dementia and chaired a Scientific Workshop & Technical Review Panel on frailty for the Singapore National Research Foundation. He is associate director of the Canadian Consortium on Neurodegeneration in Aging, itself funded by the Canadian Institutes for Health Research, the Alzheimer Society of Canada, and several other charities. He holds the Kathryn Allen Weldon Chair in Alzheimer Research, funded by the Dalhousie Medical Research Foundation. Kenneth Rockwood also reports personal fees from Ardea Outcomes, the Chinese Medical Association, Wake Forest University Medical School Centre, the University of Nebraska Omaha, the Australia and New Zealand Society for Geriatric Medicine, Atria Institute, Fraser Health Authority, McMaster University, and EpiPharma. In addition, Dr. Rockwood has licensed the Clinical Frailty Scale to Enanta Pharmaceuticals, Synairgen Research, Faraday Pharmaceuticals, KCR S.A., Icosavax, BioAge Labs, Biotest AG, Qu Biologics, AstraZeneca UK, Cellcolabs AB, Pfizer, W.L. Gore & Associates, pending to Cook Research Incorporated, Renibus Therapeutics, and, as part of Ardea Outcomes, has a pending patent for Electronic Goal Attainment Scaling. He also reports permission for the Pictorial Fit-Frail Scale licensed to Congenica. Use of both the Clinical Frailty Scale and Pictorial Fit-Frail Scale is free for education, research, and nonprofit health care with completion of a permission agreement stipulating users will not change, charge for, or commercialize the scales. For-profit entities pay a licensing fee, 15% of which is retained by the Dalhousie University Office of Commercialization and Industry Engagement. The remainder of the licence fees is donated to the Dalhousie Medical Research Foundation. Melissa Andrew reports grants from Sanofi, grants and support to attend meetings from GSK, grants from Pfizer, grants from Canadian Frailty Network, personal fees from Sanofi, personal fees from Pfizer, personal fees from Seqirus, grants from Merck, grants from Public Health Agency of Canada, and grants from Canadian Institutes of Health Research, outside the submitted work. Dr. Andrew is a volunteer board member for the Alzheimer Society of Nova Scotia and the National Advisory Committee on Immunization. Sheliza Khan declares leadership in the patient flow department at Queen Elizabeth II Hospital. No other competing interests were declared.

This article has been peer reviewed.

Contributors: Jasmine Mah and Christie Stilwell contributed equally as co–first authors. Jasmine Mah contributed to the conceptualization and design, procurement of data, analysis of data, drafting of the original manuscript, and review of the manuscript. Christie Stilwell and Emily Marshall contributed to the conceptualization and design, analysis of data, drafting of the original manuscript, and review of the manuscript. Madeline Kubiseski and Gaurav Arora contributed to the conceptualization and design, procurement of data, analysis of data, and review of the manuscript. Karen Nicholls, Sheliza Khan, Jonathan Veinot, Lucy Eum, Susan Freter, Katalin Koller, Maia von Maltzahn, Kenneth Rockwood, Samuel Searle, and Melissa Andrew contributed to the conceptualization and design, analysis of data, and drafting of the original manuscript or review of manuscript drafts. All authors approved the final version to be published and agreed to be accountable for its accuracy and integrity.

Data sharing: Anonymized data from our study may be available on request. Interested parties are encouraged to contact the lead author via email to access these data or to obtain a copy of the Orphan Patient Policy. The data will be shared under terms that ensure the protection of participant privacy and compliance with relevant data protection regulations.

Funding: This study is supported by Nova Scotia Health, through a grant from the Nova Scotia Health Research Fund. Nova Scotia Health is the provincial health authority.

  • Accepted March 5, 2024.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY-NC-ND 4.0) licence, which permits use, distribution and reproduction in any medium, provided that the original publication is properly cited, the use is noncommercial (i.e., research or educational use), and no modifications or adaptations are made. See: https://creativecommons.org/licenses/by-nc-nd/4.0/




    Study design. This qualitative study was informed by constructivist grounded theory, which uses inductive analysis of data collected from participants to generate new theories.16, 17 We conducted semistructured interviews with clinicians and health care administrators between October 2022 and July 2023. Given that little is known about "social admissions," grounded theory was best suited ...