
Sample: Definition, Types, Formula & Examples


How often do researchers look for the right survey respondents, either for a market research study or an existing survey in the field? The sample or the respondents of this research may be selected from a set of customers or users that are known or unknown.

You may often know your typical respondent profile but don’t have access to the respondents to complete your research study. At such times, researchers and research teams reach out to specialized organizations to access their panel of respondents or buy respondents from them to complete research studies and surveys.

These could be general population respondents that match demographic criteria or respondents based on specific criteria. Such respondents are imperative to the success of research studies.

This article discusses in detail the different types of samples, sampling methods, and examples of each. It also mentions the steps to calculate the size, the details of an online sample, and the advantages of using them.

Content Index

  • What is a sample?
  • Probability sampling methodologies with examples
  • Non-probability sampling methodologies with examples
  • How to determine a sample size
  • Calculating sample size
  • Sampling advantages

What is a Sample?

A sample is a smaller set of data that a researcher chooses or selects from a larger population using a pre-defined selection method. These elements are known as sample points, sampling units, or observations.

Creating a sample is an efficient method of conducting research. Researching the whole population is often impossible, costly, and time-consuming. Hence, examining the sample provides insights the researcher can apply to the entire population.

For example, suppose a cell phone manufacturer wants to conduct a feature research study among students at US universities. An in-depth study must be conducted if the researcher wants to learn which features the students use, which features they would like to see, and the price they are willing to pay.

This step is imperative to understand the features that need development, the features that require an upgrade, the device’s pricing, and the go-to-market strategy.

In 2016/17 alone, there were 24.7 million students enrolled in universities across the US. It is impossible to research all these students; the time spent would make the new device redundant, and the money spent on development would render the study useless.

Creating a sample of universities by geographical location and further creating a sample of these students from these universities provides a large enough number of students for research.

Typically, the population for market research is enormous. Making an enumeration of the whole population is practically impossible. The sample usually represents a manageable size of this population. Researchers then collect data from these samples through surveys, polls, and questionnaires and extrapolate this data analysis to the broader community.


Types of Samples: Selection methodologies with examples

The process of deriving a sample is called a sampling method. Sampling forms an integral part of the research design, as this method derives the quantitative and qualitative data that can be collected as part of a research study. Sampling methods fall into two distinct approaches: probability sampling and non-probability sampling.

Probability sampling is a method of deriving a sample in which the objects are selected from a population based on probability theory. This method includes everyone in the population, and everyone has an equal chance of being selected, so selection bias is minimized.

Each person in the population can subsequently be a part of the research. The selection criteria are decided at the outset of the market research study and form an important component of research.



Probability sampling can be further classified into four distinct types of samples. They are:

  • Simple random sampling: The most straightforward way of selecting a sample is simple random sampling. In this method, each member of the population has an equal chance of being selected, and the objects in the sample are chosen entirely at random. For example, if a university dean would like to collect feedback from students about their perception of the teachers and the level of education, all 1,000 students at the university form the population, and any 100 students can be selected at random to form the sample.
  • Cluster sampling: Cluster sampling is a sampling method in which the respondent population is divided into clusters. Clusters are identified and included in a sample based on defining demographic parameters such as age, location, or sex. This makes it easy for a survey creator to derive practical inferences from the feedback. For example, if the FDA wants to collect data about adverse drug side effects, it can divide the mainland US into distinct clusters, such as states. Research studies are then administered to respondents in these clusters. This way of generating a sample makes the data collection in-depth and provides insights that are easy to consume and act upon.
  • Systematic sampling: Systematic sampling is a sampling method in which the researcher chooses respondents at equal intervals from a population. The approach is to pick a starting point and then select respondents at a pre-defined sampling interval. For example, while selecting 1,000 volunteers for the Olympics from an application list of 10,000 people, each applicant is given a number from 1 to 10,000. Then, starting from 1 and selecting every 10th applicant, a sample of 1,000 volunteers can be obtained.
  • Stratified random sampling: Stratified random sampling is a method of dividing the respondent population into distinct, pre-defined groups (strata) during the research design phase. The groups do not overlap but collectively represent the whole population. For example, a researcher looking to analyze people from different socioeconomic backgrounds can distinguish respondents by their annual salaries. This forms smaller groups of people, and members of each group can then be selected for the research study. (A short code sketch after this list shows how these selection procedures can be simulated.)
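To make these selection procedures concrete, here is a minimal Python sketch using only the standard library. The population of 1,000 member IDs, the sample size of 100, and the income-band strata are hypothetical values chosen purely for illustration, not part of the original example.

```python
import random

# Hypothetical population of 1,000 member IDs (illustrative only).
population = list(range(1, 1001))

# Simple random sampling: every member has an equal chance of selection.
simple_random = random.sample(population, 100)

# Systematic sampling: pick a random starting point, then take every k-th member.
k = len(population) // 100            # sampling interval (here, 10)
start = random.randrange(k)           # random start within the first interval
systematic = population[start::k]     # yields 100 members

# Stratified random sampling: sample separately from pre-defined, non-overlapping
# groups. The strata below are hypothetical income bands mapped to member IDs.
strata = {
    "low": list(range(1, 401)),
    "middle": list(range(401, 801)),
    "high": list(range(801, 1001)),
}
stratified = []
for members in strata.values():
    # Allocate the 100-person sample in proportion to each stratum's share.
    n_stratum = round(100 * len(members) / len(population))
    stratified.extend(random.sample(members, n_stratum))

print(len(simple_random), len(systematic), len(stratified))   # 100 100 100
```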


The non-probability sampling method uses the researcher’s discretion to select a sample. This type of sample is derived mostly from the researcher’s or statistician’s ability to reach the respondents.

This type of sampling is used for preliminary research, where the primary objective is to derive a hypothesis about the topic being researched. Here, each member does not have an equal chance of being part of the sample population, and the selection parameters are known only after the sample has been drawn.


We can classify non-probability sampling into four distinct types of samples. They are:

  • Convenience sampling: Convenience sampling, in simple terms, refers to selecting respondents based on how conveniently the researcher can reach them. There is no scientific method for deriving this sample. Researchers have almost no control over selecting the sample elements; selection is done purely on the basis of proximity, not representativeness.

This non-probability sampling method is used when there are time and cost limitations on collecting feedback. For example, researchers conducting a mall-intercept survey to understand the likelihood of shoppers using a fragrance from a perfume manufacturer choose sample respondents based on their proximity to the survey desk and their willingness to participate in the research.

  • Judgemental/purposive sampling: The judgemental or purposive sampling method develops a sample purely at the discretion of the researcher, based on the nature of the study and his or her understanding of the target audience. This sampling method selects only people who fit the research criteria and end objectives; everyone else is kept out.

For example, if the research topic is understanding which university a student prefers for a Masters degree, only respondents who answer “Yes” to the question “Would you like to do your Masters?” are included; everyone else is excluded from the study.

  • Snowball sampling: Snowball sampling or chain-referral sampling is defined as a non-probability sampling technique in which the samples have rare traits. This is a sampling technique in which existing subjects provide referrals to recruit samples required for a research study.

For example, while collecting feedback about a sensitive topic like AIDS, respondents aren’t forthcoming with information. In this case, the researcher can recruit people with an understanding or knowledge of such people and collect information from them or ask them to collect information.

  • Quota sampling: Quota sampling is a method of collecting a sample in which the researcher has the liberty to select respondents based on their strata. Its primary characteristic is that each respondent can belong to only one quota group. For example, a shoe manufacturer that would like to understand millennials’ perception of its brand, along with parameters like comfort and pricing, selects only female millennials for the study, because the research objective is to collect feedback about women’s shoes. (A short quota-filling sketch follows this list.)
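As a rough illustration of how quota filling differs from random selection, the sketch below uses entirely hypothetical respondent records and a single quota; respondents are accepted on a first-come basis until the quota is met.

```python
# Hypothetical respondent records; in practice this would be an intake stream.
respondents = [
    {"id": 1, "gender": "female", "generation": "millennial"},
    {"id": 2, "gender": "male",   "generation": "millennial"},
    {"id": 3, "gender": "female", "generation": "gen_x"},
    {"id": 4, "gender": "female", "generation": "millennial"},
]

# Quota: accept only millennial women, up to two of them (illustrative figure).
quotas = {("female", "millennial"): 2}
counts = {key: 0 for key in quotas}
sample = []

for person in respondents:
    key = (person["gender"], person["generation"])
    # Accept a respondent only if they match an open quota; selection is
    # driven by the quota definition, not by random chance.
    if key in quotas and counts[key] < quotas[key]:
        sample.append(person)
        counts[key] += 1

print([p["id"] for p in sample])   # -> [1, 4]
```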

How to determine a Sample Size

As we have learned above, the right sample size determination is essential for the success of data collection in a market research study. But is there a correct number for the sample size? What parameters decide the sample size? What are the distribution methods of the survey?

To understand all of this and make an informed calculation of the right sample size, it is first essential to understand four important variables that form the basic characteristics of a sample. They are:

  • Population size: The population size is all the people who could be considered for the research study. This number is usually enormous; for example, the population of the United States is about 327 million, and it is impossible to consider all of them for a market research study.
  • The margin of error (confidence interval): The margin of error is a percentage that indicates how closely the sample’s results are expected to reflect the actual views of the whole population. This percentage guides the statistical analysis in selecting a sample and determines how much sampling error is acceptable.


  • Confidence level: This metric measures the degree of certainty that the actual population mean falls within the margin of error. The most common confidence levels are 90%, 95%, and 99%.
  • Standard deviation: This metric captures the expected variance in survey responses. A safe value to use is 0.5, which assumes maximum variability and therefore yields the largest required sample size.

Calculating Sample Size

To calculate the sample size, you need the following parameters.

  • Z-score: the value corresponding to your chosen confidence level (for example, 1.645 for 90%, 1.96 for 95%, and 2.576 for 99%)
  • Standard deviation
  • Margin of error
  • Confidence level

To calculate the sample size, use this formula:


Sample Size = (Z-score)² × StdDev × (1 − StdDev) / (Margin of error)²

Consider a confidence level of 90% (Z-score = 1.645), a standard deviation of 0.6, and a margin of error of ±4%:

((1.645)² × 0.6 × (1 − 0.6)) / (0.04)²

= (2.706 × 0.24) / 0.0016

= 0.6494 / 0.0016

≈ 406

About 406 respondents are needed, and that becomes your sample size.
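If you prefer to script the calculation, a minimal Python sketch of the same formula might look like this; the function name and the choice to round up to a whole respondent are assumptions made for illustration.

```python
import math

def required_sample_size(z_score: float, std_dev: float, margin_of_error: float) -> int:
    """n = z^2 * p * (1 - p) / e^2, rounded up to the next whole respondent."""
    n = (z_score ** 2) * std_dev * (1 - std_dev) / (margin_of_error ** 2)
    return math.ceil(n)

# 90% confidence (Z = 1.645), standard deviation 0.6, margin of error +/- 4%.
print(required_sample_size(1.645, 0.6, 0.04))   # -> 406
```

Raising the confidence level to 95% (Z = 1.96) with the same inputs would increase the requirement to about 577 respondents.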

Try our sample size calculator: enter the population size, margin of error, and confidence level, and it returns the required sample size.


Sampling Advantages

As shown above, there are many advantages to sampling. Some of the most significant advantages are:


  • Reduced cost & time: Since using a sample reduces the number of people that have to be reached out to, it reduces cost and time. Imagine the time saved between researching with a population of millions vs. conducting a research study using a sample.
  • Reduced resource deployment: It is obvious that if the number of people involved in a research study is much lower due to the sample, the resources required are also much less. The workforce needed to research the sample is much less than the workforce needed to study the whole population .
  • Accuracy of data: Since the sample represents the population, the data collected is accurate. Also, since respondents are willing to participate, the survey dropout rate is much lower, which increases the validity and accuracy of the data.
  • Intensive & exhaustive data: Since there are fewer respondents, the data collected from a sample is intensive and thorough. More time and effort can be given to each respondent rather than spread across a very large number of people.
  • Apply properties to a larger population: Since the sample is indicative of the broader population, the data collected and analyzed from the sample can reasonably be applied to the larger population.

To collect accurate data for research, filter out bad panelists and eliminate sampling bias by applying appropriate control measures. If you need any help arranging a sample audience for your next market research project, contact us at [email protected] . We have more than 22 million panelists across the world!

In conclusion, a sample is a subset of a population that is used to represent the characteristics of the entire population. Sampling is essential in research and data analysis to make inferences about a population based on a smaller group of individuals. There are different types of sampling, such as probability sampling, non-probability sampling, and others, each with its own advantages and disadvantages.

Choosing the right sampling method, based on the research question, budget, and resources, is important. Furthermore, the sample size plays a crucial role in the accuracy and generalizability of the findings.

This article has provided a comprehensive overview of the definition, types, formula, and examples of sampling. By understanding the different types of sampling and the formulas used to calculate sample size, researchers and analysts can make more informed decisions when conducting research and data analysis.

Sampling is an important tool that enables researchers to make inferences about a population based on a smaller group of individuals. With the right sampling method and sample size, researchers can ensure that their findings are accurate and generalizable to the population.

Utilize one of QuestionPro’s many survey questionnaire samples to help you complete your survey.

When creating online surveys for your customers, employees, or students, one of the biggest mistakes you can make is asking the wrong questions. Different businesses and organizations have different needs required for their surveys.

If you ask irrelevant questions to participants, they’re more likely to drop out before completing the survey. A questionnaire sample template will help set you up for a successful survey.


What is a sample in research: Definition, examples & tips

Researchers often study large populations, but it is highly unusual for them to be able to get information from every member of the group they are studying. If you are researching a large population, you can pick a sample.

The group of people who actually participate in the study is the sample. Using samples, researchers can perform their experiments more quickly and with more manageable data. This article will explain the definition of a sample in research, what a sample is in statistics with examples, how researchers choose a sample, and how to determine the correct sample size for your research.

  • What is a sample?

A sample is a condensed, controllable representation of a larger group. It is a subgroup of people with traits from a wider population. When the population size is too large for the test to include all potential participants or observations, samples are utilized in statistical testing.


To put it simply, a sample is a more manageable and compact version of a bigger group. A sample possesses the traits of the bigger group. A sample is utilized in statistical analysis when the population size is too big to include all individuals or observations in the test.

A sample is an analytical subset of a larger population in statistics. The sample should be representative of the population as a whole and should not show bias toward any particular characteristic. The researcher gains knowledge from the sample that can be applied to the entire population.

  • How do researchers choose a sample?

Sampling is an essential component of the research design, as it gathers information that can be used in a research study. Probability sampling and non-probability sampling are the two essential methodologies that define sampling techniques.


Probability sampling

Probability sampling is a sampling technique that entails randomly picking a sample, or a section of the population. It is also known as random sampling. When procedures are established to guarantee that each unit within a population has an equal probability of being picked, this is known as random selection. Here are 4 types of probability sampling designs that are frequently used.

1 - Simple random sampling

Simple random sampling takes a random selection from the whole population, with an equal probability of selection for each unit. It is the most typical method of choosing a random sample.

Consider creating a list of every person in the population and giving each a number. Using a random number table or a random number generator, you then choose units at random from this population.

2 - Stratified sampling

Stratified sampling randomly chooses a sample from one or more strata or population subgroups . Each group is distinguished from the others based on a shared trait, such as age, gender, color, and religion. 

By doing this, you can ensure that your sample sufficiently represents each subgroup of a particular community. For example, if you divide a student population by university major, students in the Architecture, Linguistics, and Teaching departments form three different strata within that population.

3 - Cluster sampling

The cluster sampling method divides the population into clusters , which are smaller groupings. Then, you choose a sample of people at random from these clusters. Large or geographically distributed populations are frequently studied using cluster sampling. 

For example, you may divide all cities into neighborhoods or clusters and then choose the areas with the most significant population while filtering by mobile device users to see how well your goods perform across a city.

4 - Systematic sampling

When using systematic sampling, units are chosen at regular intervals beginning at a random point, drawing a random sample from the target population. Every member of the population is assigned a number, but rather than being selected at random, people are picked out at predetermined intervals.

For example, if 1,000 vaccine volunteers are selected from a list of 5,000 applicants, each applicant is given a number from 1 to 5,000. A sample of 1,000 volunteers can then be obtained by picking a random starting point and selecting every 5th applicant on the list.

Nonprobability sampling

Nonprobability sampling approaches are used in quantitative and qualitative research when the number of units in the population is unknown or individuals are difficult to identify. They are also employed when you wish to limit the results’ applicability to a particular group or organization rather than the broader populace.

Despite the advantages of non-probability sampling, its most significant disadvantage is the possibility of sampling bias, as the selection process may unfairly favor some population members over others. Here are some types of nonprobability sampling:

1. Convenience sampling

Convenience sampling comprises the people who are easiest for the researcher to reach. Researchers select these samples simply because they are easy to assemble, not because they are representative of the total population.

For example, researchers might conduct a mall-intercept survey to understand how likely customers are to use a manufacturer's products. In this sampling method, sample participants are selected based on their proximity to the survey table and their willingness to participate in the research.

2. Snowball sampling

Snowball sampling is used to recruit participants through other participants if the population is difficult to reach. As you interact with additional individuals, your network of contacts "snowballs" in size.

For example, suppose you are looking into the experiences of local homeless people. Since there is no list of every homeless person in the city, probability sampling is not an option. One of the people you meet agrees to participate in the research and then refers you to other local homeless people they know.

3. Purposive sampling

Purposive sampling is frequently employed in qualitative research when the researcher wants to learn in-depth information about a particular phenomenon rather than draw general conclusions from statistics, or when the population is relatively small and focused.

For instance, a researcher wants to learn more about how people with persistent headaches live. In such instances, they can choose a sample of people diagnosed with persistent headaches using purposive sampling. 

  • How to determine the right sample size

The sample size is crucial for reliable, statistically meaningful results and a smooth research operation. You should learn the fundamentals of the statistics involved to select the appropriate sample size , considering a few distinct elements that may affect your study.

1. Population size

The population size is the total number of individuals that can be included in the study. To determine the appropriate population size, you should be clear about who belongs or doesn’t belong in your group. 

2. The margin of error (confidence interval)

Errors are inevitable in research studies. The margin of error is represented by a percentage, which is a statistical inference about the confidence that the number of respondents accurately represents the opinions of the whole population.  

3. Confidence level

The confidence level measures your degree of certainty that the sample reflects the total population within your chosen margin of error. The most prevalent confidence levels are 90%, 95%, and 99%.

4. Standard deviation

The standard deviation indicates how much variation you can expect in your responses. A safe value to use as a guide is 0.5, which assumes maximum variability and therefore yields the largest required sample size.

Sample size formula

You may select the appropriate sample size by considering various factors affecting your study. You may compute the sample using an online calculator or read on to learn how to do it by hand.

1. Find the Z-score

The Z-score indicates how many standard deviations a value lies from the mean. You should translate your chosen confidence level into a Z-score.

For the most typical confidence levels, the Z-scores are as follows:

  • 90% Z-score = 1.645
  • 95% Z-score = 1.96
  • 99% Z-score = 2.576

2. Apply the formula for the sample size

Use the following formula to perform the calculation manually. 

Sample size formula (the standard finite-population-corrected form):

Sample size = [z² × p × (1 − p) / e²] / [1 + (z² × p × (1 − p)) / (e² × N)]

  • N = population size
  • e = margin of error
  • z = Z-score
  • p = standard deviation (expressed as a proportion)

For example, suppose you select a 95% confidence level, a population size of 1,000, a margin of error of 5%, and p = 0.5. Based on these inputs, your sample size would be approximately 278.
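A minimal Python sketch of this calculation is shown below. It assumes the finite-population-corrected form of Cochran’s formula implied by the parameters listed above; the function name and the fixed Z-score lookup table are illustrative choices, not a prescribed implementation.

```python
import math

# Z-scores for the three most common confidence levels (see the list above).
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def required_sample_size(population: int, confidence: float,
                         margin_of_error: float, p: float = 0.5) -> int:
    """Cochran's formula with a finite-population correction."""
    z = Z_SCORES[confidence]
    n_infinite = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)   # unlimited population
    n_adjusted = n_infinite / (1 + n_infinite / population)        # adjust for N
    return math.ceil(n_adjusted)

# Population of 1,000, 95% confidence, 5% margin of error, p = 0.5.
print(required_sample_size(1000, 0.95, 0.05))   # -> 278
```

Without the finite-population correction, the same inputs give the unadjusted Cochran figure of about 385 respondents.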

  • Frequently asked questions about sample

A sample is a particular group from which you will gather data. You should employ a sample when your population is sizable, geographically spread out, or otherwise challenging to reach in full. The population, the sample, and the sample frame are distinct from one another. Here are the frequently asked questions about samples.

Population vs. sample

Sample and population are closely related concepts, so they can often be confused. We will explain the differences between them so that you can distinguish between the sample and population. 

Population refers to the entire group of individuals about which you want to draw conclusions. A sample, on the other hand, refers to the group of people you will collect data from.

A sample is a more manageable, smaller group that is representative of the bigger group. The sample size is always less than the total population size. When a population is too vast for all the members or observations to be included in the test, a sample is employed in statistical analysis.

Sample vs. sample frame

A sample is a group of participants chosen from a broader population of interest; it is an essential component of the research. On the other hand, sample frames are crucial for  researchers to maintain organization and guarantee that the most recent data for a population is being used. Here are the differences between sample and sample frame: 

The sample is a smaller group of people or units chosen from a larger population for a survey or research project. In contrast, a sample frame is an exhaustive enumeration of all the elements or people that comprise the population from which the sample is taken. 

The sample is a subset of the population's elements chosen for research, whereas the sample frame is a comprehensive list or inventory of all population items.

  • Key points to takeaway

In conclusion, a sample is a group or subset of persons or things chosen from a broader population to study or assess particular traits or behaviors. To guarantee that every member of the population has an equal chance of being chosen, the sample should be representative of the people from which it is collected or selected using a random sampling procedure. 

Select the appropriate sampling technique based on the research topic, budget, and available resources. Additionally, the sample size greatly influences the accuracy and generalizability of the results.

This article has explained what a sample is in research methodology, what sample is in research examples, and how to determine the correct sample size. You can learn more about the research by reading this article.


Sampling Methods In Research: Types, Techniques, & Examples

By Saul Mcleod, PhD, and Olivia Guy-Evans, MSc (Simply Psychology)

Sampling methods in psychology refer to strategies used to select a subset of individuals (a sample) from a larger population, to study and draw inferences about the entire population. Common methods include random sampling, stratified sampling, cluster sampling, and convenience sampling. Proper sampling ensures representative, generalizable, and valid research results.
  • Sampling : the process of selecting a representative group from the population under study.
  • Target population : the total group of individuals from which the sample might be drawn.
  • Sample: a subset of individuals selected from a larger population for study or investigation. Those included in the sample are termed “participants.”
  • Generalizability : the ability to apply research findings from a sample to the broader target population, contingent on the sample being representative of that population.

For instance, if the advert for volunteers is published in the New York Times, this limits how much the study’s findings can be generalized to the whole population, because NYT readers may not represent the entire population in certain respects (e.g., politically, socio-economically).

The Purpose of Sampling

In psychological research, we are interested in learning about large groups of people who have something in common. We call the group we are interested in studying our “target population.”

In some types of research, the target population might be as broad as all humans. Still, in other types of research, the target population might be a smaller group, such as teenagers, preschool children, or people who misuse drugs.


Studying every person in a target population is more or less impossible. Hence, psychologists select a sample or sub-group of the population that is likely to be representative of the target population we are interested in.

This is important because we want to generalize from the sample to the target population. The more representative the sample, the more confident the researcher can be that the results can be generalized to the target population.

One of the problems that can occur when selecting a sample from a target population is sampling bias. Sampling bias refers to situations where the sample does not reflect the characteristics of the target population.

Many psychology studies have a biased sample because they have used an opportunity sample that comprises university students as their participants (e.g., Asch ).

OK, so you’ve thought up this brilliant psychological study and designed it perfectly. But who will you try it out on, and how will you select your participants?

There are various sampling methods. The one chosen will depend on a number of factors (such as time, money, etc.).

Probability and Non-Probability Samples

Random Sampling

Random sampling is a type of probability sampling where everyone in the entire target population has an equal chance of being selected.

This is similar to the national lottery. If the “population” is everyone who bought a lottery ticket, then everyone has an equal chance of winning the lottery (assuming they all have one ticket each).

Random samples require naming or numbering the target population and then using some raffle method to choose those to make up the sample. Random samples are the best method of selecting your sample from the population of interest.

  • The advantages are that your sample should represent the target population and eliminate sampling bias.
  • The disadvantage is that it is very difficult to achieve (i.e., time, effort, and money).

Stratified Sampling

During stratified sampling , the researcher identifies the different types of people that make up the target population and works out the proportions needed for the sample to be representative.

A list is made of each variable (e.g., IQ, gender, etc.) that might have an effect on the research. For example, if we are interested in the money spent on books by undergraduates, then the main subject studied may be an important variable.

For example, students studying English Literature may spend more money on books than engineering students, so if we use a large percentage of English students or engineering students, our results will not be accurate.

We have to determine the relative percentage of each group at a university, e.g., Engineering 10%, Social Sciences 15%, English 20%, Sciences 25%, Languages 10%, Law 5%, and Medicine 15%. The sample must then contain all these groups in the same proportion as the target population (university students).
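As a quick illustration, proportional allocation for this example can be computed directly from the percentages; the overall sample size of 200 in the sketch below is an arbitrary, assumed figure used only to show the arithmetic.

```python
# Relative share of each subject group at the university (from the example above).
proportions = {
    "Engineering": 0.10, "Social Sciences": 0.15, "English": 0.20,
    "Sciences": 0.25, "Languages": 0.10, "Law": 0.05, "Medicine": 0.15,
}
total_sample = 200   # assumed overall sample size, for illustration only

# Proportional allocation: each stratum gets the same share of the sample
# as it has of the target population.
allocation = {subject: round(total_sample * share)
              for subject, share in proportions.items()}
print(allocation)
# {'Engineering': 20, 'Social Sciences': 30, 'English': 40, 'Sciences': 50,
#  'Languages': 20, 'Law': 10, 'Medicine': 30}
```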

  • The disadvantage of stratified sampling is that gathering such a sample would be extremely time-consuming and difficult to do. This method is rarely used in Psychology.
  • However, the advantage is that the sample should be highly representative of the target population, and therefore we can generalize from the results obtained.

Opportunity Sampling

Opportunity sampling is a method in which participants are chosen based on their ease of availability and proximity to the researcher, rather than using random or systematic criteria. It’s a type of convenience sampling .

An opportunity sample is obtained by asking members of the population of interest if they would participate in your research. An example would be selecting a sample of students from those coming out of the library.

  • This is a quick and easy way of choosing participants (advantage)
  • It may not provide a representative sample and could be biased (disadvantage).

Systematic Sampling

Systematic sampling is a method where every nth individual is selected from a list or sequence to form a sample, ensuring even and regular intervals between chosen subjects.

Participants are systematically selected (i.e., orderly/logical) from the target population, like every nth participant on a list of names.

To take a systematic sample, you list all the population members and then decide upon a sample you would like. By dividing the number of people in the population by the number of people you want in your sample, you get a number we will call n.

If you take every nth name, you will get a systematic sample of the correct size. If, for example, you wanted to sample 150 children from a school of 1,500, you would take every 10th name.

  • The advantage of this method is that it should provide a representative sample.

Sample size

The sample size is a critical factor in determining the reliability and validity of a study’s findings. While increasing the sample size can enhance the generalizability of results, it’s also essential to balance practical considerations, such as resource constraints and diminishing returns from ever-larger samples.

Reliability and Validity

Reliability refers to the consistency and reproducibility of research findings across different occasions, researchers, or instruments. A small sample size may lead to inconsistent results due to increased susceptibility to random error or the influence of outliers. In contrast, a larger sample minimizes these errors, promoting more reliable results.

Validity pertains to the accuracy and truthfulness of research findings. For a study to be valid, it should accurately measure what it intends to do. A small, unrepresentative sample can compromise external validity, meaning the results don’t generalize well to the larger population. A larger sample captures more variability, ensuring that specific subgroups or anomalies don’t overly influence results.

Practical Considerations

Resource Constraints : Larger samples demand more time, money, and resources. Data collection becomes more extensive, data analysis more complex, and logistics more challenging.

Diminishing Returns : While increasing the sample size generally leads to improved accuracy and precision, there’s a point where adding more participants yields only marginal benefits. For instance, going from 50 to 500 participants might significantly boost a study’s robustness, but jumping from 10,000 to 10,500 might not offer a comparable advantage, especially considering the added costs.




Sampling Methods & Strategies 101

Everything you need to know (including examples)

By: Derek Jansen (MBA) | Expert Reviewed By: Kerryn Warren (PhD) | January 2023

If you’re new to research, sooner or later you’re bound to wander into the intimidating world of sampling methods and strategies. If you find yourself on this page, chances are you’re feeling a little overwhelmed or confused. Fear not – in this post we’ll unpack sampling in straightforward language , along with loads of examples .

Overview: Sampling Methods & Strategies

  • What is sampling in a research context?
  • The two overarching approaches

  • Simple random sampling
  • Stratified random sampling
  • Cluster sampling
  • Systematic sampling
  • Purposive sampling
  • Convenience sampling
  • Snowball sampling

  • How to choose the right sampling method

What (exactly) is sampling?

At the simplest level, sampling (within a research context) is the process of selecting a subset of participants from a larger group . For example, if your research involved assessing US consumers’ perceptions about a particular brand of laundry detergent, you wouldn’t be able to collect data from every single person that uses laundry detergent (good luck with that!) – but you could potentially collect data from a smaller subset of this group.

In technical terms, the larger group is referred to as the population , and the subset (the group you’ll actually engage with in your research) is called the sample . Put another way, you can look at the population as a full cake and the sample as a single slice of that cake. In an ideal world, you’d want your sample to be perfectly representative of the population, as that would allow you to generalise your findings to the entire population. In other words, you’d want to cut a perfect cross-sectional slice of cake, such that the slice reflects every layer of the cake in perfect proportion.

Achieving a truly representative sample is, unfortunately, a little trickier than slicing a cake, as there are many practical challenges and obstacles to achieving this in a real-world setting. Thankfully though, you don’t always need to have a perfectly representative sample – it all depends on the specific research aims of each study – so don’t stress yourself out about that just yet!

With the concept of sampling broadly defined, let’s look at the different approaches to sampling to get a better understanding of what it all looks like in practice.


The two overarching sampling approaches

At the highest level, there are two approaches to sampling: probability sampling and non-probability sampling . Within each of these, there are a variety of sampling methods , which we’ll explore a little later.

Probability sampling involves selecting participants (or any unit of interest) on a statistically random basis , which is why it’s also called “random sampling”. In other words, the selection of each individual participant is based on a pre-determined process (not the discretion of the researcher). As a result, this approach achieves a random sample.

Probability-based sampling methods are most commonly used in quantitative research , especially when it’s important to achieve a representative sample that allows the researcher to generalise their findings.

Non-probability sampling , on the other hand, refers to sampling methods in which the selection of participants is not statistically random . In other words, the selection of individual participants is based on the discretion and judgment of the researcher, rather than on a pre-determined process.

Non-probability sampling methods are commonly used in qualitative research , where the richness and depth of the data are more important than the generalisability of the findings.

If that all sounds a little too conceptual and fluffy, don’t worry. Let’s take a look at some actual sampling methods to make it more tangible.


Probability-based sampling methods

First, we’ll look at four common probability-based (random) sampling methods:

  • Simple random sampling
  • Stratified random sampling
  • Cluster sampling
  • Systematic sampling

Importantly, this is not a comprehensive list of all the probability sampling methods – these are just four of the most common ones. So, if you’re interested in adopting a probability-based sampling approach, be sure to explore all the options.

Simple random sampling involves selecting participants in a completely random fashion , where each participant has an equal chance of being selected. Basically, this sampling method is the equivalent of pulling names out of a hat , except that you can do it digitally. For example, if you had a list of 500 people, you could use a random number generator to draw a list of 50 numbers (each number, reflecting a participant) and then use that dataset as your sample.

Thanks to its simplicity, simple random sampling is easy to implement , and as a consequence, is typically quite cheap and efficient . Given that the selection process is completely random, the results can be generalised fairly reliably. However, this also means it can hide the impact of large subgroups within the data, which can result in minority subgroups having little representation in the results – if any at all. To address this, one needs to take a slightly different approach, which we’ll look at next.

Stratified random sampling is similar to simple random sampling, but it kicks things up a notch. As the name suggests, stratified sampling involves selecting participants randomly , but from within certain pre-defined subgroups (i.e., strata) that share a common trait . For example, you might divide the population into strata based on gender, ethnicity, age range or level of education, and then select randomly from each group.

The benefit of this sampling method is that it gives you more control over the impact of large subgroups (strata) within the population. For example, if a population comprises 80% males and 20% females, you may want to “balance” this skew out by selecting a random sample from an equal number of males and females. This would, of course, reduce the representativeness of the sample, but it would allow you to identify differences between subgroups. So, depending on your research aims, the stratified approach could work well.


Next on the list is cluster sampling. As the name suggests, this sampling method involves sampling from naturally occurring, mutually exclusive clusters within a population – for example, area codes within a city or cities within a country. Once the clusters are defined, a set of clusters are randomly selected and then a set of participants are randomly selected from each cluster.

Now, you’re probably wondering, “how is cluster sampling different from stratified random sampling?”. Well, let’s look at the previous example where each cluster reflects an area code in a given city.

With cluster sampling, you would collect data from clusters of participants in a handful of area codes (let’s say 5 neighbourhoods). Conversely, with stratified random sampling, you would need to collect data from all over the city (i.e., many more neighbourhoods). You’d still achieve the same sample size either way (let’s say 200 people, for example), but with stratified sampling, you’d need to do a lot more running around, as participants would be scattered across a vast geographic area. As a result, cluster sampling is often the more practical and economical option.

If that all sounds a little mind-bending, you can use the following general rule of thumb. If a population is relatively homogeneous , cluster sampling will often be adequate. Conversely, if a population is quite heterogeneous (i.e., diverse), stratified sampling will generally be more appropriate.

The last probability sampling method we’ll look at is systematic sampling. This method simply involves selecting participants at a set interval , starting from a random point .

For example, if you have a list of students that reflects the population of a university, you could systematically sample that population by selecting participants at an interval of 8 . In other words, you would randomly select a starting point – let’s say student number 40 – followed by student 48, 56, 64, etc.

What’s important with systematic sampling is that the population list you select from needs to be randomly ordered . If there are underlying patterns in the list (for example, if the list is ordered by gender, IQ, age, etc.), this will result in a non-random sample, which would defeat the purpose of adopting this sampling method. Of course, you could safeguard against this by “shuffling” your population list using a random number generator or similar tool.

Systematic sampling simply involves selecting participants at a set interval (e.g., every 10th person), starting from a random point.
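Putting those pieces together, here is a minimal sketch of systematic sampling with a shuffled list. The interval of 8 follows the example above; the student names and list size of 1,000 are illustrative assumptions.

```python
import random

# Hypothetical list representing the population of a university.
students = [f"student_{i}" for i in range(1, 1001)]

# "Shuffle" the list first so any underlying ordering (gender, IQ, age, etc.)
# cannot line up with the sampling interval.
random.shuffle(students)

interval = 8                          # the set interval from the example above
start = random.randrange(interval)    # random starting point within the first interval
systematic_sample = students[start::interval]

print(len(systematic_sample))         # 125 participants drawn from 1,000
```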

Non-probability-based sampling methods

Right, now that we’ve looked at a few probability-based sampling methods, let’s look at three non-probability methods:

  • Purposive sampling
  • Convenience sampling
  • Snowball sampling

Again, this is not an exhaustive list of all possible sampling methods, so be sure to explore further if you’re interested in adopting a non-probability sampling approach.

First up, we’ve got purposive sampling – also known as judgment , selective or subjective sampling. Again, the name provides some clues, as this method involves the researcher selecting participants using his or her own judgement , based on the purpose of the study (i.e., the research aims).

For example, suppose your research aims were to understand the perceptions of hyper-loyal customers of a particular retail store. In that case, you could use your judgement to engage with frequent shoppers, as well as rare or occasional shoppers, to understand what judgements drive the two behavioural extremes .

Purposive sampling is often used in studies where the aim is to gather information from a small population (especially rare or hard-to-find populations), as it allows the researcher to target specific individuals who have unique knowledge or experience . Naturally, this sampling method is quite prone to researcher bias and judgement error, and it’s unlikely to produce generalisable results, so it’s best suited to studies where the aim is to go deep rather than broad .

Purposive sampling involves the researcher selecting participants using their own judgement, based on the purpose of the study.

Next up, we have convenience sampling. As the name suggests, with this method, participants are selected based on their availability or accessibility . In other words, the sample is selected based on how convenient it is for the researcher to access it, as opposed to using a defined and objective process.

Naturally, convenience sampling provides a quick and easy way to gather data, as the sample is selected based on the individuals who are readily available or willing to participate. This makes it an attractive option if you’re particularly tight on resources and/or time. However, as you’d expect, this sampling method is unlikely to produce a representative sample and will of course be vulnerable to researcher bias , so it’s important to approach it with caution.

Last but not least, we have the snowball sampling method. This method relies on referrals from initial participants to recruit additional participants. In other words, the initial subjects form the first (small) snowball and each additional subject recruited through referral is added to the snowball, making it larger as it rolls along .

Snowball sampling is often used in research contexts where it’s difficult to identify and access a particular population. For example, people with a rare medical condition or members of an exclusive group. It can also be useful in cases where the research topic is sensitive or taboo and people are unlikely to open up unless they’re referred by someone they trust.

Simply put, snowball sampling is ideal for research that involves reaching hard-to-access populations . But, keep in mind that, once again, it’s a sampling method that’s highly prone to researcher bias and is unlikely to produce a representative sample. So, make sure that it aligns with your research aims and questions before adopting this method.

How to choose a sampling method

Now that we’ve looked at a few popular sampling methods (both probability and non-probability based), the obvious question is, “ how do I choose the right sampling method for my study?”. When selecting a sampling method for your research project, you’ll need to consider two important factors: your research aims and your resources .

As with all research design and methodology choices, your sampling approach needs to be guided by and aligned with your research aims, objectives and research questions – in other words, your golden thread. Specifically, you need to consider whether your research aims are primarily concerned with producing generalisable findings (in which case, you’ll likely opt for a probability-based sampling method) or with achieving rich , deep insights (in which case, a non-probability-based approach could be more practical). Typically, quantitative studies lean toward the former, while qualitative studies aim for the latter, so be sure to consider your broader methodology as well.

The second factor you need to consider is your resources and, more generally, the practical constraints at play. If, for example, you have easy, free access to a large sample at your workplace or university and a healthy budget to help you attract participants, that will open up multiple options in terms of sampling methods. Conversely, if you’re cash-strapped, short on time and don’t have unfettered access to your population of interest, you may be restricted to convenience or referral-based methods.

In short, be ready for trade-offs – you won’t always be able to utilise the “perfect” sampling method for your study, and that’s okay. Much like all the other methodological choices you’ll make as part of your study, you’ll often need to compromise and accept practical trade-offs when it comes to sampling. Don’t let this get you down though – as long as your sampling choice is well explained and justified, and the limitations of your approach are clearly articulated, you’ll be on the right track.


Let’s recap…

In this post, we’ve covered the basics of sampling within the context of a typical research project.

  • Sampling refers to the process of defining a subgroup (sample) from the larger group of interest (population).
  • The two overarching approaches to sampling are probability sampling (random) and non-probability sampling .
  • Common probability-based sampling methods include simple random sampling, stratified random sampling, cluster sampling and systematic sampling.
  • Common non-probability-based sampling methods include purposive sampling, convenience sampling and snowball sampling.
  • When choosing a sampling method, you need to consider your research aims , objectives and questions, as well as your resources and other practical constraints .

If you’d like to see an example of a sampling strategy in action, be sure to check out our research methodology chapter sample .

Last but not least, if you need hands-on help with your sampling (or any other aspect of your research), take a look at our 1-on-1 coaching service , where we guide you through each step of the research process, at your own pace.



Sampling methods review


What are sampling methods?

A sampling method is the procedure used to decide which members of a population end up in the sample. Methods that rely on random selection tend to produce samples that represent the population, while the non-random methods below tend to produce biased samples.

Bad ways to sample

  • Convenience sampling
  • Voluntary response sampling

Good ways to sample

  • Simple random sampling
  • Stratified random sampling
  • Cluster random sampling
  • Systematic random sampling


Chapter 5. Sampling

Introduction

Most Americans will experience unemployment at some point in their lives. Sarah Damaske ( 2021 ) was interested in learning about how men and women experience unemployment differently. To answer this question, she interviewed unemployed people. After conducting a “pilot study” with twenty interviewees, she realized she was also interested in finding out how working-class and middle-class persons experienced unemployment differently. She found one hundred persons through local unemployment offices. She purposefully selected a roughly equal number of men and women and working-class and middle-class persons for the study. This would allow her to make the kinds of comparisons she was interested in. She further refined her selection of persons to interview:

I decided that I needed to be able to focus my attention on gender and class; therefore, I interviewed only people born between 1962 and 1987 (ages 28–52, the prime working and child-rearing years), those who worked full-time before their job loss, those who experienced an involuntary job loss during the past year, and those who did not lose a job for cause (e.g., were not fired because of their behavior at work). ( 244 )

The people she ultimately interviewed compose her sample. They represent (“sample”) the larger population of the involuntarily unemployed. This “theoretically informed stratified sampling design” allowed Damaske “to achieve relatively equal distribution of participation across gender and class,” but it came with some limitations. For one, the unemployment centers were located in primarily White areas of the country, so there were very few persons of color interviewed. Qualitative researchers must make these kinds of decisions all the time—who to include and who not to include. There is never an absolutely correct decision, as the choice is linked to the particular research question posed by the particular researcher, although some sampling choices are more compelling than others. In this case, Damaske made the choice to foreground both gender and class rather than compare all middle-class men and women or women of color from different class positions or just talk to White men. She leaves the door open for other researchers to sample differently. Because science is a collective enterprise, it is most likely someone will be inspired to conduct a similar study as Damaske’s but with an entirely different sample.

This chapter is all about sampling. After you have developed a research question and have a general idea of how you will collect data (observations or interviews), how do you go about actually finding people and sites to study? Although there is no “correct number” of people to interview, the sample should follow the research question and research design. You might remember studying sampling in a quantitative research course. Sampling is important here too, but it works a bit differently. Unlike quantitative research, qualitative research involves nonprobability sampling. This chapter explains why this is so and what qualities instead make a good sample for qualitative research.

Quick Terms Refresher

  • The population is the entire group that you want to draw conclusions about.
  • The sample is the specific group of individuals that you will collect data from.
  • Sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).
  • Sample size is how many individuals (or units) are included in your sample.

The “Who” of Your Research Study

After you have turned your general research interest into an actual research question and identified an approach you want to take to answer that question, you will need to specify the people you will be interviewing or observing. In most qualitative research, the objects of your study will indeed be people. In some cases, however, your objects might be content left by people (e.g., diaries, yearbooks, photographs) or documents (official or unofficial) or even institutions (e.g., schools, medical centers) and locations (e.g., nation-states, cities). Chances are, whatever “people, places, or things” are the objects of your study, you will not really be able to talk to, observe, or follow every single individual/object of the entire population of interest. You will need to create a sample of the population . Sampling in qualitative research has different purposes and goals than sampling in quantitative research. Sampling in both allows you to say something of interest about a population without having to include the entire population in your sample.

We begin this chapter with the case of a population of interest composed of actual people. After we have a better understanding of populations and samples that involve real people, we’ll discuss sampling in other types of qualitative research, such as archival research, content analysis, and case studies. We’ll then move to a larger discussion about the difference between sampling in qualitative research generally versus quantitative research, then we’ll move on to the idea of “theoretical” generalizability, and finally, we’ll conclude with some practical tips on the correct “number” to include in one’s sample.

Sampling People

To help think through samples, let’s imagine we want to know more about “vaccine hesitancy.” We’ve all lived through 2020 and 2021, and we know that a sizable number of people in the United States (and elsewhere) were slow to accept vaccines, even when these were freely available. By some accounts, about one-third of Americans initially refused vaccination. Why is this so? Well, as I write this in the summer of 2021, we know that some people actively refused the vaccination, thinking it was harmful or part of a government plot. Others were simply lazy or dismissed the necessity. And still others were worried about harmful side effects. The general population of interest here (all adult Americans who were not vaccinated by August 2021) may be as many as eighty million people. We clearly cannot talk to all of them. So we will have to narrow the number to something manageable. How can we do this?


First, we have to think about our actual research question and the form of research we are conducting. I am going to begin with a quantitative research question. Quantitative research questions tend to be simpler to visualize, at least when we are first starting out doing social science research. So let us say we want to know what percentage of each kind of resistance is out there and how race or class or gender affects vaccine hesitancy. Again, we don’t have the ability to talk to everyone. But harnessing what we know about normal probability distributions (see quantitative methods for more on this), we can find this out through a sample that represents the general population. We can’t really address these particular questions if we only talk to White women who go to college with us. And if you are really trying to generalize the specific findings of your sample to the larger population, you will have to employ probability sampling , a sampling technique where a researcher sets a selection of a few criteria and chooses members of a population randomly. Why randomly? If truly random, all the members have an equal opportunity to be a part of the sample, and thus we avoid the problem of having only our friends and neighbors (who may be very different from other people in the population) in the study. Mathematically, there is going to be a certain number that will be large enough to allow us to generalize our particular findings from our sample population to the population at large. It might surprise you how small that number can be. Election polls of no more than one thousand people are routinely used to predict actual election outcomes of millions of people. Below that number, however, you will not be able to make generalizations. Talking to five people at random is simply not enough people to predict a presidential election.
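To make the point about sample size concrete, here is a minimal sketch (in Python, using the standard normal-approximation formula for a proportion) of how the 95 percent margin of error shrinks as a random sample grows. The specific sample sizes are illustrative assumptions, not figures from this chapter.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (5, 100, 1000, 10000):
    print(f"n = {n:>5}: +/- {margin_of_error(n) * 100:.1f} percentage points")

# n =     5: +/- 43.8 percentage points
# n =   100: +/- 9.8 percentage points
# n =  1000: +/- 3.1 percentage points
# n = 10000: +/- 1.0 percentage points
```

The jump from five respondents to one thousand is what turns a guess into a defensible estimate; beyond roughly a thousand, the gains flatten out, which is why election polls rarely bother with much larger samples.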

In order to answer quantitative research questions of causality, one must employ probability sampling. Quantitative researchers try to generalize their findings to a larger population. Samples are designed with that in mind. Qualitative researchers ask very different questions, though. Qualitative research questions are not about “how many” of a certain group do X (in this case, what percentage of the unvaccinated hesitate for concern about safety rather than reject vaccination on political grounds). Qualitative research employs nonprobability sampling . By definition, not everyone has an equal opportunity to be included in the sample. The researcher might select White women they go to college with to provide insight into racial and gender dynamics at play. Whatever is found by doing so will not be generalizable to everyone who has not been vaccinated, or even all White women who have not been vaccinated, or even all White women who have not been vaccinated who are in this particular college. That is not the point of qualitative research at all. This is a really important distinction, so I will repeat in bold: Qualitative researchers are not trying to statistically generalize specific findings to a larger population . They have not failed when their sample cannot be generalized, as that is not the point at all.

In the previous paragraph, I said it would be perfectly acceptable for a qualitative researcher to interview five White women with whom she goes to college about their vaccine hesitancy “to provide insight into racial and gender dynamics at play.” The key word here is “insight.” Rather than use a sample as a stand-in for the general population, as quantitative researchers do, the qualitative researcher uses the sample to gain insight into a process or phenomenon. The qualitative researcher is not going to be content with simply asking each of the women to state her reason for not being vaccinated and then draw conclusions that, because one in five of these women were concerned about their health, one in five of all people were also concerned about their health. That would be, frankly, a very poor study indeed. Rather, the qualitative researcher might sit down with each of the women and conduct a lengthy interview about what the vaccine means to her, why she is hesitant, how she manages her hesitancy (how she explains it to her friends), what she thinks about others who are unvaccinated, what she thinks of those who have been vaccinated, and what she knows or thinks she knows about COVID-19. The researcher might include specific interview questions about the college context, about their status as White women, about the political beliefs they hold about racism in the US, and about how their own political affiliations may or may not provide narrative scripts about “protective whiteness.” There are many interesting things to ask and learn about and many things to discover. Where a quantitative researcher begins with clear parameters to set their population and guide their sample selection process, the qualitative researcher is discovering new parameters, making it impossible to engage in probability sampling.

Looking at it this way, sampling for qualitative researchers needs to be more strategic. More theoretically informed. What persons can be interviewed or observed that would provide maximum insight into what is still unknown? In other words, qualitative researchers think through what cases they could learn the most from, and those are the cases selected to study: “What would be ‘bias’ in statistical sampling, and therefore a weakness, becomes intended focus in qualitative sampling, and therefore a strength. The logic and power of purposeful sampling lies in selecting information-rich cases for study in depth. Information-rich cases are those from which one can learn a great deal about issues of central importance to the purpose of the inquiry, thus the term purposeful sampling” ( Patton 2002:230 ; emphases in the original).

Before selecting your sample, though, it is important to clearly identify the general population of interest. You need to know this before you can determine the sample. In our example case, it is “adult Americans who have not yet been vaccinated.” Depending on the specific qualitative research question, however, it might be “adult Americans who have been vaccinated for political reasons” or even “college students who have not been vaccinated.” What insights are you seeking? Do you want to know how politics is affecting vaccination? Or do you want to understand how people manage being an outlier in a particular setting (unvaccinated where vaccinations are heavily encouraged if not required)? More clearly stated, your population should align with your research question . Think back to the opening story about Damaske’s work studying the unemployed. She drew her sample narrowly to address the particular questions she was interested in pursuing. Knowing your questions or, at a minimum, why you are interested in the topic will allow you to draw the best sample possible to achieve insight.

Once you have your population in mind, how do you go about getting people to agree to be in your sample? In qualitative research, it is permissible to find people by convenience. Just ask for people who fit your sample criteria and see who shows up. Or reach out to friends and colleagues and see if they know anyone that fits. Don’t let the name convenience sampling mislead you; this is not exactly “easy,” and it is certainly a valid form of sampling in qualitative research. The more unknowns you have about what you will find, the more convenience sampling makes sense. If you don’t know how race or class or political affiliation might matter, and your population is unvaccinated college students, you can construct a sample of college students by placing an advertisement in the student paper or posting a flyer on a notice board. Whoever answers is your sample. That is what is meant by a convenience sample. A common variation of convenience sampling is snowball sampling . This is particularly useful if your target population is hard to find. Let’s say you posted a flyer about your study and only two college students responded. You could then ask those two students for referrals. They tell their friends, and those friends tell other friends, and, like a snowball, your sample gets bigger and bigger.

Researcher Note

Gaining Access: When Your Friend Is Your Research Subject

My early experience with qualitative research was rather unique. At that time, I needed to do a project that required me to interview first-generation college students, and my friends, with whom I had been sharing a dorm for two years, just perfectly fell into the sample category. Thus, I just asked them and easily “gained my access” to the research subject; I know them, we are friends, and I am part of them. I am an insider. I also thought, “Well, since I am part of the group, I can easily understand their language and norms, I can capture their honesty, read their nonverbal cues well, will get more information, as they will be more opened to me because they trust me.” All in all, easy access with rich information. But, gosh, I did not realize that my status as an insider came with a price! When structuring the interview questions, I began to realize that rather than focusing on the unique experiences of my friends, I mostly based the questions on my own experiences, assuming we have similar if not the same experiences. I began to struggle with my objectivity and even questioned my role; am I doing this as part of the group or as a researcher? I came to know later that my status as an insider or my “positionality” may impact my research. It not only shapes the process of data collection but might heavily influence my interpretation of the data. I came to realize that although my inside status came with a lot of benefits (especially for access), it could also bring some drawbacks.

—Dede Setiono, PhD student focusing on international development and environmental policy, Oregon State University

The more you know about what you might find, the more strategic you can be. If you wanted to compare how politically conservative and politically liberal college students explained their vaccine hesitancy, for example, you might construct a sample purposively, finding an equal number of both types of students so that you can make those comparisons in your analysis. This is what Damaske ( 2021 ) did. You could still use convenience or snowball sampling as a way of recruitment. Post a flyer at the conservative student club and then ask for referrals from the one student that agrees to be interviewed. As with convenience sampling, there are variations of purposive sampling as well as other names used (e.g., judgment, quota, stratified, criterion, theoretical). Try not to get bogged down in the nomenclature; instead, focus on identifying the general population that matches your research question and then using a sampling method that is most likely to provide insight, given the types of questions you have.

There are all kinds of ways of being strategic with sampling in qualitative research. Here are a few of my favorite techniques for maximizing insight:

  • Consider using “extreme” or “deviant” cases. Maybe your college houses a prominent anti-vaxxer who has written about and demonstrated against the college’s policy on vaccines. You could learn a lot from that single case (depending on your research question, of course).
  • Consider “intensity”: people and cases and circumstances where your questions are more likely to feature prominently (but not extremely or deviantly). For example, you could compare those who volunteer at local Republican and Democratic election headquarters during an election season in a study on why party matters. Those who volunteer are more likely to have something to say than those who are more apathetic.
  • Maximize variation, as with the case of “politically liberal” versus “politically conservative,” or include an array of social locations (young vs. old; Northwest vs. Southeast region). This kind of heterogeneity sampling can capture and describe the central themes that cut across the variations: any common patterns that emerge, even in this wildly mismatched sample, are probably important to note!
  • Rather than maximize the variation, you could select a small homogenous sample to describe some particular subgroup in depth. Focus groups are often the best form of data collection for homogeneity sampling.
  • Think about which cases are “critical” or politically important—ones that “if it happens here, it would happen anywhere” or a case that is politically sensitive, as with the single “blue” (Democratic) county in a “red” (Republican) state. In both, you are choosing a site that would yield the most information and have the greatest impact on the development of knowledge.
  • On the other hand, sometimes you want to select the “typical”—the typical college student, for example. You are trying to not generalize from the typical but illustrate aspects that may be typical of this case or group. When selecting for typicality, be clear with yourself about why the typical matches your research questions (and who might be excluded or marginalized in doing so).
  • Finally, it is often a good idea to look for disconfirming cases : if you are at the stage where you have a hypothesis (of sorts), you might select those who do not fit your hypothesis—you will surely learn something important there. They may be “exceptions that prove the rule” or exceptions that force you to alter your findings in order to make sense of these additional cases.

In addition to all these sampling variations, there is the theoretical approach taken by grounded theorists in which the researcher samples comparative people (or events) on the basis of their potential to represent important theoretical constructs. The sample, one can say, is by definition representative of the phenomenon of interest. It accompanies the constant comparative method of analysis. In the words of the founders of Grounded Theory, “Theoretical sampling is sampling on the basis of the emerging concepts, with the aim being to explore the dimensional range or varied conditions along which the properties of the concepts vary” ( Strauss and Corbin 1998:73 ).

When Your Population is Not Composed of People

I think it is easiest for most people to think of populations and samples in terms of people, but sometimes our units of analysis are not actually people. They could be places or institutions. Even so, you might still want to talk to people or observe the actions of people to understand those places or institutions. Or not! In the case of content analyses (see chapter 17), you won’t even have people involved at all but rather documents or films or photographs or news clippings. Everything we have covered about sampling applies to other units of analysis too. Let’s work through some examples.

Case Studies

When constructing a case study, it is helpful to think of your cases as sample populations in the same way that we considered people above. If, for example, you are comparing campus climates for diversity, your overall population may be “four-year college campuses in the US,” and from there you might decide to study three college campuses as your sample. Which three? Will you use purposeful sampling (perhaps [1] selecting three colleges in Oregon that are different sizes or [2] selecting three colleges across the US located in different political cultures or [3] varying the three colleges by racial makeup of the student body)? Or will you select three colleges at random, out of convenience? There are justifiable reasons for all approaches.

As with people, there are different ways of maximizing insight in your sample selection. Think about the following rationales: typical, diverse, extreme, deviant, influential, crucial, or even embodying a particular “pathway” ( Gerring 2008 ). When choosing a case or particular research site, Rubin ( 2021 ) suggests you bear in mind, first, what you are leaving out by selecting this particular case/site; second, what you might be overemphasizing by studying this case/site and not another; and, finally, whether you truly need to worry about either of those things—“that is, what are the sources of bias and how bad are they for what you are trying to do?” ( 89 ).

Once you have selected your cases, you may still want to include interviews with specific people or observations at particular sites within those cases. Then you go through possible sampling approaches all over again to determine which people will be contacted.

Content: Documents, Narrative Accounts, And So On

Although not often discussed as sampling, your selection of documents and other units to use in various content/historical analyses is subject to similar considerations. When you are asking quantitative-type questions (percentages and proportionalities of a general population), you will want to follow probabilistic sampling. For example, I created a random sample of accounts posted on the website studentloanjustice.org to delineate the types of problems people were having with student debt ( Hurst 2007 ). Even though my data was qualitative (narratives of student debt), I was actually asking a quantitative-type research question, so it was important that my sample was representative of the larger population (debtors who posted on the website). On the other hand, when you are asking qualitative-type questions, the selection process should be very different. In that case, use nonprobabilistic techniques, either convenience (where you are really new to this data and do not have the ability to set comparative criteria or even know what a deviant case would be) or some variant of purposive sampling. Let’s say you were interested in the visual representation of women in media published in the 1950s. You could select a national magazine like Time for a “typical” representation (and for its convenience, as all issues are freely available on the web and easy to search). Or you could compare one magazine known for its feminist content versus one antifeminist. The point is, sample selection is important even when you are not interviewing or observing people.

Goals of Qualitative Sampling versus Goals of Quantitative Sampling

We have already discussed some of the differences in the goals of quantitative and qualitative sampling above, but it is worth further discussion. The quantitative researcher seeks a sample that is representative of the population of interest so that they may properly generalize the results (e.g., if 80 percent of first-gen students in the sample were concerned with costs of college, then we can say there is a strong likelihood that 80 percent of first-gen students nationally are concerned with costs of college). The qualitative researcher does not seek to generalize in this way . They may want a representative sample because they are interested in typical responses or behaviors of the population of interest, but they may very well not want a representative sample at all. They might want an “extreme” or deviant case to highlight what could go wrong with a particular situation, or maybe they want to examine just one case as a way of understanding what elements might be of interest in further research. When thinking of your sample, you will have to know why you are selecting the units, and this relates back to your research question or sets of questions. It has nothing to do with having a representative sample to generalize results. You may be tempted—or it may be suggested to you by a quantitatively minded member of your committee—to create as large and representative a sample as you possibly can to earn credibility from quantitative researchers. Ignore this temptation or suggestion. The only thing you should be considering is what sample will best bring insight into the questions guiding your research. This has implications for the number of people (or units) in your study as well, which is the topic of the next section.

What is the Correct “Number” to Sample?

Because we are not trying to create a generalizable representative sample, the guidelines for the “number” of people to interview or news stories to code are also a bit more nebulous. There are some brilliant insightful studies out there with an n of 1 (meaning one person or one account used as the entire set of data). This is particularly so in the case of autoethnography, a variation of ethnographic research that uses the researcher’s own subject position and experiences as the basis of data collection and analysis. But it is true for all forms of qualitative research. There are no hard-and-fast rules here. The number to include is what is relevant and insightful to your particular study.

That said, humans do not thrive well under such ambiguity, and there are a few helpful suggestions that can be made. First, many qualitative researchers talk about “saturation” as the end point for data collection. You stop adding participants when you are no longer getting any new information (or so very little that the cost of adding another interview subject or spending another day in the field exceeds any likely benefits to the research). The term saturation was first used here by Glaser and Strauss ( 1967 ), the founders of Grounded Theory. Here is their explanation: “The criterion for judging when to stop sampling the different groups pertinent to a category is the category’s theoretical saturation . Saturation means that no additional data are being found whereby the sociologist can develop properties of the category. As he [or she] sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated. [They go] out of [their] way to look for groups that stretch diversity of data as far as possible, just to make certain that saturation is based on the widest possible range of data on the category” ( 61 ).

It makes sense that the term was developed by grounded theorists, since this approach is rather more open-ended than other approaches used by qualitative researchers. With so much left open, having a guideline of “stop collecting data when you don’t find anything new” is reasonable. However, saturation can’t help much when first setting out your sample. How do you know how many people to contact to interview? What number will you put down in your institutional review board (IRB) protocol (see chapter 8)? You may guess how many people or units it will take to reach saturation, but there really is no way to know in advance. The best you can do is think about your population and your questions and look at what others have done with similar populations and questions.

Here are some suggestions to use as a starting point: For phenomenological studies, try to interview at least ten people for each major category or group of people. If you are comparing male-identified, female-identified, and gender-neutral college students in a study on gender regimes in social clubs, that means you might want to design a sample of thirty students, ten from each group. This is the minimum suggested number. Damaske’s ( 2021 ) sample of one hundred allows room for up to twenty-five participants in each of four “buckets” (e.g., working-class*female, working-class*male, middle-class*female, middle-class*male). If there is more than one comparative group (e.g., you are comparing students attending three different colleges, and you are comparing White and Black students in each), you can sometimes reduce the number for each group in your sample to five for, in this case, thirty total students. But that is really the bare minimum, and you will not want to go below it. A lot of people will not trust you with only “five” cases in a bucket. Lareau ( 2021:24 ) advises a minimum of seven or nine for each bucket (or “cell,” in her words). The point is to think about what your analyses might look like and how comfortable you will be with a certain number of persons fitting each category.
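If it helps to see the “bucket” arithmetic laid out, here is a small sketch (Python; the design dimensions and the per-cell minimum are hypothetical placeholders, not a prescription) of how quickly comparison groups multiply the number of interviews you need.

```python
from itertools import product

# Hypothetical design: three colleges crossed with two racial groups (6 cells)
dimensions = {
    "college": ["College A", "College B", "College C"],
    "race": ["Black students", "White students"],
}
per_cell = 5  # bare-minimum interviews per cell; Lareau (2021) suggests 7-9

cells = list(product(*dimensions.values()))
print(f"{len(cells)} cells x {per_cell} interviews = {len(cells) * per_cell} total")
# 6 cells x 5 interviews = 30 total
```

Add just one more dimension (say, class background) and the same minimum doubles to sixty interviews, which is exactly the kind of overcomplicated design the next paragraph warns against.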

Because qualitative research takes so much time and effort, it is rare for a beginning researcher to include more than thirty to fifty people or units in the study. You may not be able to conduct all the comparisons you might want simply because you cannot manage a larger sample. In that case, the limits of who you can reach or what you can include may influence you to rethink an original overcomplicated research design. Rather than include students from every racial group on a campus, for example, you might want to sample strategically, thinking about the most contrast (insightful), possibly excluding majority-race (White) students entirely, and simply using previous literature to fill in gaps in our understanding. For example, one of my former students was interested in discovering how race and class worked at a predominantly White institution (PWI). Due to time constraints, she simplified her study from an original sample frame of middle-class and working-class domestic Black and international African students (four buckets) to a sample frame of domestic Black and international African students (two buckets), allowing the complexities of class to come through individual accounts rather than from part of the sample frame. She wisely decided not to include White students in the sample, as her focus was on how minoritized students navigated the PWI. She was able to successfully complete her project and develop insights from the data with fewer than twenty interviewees. [1]

But what if you had unlimited time and resources? Would it always be better to interview more people or include more accounts, documents, and units of analysis? No! Your sample size should reflect your research question and the goals you have set yourself. Larger numbers can sometimes work against your goals. If, for example, you want to help bring out individual stories of success against the odds, adding more people to the analysis can end up drowning out those individual stories. Sometimes, the perfect size really is one (or three, or five). It really depends on what you are trying to discover and achieve in your study. Furthermore, studies of one hundred or more (people, documents, accounts, etc.) can sometimes be mistaken for quantitative research. Inevitably, the large sample size will push the researcher into simplifying the data numerically. And readers will begin to expect generalizability from such a large sample.

To summarize, “There are no rules for sample size in qualitative inquiry. Sample size depends on what you want to know, the purpose of the inquiry, what’s at stake, what will be useful, what will have credibility, and what can be done with available time and resources” ( Patton 2002:244 ).

How did you find/construct a sample?

Since qualitative researchers work with comparatively small sample sizes, getting your sample right is rather important. Yet it is also difficult to accomplish. For instance, a key question you need to ask yourself is whether you want a homogeneous or heterogeneous sample. In other words, do you want to include people in your study who are by and large the same, or do you want to have diversity in your sample?

For many years, I have studied the experiences of students who were the first in their families to attend university. There is a rather large number of sampling decisions I need to consider before starting the study. (1) Should I only talk to first-in-family students, or should I have a comparison group of students who are not first-in-family? (2) Do I need to strive for a gender distribution that matches undergraduate enrollment patterns? (3) Should I include participants that reflect diversity in gender identity and sexuality? (4) How about racial diversity? First-in-family status is strongly related to some ethnic or racial identity. (5) And how about areas of study?

As you can see, if I wanted to accommodate all these differences and get enough study participants in each category, I would quickly end up with a sample size of hundreds, which is not feasible in most qualitative research. In the end, for me, the most important decision was to maximize the voices of first-in-family students, which meant that I only included them in my sample. As for the other categories, I figured it was going to be hard enough to find first-in-family students, so I started recruiting with an open mind and an understanding that I may have to accept a lack of gender, sexuality, or racial diversity and then not be able to say anything about these issues. But I would definitely be able to speak about the experiences of being first-in-family.

—Wolfgang Lehmann, author of “Habitus Transformation and Hidden Injuries”

Examples of “Sample” Sections in Journal Articles

Think about some of the studies you have read in college, especially those with rich stories and accounts about people’s lives. Do you know how the people were selected to be the focus of those stories? If the account was published by an academic press (e.g., University of California Press or Princeton University Press) or in an academic journal, chances are that the author included a description of their sample selection. You can usually find these in a methodological appendix (book) or a section on “research methods” (article).

Here are two examples from recent books and one example from a recent article:

Example 1 . In It’s Not like I’m Poor: How Working Families Make Ends Meet in a Post-welfare World , the research team employed a mixed methods approach to understand how parents use the earned income tax credit, a refundable tax credit designed to provide relief for low- to moderate-income working people ( Halpern-Meekin et al. 2015 ). At the end of their book, their first appendix is “Introduction to Boston and the Research Project.” After describing the context of the study, they include the following description of their sample selection:

In June 2007, we drew 120 names at random from the roughly 332 surveys we gathered between February and April. Within each racial and ethnic group, we aimed for one-third married couples with children and two-thirds unmarried parents. We sent each of these families a letter informing them of the opportunity to participate in the in-depth portion of our study and then began calling the home and cell phone numbers they provided us on the surveys and knocking on the doors of the addresses they provided.…In the end, we interviewed 115 of the 120 families originally selected for the in-depth interview sample (the remaining five families declined to participate). ( 22 )

Was their sample selection based on convenience or purpose? Why do you think it was important for them to tell you that five families declined to be interviewed? There is actually a trick here, as the names were pulled randomly from a survey whose sample design was probabilistic. Why is this important to know? What can we say about the representativeness or the uniqueness of whatever findings are reported here?

Example 2 . In When Diversity Drops , Park ( 2013 ) examines the impact of decreasing campus diversity on the lives of college students. She does this through a case study of one student club, the InterVarsity Christian Fellowship (IVCF), at one university (“California University,” a pseudonym). Here is her description:

I supplemented participant observation with individual in-depth interviews with sixty IVCF associates, including thirty-four current students, eight former and current staff members, eleven alumni, and seven regional or national staff members. The racial/ethnic breakdown was twenty-five Asian Americans (41.6 percent), one Armenian (1.6 percent), twelve people who were black (20.0 percent), eight Latino/as (13.3 percent), three South Asian Americans (5.0 percent), and eleven people who were white (18.3 percent). Twenty-nine were men, and thirty-one were women. Looking back, I note that the higher number of Asian Americans reflected both the group’s racial/ethnic composition and my relative ease about approaching them for interviews. ( 156 )

How can you tell this is a convenience sample? What else do you note about the sample selection from this description?

Example 3. The last example is taken from an article published in the journal Research in Higher Education . Published articles tend to be more formal than books, at least when it comes to the presentation of qualitative research. In this article, Lawson ( 2021 ) is seeking to understand why female-identified college students drop out of majors that are dominated by male-identified students (e.g., engineering, computer science, music theory). Here is the entire relevant section of the article:

Method

Participants

Data were collected as part of a larger study designed to better understand the daily experiences of women in MDMs [male-dominated majors].…Participants included 120 students from a midsize, Midwestern University. This sample included 40 women and 40 men from MDMs—defined as any major where at least 2/3 of students are men at both the university and nationally—and 40 women from GNMs—defined as any major where 40–60% of students are women at both the university and nationally.…

Procedure

A multi-faceted approach was used to recruit participants; participants were sent targeted emails (obtained based on participants’ reported gender and major listings), campus-wide emails sent through the University’s Communication Center, flyers, and in-class presentations. Recruitment materials stated that the research focused on the daily experiences of college students, including classroom experiences, stressors, positive experiences, departmental contexts, and career aspirations. Interested participants were directed to email the study coordinator to verify eligibility (at least 18 years old, man/woman in MDM or woman in GNM, access to a smartphone). Sixteen interested individuals were not eligible for the study due to the gender/major combination. ( 482ff .)

What method of sample selection was used by Lawson? Why is it important to define “MDM” at the outset? How does this definition relate to sampling? Why were interested participants directed to the study coordinator to verify eligibility?

Final Words

I have found that students often find it difficult to be specific enough when defining and choosing their sample. It might help to think about your sample design and sample recruitment like a cookbook. You want all the details there so that someone else can pick up your study and conduct it as you intended. That person could be yourself, but this analogy might work better if you have someone else in mind. When I am writing down recipes, I often think of my sister and try to convey the details she would need to duplicate the dish. We share a grandmother whose recipes are full of handwritten notes in the margins, in spidery ink, that tell us what bowl to use when or where things could go wrong. Describe your sample clearly, convey the steps required accurately, and then add any other details that will help keep you on track and remind you why you have chosen to limit possible interviewees to those of a certain age or class or location. Imagine actually going out and getting your sample (making your dish). Do you have all the necessary details to get started?

Table 5.1. Sampling Type and Strategies

Further Readings

Fusch, Patricia I., and Lawrence R. Ness. 2015. “Are We There Yet? Data Saturation in Qualitative Research.” Qualitative Report 20(9):1408–1416.

Saunders, Benjamin, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. “Saturation in Qualitative Research: Exploring Its Conceptualization and Operationalization.” Quality & Quantity 52(4):1893–1907.

  • Rubin ( 2021 ) suggests a minimum of twenty interviews (but safer with thirty) for an interview-based study and a minimum of three to six months in the field for ethnographic studies. For a content-based study, she suggests between five hundred and one thousand documents, although some will be “very small” ( 243–244 ). ↵

The process of selecting people or other units of analysis to represent a larger population. In quantitative research, this representation is taken quite literally, as statistically representative.  In qualitative research, in contrast, sample selection is often made based on potential to generate insight about a particular topic or phenomenon.

The actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).  Sampling frames can differ from the larger population when specific exclusions are inherent, as in the case of pulling names randomly from voter registration rolls where not everyone is a registered voter.  This difference in frame and population can undercut the generalizability of quantitative results.

The specific group of individuals that you will collect data from.  Contrast population.

The large group of interest to the researcher.  Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken.  For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.”  In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample.  In qualitative research, defining the population is conceptually important for clarity.

A sampling strategy in which the sample is chosen to represent (numerically) the larger population from which it is drawn by random selection.  Each person in the population has an equal chance of making it into the sample.  This is often done through a lottery or other chance mechanisms (e.g., a random selection of every twelfth name on an alphabetical list of voters).  Also known as random sampling .

The selection of research participants or other data sources based on availability or accessibility, in contrast to purposive sampling .

A sample generated non-randomly by asking participants to help recruit more participants, the idea being that a person who fits your sampling criteria probably knows other people with similar criteria.

Broad codes that are assigned to the main issues emerging in the data; identifying themes is often part of initial coding . 

A form of case selection focusing on examples that do not fit the emerging patterns. This allows the researcher to evaluate rival explanations or to define the limitations of their research findings. While disconfirming cases are found (not sought out), researchers should expand their analysis or rethink their theories to include/explain them.

A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction.  This approach was pioneered by the sociologists Glaser and Strauss (1967).  The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences.  Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

The result of probability sampling, in which a sample is chosen to represent (numerically) the larger population from which it is drawn by random selection.  Each person in the population has an equal chance of making it into the random sample.  This is often done through a lottery or other chance mechanisms (e.g., the random selection of every twelfth name on an alphabetical list of voters).  This is typically not required in qualitative research but rather essential for the generalizability of quantitative research.

A form of case selection or purposeful sampling in which cases that are unusual or special in some way are chosen to highlight processes or to illuminate gaps in our knowledge of a phenomenon.   See also extreme case .

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

The accuracy with which results or findings can be transferred to situations or people other than those originally studied.  Qualitative studies generally are unable to use (and are uninterested in) statistical generalizability where the sample population is said to be able to predict or stand in for a larger population of interest.  Instead, qualitative researchers often discuss “theoretical generalizability,” in which the findings of a particular study can shed light on processes and mechanisms that may be at play in other settings.  See also statistical generalization and theoretical generalization .

A term used by IRBs to denote all materials aimed at recruiting participants into a research study (including printed advertisements, scripts, audio or video tapes, or websites).  Copies of this material are required in research protocols submitted to IRB.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Sampling is the statistical process of selecting a subset—called a ‘sample’—of a population of interest for the purpose of making observations and statistical inferences about that population. Social science research is generally about inferring patterns of behaviours within specific populations. We cannot study entire populations because of feasibility and cost constraints, and hence, we must select a representative sample from the population of interest for observation and analysis. It is extremely important to choose a sample that is truly representative of the population so that the inferences derived from the sample can be generalised back to the population of interest. Improper and biased sampling is the primary reason for the often divergent and erroneous inferences reported in opinion polls and exit polls conducted by different polling groups such as CNN/Gallup Poll, ABC, and CBS, prior to every US Presidential election.

The sampling process

As Figure 8.1 shows, the sampling process comprises several stages. The first stage is defining the target population. A population can be defined as all people or items (unit of analysis) with the characteristics that one wishes to study. The unit of analysis may be a person, group, organisation, country, object, or any other entity that you wish to draw scientific inferences about. Sometimes the population is obvious. For example, if a manufacturer wants to determine whether finished goods manufactured at a production line meet certain quality requirements or must be scrapped and reworked, then the population consists of the entire set of finished goods manufactured at that production facility. At other times, the target population may be a little harder to understand. If you wish to identify the primary drivers of academic learning among high school students, then what is your target population: high school students, their teachers, school principals, or parents? The right answer in this case is high school students, because you are interested in their performance, not the performance of their teachers, parents, or schools. Likewise, if you wish to analyse the behaviour of roulette wheels to identify biased wheels, your population of interest is not different observations from a single roulette wheel, but different roulette wheels (i.e., their behaviour over an infinite set of wheels).

Figure 8.1. The sampling process

The second step in the sampling process is to choose a sampling frame . This is an accessible section of the target population—usually a list with contact information—from where a sample can be drawn. If your target population is professional employees at work, because you cannot access all professional employees around the world, a more realistic sampling frame will be employee lists of one or two local companies that are willing to participate in your study. If your target population is organisations, then the Fortune 500 list of firms or the Standard & Poor’s (S&P) list of firms registered with the New York Stock exchange may be acceptable sampling frames.

Note that sampling frames may not entirely be representative of the population at large, and if so, inferences derived by such a sample may not be generalisable to the population. For instance, if your target population is organisational employees at large (e.g., you wish to study employee self-esteem in this population) and your sampling frame is employees at automotive companies in the American Midwest, findings from such groups may not even be generalisable to the American workforce at large, let alone the global workplace. This is because the American auto industry has been under severe competitive pressures for the last 50 years and has seen numerous episodes of reorganisation and downsizing, possibly resulting in low employee morale and self-esteem. Furthermore, the majority of the American workforce is employed in service industries or in small businesses, and not in automotive industry. Hence, a sample of American auto industry employees is not particularly representative of the American workforce. Likewise, the Fortune 500 list includes the 500 largest American enterprises, which is not representative of all American firms, most of which are medium or small sized firms rather than large firms, and is therefore, a biased sampling frame. In contrast, the S&P list will allow you to select large, medium, and/or small companies, depending on whether you use the S&P LargeCap, MidCap, or SmallCap lists, but includes publicly traded firms (and not private firms) and is hence still biased. Also note that the population from which a sample is drawn may not necessarily be the same as the population about which we actually want information. For example, if a researcher wants to examine the success rate of a new ‘quit smoking’ program, then the target population is the universe of smokers who had access to this program, which may be an unknown population. Hence, the researcher may sample patients arriving at a local medical facility for smoking cessation treatment, some of whom may not have had exposure to this particular ‘quit smoking’ program, in which case, the sampling frame does not correspond to the population of interest.

The last step in sampling is choosing a sample from the sampling frame using a well-defined sampling technique. Sampling techniques can be grouped into two broad categories: probability (random) sampling and non-probability sampling. Probability sampling is ideal if generalisability of results is important for your study, but there may be unique circumstances where non-probability sampling can also be justified. These techniques are discussed in the next two sections.

Probability sampling

Probability sampling is a technique in which every unit in the population has a chance (non-zero probability) of being selected in the sample, and this chance can be accurately determined. Sample statistics thus produced, such as sample mean or standard deviation, are unbiased estimates of population parameters, as long as the sampled units are weighted according to their probability of selection. All probability sampling techniques have two attributes in common: every unit in the population has a known non-zero probability of being sampled, and the sampling procedure involves random selection at some point. The different types of probability sampling techniques include:

Simple random sampling. In this technique, every unit in the sampling frame has an equal, known probability of being selected, so sample statistics are unbiased estimates of population parameters without any weighting. With large sampling frames, selection is usually carried out with a table of random numbers or a computerised random number generator. For instance, to select 200 firms to survey from a list of 1,000 firms, you can assign a computer-generated random number to each firm on the list, sort the list by those numbers, and take the first 200 firms as your sample.

Systematic sampling. In this technique, the sampling frame is ordered according to some criterion, a starting point is chosen at random, and every kth unit is then selected from that point onwards, where k is the ratio of the sampling frame size to the desired sample size (in the example above, every fifth firm on the list of 1,000 firms yields a sample of 200). Care must be taken that the ordering of the list does not contain a hidden pattern that coincides with the sampling interval.
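These two techniques can be sketched in a few lines of code. The snippet below (Python; the firm names and the fixed random seed are illustrative assumptions) draws a simple random sample and a systematic sample of 200 firms from a hypothetical frame of 1,000.

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible
frame = [f"Firm {i:04d}" for i in range(1, 1001)]  # hypothetical frame of 1,000 firms
n = 200

# Simple random sampling: every firm has an equal chance of selection
srs = random.sample(frame, n)

# Systematic sampling: random start, then every k-th firm thereafter
k = len(frame) // n            # sampling interval, k = N / n = 5
start = random.randrange(k)    # random starting point between 0 and k-1
systematic = frame[start::k]

print(len(srs), len(systematic))  # 200 200
```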

Stratified sampling. In stratified sampling, the sampling frame is divided into homogeneous and non-overlapping subgroups (called ‘strata’), and a simple random sample is drawn within each subgroup. In the previous example of selecting 200 firms from a list of 1,000 firms, you can start by categorising the firms based on their size as large (more than 500 employees), medium (between 50 and 500 employees), and small (less than 50 employees). You can then randomly select 67 firms from each subgroup to make up your sample of 200 firms. However, since there are many more small firms in a sampling frame than large firms, having an equal number of small, medium, and large firms will make the sample less representative of the population (i.e., biased in favour of large firms that are fewer in number in the target population). This is called non-proportional stratified sampling because the proportion of the sample within each subgroup does not reflect the proportions in the sampling frame—or the population of interest—and the smaller subgroup (large-sized firms) is oversampled . An alternative technique will be to select subgroup samples in proportion to their size in the population. For instance, if there are 100 large firms, 300 mid-sized firms, and 600 small firms, you can sample 20 firms from the ‘large’ group, 60 from the ‘medium’ group and 120 from the ‘small’ group. In this case, the proportional distribution of firms in the population is retained in the sample, and hence this technique is called proportional stratified sampling. Note that the non-proportional approach is particularly effective in representing small subgroups, such as large-sized firms, and is not necessarily less representative of the population compared to the proportional approach, as long as the findings of the non-proportional approach are weighted in accordance to a subgroup’s proportion in the overall population.
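Both allocations can be sketched as follows (Python; the 100/300/600 split mirrors the worked example above, while the firm labels and the random seed are illustrative assumptions).

```python
import random

random.seed(1)
# Hypothetical sampling frame: 100 large, 300 medium and 600 small firms
frame = {
    "large":  [f"L{i:03d}" for i in range(100)],
    "medium": [f"M{i:03d}" for i in range(300)],
    "small":  [f"S{i:03d}" for i in range(600)],
}
n = 200
N = sum(len(units) for units in frame.values())

# Proportional stratified sampling: each stratum's share of the sample
# mirrors its share of the sampling frame (20 / 60 / 120)
proportional = {
    stratum: random.sample(units, round(n * len(units) / N))
    for stratum, units in frame.items()
}
print({stratum: len(units) for stratum, units in proportional.items()})
# {'large': 20, 'medium': 60, 'small': 120}

# Non-proportional stratified sampling: equal cells of 67, so the rarer
# stratum (large firms) is oversampled; estimates must later be weighted
# by each stratum's true share of the population
non_proportional = {s: random.sample(u, 67) for s, u in frame.items()}
```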

Cluster sampling. If you have a population dispersed over a wide geographic region, it may not be feasible to conduct a simple random sampling of the entire population. In such cases, it may be reasonable to divide the population into ‘clusters’—usually along geographic boundaries—randomly sample a few clusters, and measure all units within that cluster. For instance, if you wish to sample city governments in the state of New York, rather than travel all over the state to interview key city officials (as you may have to do with a simple random sample), you can cluster these governments based on their counties, randomly select a set of three counties, and then interview officials from every office in those counties. However, depending on between-cluster differences, the variability of sample estimates in a cluster sample will generally be higher than that of a simple random sample, and hence the results are less generalisable to the population than those obtained from simple random samples.

Matched-pairs sampling. Sometimes, researchers may want to compare two subgroups within one population based on a specific criterion. For instance, why are some firms consistently more profitable than other firms? To conduct such a study, you would have to categorise a sampling frame of firms into ‘high-profitability’ firms and ‘low-profitability’ firms based on gross margins, earnings per share, or some other measure of profitability. You would then select a simple random sample of firms in one subgroup, and match each firm in this group with a firm in the second subgroup, based on its size, industry segment, and/or other matching criteria. Now, you have two matched samples of high-profitability and low-profitability firms that you can study in greater detail. Matched-pairs sampling techniques are often an ideal way of understanding bipolar differences between different subgroups within a given population.

Multi-stage sampling. The probability sampling techniques described previously are all examples of single-stage sampling techniques. Depending on your sampling needs, you may combine these single-stage techniques to conduct multi-stage sampling. For instance, you can stratify a list of businesses based on firm size, and then conduct systematic sampling within each stratum. This is a two-stage combination of stratified and systematic sampling. Likewise, you can start with a cluster of school districts in the state of New York, and within each cluster, select a simple random sample of schools. Within each school, you can select a simple random sample of grade levels, and within each grade level, you can select a simple random sample of students for study. In this case, you have a four-stage sampling process consisting of cluster and simple random sampling.
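As a rough illustration of one such multi-stage design (cluster sampling of school districts followed by simple random sampling of schools), the Python sketch below uses invented district and school names:

```python
import random

random.seed(11)

# Hypothetical clusters: 20 school districts with 10 schools each.
districts = {
    f"district_{d}": [f"school_{d}_{s}" for s in range(10)] for d in range(20)
}

# Stage 1: randomly sample three clusters (districts).
sampled_districts = random.sample(list(districts), k=3)

# Stage 2: simple random sample of schools within each sampled cluster.
sampled_schools = {d: random.sample(districts[d], k=4) for d in sampled_districts}

print(sampled_schools)
```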

Non-probability sampling

Non-probability sampling is a sampling technique in which some units of the population have zero chance of selection or where the probability of selection cannot be accurately determined. Typically, units are selected based on certain non-random criteria, such as quota or convenience. Because selection is non-random, non-probability sampling does not allow the estimation of sampling errors and may be subject to sampling bias. Therefore, information from a sample cannot be generalised back to the population. Types of non-probability sampling techniques include:

Convenience sampling. Also called accidental or opportunity sampling, this is a technique in which a sample is drawn from that part of the population that is close to hand, readily available, or convenient. For instance, if you stand outside a shopping centre and hand out questionnaire surveys to people or interview them as they walk in, the sample of respondents you will obtain will be a convenience sample. This is a non-probability sample because you are systematically excluding all people who shop at other shopping centres. The opinions that you would get from your chosen sample may reflect the unique characteristics of this shopping centre such as the nature of its stores (e.g., high-end stores will attract a more affluent demographic), the demographic profile of its patrons, or its location (e.g., a shopping centre close to a university will attract primarily university students with unique purchasing habits), and therefore may not be representative of the opinions of the shopper population at large. Hence, the scientific generalisability of such observations will be very limited. Other examples of convenience sampling are sampling students registered in a certain class or sampling patients arriving at a certain medical clinic. This type of sampling is most useful for pilot testing, where the goal is instrument testing or measurement validation rather than obtaining generalisable inferences.

Quota sampling. In this technique, the population is segmented into mutually exclusive subgroups (just as in stratified sampling), and then a non-random set of observations is chosen from each subgroup to meet a predefined quota. In proportional quota sampling, the proportion of respondents in each subgroup should match that of the population. For instance, if the American population consists of 70 per cent Caucasians, 15 per cent Hispanic-Americans, and 13 per cent African-Americans, and you wish to understand their voting preferences in a sample of 98 people, you can stand outside a shopping centre and ask people their voting preferences. But you will have to stop asking Hispanic-looking people when you have 15 responses from that subgroup (or African-Americans when you have 13 responses) even as you continue sampling other ethnic groups, so that the ethnic composition of your sample matches that of the general American population.

Non-proportional quota sampling is less restrictive in that you do not have to achieve a proportional representation, but perhaps meet a minimum size in each subgroup. In this case, you may decide to have 50 respondents from each of the three ethnic subgroups (Caucasians, Hispanic-Americans, and African-Americans), and stop when your quota for each subgroup is reached. Neither type of quota sampling will be representative of the American population, since depending on whether your study was conducted in a shopping centre in New York or Kansas, your results may be entirely different. The non-proportional technique is even less representative of the population, but may be useful in that it allows capturing the opinions of small and under-represented groups through oversampling.
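The proportional variant can be sketched in code. The following Python fragment simulates a stream of intercepted shoppers and stops recruiting from a subgroup once its quota is filled; the stream of passers-by is simulated and purely illustrative:

```python
import random

random.seed(3)

# Quotas reflecting the subgroup shares described above (sample size 98).
quotas = {"Caucasian": 70, "Hispanic-American": 15, "African-American": 13}
counts = {group: 0 for group in quotas}
sample = []

# Simulate a stream of intercepted shoppers; in a real study these would
# be actual people, with their subgroup recorded on intercept.
while any(counts[g] < quotas[g] for g in quotas):
    person = random.choice(list(quotas))
    if counts[person] < quotas[person]:  # quota for this subgroup not yet filled
        counts[person] += 1
        sample.append(person)
    # otherwise the person is not recruited, exactly as described above

print(counts, len(sample))  # counts match the quotas; total sample size is 98
```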

Expert sampling. This is a technique where respondents are chosen in a non-random manner based on their expertise on the phenomenon being studied. For instance, in order to understand the impacts of a new governmental policy such as the Sarbanes-Oxley Act, you can sample a group of corporate accountants who are familiar with this Act. The advantage of this approach is that since experts tend to be more familiar with the subject matter than non-experts, opinions from a sample of experts are more credible than a sample that includes both experts and non-experts, although the findings are still not generalisable to the overall population at large.

Snowball sampling. In snowball sampling, you start by identifying a few respondents that match the criteria for inclusion in your study, and then ask them to recommend others they know who also meet your selection criteria. For instance, if you wish to survey computer network administrators and you know of only one or two such people, you can start with them and ask them to recommend others who also work in network administration. Although this method hardly leads to representative samples, it may sometimes be the only way to reach hard-to-reach populations or when no sampling frame is available.

Statistics of sampling

In the preceding sections, we introduced terms such as population parameter, sample statistic, and sampling bias. In this section, we will try to understand what these terms mean and how they are related to each other.

When you measure a certain observation from a given unit, such as a person’s response to a Likert-scaled item, that observation is called a response (see Figure 8.2). In other words, a response is a measurement value provided by a sampled unit. Each respondent will give you different responses to different items in an instrument. Responses from different respondents to the same item or observation can be graphed into a frequency distribution based on their frequency of occurrences. For a large number of responses in a sample, this frequency distribution tends to resemble a bell-shaped curve called a normal distribution , which can be used to estimate overall characteristics of the entire sample, such as sample mean (average of all observations in a sample) or standard deviation (variability or spread of observations in a sample). These sample estimates are called sample statistics (a ‘statistic’ is a value that is estimated from observed data). Populations also have means and standard deviations that could be obtained if we could sample the entire population. However, since the entire population can never be sampled, population characteristics are always unknown, and are called population parameters (and not ‘statistic’ because they are not statistically estimated from data). Sample statistics may differ from population parameters if the sample is not perfectly representative of the population. The difference between the two is called sampling error . Theoretically, if we could gradually increase the sample size so that the sample approaches closer and closer to the population, then sampling error will decrease and a sample statistic will increasingly approximate the corresponding population parameter.
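A small Python sketch can make the distinction between a parameter and a statistic concrete. The ‘population’ of responses below is simulated, so the example is illustrative rather than a real dataset:

```python
import random
import statistics

random.seed(0)

# Simulated 'population' of 10,000 Likert-style responses (values 1-5).
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]
population_mean = statistics.mean(population)  # population parameter

sample = random.sample(population, k=100)
sample_mean = statistics.mean(sample)          # sample statistic
sample_sd = statistics.stdev(sample)           # sample statistic

sampling_error = sample_mean - population_mean
print(round(sample_mean, 3), round(sample_sd, 3), round(sampling_error, 3))
```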

If a sample is truly representative of the population, then the estimated sample statistics should be identical to the corresponding theoretical population parameters. How do we know if the sample statistics are at least reasonably close to the population parameters? Here, we need to understand the concept of a sampling distribution . Imagine that you took three different random samples from a given population, as shown in Figure 8.3, and for each sample, you derived sample statistics such as sample mean and standard deviation. If each random sample was truly representative of the population, then your three sample means from the three random samples will be identical—and equal to the population parameter—and the variability in sample means will be zero. But this is extremely unlikely, given that each random sample will likely constitute a different subset of the population, and hence, their means may be slightly different from each other. However, you can take these three sample means and plot a frequency histogram of sample means. If the number of such samples increases from three to 10 to 100, the frequency histogram becomes a sampling distribution. Hence, a sampling distribution is a frequency distribution of a sample statistic (like sample mean) from a set of samples , while the commonly referenced frequency distribution is the distribution of a response (observation) from a single sample . Just like a frequency distribution, the sampling distribution will also tend to have more sample statistics clustered around the mean (which presumably is an estimate of a population parameter), with fewer values scattered around the mean. With an infinitely large number of samples, this distribution will approach a normal distribution. The variability or spread of a sample statistic in a sampling distribution (i.e., the standard deviation of a sampling statistic) is called its standard error . In contrast, the term standard deviation is reserved for variability of an observed response from a single sample.
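The following Python sketch simulates a sampling distribution of the mean and compares its empirical standard error with the theoretical value, the population standard deviation divided by the square root of the sample size; the population values are generated for illustration only:

```python
import random
import statistics

random.seed(0)

# Simulated population of 100,000 scores with mean 50 and SD 10.
population = [random.gauss(50, 10) for _ in range(100_000)]

# Draw 1,000 random samples of size 100 and keep each sample's mean:
# this collection of means is an empirical sampling distribution.
sample_means = [
    statistics.mean(random.sample(population, k=100)) for _ in range(1_000)
]

standard_error = statistics.stdev(sample_means)  # spread of the sampling distribution
theoretical_se = 10 / 100 ** 0.5                 # sigma / sqrt(n), for comparison

print(round(standard_error, 3), round(theoretical_se, 3))  # both close to 1.0
```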


Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Research Methodology – Types, Examples and Writing Guide


Research Methodology

Definition:

Research Methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. Moreover, it encompasses the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis )
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An Example of Research Methodology could be the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18-65 years old who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.
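For illustration only (this is not part of the study protocol described here), a simple 1:1 random allocation of 100 hypothetical participant IDs could be sketched in Python as follows:

```python
import random

random.seed(2024)

# Hypothetical participant identifiers; real IDs would come from recruitment records.
participants = [f"P{i:03d}" for i in range(1, 101)]

# Simple 1:1 random assignment to the two study arms.
random.shuffle(participants)
experimental_group = participants[:50]
control_group = participants[50:]

print(len(experimental_group), len(control_group))  # 50 and 50
```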

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
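As a hedged sketch of just one of the planned analyses, the fragment below runs an independent-samples t-test on invented post-intervention BDI-II scores using SciPy (assumed to be available); the numbers are illustrative, and the mixed-model ANOVA and thematic analysis are not shown:

```python
from scipy import stats  # SciPy is assumed to be available

# Invented post-intervention BDI-II scores for the two arms (illustrative only).
cbt_group = [12, 15, 9, 14, 11, 10, 13, 8, 16, 12]
control_group = [22, 25, 19, 27, 24, 21, 23, 26, 20, 22]

t_stat, p_value = stats.ttest_ind(cbt_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```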

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods : Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research : Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories : Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies : Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach : Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity : Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability : Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability : Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity : Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency : Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility : Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology Vs Research Methods

Research methodology refers to the overall strategy, rationale, and framework that guides a study, including the assumptions behind it, whereas research methods are the specific tools and procedures (such as surveys, interviews, experiments, or statistical tests) used to collect and analyze data within that framework.



Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Introduction
  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
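As a minimal sketch of operationalisation, the following Python function turns a respondent's ratings on several hypothetical Likert items into a single composite indicator; the item names and scores are invented for illustration:

```python
# A composite score as one simple way to operationalise an abstract concept.
def composite_score(responses):
    """Average a respondent's 1-5 Likert ratings into a single indicator."""
    return sum(responses.values()) / len(responses)

# Hypothetical items measuring 'satisfaction' and one respondent's ratings.
respondent = {"satisfied_overall": 4, "would_recommend": 5, "meets_needs": 3}
print(composite_score(respondent))  # 4.0
```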

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
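A brief Python sketch of these three descriptive summaries, using a small set of invented test scores:

```python
import statistics
from collections import Counter

scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3]  # invented test scores

distribution = Counter(scores)       # frequency of each score
mean = statistics.mean(scores)       # central tendency
sd = statistics.stdev(scores)        # variability (spread)

print(distribution, round(mean, 2), round(sd, 2))
```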

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
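To make the contrast between association tests and comparison tests concrete, here is a small sketch using SciPy (assumed to be available); all values are invented for illustration:

```python
from scipy import stats  # SciPy is assumed to be available

# Association: correlation between hours studied and exam score.
hours = [2, 4, 6, 8, 10, 12]
score = [55, 60, 68, 72, 80, 85]
r, p_r = stats.pearsonr(hours, score)

# Comparison: one-way ANOVA across three invented groups.
group_a = [70, 72, 68, 75]
group_b = [78, 80, 76, 82]
group_c = [65, 63, 67, 66]
f, p_f = stats.f_oneway(group_a, group_b, group_c)

print(f"r = {r:.2f} (p = {p_r:.3f}); F = {f:.2f} (p = {p_f:.3f})")
```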

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article


McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 7 June 2024, from https://www.scribbr.co.uk/research-methods/research-design/



A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, then framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written in length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) have evidence-based logical reasoning 10 ; and 6) can be predicted. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory to base the hypotheses, inductive reasoning based on specific observations or findings forms more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ) 4 . On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state that there is no relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if rejected ( alternative hypothesis ), 15 5) explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 6) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 or 7) express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research in Table 3 .

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. The central question and associated subquestions are stated more than the hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research questions ); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions.1 PICOT addresses the following elements: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO addresses: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest.1 Research questions are also considered good if these meet the "FINERMAPS" framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic.14
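
As a rough illustration of how the PICOT elements can be assembled into a draft research question, the short Python sketch below composes one from its parts. The clinical scenario, element values, and wording template are hypothetical and only meant to show the structure, not a prescribed format.

```python
# Hypothetical PICOT elements; every value below is invented for illustration.
picot = {
    "P": "adults with type 2 diabetes",              # Population/patients/problem
    "I": "a nurse-led telehealth coaching program",  # Intervention or indicator
    "C": "usual outpatient care",                    # Comparison group
    "O": "glycemic control (HbA1c)",                 # Outcome of interest
    "T": "12 months",                                # Timeframe of the study
}

# One possible way to phrase a draft question from the elements.
question = (
    f"In {picot['P']}, does {picot['I']}, compared with {picot['C']}, "
    f"improve {picot['O']} over {picot['T']}?"
)
print(question)
```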

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research (Table 6)16 and qualitative research (Table 7),17 and show how to transform these ambiguous research questions and hypotheses into clear and effective statements.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe.9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies.18 Afterwards, 4) construct research questions to investigate the research problem. Identify the variables to be assessed from the research questions4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses.4 Finally, 6) state the study aims. This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1.


Research questions are used more frequently in qualitative research than objectives or hypotheses.3 These questions seek to discover, understand, explore or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study.3 In quantitative research, by contrast, research questions are used more frequently in survey projects, whereas hypotheses are used more frequently in experiments that compare variables and their relationships.

Hypotheses are constructed from the identified variables as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn.18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined.4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed.4 The hypotheses must be testable and specific,18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome.18 Hypothesis construction involves a testable proposition to be deduced from theory, with independent and dependent variables to be separated and measured separately.3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial.12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.


EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by statistical experiment or analysis (chi-square test, Student's t-test, and logistic regression analysis); a minimal code illustration of such tests appears after this list
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Students t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
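
For readers who want to see what "tested statistically" can look like in practice, the minimal Python sketch below is not taken from the cited study: it runs a chi-square test and a Student's t-test on made-up data loosely resembling the comparisons described in Example 4. The counts, scores, and group sizes are all invented, and scipy is assumed to be available.

```python
# Illustrative only: hypothetical data standing in for the kind of comparisons
# described in Example 4 (gender differences in comorbidity and a symptom score).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical contingency table: rows = women, men; columns = comorbidity yes, no.
contingency = np.array([[42, 58],
                        [30, 70]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)
print(f"Chi-square test: chi2={chi2:.2f}, p={p_chi2:.3f}")

# Hypothetical continuous outcome (e.g., a symptom score) for each group.
women_scores = rng.normal(loc=32, scale=8, size=100)
men_scores = rng.normal(loc=29, scale=8, size=100)
t_stat, p_t = stats.ttest_ind(women_scores, men_scores)
print(f"Student's t-test: t={t_stat:.2f}, p={p_t:.3f}")

# A small p-value leads to rejecting the null hypothesis of "no difference";
# otherwise the null hypothesis is retained.
```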

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative, and should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. Developing research questions and hypotheses is an iterative process that rests on extensive knowledge of the literature and an insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses, which serve as formal predictions about the research outcomes. These elements should be carefully thought through and constructed when planning research; doing so avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.


Qualitative vs. Quantitative: Key Differences in Research Types


Let's say you want to learn how a group will vote in an election. You face a classic decision of gathering qualitative vs. quantitative data.

With one method, you can ask voters open-ended questions that encourage them to share how they feel, what issues matter to them and the reasons they will vote in a specific way. With the other, you can ask closed-ended questions, giving respondents a list of options. You will then turn that information into statistics.

Neither method is more right than the other, but they serve different purposes. Learn more about the key differences between qualitative and quantitative research and how you can use them.

What Is Qualitative Research?


Qualitative research aims to explore and understand the depth, context and nuances of human experiences, behaviors and phenomena. This methodological approach emphasizes gathering rich, nonnumerical information through methods such as interviews, focus groups, observations and content analysis.

In qualitative research, the emphasis is on uncovering patterns and meanings within a specific social or cultural context. Researchers delve into the subjective aspects of human behavior, opinions and emotions.

This approach is particularly valuable for exploring complex and multifaceted issues, providing a deeper understanding of the intricacies involved.

Common qualitative research methods include open-ended interviews, where participants can express their thoughts freely, and thematic analysis, which involves identifying recurring themes in the data.

Examples of How to Use Qualitative Research

The flexibility of qualitative research allows researchers to adapt their methods based on emerging insights, fostering a more organic and holistic exploration of the research topic. This is a widely used method in social sciences, psychology and market research.

Here are just a few ways you can use qualitative research.

  • To understand the people who make up a community: If you want to learn more about a community, you can talk to them or observe them to learn more about their customs, norms and values.
  • To examine people's experiences within the healthcare system: While you can certainly look at statistics to gauge whether someone feels positively or negatively about their healthcare experiences, you may not gain a deep understanding of why they feel that way. For example, if a nurse went above and beyond for a patient, that patient might say they are content with the care they received. But if medical professional after medical professional has dismissed a person over several years, that person's comments will be far more negative.
  • To explore the effectiveness of your marketing campaign: Marketing is a field that typically collects statistical data, but it can also benefit from qualitative research. For example, if you have a successful campaign, you can interview people to learn what resonated with them and why. If you learn they liked the humor because it shows you don't take yourself too seriously, you can try to replicate that feeling in future campaigns.

Types of Qualitative Data Collection

Qualitative data captures the qualities, characteristics or attributes of a subject. It can take various forms, including:

  • Audio data: Recordings of interviews, discussions or any other auditory information. This can be useful when dealing with events from the past. Setting up a recording device also allows a researcher to stay in the moment without having to jot down notes.
  • Observational data: With this type of qualitative data analysis, you can record behavior, events or interactions.
  • Textual data: Use verbal or written information gathered through interviews, open-ended surveys or focus groups to learn more about a topic.
  • Visual data: You can learn new information through images, photographs, videos or other visual materials.

What Is Quantitative Research?

Quantitative research is a systematic empirical investigation that involves the collection and analysis of numerical data. This approach seeks to understand, explain or predict phenomena by gathering quantifiable information and applying statistical methods for analysis.

Unlike qualitative research, which focuses on nonnumerical, descriptive data, quantitative research data involves measurements, counts and statistical techniques to draw objective conclusions.

Examples of How to Use Quantitative Research

Quantitative research focuses on statistical analysis. Here are a few ways you can employ quantitative research methods.

  • Studying the employment rates of a city: Through this research you can gauge whether any patterns exist over a given time period.
  • Seeing how air pollution has affected a neighborhood: If the creation of a highway led to more air pollution in a neighborhood, you can collect data to learn about the health impacts on the area's residents. For example, you can see what percentage of people developed respiratory issues after moving to the neighborhood.

Types of Quantitative Data

Quantitative data refers to numerical information you can measure and count. Here are a few statistics you can use.

  • Heights, yards, volume and more: You can use different measurements to gain insight on different types of research, such as learning the average distance workers are willing to travel for work or figuring out the average height of a ballerina.
  • Temperature: Measure in either degrees Celsius or Fahrenheit. Or, if you're looking for the coldest place in the universe, you may measure in Kelvins.
  • Sales figures: With this information, you can look at a store's performance over time, compare one company to another or learn what the average amount of sales is in a specific industry.

Qualitative vs. Quantitative Research: 3 Key Differences

Quantitative and qualitative research methods are both valid and useful ways to collect data. Here are a few ways that they differ, followed by a short sketch that illustrates the contrast.

  • Data collection method: Quantitative research uses standardized instruments, such as surveys, experiments or structured observations, to gather numerical data. Qualitative research uses open-ended methods like interviews, focus groups or content analysis.
  • Nature of data: Quantitative research involves numerical data that you can measure and analyze statistically, whereas qualitative research involves exploring the depth and richness of experiences through nonnumerical, descriptive data.
  • Sampling: Quantitative research involves larger sample sizes to ensure statistical validity and generalizability of findings to a population. With qualitative research, it's better to work with a smaller sample size to gain in-depth insights into specific contexts or experiences.
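
To make the contrast concrete, here is a minimal Python sketch built around the voting example from the introduction. The responses, theme keywords, and coding rule are all invented for illustration and are not a standard analysis procedure.

```python
from collections import Counter

# Quantitative: closed-ended responses are counted and turned into percentages.
closed_ended = ["Candidate A", "Candidate B", "Candidate A", "Undecided", "Candidate A"]
for choice, n in Counter(closed_ended).items():
    print(f"{choice}: {n / len(closed_ended):.0%}")

# Qualitative: open-ended responses are read and coded into recurring themes.
open_ended = [
    "I care most about healthcare costs for my family.",
    "The economy and jobs are what will decide my vote.",
    "Healthcare access matters, but so does the local economy.",
]
themes = {"healthcare": ["healthcare", "health"], "economy": ["economy", "jobs"]}
theme_counts = {
    theme: sum(any(word in response.lower() for word in keywords) for response in open_ended)
    for theme, keywords in themes.items()
}
print(theme_counts)  # {'healthcare': 2, 'economy': 2}
```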

Benefits of Combining Qualitative and Quantitative Research

You can study qualitative and quantitative data simultaneously. This method, known as mixed methods research, offers several benefits, including:

  • A comprehensive understanding: Integration of qualitative and quantitative data provides a more comprehensive understanding of the research problem. Qualitative data helps explain the context and nuances, while quantitative data offers statistical generalizability.
  • Contextualization: Qualitative data helps contextualize quantitative findings by providing explanations of the why and how behind statistical patterns. This deeper understanding contributes to more informed interpretations of quantitative results.
  • Triangulation: Triangulation involves using multiple methods to validate or corroborate findings. Combining qualitative and quantitative data allows researchers to cross-verify results, enhancing the overall validity and reliability of the study.

This article was created in conjunction with AI technology, then fact-checked and edited by a HowStuffWorks editor.


Office of Science Policy released sample consent language for Digital Health Technologies:

To assist in the responsible deployment of digital health technologies, OSP, through an NIH-wide collaboration, has developed and released Informed Consent for Research Using Digital Health Technologies: Points to Consider & Sample Language. This resource presents general points to consider, instructions for use, and optional sample language for the research community.

What is cloud computing?


With cloud computing, organizations essentially buy a range of services offered by cloud service providers (CSPs). The CSP’s servers host all the client’s applications. Organizations can enhance their computing power more quickly and cheaply via the cloud than by purchasing, installing, and maintaining their own servers.

The cloud-computing model is helping organizations to scale new digital solutions with greater speed and agility—and to create value more quickly. Developers use cloud services to build and run custom applications and to maintain infrastructure and networks for companies of virtually all sizes—especially large global ones. CSPs offer services, such as analytics, to handle and manipulate vast amounts of data. Time to market accelerates, speeding innovation to deliver better products and services across the world.

What are examples of cloud computing’s uses?

Get to know and directly engage with senior McKinsey experts on cloud computing.

Brant Carson is a senior partner in McKinsey’s Vancouver office; Chandra Gnanasambandam and Anand Swaminathan are senior partners in the Bay Area office; William Forrest is a senior partner in the Chicago office; Leandro Santos is a senior partner in the Atlanta office; Kate Smaje is a senior partner in the London office.

Cloud computing came on the scene well before the global pandemic hit, in 2020, but the ensuing digital dash  helped demonstrate its power and utility. Here are some examples of how businesses and other organizations employ the cloud:

  • A fast-casual restaurant chain’s online orders multiplied exponentially during the 2020 pandemic lockdowns, climbing to 400,000 a day, from 50,000. One pleasant surprise? The company’s online-ordering system could handle the volume—because it had already migrated to the cloud. Thanks to this success, the organization’s leadership decided to accelerate its five-year migration plan to less than one year.
  • A biotech company harnessed cloud computing to deliver the first clinical batch of a COVID-19 vaccine candidate for Phase I trials in just 42 days—thanks in part to breakthrough innovations using scalable cloud data storage and computing  to facilitate processes ensuring the drug’s safety and efficacy.
  • Banks use the cloud for several aspects of customer-service management. They automate transaction calls using voice recognition algorithms and cognitive agents (AI-based online self-service assistants directing customers to helpful information or to a human representative when necessary). In fraud and debt analytics, cloud solutions enhance the predictive power of traditional early-warning systems. To reduce churn, they encourage customer loyalty through holistic retention programs managed entirely in the cloud.
  • Automakers are also along for the cloud ride. One company uses a common cloud platform that serves 124 plants, 500 warehouses, and 1,500 suppliers to consolidate real-time data from machines and systems and to track logistics and offer insights on shop floor processes. Use of the cloud could shave 30 percent off factory costs by 2025—and spark innovation at the same time.

That’s not to mention experiences we all take for granted: using apps on a smartphone, streaming shows and movies, participating in videoconferences. All of these things can happen in the cloud.

Learn more about our Cloud by McKinsey, Digital McKinsey, and Technology, Media, & Telecommunications practices.

How has cloud computing evolved?

Going back a few years, legacy infrastructure dominated IT-hosting budgets. Enterprises planned to move a mere 45 percent of their IT-hosting expenditures to the cloud by 2021. Enter COVID-19, and 65 percent of the decision makers surveyed by McKinsey increased their cloud budgets. An additional 55 percent ended up moving more workloads than initially planned. Having witnessed the cloud’s benefits firsthand, 40 percent of companies expect to pick up the pace of implementation.

The cloud revolution has actually been going on for years—more than 20, if you think the takeoff point was the founding of Salesforce, widely seen as the first software as a service (SaaS) company. Today, the next generation of cloud, including capabilities such as serverless computing, makes it easier for software developers to tweak software functions independently, accelerating the pace of release, and to do so more efficiently. Businesses can therefore serve customers and launch products in a more agile fashion. And the cloud continues to evolve.


Cost savings are commonly seen as the primary reason for moving to the cloud, but managing those costs requires a different and more dynamic approach focused on OpEx rather than CapEx. Financial-operations (or FinOps) capabilities can indeed enable the continuous management and optimization of cloud costs. But CSPs have developed their offerings so that the cloud’s greatest value opportunity is primarily through business innovation and optimization. In 2020, the top-three CSPs reached $100 billion in combined revenues—a minor share of the global $2.4 trillion market for enterprise IT services—leaving huge value to be captured. To go beyond merely realizing cost savings, companies must activate three symbiotic rings of cloud value creation: strategy and management, business domain adoption, and foundational capabilities.

What’s the main reason to move to the cloud?

The pandemic demonstrated that the digital transformation can no longer be delayed—and can happen much more quickly than previously imagined. Nothing is more critical to a corporate digital transformation than becoming a cloud-first business. The benefits are faster time to market, simplified innovation and scalability, and reduced risk when effectively managed. The cloud lets companies provide customers with novel digital experiences—in days, not months—and delivers analytics absent on legacy platforms. But to transition to a cloud-first operating model, organizations must make a collective effort that starts at the top. Here are three actions CEOs can take to increase the value their companies get from cloud computing:

  • Establish a sustainable funding model.
  • Develop a new business technology operating model.
  • Set up policies to attract and retain the right engineering talent.

How much value will the cloud create?

Fortune 500 companies adopting the cloud could realize more than $1 trillion in value  by 2030, and not from IT cost reductions alone, according to McKinsey’s analysis of 700 use cases.

For example, the cloud speeds up design, build, and ramp-up, shortening time to market when companies have strong DevOps (the combination of development and operations) processes in place; groups of software developers customize and deploy software for operations that support the business. The cloud’s global infrastructure lets companies scale products almost instantly to reach new customers, geographies, and channels. Finally, digital-first companies use the cloud to adopt emerging technologies and innovate aggressively, using digital capabilities as a competitive differentiator to launch and build businesses.

If companies pursue the cloud’s vast potential in the right ways, they will realize huge value. Companies across diverse industries have implemented the public cloud and seen promising results. The successful ones defined a value-oriented strategy across IT and the business, acquired hands-on experience operating in the cloud, adopted a technology-first approach, and developed a cloud-literate workforce.

Learn more about our Cloud by McKinsey and Digital McKinsey practices.

What is the cloud cost/procurement model?

Some cloud services, such as server space, are leased. Leasing requires much less capital up front than buying, offers greater flexibility to switch and expand the use of services, cuts the basic cost of buying hardware and software upfront, and reduces the difficulties of upkeep and ownership. Organizations pay only for the infrastructure and computing services that meet their evolving needs. But an outsourcing model  is more apt than other analogies: the computing business issues of cloud customers are addressed by third-party providers that deliver innovative computing services on demand to a wide variety of customers, adapt those services to fit specific needs, and work to constantly improve the offering.
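
As a rough, purely illustrative sketch of the lease-versus-buy arithmetic described above, the Python snippet below compares an upfront purchase (capital expense plus upkeep) with pay-as-you-go leasing that scales with actual use. Every figure in it is an assumption; real hardware and cloud pricing varies widely.

```python
def on_premises_cost(servers, price_per_server, annual_upkeep_rate, years):
    """Buy upfront for peak demand, then pay ongoing maintenance (CapEx + OpEx)."""
    capex = servers * price_per_server
    upkeep = capex * annual_upkeep_rate * years
    return capex + upkeep

def cloud_cost(monthly_fee_per_server, servers_used_each_year):
    """Pay-as-you-go: pay only for the capacity actually used each year."""
    return sum(12 * monthly_fee_per_server * n for n in servers_used_each_year)

# Hypothetical three-year scenario where demand grows from 10 to 40 servers.
print(on_premises_cost(servers=40, price_per_server=8_000, annual_upkeep_rate=0.15, years=3))
print(cloud_cost(monthly_fee_per_server=250, servers_used_each_year=[10, 25, 40]))
```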

What are cloud risks?

The cloud offers huge cost savings and potential for innovation. However, when companies migrate to the cloud, the simple lift-and-shift approach doesn’t reduce costs, so companies must remediate their existing applications to take advantage of cloud services.

For instance, a major financial-services organization  wanted to move more than 50 percent of its applications to the public cloud within five years. Its goals were to improve resiliency, time to market, and productivity. But not all its business units needed to transition at the same pace. The IT leadership therefore defined varying adoption archetypes to meet each unit’s technical, risk, and operating-model needs.

Legacy cybersecurity architectures and operating models can also pose problems when companies shift to the cloud. The resulting problems, however, involve misconfigurations rather than inherent cloud security vulnerabilities. One powerful solution? Securing cloud workloads for speed and agility: automated security architectures and processes enable workloads to be processed at a much faster tempo.

What kind of cloud talent is needed?

The talent demands of the cloud differ from those of legacy IT. While cloud computing can improve the productivity of your technology, it requires specialized and sometimes hard-to-find talent—including full-stack developers, data engineers, cloud-security engineers, identity- and access-management specialists, and cloud engineers. The cloud talent model  should thus be revisited as you move forward.

Six practical actions can help your organization build the cloud talent you need:

  • Find engineering talent with broad experience and skills.
  • Balance talent maturity levels and the composition of teams.
  • Build an extensive and mandatory upskilling program focused on need.
  • Build an engineering culture that optimizes the developer experience.
  • Consider using partners to accelerate development and assign your best cloud leaders as owners.
  • Retain top talent by focusing on what motivates them.

How do different industries use the cloud?

Different industries are expected to see dramatically different benefits from the cloud. High-tech, retail, and healthcare organizations occupy the top end of the value capture continuum. Electronics and semiconductors, consumer-packaged-goods, and media companies make up the middle. Materials, chemicals, and infrastructure organizations cluster at the lower end.

Nevertheless, myriad use cases provide opportunities to unlock value across industries, as the following examples show:

  • a retailer enhancing omnichannel fulfillment, using AI to optimize inventory across channels and to provide a seamless customer experience
  • a healthcare organization implementing remote health monitoring to conduct virtual trials and improve adherence
  • a high-tech company using chatbots to provide premier-level support combining phone, email, and chat
  • an oil and gas company employing automated forecasting to automate supply-and-demand modeling and reduce the need for manual analysis
  • a financial-services organization implementing customer call optimization using real-time voice recognition algorithms to direct customers in distress to experienced representatives for retention offers
  • a financial-services provider moving applications in customer-facing business domains to the public cloud to penetrate promising markets more quickly and at minimal cost
  • a health insurance carrier accelerating the capture of billions of dollars in new revenues by moving systems to the cloud to interact with providers through easier onboarding

The cloud is evolving  to meet the industry-specific needs of companies. From 2021 to 2024, public-cloud spending on vertical applications (such as warehouse management in retailing and enterprise risk management in banking) is expected to grow by more than 40 percent annually. Spending on horizontal workloads (such as customer relationship management) is expected to grow by 25 percent. Healthcare and manufacturing organizations, for instance, plan to spend around twice as much on vertical applications as on horizontal ones.
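
As a quick back-of-the-envelope check on what those growth rates imply, the sketch below simply compounds them over three annual steps from 2021 to 2024. It is illustrative arithmetic only, and the three-step horizon is an assumption about how the date range is counted.

```python
# Compound the quoted annual growth rates over 2021-2024 (assumed three steps).
vertical_growth, horizontal_growth, years = 0.40, 0.25, 3

vertical_multiple = (1 + vertical_growth) ** years      # about 2.74x
horizontal_multiple = (1 + horizontal_growth) ** years  # about 1.95x

print(f"Vertical-application spending multiple: {vertical_multiple:.2f}x")
print(f"Horizontal-workload spending multiple: {horizontal_multiple:.2f}x")
```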

Learn more about our Cloud by McKinsey, Digital McKinsey, Financial Services, Healthcare Systems & Services, Retail, and Technology, Media, & Telecommunications practices.

What are the biggest cloud myths?

Views on cloud computing can be clouded by misconceptions. Here are seven common myths about the cloud—all of which can be debunked:

  • The cloud’s value lies primarily in reducing costs.
  • Cloud computing costs more than in-house computing.
  • On-premises data centers are more secure than the cloud.
  • Applications run more slowly in the cloud.
  • The cloud eliminates the need for infrastructure.
  • The best way to move to the cloud is to focus on applications or data centers.
  • You must lift and shift applications as-is or totally refactor them.

How large must my organization be to benefit from the cloud?

Here’s one more huge misconception: the cloud is just for big multinational companies. In fact, the cloud can help small local companies become multinational. A company’s benefits from implementing the cloud are not constrained by its size. The cloud makes skill, rather than scale, the barrier to entry, so a company of any size can compete if it has people with the right skills. With the cloud, highly skilled small companies can take on established competitors. To realize the cloud’s immense potential value fully, organizations must take a thoughtful approach, with IT and the businesses working together.

For more in-depth exploration of these topics, see McKinsey’s Cloud Insights collection. Learn more about Cloud by McKinsey—and check out cloud-related job opportunities if you’re interested in working at McKinsey.

Articles referenced include:

  • “Six practical actions for building the cloud talent you need,” January 19, 2022, Brant Carson, Dorian Gärtner, Keerthi Iyengar, Anand Swaminathan, and Wayne Vest
  • “Cloud-migration opportunity: Business value grows, but missteps abound,” October 12, 2021, Tara Balakrishnan, Chandra Gnanasambandam, Leandro Santos, and Bhargs Srivathsan
  • “Cloud’s trillion-dollar prize is up for grabs,” February 26, 2021, Will Forrest, Mark Gu, James Kaplan, Michael Liebow, Raghav Sharma, Kate Smaje, and Steve Van Kuiken
  • “Unlocking value: Four lessons in cloud sourcing and consumption,” November 2, 2020, Abhi Bhatnagar, Will Forrest, Naufal Khan, and Abdallah Salami
  • “Three actions CEOs can take to get value from cloud computing,” July 21, 2020, Chhavi Arora, Tanguy Catlin, Will Forrest, James Kaplan, and Lars Vinter

Onboarding New Employees in a Hybrid Workplace

  • Dawn Klinghoffer,
  • Karen Kocher,
  • Natalie Luna

New research from Microsoft on how much time new hires should spend in the office during their first 90 days.

As you’re navigating hybrid work, it’s a good moment to assess how your onboarding processes enable and empower your new hires to thrive. Researchers at Microsoft have conducted and identified studies suggesting that onboarding to a new role, team, or company is a key moment for building connections with the new manager and team, and that spending a few of those early days in person provides unique benefits. But simply requiring newcomers to be onsite full time doesn’t guarantee success. The authors explain, with examples, how onboarding that truly helps new employees thrive in the modern workplace is less about face time and more about intention, structure, and resources.

During the pandemic, companies around the world explored new ways of working that challenged long-held assumptions and beliefs about where work gets done. Many companies, including Microsoft, saw the benefits of flexible work and wanted to offer employees a chance to continue to work in a hybrid environment, while balancing the needs of the organization.

  • Dawn Klinghoffer is the head of people analytics at Microsoft.
  • Karen Kocher leads the future of work, workforce of the future, and talent & learning experiences at Microsoft.
  • Natalie Luna is on Microsoft’s employee listening team, leading employee lifecycle and daily surveys, and researching onboarding, culture, and hybrid ways of working.


Dear Colleague Letter: Planning Proposals for the NSF Established Program to Stimulate Competitive Research (EPSCoR) Research Incubators for STEM Excellence (E-RISE) and Collaborations for Optimizing Research Ecosystems (E-CORE) Research Infrastructure Improvement (RII) Programs

June 7, 2024

Dear Colleagues:

With this Dear Colleague Letter (DCL), NSF EPSCoR announces its intent to accept planning proposals to support planning of future submissions to the E-RISE RII or E-CORE RII programs. E-RISE RII focuses on developing and sustaining an EPSCoR-eligible jurisdiction's research capacity and competitiveness by supporting the incubation of research teams and products in a scientific topic area. E-CORE RII supports jurisdictions in building capacity in one or more targeted research infrastructure cores that underlie the jurisdiction's research ecosystem. Both E-RISE RII and E-CORE RII projects are expected to align with research priorities identified in the approved Science and Technology (S&T) plan of the jurisdiction.

PLANNING PROPOSAL PURPOSE

NSF EPSCoR is utilizing the planning type of proposal to engage institutions and organizations that may be interested in submitting proposals to future E-RISE RII or E-CORE RII competitions. NSF EPSCoR is especially interested in planning activities that would catalyze new collaborations and partnerships in EPSCoR-eligible jurisdictions and that broaden the participation of individuals or organizations underrepresented in the NSF EPSCoR award portfolio.

The planning proposal will allow up to one year of support to provide, as applicable, the PI, collaborating institution(s), and planning team with the time and resources needed for submission of a meritorious project to the E-RISE RII or E-CORE RII programs.

EPSCoR RII planning proposals are not intended to provide seed funding for research activities. Planning proposals for the collection of research data will be returned without review. Rather, for EPSCoR RII, the planning type of proposal is appropriate for the development of a complex, jurisdiction-wide, four-year, capacity-building research infrastructure or research and education proposal that is aligned with the Science & Technology (S&T) plan of the jurisdiction.

In preparation for a future submission to the E-RISE RII program, the planning proposal should include:

  • a review of the selected research focus area, including the rationale and justification for enhancing research capacity in that topic area within the jurisdiction;
  • an assessment of the jurisdiction's existing research capacity and infrastructure (including cyberinfrastructure and research personnel) for enabling research in the chosen topic area;
  • the initial coordination and planning of future jurisdiction-wide research and capacity-building efforts; and
  • an analysis of the workforce development efforts needed to support the jurisdiction's future expertise in the research topic area(s).

In preparation for a future submission to the E-CORE RII program, the planning proposal should include:

  • the integration of multi-disciplinary approaches, expertise, and organizations within the jurisdiction in order to develop a management plan for a future E-CORE project that optimizes research and capacity-building efforts while acknowledging and minimizing risks;
  • identification of additional infrastructure that may be needed to support research efforts of relevance to the jurisdictional S&T plan;
  • an analysis of the workforce development efforts needed to support the jurisdiction's S&T plan.

For examples of possible EPSCoR RII planning activities, see the Examples of Appropriate Planning Activities section below.

ELIGIBILITY

To be eligible for submission of a planning proposal or receipt of a planning award, the submitting institution or organization must be in an EPSCoR-eligible jurisdiction and must not be a funded collaborator on a pending or active E-RISE RII or E-CORE RII award.

Institutions or collaborators with a lead or collaborating role in a current EPSCoR RII Track-1 award are also eligible to submit a planning proposal.

IMPORTANT CONSIDERATIONS

Before preparing and submitting a planning proposal, the PI must contact an NSF EPSCoR RII Program Director to provide a one-page concept outline of the project and to discuss the types of activities for which funding would be requested in the proposal. If approved, the NSF Program Director will invite submission of the planning proposal by email. The email confirming approval to submit a planning proposal must be uploaded as a document entitled "EPSCoR RII Planning - Program Director Concurrence Email" in the Program Officer Concurrence Email(s) section of Research.gov.

PREPARATION INSTRUCTIONS

Planning proposals must be prepared and submitted in Research.gov in accordance with the guidance for Planning Proposals specified in Chapter II.F.1 of the  NSF Proposal and Award Policies and Procedures Guide (PAPPG) and the additional guidance below.

  • Select the Proposal & Award Policies & Procedures Guide as the Funding Opportunity;
  • In the "Where to Apply" section, select "Office of the Director" as the Directorate, "Office of Integrative Activities" as the Division and either "EPSCoR CORE RII" or “EPSCoR RISE RII” as the Program;
  • On the Select Proposal Type screen, select "Planning" as the proposal type.

The Project Description must not exceed eight pages in length and must include the following:

  • A brief paragraph on the purpose of the planning proposal, specifying whether the proposal is in preparation for submission to the E-RISE RII program or to the E-CORE RII program.
  • A description of goals and activities for the project, including the basis for their inclusion and their relevance for a future E-RISE RII or E-CORE RII proposal submission. The narrative should include activities that would be expected to culminate in one or more jurisdiction-wide, in-person, hybrid, or virtual gathering(s) of key participants. Preliminary consultation with an EPSCoR RII Program Director may help identify the optimal activities for a particular project and at what points would best help the jurisdiction in the planning process.

When preparing the budget and budget justification, some considerations are:

  • The budget may not exceed $100,000 for a period of up to one year.
  • The budget should allow for at least one meeting for key participants to work together toward envisioning a future E-CORE or E-RISE RII project. This meeting may engage an external facilitator to direct participants toward a product that can be developed into an E-CORE or E-RISE RII proposal. If included, the facilitator must be listed in Section G (Consultant Services).
  • The budget justification should explain how the budget allocation supports the overall goal of the planning proposal. Note that the funds are not intended to be used for research activities, such as preliminary data collection, or for proposal writing.

EXAMPLES OF APPROPRIATE PLANNING ACTIVITIES

Examples of activities appropriate during an EPSCoR RII planning award are provided below. Proposals may include activities like those described below or different activities more suitable for the submitting jurisdiction’s specific needs.

  • Developing a plan for structuring the administrative core of a planned E-CORE RII project to allow for the transitioning of an EPSCoR State Office to the administrative core.
  • Reviewing the existing research infrastructure in the jurisdiction that is needed to address the chosen focus area of the planning proposal, including an analysis of the personnel and equipment already available in the jurisdiction, or what personnel and equipment would need to be acquired to do the future work.
  • Determining the future work's critical path and the timeline for when the needed infrastructure would be in place to ensure the overall success of the future project.
  • Developing a detailed schematic illustrating how the future project would involve a coordinated, collaborative approach to the proposed problem, including using multiple investigators and organizations.
  • Creating a logic model to describe the shared relationships among the resources, activities, outputs, outcomes, and impacts of the future project.
  • Analyzing the potential sustainability of efforts, particularly in terms of commitments from the jurisdiction to sustain infrastructure after completion of the E-RISE RII or E-CORE RII award.
  • Developing a management plan for the future project that includes human resource management, particularly in showing how potential new faculty hires would be included in the project plan, and a risk analysis of how the project would succeed if the required new faculty could not be hired for any reason.
  • Ascertaining resources available at institutions across the jurisdiction, including research-intensive universities, primarily undergraduate institutions, community colleges, minority-serving institutions, and tribal colleges and universities, indicating how the chosen institutions could best fit into a four-year project as full-time, part-time, or seasonal research partners and/or sites of workforce development in the topic area of the project.
  • Determining the baseline demographics of science, technology, engineering, and mathematics (STEM) participation in the jurisdiction and planning for increasing the participation from the full spectrum of diverse talent that society has to offer, which includes underrepresented and underserved communities, in the future project.

POINTS OF CONTACT

Questions about this DCL may be directed to:

Sandra Richardson, Section Head, Research Capacity and Competitiveness, U.S. National Science Foundation

Alicia Knoedler, Office Head, Office of Integrative Activities, U.S. National Science Foundation


  12. Sampling: how to select participants in my research study?

    TO SAMPLE OR NOT TO SAMPLE. In a previous paper, we discussed the necessary parameters on which to estimate the sample size. 1 We define sample as a finite part or subset of participants drawn from the target population. In turn, the target population corresponds to the entire set of subjects whose characteristics are of interest to the research team.

  13. Social Science Research: Principles, Methods and Practices (Revised

    Sampling is the statistical process of selecting a subset—called a 'sample'—of a population of interest for the purpose of making observations and statistical inferences about that population. Social science research is generally about inferring patterns of behaviours within specific populations. We cannot study entire populations because of feasibility and cost constraints, and hence ...

  14. Sampling in Research

    The main purpose of sampling in research is to make the research process doable. The research sample helps to reduce bias, accurately present the population and is cost-effective.

  15. Research Methodology

    Qualitative Research Methodology. This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

  16. Research Design

    Table of contents. Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies.

  17. A Practical Guide to Writing Quantitative and Qualitative Research

    INTRODUCTION. Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  18. Qualitative vs. Quantitative: Key Differences in Research Types

    Examples of How to Use Qualitative Research. The flexibility of qualitative research allows researchers to adapt their methods based on emerging insights, fostering a more organic and holistic exploration of the research topic. This is a widely used method in social sciences, psychology and market research. ...

  19. Office of Science Policy released sample consent language for Digital

    To assist in the responsible deployment of digital health technologies, OSP, through an NIH-wide collaboration, has developed and released Informed Consent for Research Using Digital Health Technologies: Points to Consider & Sample Language. This resource presents general points to consider, instructions for use, and optional sample language for the research community.

  20. Seven models of undergraduate research for student success

    To enhance the student experience and increase access to experiential learning, colleges and universities have gotten creative with undergraduate research experiences. Undergraduate research opportunities are one way to provide experiential learning in many disciplines, introducing learners to research methods under the supervision of a faculty member and providing experience for a résumé.

  21. Introduction to Social Exchange Theory in Social Work With Examples in

    Here's a simple breakdown of how the social exchange theory works: Costs: These are the negatives or drawbacks of a relationship or interaction, like time, effort, or emotional strain. Benefits: These are the positives or rewards, such as support, friendship, or resources. In social work, practitioners use this theory to analyze and guide ...

  22. What is Natural Language Processing? Definition and Examples

    Natural language processing (NLP) is a subset of artificial intelligence, computer science, and linguistics focused on making human communication, such as speech and text, comprehensible to computers. NLP is used in a wide variety of everyday products and services. Some of the most common ways NLP is used are through voice-activated digital ...

  23. What is cloud computing: Its uses and benefits

    August 17, 2022 | Article. Cloud computing is the use of comprehensive digital capabilities delivered via the internet for organizations to operate, innovate, and serve customers. It eliminates the need for organizations to host digital applications on their own servers. Group of white spheres on light blue background.

  24. Onboarding New Employees in a Hybrid Workplace

    Summary. As you're navigating hybrid work, it's a good moment to assess how your onboarding processes enable or empower your new hires to thrive. Researchers at Microsoft have conducted and ...

  25. Dear Colleague Letter: Planning Proposals for the NSF Established

    June 7, 2024. Dear Colleagues: With this Dear Colleague Letter (DCL), NSF EPSCoR announces its intent to accept planning proposals to support planning of future submissions to the E-RISE RII or E-CORE RII programs. E-RISE RII focuses on the development and sustainability of an EPSCoR-eligible jurisdiction's research capacity and competitiveness in a scientific topic area by supporting the ...