Survey Research – Types, Methods, Examples

Survey Research

Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Survey Research Methods are as follows:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • IVR surveys: A survey research method where questions are administered to respondents through an interactive voice response system, often used in automated customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The following are the main types of survey research:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, surveys are divided into two types:

  • Quantitative survey research
  • Qualitative survey research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several survey research data analysis methods that researchers may use, including the following (a short, illustrative analysis sketch in Python appears after the list):

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
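
A minimal, illustrative sketch of the first two methods (descriptive and simple inferential statistics) is shown below in Python, using pandas and SciPy. The column names and responses are hypothetical, invented only to show the shape of the analysis.

```python
# Minimal sketch of descriptive and simple inferential analysis of survey data.
# The DataFrame below stands in for real survey responses; all values are hypothetical.
import pandas as pd
from scipy import stats

responses = pd.DataFrame({
    "age": [23, 35, 41, 29, 52, 34, 47, 38],
    "satisfaction": [4, 5, 3, 4, 2, 5, 3, 4],        # 1-5 rating item
    "daily_usage_hours": [3, 5, 2, 4, 1, 6, 2, 3],
})

# Descriptive statistics: mean, standard deviation, quartiles, etc.
print(responses.describe())

# Frequency distribution of a single closed-ended item
print(responses["satisfaction"].value_counts().sort_index())

# A simple inferential step: is satisfaction associated with daily usage?
r, p_value = stats.pearsonr(responses["satisfaction"], responses["daily_usage_hours"])
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```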

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-time examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

There are certain circumstances where survey research is particularly appropriate. Here are some situations where survey research may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity.
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis (a brief analysis sketch follows this list).
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.
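
To make the analysis step more concrete, here is a minimal sketch of the frequency distribution and regression analysis mentioned above, using hypothetical education and income responses; it is illustrative only, not a prescribed workflow.

```python
# Minimal sketch of the "Analyze the data" step: a frequency distribution
# and a simple linear regression. The data is invented for illustration.
import pandas as pd
from scipy import stats

survey = pd.DataFrame({
    "years_of_education": [12, 16, 14, 18, 12, 20, 16, 14],
    "income_thousands":   [35, 52, 44, 70, 30, 85, 60, 41],
})

# Frequency distribution of one variable
print(survey["years_of_education"].value_counts().sort_index())

# Simple regression: does education predict income in this (tiny) sample?
result = stats.linregress(survey["years_of_education"], survey["income_thousands"])
print(f"slope = {result.slope:.2f}, r^2 = {result.rvalue ** 2:.2f}, p = {result.pvalue:.4f}")
```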

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

Following is an example of a Survey Sample:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  •  A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

6. Have you ever participated in a survey research study before?

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

________________________________________

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.

Survey Research: Definition, Examples and Methods

Survey Research

Survey Research is a quantitative research method used for collecting data from a set of respondents. It has been perhaps one of the most used methodologies in the industry for several years due to the multiple benefits and advantages that it has when collecting and analyzing data.

In this article, you will learn everything about survey research, such as types, methods, and examples.

Survey Research Definition

Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization is eager to understand what its customers think about its products or services so it can make better business decisions. Researchers can conduct research in multiple ways, but surveys are proven to be one of the most effective and trustworthy research methods. An online survey is a method for extracting information about a significant business matter from an individual or a group of individuals. It consists of structured survey questions that motivate the participants to respond. Credible survey research can give businesses access to a vast information bank. Organizations in media, other companies, and even governments rely on survey research to obtain accurate data.

The traditional definition of survey research is a quantitative method for collecting information from a pool of respondents by asking multiple survey questions. This research type includes the recruitment of individuals and the collection and analysis of data. It’s useful for researchers who aim to communicate new features or trends to their respondents.

Generally, it’s the primary step towards obtaining quick information about mainstream topics; more rigorous and detailed quantitative research methods like surveys/polls, or qualitative research methods like focus groups/on-call interviews, can then follow. There are many situations where researchers can conduct research using a blend of both qualitative and quantitative strategies.

Survey Research Methods

Survey research methods can be classified based on two critical factors: the survey research tool and the time involved in conducting the research. There are three main survey research methods, divided based on the medium of conducting survey research:

  • Online/Email: Online survey research is one of the most popular survey research methods today. The survey cost involved in online survey research is extremely minimal, and the responses gathered are highly accurate.
  • Phone: Survey research conducted over the telephone (CATI survey) can be useful in collecting data from a more extensive section of the target population. The money invested in phone surveys tends to be higher than in other mediums, and so is the time required.
  • Face-to-face:  Researchers conduct face-to-face in-depth interviews in situations where there is a complicated problem to solve. The response rate for this method is the highest, but it can be costly.

Further, based on the time taken, survey research can be classified into two methods:

  • Longitudinal survey research: Longitudinal survey research involves conducting survey research over a continuum of time, spread across years and decades. The data collected using this survey research method from one time period to another is qualitative or quantitative. Respondent behavior, preferences, and attitudes are continuously observed over time to analyze reasons for a change in behavior or preferences. For example, suppose a researcher intends to learn about the eating habits of teenagers. In that case, he/she will follow a sample of teenagers over a considerable period to ensure that the collected information is reliable. Often, cross-sectional survey research follows a longitudinal study.
  • Cross-sectional survey research:  Researchers conduct a cross-sectional survey to collect insights from a target audience at a particular time interval. This survey research method is implemented in various sectors such as retail, education, healthcare, SME businesses, etc. Cross-sectional studies can either be descriptive or analytical. It is quick and helps researchers collect information in a brief period. Researchers rely on the cross-sectional survey research method in situations where descriptive analysis of a subject is required.

Survey research is also bifurcated according to the sampling methods used to form samples for research: probability and non-probability sampling. Ideally, every individual in a population should have an equal chance of being considered for the survey research sample. Probability sampling is a sampling method in which the researcher chooses the elements based on probability theory. There are various probability sampling methods, such as simple random sampling, systematic sampling, cluster sampling, stratified random sampling, etc. Non-probability sampling is a sampling method where the researcher uses his/her knowledge and experience to form samples.

The various non-probability sampling techniques are as follows (a small sampling illustration appears after the list):

  • Convenience sampling
  • Snowball sampling
  • Consecutive sampling
  • Judgemental sampling
  • Quota sampling
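
To illustrate the difference between the two families of methods, the short sketch below draws a simple random sample (probability sampling) and a convenience sample (non-probability sampling) from a hypothetical sampling frame; the frame and the sample size of 50 are assumptions made purely for the example.

```python
# Illustrative contrast between probability and non-probability sampling.
# The "sampling frame" is just a list of hypothetical respondent IDs.
import random

random.seed(42)
sampling_frame = [f"respondent_{i:04d}" for i in range(1, 1001)]   # 1,000 people

# Probability sampling: simple random sampling gives every individual
# an equal, known chance of being selected.
simple_random_sample = random.sample(sampling_frame, k=50)

# Non-probability sampling: convenience sampling takes whoever is easiest
# to reach, e.g. the first 50 people who happen to be available.
convenience_sample = sampling_frame[:50]

print(simple_random_sample[:5])
print(convenience_sample[:5])
```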

Process of implementing survey research methods:

  • Decide survey questions: Brainstorm and put together valid survey questions that are grammatically and logically appropriate. Understanding the objective and expected outcomes of the survey helps a lot. There are many surveys where details of responses are not as important as gaining insights about what customers prefer from the provided options. In such situations, a researcher can include multiple-choice or closed-ended questions. If, on the other hand, researchers need to obtain details about specific issues, they can include open-ended questions in the questionnaire. Ideally, the surveys should include a smart balance of open-ended and closed-ended questions. Use survey questions like the Likert scale, semantic scale, Net Promoter Score question, etc., to avoid fence-sitting.

  • Finalize a target audience: Send out relevant surveys as per the target audience and filter out irrelevant questions as per the requirement. The survey research will be instrumental only if the sample is drawn from the intended target population. This way, results can reflect the desired market and be generalized to the entire population.

  • Send out surveys via decided mediums: Distribute the surveys to the target audience and patiently wait for the feedback and comments; this is the most crucial step of the survey research. The survey needs to be scheduled, keeping in mind the nature of the target audience and its regions. Surveys can be conducted via email, embedded in a website, shared via social media, etc., to gain maximum responses.
  • Analyze survey results: Analyze the feedback in real-time and identify patterns in the responses which might lead to a much-needed breakthrough for your organization. GAP analysis, TURF analysis, conjoint analysis, cross tabulation, and many other survey feedback analysis methods can be used to spot and shed light on respondent behavior (a small cross-tabulation sketch follows this list). Researchers can use the results to implement corrective measures to improve customer/employee satisfaction.
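
As a small illustration of the cross tabulation mentioned in the last step, the sketch below tabulates two hypothetical categorical answers and runs a chi-square test of independence; the segments, answers, and counts are all made up.

```python
# Minimal cross-tabulation sketch for survey feedback analysis.
# Segments and answers are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

feedback = pd.DataFrame({
    "customer_segment": ["new", "returning", "new", "returning", "new",
                         "returning", "new", "returning", "new", "returning"],
    "would_recommend":  ["yes", "yes", "no", "yes", "no",
                         "yes", "yes", "yes", "no", "yes"],
})

# Cross tabulation: counts for each segment/answer combination
table = pd.crosstab(feedback["customer_segment"], feedback["would_recommend"])
print(table)

# Chi-square test of independence between the two categorical variables
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```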

Reasons to conduct survey research

The most crucial and integral reason for conducting market research using surveys is that you can collect answers regarding specific, essential questions. You can ask these questions in multiple survey formats as per the target audience and the intent of the survey. Before designing a study, every organization must figure out the objective of carrying this out so that the study can be structured, planned, and executed to perfection.

Questions that need to be on your mind while designing a survey are:

  • What is the primary aim of conducting the survey?
  • How do you plan to utilize the collected survey data?
  • What type of decisions do you plan to take based on the points mentioned above?

There are three critical reasons why an organization must conduct survey research.

  • Understand respondent behavior to get solutions to your queries: If you’ve carefully curated a survey, the respondents will provide insights about what they like about your organization as well as suggestions for improvement. To motivate them to respond, you must be very vocal about how secure their responses will be and how you will utilize the answers. This will push them to be 100% honest about their feedback, opinions, and comments. Online and mobile surveys have proven to protect respondent privacy, and because of this, more and more respondents feel free to put forth their feedback through these mediums.
  • Present a medium for discussion: A survey can be the perfect platform for respondents to provide criticism or applause for an organization. Important topics like product quality or quality of customer service can be put on the table for discussion. One way to do this is by including open-ended questions where the respondents can write their thoughts. This will make it easy for you to correlate your survey to what you intend to do with your product or service.
  • Strategy for never-ending improvements: An organization can establish the target audience’s attributes from the pilot phase of survey research. Researchers can use the criticism and feedback received from this survey to improve the product/services. Once the company successfully makes the improvements, it can send out another survey to measure the change in feedback, keeping the pilot phase as the benchmark. By doing this activity, the organization can track what was effectively improved and what still needs improvement.

Survey Research Scales

There are four main scales for the measurement of variables (a brief illustration appears after the list):

  • Nominal Scale:  A nominal scale associates numbers with variables for mere naming or labeling, and the numbers usually have no other relevance. It is the most basic of the four levels of measurement.
  • Ordinal Scale:  The ordinal scale has an innate order within the variables along with labels. It establishes the rank between the variables of a scale but not the difference value between the variables.
  • Interval Scale:  The interval scale is a step ahead in comparison to the other two scales. Along with establishing a rank and name of variables, the scale also makes known the difference between the two variables. The only drawback is that there is no fixed start point of the scale, i.e., the actual zero value is absent.
  • Ratio Scale:  The ratio scale is the most advanced measurement scale, which has variables that are labeled in order and have a calculated difference between variables. In addition to the properties of the interval scale, this scale has a fixed starting point, i.e., the true zero value is present.
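
One practical way to see the difference between these scales is how the corresponding survey responses are stored for analysis. The sketch below, with hypothetical values, keeps nominal data as unordered categories, ordinal data as ordered categories, and interval/ratio data as numbers.

```python
# Illustrative mapping of the four measurement scales onto pandas data types.
# All values are hypothetical survey responses.
import pandas as pd

df = pd.DataFrame({
    # Nominal: labels with no order (you can count them, but not rank them)
    "region": pd.Categorical(["north", "south", "east", "south"]),
    # Ordinal: ordered labels (rank is meaningful, distances between ranks are not)
    "satisfaction": pd.Categorical(
        ["low", "high", "medium", "high"],
        categories=["low", "medium", "high"], ordered=True),
    # Interval: numeric with no true zero (the classic example is temperature in Celsius)
    "room_temp_c": [19.5, 22.0, 21.0, 23.5],
    # Ratio: numeric with a true zero (e.g. minutes spent on a task)
    "minutes_on_site": [0, 12, 7, 25],
})

print(df["region"].value_counts())       # counting categories suits nominal data
print(df["satisfaction"].max())          # ordering (min/max) suits ordinal data
print(df["minutes_on_site"].mean())      # means and ratios suit ratio data
```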

Benefits of survey research

When survey research is used for the right purposes and implemented properly, marketers gain useful, trustworthy data that they can use to improve the organization’s ROI.

Other benefits of survey research are:

  • Minimum investment:  Mobile surveys and online surveys have minimal finance invested per respondent. Even with the gifts and other incentives provided to the people who participate in the study, online surveys are extremely economical compared to paper-based surveys.
  • Versatile sources for response collection: You can conduct surveys via various mediums like online and mobile surveys. You can further classify them into qualitative mediums like focus groups and interviews, and quantitative mediums like customer-centric surveys. Due to the offline survey response collection option, researchers can conduct surveys in remote areas with limited internet connectivity. This can make data collection and analysis more convenient and extensive.
  • Reliable for respondents:  Surveys are extremely secure as the respondent details and responses are kept safeguarded. This anonymity makes respondents answer the survey questions candidly and with absolute honesty. An organization seeking to receive explicit responses for its survey research must mention that it will be confidential.

Survey research design

Researchers implement a survey research design in cases where there is a limited cost involved and there is a need to access details easily. This method is often used by small and large organizations to understand and analyze new trends, market demands, and opinions. Collecting information through tactfully designed survey research can be much more effective and productive than a casually conducted survey.

There are five stages of survey research design:

  • Decide an aim of the research:  There can be multiple reasons for a researcher to conduct a survey, but they need to decide a purpose for the research. This is the primary stage of survey research as it can mold the entire path of a survey, impacting its results.
  • Filter the sample from the target population: “Who to target?” is an essential question that a researcher should answer and keep in mind while conducting research. The precision of the results is driven by who the members of a sample are and how useful their opinions are. The quality of respondents in a sample matters more to the results than the quantity. If a researcher seeks to understand whether a product feature will work well with their target market, he/she can conduct survey research with a group of market experts for that product or technology.
  • Zero-in on a survey method:  Many qualitative and quantitative research methods can be discussed and decided. Focus groups, online interviews, surveys, polls, questionnaires, etc. can be carried out with a pre-decided sample of individuals.
  • Design the questionnaire:  What will the content of the survey be? A researcher is required to answer this question to be able to design it effectively. What will the content of the cover letter be? Or what are the survey questions of this questionnaire? Understand the target market thoroughly to create a questionnaire that targets a sample to gain insights about a survey research topic.
  • Send out surveys and analyze results: Once the researcher decides on which questions to include in a study, they can send it across to the selected sample. Answers obtained from this survey can be analyzed to make product-related or marketing-related decisions.

Survey examples: 10 tips to design the perfect research survey

Picking the right survey design can be the key to gaining the information you need to make crucial decisions for all your research. It is essential to choose the right topic, choose the right question types, and pick a corresponding design. If this is your first time creating a survey, it can seem like an intimidating task. But with QuestionPro, each step of the process is made simple and easy.

Below are 10 Tips To Design The Perfect Research Survey:

  • Set your SMART goals:  Before conducting any market research or creating a particular plan, set your SMART Goals . What is that you want to achieve with the survey? How will you measure it promptly, and what are the results you are expecting?
  • Choose the right questions:  Designing a survey can be a tricky task. Asking the right questions may help you get the answers you are looking for and ease the task of analyzing. So, always choose those specific questions – relevant to your research.
  • Begin your survey with a generalized question:  Preferably, start your survey with a general question to understand whether the respondent uses the product or not. That also provides an excellent base and intro for your survey.
  • Enhance your survey:  Choose the best, most relevant, 15-20 questions. Frame each question as a different question type based on the kind of answer you would like to gather from each. Create a survey using different types of questions such as multiple-choice, rating scale, open-ended, etc. Look at more survey examples and four measurement scales every researcher should remember.
  • Prepare yes/no questions:  You may also want to use yes/no questions to separate people or branch them into groups of those who “have purchased” and those who “have not yet purchased” your products or services. Once you separate them, you can ask them different questions.
  • Test all electronic devices:  It becomes effortless to distribute your surveys if respondents can answer them on different electronic devices like mobiles, tablets, etc. Once you have created your survey, it’s time to TEST. You can also make any corrections if needed at this stage.
  • Distribute your survey: Once your survey is ready, it is time to share and distribute it to the right audience. You can share handouts and distribute the survey via email, social media, and other industry-related offline/online communities.
  • Collect and analyze responses:  After distributing your survey, it is time to gather all responses. Make sure you store your results in a particular document or an Excel sheet with all the necessary categories mentioned so that you don’t lose your data. Remember, this is the most crucial stage. Segregate your responses based on demographics, psychographics, and behavior. This is because, as a researcher, you must know where your responses are coming from. It will help you to analyze, predict decisions, and help write the summary report.
  • Prepare your summary report: Now is the time to share your analysis. At this stage, you should mention all the responses gathered from the survey in a fixed format. The reader/customer must also get clarity about the goal you were trying to achieve with the study. Address questions such as: has the product or service been used and preferred or not? Do respondents prefer one product over another? Are there any recommendations?

Having a tool that helps you carry out all the necessary steps to carry out this type of study is a vital part of any project. At QuestionPro, we have helped more than 10,000 clients around the world to carry out data collection in a simple and effective way, in addition to offering a wide range of solutions to take advantage of this data in the best possible way.

From dashboards, advanced analysis tools, automation, and dedicated functions, in QuestionPro, you will find everything you need to execute your research projects effectively. Uncover insights that matter the most!

Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes . Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.

Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
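
If you prefer not to rely on an online calculator, the formula behind most of them can be sketched directly. The example below uses Cochran's sample size formula for a proportion with a finite population correction; the 95% confidence level, 5% margin of error, and population size are illustrative assumptions, not recommendations.

```python
# Sketch of a common sample size calculation: Cochran's formula for a proportion,
# followed by a finite population correction. All inputs are illustrative.
import math

def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Estimate how many respondents are needed for a given margin of error.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most conservative
    assumption about the true proportion being estimated.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                    # finite population correction
    return math.ceil(n)

# e.g. surveying university students at a single institution (hypothetical size)
print(required_sample_size(population=20_000))   # roughly 377 respondents
```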

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • Pro: You can easily access a large sample.
  • Pro: You have some control over who is included in the sample (e.g., residents of a specific region).
  • Con: The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • Pro: You can quickly access a large sample without constraints on time or location.
  • Pro: The data is easy to process and analyse.
  • Con: The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • Pro: You can screen respondents to make sure only people in the target population are included in the sample.
  • Pro: You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • Con: The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Interviews

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • Pro: You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • Pro: You can clarify questions and ask for follow-up information when necessary.
  • Con: The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
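
A very small sketch of what coding open-ended answers can look like in practice is shown below; the keyword-to-theme mapping and the responses are entirely hypothetical, and real studies usually rely on careful manual coding or dedicated qualitative analysis software rather than simple keyword matching.

```python
# Toy sketch of coding open-ended survey responses into themes.
# The keyword-to-theme mapping is hypothetical and deliberately simplistic.
from collections import Counter

theme_keywords = {
    "price":     ["expensive", "cheap", "cost", "price"],
    "support":   ["support", "helpdesk", "staff", "service"],
    "usability": ["easy", "difficult", "confusing", "intuitive"],
}

responses = [
    "The product is too expensive for what it does.",
    "Support staff were friendly and quick to help.",
    "I found the interface confusing at first.",
    "Great service, but the price is high.",
]

def code_response(text):
    """Assign each response every theme whose keywords appear in it."""
    text = text.lower()
    themes = [theme for theme, words in theme_keywords.items()
              if any(word in text for word in words)]
    return themes or ["uncoded"]

codes = [code_response(r) for r in responses]
print(codes)
print(Counter(code for item in codes for code in item))   # theme frequencies
```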

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
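
As a brief, hypothetical illustration of how Likert-type items are combined into an overall scale score, the sketch below sums four five-point items after reverse-coding one negatively worded item; the items and responses are invented for the example.

```python
# Sketch of scoring a Likert scale: four 1-5 items combined into one score.
# Items and responses are hypothetical; item_2 is negatively worded, so it is
# reverse-coded before the items are summed.
import pandas as pd

items = pd.DataFrame({
    "item_1": [5, 4, 2, 3],   # e.g. "I enjoy using the product."
    "item_2": [1, 2, 4, 3],   # e.g. "The product frustrates me." (negative wording)
    "item_3": [4, 5, 2, 3],
    "item_4": [5, 3, 1, 4],
})

items["item_2"] = 6 - items["item_2"]                      # reverse-code on a 1-5 scale
items["scale_score"] = items[["item_1", "item_2", "item_3", "item_4"]].sum(axis=1)

print(items["scale_score"])              # one overall attitude score per respondent
print(items["scale_score"].describe())   # often treated as approximately interval data
```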

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 31 May 2024, from https://www.scribbr.co.uk/research-methods/surveys/

Is this article helpful?

Shona McCombes

Shona McCombes

Other students also liked, qualitative vs quantitative research | examples & methods, construct validity | definition, types, & examples, what is a likert scale | guide & examples.

What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you’re successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall.

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.


Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from — face-to-face interviews, telephone surveys, focus groups (though these are closer to interviews than surveys), online surveys, and panel surveys.

Typically, the survey method you choose will largely be guided by who you want to survey, the size of your sample, your budget, and the type of information you’re hoping to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard — the only reason they weren’t as popular was their highly prohibitive cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys have been popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face research surveys or run focus groups, or when you want to ask multiple-choice and open-ended questions.

The downsides are that phone surveys can take a long time to complete, depending on the response rate, and that you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest; instead, they’ll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys — even though they’re often bunched together. They’re normally used for survey pretesting and design, but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research, but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods because they are cost-effective and enable researchers to survey a large population quickly and accurately.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is that, because online surveys require access to a computer or mobile device to complete, they could exclude elderly members of the population who don’t have access to the technology — or don’t know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.


Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be the workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population — giving you balance across criteria such as age, gender, background, and so on.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives (e.g. discounts, coupons, or money), respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer and analyze. As long as the planning and survey design are thorough and you target the right audience, the data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

With survey software, you can apply advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale.

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest and most effective way to measure survey results is to use a dedicated research tool that puts all of your survey results into one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale, you can only keep track of how many respondents chose each option from a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.

Ordinal scale

Ordinal scales are used to judge an order of preference. They do provide some level of quantitative value because you’re asking respondents to express a preference for one option over another.

Ratio scale

Ratio scales can be used to judge the order and difference between responses. For example, asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are lined up in order with a meaningful, even difference between values but no true zero point — for example, temperature in degrees Celsius or a credit score.
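
As a rough illustration of how the measurement type drives the analysis, the hypothetical sketch below pairs each scale with a summary statistic that is meaningful for it (column names are invented):

import pandas as pd

df = pd.read_csv("responses_clean.csv")

print(df["brand"].mode()[0])           # nominal: the most frequently chosen option
print(df["preference_rank"].median())  # ordinal: the middle-ranked preference
print(df["weekly_spend"].mean())       # ratio: averages and ratios are meaningful
print(df["credit_score"].mean())       # interval: averages and differences are meaningful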

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you’ve got your hypotheses or assumptions, write out the questions you need answering to test your theories or beliefs. Be wary about framing questions that could lead respondents or inadvertently create biased responses.

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data with some qualitative responses from open-ended questions. Using a mix of questions (simple yes/no, multiple-choice, rank in order, etc.) not only increases the reliability of your data but also reduces survey fatigue, where respondents simply answer questions quickly without thinking.


Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g. have a random internal group do the survey) and carry out A/B tests to ensure you’ll gain accurate responses.

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.
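
A simple way to keep an eye on the response rate is to divide completed responses by invitations sent; a minimal sketch with hypothetical figures:

# Monitor the response rate of a deployed survey (figures are hypothetical).
invitations_sent = 2000
completed_responses = 430

response_rate = completed_responses / invitations_sent
print(f"Response rate: {response_rate:.1%}")   # prints "Response rate: 21.5%"

if response_rate < 0.20:
    print("Consider a reminder wave or a second sample group.")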

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data to compare to your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business. Data gathered from surveys can prove invaluable for understanding your products and your market position, and with survey software from Qualtrics, it couldn’t be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.

Ready to find out more about Qualtrics CoreXM?

Get started with our free survey maker tool today


The SAGE Handbook of Survey Methodology


  • Christof Wolf - GESIS - Leibniz Institute for the Social Sciences, Germany
  • Dominique Joye - University of Lausanne, Switzerland
  • Tom W Smith - University of Chicago, USA
  • Yang-chih Fu - Academia Sinica

Survey Methodology is becoming a more structured field of research, deserving of more and more academic attention. The SAGE Handbook of Survey Methodology explores both the increasingly scientific endeavour of surveys and their growing complexity, as different data collection modes and information sources are combined.

The handbook takes a global approach, with a team of international experts looking at local and national specificities, as well as problems of cross-national, comparative survey research. The chapters are organized into seven major sections, each of which represents a stage in the survey life-cycle:

  • Surveys and Societies
  • Planning a Survey
  • Measurement
  • Data Collection
  • Preparing Data for Use
  • Assessing and Improving Data Quality

The SAGE Handbook of Survey Methodology is a landmark and essential tool for any scholar within the social sciences.  


This exciting new handbook provides a comprehensive guide to conducting survey research studies.  Authored by a distinguished set of experts from across the globe, chapters cover all phases of survey projects—from conceiving, designing and planning, through sampling, measurement, and data collection, to post-fieldwork data processing, data augmentation, and quality assessment.  Extensive attention to multi-national or cross-cultural survey projects is a special asset:  both challenges to comparability in such studies, and current practices to achieving it in vital domains including sampling, translation, and harmonization of data over space and time, are treated in depth.

The environment for conducting surveys has changed substantially in the last several decades.  New approaches to data collection and data dissemination are required, but the approaches must be adapted depending on country and culture.  This Sage handbook is an up-to-date reference that covers virtually all steps in the survey life cycle.  It contains methodological topics like survey modes, surveying in multicultural and cross-cultural contexts, questionnaire design and pretesting, and responsive design, along with statistical topics like sampling, weighting, analysis of complex surveys, and record linkage.  Coverage of distinctive areas, like ethical issues, surveys in conflict areas, question translation, and harmonization of questions across cultures, also set this book apart from others.  The handbook will be a valuable reference for survey practitioners and other professionals who need a broad overview of the many steps required to conduct high quality surveys. 

The new Sage Handbook broadens our view of survey research by including topics important for data sharing and reuse.  Data are expensive public resources, and we need more attention on issues like disclosure risk, harmonization, life-cycle documentation, and preservation.

" The SAGE Handbook of Survey Methodology is a useful resource for researchers and individuals who want to learn about the process of surveying. In an era where surveying is so hotly debated, the handbook comes at an important moment in history. This book is recommended for academic libraries."

"This thorough volume provides extensive information on understanding and creating surveys. Its 43 chapters cover important topics such as types of surveys (face-to-face, mail, internet), designing questions for surveys, sampling, administering surveys, handling ethical issues, combining administrative and survey data, and analyzing data from surveys... Summing Up: Highly recommended. Faculty and professionals"

The SAGE Handbook of Survey Methodology is a valuable compendium and reference reflecting a broad range of expertise from the survey world and beyond. The authors of the 43 chapters represent a diverse group of institutions, mainly from the United States and Europe, and bring a deep expertise to their given “assignments”. For those whose research depends on the quality of survey data, especially in an international context, the Handbook is a solid and dependable anchor.


Surveys & Questionnaires

Surveys involve asking a series of questions to participants. They can be administered online, in person, or remotely (e.g. by post/mail). The data collected can be analysed quantitatively or qualitatively (or both). Researchers might carry out statistical surveys to make statistical inferences about the population being studied. Such inferences depend strongly on the survey questions used (Solomon, 2001) meaning that getting the wording right is crucial. For this reason, many test out surveys in pilot studies with smaller populations and use the results to refine their survey instrument.

Sampling for surveys can range from self-selection (e.g. where a link is shared with members of a target population in the hope they and others contribute data and share the survey) through to the use of specialised statistical techniques (“probability sampling”) that analyse results from a carefully selected sample to draw statistical conclusions about the wider population. Survey methodologies therefore cover a range of considerations including sampling, research instrument design, improving response rates, ensuring quality in data, and methods of analysis (Groves et al., 2011).

One common approach is to collect quantitative data alongside qualitative questions, which allows respondents to provide a more detailed description or justification for the answer given. Collecting ordinal data (e.g. ranking of preferences through a Likert scale) can be a way to make qualitative data more amenable to quantitative analysis. But there is no one superior approach: the crucial thing is that the survey questions and their phrasing align correctly with the research question(s).

Surveys are widely used in education science and in the social sciences more generally. Surveys are highly efficient (both in terms of time and money) compared with other methods, and can be administered remotely. They can provide a series of data points on a subject which can be compared across the sample group(s). This provides a considerable degree of flexibility when it comes to analysing data as several variables may be tested at once. Surveys also work well when used alongside other methods, perhaps to provide a baseline of data (such as demographics) for the first step in a research study. They are also commonly used in evaluations of teaching & learning (i.e. after an intervention to assess the impact).

However, there are some noteworthy disadvantages to using surveys. Respondents may not feel encouraged to provide accurate answers, or may not feel comfortable providing answers that present themselves in an unfavourable manner (particularly if the survey is not anonymous). “Closed” questions may have a lower validity rate than other question types as they might be interpreted differently. Data errors due to question non-response may exist, creating bias. Survey answer options should be selected carefully because they may be interpreted differently by respondents (Vehovar & Manfreda, 2008).

Surveys & Questionnaires: GO-GN Insights

Marjon Baas collected quantitative data through a questionnaire among teachers within an OER Community of Practice to explore the effect of the activities undertaken to encourage the use of the community on teachers’ behaviour in relation to OER.

“I used several theoretical models (Clements and Pawlowski, 2012; Cox and Trotter, 2017; Armellini and Nie, 2013) to conceptualise different aspects (that relate to) OER adoption. This enabled me as a researcher to design my specific research instruments.”

Judith Pete deliberately selected twelve Sub-Saharan African universities across Kenya, Ghana and South Africa, with randomly sampled students and lecturers, to develop a representative view of OER. Separate questionnaires were used for students (n=2249) and lecturers (n=106).

“We used surveys to collect data across three continents. Online survey tools were very helpful in online data collection and, where that was not possible, local coordinators used physical copies of the survey and later entered the information into the database. This approach was cost-effective, versatile and quick and easy to implement. We were able to reach a wide range of respondents in a short time. Sometimes we wondered, though, whether all those who responded had enough time to fully process and understand the questions that they were being asked. We had to allocate a significant amount of time to curating the data afterwards.”

Samia Almousa adopted the Unified Theory of Acceptance and Use of Technology (UTAUT) survey questionnaire, along with additional constructs (relating to information quality and culture), as a lens through which her research data is analysed.

“In my research, I have employed a Sequential Explanatory Mixed Methods Design (online questionnaires and semi-structured interviews) to examine the academics’ perceptions of OERs integration into their teaching practices, as well as to explore the motivations that encourage them to use and reuse OERs, and share their teaching materials in the public domain. The online questionnaire was an efficient and fast way to reach a large number of academics. I used the online survey platform, which does not require entering data or coding as data is input by the participants and answers are saved automatically (Sills & Song, 2002). Using questionnaires as a data collection tool has some drawbacks. In my study, the questionnaire I developed was long, which made some participants choose their answers randomly. In addition, I have received many responses from academics in other universities although the questionnaire was sent to the sample university. Since I expected this to happen, I required the participants to write the name of their university in the personal information section of the questionnaire, then excluded the responses from outside the research sample. My advice for any researcher attempting to use questionnaires as a data collection tool is to ensure that their questionnaire is as short and clear as possible to help the researcher in analysing the findings and the participants in answering all questions accurately. Additionally, personal questions should be as few as possible to protect the identity and privacy of the participants, and to obtain the ethical approval quickly.”

Olawale Kazeeem Iyikolakan adopted a descriptive survey of the correlational type. This research design examines the relationship among the key research variables (technological self-efficacy, perception, and use of open educational resources) and identifies the most significant factors that influence the academic performance of LIS undergraduates, without assuming a causal connection.

“The descriptive research design is used as a gathering of information about prevailing conditions or situations for the purpose of description and interpretation (Aggarwal, 2008). My research design examines the relationship among the key research variables (technological self-efficacy, perception, and use of open educational resources) to identify the most significant factors that influence academic performance of Library & Information Science undergraduates without a causal connection. Ponto (2015) describes that descriptive survey research is a useful and legitimate approach to research that has clear benefits in helping to describe and explore variables and constructs of interest by using quantitative research strategies (e.g., using a survey with numerically rated items). The reason for the choice of descriptive survey research instead of ex-post-facto quasi-experimental design is that this type of research design is used to capture people’s perceptions, views, use, about a current issue, current state of play or movements such as perception and use of OER. This research design comes with several merits as it enables the researcher to obtain the needed primary data directly from the respondents. Other advantages include: (1) Using this method, the researcher has no control over the variable; (2) the researcher can only report what has happened or what is happening. One of the demerits of this type of research design is that research results may reflect a certain level of bias due to the absence of statistical tests.”

Useful references for Surveys & Questionnaires: Aggarwal (2008); Fowler (2014); Groves et al. (2011); Lefever, Dal & Matthíasdóttir (2007); Ponto (2015); Sills & Song (2002); Solomon (2001); Vehovar & Manfreda (2008); Vehovar, Manfreda, & Berzelak (2018)

Research Methods Handbook Copyright © 2020 by Rob Farrow; Francisco Iniesto; Martin Weller; and Rebecca Pitt is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


9 Survey research

Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, the survey as a formal research method was pioneered in the 1930s–40s by sociologist Paul Lazarsfeld to examine the effects of the radio on political opinion formation in the United States. This method has since become a very popular method for quantitative research in the social sciences.

The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organisations or dyads—pairs of organisations, such as buyers and sellers—are also studied using surveys, such studies often use a specific person from each unit as a ‘key informant’ or a ‘proxy’ for that unit. Consequently, such surveys may be subject to respondent bias if the chosen informant does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, Chief Executive Officers may not adequately know employees’ perceptions or teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem.

Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking habits), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area—such as an entire country—can be covered by postal, email, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analysing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort and cost than other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed at the end of this chapter.

Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be postal, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.

Questionnaire surveys

Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardised manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed in such a way that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.
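
As a hypothetical illustration of such aggregation, the sketch below (in Python, with invented item names) sums five structured items into a composite scale score and estimates the scale's internal consistency with Cronbach's alpha, computed directly from its standard formula:

import pandas as pd

df = pd.read_csv("responses_clean.csv")
items = df[["item1", "item2", "item3", "item4", "item5"]]

# Composite score: the sum (or mean) of the individual items.
df["scale_score"] = items.sum(axis=1)

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0).sum() / items.sum(axis=1).var())
print(f"Cronbach's alpha = {alpha:.2f}")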

Most questionnaire surveys tend to be self-administered postal surveys, where the same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes. Postal surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from postal surveys tend to be quite low since most people ignore survey requests. There may also be long delays (several months) in respondents’ completing and returning the survey, or they may even simply lose it. Hence, the researcher must continuously monitor responses as they are being returned, and track and send non-respondents repeated reminders (two or three reminders at intervals of one to one and a half months is ideal). Questionnaire surveys are also not well-suited for issues that require clarification on the part of the respondent or those that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.

A second type of survey is a group-administered questionnaire. A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with one another. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organisations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.

A more recent type of questionnaire survey is an online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an email request for participation in the survey with a link to a website where the survey may be completed. Alternatively, the survey may be embedded into an email, and can be completed and returned via email. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on LISTSERVs or bulletin boards instead of being emailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., postal survey and online survey), allowing respondents to select their preferred method of response.
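
One common safeguard, sketched below with hypothetical file and column names, is to screen a web-survey export for duplicate submissions before analysis:

import pandas as pd

df = pd.read_csv("web_survey_export.csv")

# Flag rows that share an e-mail address with another response.
duplicates = df[df.duplicated(subset="email", keep=False)]
print(f"{len(duplicates)} rows share an e-mail address with another response")

# Keep only the first submission from each address.
df = df.drop_duplicates(subset="email", keep="first")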

Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.

Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats (a minimal sketch of these formats as data structures follows the list):

Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances? (circle one): yes / no.

Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment?: manufacturing / consumer services / retail / education / healthcare / tourism and hospitality / other.

Ordinal response, where respondents have more than two ordered options, such as: What is your highest level of education?: high school / bachelor’s degree / postgraduate degree.

Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.

Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blank type.
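
As a minimal, hypothetical sketch, the response formats above could be represented as plain data structures that a survey tool might consume (the question wording is taken from the examples above; the Likert item is invented):

questionnaire = [
    {"text": "Do you think that the death penalty is justified under some circumstances?",
     "format": "dichotomous", "options": ["yes", "no"]},
    {"text": "What is your industry of employment?",
     "format": "nominal",
     "options": ["manufacturing", "consumer services", "retail", "education",
                 "healthcare", "tourism and hospitality", "other"]},
    {"text": "What is your highest level of education?",
     "format": "ordinal",
     "options": ["high school", "bachelor's degree", "postgraduate degree"]},
    {"text": "I feel valued at work.",                      # invented 5-point Likert item
     "format": "interval", "options": list(range(1, 6))},
    {"text": "What is your age?",                           # fill-in-the-blank
     "format": "continuous", "options": None},
]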

Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) [1] recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinised for the following issues:

Is the question clear and understandable?: Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialised group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment.

Is the question worded in a negative manner?: Negatively worded questions such as ‘Should your local government not raise taxes?’ tend to confuse many respondents and lead to inaccurate responses. Double-negatives should be avoided when designing survey questions.

Is the question ambiguous?: Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like ‘any’ or ‘just’). For instance, if you ask a respondent, ‘What is your annual income?’, it is unclear whether you are referring to salary/wages, or also dividend, rental, and other income; or whether you are referring to personal income, family income (including spouse’s wages), or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be interpreted correctly.

Does the question have biased or value-laden words?: Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) [2] examined several studies on people’s attitude toward government spending, and observed that respondents tend to indicate stronger support for ‘assistance to the poor’ and less for ‘welfare’, even though both terms had the same meaning. In this study, more support was also observed for ‘halting rising crime rate’ and less for ‘law enforcement’, more for ‘solving problems of big cities’ and less for ‘assistance to big cities’, and more for ‘dealing with drug addiction’ and less for ‘drug rehabilitation’. Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinised to avoid biased language.

Is the question double-barrelled?: Double-barrelled questions are those that can have multiple answers. For example, ‘Are you satisfied with the hardware and software provided for your work?’. In this example, how should a respondent answer if they are satisfied with the hardware, but not with the software, or vice versa? It is always advisable to separate double-barrelled questions into separate questions: ‘Are you satisfied with the hardware provided for your work?’, and ‘Are you satisfied with the software provided for your work?’. Another example: ‘Does your family favour public television?’. Some people may favour public TV for themselves, but favour certain cable TV programs such as Sesame Street for their children.

Is the question too general?: Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book, with a response scale ranging from ‘not at all’ to ‘extremely well’, and that person selected ‘extremely well’, what do they mean? Instead, ask more specific behavioural questions, such as, ‘Will you recommend this book to others, or do you plan to read other books by the same author?’. Likewise, instead of asking, ‘How big is your firm?’ (which may be interpreted differently by respondents), ask, ‘How many people work for your firm?’, and/or ‘What is the annual revenue of your firm?’, which are both measures of firm size.

Is the question too detailed?: Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.

Is the question presumptuous?: If you ask, ‘What do you see as the benefits of a tax cut?’, you are presuming that the respondent sees the tax cut as beneficial. Many people may not view tax cuts as being beneficial, because tax cuts generally lead to lesser funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire services. Avoid questions with built-in presumptions.

Is the question imaginary?: A popular question in many television game shows is, ‘If you win a million dollars on this show, how will you spend it?’. Most respondents have never been faced with such an amount of money before and have never thought about it—they may not even know that after taxes, they will get only about $640,000 or so in the United States, and in many cases, that amount is spread over a 20-year period—and so their answers tend to be quite random, such as take a tour around the world, buy a restaurant or bar, spend on education, save for retirement, help parents or children, or have a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.

Do respondents have the information needed to correctly answer the question?: Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality, they do not. Even if a response is obtained, these responses tend to be inaccurate given the subjects’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, or ask teachers about how much their students are learning, or ask high-schoolers, ‘Do you think the US Government acted appropriately in the Bay of Pigs crisis?’.

Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:

Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.

Never start with an open-ended question.

If following a historical sequence of events, follow a chronological order from earliest to latest.

Ask about one topic at a time. When switching topics, use a transition, such as, ‘The next section examines your opinions about…’

Use filter or contingency questions as needed, such as, ‘If you answered “yes” to question 5, please proceed to Section 2. If you answered “no”, go to Section 3.’ A minimal sketch of such skip logic appears below.
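
A minimal, hypothetical sketch of that kind of skip logic:

# The answer to question 5 determines which section the respondent sees next.
def next_section(answer_to_q5: str) -> str:
    return "Section 2" if answer_to_q5 == "yes" else "Section 3"

print(next_section("yes"))   # -> Section 2
print(next_section("no"))    # -> Section 3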

Other golden rules. Do unto your respondents what you would have them do unto you. Be attentive and appreciative of respondents’ time, attention, trust, and confidentiality of personal information. Always practice the following strategies for all survey research:

People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.

Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).

For organisational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow up with your promise.

Thank your respondents for their participation in your study.

Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.

Interview survey

Interviews are a more personalised data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardised set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike postal surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Interviewers need special interviewing skills as they are considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.

The most typical form of interview is a personal or face-to-face interview, where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favoured by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to co-operate, dramatically improving response rates.

A variation of the personal interview is a group interview, also called a focus group. In this technique, a small group of respondents (usually 6–10 respondents) are interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion, and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they did not think about before. However, focus group discussion may be dominated by a strong personality, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially while dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.

A third type of interview survey is a telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), which is increasingly being used by academic, government, and commercial survey researchers. Here the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked. The system also selects respondents randomly using a random digit dialling technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.

Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:

Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. They should also rehearse and time the interview prior to the formal study.

Locate and enlist the co-operation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses, and work around respondents’ schedules at sometimes undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study.

Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents will not be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.

Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.

Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.

Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer’s authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, they should speak in an imperative and confident tone, such as, ‘I’d like to take a few minutes of your time to interview you for a very important study’, instead of, ‘May I come in to do an interview?’. They should introduce themself, present personal credentials, explain the purpose of the study in one to two sentences, and assure respondents that their participation is voluntary, and their comments are confidential, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to record the interview, they should ask for respondents’ explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.

During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:

The silent probe: Just pausing and waiting without going into the next question may suggest to respondents that the interviewer is waiting for a more detailed response.

Overt encouragement: An occasional ‘uh-huh’ or ‘okay’ may encourage the respondent to go into greater details. However, the interviewer must not express approval or disapproval of what the respondent says.

Ask for elaboration: Such as, ‘Can you elaborate on that?’ or ‘A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?’.

Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, ‘What I’m hearing is that you found that experience very traumatic’ and then pause and wait for the respondent to elaborate.

After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.

Biases in survey research

Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.

Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20 per cent is typical in a postal survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, this may indicate a systematic reason for the low response rate, which may in turn raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests than satisfied customers. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalisability, but the observed outcomes may also be an artefact of the biased sample. Several strategies may be employed to improve response rates:

Advance notification: Sending a short letter to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their co-operation. A variation of this technique may be to ask the respondent to return a prepaid postcard indicating whether or not they are willing to participate in the study.

Relevance of content: People are more likely to respond to surveys examining issues of relevance or importance to them.

Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond to tend to attract higher response rates.

Endorsement: For organisational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organisation. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.

Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.

Interviewer training: Response rates for interviews can be improved with skilled interviewers trained in how to request interviews, use computerised dialling techniques to identify potential respondents, and schedule call-backs for respondents who could not be reached.

Incentives: Incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth may increase response rates.

Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.

Confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.

Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted—for instance, if they are at work—and will include a disproportionate number of respondents who have landline telephone services with listed phone numbers and people who are home during the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about their students’ (or children’s) academic learning, or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalisability claims about inferences drawn from the biased sample.

Social desirability bias. Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as, ‘Do you think that your project team is dysfunctional?’, ‘Is there a lot of office politics in your workplace?’, or ‘Have you ever illegally downloaded music files from the Internet?’, the researcher may not get truthful responses. This tendency among respondents to ‘spin the truth’ in order to portray themselves in a socially desirable manner is called the ‘social desirability bias’, which hurts the validity of responses obtained from survey research. There is practically no way of overcoming the social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

Recall bias. Responses to survey questions often depend on subjects’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviours, or perhaps their memory of such events may have evolved with time and no longer be retrievable. For instance, if a respondent is asked to describe his/her utilisation of computer technology one year ago, or even memorable childhood events like birthdays, their response may not be accurate due to difficulties with recall. One possible way of overcoming the recall bias is by anchoring the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artefacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff, MacKenzie, Lee & Podsakoff, 2003), [3] Lindell and Whitney’s (2001) [4] marker variable technique, and so forth. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerised recording of the dependent variable versus questionnaire-based self-rating of the independent variables.
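For readers who want to run a quick Harman-style check on their own data, the sketch below is one common way to operationalise it (assuming the item responses sit in a pandas DataFrame called items; the 50 per cent threshold is a rule of thumb, not a formal test):

```python
import numpy as np
import pandas as pd

def harman_single_factor_share(items: pd.DataFrame) -> float:
    """Approximate Harman's single-factor check.

    Takes the largest eigenvalue of the inter-item correlation matrix as
    the share of total variance captured by the first unrotated factor.
    If a single factor accounts for the majority (e.g. more than 50%) of
    the variance, common method bias may be a concern.
    """
    corr = items.corr().to_numpy()          # inter-item correlation matrix
    eigenvalues = np.linalg.eigvalsh(corr)  # ascending eigenvalues of a symmetric matrix
    return eigenvalues[-1] / eigenvalues.sum()

# Hypothetical usage: rows are respondents, columns are survey items
# items = pd.read_csv("survey_responses.csv")
# print(f"First factor explains {harman_single_factor_share(items):.1%} of the variance")
```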

  • Dillman, D. (1978). Mail and telephone surveys: The total design method. New York: Wiley.
  • Rasinski, K. (1989). The effect of question wording on public support for government spending. Public Opinion Quarterly, 53(3), 388–394.
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114–121.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

Survey Methods: Definition, Types, and Examples

Busayo Longe

Data gathering is a flexible and exciting process; especially when you use surveys. There are different survey methods that allow you to collect relevant information from research participants or the people who have access to the required data. 

For instance, you can conduct an interview or simply observe the research participants as they interact in their environment. Typically, your research context, the type of systematic investigation, and many other factors should determine the survey method you adopt. 

In this article, we will discuss different types of survey methods and also show you how to conduct online surveys using Formplus . 

What is a Survey Method?

A survey method is a process, tool, or technique that you can use to gather information in research by asking questions to a predefined group of people. Typically, it facilitates the exchange of information between the research participants and the person or organization carrying out the research. 

Survey methods can be qualitative or quantitative depending on the type of research and the type of data you want to gather in the end. For instance, you can choose to create and administer an online survey with Formplus that allows you to collect statistical information from respondents. For qualitative research, you can conduct a face-to-face interview or organize a focus group. 

Types of Survey Methods  

Interviews

An interview is a survey research method where the researcher facilitates some sort of conversation with the research participant to gather useful information about the research subject. This conversation can happen physically as a face-to-face interview or virtually as a telephone interview or via video and audio-conferencing platforms.  

During an interview, the researcher has the opportunity to connect personally with the research subject and establish some sort of relationship. This connection allows the interviewer (researcher) to gain more insight into the information provided by the research participant in the course of the conversation. 

An interview can be structured, semi-structured, or unstructured . In a structured interview , the researcher strictly adheres to a sequence of premeditated questions throughout the conversation. This is also known as a standardized interview or a researcher-administered interview and it often results in quantitative research findings. 

In a semi-structured interview , the researcher has a set of premeditated interview questions but he or she can veer off the existing interview sequence to get more answers and gain more clarity from the interviewee. The semi-structured interview method is flexible and allows the researcher to work outside the scope of the sequence while maintaining the basic interview framework. 

Just as the name suggests, an unstructured interview is one that doesn’t restrict the researcher to a set of premeditated questions or the interview sequence. Here, the researcher is allowed to leverage his or her knowledge and to creatively weave questions to help him or her to get useful information from the participant. This is why it is also called an in-depth interview. 

Advantages of Interviews

  • Interviews, especially face-to-face interviews, allow you to capture non-verbal nuances that provide more context around the interviewee’s responses. For instance, the interviewee can act in a certain way to suggest that he or she is uncomfortable with a particular question. 
  • Interviews are more flexible as a method of survey research. With semi-structured and unstructured interviews, you can adjust the conversation sequence to suit prevailing circumstances. 

Disadvantages of Interviews

  • It is expensive and time-consuming; especially when you have to interview large numbers of people. 
  • It is subject to researcher bias which can affect the quality of data gathered at the end of the process. 

Surveys

A survey is a data collection tool that lists a set of structured questions to which respondents provide answers based on their knowledge and experiences. It is a standard data-gathering process that allows you to access information from a predefined group of respondents during research.

In a survey, you would find different types of questions based on the research context and the type of information you want to have access to. Many surveys combine open-ended and closed-ended questions including rating scales and semantic scales. This means you can use them for qualitative and quantitative research. 

Surveys come in two major formats: paper forms and online forms. A paper survey is a more traditional method of data collection, and it can easily result in loss of data. Paper forms are also cumbersome to organize and process.

Online surveys, on the other hand, are usually created via data collection platforms like Formplus. These platforms have form builders where you can create your survey from scratch using different form fields and features. On Formplus, you can also find different online survey templates for data collection. 

One of the many advantages of online surveys is accuracy, as they typically record a lower margin of error than paper surveys. Also, online surveys are easier to administer, as you can share them with respondents via email or social media channels.

Advantages of Surveys

  • Surveys allow you to gather data from a large sample size or research population. This helps to improve the validity and accuracy of your research findings. 
  • The cost of creating and administering a survey is usually lower compared to other research methods. 
  • It is a convenient method of data collection for the researcher and the respondents. 

Disadvantages of Surveys

  • The validity of the research data can be affected by survey response bias. 
  • High survey dropout rates can also affect the number of responses received in your survey. 

Observation  

Just as the name suggests, observation is a method of gathering data by paying attention to the actions and behaviors of the research subjects as they interact in their environment. This qualitative research method allows you to get first-hand information about the research subjects in line with the aims and objectives of your systematic investigation. 

If you have tried out this survey method, then you must have come across one or more of the four types of observation in research: the complete observer method, the observer as participant method, the participant as observer method, and the complete participant method.

In the complete observer method, the researcher is entirely detached from the research environment. This means that the participants are completely unaware of the researcher’s presence, and this allows them to act naturally as they interact with their environment. You can think of it as a remote observation.

The observer as participant method requires the researcher to be involved in the research environment; albeit with limited interaction with the participants. The participants typically know the researcher and may also be familiar with the goals and objectives of the systematic investigation. 

A good example of this is when a researcher visits a school to understand how students interact with each other during extra-curricular activities. In this case, the students may be fully aware of the research process; although they may not interact with the researcher. 

In the participant as observer method , the researcher has some kind of relationship with the participants and interacts with them often as he or she carries out the investigation. For instance, when an anthropologist goes to a host community for research, s/he builds a relationship with members of the community while the host community is aware of the research. 

In the complete participant method , the researcher interacts with the research participants and is also an active member of the research environment. However, the research participants remain unaware of the research process; they do not know that a researcher is among them and they also do not know that they are being observed. 

Advantages of Observation Method

  • It is one of the simplest methods of data collection as it does not require specialization or expertise in many cases.
  • The observation method helps you to formulate a valid research hypothesis for your systematic investigation. You can test this hypothesis via experimental research to get valid findings.  

Disadvantages of Observation Method

  • When the participants know they are being observed, they may act differently and this can affect the accuracy of the information you gather. 
  • Because observation is done in the participant’s natural environment (that is, an environment without experimental control), the findings from this process may be less reliable. 

Focus Groups

A focus group is an open conversation with a small number of carefully-selected participants who provide useful information for research. The selected participants are a subset of your research population and should represent the different groups in the larger population. 

In a focus group, the researcher can act as the moderator who sets the tone of the conversation and guides the discourse. The moderator ensures that the overall conversations are in line with the aims and objectives of the research and he or she also reduces the bias in the discussions.  

If you are conducting qualitative research with a large and diverse research population, then adopting focus groups is an effective and cost-efficient method of data collection. Typically, a focus group should have 6-10 participants (usually eight), including the moderator.

Based on the focus of your research, you can adopt one or more types of focus groups for your investigation. Common types of focus groups you should consider include:

  • Dual-moderator focus group
  • Mini focus group
  • Client-involvement focus group
  • Virtual or online focus groups. 

Advantages of Focus Groups

  • Focus groups are open-ended and this allows you to explore a variety of opinions and ideas that may come up during the discussions. 
  • Focus groups help you to discover other salient points that you may not have considered in the systematic investigation. 

Disadvantages of Focus Groups

  • Participants may not communicate their true thoughts and experiences and this affects the validity of the entire process.
  • Participants can be easily influenced by the opinions of other people in the group. 

How to Conduct Online Surveys with Formplus  

As we’ve mentioned earlier, an online survey allows you to gather data from a large pool of respondents easily and conveniently. Unlike paper forms, online surveys are secure and it is also easy to distribute them and collate responses for valid research data. 

Formplus allows you to create your online survey in a few easy steps. It also has several features that make data collection and organization easy for you. Let’s show you how to conduct online surveys with Formplus.

  • Create your Formplus account here. If you already have a Formplus account, you can log in at www.formpl.us . 

  • On your Formplus dashboard, you will find several buttons and options. Click on the “create new form” button located at the top left corner of the dashboard to begin. 
  • Now, you should have access to the form builder. The Formplus builder allows you to add different form fields to your survey by simply dragging and dropping them from the builder’s fields section into your form. You will find the fields section at the left corner of the form builder. 

  • First, add the title of your form by clicking on the title tab just at the top of the builder. 
  • Next, click on the different fields you’d like to have in your survey. You can add rating fields, number fields, and more than 30 other form fields as you like. 

  • After adding the fields to your survey, it is time to populate them with questions and answer-options as needed. Click on the small pencil icon located beside each field to access their unique editing tab. 
  • Apart from adding questions and answer-options to the fields, you can also make preferred fields to be compulsory or make them read-only. 
  • Save all the changes you have made to the form by clicking on the save icon at the top right corner. This gives you immediate access to the builder’s customization section. 

  • Formplus has numerous customization options that you can use to change the outlook and layout of your online survey without any knowledge of CSS. You can change your form font, add your organization’s logo, and also add preferred background images among other things. 

  • To start collecting responses in your online survey, you can use any of the Formplus multiple form sharing options. Go to the builder’s “share” section, choose your preferred option, and follow the prompt provided. If you have a WordPress website, you can add the survey to it via the WordPress plugin. 

  • Don’t forget to track your form responses and other important data in our form analytics dashboard. 

Advantages of Online Surveys

  • Online surveys are a faster method of data collection: they help you to save time by accelerating your data collection process. Typically, respondents spend about a third of the time they would need for a paper survey when completing an online survey. This means you will record almost-immediate responses from participants. 
  • Apart from saving time, you also get to save cost. For instance, you do not have to spend money on printing paper surveys and transporting them to respondents. Also, many online survey tools have a free subscription plan and also support affordable premium subscription plans. You can check out Formplus pricing here . 
  • Online surveys reduce the margin of error in data collection. This allows you to gather more accurate information and arrive at objective research findings. 
  • It is flexible and allows participants to respond as is convenient. For instance, Formplus has a save and resume later feature that allows respondents to save an incomplete survey and finish up when it is more convenient. The order of the questions in an online survey can also be changed. 
  • Online surveys make the data collection process easy and seamless. By leveraging the internet for distribution, you can gather information from thousands of people in your target population. 
  • Because online surveys are very convenient, they tend to achieve higher response rates, as participants can complete the survey at their own pace, at a time of their choosing, and according to their preferences.

Conclusion  

When conducting research, many survey methods can help you to gather, analyze and process data effectively. In this article, we have looked at some of these methods in detail including interviews, focus groups, and the observation approach. 

As we’ve shown you, each of these survey methods has its strengths and weaknesses. This is why your choice should be informed by the type of research you are conducting and what you want to get out of it. While some of these methods work best for qualitative research, others are better suited for quantitative data collection . 

Questionnaires

Questionnaires can be classified as both quantitative and qualitative methods, depending on the nature of the questions. Specifically, answers obtained through closed-ended questions (also called restricted questions) with multiple-choice answer options are analyzed using quantitative methods. Research findings in this case can be illustrated using tabulations, pie charts, bar charts and percentages.

Answers obtained to open-ended questionnaire questions (also known as unrestricted questions), on the other hand, are analyzed using qualitative methods. Primary data collected using open-ended questionnaires are examined through discussion and critical analysis, without the use of numbers and calculations.
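As a minimal illustration of the quantitative tabulation described above, the pandas sketch below computes a frequency table and percentage breakdown for one closed-ended item (the file name and the satisfaction column are hypothetical):

```python
import pandas as pd

# Hypothetical file of closed-ended questionnaire responses; one row per respondent
responses = pd.read_csv("questionnaire_responses.csv")

# Frequency table and percentage breakdown for a single closed-ended item
counts = responses["satisfaction"].value_counts()
percent = responses["satisfaction"].value_counts(normalize=True).mul(100).round(1)

summary = pd.DataFrame({"count": counts, "percent": percent})
print(summary)

# The same tabulation as a bar chart (requires matplotlib)
# summary["count"].plot(kind="bar", title="Satisfaction ratings")
```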

The following types of questionnaires can be used:

Computer questionnaire. Respondents are asked to answer a questionnaire that is sent to them electronically (e.g., by email). The advantages of computer questionnaires include their low cost and time-efficiency; respondents also do not feel pressured and can answer when they have time, giving more accurate answers. However, the main shortcoming of computer questionnaires is that sometimes respondents do not bother to answer them and simply ignore the questionnaire.

Telephone questionnaire. The researcher may choose to call potential respondents with the aim of getting them to answer the questionnaire. The advantage of the telephone questionnaire is that it can be completed within a short amount of time. The main disadvantage is that it is expensive most of the time. Moreover, most people do not feel comfortable answering many questions over the phone, and it is difficult to get a sample group to answer a questionnaire by telephone.

In-house survey. This type of questionnaire involves the researcher visiting respondents in their homes or workplaces. The advantage of the in-house survey is that respondents can give the questions more attention and focus. However, in-house surveys also have a range of disadvantages: they are time-consuming and more expensive, and respondents may not wish to have the researcher in their homes or workplaces for various reasons.

Mail questionnaire. This sort of questionnaire involves the researcher sending the questionnaire to respondents through the post, often with a pre-paid envelope attached. Mail questionnaires have the advantage of providing more accurate answers, because respondents can complete the questionnaire in their spare time. The disadvantages associated with mail questionnaires are that they are expensive and time-consuming, and sometimes they simply end up in the bin.

Questionnaires can include the following types of questions:

Open question questionnaires. Open questions differ from other types of questions used in questionnaires in that they may produce unexpected results, which can make the research more original and valuable. However, it is difficult to analyze the results when the data are obtained through a questionnaire with open questions.

Multiple choice questions. Respondents are offered a set of answers they have to choose from. The downside of questionnaires with multiple choice questions is that, if there are too many answers to choose from, the questionnaire becomes confusing and boring, and discourages the respondent from answering it.

Dichotomous questions. This type of question gives respondents two options to choose from: yes or no. It is the easiest form of question for respondents to answer.

Scaling questions. Also referred to as ranking questions, these present an option for respondents to rank the available answers on a scale with a given range of values (for example, from 1 to 10).

For a standard 15,000-20,000-word business dissertation, a questionnaire of 25-40 questions will usually suffice. Questions need to be formulated in an unambiguous and straightforward manner, and they should be presented in a logical order.

Questionnaires as primary data collection method offer the following advantages:

  • Uniformity: all respondents are asked exactly the same questions
  • Cost-effectiveness
  • Possibility to collect the primary data in a shorter period of time
  • Minimum or no bias from the researcher during the data collection process
  • Usually enough time for respondents to think before answering questions, as opposed to interviews
  • Possibility to reach respondents in distant areas through online questionnaires

At the same time, the use of questionnaires as primary data collection method is associated with the following shortcomings:

  • Random answer choices by respondents without properly reading the question.
  • In closed-ended questionnaires, there is no possibility for respondents to express additional thoughts about the matter due to the absence of a relevant question.
  • Collecting incomplete or inaccurate information because respondents may not be able to understand questions correctly.
  • High rate of non-response

Survey Monkey represents one of the most popular online platforms for facilitating data collection through questionnaires. Substantial benefits offered by Survey Monkey include its ease of use, presentation of questions in many different formats, and advanced data analysis capabilities.

There are other alternatives to Survey Monkey that you might want to consider using as a platform for your survey. These include, but are not limited to, Jotform, Google Forms, Lime Survey, Crowd Signal, Survey Gizmo, Zoho Survey and many others.

My  e-book,  The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step approach  contains a detailed, yet simple explanation of quantitative methods. The e-book explains all stages of the research process starting from the selection of the research area to writing personal reflection. Important elements of dissertations such as research philosophy, research approach, research design, methods of data collection and data analysis are explained in simple words.

John Dudovskiy

BMC Med Res Methodol

A tutorial on methodological studies: the what, when, how and why

Lawrence Mbuagbaw

1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada

2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada

3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Daeria O. Lawson

Livia Puljak

4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia

David B. Allison

5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA

Lehana Thabane

6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada

7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada

8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada

Associated Data

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .
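For readers who would like to reproduce this kind of trend count, the sketch below uses Biopython’s Entrez wrapper around the PubMed E-utilities (the contact email is a placeholder, and the exact field tags and year range are illustrative assumptions rather than the authors’ original search):

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

TERM = '"methodological review"[tiab] OR "meta-epidemiological study"[tiab]'

# Count PubMed records per publication year for the two keyword phrases
for year in range(2010, 2020):
    handle = Entrez.esearch(db="pubmed", term=f"({TERM}) AND {year}[pdat]", retmax=0)
    record = Entrez.read(handle)
    handle.close()
    print(year, record["Count"])
```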

Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as pre-cursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
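A minimal sketch of drawing a simple or stratified random sample from a sampling frame of research reports (the file, column names and sample sizes are hypothetical):

```python
import pandas as pd

# Hypothetical sampling frame: one row per eligible research report
reports = pd.read_csv("sampling_frame.csv")  # e.g. columns: pmid, journal, group

# Simple random sample of 200 reports, with a fixed seed for reproducibility
simple_sample = reports.sample(n=200, random_state=42)

# Stratified sample: equal numbers per group (e.g. Cochrane vs non-Cochrane reviews)
# to avoid under-representation of the smaller group
stratified_sample = reports.groupby("group").sample(n=100, random_state=42)
```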

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting, the use of post hoc methodologies to embellish results, and to help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages in trying to publish protocols includes delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals, could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
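A minimal sketch of a confidence-interval (precision) based sample size calculation for estimating a proportion, in the spirit of the approach cited above (the anticipated proportion and margin of error are illustrative assumptions):

```python
import math
from scipy.stats import norm

def n_for_proportion(p_expected: float, margin: float, confidence: float = 0.95) -> int:
    """Number of research reports needed to estimate a proportion within +/- margin."""
    z = norm.ppf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95% confidence
    return math.ceil(z**2 * p_expected * (1 - p_expected) / margin**2)

# e.g. expecting ~30% of trials to report a subgroup analysis, estimated to within +/- 5%
print(n_for_proportion(0.30, 0.05))  # -> 323
```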

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “ methodological review ”, “methodological survey” , “meta-epidemiological study” , “systematic review” , “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “ systematic review” – as this will likely be confused with a systematic review of a clinical question. “ Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “ systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the words “ systematic” may be true for methodological studies and could be potentially misleading. “ Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “ review ” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “ survey ” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “ methodological study ” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
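A minimal sketch of such a marginal model using generalized estimating equations in statsmodels, with articles clustered within journals (the data file, variable names and model formula are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per article, with a binary outcome
# "adequately_reported" and a cluster identifier "journal"
articles = pd.read_csv("articles.csv")

model = smf.gee(
    "adequately_reported ~ year + industry_funded",
    groups="journal",                        # articles are clustered within journals
    data=articles,
    family=sm.families.Binomial(),           # logistic model for a binary outcome
    cov_struct=sm.cov_struct.Exchangeable(), # correlation within the same journal
)
result = model.fit()
print(result.summary())  # reports cluster-adjusted (robust) standard errors
```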

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid new advances, with machine learning and natural language processing technologies supporting researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

  • Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
  • Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
  • Source of funding and conflicts of interest: Some studies have found that funded studies are better reported [ 56 , 57 ], while others have found no difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better reported [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]
  • Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 – 67 ].
  • Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
  • Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
  • Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
  • Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, restricting the sample to high-JIF journals may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
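
As a hedged illustration of the statistical adjustment option, the sketch below fits a logistic regression of completeness of reporting on funding while adjusting for journal endorsement of a reporting guideline. The dataset and variable names are hypothetical and are not those used in the cited studies.

```python
# Minimal sketch (hypothetical data): adjusting for a potential confounder
# (journal endorsement of a reporting guideline) when estimating the
# association between funding and complete reporting.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("extracted_articles.csv")  # one row per article

adjusted = smf.logit(
    "complete_reporting ~ funded + journal_endorses_guideline", data=df
).fit()

print(adjusted.summary())
print("Adjusted odds ratios:")
print(np.exp(adjusted.params))  # exponentiate coefficients to obtain odds ratios
```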

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine (n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM (n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM (n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

  • 1. What is the aim?

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

  • 2. What is the design?

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
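
For a purely descriptive study of this kind, the analysis can be as simple as tabulating the extracted items. The sketch below assumes a hypothetical extraction file and column names, chosen only to illustrate the summaries mentioned above.

```python
# Minimal sketch (hypothetical data): descriptive summaries typical of
# methodological studies - counts (percent) and medians (IQR).
import pandas as pd

df = pd.read_csv("extracted_reviews.csv")  # one row per included report

# Counts and percentages for a categorical item
n = df["reports_research_recommendations"].value_counts()
pct = df["reports_research_recommendations"].value_counts(normalize=True) * 100
print(pd.DataFrame({"n": n, "percent": pct.round(1)}))

# Median (interquartile range) for a skewed numeric item
q1, med, q3 = df["n_included_trials"].quantile([0.25, 0.50, 0.75])
print(f"Included trials per review: median {med:.0f} (IQR {q1:.0f}-{q3:.0f})")
```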

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
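
A hedged sketch of the kind of hypothesis test described for the Tricco et al. example is shown below; the counts are invented purely to illustrate the mechanics of comparing two proportions, not to reproduce the cited results.

```python
# Minimal sketch (invented counts): testing whether the proportion of
# reviews reporting positive findings differs between two groups.
from statsmodels.stats.proportion import proportions_ztest

positive = [30, 55]   # reviews with positive conclusions: [group A, group B]
totals = [100, 100]   # reviews examined in each group

stat, p_value = proportions_ztest(count=positive, nobs=totals)
print(f"z = {stat:.2f}, p = {p_value:.3f}")  # test of equal proportions
```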

  • 3. What is the sampling strategy?

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a specific topic. Systematic sampling can also be used when random sampling may be challenging to implement.
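
The sketch below illustrates, under assumed file and column names, how a reproducible random sample or a systematic sample of records might be drawn from a set of search results; the sample size of 200 is arbitrary.

```python
# Minimal sketch (hypothetical data): drawing a random or systematic
# sample of research reports from a database search export.
import pandas as pd

records = pd.read_csv("search_export.csv")  # one row per retrieved record

# Simple random sample of 200 records, seeded so the draw is reproducible
random_sample = records.sample(n=200, random_state=42)

# Systematic sample: order the frame, then take every k-th record
k = max(len(records) // 200, 1)
systematic_sample = records.sort_values("record_id").iloc[::k].head(200)

print(len(random_sample), len(systematic_sample))
```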

  • 4. What is the unit of analysis?

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Acknowledgements

Authors’ contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Funding

This work did not receive any dedicated funding.

Availability of data and materials

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Survey Research — Types, Methods and Example Questions


Survey research

The world of research is vast and complex, but with the right tools and understanding, it's an open field of discovery. Welcome to a journey into the heart of survey research.

What is survey research?

Survey research is the lens through which we view the opinions, behaviors, and experiences of a population. Think of it as the research world's detective, cleverly sleuthing out the truths hidden beneath layers of human complexity.

Why is survey research important?

Survey research is a Swiss Army Knife in a researcher's toolbox. It’s adaptable, reliable, and incredibly versatile, but its real power? It gives voice to the silent majority. Whether it's understanding customer preferences or assessing the impact of a social policy, survey research is the bridge between unanswered questions and insightful data.

Let's embark on this exploration, armed with the spirit of openness, a sprinkle of curiosity, and the thirst for making knowledge accessible. As we journey further into the realm of survey research, we'll delve deeper into the diverse types of surveys, innovative data collection methods, and the rewards and challenges that come with them.

Types of survey research

Survey research is like an artist's palette, offering a variety of types to suit your unique research needs. Each type paints a different picture, giving us fascinating insights into the world around us.

  • Cross-Sectional Surveys: Capture a snapshot of a population at a specific moment in time. They're your trusty Polaroid camera, freezing a moment for analysis and understanding.
  • Longitudinal Surveys: Track changes over time, much like a time-lapse video. They help to identify trends and patterns, offering a dynamic perspective of your subject.
  • Descriptive Surveys: Draw a detailed picture of the current state of affairs. They're your magnifying glass, examining the prevalence of a phenomenon or attitudes within a group.
  • Analytical Surveys: Deep dive into the reasons behind certain outcomes. They're the research world's version of Sherlock Holmes, unraveling the complex web of cause and effect.

But, what method should you choose for data collection? The plot thickens, doesn't it? Let's unravel this mystery in our next section.

Survey research and data collection methods

Data collection in survey research is an art form, and there's no one-size-fits-all method. Think of it as your paintbrush, each stroke represents a different way of capturing data.

  • Online Surveys: In the digital age, online surveys have surged in popularity. They're fast, cost-effective, and can reach a global audience. But like a mysterious online acquaintance, respondents may not always be who they say they are.
  • Mail Surveys: Like a postcard from a distant friend, mail surveys have a certain charm. They're great for reaching respondents without internet access. However, they’re slower and have lower response rates. They’re a test of patience and persistence.
  • Telephone Surveys: With the sound of a ringing phone, the human element enters the picture. Great for reaching a diverse audience, they bring a touch of personal connection. But, remember, not all are fans of unsolicited calls.
  • Face-to-Face Surveys: These are the heart-to-heart conversations of the survey world. While they require more resources, they're the gold standard for in-depth, high-quality data.

As we journey further, let’s weigh the pros and cons of survey research.

Advantages and disadvantages of survey research

Every hero has its strengths and weaknesses, and survey research is no exception. Let's unwrap the gift box of survey research to see what lies inside.

Advantages:

  • Versatility: Like a superhero with multiple powers, surveys can be adapted to different topics, audiences, and research needs.
  • Accessibility: With online surveys, geographical boundaries dissolve. We can reach out to the world from our living room.
  • Anonymity: Like a confessional booth, surveys allow respondents to share their views without fear of judgment.

Disadvantages:

  • Response Bias: Ever met someone who says what you want to hear? Survey respondents can be like that too.
  • Limited Depth: Like a puddle after a rainstorm, some surveys only skim the surface of complex issues.
  • Nonresponse: Sometimes, potential respondents play hard to get, skewing the data.

Survey research may have its challenges, but it also presents opportunities to learn and grow. As we forge ahead on our journey, we dive into the design process of survey research.

Limitations of survey research

Every research method has its limitations, like bumps on the road to discovery. But don't worry, with the right approach, these challenges become opportunities for growth.

Misinterpretation: Sometimes, respondents might misunderstand your questions, like a badly translated novel. To overcome this, keep your questions simple and clear.

Social Desirability Bias: People often want to present themselves in the best light. They might answer questions in a way that portrays them positively, even if it's not entirely accurate. Overcome this by ensuring anonymity and emphasizing honesty.

Sample Representation: If your survey sample isn't representative of the population you're studying, it can skew your results. Aiming for a diverse sample can mitigate this.

Now that we're aware of the limitations, let's delve into the world of survey design.


Survey research design

Designing a survey is like crafting a roadmap to discovery. It's an intricate process that involves careful planning, innovative strategies, and a deep understanding of your research goals. Let's get started.

Approach and Strategy

Your approach and strategy are the compasses guiding your survey research. Clear objectives, defined research questions, and an understanding of your target audience lay the foundation for a successful survey.

Panel

The panel is the heartbeat of your survey, the respondents who breathe life into your research. Selecting a representative panel ensures your research is accurate and inclusive.

9 Tips on Building the Perfect Survey Research Questionnaire

  • Keep It Simple: Clear and straightforward questions lead to accurate responses.
  • Make It Relevant: Ensure every question ties back to your research objectives.
  • Order Matters: Start with easy questions to build rapport and save sensitive ones for later.
  • Avoid Double-Barreled Questions: Stick to one idea per question.
  • Offer a Balanced Scale: For rating scales, provide an equal number of positive and negative options.
  • Provide a ‘Don't Know’ Option: This prevents guessing and keeps your data accurate.
  • Pretest Your Survey: A pilot run helps you spot any issues before the final launch.
  • Keep It Short: Respect your respondents' time.
  • Make It Engaging: Keep your respondents interested with a mix of question types.

Survey research examples and questions

Examples serve as a bridge connecting theoretical concepts to real-world scenarios. Let's consider a few practical examples of survey research across various domains.

User Experience (UX)

Imagine being a UX designer at a budding tech start-up. Your app is gaining traction, but to keep your user base growing and engaged, you must ensure that your app's UX is top-notch. In this case, a well-designed survey could be a beacon, guiding you toward understanding user behavior, preferences, and pain points.

Here's an example of how such a survey could look:

  • "On a scale of 1 to 10, how would you rate the ease of navigating our app?"
  • "How often do you encounter difficulties while using our app?"
  • "What features do you use most frequently in our app?"
  • "What improvements would you suggest for our app?"
  • "What features would you like to see in future updates?"

This line of questioning, while straightforward, provides invaluable insights. It enables the UX designer to identify strengths to capitalize on and weaknesses to improve, ultimately leading to a product that resonates with users.

Psychology and Ethics in survey research

The realm of survey research is not just about data and numbers, but it's also about understanding human behavior and treating respondents ethically.

Psychology: In-depth understanding of cognitive biases and social dynamics can profoundly influence survey design. Let's take the 'Recency Effect,' a psychological principle stating that people tend to remember recent events more vividly than those in the past. While framing questions about user experiences, this insight could be invaluable.

For example, a question like "Can you recall an instance in the past week when our customer service exceeded your expectations?" is likely to fetch more accurate responses than asking about an event several months ago.

Ethics: On the other hand, maintaining privacy, confidentiality, and informed consent is more than ethical - it's fundamental to the integrity of the research process.

Imagine conducting a sensitive survey about workplace culture. Ensuring respondents that their responses will remain confidential and anonymous can encourage more honest responses. An introductory note stating these assurances, along with a clear outline of the survey's purpose, can help build trust with your respondents.

Survey research software

In the age of digital information, survey research software has become a trusted ally for researchers. It simplifies complex processes like data collection, analysis, and visualization, democratizing research and making it more accessible to a broad audience.

LimeSurvey, our innovative, user-friendly tool, brings this vision to life. It stands at the crossroads of simplicity and power, embodying the essence of accessible survey research.

Whether you're a freelancer exploring new market trends, a psychology student curious about human behavior, or an HR officer aiming to improve company culture, LimeSurvey empowers you to conduct efficient, effective research. Its suite of features and intuitive design matches your research pace, allowing your curiosity to take the front seat.

For instance, consider you're a researcher studying consumer behavior across different demographics. With LimeSurvey, you can easily design demographic-specific questions, distribute your survey across various channels, collect responses in real-time, and visualize your data through intuitive dashboards. This synergy of tools and functionalities makes LimeSurvey a perfect ally in your quest for knowledge.

Conclusion

If you've come this far, we can sense your spark of curiosity. Are you eager to take the reins and conduct your own survey research? Are you ready to embrace the simple yet powerful tool that LimeSurvey offers? If so, we can't wait to see where your journey takes you next!

In the world of survey research, there's always more to explore, more to learn and more to discover. So, keep your curiosity alive, stay open to new ideas, and remember, your exploration is just beginning!

We hope that our exploration has been as enlightening for you as it was exciting for us. Remember, the journey doesn't end here. With the power of knowledge and the right tools in your hands, there's no limit to what you can achieve. So, let your curiosity be your guide and dive into the fascinating world of survey research with LimeSurvey! Try it out for free now!

Happy surveying!


18 Different Types of Survey Methods + Pros & Cons


There are many reasons why surveys are important. Surveys help researchers find solutions, create discussions, and make decisions. They can also get to the bottom of the really important stuff, like, coffee or tea? Dogs or cats? Elvis or The Beatles? When it comes to finding the answers to these questions, there are 18 different types of survey methods to use.


18 Different Types of Survey Methods

Different surveys serve different purposes, which is why there are a number of them to choose from. “What are the types of surveys I should use,” you ask? Here’s a look at the 18 types of survey methods researchers use today.

1. Interviews

Also known as in-person surveys or household surveys, this used to be one of the most popular types of survey to conduct. Researchers like them because they involve getting face-to-face with individuals. Of course, this method of surveying may seem antiquated when today we have online surveying at our fingertips. However, interviews still serve a purpose. 

Researchers conduct interviews when they want to discuss something personal with people. For example, they may have questions that may require extensive probing to uncover the truth. Sure, some interviewees may be more comfortable answering questions confidentially behind a keyboard. However, a skilled interviewer is able to put them at ease and get genuine responses. They can often go deeper than you may be able to using other surveying methods. 

Often, in-person interviews are recorded on camera. This way, an expert can review them afterward. They do this to determine if the answers given may be false based on an interviewee’s change in tone. A change in facial expressions and body movements may also be a signal they pick up on. 

2. Intercept Surveys

While interviews tend to choose respondents and have controls in place, intercept surveys (or “man on the spot” surveys) are conducted at certain locations or events. This involves having an interviewer, or multiple interviewers, scoping out an area and asking people, generally at random, for their thoughts or viewpoints on a particular topic.

3. Focus Groups

These types of surveys are conducted in person as well. However, focus groups involve a number of people rather than just one individual. The group is generally small but demographically diverse and led by a moderator. The focus group may be sampling new products or discussing a particular topic, often a hot-button one.

The purpose of a focus group survey is often to gauge people’s reaction to a product in a group setting or to get people talking, interacting—and yes, arguing—with the moderator taking notes on the group’s behavior and attitudes. This is often the most expensive survey method as a trained moderator must be paid. In addition, locations must be secured, often in various cities, and participants must be heavily incentivized to show up. Gift cards in the $75-100 range for each survey participant are the norm.   

4. Panel Sampling

Recruiting survey-takers from a panel maintained by a research company is a surefire way to get respondents. Why? Because people have specifically signed up to take them. The benefit of these types of surveys for research, of course, is that you can be assured of responses. In addition, you can filter respondents by a variety of criteria to be sure you’re speaking with your target audience.

The downside is data quality. These individuals get survey offers frequently. So, they may rush through them to get their incentive and move on to the next one. In addition, if you’re constantly tapping into the same people from the same panel, are you truly getting a representative sample?

5. Telephone Surveys

Most telephone survey research types are conducted through random digit dialing (RDD). RDD can reach both listed  and  unlisted numbers, improving sampling accuracy. Surveys are conducted by interviewers through computer-assisted telephone interviewing (CATI) software. CATI displays the questionnaire to the interviewer with a rotation of questions.  

Telephone surveys started in the 1940s. In fact, in a  recent blog , we recount how the predictions for the 1948 presidential election were completely wrong because of sampling bias in telephone surveys. Rising in popularity in the late 50s and early 60s when the telephone became common in most American households, telephone surveys are no longer a very popular method of conducting a survey. Why? Because many people refuse to take telephone surveys or simply are not answering calls from a number they don’t recognize.

6. Post-Call Surveys

If a telephone survey is going to be conducted, today it is usually a post-call survey. This is often accomplished through IVR, or interactive voice response. IVR means there is no interviewer involved. Instead, customers record answers to pre-recorded questions using numbers on their touch-tone keypads. If a question is open-ended, the interviewee can respond by speaking and the system records the answer. IVR surveys are often deployed to measure how a customer feels about a service they just received. For example, after calling your bank, you may be asked to stay on the line to answer a series of questions about your experience.

Most post-call surveys are either NPS surveys or customer satisfaction (CSAT) surveys. The former asks the customer “How likely are you to recommend our organization to a friend or family based on your most recent interaction?” while the CSAT survey asks customers “How satisfied are you with the results of your most recent interaction?”. NPS survey results reflect how the customer feels about the brand, while CSAT surveys are all about individual agent and contact center performance.

7. SMS Text Surveys

Many people rarely use their phones to talk anymore, and ignore calls from unknown numbers. This has given rise to the SMS (Short Messaging Service) text survey. SMS surveys are delivered via text to people who have opted in to receive notifications from the sender. This means that there is usually some level of engagement, improving response rates. The one downside is that questions typically need to be short, and answers are generally 1-2 words or simply numbers (this is why many NPS surveys, gauging customer satisfaction, are often conducted via SMS text). Be careful not to send too many text surveys, as a person can opt out just as easily, usually by texting STOP.

8. Mail-in Surveys / Postal Surveys

These are delivered right to respondents’ doorsteps! Mail surveys were frequently used before the advent of the internet when respondents were spread out geographically and budgets were modest. After all, mail-in surveys didn’t require much cost other than the postage. 

So are mail-in surveys going the way of the dinosaur? Not necessarily. They are still occasionally more valuable compared to different methods of surveying. Because they are going to a specific name and home address, they often feel more personalized. This personalization can prompt the recipient to complete the survey. 

They’re also good for surveys of significant length. Most people have short attention spans, and won’t spend more than a few minutes on the phone or filling out an online survey. At least, not without an incentive! However, with a mail-in survey, the person can complete it at their leisure. They can fill out some of it, set it aside, and then come back to it later. This gives mail-in surveys a relatively high response rate.

9. Kiosk Surveys

These surveys happen on a computer screen at a physical location. You’ve probably seen them popping up in stores, hotel lobbies, hospitals, and office spaces. These days, they’re just about anywhere a researcher or marketer wants to collect data from customers or passers-by.  Kiosk surveys  provide immediate feedback following a purchase or an interaction. They collect responses while the experience is still fresh in the respondent’s mind. This makes their judgment more trustworthy. Below is an example of a SurveyLegend kiosk survey at McDonald’s. The kiosk survey collects information, thanks the respondent for their feedback, and then resets for the next customer. Read how to  create your own kiosk survey here .


10. Email Surveys

Email surveys are one of the most effective surveying methods as they are delivered directly to your audience via their email account. They can be used by anyone for just about anything, and are easily customized for a particular audience. Another good thing about email surveys is you can easily see who did or did not open the survey and make improvements to it for a future send to increase response rates. You can also A/B test subject lines, imagery, and so on to see which is more effective. SurveyLegend offers dozens of different types of online survey questions, which we explore in our blog 12 Different Types of Survey Questions and When to Use Them (with Examples).


11. Pop-up Surveys

A pop-up survey is a feedback form that pops up on a website or app. Although the main window a person is reading on their screen remains visible, it is temporarily disabled until a user interacts with the pop-up, either agreeing to leave feedback or closing out of it. The survey itself is typically about the company whose site or app the user is currently visiting (as opposed to an intercept survey, which is an invitation to take a survey hosted on a different site).

A pop-up survey attempts to grab website visitors’ attention in a variety of ways, popping up in the middle of the screen, moving in from the side, or covering the entire screen. While they can be intrusive, they also have many benefits. Read about the  benefits of pop-up surveys here .

12. Embedded Surveys

The opposite of pop-up surveys, these surveys live directly on your website or another website of your choice. Because the survey cannot be X’ed out of like a pop-up, it takes up valuable real estate on your site, or could be expensive to implement on someone else’s site. In addition, although the  embedded survey  is there at all times, it may not get the amount of attention a pop-up does since it’s not “in the respondent’s face.”

13. Social Media Surveys

There are more than 3.5 billion people using social media worldwide, a number projected to increase to almost 4.5 billion in 2025. This makes social media extremely important to marketers and researchers. Using platforms such as Facebook, Twitter, Instagram, and the new Threads, many companies and organizations send out social media surveys regularly. Because people check their social media accounts quite regularly, it’s a good way to collect responses and monitor changes in satisfaction levels or popular opinion. Check out our blog on social media surveys for more benefits and valuable tips.

14. Mobile Surveys

Mobile devices have now overtaken desktop computers as the most used means of accessing the internet, with more than 54% of the share. But don’t fret – you don’t have to create an entirely new survey to reach people on their phones or tablets. Online poll makers like SurveyLegend are responsive, so when you create a desktop version of a survey, it automatically becomes mobile-friendly. The survey renders, or displays, on any device or screen regardless of size, with elements on the page automatically rearranging themselves, shrinking, or expanding as necessary. Learn more about our responsive surveys.

15. Mobile App Surveys

Today, most companies have a mobile app. These can be an ideal way to conduct surveys, as people have to willingly download your app; this means they already have a level of engagement with your company or brand, making them more likely to respond to your surveys.

16. QR Code Surveys

QR Code or QRC is an abbreviation of “Quick Response Code.” These two-dimensional encoded images, when scanned, deliver hidden information that’s stored in them. They’re different from barcodes because they can house a lot more information, including website URLs, phone numbers, or up to 4,000 characters of text. The recent QR code comeback provides a good opportunity for researchers to collect data. Place the QR code anywhere – on flyers, posters, billboards, commercials – and all someone has to do is scan it with a mobile device to have immediate access to a survey. Read more about the benefits of QR code surveys.

17. Delphi Surveys

A Delphi survey is a structured research method used to gather the collective opinions and insights of a panel of experts on a particular topic. The process involves several rounds of questionnaires or surveys. Each round is designed to narrow things down until a consensus or hypothesis can be formed. One of the key features of Delphi survey research is that participants are unknown to each other, thereby eliminating influence.

18. AI Surveys

Artificial intelligence is the latest type of survey method. Using AI, researchers allow the technology to ask survey questions. These “chatbots” can even ask follow-up questions on the spot based on a respondent’s answer. There can be drawbacks, however. If a person suspects survey questions are coming from AI, they may be less likely to respond (or may respond incorrectly to mess with the AI). Additionally, AI is not good with emotions, so asking sensitive questions in an emotionless manner could be off-putting to people. Read more about AI Surveys.

Online Surveys: Ideal for Collecting Data and Feedback

Statista chart: Countries with the largest digital populations in the world as of January 2023 (in millions).

That’s not all. People can take online surveys just about anywhere thanks to mobile devices. The use of these devices across age groups is balancing out as well. Check out smartphone use by age group below.

Statista chart: Share of adults in the United States who owned a smartphone from 2015 to 2021, by age group.

With more and more people accessing the internet through their mobile devices, now you can reach teens while they’re between classes and adults during their subway commute to work. Can’t say that for those other types of surveys!

Online surveys are also extremely cost-efficient. You don’t have to spend money on paper, printing, postage, or an interviewer. This significantly reduces set-up and administration costs. This also allows researchers and companies to send out a survey very expeditiously. Additionally, many online survey tools provide in-depth analysis of survey data. This saves you from having to spend money on further research once the survey is complete. 

Researchers have their pick of options when it’s time to survey people. Which method you choose may depend upon cost, reach, and the types of questions.

Now, you may be wondering, “Where can I make free surveys?” You can get started with free online surveys using SurveyLegend! Here are a few things that make SurveyLegend the ideal choice for different types of surveys for research (or for fun).

  • When it comes to surveys, brief is best to keep respondents’ attention. So, SurveyLegend automatically collects some data, such as the participant’s location, reducing the number of questions you have to ask.
  • People like eye candy and many surveys are just plain dull. SurveyLegend offers beautifully rendered pre-designed surveys that will get your participant’s attention – and keep it through to completion!
  • Today, most people take surveys on mobile devices. Often, desktop surveys don’t translate well, resulting in a high drop-off rate. SurveyLegend’s designs are responsive, automatically adjusting to any screen size.

What’s your favorite method of surveying people? (Hey… that’s a good topic for a survey!) Sound off in the comments!

Frequently Asked Questions (FAQs)

The 10 most common survey methods are online surveys, in-person interviews, focus groups, panel sampling, telephone surveys, post-call surveys, mail-in surveys, pop-up surveys, mobile surveys, and kiosk surveys.

Benefits of online surveys include their ability to reach a broad audience and their relatively low cost.

Kiosk surveys are surveys on a computer screen at the point of sale.

A focus group is an in-person interview or survey involving a group of people rather than just one individual. The group is generally small but demographically diverse, and led by a moderator. 


Case Study Research Method in Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

The case study is not a research method, but researchers select methods of data collection and analysis that will generate material suitable for case studies.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

This makes it clear that the case study is a method that should only be used by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence. Only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.


 Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies analyzes personality changes in railroad worker Phineas Gage after an 1848 brain injury involving a tamping iron piercing his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies : Used to explore causation in order to find underlying principles. Helpful for doing qualitative analysis to explain presumed causal links.
  • Exploratory case studies : Used to explore situations where an intervention being evaluated has no clear set of outcomes. It helps define questions and hypotheses for future research.
  • Descriptive case studies : Describe an intervention or phenomenon and the real-life context in which it occurred. It is helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies : Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic : Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective : Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is an extremely effective procedure for obtaining information about an individual, and it may be used to collect comments from the person’s friends, parents, employer, workmates, and others who have a good knowledge of the person, as well as to obtain facts from the person him or herself.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Searching historical archives, museum collections and databases to find relevant documents, visual/audio records related to the case history and context.

Public archives like newspapers, organizational records, photographic collections could all include potentially relevant pieces of information to shed light on attitudes, cultural perspectives, common practices and historical contexts related to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research : Organizational records like employee surveys, turnover/retention data, policies, incident reports etc. may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, employee behaviors etc.
  • Clinical psychology : Therapists/hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, treatment plans etc. This could shed light on clinical practices.
  • School psychology : Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration, and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention, including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention, considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.

Strengths

  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permits investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The volume of data, together with time restrictions, can limit the depth of analysis that is possible within the available resources.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data , a lot depends on the psychologist’s interpretation of the information she has acquired.

This means that there is a lot of scope for observer bias, and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data means.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans).

This is also true of Money's interpretation of the Bruce/Brenda case study (Diamond, 1997), in which he ignored evidence that went against his theory.

References

Breuer, J., & Freud, S. (1895). Studies on hysteria. Standard Edition 2: London.

Curtiss, S. (1981). Genie: The case of a modern wild child.

Diamond, M., & Sigmundson, K. (1997). Sex reassignment at birth: Long-term review and clinical implications. Archives of Pediatrics & Adolescent Medicine, 151(3), 298-304.

Freud, S. (1909a). Analysis of a phobia of a five-year-old boy. In The Pelican Freud Library (1977), Vol. 8, Case Histories 1, pp. 169-306.

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der "Rattenmann"). Jb. psychoanal. psychopathol. Forsch., I, 357-421; GW, VII, 379-463; Notes upon a case of obsessional neurosis, SE, 10: 151-318.

Harlow, J. M. (1848). Passage of an iron rod through the head. Boston Medical and Surgical Journal, 39, 389-393.

Harlow, J. M. (1868). Recovery from the passage of an iron bar through the head. Publications of the Massachusetts Medical Society, 2(3), 327-347.

Money, J., & Ehrhardt, A. A. (1972). Man & Woman, Boy & Girl: The differentiation and dimorphism of gender identity from conception to maturity. Baltimore, Maryland: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study




Large Shares See Russia and Putin in Negative Light, While Views of Zelenskyy More Mixed

Views of NATO remain favorable among member nations

Table of Contents

  • 1. Confidence in Putin to handle world affairs
  • 2. Overall opinion of Russia
  • 3. Attitudes toward Russian oil and gas
  • 4. Confidence in Zelenskyy to handle world affairs
  • 5. Overall opinion of NATO
  • Opinion of NATO over time
  • How politics and views of the U.S. affect views of NATO
  • 6. Confidence in world leaders
  • Acknowledgments
  • Appendix A: Political categorization
  • Classifying parties as populist
  • Classifying parties as left, right or center
  • About Pew Research Center's Spring 2023 Global Attitudes Survey
  • The American Trends Panel survey methodology

Russian President Vladimir Putin chairs a meeting with members of the Russian government via teleconference in Moscow on March 10, 2022. (Mikhail Klimentyev/Sputnik/AFP via Getty Images)

This Pew Research Center analysis focuses on public opinion of Russia and the North Atlantic Treaty Organization (NATO) in 24 countries in North America, Europe, the Middle East, the Asia-Pacific region, sub-Saharan Africa and Latin America. Views of Russia, Russian President Vladimir Putin and NATO are examined in the context of long-term trend data. The report also explores views of Ukrainian President Volodymyr Zelenskyy. Because of the coronavirus outbreak, this is the first year since 2019 that the Global Attitudes Survey has included countries from Africa and Latin America.

For non-U.S. data, this report draws on nationally representative surveys of 27,285 adults conducted from Feb. 20 to May 22, 2023. All surveys were conducted over the phone with adults in Canada, France, Germany, Greece, Italy, Japan, the Netherlands, South Korea, Spain, Sweden and the United Kingdom. Surveys were conducted face to face in Hungary, Poland, India, Indonesia, Israel, Kenya, Nigeria, South Africa, Argentina, Brazil and Mexico. In Australia, we used a mixed-mode probability-based online panel.

In the United States, we surveyed 3,576 U.S. adults from March 20 to 26, 2023. Everyone who took part in this survey is a member of the Center's American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. In this way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP's methodology.
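As an illustration only (not Pew's actual code or target figures), the general idea behind weighting a sample to known population margins can be sketched as raking, or iterative proportional fitting: respondent weights are repeatedly adjusted until the weighted sample margins match population benchmarks on each variable. The respondents and the population margins below are hypothetical.

```python
# Hypothetical sketch of raking (iterative proportional fitting), the general
# idea behind weighting survey respondents to known population margins.
# The sample and the target margins below are made up for illustration.
import pandas as pd

sample = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "F", "M", "F", "F"],
    "education": ["BA+", "BA+", "HS", "HS", "HS", "BA+", "HS", "BA+"],
})
targets = {
    "gender":    {"F": 0.51, "M": 0.49},
    "education": {"BA+": 0.35, "HS": 0.65},
}

weights = pd.Series(1.0, index=sample.index)
for _ in range(50):  # alternate adjustments until the margins settle down
    for var, margin in targets.items():
        current = weights.groupby(sample[var]).sum() / weights.sum()
        factors = sample[var].map({cat: margin[cat] / current[cat] for cat in margin})
        weights = weights * factors

for var in targets:  # weighted margins now approximate the targets
    print(var, (weights.groupby(sample[var]).sum() / weights.sum()).round(3).to_dict())
```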

Here are the questions used for the report, along with responses, and the survey methodology.

CORRECTION (Sept. 8, 2023): A previous version of this report included an incorrect chart. The map graphic “Lack of confidence in Putin is widespread” has been updated to accurately reflect confidence in Putin in 24 countries.

As a new Pew Research Center survey highlights, Russia and its president, Vladimir Putin, continue to be viewed negatively by people around the world. A median of 82% of adults across 24 countries have an unfavorable view of Russia, while a similar share have no confidence in Putin to do the right thing regarding world affairs, including nine-in-ten or more in Poland, Sweden, Spain, Japan, South Korea, Australia, the Netherlands, Germany, France, the United Kingdom and the United States.
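A note on reading these figures: a "median of 82% across 24 countries" is the middle value of the 24 country-level percentages, not the share of one pooled sample. A tiny illustration with made-up numbers:

```python
# Illustration with made-up country-level shares: the reported "median" is
# the middle value of the per-country percentages, not a pooled percentage.
unfavorable = {"Country A": 91, "Country B": 87, "Country C": 82,
               "Country D": 74, "Country E": 57}
shares = sorted(unfavorable.values())
median = shares[len(shares) // 2]  # odd number of countries -> middle value
print(median)  # 82
```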

A map showing that across the 24 countries surveyed, lack of confidence in Putin is widespread

Ratings for the country and its leader plummeted in many nations following Russia’s invasion of Ukraine in February 2022, and they are at all-time lows in several countries in this year’s survey.

Data was collected prior to the late June uprising by the Russian paramilitary organization, the Wagner Group.

In contrast to Putin, Ukrainian President Volodymyr Zelenskyy is seen more positively. While a median of just 11% express confidence in the Russian leader, 51% say this of Zelenskyy. But opinions of Zelenskyy vary by country and fewer than half have confidence in his leadership in 10 nations, including NATO members Hungary, Greece and Italy.

A bar chart showing that NATO and Zelenskyy are seen in a more positive light than Russia and Putin

NATO, which has contributed weapons and training to Ukraine since the February 2022 invasion, is seen favorably by the 11 member states included in the survey. NATO has been consistently viewed more favorably than not in recent years, and views have grown slightly more positive in Greece and Poland since 2022. However, negative opinion of NATO has crept upward in Hungary and the U.S. While about six-in-ten Americans continue to express a favorable view of NATO, there is a large partisan difference on this question: 76% of Democrats see the alliance positively, compared with 49% of Republicans.

In 11 European countries surveyed, as well as India – where Russian oil access is a relevant issue – people were asked if being tough with Russia on Ukraine or maintaining access to Russia’s oil and gas reserves is more important to bilateral relations. Across these countries, a median of 66% say it is more important to be tough with Russia, while 29% say it is more important to maintain access to Russia’s energy reserves. People in Hungary, India and Greece are the most likely to want to maintain access to Russia’s oil and gas reserves, while Swedes and Poles are the most likely to want to be tough on Russia.

In several ways, publics in Greece, Hungary and India are outliers when it comes to views about Russia and Putin. Greeks and Hungarians stand out relative to European counterparts as being more favorable on Russia, confident in Putin and negative toward Zelenskyy. Hungary, in particular, has faced criticism for delays in sanctioning Russia, while public sentiments in Greece have tended to favor remaining neutral in the conflict. India (which has abstained from condemning the Russian invasion) is the only country where a majority expresses a favorable view of Russia and confidence in Putin.

These findings come from a new Pew Research Center survey conducted from Feb. 20 to May 22, 2023, among 30,861 people in 24 countries. Below are some of the other findings regarding views of Russia and NATO, attitudes toward Russian energy and confidence in Putin, Zelenskyy and key NATO leaders.

Ratings of Russia and Putin in middle-income nations

The survey includes eight middle-income nations that Pew Research Center has not surveyed since 2019 due to the outbreak of the COVID-19 pandemic and subsequent difficulties in conducting face-to-face interviews across the world. Opinions toward Russia and Putin are on balance negative in these countries, although India and Indonesia are exceptions. Negative sentiment toward Russia and Putin has increased by double digits in nearly every middle-income country since 2019. For instance, 74% now express a negative view of Putin in Argentina, up from 51% four years ago.

Eight line charts showing a sharp rise in negative opinion of Putin in middle-income countries since 2019

Confidence in Zelenskyy

A median of 51% across the 24 countries surveyed express confidence in Zelenskyy’s leadership, while 39% hold the opposite view. Confidence in Zelenskyy ranges from 86% in Sweden to just 11% in Hungary. Besides Hungary, half or more do not have confidence in the Ukrainian leader in Greece, Mexico, Italy, Israel, Brazil and Argentina. Age is linked to these views: Younger adults (those ages 18 to 39) are more likely to express a lack of confidence in Zelenskyy than adults 40 and older in several countries.

A map showing views of Zelenskyy around the world

Views of NATO

Positive opinion of NATO in the 11 member states polled ranges from 93% in Poland to 40% in Greece. And in Sweden, which is currently in negotiations to join NATO, 78% have a positive view of the alliance. Among these countries, NATO favorability has been largely stable, historically, and particularly high in the aftermath of the Russian invasion. Greeks, who tend to have more negative views of NATO compared with other countries surveyed in recent years, have grown more favorable since 2022, as have Poles. However, positive sentiment has declined in both the U.S. and Hungary.

A map showing that NATO is viewed favorably by majorities across most member states

Those who place themselves on the ideological right in several European countries are more likely to express a positive view of the alliance than those on the left. However, in the U.S. and Canada, this pattern is reversed: Those on the left are more likely to say they have a favorable opinion of NATO.

Confidence in Biden, Macron and Scholz

A bar chart showing the international image of world leaders, including Zelenskyy, Putin, Biden, Macron and Scholz

On balance, confidence in Zelenskyy largely aligns with views of U.S. President Joe Biden, French President Emmanuel Macron and German Chancellor Olaf Scholz. A median of 54% have confidence in Biden on the world stage, 50% have confidence in Macron and 49% say the same of Scholz.

While Biden is largely viewed with confidence across the world, support is weaker among some NATO allies, such as Hungary and Italy. For more on international views of Biden, see "International Views of Biden and U.S. Largely Positive." While medians of 50% and 49% have confidence in Macron and Scholz, respectively, support for both leaders has declined in many countries since 2022, particularly among NATO allies.

Road map to the report

The chapters that follow discuss these findings and others in more detail:

  • Chapter 1 looks at confidence in Russian President Vladimir Putin to do the right thing regarding world affairs.
  • Chapter 2 examines overall opinion of Russia across the 24 countries surveyed.
  • Chapter 3 considers attitudes toward Russian oil and gas access.
  • Chapter 4 explores confidence in Ukrainian President Volodymyr Zelenskyy to do the right thing regarding world affairs.
  • Chapter 5 looks at overall opinion of NATO among member nations and Sweden.
  • Chapter 6 examines confidence in Putin and Zelenskyy, paired with views of U.S. President Joe Biden, French President Emmanuel Macron and German Chancellor Olaf Scholz.




Return-to-Office Orders: A Survey Analysis of Employment Impacts

How did employers expect return-to-office (RTO) orders to affect employment? Were those expectations correct? We use special questions from the Richmond Fed business surveys to shed light on these questions. Overall, RTO orders were expected to reduce employment, but there was both substantial uncertainty and heterogeneity in expectations. Some employers even expected that RTO would increase employment. Ex post, employers believe RTO orders had a muted effect on employment. We find that the service sector was more likely to both issue RTO orders and expect and experience a reduction in employment.

The COVID-19 pandemic changed the way that both employers and employees think about the location of work. 1 The advent of remote work en masse in 2020 has been followed by a gradual implementation of requiring workers to work from the office, at least for some of their workweek. These forced return-to-office (RTO) orders have come with controversy: Many employers have implemented these policies, while many employees have resisted.

In this article, we attempt to shed light on the effects of RTO by reporting on special questions we asked in the March Richmond Fed business surveys . Specifically, these questions shed light on both the anticipated and realized employment outcomes of RTO orders from the employer's perspective. We find that uncertainty in the decision-making process was prevalent, but also that realized outcomes were generally muted. RTO did have an expected and actual negative effect on employment, but only in some sectors and for some employers. For others, RTO was a means of increasing employment. Our results highlight the large uncertainty in the pandemic, the heterogeneity of firms and the large heterogeneity of workers.

Why Examine the Impacts of RTO Orders?

This survey builds on a recent literature investigating the implications of remote work for workers, businesses and local economies . Uniquely, it attempts to discern how business leaders anticipated RTO policies would impact their firms as well as the actual impact on employment within their firms. Although there is work evaluating the benefits and costs to employers in terms of productivity or labor/non-labor costs, 2 there has been little work to understand the firm-by-firm implication of articulating and enforcing an RTO order.

Research indicates that hybrid options are highly valued by employees , 3 but how many separations can be attributed to an RTO policy? There is evidence that managers value in-person work more than employees, 4 but does that result in actual separations when RTO orders are implemented? Our results suggest the effects of these policies were muted.

There is also evidence of wide variation in employee hybrid-work preferences and in their willingness to pay for the option to work from home 5 as well as evidence that the value workers place on the "amenity" of remote or hybrid work has implications for aggregate wage changes in the macroeconomy. 6 Our work indicates this heterogeneity in preferences may have dampened the effect of RTO orders on employment. Our results are consistent with a literature that is still relatively mixed about the net effect on employers and workers of remote or on-site policies.

Methodology

The Federal Reserve Bank of Richmond has surveyed CEOs and other business leaders across the Fifth Federal Reserve District 7 for almost 30 years, currently gathering around 200-250 responses per month. The survey panel underweights the smallest firms and, due to the history of the survey, manufacturing firms make up about one-third of respondents even though they make up a much smaller share of establishments in the Fifth District or the nation.

In addition to a series of questions about variables such as demand, employment and prices, respondents are commonly asked a set of ad hoc questions. Here, we focus on a set of questions asked in March 2024 regarding the extent to which respondents articulated and enforced a mandatory RTO policy and what they expected upon its implementation. Emily Corcoran reported on employers' general expectations for on-site work and how those expectations have changed. Here, by contrast, we focus on business leaders' expectations of RTO policy effects, providing insight into the anticipated and unanticipated employment effects of RTO orders. We begin by assessing whether each establishment implemented RTO. These results are tabulated in Table 1.

Overall, explicit RTO orders were relatively rare, with only 20 percent of respondents articulating RTO orders in the last three years. This small percentage is partly because 37 percent of respondents — many of them manufacturing firms — were fully on-site before the end of 2020, and an additional 26 percent of respondents said RTO wasn't applicable for their companies. 8 Of the remaining companies, there is a roughly equal split between firms that have an explicit RTO policy (20 percent of the full sample) and those that do not (16 percent of the full sample).

We asked these 20 percent of employers about the expected consequences of issuing RTO orders. Did they expect workers to quit because of these policies? Were they sure about the effect on employment? We also asked employers about their assessment of realized outcomes. Did workers quit as anticipated? Did RTO help the firm recruit workers?

What Did Employers Expect, and What Actually Happened?

Perhaps surprisingly, we found two-thirds of employers expected no impact on (net) employment from RTO orders, while 16 percent were too unsure of the impact to answer (Table 2). Among the 18 percent that expected some impact, the anticipated outcome was split between those that expected a decrease in employment (11 percent) and those that expected an increase (7 percent).

Why might employment increase? One possibility derives from employees feeling more connected to their co-workers with greater mentoring opportunities when in the office. 9 This could reduce quitting and improve hiring, as one survey respondent reported that, "...the employees that [formerly] chose to work remotely decided that they were more productive in the office. We are [now] 90+ percent in the office."

Additionally, RTO orders have often been hybrid, 10 potentially allowing the benefits of office culture to be obtained without sacrificing all of the flexibility associated with remote work.

We also asked employers about their evaluation of outcomes, and the results are given in Table 3. Here, a greater percentage reported no impact (82 percent), while 4 percent assessed that RTO had decreased employment, and 4 percent assessed that RTO had actually increased employment. (Nine percent were still unsure.)

Sectoral level analysis reveals employment impacts (both expected and realized) were concentrated in the service sector. In manufacturing, no firms concretely expected a change in employment (though some were unsure), and ex post they believe RTO did not cause them to lose workers. In services, however, only 59 percent expected no impact, while 16 percent expected a negative impact on employment. Ex post, impact on employment was less than expected.
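As a rough sketch of how such sector-level tabulations might be produced from survey microdata (the records and column names below are hypothetical, not the Richmond Fed's actual data), row-normalized crosstabs give the share of firms in each expectation or outcome category, by sector.

```python
# Hypothetical sketch: tabulating expected vs. realized RTO employment
# impacts by sector from survey microdata. Records and labels are made up.
import pandas as pd

responses = pd.DataFrame({
    "sector":   ["Manufacturing", "Services", "Services", "Services", "Manufacturing", "Services"],
    "expected": ["No impact", "Decrease", "No impact", "Unsure", "Unsure", "Increase"],
    "realized": ["No impact", "No impact", "No impact", "Increase", "No impact", "No impact"],
})

# Row-normalized crosstabs: share of respondents in each category, by sector.
expected_shares = pd.crosstab(responses["sector"], responses["expected"], normalize="index")
realized_shares = pd.crosstab(responses["sector"], responses["realized"], normalize="index")
print((expected_shares * 100).round(1))
print((realized_shares * 100).round(1))
```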

While our analysis is suggestive, there are a few limitations. Foremost, our effective sample size was small, meaning some of these results could be driven by sampling error. Second, it has been years since some employers implemented RTO policies, so their memories of their expectations could be inaccurate. Third, our survey did not control for any other firm changes — such as changes in wages or product demand — that could confound our findings. Fourth, although our findings provide insight into net employment gains and losses, they do not speak to hiring and firing separately. 11

With these caveats in mind, however, our results show that RTO — while still a common topic of conversation — is not necessarily important to employers' and workers' employment decisions. Concerns about employment effects ex ante mostly did not materialize. Employment effects that did materialize were concentrated in services and resulted in a net gain of employees in some cases, rather than a loss.

Grey Gordon is a senior economist and Sonya Ravindranath Waddell is a vice president and economist, both in the Research Department of the Federal Reserve Bank of Richmond. The authors thank Jason Kosakow for helping to develop and execute the survey and for providing the tabulations underlying this analysis and thank RC Balaban, Zach Edwards and Claudia Macaluso for providing feedback on an earlier draft.

See, for example, the 2023 paper " The Evolution of Work From Home " by Jose Maria Barrero, Nicholas Bloom and Steven Davis.

See, for example, the 2024 working paper " The Big Shift in Working Arrangements: Eight Ways Unusual " by Steven Davis.

See, for example, the 2023 working paper " How Hybrid Working From Home Works Out " by Nicholas Bloom, Ruobing Han and James Liang.

See the previously cited paper " How Hybrid Working From Home Works Out ."

See, for example, the 2021 working paper " Why Working From Home Will Stick " by Jose Maria Barrero, Nicholas Bloom and Steven Davis.

See, for example, the 2024 working paper " Job Amenity Shocks and Labor Reallocation (PDF) " by Sadhika Bagga, Lukas Mann, Aysegul Sahin and Giovanni Violante.

The Fifth District comprises the District of Columbia, Maryland, North Carolina, South Carolina, Virginia and most of West Virginia.

Those who answered "not applicable" are presumably firms where work is necessarily done in person.

See, for example, the 2023 article " About a Third of U.S. Workers Who Can Work From Home Now Do So All the Time " by Kim Parker.

The previously cited article by Emily Corcoran noted that 38 percent of firms are in the office in between one and four days a week.

See the 2022 article " Changing Recruiting Practices and Methods in the Tight Labor Market " by Claudia Macaluso and Sonya Ravindranath Waddell for an analysis of how hiring practices have changed in the tight labor market that has prevailed since 2020.

This article may be photocopied or reprinted in its entirety. Please credit the authors, source, and the Federal Reserve Bank of Richmond and include the italicized statement below.

Views expressed in this article are those of the authors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.


2024 Theses Doctoral

Artificial Intelligence vs. Human Coaches: A Mixed Methods Randomized Controlled Experiment on Client Experiences and Outcomes

Barger, Amber

The rise of artificial intelligence (AI) challenges us to explore whether human-to-human relationships can extend to AI, potentially reshaping the future of coaching. The purpose of this study was to examine client perceptions of being coached by a simulated AI coach, who was embodied as a vocally conversational live-motion avatar, compared to client perceptions of a human coach. It explored if and how client ratings of coaching process measures and outcome measures aligned between the two coach treatments. In this mixed methods randomized controlled trial (RCT), 81 graduate students enrolled in the study and identified a personally relevant goal to pursue. The study deployed an alternative-treatments between-subjects design, with one-third of participants receiving coaching from simulated AI coaches, another third engaging with seasoned human coaches, and the rest forming the control group. Both treatment groups had one 60-minute session guided by the CLEAR (contract, listen, explore, action, review) coaching model to support each person to gain clarity about their goal and identify specific behaviors that could help each make progress towards their goal. Quantitative data were captured through three surveys and qualitative input was captured through open-ended survey questions and 27 debrief interviews. The study utilized a Wizard of Oz technique from human-computer interaction research, ingeniously designed to sidestep the rapid obsolescence of technology by simulating an advanced AI coaching experience where participants unknowingly interacted with professional human coaches, enabling the assessment of responses to AI coaching in the absence of fully developed autonomous AI systems. The aim was to glean insights into client reactions to a future, fully autonomous AI with the expert capabilities of a human coach. Contrary to expectations from previous literature, participants did not rate professional human coaches higher than simulated AI coaches in terms of working alliance, session value, or outcomes, which included self-rated competence and goal achievement. In fact, both coached groups made significant progress compared to the control group, with participants convincingly engaging with their respective coaches, as confirmed by a novel believability index. The findings challenge prevailing assumptions about human uniqueness in relation to technology. The rapid advancement of AI suggests a revolutionary shift in coaching, where AI could take on a central and surprisingly effective role, redefining what we thought only human coaches could do and reshaping their role in the age of AI.

  • Adult education
  • Artificial intelligence--Educational applications
  • Graduate students
  • Educational technology--Evaluation
  • Education, Higher--Technological innovations
  • Education, Higher--Effect of technological innovations on

This item is currently under embargo. It will be available starting 2029-05-14.


IMAGES

  1. Master Survey Design: A 10-step Guide with Examples

  2. 15 Research Methodology Examples (2024)

  3. A Comprehensive Guide to Survey Research Methodologies

  4. How To Do Survey Research: The Process Flowchart

  5. 12 Questionnaire Design Tips for Successful Surveys

  6. Survey Research: Definition, Examples & Methods

VIDEO

  1. Mastering Survey Methodology

  2. Webinar Series on Survey Methodology

  3. Survey Revision with PYQ JKSSB JE|SSC JE|JKPSC|

  4. Webinar Series on Survey Methodology

  5. Types of Survey Research

  6. Webinar Series on Survey Methodology

COMMENTS

  1. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  2. Survey Research

    Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection. Interactive voice response surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer ...

  3. Understanding and Evaluating Survey Research

    Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative ...

  4. Survey Research: Definition, Examples and Methods

    Survey Research Definition. Survey Research is defined as the process of conducting research using surveys that researchers send to survey respondents. The data collected from surveys is then statistically analyzed to draw meaningful research conclusions. In the 21st century, every organization's eager to understand what their customers think ...

  5. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout. Distribute the survey.

  6. PDF Question and Questionnaire Design

    the ordering of questions within a questionnaire and then discuss methods for testing and evaluating questions and questionnaires. Finally, we offer two more general ... vast majority of survey questions are closed, some open questions play prominent roles in survey research, such as those about the most important problem facing the country.

  7. Survey Research: Definition, Examples & Methods

    Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall.. As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions.

  8. Survey methodology

    Survey methodology is "the study of survey methods". [1] As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy ...

  9. The SAGE Handbook of Survey Methodology

    The SAGE Handbook of Survey Methodology is a valuable compendium and reference reflecting a broad range of expertise from the survey world and beyond.The authors of the 43 chapters represent a diverse group of institutions, mainly from the United States and Europe, and bring a deep expertise on their given "assignments. For those whose research depends on the quality of survey data ...

  10. Writing Survey Questions

    [View more Methods 101 Videos]. An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would "favor or oppose taking military action in Iraq to end Saddam Hussein's rule," 68% said they favored military action while 25% said they opposed military action.

  11. A quick guide to survey research

    Directed questions. After settling on your research goal and beginning to design a questionnaire, the main considerations are the method of data collection, the survey instrument and the type of question you are going to ask. Methods of data collection include personal interviews, telephone, postal or electronic (Table 1).

  12. Survey Research: Definition, Types & Methods

    Descriptive research is the most common and conclusive form of survey research due to its quantitative nature. Unlike exploratory research methods, descriptive research utilizes pre-planned, structured surveys with closed-ended questions. It's also deductive, meaning that the survey structure and questions are determined beforehand based on existing theories or areas of inquiry.

  13. Surveys & Questionnaires

    Survey methodologies therefore cover a range of considerations including sampling, research instrument design, improving response rates, ensuring quality in data, and methods of analysis (Groves et al., 2011). One common question format is to collect quantitative data alongside qualitative questions.

  14. Survey research

    Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, survey as a formal research method was pioneered in the 1930-40s by sociologist Paul Lazarsfeld to examine the effects of the ...

  15. PDF Fundamentals of Survey Research Methodology

    The survey is then constructed to test this model against observations of the phenomena. In contrast to survey research, a survey is simply a data collection tool for carrying out survey research. Pinsonneault and Kraemer (1993) defined a survey as a "means for gathering information about the characteristics, actions, or opinions of a ...

  16. Survey Methods: Definition, Types, and Examples

    A survey method is a process, tool, or technique that you can use to gather information in research by asking questions to a predefined group of people. Typically, it facilitates the exchange of information between the research participants and the person or organization carrying out the research. Survey methods can be qualitative or ...

  17. Questionnaires

    Questionnaires can be classified as both, quantitative and qualitative method depending on the nature of questions. Specifically, answers obtained through closed-ended questions (also called restricted questions) with multiple choice answer options are analyzed using quantitative methods. Research findings in this case can be illustrated using ...

  18. PDF Effective survey design for research: Asking the right questions to get

    Once you develop your research question(s), consider if a survey tool will capture the information you need. Like any data collection method, survey research has benefits and drawbacks (Table 1). Consider Table 1 within the context of our FLC example: We are interested in participants' perceptions and how they change over time.

  19. A tutorial on methodological studies: the what, when, how and why

    However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. ... "Systematic survey" may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a ...

  20. Survey Research

    Types of survey research. Survey research is like an artist's palette, offering a variety of types to suit your unique research needs. Each type paints a different picture, giving us fascinating insights into the world around us. Cross-Sectional Surveys: Capture a snapshot of a population at a specific moment in time.

  21. Designing a Questionnaire for a Research Paper: A Comprehensive Guide

    Research Methodology; Survey/Questionnaire Development. International Journal of Academic Research in Management, 6 (1), 1-5. Taherdoost, H. (2019). [12] Taherdoost, H. (2019). What Is the Best ...

  22. 10 Different Types of Survey Methods + Pros & Cons

    One of the key features of the Delphi survey research is that participants are unknown to each other, thereby eliminating influence. 18. AI Surveys. Artificial intelligence is the latest types of survey method. Using AI, researchers allow the technology to ask survey questions.

  23. Case Study Research Method in Psychology

    Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews). The case study research method originated in clinical medicine (the case history, i.e., the patient's personal history). In psychology, case studies are ...

  24. Methodology

    Methodology. By Rakesh Kochhar. The data for the historical analysis in this report is derived from the Annual Social and Economic Supplements (ASEC) of the Current Population Survey (CPS), which are conducted in March of every year. The specific files used in this report are from March 1971 to March 2023 and contain data on the annual income ...

  25. Questionnaire survey and analysis of drug clinical research

    e23257 Background: Clinical research competence determines the quality of clinical research and the reliability of research findings and is vital in the development of new antitumor drugs and the diagnosis and treatment of breast cancer. Our study aimed to explore the clinical research implementation capabilities of breast cancer treatment departments in China and to analyze the differences ...

  26. Russia and Putin Seen Negatively Worldwide, While ...

    Read more about the ATP's methodology. Here are the questions used for the report, along with responses, and the survey methodology. CORRECTION (Sept. 8, 2023): A previous version of this report included an incorrect chart. ... These findings come from a new Pew Research Center survey conducted from Feb. 20 to May 22, 2023, among 30,861 ...

  27. Return-to-Office Orders: A Survey Analysis of Employment Impacts

    The row "we articulated a return-to-office policy" aggregates across those who implemented an RTO order in 2021, 2022, 2023 or 2024. Source: Federal Reserve Bank of Richmond business surveys (March 2024). We asked these 20 percent of employers about the expected consequences of issuing RTO orders. Did they expect workers to quit because of ...

  28. Artificial Intelligence vs. Human Coaches: A Mixed Methods Randomized

    In this mixed methods randomized controlled trial (RCT), 81 graduate students enrolled in the study and identified a personally relevant goal to pursue. ... Quantitative data were captured through three surveys and qualitative input was captured through open-ended survey questions and 27 debrief interviews. The study utilized a Wizard of Oz ...