Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis

Albine Moser

a Faculty of Health Care, Research Centre Autonomy and Participation of Chronically Ill People, Zuyd University of Applied Sciences, Heerlen, The Netherlands

b Faculty of Health, Medicine and Life Sciences, Department of Family Medicine, Maastricht University, Maastricht, The Netherlands

Irene Korstjens

c Faculty of Health Care, Research Centre for Midwifery Science, Zuyd University of Applied Sciences, Maastricht, The Netherlands

In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By ‘novice’ we mean Master’s students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and become flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research.

Key points on sampling, data collection and analysis

  • The data collection plan needs to be broadly defined and open during data collection.
  • Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used.
  • Data saturation determines sample size and is different for each study.
  • The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions.
  • Analyses of ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory or a descriptive summary, respectively.

Introduction

This article is the third paper in a series of four articles aiming to provide practical guidance for qualitative research. In the introductory paper, we described the objective, nature and outline of the series [1]. Part 2 of the series focused on context, research questions and design of qualitative research [2]. In this paper, Part 3, we address frequently asked questions (FAQs) about sampling, data collection and analysis.

What is a sampling plan?

A sampling plan is a formal plan specifying a sampling method, a sample size, and a procedure for recruiting participants (Box 1) [3]. A qualitative sampling plan describes how many observations, interviews, focus group discussions or cases are needed to ensure that the findings will contribute rich data. In quantitative studies, the sampling plan, including the sample size, is determined in detail beforehand, whereas qualitative research projects start with a broadly defined sampling plan. This plan enables you to include a variety of settings and situations and a variety of participants, including negative or extreme cases, to obtain rich data. The key features of a qualitative sampling plan are as follows. First, participants are always sampled deliberately. Second, the sample size differs for each study and is usually small. Third, the sample emerges during the study: based on further questions raised in the process of data collection and analysis, inclusion and exclusion criteria might be altered, or the sampling sites might be changed. Finally, the sample is determined by conceptual requirements and not primarily by representativeness. You therefore need to provide a description of and rationale for your choices in the sampling plan. The sampling plan is appropriate when the selected participants and settings are sufficient to provide the information needed for a full understanding of the phenomenon under study.

Box 1. Sampling strategies in qualitative research (based on Polit & Beck [3]).

  • Purposive sampling: selection of participants based on the researchers’ judgement about what potential participants will be most informative.
  • Criterion sampling: selection of participants who meet pre-determined criteria of importance.
  • Theoretical sampling: selection of participants based on the emerging findings to ensure adequate representation of theoretical concepts.
  • Convenience sampling: selection of participants who are easily available.
  • Snowball sampling: selection of participants through referrals by previously selected participants or persons who have access to potential participants.
  • Maximum variation sampling: selection of participants based on a wide range of variation in backgrounds.
  • Extreme case sampling: purposeful selection of the most unusual cases.
  • Typical case sampling: selection of the most typical or average participants.
  • Confirming and disconfirming sampling: sampling of confirming and disconfirming cases to support checking or challenging emerging trends or patterns in the data.

Some practicalities: a critical first step is to select settings and situations where you have access to potential participants. Subsequently, the best strategy is to recruit participants who can provide the richest information. Such participants have to be knowledgeable about the phenomenon, able to articulate and reflect on it, and motivated to communicate with you at length and in depth. Finally, you should review the sampling plan regularly and adapt it when necessary.

What sampling strategies can I use?

Sampling is the process of selecting or searching for situations, context and/or participants who provide rich data of the phenomenon of interest [ 3 ]. In qualitative research, you sample deliberately, not at random. The most commonly used deliberate sampling strategies are purposive sampling, criterion sampling, theoretical sampling, convenience sampling and snowball sampling. Occasionally, the ‘maximum variation,’ ‘typical cases’ and ‘confirming and disconfirming’ sampling strategies are used. Key informants need to be carefully chosen. Key informants hold special and expert knowledge about the phenomenon to be studied and are willing to share information and insights with you as the researcher [ 3 ]. They also help to gain access to participants, especially when groups are studied. In addition, as researcher, you can validate your ideas and perceptions with those of the key informants.

What is the connection between sampling types and qualitative designs?

The ‘big three’ approaches of ethnography, phenomenology, and grounded theory use different types of sampling.

In ethnography, the main strategy is purposive sampling of a variety of key informants, who are most knowledgeable about a culture and are able and willing to act as representatives in revealing and interpreting the culture. For example, an ethnographic study on the cultural influences of communication in maternity care will recruit key informants from among a variety of parents-to-be, midwives and obstetricians in midwifery care practices and hospitals.

Phenomenology uses criterion sampling, in which participants meet predefined criteria. The most prominent criterion is the participant’s experience with the phenomenon under study. The researchers look for participants who have shared an experience, but vary in characteristics and in their individual experiences. For example, a phenomenological study on the lived experiences of pregnant women with psychosocial support from primary care midwives will recruit pregnant women varying in age, parity and educational level in primary midwifery practices.

Grounded theory usually starts with purposive sampling and later uses theoretical sampling to select participants who can best contribute to the developing theory. As theory construction takes place concurrently with data collection and analysis, the theoretical sampling of new participants also proceeds along with the emerging theoretical concepts. For example, one grounded theory study tested several theoretical constructs to build a theory on autonomy in diabetes patients [4]. In developing the theory, the researchers started by purposefully sampling participants with diabetes who differed in age, onset of diabetes and social roles, for example, employees, housewives, and retired people. After the first analysis, the researchers continued with theoretical sampling, for example, of participants who differed in the treatment they received, had different degrees of care dependency, or received care from a general practitioner (GP), at a hospital or from a specialist nurse.

In addition to the ‘big three’ approaches, content analysis is frequently applied in primary care research, and very often uses purposive, convenience, or snowball sampling. For instance, a study on people’s choice of a hospital for elective orthopaedic surgery used snowball sampling [5]. One elderly person in the private network of one of the researchers personally approached potential respondents in her social network by means of personal invitations (including letters). In turn, respondents were asked to pass on the invitation to other eligible candidates.

Sampling also depends on the characteristics of the setting, e.g., access, time, vulnerability of participants, and the different types of stakeholders. The setting where sampling is carried out is described in detail to provide a thick description of the context, thereby enabling the reader to make a transferability judgement (see Part 3: transferability). Sampling also affects the data analysis, in which you continue making decisions about whom or what situations to sample next, based on what you consider still missing to obtain the information needed for rich findings (see Part 1: emergent design). Another point of attention is the sampling of ‘invisible groups’ or vulnerable people. Sampling these participants requires applying multiple sampling strategies and allowing more time for sampling and recruitment at the project planning stage [6].

How do sample size and data saturation interact?

A guiding principle in qualitative research is to sample only until data saturation has been achieved. Data saturation means the collection of qualitative data to the point where a sense of closure is attained because new data yield redundant information [ 3 ].

Data saturation is reached when no new analytical information arises anymore, and the study provides maximum information on the phenomenon. In quantitative research, by contrast, the sample size is determined by a power calculation. The usually small sample size in qualitative research depends on the information richness of the data, the variety of participants (or other units), the broadness of the research question and the phenomenon, the data collection method (e.g., individual or group interviews) and the type of sampling strategy. Mostly, you and your research team will jointly decide when data saturation has been reached, and hence whether the sampling can be ended and the sample size is sufficient. The most important criterion is the availability of enough in-depth data showing the patterns, categories and variety of the phenomenon under study. You review the analysis, findings, and the quality of the participant quotes you have collected, and then decide whether sampling might be ended because of data saturation. In many cases, you will choose to carry out two or three more observations or interviews or an additional focus group discussion to confirm that data saturation has been reached.

When designing a qualitative sampling plan, we (the authors) work with estimates. We estimate that ethnographic research requires roughly 25–50 interviews and observations, including about four to six focus group discussions, while phenomenological studies require fewer than 10 interviews, grounded theory studies 20–30 interviews, and content analysis 15–20 interviews or three to four focus group discussions. However, these numbers are very tentative and should be considered carefully before use. Furthermore, a qualitative design does not always mean a small sample. Bigger sample sizes might occur, for example, in content analysis, in studies employing rapid qualitative approaches, and in large or longitudinal qualitative studies.

Data collection

What methods of data collection are appropriate?

The most frequently used data collection methods are participant observation, interviews, and focus group discussions. Participant observation is a method of data collection through the participation in and observation of a group or individuals over an extended period of time [ 3 ]. Interviews are another data collection method in which an interviewer asks the respondents questions [ 6 ], face-to-face, by telephone or online. The qualitative research interview seeks to describe the meanings of central themes in the life world of the participants. The main task in interviewing is to understand the meaning of what participants say [ 5 ]. Focus group discussions are a data collection method with a small group of people to discuss a given topic, usually guided by a moderator using a questioning-route [ 8 ]. It is common in qualitative research to combine more than one data collection method in one study. You should always choose your data collection method wisely. Data collection in qualitative research is unstructured and flexible. You often make decisions on data collection while engaging in fieldwork, the guiding questions being with whom, what, when, where and how. The most basic or ‘light’ version of qualitative data collection is that of open questions in surveys. Box 2 provides an overview of the ‘big three’ qualitative approaches and their most commonly used data collection methods.

Box 2. Qualitative data collection methods.

  • Participant observation. Definition: participation in and observation of people or groups. Aim: to obtain a close and intimate familiarity with a given group of individuals and their practices through intensive involvement with people in their environment, usually over an extended period. Suitability: suitable for ethnography; very rare in grounded theory; sometimes used in content analysis.
  • Face-to-face in-depth interviews. Definition: a conversation in which the researcher poses questions and the participant provides answers, face-to-face, by telephone or online. Aim: to elicit the participant’s experiences, perceptions, thoughts and feelings. Suitability: suitable for ethnography, phenomenology, grounded theory and content analysis.
  • Focus group discussion. Definition: an interview with a group of participants to answer questions on a specific topic, face-to-face or online; the participants interact with each other. Aim: to examine different experiences, perceptions, thoughts and feelings among various participants or parties. Suitability: suitable for ethnography; sometimes used in grounded theory; suitable for content analysis.

What role should I adopt when conducting participant observations?

What is important is to immerse yourself in the research setting, to enable you to study it from the inside. There are four types of researcher involvement in observations, and in your qualitative study, you may apply all four. In the first type, as ‘complete participant’, you become part of the setting and play an insider role, just as you do in your own work setting. This role might be appropriate when studying persons who are difficult to access. The second type is ‘active participation’: you gain access to a particular setting and observe the group under study. You can move around at will and can observe in detail and depth and in different situations. The third role is ‘moderate participation’. You do not actually work in the setting you wish to study but are located there as a researcher. You might adopt this role when you are not affiliated with the care setting you wish to study. The fourth role is that of the ‘complete observer’, in which you merely observe (a bystander role) and do not participate in the setting at all. Whichever role you choose, you cannot perform any observations without access to the care setting. Such access might be easily obtained when you collect data by observation in your own primary care setting. In some cases, you might observe other care settings that are relevant to primary care, for instance the discharge procedure for vulnerable elderly people from hospital to primary care.

How do I perform observations?

It is important to decide what to focus on in each individual observation. The focus of observations is important because you can never observe everything, and you can only observe each situation once. Your focus might differ between observations. Each observation should provide you with answers regarding ‘Who do you observe?’, ‘What do you observe?’, ‘Where does the observation take place?’, ‘When does it take place?’, ‘How does it happen?’, and ‘Why does it happen as it happens?’ Observations are not static but proceed in three stages: descriptive, focused, and selective. Descriptive means that you observe, on the basis of general questions, everything that goes on in the setting. Focused observation means that you observe certain situations for some time, with some areas becoming more prominent. Selective means that you observe highly specific issues only. For example, if you want to observe the discharge procedure for vulnerable elderly people from hospitals to general practice, you might begin with broad observations to get to know the general procedure. This might involve observing several different patient situations. You might find that the involvement of primary care nurses deserves special attention, so you might then focus on the roles of hospital staff and primary care nurses, and their interactions. Finally, you might want to observe only the specific situations where hospital staff and primary care nurses exchange information. You take field notes from all these observations and add your own reflections on the situations you observed. You jot down words, whole sentences or parts of situations, and your reflections, on a piece of paper. After the observations, the field notes need to be written up and transcribed immediately so that detailed descriptions can be included.


What are the general features of an interview?

Interviews involve interactions between the interviewer(s) and the respondent(s) based on interview questions. Individual, or face-to-face, interviews should be distinguished from focus group discussions. The interview questions are written down in an interview guide [7] for individual interviews or a questioning route [8] for focus group discussions, with questions focusing on the phenomenon under study. Although the questions are written down in a planned sequence, in individual interviews the actual sequence depends on the respondents and how the interview unfolds. During the interview, as the conversation evolves, you go back and forth through the sequence of questions; it should be a dialogue, not a strict question-and-answer interview. In a focus group discussion, the sequence is intended to facilitate the interaction between the participants, and you might adapt it depending on how their discussion evolves. Working with an interview guide or questioning route enables you to collect information on specific topics from all participants. You are in control in the sense that you give direction to the interview, while the participants are in control of their answers. However, you need to be open-minded and recognize that some topics that are relevant for participants may not have been covered in your interview guide or questioning route and need to be added. During the data collection process, you develop the interview guide or questioning route further and revise it based on the analysis.

The interview guide and questioning route might include open and general as well as subordinate or detailed questions, probes and prompts. Probes are exploratory questions, for example, ‘Can you tell me more about this?’ or ‘Then what happened?’ Prompts are words and signs to encourage participants to tell more. Examples of stimulating prompts are eye contact, leaning forward and open body language.


What is a face-to-face interview?

A face-to-face interview is an individual interview, that is, a conversation between participant and interviewer. Interviews can focus on past or present situations, and on personal issues. Most qualitative studies start with open interviews to get a broad ‘picture’ of what is going on. You should not provide a great deal of guidance and avoid influencing the answers to fit ‘your’ point of view, as you want to obtain the participant’s own experiences, perceptions, thoughts, and feelings. You should encourage the participants to speak freely. As the interview evolves, your subsequent major and subordinate questions become more focused. A face-to-face or individual interview might last between 30 and 90 min.

Most interviews are semi-structured [3]. To prepare an interview guide that ensures a set of topics is covered with every participant, you might use a framework for constructing a semi-structured interview guide [10]: (1) identify the prerequisites for using a semi-structured interview and evaluate whether it is the appropriate data collection method; (2) retrieve and use previous knowledge to gain a comprehensive and adequate understanding of the phenomenon under study; (3) formulate a preliminary interview guide by operationalizing the previous knowledge; (4) pilot-test the preliminary interview guide to confirm the coverage and relevance of the content and to identify any need to reformulate questions; (5) complete the interview guide so that it is clear and logical and supports the collection of rich data.

The first few minutes of an interview are decisive. The participant wants to feel at ease before sharing his or her experiences. In a semi-structured interview, you would start with open questions related to the topic, which invite the participant to talk freely. The questions aim to encourage participants to tell their personal experiences, including feelings and emotions and often focus on a particular experience or specific events. As you want to get as much detail as possible, you also ask follow-up questions or encourage telling more details by using probes and prompts or keeping a short period of silence [ 6 ]. You first ask what and why questions and then how questions.

You need to be prepared for handling problems you might encounter, such as gaining access, dealing with multiple formal and informal gatekeepers, negotiating space and privacy for recording data, socially desirable answers from participants, reluctance of participants to tell their story, deciding on the appropriate role (emotional involvement), and exiting from fieldwork prematurely.

What is a focus group discussion and when can I use it?

A focus group discussion is a way to bring people together to discuss a specific topic of interest. The people participating in the focus group discussion share certain characteristics, e.g., professional background, or similar experiences, e.g., having diabetes. You use their interaction to collect the information you need on a particular topic. How deep the discussion goes depends on the extent to which focus group participants can stimulate each other in discussing and sharing their views and experiences. Focus group participants respond to you and to each other. Focus group discussions are often used to explore patients’ experiences of their condition and their interactions with health professionals, to evaluate programmes and treatments, to gain an understanding of health professionals’ roles and identities, to examine perceptions of professional education, or to obtain perspectives on primary care issues. A focus group discussion usually lasts 90–120 minutes.

You might use guidelines for developing a questioning route [9]: (1) brainstorm about possible topics you want to cover; (2) sequence the questioning: arrange general questions first and then more specific questions, and ask positive questions before negative questions; (3) phrase the questions: use open-ended questions, ask participants to think back and reflect on their personal experiences, avoid asking ‘why’ questions, keep questions simple, make your questions sound conversational, and be careful about giving examples; (4) estimate the time for each question, considering the complexity and category of the question, the participants’ level of expertise, the size of the focus group, and the amount of discussion you want related to the question; (5) obtain feedback from others (peers); (6) revise the questions based on the feedback; and (7) test the questions by doing a mock focus group discussion. All questions need to contribute to answering the research question about the phenomenon under study.

You need to be prepared to manage difficulties as they arise, for example, dominant participants during the discussion, little or no interaction and discussion between participants, participants who have difficulties sharing their real feelings about sensitive topics with others, and participants who behave differently when they are observed.

How should I compose a focus group and how many participants are needed?

The purpose of the focus group discussion determines the composition. Smaller groups might be more suitable for complex (and sometimes controversial) topics. Also, smaller focus groups give the participants more time to voice their views and provide more detailed information, while participants in larger focus groups might generate greater variety of information. In composing a smaller or larger focus group, you need to ensure that the participants are likely to have different viewpoints that stimulate the discussion. For example, if you want to discuss the management of obesity in a primary care district, you might want to have a group composed of professionals who work with these patients but also have a variety of backgrounds, e.g. GPs, community nurses, practice nurses in general practice, school nurses, midwives or dieticians.

Focus groups generally consist of 6–12 participants. Careful time management is important, since you have to determine how much time you want to devote to answering each question, and how much time is available for each individual participant. For example, if you have planned a focus group discussion lasting 90 minutes with eight participants, you might need 15 minutes for the introduction and the concluding summary. This leaves 75 minutes for asking questions, and if you have four questions, this allows roughly 18 minutes of speaking time per question. If all eight respondents participate in the discussion, this boils down to a little over two minutes of speaking time per respondent per question.
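To make this arithmetic explicit, here is a minimal Python sketch (not part of the original article); the function name and the example numbers are ours and purely illustrative.

```python
# Minimal sketch: splitting focus group time over questions and participants.
# The function name and numbers are illustrative, not prescribed by the article.

def focus_group_time_budget(total_min, overhead_min, n_questions, n_participants):
    """Return the rough speaking-time budget per question and per respondent."""
    discussion_min = total_min - overhead_min            # time left after intro and summary
    per_question = discussion_min / n_questions          # speaking time per question
    per_participant = per_question / n_participants      # rough share per respondent
    return {
        "discussion_min": discussion_min,
        "min_per_question": round(per_question, 1),
        "min_per_participant_per_question": round(per_participant, 1),
    }

# The example from the text: a 90-minute session, 15 minutes of introduction and
# summary, four questions and eight participants.
print(focus_group_time_budget(90, 15, 4, 8))
# {'discussion_min': 75, 'min_per_question': 18.8, 'min_per_participant_per_question': 2.3}
```

Plugging in your own session length, number of questions and group size quickly shows whether the questioning route is realistic before the discussion starts.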

How can I use new media to collect qualitative data?

New media are increasingly used for collecting qualitative data, for example, through online observations, online interviews and focus group discussions, and in the analysis of online sources. Data can be collected synchronously or asynchronously, with text messaging, video conferences, video calls, or immersive virtual worlds or games. Qualitative research is moving from ‘virtual’ to ‘digital’: virtual refers to approaches that import traditional data collection methods into the online environment, whereas digital refers to approaches that take advantage of the unique characteristics and capabilities of the Internet for research [10]. See Box 3 for further reading on interviews and focus group discussions.

Box 3. Further reading on interviews and focus group discussions, organized by topic: face-to-face interviews, online interviews, and focus group discussions.

Can I postpone my analysis until all data have been collected?

You cannot postpone the analysis, because an iterative approach and an emergent design are at the heart of qualitative research. This involves a process whereby you move back and forth between sampling, data collection and data analysis to accumulate rich data and interesting findings. The principle is that what emerges from the data analysis shapes subsequent sampling decisions. Immediately after the very first observation, interview or focus group discussion, you have to start the analysis and prepare your field notes.

Why is a good transcript so important?

First, transcripts of audiotaped interviews and focus group discussions, together with your field notes, constitute your major data sources. Transcripts are preferably made by trained and well-instructed transcribers. Usually, e.g., in ethnography, phenomenology, grounded theory, and content analysis, data are transcribed verbatim, which means that recordings are fully typed out and the transcripts are accurate and reflect the interview or focus group discussion experience. The most important aspects of transcribing are focusing on the participants’ words, transcribing all parts of the audiotape, and carefully revisiting the tape and rereading the transcript. In conversation analysis, non-verbal actions such as coughing, the length of pauses, emphasis and tone of voice need to be described in detail using a formal transcription system (the best known being G. Jefferson’s symbols).

To facilitate analysis, it is essential that you ensure and check that transcripts are accurate and reflect the totality of the interview, including pauses, punctuation and non-verbal data. To be able to make sense of qualitative data, you need to immerse yourself in the data and ‘live’ the data. In this process of incubation, you search the transcripts for meaning and essential patterns, and you try to collect legitimate and insightful findings. You familiarize yourself with the data by reading and rereading transcripts carefully and conscientiously, in search for deeper understanding.

Are there differences between the analyses in ethnography, phenomenology, grounded theory, and content analysis?

Ethnography, phenomenology, and grounded theory each have different analytical approaches, and you should be aware that each of these approaches has different schools of thought, which may also have integrated the analytical methods from other schools ( Box 4 ). When you opt for a particular approach, it is best to use a handbook describing its analytical methods, as it is better to use one approach consistently than to ‘mix up’ different schools.

Box 4. Analysis in ethnography, phenomenology, grounded theory, and content analysis.

  • Ethnography. Transcripts mainly from: observations, face-to-face interviews, focus group discussions, and field notes. Reading, notes and memos: reading through transcripts, classifying into overarching themes, adding marginal notes, assigning preliminary codes. Describing: social setting, actors, events. Ordering: themes, patterns and regularities. Interpreting: how the culture works. Findings: narrative offering a detailed description of a culture.
  • Phenomenology. Transcripts mainly from: face-to-face in-depth interviews. Reading, notes and memos: reading through transcripts, adding marginal notes, defining first codes. Describing: personal experience. Ordering: major and subordinate statements; units of meaning. Interpreting: development of the essence. Findings: narrative showing the essence of the lived experience.
  • Grounded theory. Transcripts mainly from: face-to-face in-depth interviews; rarely observations and sometimes focus group discussions. Reading, notes and memos: reading through transcripts, writing memos, assigning preliminary codes. Describing: open codes. Ordering: axial coding; selective coding. Interpreting: storyline about the social process. Findings: description of a theory, often using a visual model.
  • Content analysis. Transcripts mainly from: face-to-face and online in-depth interviews and focus group discussions; sometimes observations. Reading, notes and memos: reading through transcripts, adding marginal notes, assigning preliminary codes. Describing: initial codes. Ordering: descriptive categories and subcategories. Interpreting: main categories, sometimes exploratory. Findings: narrative summary of main findings.

In general, qualitative analysis begins with organizing data. Large amounts of data need to be stored in smaller and manageable units, which can be retrieved and reviewed easily. To obtain a sense of the whole, analysis starts with reading and rereading the data, looking at themes, emotions and the unexpected, taking into account the overall picture. You immerse yourself in the data. The most widely used procedure is to develop an inductive coding scheme based on actual data [ 11 ]. This is a process of open coding, creating categories and abstraction. In most cases, you do not start with a predefined coding scheme. You describe what is going on in the data. You ask yourself, what is this? What does it stand for? What else is like this? What is this distinct from? Based on this close examination of what emerges from the data you make as many labels as needed. Then, you make a coding sheet, in which you collect the labels and, based on your interpretation, cluster them in preliminary categories. The next step is to order similar or dissimilar categories into broader higher order categories. Each category is named using content-characteristic words. Then, you use abstraction by formulating a general description of the phenomenon under study: subcategories with similar events and information are grouped together as categories and categories are grouped as main categories. During the analysis process, you identify ‘missing analytical information’ and you continue data collection. You reread, recode, re-analyse and re-collect data until your findings provide breadth and depth.
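As an illustration only (the article does not prescribe any tool or notation), the clustering of labels into preliminary and higher-order categories described above can be kept in a very simple data structure. The text fragments, labels and category names in this Python sketch are invented examples.

```python
# Illustrative coding sheet: invented labels from open coding, clustered into
# preliminary categories and then abstracted into a main category.
from collections import defaultdict

# Open coding: hypothetical text fragments with the labels assigned to them.
coded_fragments = [
    ("I check my glucose before every meal", "self-monitoring"),
    ("The nurse decides my insulin dose", "dependence on professionals"),
    ("I adjust my dose when I exercise", "self-adjustment"),
    ("My GP never asks what I prefer", "not being heard"),
]

# Preliminary categories: similar labels clustered together (an interpretive step).
preliminary_categories = {
    "self-management": ["self-monitoring", "self-adjustment"],
    "professional control": ["dependence on professionals", "not being heard"],
}

# Abstraction: preliminary categories grouped under a main category.
main_categories = {"autonomy in daily care": list(preliminary_categories)}

# Retrieving all fragments that carry the labels of one preliminary category.
fragments_by_label = defaultdict(list)
for fragment, label in coded_fragments:
    fragments_by_label[label].append(fragment)

for label in preliminary_categories["self-management"]:
    print(label, "->", fragments_by_label[label])
```

A spreadsheet or CAQDAS package can of course serve the same purpose; the point is only that labels, categories and main categories form a small hierarchy that you keep revising as the analysis proceeds.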

Throughout the qualitative study, you reflect on what you see or do not see in the data. It is common to write ‘analytic memos’ [ 3 ], write-ups or mini-analyses about what you think you are learning during the course of your study, from designing to publishing. They can be a few sentences or pages, whatever is needed to reflect upon: open codes, categories, concepts, and patterns that might be emerging in the data. Memos can contain summaries of major findings and comments and reflections on particular aspects.

In ethnography, analysis begins from the moment that the researcher sets foot in the field. The analysis involves continually looking for patterns in the behaviours and thoughts of the participants in everyday life, in order to obtain an understanding of the culture under study. When comparing one pattern with another and analysing many patterns simultaneously, you may use maps, flow charts, organizational charts and matrices to illustrate the comparisons graphically. The outcome of an ethnographic study is a narrative description of a culture.

In phenomenology, analysis aims to describe and interpret the meaning of an experience, often by identifying essential subordinate and major themes. You search for common themes featuring within an interview and across interviews, sometimes involving the study participants or other experts in the analysis process. The outcome of a phenomenological study is a detailed description of themes that capture the essential meaning of a ‘lived’ experience.

Grounded theory generates a theory that explains how a basic social problem that emerged from the data is processed in a social setting. Grounded theory uses the ‘constant comparison’ method, which involves comparing elements that are present in one data source (e.g., an interview) with elements in another source, to identify commonalities. The steps in the analysis are known as open, axial and selective coding. Throughout the analysis, you document your ideas about the data in methodological and theoretical memos. The outcome of a grounded theory study is a theory.

Descriptive generic qualitative research is defined as research designed to produce a low inference description of a phenomenon [ 12 ]. Although Sandelowski maintains that all research involves interpretation, she has also suggested that qualitative description attempts to minimize inferences made in order to remain ‘closer’ to the original data [ 12 ]. Descriptive generic qualitative research often applies content analysis. Descriptive content analysis studies are not based on a specific qualitative tradition and are varied in their methods of analysis. The analysis of the content aims to identify themes, and patterns within and among these themes. An inductive content analysis [ 11 ] involves breaking down the data into smaller units, coding and naming the units according to the content they present, and grouping the coded material based on shared concepts. They can be represented by clustering in treelike diagrams. A deductive content analysis [ 11 ] uses a theory, theoretical framework or conceptual model to analyse the data by operationalizing them in a coding matrix. An inductive content analysis might use several techniques from grounded theory, such as open and axial coding and constant comparison. However, note that your findings are merely a summary of categories, not a grounded theory.

Analysis software can support you in managing your data, for example by helping to store, annotate and retrieve texts, to locate words, phrases and segments of data, to name and label, to sort and organize, to identify data units, to prepare diagrams and to extract quotes. Still, as the researcher you do the analytical work: looking at what is in the data, making decisions about assigning codes, and identifying categories, concepts and patterns. The computer assisted qualitative data analysis (CAQDAS) website provides support for making informed choices about analysis software and courses: http://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/support/choosing . See Box 5 for further reading on qualitative analysis.
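As a toy illustration of the kind of text retrieval such packages offer (this is not a description of any particular CAQDAS product), the short Python sketch below prints keyword-in-context snippets from a folder of transcript files; the folder name and search term are hypothetical.

```python
# Toy keyword-in-context retrieval over transcript files, illustrating the sort
# of "locate words and phrases" support that analysis software provides.
# The folder name ("transcripts") and the search term are hypothetical.
import re
from pathlib import Path

def keyword_in_context(text, term, window=40):
    """Yield short snippets of text surrounding each occurrence of `term`."""
    for match in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
        start = max(match.start() - window, 0)
        end = min(match.end() + window, len(text))
        yield text[start:end].replace("\n", " ")

for path in Path("transcripts").glob("*.txt"):
    for snippet in keyword_in_context(path.read_text(encoding="utf-8"), "discharge"):
        print(f"{path.name}: ...{snippet}...")
```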

Box 5. Further reading on qualitative analysis.

Ethnography
  • Atkinson P, Coffey A, Delamont S, Lofland J, Lofland L. Handbook of ethnography. Thousand Oaks (CA): Sage; 2001.
  • Spradley J. The ethnographic interview. New York (NY): Holt, Rinehart & Winston; 1979.
  • Spradley J. Participant observation. New York (NY): Holt, Rinehart & Winston; 1980.

Phenomenology
  • Colaizzi PF. Psychological research as the phenomenologist views it. In: Valle R, King M, editors. Existential-phenomenological alternatives for psychology. New York (NY): Oxford University Press; 1978. p. 41-78.
  • Smith JA, Flowers P, Larkin M. Interpretative phenomenological analysis: theory, method and research. London: Sage; 2010.

Grounded theory
  • Charmaz K. Constructing grounded theory. 2nd ed. Thousand Oaks (CA): Sage; 2014.
  • Corbin J, Strauss A. Basics of qualitative research: techniques and procedures for developing grounded theory. Los Angeles (CA): Sage; 2008.

Content analysis
  • Elo S, Kääriäinen M, Kanste O, Pölkki T, Utriainen K, Kyngäs H. Qualitative content analysis: a focus on trustworthiness. SAGE Open. 2014:1-10. doi:10.1177/2158244014522633.
  • Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2008;62:107-115.
  • Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277-1288.

The next and final article in this series, Part 4, will focus on trustworthiness and publishing qualitative research [ 13 ].

Acknowledgements

The authors thank the following junior researchers who have been participating for the last few years in the so-called ‘Think tank on qualitative research’ project, a collaborative project between Zuyd University of Applied Sciences and Maastricht University, for their pertinent questions: Erica Baarends, Jerome van Dongen, Jolanda Friesen-Storms, Steffy Lenzen, Ankie Hoefnagels, Barbara Piskur, Claudia van Putten-Gamel, Wilma Savelberg, Steffy Stans, and Anita Stevens. The authors are grateful to Isabel van Helmond, Joyce Molenaar and Darcy Ummels for proofreading our manuscripts and providing valuable feedback from the ‘novice perspective’.

Disclosure statement

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.



Qualitative Research 101: Interviewing

5 Common Mistakes To Avoid When Undertaking Interviews

By: David Phair (PhD) and Kerryn Warren (PhD) | March 2022

Undertaking interviews is potentially the most important step in the qualitative research process. If you don’t collect useful, useable data in your interviews, you’ll struggle through the rest of your dissertation or thesis.  Having helped numerous students with their research over the years, we’ve noticed some common interviewing mistakes that first-time researchers make. In this post, we’ll discuss five costly interview-related mistakes and outline useful strategies to avoid making these.

Overview: 5 Interviewing Mistakes

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind

1. Not having a clear interview strategy

The first common mistake that we’ll look at is that of starting the interviewing process without having first come up with a clear interview strategy or plan of action. While it’s natural to be keen to get started engaging with your interviewees, a lack of planning can result in a mess of data and inconsistency between interviews.

There are several design choices to decide on and plan for before you start interviewing anyone. Some of the most important questions you need to ask yourself before conducting interviews include:

  • What are the guiding research aims and research questions of my study?
  • Will I use a structured, semi-structured or unstructured interview approach?
  • How will I record the interviews (audio or video)?
  • Who will be interviewed and by whom ?
  • What ethics and data law considerations do I need to adhere to?
  • How will I analyze my data? 

Let’s take a quick look at some of these.

The core objective of the interviewing process is to generate useful data that will help you address your overall research aims. Therefore, your interviews need to be conducted in a way that directly links to your research aims, objectives and research questions (i.e. your “golden thread”). This means that you need to carefully consider the questions you’ll ask to ensure that they align with and feed into your golden thread. If any question doesn’t align with this, you may want to consider scrapping it.

Another important design choice is whether you’ll use an unstructured, semi-structured or structured interview approach. For semi-structured interviews, you will have a list of questions that you plan to ask and these questions will be open-ended in nature. You’ll also allow the discussion to digress from the core question set if something interesting comes up. This means that the type of information generated might differ a fair amount between interviews.

In contrast to this, a structured approach to interviews is more rigid: a specific set of closed questions is developed and asked of each interviewee in exactly the same order. Closed questions have a limited set of answers, often single-word answers. Therefore, you need to think about what you’re trying to achieve with your research project (i.e. your research aims) and decide which approach would be best suited to your case.

It is also important to plan ahead with regards to who will be interviewed and how. You need to think about how you will approach the possible interviewees to get their cooperation, who will conduct the interviews, when to conduct the interviews and how to record the interviews. For each of these decisions, it’s also essential to make sure that all ethical considerations and data protection laws are taken into account.

Finally, you should think through how you plan to analyze the data (i.e., your qualitative analysis method) generated by the interviews. Different types of analysis rely on different types of data, so you need to ensure you’re asking the right types of questions and correctly guiding your respondents.

Simply put, you need to have a plan of action regarding the specifics of your interview approach before you start collecting data. If not, you’ll end up drifting in your approach from interview to interview, which will result in inconsistent, unusable data.

Your interview questions need to directly link to your research aims, objectives and research questions: your “golden thread”.

2. Not having good interview technique

While you’re generally not expected to become an expert interviewer for a dissertation or thesis, it is important to practice good interview technique and develop basic interviewing skills.

Let’s go through some basics that will help the process along.

Firstly, before the interview, make sure you know your interview questions well and have a clear idea of what you want from the interview. Naturally, the specificity of your questions will depend on whether you’re taking a structured, semi-structured or unstructured approach, but you still need a consistent starting point. Ideally, you should develop an interview guide beforehand (more on this later) that details your core questions and links these to the research aims, objectives and research questions.

Before you undertake any interviews, it’s a good idea to do a few mock interviews with friends or family members. This will help you get comfortable with the interviewer role, prepare for potentially unexpected answers and give you a good idea of how long the interview will take to conduct. In the interviewing process, you’re likely to encounter two kinds of challenging interviewees: the two-word respondent and the respondent who meanders and babbles. Therefore, you should prepare yourself for both and come up with a plan to respond to each in a way that will allow the interview to continue productively.

To begin the formal interview , provide the person you are interviewing with an overview of your research. This will help to calm their nerves (and yours) and contextualize the interaction. Ultimately, you want the interviewee to feel comfortable and be willing to be open and honest with you, so it’s useful to start in a more casual, relaxed fashion and allow them to ask any questions they may have. From there, you can ease them into the rest of the questions.

As the interview progresses , avoid asking leading questions (i.e., questions that assume something about the interviewee or their response). Make sure that you speak clearly and slowly , using plain language and being ready to paraphrase questions if the person you are interviewing misunderstands. Be particularly careful with interviewing English second language speakers to ensure that you’re both on the same page.

Engage with the interviewee by listening to them carefully and acknowledging that you are listening to them by smiling or nodding. Show them that you’re interested in what they’re saying and thank them for their openness as appropriate. This will also encourage your interviewee to respond openly.


3. Not securing a suitable location and quality equipment

Where you conduct your interviews and the equipment you use to record them both play an important role in how the process unfolds. Therefore, you need to think carefully about each of these variables before you start interviewing.

Poor location: A bad location can result in your interviews being compromised, interrupted, or cancelled. If you are conducting physical interviews, you’ll need a location that is quiet, safe, and welcoming. It’s very important that your location of choice is not prone to interruptions (the workplace office is generally problematic, for example) and has suitable facilities (such as water, a bathroom, and snacks).

If you are conducting online interviews, you need to consider a few other factors. Importantly, you need to make sure that both you and your respondent have access to a good, stable internet connection and electricity. Always check ahead of time that both of you know how to use the relevant software and that it’s accessible (sometimes meeting platforms are blocked by workplace policies or firewalls). It’s also good to have alternatives in place (such as WhatsApp, Zoom, or Teams) to cater for these types of issues.

Poor equipment: Using poor-quality recording equipment or using equipment incorrectly means that you will have trouble transcribing, coding, and analyzing your interviews. This can be a major issue, as some of your interview data may go completely to waste if not recorded well. So, make sure that you use good-quality recording equipment and that you know how to use it correctly.

To avoid issues, you should always conduct test recordings before every interview to ensure that you can use the relevant equipment properly. It’s also a good idea to spot check each recording afterwards, just to make sure it was recorded as planned. If your equipment uses batteries, be sure to always carry a spare set.

Where you conduct your interviews and the equipment you use to record them play an important role in how the process unfolds.

4. Not having a basic risk management plan

Many possible issues can arise during the interview process. Not planning for these issues can mean that you are left with compromised data that might not be useful to you. Therefore, it’s important to map out some sort of risk management plan ahead of time, considering the potential risks, how you’ll minimize their probability and how you’ll manage them if they materialize.

Common potential issues related to the actual interview include cancellations (people pulling out), delays (such as getting stuck in traffic), language and accent differences (especially in the case of poor internet connections), issues with internet connections and power supply. Other issues can also occur in the interview itself. For example, the interviewee could drift off-topic, or you might encounter an interviewee who does not say much at all.

You can prepare for these potential issues by considering possible worst-case scenarios and preparing a response for each scenario. For instance, it is important to plan a backup date just in case your interviewee cannot make it to the first meeting you scheduled with them. It’s also a good idea to factor in a 30-minute gap between your interviews for the instances where someone might be late, or an interview runs overtime for other reasons. Make sure that you also plan backup questions that could be used to bring a respondent back on topic if they start rambling, or questions to encourage those who are saying too little.

In general, it’s best practice to plan to conduct more interviews than you think you need (this is called oversampling). Doing so will allow you some room for error if there are interviews that don’t go as planned, or if some interviewees withdraw. If you need 10 interviews, it is a good idea to plan for 15. Likely, a few will cancel, be delayed, or not produce useful data.

You should consider all the potential risks, how you’ll reduce their probability and how you'll respond if they do indeed materialize.

5. Not keeping your golden thread front of mind

We touched on this a little earlier, but it is a key point that should be central to your entire research process. You don’t want to end up with pages and pages of data after conducting your interviews and realize that it is not useful to your research aims. Your research aims, objectives and research questions – i.e., your golden thread – should influence every design decision and should guide the interview process at all times.

A useful way to avoid this mistake is by developing an interview guide before you begin interviewing your respondents. An interview guide is a document that contains all of your questions with notes on how each of the interview questions is linked to the research question(s) of your study. You can also include your research aims and objectives here for a more comprehensive linkage. 

You can easily create an interview guide by drawing up a table with one column containing your core interview questions. Then add another column with your research questions, another with expectations that you may have in light of the relevant literature and another with backup or follow-up questions. As mentioned, you can also bring in your research aims and objectives to help you connect them all together. If you’d like, you can download a copy of our free interview guide here.
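Purely as an illustration of that table layout (Grad Coach’s downloadable template may look different), here is a small Python sketch that writes such a guide to a CSV file; the questions, research-question labels and file name are invented.

```python
# Illustrative interview guide laid out as rows and saved as a CSV file.
# All questions, labels and the file name are invented examples.
import csv

guide = [
    {
        "core question": "Can you describe a typical day in your role?",
        "research question": "RQ1: How do practitioners experience their daily work?",
        "expectation from literature": "Accounts of time pressure and competing demands",
        "backup / follow-up": "Can you give a recent example? How did that feel?",
    },
    {
        "core question": "Who do you turn to when something goes wrong?",
        "research question": "RQ2: What role do colleagues play in coping?",
        "expectation from literature": "Informal peer support as the first resort",
        "backup / follow-up": "What happened the last time you asked for help?",
    },
]

with open("interview_guide.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=guide[0].keys())
    writer.writeheader()
    writer.writerows(guide)
```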

Recap: Qualitative Interview Mistakes

In this post, we’ve discussed 5 common costly mistakes that are easy to make in the process of planning and conducting qualitative interviews.

To recap, these include:

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind

If you have any questions about these interviewing mistakes, drop a comment below. Alternatively, if you’re interested in getting 1-on-1 help with your thesis or dissertation , check out our dissertation coaching service or book a free initial consultation with one of our friendly Grad Coaches.



Chapter 5. Sampling

Introduction

Most Americans will experience unemployment at some point in their lives. Sarah Damaske ( 2021 ) was interested in learning about how men and women experience unemployment differently. To answer this question, she interviewed unemployed people. After conducting a “pilot study” with twenty interviewees, she realized she was also interested in finding out how working-class and middle-class persons experienced unemployment differently. She found one hundred persons through local unemployment offices. She purposefully selected a roughly equal number of men and women and working-class and middle-class persons for the study. This would allow her to make the kinds of comparisons she was interested in. She further refined her selection of persons to interview:

I decided that I needed to be able to focus my attention on gender and class; therefore, I interviewed only people born between 1962 and 1987 (ages 28–52, the prime working and child-rearing years), those who worked full-time before their job loss, those who experienced an involuntary job loss during the past year, and those who did not lose a job for cause (e.g., were not fired because of their behavior at work). ( 244 )

The people she ultimately interviewed compose her sample. They represent (“sample”) the larger population of the involuntarily unemployed. This “theoretically informed stratified sampling design” allowed Damaske “to achieve relatively equal distribution of participation across gender and class,” but it came with some limitations. For one, the unemployment centers were located in primarily White areas of the country, so there were very few persons of color interviewed. Qualitative researchers must make these kinds of decisions all the time—who to include and who not to include. There is never an absolutely correct decision, as the choice is linked to the particular research question posed by the particular researcher, although some sampling choices are more compelling than others. In this case, Damaske made the choice to foreground both gender and class rather than compare all middle-class men and women or women of color from different class positions or just talk to White men. She leaves the door open for other researchers to sample differently. Because science is a collective enterprise, it is most likely someone will be inspired to conduct a similar study as Damaske’s but with an entirely different sample.

This chapter is all about sampling. After you have developed a research question and have a general idea of how you will collect data (observations or interviews), how do you go about actually finding people and sites to study? Although there is no “correct number” of people to interview, the sample should follow the research question and research design. You might remember studying sampling in a quantitative research course. Sampling is important here too, but it works a bit differently. Unlike quantitative research, qualitative research involves nonprobability sampling. This chapter explains why this is so and what qualities instead make a good sample for qualitative research.

Quick Terms Refresher

  • The population is the entire group that you want to draw conclusions about.
  • The sample is the specific group of individuals that you will collect data from.
  • Sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).
  • Sample size is how many individuals (or units) are included in your sample.

The “Who” of Your Research Study

After you have turned your general research interest into an actual research question and identified an approach you want to take to answer that question, you will need to specify the people you will be interviewing or observing. In most qualitative research, the objects of your study will indeed be people. In some cases, however, your objects might be content left by people (e.g., diaries, yearbooks, photographs) or documents (official or unofficial) or even institutions (e.g., schools, medical centers) and locations (e.g., nation-states, cities). Chances are, whatever “people, places, or things” are the objects of your study, you will not really be able to talk to, observe, or follow every single individual/object of the entire population of interest. You will need to create a sample of the population . Sampling in qualitative research has different purposes and goals than sampling in quantitative research. Sampling in both allows you to say something of interest about a population without having to include the entire population in your sample.

We begin this chapter with the case of a population of interest composed of actual people. After we have a better understanding of populations and samples that involve real people, we’ll discuss sampling in other types of qualitative research, such as archival research, content analysis, and case studies. We’ll then move to a larger discussion about the difference between sampling in qualitative research generally versus quantitative research, then we’ll move on to the idea of “theoretical” generalizability, and finally, we’ll conclude with some practical tips on the correct “number” to include in one’s sample.

Sampling People

To help think through samples, let’s imagine we want to know more about “vaccine hesitancy.” We’ve all lived through 2020 and 2021, and we know that a sizable number of people in the United States (and elsewhere) were slow to accept vaccines, even when these were freely available. By some accounts, about one-third of Americans initially refused vaccination. Why is this so? Well, as I write this in the summer of 2021, we know that some people actively refused the vaccination, thinking it was harmful or part of a government plot. Others were simply lazy or dismissed the necessity. And still others were worried about harmful side effects. The general population of interest here (all adult Americans who were not vaccinated by August 2021) may be as many as eighty million people. We clearly cannot talk to all of them. So we will have to narrow the number to something manageable. How can we do this?


First, we have to think about our actual research question and the form of research we are conducting. I am going to begin with a quantitative research question. Quantitative research questions tend to be simpler to visualize, at least when we are first starting out doing social science research. So let us say we want to know what percentage of each kind of resistance is out there and how race or class or gender affects vaccine hesitancy. Again, we don’t have the ability to talk to everyone. But harnessing what we know about normal probability distributions (see quantitative methods for more on this), we can find this out through a sample that represents the general population. We can’t really address these particular questions if we only talk to White women who go to college with us. And if you are really trying to generalize the specific findings of your sample to the larger population, you will have to employ probability sampling , a sampling technique where a researcher sets a selection of a few criteria and chooses members of a population randomly. Why randomly? If truly random, all the members have an equal opportunity to be a part of the sample, and thus we avoid the problem of having only our friends and neighbors (who may be very different from other people in the population) in the study. Mathematically, there is going to be a certain number that will be large enough to allow us to generalize our particular findings from our sample population to the population at large. It might surprise you how small that number can be. Election polls of no more than one thousand people are routinely used to predict actual election outcomes of millions of people. Below that number, however, you will not be able to make generalizations. Talking to five people at random is simply not enough people to predict a presidential election.
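To see why a random sample of around a thousand can stand in for a population of millions, consider a small simulation sketch in Python. The "true" share of 52 percent and the sample sizes are invented for illustration; the point is only that a reasonably large random sample tracks the population value closely, while a handful of respondents does not.

```python
import random

random.seed(42)

# Suppose 52% of a very large population favours candidate A (an invented figure).
true_share = 0.52

def poll(n):
    """Draw n simple random 'respondents' and return the estimated share for A."""
    return sum(random.random() < true_share for _ in range(n)) / n

# A sample of 1,000 usually lands within a few percentage points of the truth;
# a sample of 5 can be wildly off.
print(f"true share:        {true_share:.1%}")
print(f"poll of 1,000:     {poll(1_000):.1%}")
print(f"poll of 5 people:  {poll(5):.1%}")
```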

In order to answer quantitative research questions of causality, one must employ probability sampling. Quantitative researchers try to generalize their findings to a larger population. Samples are designed with that in mind. Qualitative researchers ask very different questions, though. Qualitative research questions are not about “how many” of a certain group do X (in this case, what percentage of the unvaccinated hesitate for concern about safety rather than reject vaccination on political grounds). Qualitative research employs nonprobability sampling . By definition, not everyone has an equal opportunity to be included in the sample. The researcher might select White women they go to college with to provide insight into racial and gender dynamics at play. Whatever is found by doing so will not be generalizable to everyone who has not been vaccinated, or even all White women who have not been vaccinated, or even all White women who have not been vaccinated who are in this particular college. That is not the point of qualitative research at all. This is a really important distinction, so I will repeat in bold: Qualitative researchers are not trying to statistically generalize specific findings to a larger population . They have not failed when their sample cannot be generalized, as that is not the point at all.

In the previous paragraph, I said it would be perfectly acceptable for a qualitative researcher to interview five White women with whom she goes to college about their vaccine hesitancy “to provide insight into racial and gender dynamics at play.” The key word here is “insight.” Rather than use a sample as a stand-in for the general population, as quantitative researchers do, the qualitative researcher uses the sample to gain insight into a process or phenomenon. The qualitative researcher is not going to be content with simply asking each of the women to state her reason for not being vaccinated and then draw conclusions that, because one in five of these women were concerned about their health, one in five of all people were also concerned about their health. That would be, frankly, a very poor study indeed. Rather, the qualitative researcher might sit down with each of the women and conduct a lengthy interview about what the vaccine means to her, why she is hesitant, how she manages her hesitancy (how she explains it to her friends), what she thinks about others who are unvaccinated, what she thinks of those who have been vaccinated, and what she knows or thinks she knows about COVID-19. The researcher might include specific interview questions about the college context, about their status as White women, about the political beliefs they hold about racism in the US, and about how their own political affiliations may or may not provide narrative scripts about “protective whiteness.” There are many interesting things to ask and learn about and many things to discover. Where a quantitative researcher begins with clear parameters to set their population and guide their sample selection process, the qualitative researcher is discovering new parameters, making it impossible to engage in probability sampling.

Looking at it this way, sampling for qualitative researchers needs to be more strategic. More theoretically informed. What persons can be interviewed or observed that would provide maximum insight into what is still unknown? In other words, qualitative researchers think through what cases they could learn the most from, and those are the cases selected to study: “What would be ‘bias’ in statistical sampling, and therefore a weakness, becomes intended focus in qualitative sampling, and therefore a strength. The logic and power of purposeful sampling lie in selecting information-rich cases for study in depth. Information-rich cases are those from which one can learn a great deal about issues of central importance to the purpose of the inquiry, thus the term purposeful sampling” (Patton 2002:230; emphases in the original).

Before selecting your sample, though, it is important to clearly identify the general population of interest. You need to know this before you can determine the sample. In our example case, it is “adult Americans who have not yet been vaccinated.” Depending on the specific qualitative research question, however, it might be “adult Americans who have been vaccinated for political reasons” or even “college students who have not been vaccinated.” What insights are you seeking? Do you want to know how politics is affecting vaccination? Or do you want to understand how people manage being an outlier in a particular setting (unvaccinated where vaccinations are heavily encouraged if not required)? More clearly stated, your population should align with your research question . Think back to the opening story about Damaske’s work studying the unemployed. She drew her sample narrowly to address the particular questions she was interested in pursuing. Knowing your questions or, at a minimum, why you are interested in the topic will allow you to draw the best sample possible to achieve insight.

Once you have your population in mind, how do you go about getting people to agree to be in your sample? In qualitative research, it is permissible to find people by convenience. Just ask for people who fit your sample criteria and see who shows up. Or reach out to friends and colleagues and see if they know anyone that fits. Don’t let the name convenience sampling mislead you; this is not exactly “easy,” and it is certainly a valid form of sampling in qualitative research. The more unknowns you have about what you will find, the more convenience sampling makes sense. If you don’t know how race or class or political affiliation might matter, and your population is unvaccinated college students, you can construct a sample of college students by placing an advertisement in the student paper or posting a flyer on a notice board. Whoever answers is your sample. That is what is meant by a convenience sample. A common variation of convenience sampling is snowball sampling . This is particularly useful if your target population is hard to find. Let’s say you posted a flyer about your study and only two college students responded. You could then ask those two students for referrals. They tell their friends, and those friends tell other friends, and, like a snowball, your sample gets bigger and bigger.
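Snowball recruitment can be pictured as a simple expansion outward from the first volunteers. The sketch below is purely illustrative: the referral network, participant names and target size are invented, and in real recruitment each "referral" is of course a conversation, not a lookup.

```python
from collections import deque

# Hypothetical referral network: who each participant says they could refer.
referrals = {
    "student_01": ["student_03", "student_04"],
    "student_02": ["student_05"],
    "student_03": ["student_06", "student_07"],
    "student_04": [],
    "student_05": ["student_01"],   # referrals can loop back to earlier participants
    "student_06": [],
    "student_07": ["student_08"],
    "student_08": [],
}

def snowball(seeds, target_size):
    """Grow a sample from seed participants by following their referrals."""
    sample, queue = [], deque(seeds)
    seen = set(seeds)
    while queue and len(sample) < target_size:
        person = queue.popleft()
        sample.append(person)
        for referred in referrals.get(person, []):
            if referred not in seen:      # avoid recruiting the same person twice
                seen.add(referred)
                queue.append(referred)
    return sample

# Only two students answered the flyer; referrals grow the sample from there.
print(snowball(["student_01", "student_02"], target_size=6))
```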

Researcher Note

Gaining Access: When Your Friend Is Your Research Subject

My early experience with qualitative research was rather unique. At that time, I needed to do a project that required me to interview first-generation college students, and my friends, with whom I had been sharing a dorm for two years, just perfectly fell into the sample category. Thus, I just asked them and easily “gained my access” to the research subject; I know them, we are friends, and I am part of them. I am an insider. I also thought, “Well, since I am part of the group, I can easily understand their language and norms, I can capture their honesty, read their nonverbal cues well, will get more information, as they will be more opened to me because they trust me.” All in all, easy access with rich information. But, gosh, I did not realize that my status as an insider came with a price! When structuring the interview questions, I began to realize that rather than focusing on the unique experiences of my friends, I mostly based the questions on my own experiences, assuming we have similar if not the same experiences. I began to struggle with my objectivity and even questioned my role; am I doing this as part of the group or as a researcher? I came to know later that my status as an insider or my “positionality” may impact my research. It not only shapes the process of data collection but might heavily influence my interpretation of the data. I came to realize that although my inside status came with a lot of benefits (especially for access), it could also bring some drawbacks.

—Dede Setiono, PhD student focusing on international development and environmental policy, Oregon State University

The more you know about what you might find, the more strategic you can be. If you wanted to compare how politically conservative and politically liberal college students explained their vaccine hesitancy, for example, you might construct a sample purposively, finding an equal number of both types of students so that you can make those comparisons in your analysis. This is what Damaske ( 2021 ) did. You could still use convenience or snowball sampling as a way of recruitment. Post a flyer at the conservative student club and then ask for referrals from the one student that agrees to be interviewed. As with convenience sampling, there are variations of purposive sampling as well as other names used (e.g., judgment, quota, stratified, criterion, theoretical). Try not to get bogged down in the nomenclature; instead, focus on identifying the general population that matches your research question and then using a sampling method that is most likely to provide insight, given the types of questions you have.

There are all kinds of ways of being strategic with sampling in qualitative research. Here are a few of my favorite techniques for maximizing insight:

  • Consider using “extreme” or “deviant” cases. Maybe your college houses a prominent anti-vaxxer who has written about and demonstrated against the college’s policy on vaccines. You could learn a lot from that single case (depending on your research question, of course).
  • Consider “intensity”: people and cases and circumstances where your questions are more likely to feature prominently (but not extremely or deviantly). For example, you could compare those who volunteer at local Republican and Democratic election headquarters during an election season in a study on why party matters. Those who volunteer are more likely to have something to say than those who are more apathetic.
  • Maximize variation, as with the case of “politically liberal” versus “politically conservative,” or include an array of social locations (young vs. old; Northwest vs. Southeast region). This kind of heterogeneity sampling can capture and describe the central themes that cut across the variations: any common patterns that emerge, even in this wildly mismatched sample, are probably important to note!
  • Rather than maximize the variation, you could select a small homogenous sample to describe some particular subgroup in depth. Focus groups are often the best form of data collection for homogeneity sampling.
  • Think about which cases are “critical” or politically important—ones that “if it happens here, it would happen anywhere” or a case that is politically sensitive, as with the single “blue” (Democratic) county in a “red” (Republican) state. In both, you are choosing a site that would yield the most information and have the greatest impact on the development of knowledge.
  • On the other hand, sometimes you want to select the “typical”—the typical college student, for example. You are trying to not generalize from the typical but illustrate aspects that may be typical of this case or group. When selecting for typicality, be clear with yourself about why the typical matches your research questions (and who might be excluded or marginalized in doing so).
  • Finally, it is often a good idea to look for disconfirming cases : if you are at the stage where you have a hypothesis (of sorts), you might select those who do not fit your hypothesis—you will surely learn something important there. They may be “exceptions that prove the rule” or exceptions that force you to alter your findings in order to make sense of these additional cases.

In addition to all these sampling variations, there is the theoretical approach taken by grounded theorists in which the researcher samples comparative people (or events) on the basis of their potential to represent important theoretical constructs. The sample, one can say, is by definition representative of the phenomenon of interest. It accompanies the constant comparative method of analysis. In the words of the founders of Grounded Theory, “Theoretical sampling is sampling on the basis of the emerging concepts, with the aim being to explore the dimensional range or varied conditions along which the properties of the concepts vary” (Strauss and Corbin 1998:73).

When Your Population is Not Composed of People

I think it is easiest for most people to think of populations and samples in terms of people, but sometimes our units of analysis are not actually people. They could be places or institutions. Even so, you might still want to talk to people or observe the actions of people to understand those places or institutions. Or not! In the case of content analyses (see chapter 17), you won’t even have people involved at all but rather documents or films or photographs or news clippings. Everything we have covered about sampling applies to other units of analysis too. Let’s work through some examples.

Case Studies

When constructing a case study, it is helpful to think of your cases as sample populations in the same way that we considered people above. If, for example, you are comparing campus climates for diversity, your overall population may be “four-year college campuses in the US,” and from there you might decide to study three college campuses as your sample. Which three? Will you use purposeful sampling (perhaps [1] selecting three colleges in Oregon that are different sizes or [2] selecting three colleges across the US located in different political cultures or [3] varying the three colleges by racial makeup of the student body)? Or will you select three colleges at random, out of convenience? There are justifiable reasons for all approaches.

As with people, there are different ways of maximizing insight in your sample selection. Think about the following rationales: typical, diverse, extreme, deviant, influential, crucial, or even embodying a particular “pathway” ( Gerring 2008 ). When choosing a case or particular research site, Rubin ( 2021 ) suggests you bear in mind, first, what you are leaving out by selecting this particular case/site; second, what you might be overemphasizing by studying this case/site and not another; and, finally, whether you truly need to worry about either of those things—“that is, what are the sources of bias and how bad are they for what you are trying to do?” ( 89 ).

Once you have selected your cases, you may still want to include interviews with specific people or observations at particular sites within those cases. Then you go through possible sampling approaches all over again to determine which people will be contacted.

Content: Documents, Narrative Accounts, And So On

Although not often discussed as sampling, your selection of documents and other units to use in various content/historical analyses is subject to similar considerations. When you are asking quantitative-type questions (percentages and proportionalities of a general population), you will want to follow probabilistic sampling. For example, I created a random sample of accounts posted on the website studentloanjustice.org to delineate the types of problems people were having with student debt ( Hurst 2007 ). Even though my data was qualitative (narratives of student debt), I was actually asking a quantitative-type research question, so it was important that my sample was representative of the larger population (debtors who posted on the website). On the other hand, when you are asking qualitative-type questions, the selection process should be very different. In that case, use nonprobabilistic techniques, either convenience (where you are really new to this data and do not have the ability to set comparative criteria or even know what a deviant case would be) or some variant of purposive sampling. Let’s say you were interested in the visual representation of women in media published in the 1950s. You could select a national magazine like Time for a “typical” representation (and for its convenience, as all issues are freely available on the web and easy to search). Or you could compare one magazine known for its feminist content versus one antifeminist. The point is, sample selection is important even when you are not interviewing or observing people.

Goals of Qualitative Sampling versus Goals of Quantitative Sampling

We have already discussed some of the differences in the goals of quantitative and qualitative sampling above, but it is worth further discussion. The quantitative researcher seeks a sample that is representative of the population of interest so that they may properly generalize the results (e.g., if 80 percent of first-gen students in the sample were concerned with costs of college, then we can say there is a strong likelihood that 80 percent of first-gen students nationally are concerned with costs of college). The qualitative researcher does not seek to generalize in this way . They may want a representative sample because they are interested in typical responses or behaviors of the population of interest, but they may very well not want a representative sample at all. They might want an “extreme” or deviant case to highlight what could go wrong with a particular situation, or maybe they want to examine just one case as a way of understanding what elements might be of interest in further research. When thinking of your sample, you will have to know why you are selecting the units, and this relates back to your research question or sets of questions. It has nothing to do with having a representative sample to generalize results. You may be tempted—or it may be suggested to you by a quantitatively minded member of your committee—to create as large and representative a sample as you possibly can to earn credibility from quantitative researchers. Ignore this temptation or suggestion. The only thing you should be considering is what sample will best bring insight into the questions guiding your research. This has implications for the number of people (or units) in your study as well, which is the topic of the next section.

What is the Correct “Number” to Sample?

Because we are not trying to create a generalizable representative sample, the guidelines for the “number” of people to interview or news stories to code are also a bit more nebulous. There are some brilliant insightful studies out there with an n of 1 (meaning one person or one account used as the entire set of data). This is particularly so in the case of autoethnography, a variation of ethnographic research that uses the researcher’s own subject position and experiences as the basis of data collection and analysis. But it is true for all forms of qualitative research. There are no hard-and-fast rules here. The number to include is what is relevant and insightful to your particular study.

That said, humans do not thrive well under such ambiguity, and there are a few helpful suggestions that can be made. First, many qualitative researchers talk about “saturation” as the end point for data collection. You stop adding participants when you are no longer getting any new information (or so very little that the cost of adding another interview subject or spending another day in the field exceeds any likely benefits to the research). The term saturation was first used here by Glaser and Strauss ( 1967 ), the founders of Grounded Theory. Here is their explanation: “The criterion for judging when to stop sampling the different groups pertinent to a category is the category’s theoretical saturation . Saturation means that no additional data are being found whereby the sociologist can develop properties of the category. As he [or she] sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated. [They go] out of [their] way to look for groups that stretch diversity of data as far as possible, just to make certain that saturation is based on the widest possible range of data on the category” ( 61 ).

It makes sense that the term was developed by grounded theorists, since this approach is rather more open-ended than other approaches used by qualitative researchers. With so much left open, having a guideline of “stop collecting data when you don’t find anything new” is reasonable. However, saturation can’t help much when first setting out your sample. How do you know how many people to contact to interview? What number will you put down in your institutional review board (IRB) protocol (see chapter 8)? You may guess how many people or units it will take to reach saturation, but there really is no way to know in advance. The best you can do is think about your population and your questions and look at what others have done with similar populations and questions.

Here are some suggestions to use as a starting point: For phenomenological studies, try to interview at least ten people for each major category or group of people. If you are comparing male-identified, female-identified, and gender-neutral college students in a study on gender regimes in social clubs, that means you might want to design a sample of thirty students, ten from each group. This is the minimum suggested number. Damaske’s (2021) sample of one hundred allows room for up to twenty-five participants in each of four “buckets” (e.g., working-class*female, working-class*male, middle-class*female, middle-class*male). If there is more than one comparative group (e.g., you are comparing students attending three different colleges, and you are comparing White and Black students in each), you can sometimes reduce the number for each group in your sample to five, which in this case yields thirty total students (three colleges times two groups times five students). But that is really the bare minimum you will want to go with. A lot of people will not trust you with only “five” cases in a bucket. Lareau (2021:24) advises a minimum of seven or nine for each bucket (or “cell,” in her words). The point is to think about what your analyses might look like and how comfortable you will be with a certain number of persons fitting each category.
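If it helps to see the arithmetic laid out, here is a small sketch that enumerates the comparison "buckets" and multiplies by a per-cell minimum. The dimensions, labels and the minimum of five are examples drawn from the discussion above, not fixed rules.

```python
from itertools import product

# Hypothetical comparison dimensions for a study of gender regimes in social clubs.
dimensions = {
    "gender": ["male-identified", "female-identified", "gender-neutral"],
    "college": ["College A", "College B"],
}
min_per_cell = 5   # a bare minimum; seven to nine per cell is safer (Lareau 2021)

cells = list(product(*dimensions.values()))
total = len(cells) * min_per_cell

for cell in cells:
    print(" x ".join(cell), f"-> at least {min_per_cell} participants")
print(f"{len(cells)} cells x {min_per_cell} per cell = at least {total} interviews")
```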

Because qualitative research takes so much time and effort, it is rare for a beginning researcher to include more than thirty to fifty people or units in the study. You may not be able to conduct all the comparisons you might want simply because you cannot manage a larger sample. In that case, the limits of who you can reach or what you can include may influence you to rethink an original, overcomplicated research design. Rather than include students from every racial group on a campus, for example, you might want to sample strategically, thinking about where the most contrast (and thus the most insight) lies, possibly excluding majority-race (White) students entirely, and simply using previous literature to fill in gaps in our understanding. For example, one of my former students was interested in discovering how race and class worked at a predominantly White institution (PWI). Due to time constraints, she simplified her study from an original sample frame of middle-class and working-class domestic Black and international African students (four buckets) to a sample frame of domestic Black and international African students (two buckets), allowing the complexities of class to come through individual accounts rather than from part of the sample frame. She wisely decided not to include White students in the sample, as her focus was on how minoritized students navigated the PWI. She was able to successfully complete her project and develop insights from the data with fewer than twenty interviewees. [1]

But what if you had unlimited time and resources? Would it always be better to interview more people or include more accounts, documents, and units of analysis? No! Your sample size should reflect your research question and the goals you have set yourself. Larger numbers can sometimes work against your goals. If, for example, you want to help bring out individual stories of success against the odds, adding more people to the analysis can end up drowning out those individual stories. Sometimes, the perfect size really is one (or three, or five). It really depends on what you are trying to discover and achieve in your study. Furthermore, studies of one hundred or more (people, documents, accounts, etc.) can sometimes be mistaken for quantitative research. Inevitably, the large sample size will push the researcher into simplifying the data numerically. And readers will begin to expect generalizability from such a large sample.

To summarize, “There are no rules for sample size in qualitative inquiry. Sample size depends on what you want to know, the purpose of the inquiry, what’s at stake, what will be useful, what will have credibility, and what can be done with available time and resources” ( Patton 2002:244 ).

How did you find/construct a sample?

Since qualitative researchers work with comparatively small sample sizes, getting your sample right is rather important. Yet it is also difficult to accomplish. For instance, a key question you need to ask yourself is whether you want a homogeneous or heterogeneous sample. In other words, do you want to include people in your study who are by and large the same, or do you want to have diversity in your sample?

For many years, I have studied the experiences of students who were the first in their families to attend university. There is a rather large number of sampling decisions I need to consider before starting the study. (1) Should I only talk to first-in-family students, or should I have a comparison group of students who are not first-in-family? (2) Do I need to strive for a gender distribution that matches undergraduate enrollment patterns? (3) Should I include participants that reflect diversity in gender identity and sexuality? (4) How about racial diversity? First-in-family status is strongly related to some ethnic or racial identity. (5) And how about areas of study?

As you can see, if I wanted to accommodate all these differences and get enough study participants in each category, I would quickly end up with a sample size of hundreds, which is not feasible in most qualitative research. In the end, for me, the most important decision was to maximize the voices of first-in-family students, which meant that I only included them in my sample. As for the other categories, I figured it was going to be hard enough to find first-in-family students, so I started recruiting with an open mind and an understanding that I may have to accept a lack of gender, sexuality, or racial diversity and then not be able to say anything about these issues. But I would definitely be able to speak about the experiences of being first-in-family.

—Wolfgang Lehmann, author of “Habitus Transformation and Hidden Injuries”

Examples of “Sample” Sections in Journal Articles

Think about some of the studies you have read in college, especially those with rich stories and accounts about people’s lives. Do you know how the people were selected to be the focus of those stories? If the account was published by an academic press (e.g., University of California Press or Princeton University Press) or in an academic journal, chances are that the author included a description of their sample selection. You can usually find these in a methodological appendix (book) or a section on “research methods” (article).

Here are two examples from recent books and one example from a recent article:

Example 1 . In It’s Not like I’m Poor: How Working Families Make Ends Meet in a Post-welfare World , the research team employed a mixed methods approach to understand how parents use the earned income tax credit, a refundable tax credit designed to provide relief for low- to moderate-income working people ( Halpern-Meekin et al. 2015 ). At the end of their book, their first appendix is “Introduction to Boston and the Research Project.” After describing the context of the study, they include the following description of their sample selection:

In June 2007, we drew 120 names at random from the roughly 332 surveys we gathered between February and April. Within each racial and ethnic group, we aimed for one-third married couples with children and two-thirds unmarried parents. We sent each of these families a letter informing them of the opportunity to participate in the in-depth portion of our study and then began calling the home and cell phone numbers they provided us on the surveys and knocking on the doors of the addresses they provided.…In the end, we interviewed 115 of the 120 families originally selected for the in-depth interview sample (the remaining five families declined to participate). ( 22 )

Was their sample selection based on convenience or purpose? Why do you think it was important for them to tell you that five families declined to be interviewed? There is actually a trick here, as the names were pulled randomly from a survey whose sample design was probabilistic. Why is this important to know? What can we say about the representativeness or the uniqueness of whatever findings are reported here?

Example 2 . In When Diversity Drops , Park ( 2013 ) examines the impact of decreasing campus diversity on the lives of college students. She does this through a case study of one student club, the InterVarsity Christian Fellowship (IVCF), at one university (“California University,” a pseudonym). Here is her description:

I supplemented participant observation with individual in-depth interviews with sixty IVCF associates, including thirty-four current students, eight former and current staff members, eleven alumni, and seven regional or national staff members. The racial/ethnic breakdown was twenty-five Asian Americans (41.6 percent), one Armenian (1.6 percent), twelve people who were black (20.0 percent), eight Latino/as (13.3 percent), three South Asian Americans (5.0 percent), and eleven people who were white (18.3 percent). Twenty-nine were men, and thirty-one were women. Looking back, I note that the higher number of Asian Americans reflected both the group’s racial/ethnic composition and my relative ease about approaching them for interviews. ( 156 )

How can you tell this is a convenience sample? What else do you note about the sample selection from this description?

Example 3. The last example is taken from an article published in the journal Research in Higher Education . Published articles tend to be more formal than books, at least when it comes to the presentation of qualitative research. In this article, Lawson ( 2021 ) is seeking to understand why female-identified college students drop out of majors that are dominated by male-identified students (e.g., engineering, computer science, music theory). Here is the entire relevant section of the article:

Method Participants Data were collected as part of a larger study designed to better understand the daily experiences of women in MDMs [male-dominated majors].…Participants included 120 students from a midsize, Midwestern University. This sample included 40 women and 40 men from MDMs—defined as any major where at least 2/3 of students are men at both the university and nationally—and 40 women from GNMs—defined as any major where 40–60% of students are women at both the university and nationally.… Procedure A multi-faceted approach was used to recruit participants; participants were sent targeted emails (obtained based on participants’ reported gender and major listings), campus-wide emails sent through the University’s Communication Center, flyers, and in-class presentations. Recruitment materials stated that the research focused on the daily experiences of college students, including classroom experiences, stressors, positive experiences, departmental contexts, and career aspirations. Interested participants were directed to email the study coordinator to verify eligibility (at least 18 years old, man/woman in MDM or woman in GNM, access to a smartphone). Sixteen interested individuals were not eligible for the study due to the gender/major combination. (482ff.)

What method of sample selection was used by Lawson? Why is it important to define “MDM” at the outset? How does this definition relate to sampling? Why were interested participants directed to the study coordinator to verify eligibility?

Final Words

I have found that students often find it difficult to be specific enough when defining and choosing their sample. It might help to think about your sample design and sample recruitment like a cookbook. You want all the details there so that someone else can pick up your study and conduct it as you intended. That person could be yourself, but this analogy might work better if you have someone else in mind. When I am writing down recipes, I often think of my sister and try to convey the details she would need to duplicate the dish. We share a grandmother whose recipes are full of handwritten notes in the margins, in spidery ink, that tell us what bowl to use when or where things could go wrong. Describe your sample clearly, convey the steps required accurately, and then add any other details that will help keep you on track and remind you why you have chosen to limit possible interviewees to those of a certain age or class or location. Imagine actually going out and getting your sample (making your dish). Do you have all the necessary details to get started?

Table 5.1. Sampling Type and Strategies

Probabilistic sampling (used primarily in quantitative research):
  • Simple random: each member of the population has an equal chance of being selected.
  • Stratified: the sample is split into strata; members of each stratum are selected in proportion to the population at large.

Non-probabilistic sampling (used primarily in qualitative research):
  • Convenience: simply includes the individuals who happen to be most accessible to the researcher.
  • Snowball: used to recruit participants via other participants; the number of people you have access to “snowballs” as you get in contact with more people.
  • Purposive: involves the researcher using their expertise to select a sample that is most useful to the purposes of the research; an effective purposive sample must have clear criteria and a rationale for inclusion.
  • Quota: set quotas to ensure that the sample you get represents certain characteristics in proportion to their prevalence in the population.
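As a rough companion to Table 5.1, the sketch below shows what simple random, stratified and quota selection can look like in code, using an invented sampling frame. Convenience and purposive selection are deliberately left out, since they rest on the researcher’s judgment and access rather than on an algorithm.

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical sampling frame: (name, class background) pairs.
frame = [(f"person_{i:02d}", random.choice(["working-class", "middle-class"]))
         for i in range(40)]

# Simple random: every member of the frame has an equal chance of selection.
simple_random = random.sample(frame, k=8)

# Stratified: split the frame into strata and sample each stratum
# in proportion to its share of the frame.
strata = defaultdict(list)
for person in frame:
    strata[person[1]].append(person)
stratified = []
for members in strata.values():
    k = max(1, round(8 * len(members) / len(frame)))
    stratified.extend(random.sample(members, k=min(k, len(members))))

# Quota (non-probabilistic): keep whoever turns up until each quota is filled;
# here, four participants from each class background.
quota = {"working-class": 4, "middle-class": 4}
quota_sample = []
for name, background in frame:            # stands in for "whoever shows up first"
    if quota[background] > 0:
        quota_sample.append((name, background))
        quota[background] -= 1

print("simple random:", len(simple_random),
      "| stratified:", len(stratified),
      "| quota:", len(quota_sample))
```

In qualitative work, the mechanics matter far less than the rationale: the code only makes visible how each strategy decides who ends up in the sample.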

Further Readings

Fusch, Patricia I., and Lawrence R. Ness. 2015. “Are We There Yet? Data Saturation in Qualitative Research.” Qualitative Report 20(9):1408–1416.

Saunders, Benjamin, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. “Saturation in Qualitative Research: Exploring Its Conceptualization and Operationalization.” Quality & Quantity 52(4):1893–1907.

  • Rubin (2021) suggests a minimum of twenty interviews (but safer with thirty) for an interview-based study and a minimum of three to six months in the field for ethnographic studies. For a content-based study, she suggests between five hundred and one thousand documents, although some will be “very small” (243–244).

The process of selecting people or other units of analysis to represent a larger population. In quantitative research, this representation is taken quite literally, as statistically representative.  In qualitative research, in contrast, sample selection is often made based on potential to generate insight about a particular topic or phenomenon.

The actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).  Sampling frames can differ from the larger population when specific exclusions are inherent, as in the case of pulling names randomly from voter registration rolls where not everyone is a registered voter.  This difference in frame and population can undercut the generalizability of quantitative results.

The specific group of individuals that you will collect data from.  Contrast population.

The large group of interest to the researcher.  Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken.  For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.”  In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample.  In qualitative research, defining the population is conceptually important for clarity.

A sampling strategy in which the sample is chosen to represent (numerically) the larger population from which it is drawn by random selection.  Each person in the population has an equal chance of making it into the sample.  This is often done through a lottery or other chance mechanisms (e.g., a random selection of every twelfth name on an alphabetical list of voters).  Also known as random sampling .

The selection of research participants or other data sources based on availability or accessibility, in contrast to purposive sampling .

A sample generated non-randomly by asking participants to help recruit more participants, the idea being that a person who fits your sampling criteria probably knows other people with similar criteria.

Broad codes that are assigned to the main issues emerging in the data; identifying themes is often part of initial coding . 

A form of case selection focusing on examples that do not fit the emerging patterns. This allows the researcher to evaluate rival explanations or to define the limitations of their research findings. While disconfirming cases are found (not sought out), researchers should expand their analysis or rethink their theories to include/explain them.

A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction.  This approach was pioneered by the sociologists Glaser and Strauss (1967).  The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences.  Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

The result of probability sampling, in which a sample is chosen to represent (numerically) the larger population from which it is drawn by random selection.  Each person in the population has an equal chance of making it into the random sample.  This is often done through a lottery or other chance mechanisms (e.g., the random selection of every twelfth name on an alphabetical list of voters).  This is typically not required in qualitative research but rather essential for the generalizability of quantitative research.

A form of case selection or purposeful sampling in which cases that are unusual or special in some way are chosen to highlight processes or to illuminate gaps in our knowledge of a phenomenon.   See also extreme case .

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

The accuracy with which results or findings can be transferred to situations or people other than those originally studied.  Qualitative studies generally are unable to use (and are uninterested in) statistical generalizability where the sample population is said to be able to predict or stand in for a larger population of interest.  Instead, qualitative researchers often discuss “theoretical generalizability,” in which the findings of a particular study can shed light on processes and mechanisms that may be at play in other settings.  See also statistical generalization and theoretical generalization .

A term used by IRBs to denote all materials aimed at recruiting participants into a research study (including printed advertisements, scripts, audio or video tapes, or websites).  Copies of this material are required in research protocols submitted to IRB.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

  • Research article
  • Open access
  • Published: 21 November 2018

Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period

  • Konstantina Vasileiou   ORCID: orcid.org/0000-0001-5047-3920 1 ,
  • Julie Barnett 1 ,
  • Susan Thorpe 2 &
  • Terry Young 3  

BMC Medical Research Methodology, volume 18, Article number: 148 (2018)


Background

Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research. Nevertheless, research shows that sample size sufficiency reporting is often poor, if not absent, across a range of disciplinary fields.

Methods

A systematic analysis of single-interview-per-participant designs within three health-related journals from the disciplines of psychology, sociology and medicine, over a 15-year period, was conducted to examine whether and how sample sizes were justified and how sample size was characterised and discussed by authors. Data pertinent to sample size were extracted and analysed using qualitative and quantitative analytic techniques.

Results

Our findings demonstrate that provision of sample size justifications in qualitative health research is limited; is not contingent on the number of interviews; and relates to the journal of publication. Defence of sample size was most frequently supported across all three journals with reference to the principle of saturation and to pragmatic considerations. Qualitative sample sizes were predominantly – and often without justification – characterised as insufficient (i.e., ‘small’) and discussed in the context of study limitations. Sample size insufficiency was seen to threaten the validity and generalizability of studies’ results, with the latter being frequently conceived in nomothetic terms.

Conclusions

We recommend, firstly, that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy . Secondly, we invite researchers critically to consider how saturation parameters found in prior methodological studies and sample size community norms might best inform, and apply to, their own project and encourage that data adequacy is best appraised with reference to features that are intrinsic to the study at hand. Finally, those reviewing papers have a vital role in supporting and encouraging transparent study-specific reporting.


Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample composition and size . It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research [ 1 ] and is implicated – particularly for research that is situated within a post-positivist tradition and retains a degree of commitment to realist ontological premises – in appraisals of validity and generalizability [ 2 , 3 , 4 , 5 ].

Samples in qualitative research tend to be small in order to support the depth of case-oriented analysis that is fundamental to this mode of inquiry [ 5 ]. Additionally, qualitative samples are purposive, that is, selected by virtue of their capacity to provide richly-textured information, relevant to the phenomenon under investigation. As a result, purposive sampling [ 6 , 7 ] – as opposed to probability sampling employed in quantitative research – selects ‘information-rich’ cases [ 8 ]. Indeed, recent research demonstrates the greater efficiency of purposive sampling compared to random sampling in qualitative studies [ 9 ], supporting related assertions long put forward by qualitative methodologists.

Sample size in qualitative research has been the subject of enduring discussions [ 4 , 10 , 11 ]. Whilst the quantitative research community has established relatively straightforward statistics-based rules to set sample sizes precisely, the intricacies of qualitative sample size determination and assessment arise from the methodological, theoretical, epistemological, and ideological pluralism that characterises qualitative inquiry (for a discussion focused on the discipline of psychology see [ 12 ]). This mitigates against clear-cut guidelines, invariably applied. Despite these challenges, various conceptual developments have sought to address this issue, with guidance and principles [ 4 , 10 , 11 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 ], and more recently, an evidence-based approach to sample size determination seeks to ground the discussion empirically [ 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 ].

Focusing on single-interview-per-participant qualitative designs, the present study aims to further contribute to the dialogue of sample size in qualitative research by offering empirical evidence around justification practices associated with sample size. We next review the existing conceptual and empirical literature on sample size determination.

Sample size in qualitative research: Conceptual developments and empirical investigations

Qualitative research experts argue that there is no straightforward answer to the question of ‘how many’ and that sample size is contingent on a number of factors relating to epistemological, methodological and practical issues [ 36 ]. Sandelowski [ 4 ] recommends that qualitative sample sizes are large enough to allow the unfolding of a ‘new and richly textured understanding’ of the phenomenon under study, but small enough so that the ‘deep, case-oriented analysis’ (p. 183) of qualitative data is not precluded. Morse [ 11 ] posits that the more useable data are collected from each person, the fewer participants are needed. She invites researchers to take into account parameters, such as the scope of study, the nature of topic (i.e. complexity, accessibility), the quality of data, and the study design. Indeed, the level of structure of questions in qualitative interviewing has been found to influence the richness of data generated [ 37 ], and so, requires attention; empirical research shows that open questions, which are asked later on in the interview, tend to produce richer data [ 37 ].

Beyond such guidance, specific numerical recommendations have also been proffered, often based on experts’ experience of qualitative research. For example, Green and Thorogood [ 38 ] maintain that the experience of most qualitative researchers conducting an interview-based study with a fairly specific research question is that little new information is generated after interviewing 20 people or so belonging to one analytically relevant participant ‘category’ (pp. 102–104). Ritchie et al. [ 39 ] suggest that studies employing individual interviews conduct no more than 50 interviews so that researchers are able to manage the complexity of the analytic task. Similarly, Britten [ 40 ] notes that large interview studies will often comprise of 50 to 60 people. Experts have also offered numerical guidelines tailored to different theoretical and methodological traditions and specific research approaches, e.g. grounded theory, phenomenology [ 11 , 41 ]. More recently, a quantitative tool was proposed [ 42 ] to support a priori sample size determination based on estimates of the prevalence of themes in the population. Nevertheless, this more formulaic approach raised criticisms relating to assumptions about the conceptual [ 43 ] and ontological status of ‘themes’ [ 44 ] and the linearity ascribed to the processes of sampling, data collection and data analysis [ 45 ].

In terms of principles, Lincoln and Guba [ 17 ] proposed that sample size determination be guided by the criterion of informational redundancy , that is, sampling can be terminated when no new information is elicited by sampling more units. Following the logic of informational comprehensiveness Malterud et al. [ 18 ] introduced the concept of information power as a pragmatic guiding principle, suggesting that the more information power the sample provides, the smaller the sample size needs to be, and vice versa.

Undoubtedly, the most widely used principle for determining sample size and evaluating its sufficiency is that of saturation . The notion of saturation originates in grounded theory [ 15 ] – a qualitative methodological approach explicitly concerned with empirically-derived theory development – and is inextricably linked to theoretical sampling. Theoretical sampling describes an iterative process of data collection, data analysis and theory development whereby data collection is governed by emerging theory rather than predefined characteristics of the population. Grounded theory saturation (often called theoretical saturation) concerns the theoretical categories – as opposed to data – that are being developed and becomes evident when ‘gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories’ [ 46 p. 113]. Saturation in grounded theory, therefore, does not equate to the more common focus on data repetition and moves beyond a singular focus on sample size as the justification of sampling adequacy [ 46 , 47 ]. Sample size in grounded theory cannot be determined a priori as it is contingent on the evolving theoretical categories.

Saturation – often under the terms of ‘data’ or ‘thematic’ saturation – has diffused into several qualitative communities beyond its origins in grounded theory. Alongside the expansion of its meaning, being variously equated with ‘no new data’, ‘no new themes’, and ‘no new codes’, saturation has emerged as the ‘gold standard’ in qualitative inquiry [ 2 , 26 ]. Nevertheless, and as Morse [ 48 ] asserts, whilst saturation is the most frequently invoked ‘guarantee of qualitative rigor’, ‘it is the one we know least about’ (p. 587). Certainly researchers caution that saturation is less applicable to, or appropriate for, particular types of qualitative research (e.g. conversation analysis, [ 49 ]; phenomenological research, [ 50 ]) whilst others reject the concept altogether [ 19 , 51 ].

Methodological studies in this area aim to provide guidance about saturation and to develop a practical application of processes that ‘operationalise’ and evidence saturation. Guest, Bunce, and Johnson [ 26 ] analysed 60 interviews and found that saturation of themes was reached by the twelfth interview. They noted that their sample was relatively homogeneous and their research aims focused, so studies with more heterogeneous samples and a broader scope would be likely to need a larger sample to achieve saturation. Extending the enquiry to multi-site, cross-cultural research, Hagaman and Wutich [ 28 ] showed that sample sizes of 20 to 40 interviews were required to achieve data saturation of meta-themes that cut across research sites. In a theory-driven content analysis, Francis et al. [ 25 ] reached data saturation at the 17th interview for all their pre-determined theoretical constructs. The authors further proposed two main principles upon which specification of saturation should be based: (a) researchers should a priori specify an initial analysis sample (e.g. 10 interviews) to be used for the first round of analysis, and (b) a stopping criterion, that is, a number of further interviews (e.g. 3) whose analysis yields no new themes or ideas. For greater transparency, Francis et al. [ 25 ] recommend that researchers present cumulative frequency graphs supporting their judgment that saturation was achieved. A comparative method for themes saturation (CoMeTS) has also been suggested [ 23 ], whereby the themes of each new interview are compared with those that have already emerged; if the new interview yields no new theme, the ‘saturated terrain’ is assumed to have been established. Because the order in which interviews are analysed can influence saturation thresholds depending on the richness of the data, Constantinou et al. [ 23 ] recommend reordering and re-analysing interviews to confirm saturation. Hennink, Kaiser and Marconi’s [ 29 ] methodological study sheds further light on the problem of specifying and demonstrating saturation. Their analysis of interview data showed that code saturation (i.e. the point at which no additional issues are identified) was achieved at 9 interviews, but meaning saturation (i.e. the point at which no further dimensions, nuances, or insights into issues are identified) required 16–24 interviews. Although breadth can be achieved relatively soon, especially for high-prevalence and concrete codes, depth requires additional data, especially for codes of a more conceptual nature.
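As a purely illustrative sketch of the bookkeeping implied by such a stopping criterion, the snippet below assumes each interview has already been coded into a set of theme labels; the function, its parameters and the idea of checking the criterion programmatically are not drawn from the cited studies, whose procedures rest on analytic judgement rather than automation.

```python
from typing import Iterable, Optional, Set

def saturation_point(theme_sets: Iterable[Set[str]],
                     initial_sample: int = 10,
                     stopping_criterion: int = 3) -> Optional[int]:
    """Return the 1-based index of the interview at which saturation would be declared:
    after the initial analysis sample, `stopping_criterion` consecutive interviews must
    contribute no theme that has not already been coded. Returns None if never reached."""
    seen: Set[str] = set()
    run_without_new = 0
    for i, themes in enumerate(theme_sets, start=1):
        new_themes = themes - seen
        seen |= themes
        if i <= initial_sample:
            continue  # the initial analysis sample only builds the starting set of themes
        run_without_new = 0 if new_themes else run_without_new + 1
        if run_without_new >= stopping_criterion:
            return i
    return None
```

In practice, of course, the judgement of what counts as a ‘new’ theme is itself interpretive, which is one reason Francis et al. [ 25 ] also recommend presenting cumulative frequency graphs to evidence the claim.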

Critiquing the concept of saturation, Nelson [ 19 ] proposes five conceptual depth criteria in grounded theory projects to assess the robustness of the developing theory: theoretical concepts should (a) be supported by a wide range of evidence drawn from the data; (b) be demonstrably part of a network of inter-connected concepts; (c) demonstrate subtlety; (d) resonate with existing literature; and (e) successfully withstand tests of external validity.

Other work has sought to examine practices of sample size reporting and sufficiency assessment across a range of disciplinary fields and research domains, from nutrition [ 34 ] and health education [ 32 ], to education and the health sciences [ 22 , 27 ], information systems [ 30 ], organisation and workplace studies [ 33 ], human computer interaction [ 21 ], and accounting studies [ 24 ]. Others investigated PhD qualitative studies [ 31 ] and grounded theory studies [ 35 ]. Incomplete and imprecise sample size reporting is commonly pinpointed by these investigations, whilst assessments and justifications of sample size sufficiency are reported even more sporadically.

Sobal [ 34 ] examined the sample size of qualitative studies published in the Journal of Nutrition Education over a period of 30 years. Studies that employed individual interviews ( n  = 30) had an average sample size of 45 individuals and none of these explicitly reported whether their sample size sought and/or attained saturation. A minority of articles discussed how sample-related limitations (most often concerning the type of sample rather than its size) limited generalizability. A further systematic analysis [ 32 ] of health education research over 20 years demonstrated that interview-based studies averaged 104 participants (range 2 to 720 interviewees). However, 40% did not report the number of participants. An examination of 83 qualitative interview studies in leading information systems journals [ 30 ] indicated little defence of sample sizes on the basis of recommendations by qualitative methodologists, prior relevant work, or the criterion of saturation. Rather, sample size seemed to correlate with factors such as the journal of publication or the region of study (US vs Europe vs Asia). These results led the authors to call for more rigor in determining and reporting sample size in qualitative information systems research and to recommend optimal sample size ranges for grounded theory (i.e. 20–30 interviews) and single case (i.e. 15–30 interviews) projects.

Similarly, fewer than 10% of articles in organisation and workplace studies provided a sample size justification relating to existing recommendations by methodologists, prior relevant work, or saturation [ 33 ], whilst only 17% of focus group studies in health-related journals provided an explanation of sample size (i.e. number of focus groups), with saturation being the most frequently invoked argument, followed by published sample size recommendations and practical reasons [ 22 ]. The notion of saturation was also invoked by 11 out of the 51 most highly cited studies that Guetterman [ 27 ] reviewed in the fields of education and health sciences, of which six were grounded theory studies, four phenomenological and one a narrative inquiry. Finally, analysing 641 interview-based articles in accounting, Dai et al. [ 24 ] called for more rigor since a significant minority of studies did not report a precise sample size.

Despite increasing attention to rigor in qualitative research (e.g. [ 52 ]) and more extensive methodological and analytical disclosures that seek to validate qualitative work [ 24 ], sample size reporting and sufficiency assessment remain inconsistent and partial, if not absent, across a range of research domains.

Objectives of the present study

The present study sought to enrich existing systematic analyses of the customs and practices of sample size reporting and justification by focusing on qualitative research relating to health. Additionally, this study attempted to expand previous empirical investigations by examining how qualitative sample sizes are characterised and discussed in academic narratives. Qualitative health research is an inter-disciplinary field that, due to its affiliation with the medical sciences, often faces views and positions reflective of a quantitative ethos. Qualitative health research thus constitutes an emblematic case that may help to reveal underlying philosophical and methodological differences across the scientific community that are crystallised in considerations of sample size. The present research, therefore, incorporates a comparative element on the basis of three different disciplines engaging with qualitative health research: medicine, psychology, and sociology. We chose to focus our analysis on single-interview-per-participant designs, as this is not only a popular and widespread methodological choice in qualitative health research, but also the design in which consideration of sample size – defined as the number of interviewees – is particularly salient.

Study design

A structured search for articles reporting cross-sectional, interview-based qualitative studies was carried out and eligible reports were systematically reviewed and analysed employing both quantitative and qualitative analytic techniques.

We selected journals which (a) follow a peer review process, (b) are considered high quality and influential in their field as reflected in journal metrics, and (c) are receptive to, and publish, qualitative research (Additional File 1 presents the journals’ editorial positions in relation to qualitative research and sample considerations where available). Three health-related journals were chosen, each representing a different disciplinary field: the British Medical Journal (BMJ) representing medicine; the British Journal of Health Psychology (BJHP) representing psychology; and the Sociology of Health & Illness (SHI) representing sociology.

Search strategy to identify studies

Employing the search function of each individual journal, we used the terms ‘interview*’ AND ‘qualitative’ and limited the results to articles published between 1 January 2003 and 22 September 2017 (i.e. a 15-year review period).

Eligibility criteria

To be eligible for inclusion in the review, the article had to report a cross-sectional study design. Longitudinal studies were thus excluded whilst studies conducted within a broader research programme (e.g. interview studies nested in a trial, as part of a broader ethnography, as part of a longitudinal research) were included if they reported only single-time qualitative interviews. The method of data collection had to be individual, synchronous qualitative interviews (i.e. group interviews, structured interviews and e-mail interviews over a period of time were excluded), and the data had to be analysed qualitatively (i.e. studies that quantified their qualitative data were excluded). Mixed method studies and articles reporting more than one qualitative method of data collection (e.g. individual interviews and focus groups) were excluded. Figure  1 , a PRISMA flow diagram [ 53 ], shows the number of: articles obtained from the searches and screened; papers assessed for eligibility; and articles included in the review (Additional File  2 provides the full list of articles included in the review and their unique identifying code – e.g. BMJ01, BJHP02, SHI03). One review author (KV) assessed the eligibility of all papers identified from the searches. When in doubt, discussions about retaining or excluding articles were held between KV and JB in regular meetings, and decisions were jointly made.

Figure 1. PRISMA flow diagram.

Data extraction and analysis

A data extraction form was developed (see Additional File  3 ) recording three areas of information: (a) information about the article (e.g. authors, title, journal, year of publication etc.); (b) information about the aims of the study, the sample size and any justification for this, the participant characteristics, the sampling technique and any sample-related observations or comments made by the authors; and (c) information about the method or technique(s) of data analysis, the number of researchers involved in the analysis, the potential use of software, and any discussion around epistemological considerations. The Abstract, Methods and Discussion (and/or Conclusion) sections of each article were examined by one author (KV) who extracted all the relevant information. This was directly copied from the articles and, when appropriate, comments, notes and initial thoughts were written down.

To examine the kinds of sample size justifications provided by articles, an inductive content analysis [ 54 ] was initially conducted. On the basis of this analysis, the categories that expressed qualitatively different sample size justifications were developed.

We also extracted or coded quantitative data regarding the following aspects:

• Journal and year of publication
• Number of interviews
• Number of participants
• Presence of sample size justification(s) (Yes/No)
• Presence of a particular sample size justification category (Yes/No)
• Number of sample size justifications provided

Descriptive and inferential statistical analyses were used to explore these data.
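A minimal sketch of how one such extraction record might be represented for these quantitative analyses is shown below; the field names and types are illustrative and do not reproduce the authors’ actual extraction form (Additional File 3).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    """One record per reviewed article (illustrative structure only)."""
    article_id: str                 # unique identifying code, e.g. 'BMJ01', 'BJHP02', 'SHI03'
    journal: str                    # 'BMJ', 'BJHP' or 'SHI'
    year: int                       # year of publication
    n_interviews: int               # number of interviews
    n_participants: int             # number of participants
    justified: bool                 # any sample size justification present (Yes/No)
    justifications: List[str] = field(default_factory=list)  # coded justification categories

    @property
    def n_justifications(self) -> int:
        # number of distinct sample size justifications provided
        return len(self.justifications)
```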

A thematic analysis [ 55 ] was then performed on all scientific narratives that discussed or commented on the sample size of the study. These narratives were evident both in papers that justified their sample size and those that did not. To identify these narratives, in addition to the methods sections, the discussion sections of the reviewed articles were also examined and relevant data were extracted and analysed.

In total, 214 articles – 21 in the BMJ, 53 in the BJHP and 140 in the SHI – were eligible for inclusion in the review. Table  1 provides basic information about the sample sizes – measured in number of interviews – of the studies reviewed across the three journals. Figure  2 depicts the number of eligible articles published each year per journal.

Figure 2. Number of eligible articles published each year per journal.

The publication of qualitative studies in the BMJ dropped markedly from 2012 onwards; this appears to coincide with the launch of BMJ Open, to which qualitative studies were possibly directed.

Pairwise comparisons following a significant Kruskal-Wallis test (see the footnote below on the choice of a non-parametric test) indicated that the studies published in the BJHP had significantly ( p  < .001) smaller sample sizes than those published either in the BMJ or the SHI. Sample sizes of BMJ and SHI articles did not differ significantly from each other.
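The article does not state which software or pairwise procedure was used; the sketch below shows one conventional way of following a significant omnibus Kruskal-Wallis test with Bonferroni-corrected pairwise Mann-Whitney comparisons, using made-up per-journal lists of interview counts.

```python
from itertools import combinations
from scipy import stats

# Hypothetical numbers of interviews per article, grouped by journal
sizes = {"BMJ": [21, 35, 18, 30, 27], "BJHP": [10, 13, 8, 12, 15], "SHI": [30, 67, 25, 45, 50]}

h, p = stats.kruskal(*sizes.values())          # omnibus test across the three journals
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")

# Pairwise Mann-Whitney follow-ups with a Bonferroni-adjusted alpha
alpha = 0.05 / 3
for a, b in combinations(sizes, 2):
    u, p_pair = stats.mannwhitneyu(sizes[a], sizes[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.1f}, p = {p_pair:.3f}, significant = {p_pair < alpha}")
```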

Sample size justifications: Results from the quantitative and qualitative content analysis

Ten (47.6%) of the 21 BMJ studies, 26 (49.1%) of the 53 BJHP papers and 24 (17.1%) of the 140 SHI articles provided some sort of sample size justification. As shown in Table  2 , the majority of articles which justified their sample size provided one justification (70% of articles); fourteen studies (25%) provided two distinct justifications; one study (1.7%) gave three justifications and two studies (3.3%) expressed four distinct justifications.

There was no association between the number of interviews (i.e. sample size) conducted and the provision of a justification (rpb = .054, p  = .433). Within journals, Mann-Whitney tests indicated that sample sizes of ‘justifying’ and ‘non-justifying’ articles in the BMJ and SHI did not differ significantly from each other. In the BJHP, ‘justifying’ articles ( Mean rank  = 31.3) had significantly larger sample sizes than ‘non-justifying’ studies ( Mean rank  = 22.7; U = 237.000, p  < .05).

There was a significant association between the journal a paper was published in and the provision of a justification (χ²(2) = 23.83, p < .001). BJHP studies provided a sample size justification significantly more often than would be expected ( z  = 2.9); SHI studies significantly less often ( z  = − 2.4). If an article was published in the BJHP, the odds of providing a justification were 4.8 times higher than if published in the SHI. Similarly if published in the BMJ, the odds of a study justifying its sample size were 4.5 times higher than in the SHI.
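Using the counts reported above (10 of 21 BMJ, 26 of 53 BJHP and 24 of 140 SHI articles providing a justification), the association test and the odds comparisons can be approximated as follows. This is a hedged sketch: the authors do not state their software, residual formula or odds-ratio estimation method, so small discrepancies from the reported odds ratios (4.8 and 4.5) are to be expected.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                 justified  not justified
table = np.array([[10, 11],     # BMJ  (10 of 21)
                  [26, 27],     # BJHP (26 of 53)
                  [24, 116]])   # SHI  (24 of 140)

chi2, p, dof, expected = chi2_contingency(table)
residuals = (table - expected) / np.sqrt(expected)   # standardized residuals per cell

def odds(row):
    # odds of providing a justification within one journal
    return row[0] / row[1]

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.5f}")
print("standardized residuals:\n", residuals.round(1))
print("odds ratio BJHP vs SHI:", round(odds(table[1]) / odds(table[2]), 2))
print("odds ratio BMJ vs SHI:", round(odds(table[0]) / odds(table[2]), 2))
```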

The qualitative content analysis of the scientific narratives identified eleven different sample size justifications. These are described below and illustrated with excerpts from relevant articles. By way of a summary, the frequency with which these were deployed across the three journals is indicated in Table  3 .

Saturation was the most commonly invoked principle (55.4% of all justifications) deployed by studies across all three journals to justify the sufficiency of their sample size. In the BMJ, two studies claimed that they achieved data saturation (BMJ17; BMJ18) and one article referred descriptively to achieving saturation without explicitly using the term (BMJ13). Interestingly, BMJ13 included data in the analysis beyond the point of saturation in search of ‘unusual/deviant observations’ and with a view to establishing findings consistency.

Thirty three women were approached to take part in the interview study. Twenty seven agreed and 21 (aged 21–64, median 40) were interviewed before data saturation was reached (one tape failure meant that 20 interviews were available for analysis). (BMJ17). No new topics were identified following analysis of approximately two thirds of the interviews; however, all interviews were coded in order to develop a better understanding of how characteristic the views and reported behaviours were, and also to collect further examples of unusual/deviant observations. (BMJ13).

Two articles reported pre-determining their sample size with a view to achieving data saturation (BMJ08 – see extract in section In line with existing research ; BMJ15 – see extract in section Pragmatic considerations ) without further specifying if this was achieved. One paper claimed theoretical saturation (BMJ06), conceived as being reached when there were “no further recurring themes emerging from the analysis”, whilst another study argued that although the analytic categories were highly saturated, it was not possible to determine whether theoretical saturation had been achieved (BMJ04). One article (BMJ18) cited a reference to support its position on saturation.

In the BJHP, six articles claimed that they achieved data saturation (BJHP21; BJHP32; BJHP39; BJHP48; BJHP49; BJHP52) and one article stated that, given its sample size and the guidelines for achieving data saturation, it anticipated that saturation would be attained (BJHP50).

Recruitment continued until data saturation was reached, defined as the point at which no new themes emerged. (BJHP48). It has previously been recommended that qualitative studies require a minimum sample size of at least 12 to reach data saturation (Clarke & Braun, 2013; Fugard & Potts, 2014; Guest, Bunce, & Johnson, 2006) Therefore, a sample of 13 was deemed sufficient for the qualitative analysis and scale of this study. (BJHP50).

Two studies argued that they achieved thematic saturation (BJHP28 – see extract in section Sample size guidelines ; BJHP31) and one article (BJHP30), explicitly concerned with theory development and deploying theoretical sampling, claimed both theoretical and data saturation.

The final sample size was determined by thematic saturation, the point at which new data appears to no longer contribute to the findings due to repetition of themes and comments by participants (Morse, 1995). At this point, data generation was terminated. (BJHP31).

Five studies argued that they achieved (BJHP05; BJHP33; BJHP40; BJHP13 – see extract in section Pragmatic considerations ) or anticipated (BJHP46) saturation without any further specification of the term. BJHP17 referred descriptively to a state of achieved saturation without specifically using the term. Saturation of coding, but not saturation of themes, was claimed to have been reached by one article (BJHP18). Two articles explicitly stated that they did not achieve saturation, instead presenting a level of theme completeness (BJHP27) or the replication of themes (BJHP53) as arguments for the sufficiency of their sample size.

Furthermore, data collection ceased on pragmatic grounds rather than at the point when saturation point was reached. Despite this, although nuances within sub-themes were still emerging towards the end of data analysis, the themes themselves were being replicated indicating a level of completeness. (BJHP27).

Finally, one article criticised and explicitly renounced the notion of data saturation claiming that, on the contrary, the criterion of theoretical sufficiency determined its sample size (BJHP16).

According to the original Grounded Theory texts, data collection should continue until there are no new discoveries ( i.e. , ‘data saturation’; Glaser & Strauss, 1967). However, recent revisions of this process have discussed how it is rare that data collection is an exhaustive process and researchers should rely on how well their data are able to create a sufficient theoretical account or ‘theoretical sufficiency’ (Dey, 1999). For this study, it was decided that theoretical sufficiency would guide recruitment, rather than looking for data saturation. (BJHP16).

Ten out of the 20 BJHP articles that employed the argument of saturation used one or more citations relating to this principle.

In the SHI, one article (SHI01) claimed that it achieved category saturation based on authors’ judgment.

This number was not fixed in advance, but was guided by the sampling strategy and the judgement, based on the analysis of the data, of the point at which ‘category saturation’ was achieved. (SHI01).

Three articles described a state of achieved saturation without using the term or specifying what sort of saturation they had achieved (i.e. data, theoretical, thematic saturation) (SHI04; SHI13; SHI30) whilst another four articles explicitly stated that they achieved saturation (SHI100; SHI125; SHI136; SHI137). Two papers stated that they achieved data saturation (SHI73 – see extract in section Sample size guidelines ; SHI113), two claimed theoretical saturation (SHI78; SHI115) and two referred to achieving thematic saturation (SHI87; SHI139) or to saturated themes (SHI29; SHI50).

Recruitment and analysis ceased once theoretical saturation was reached in the categories described below (Lincoln and Guba 1985). (SHI115). The respondents’ quotes drawn on below were chosen as representative, and illustrate saturated themes. (SHI50).

One article stated that thematic saturation was anticipated with its sample size (SHI94). Briefly referring to the difficulty in pinpointing achievement of theoretical saturation, SHI32 (see extract in section Richness and volume of data ) defended the sufficiency of its sample size on the basis of “the high degree of consensus [that] had begun to emerge among those interviewed”, suggesting that information from interviews was being replicated. Finally, SHI112 (see extract in section Further sampling to check findings consistency ) argued that it achieved saturation of discursive patterns . Seven of the 19 SHI articles cited references to support their position on saturation (see Additional File  4 for the full list of citations used by articles to support their position on saturation across the three journals).

Overall, it is clear that the concept of saturation encompassed a wide range of variants expressed in terms such as saturation, data saturation, thematic saturation, theoretical saturation, category saturation, saturation of coding, saturation of discursive themes, theme completeness. It is noteworthy, however, that although these various claims were sometimes supported with reference to the literature, they were not evidenced in relation to the study at hand.

Pragmatic considerations

The determination of sample size on the basis of pragmatic considerations was the second most frequently invoked argument (9.6% of all justifications), appearing in all three journals. In the BMJ, one article (BMJ15) appealed to pragmatic reasons, relating to time constraints and the difficulty of accessing certain study populations, to justify the determination of its sample size.

On the basis of the researchers’ previous experience and the literature, [30, 31] we estimated that recruitment of 15–20 patients at each site would achieve data saturation when data from each site were analysed separately. We set a target of seven to 10 caregivers per site because of time constraints and the anticipated difficulty of accessing caregivers at some home based care services. This gave a target sample of 75–100 patients and 35–50 caregivers overall. (BMJ15).

In the BJHP, four articles mentioned pragmatic considerations relating to time or financial constraints (BJHP27 – see extract in section Saturation ; BJHP53), the participant response rate (BJHP13), and the fixed (and thus limited) size of the participant pool from which interviewees were sampled (BJHP18).

We had aimed to continue interviewing until we had reached saturation, a point whereby further data collection would yield no further themes. In practice, the number of individuals volunteering to participate dictated when recruitment into the study ceased (15 young people, 15 parents). Nonetheless, by the last few interviews, significant repetition of concepts was occurring, suggesting ample sampling. (BJHP13).

Finally, three SHI articles explained their sample size with reference to practical aspects: time constraints and project manageability (SHI56), limited availability of respondents and project resources (SHI131), and time constraints (SHI113).

The size of the sample was largely determined by the availability of respondents and resources to complete the study. Its composition reflected, as far as practicable, our interest in how contextual factors (for example, gender relations and ethnicity) mediated the illness experience. (SHI131).

Qualities of the analysis

This sample size justification (8.4% of all justifications) was mainly employed by BJHP articles and referred to an intensive, idiographic and/or latently focused analysis, i.e. an analysis that moved beyond description. More specifically, six articles defended their sample size on the basis of an intensive analysis of transcripts and/or the idiographic focus of the study/analysis. Four of these papers (BJHP02; BJHP19; BJHP24; BJHP47) adopted an Interpretative Phenomenological Analysis (IPA) approach.

The current study employed a sample of 10 in keeping with the aim of exploring each participant’s account (Smith et al. , 1999). (BJHP19).

BJHP47 explicitly renounced the notion of saturation within an IPA approach. The other two BJHP articles conducted thematic analysis (BJHP34; BJHP38). The level of analysis – i.e. latent as opposed to a more superficial descriptive analysis – was also invoked as a justification by BJHP38, alongside the argument of an intensive analysis of individual transcripts.

The resulting sample size was at the lower end of the range of sample sizes employed in thematic analysis (Braun & Clarke, 2013). This was in order to enable significant reflection, dialogue, and time on each transcript and was in line with the more latent level of analysis employed, to identify underlying ideas, rather than a more superficial descriptive analysis (Braun & Clarke, 2006). (BJHP38).

Finally, one BMJ paper (BMJ21) defended its sample size with reference to the complexity of the analytic task.

We stopped recruitment when we reached 30–35 interviews, owing to the depth and duration of interviews, richness of data, and complexity of the analytical task. (BMJ21).

Meet sampling requirements

Meeting sampling requirements (7.2% of all justifications) was another argument employed by two BMJ and four SHI articles to explain their sample size. Achieving maximum variation sampling in terms of specific interviewee characteristics determined and explained the sample size of two BMJ studies (BMJ02; BMJ16 – see extract in section Meet research design requirements ).

Recruitment continued until sampling frame requirements were met for diversity in age, sex, ethnicity, frequency of attendance, and health status. (BMJ02).

Regarding the SHI articles, two papers explained their numbers on the basis of their sampling strategy (SHI01 – see extract in section Saturation ; SHI23), whilst sampling requirements that would help attain sample heterogeneity in terms of a particular characteristic of interest were cited by one paper (SHI127).

The combination of matching the recruitment sites for the quantitative research and the additional purposive criteria led to 104 phase 2 interviews (Internet (OLC): 21; Internet (FTF): 20); Gyms (FTF): 23; HIV testing (FTF): 20; HIV treatment (FTF): 20.) (SHI23). Of the fifty interviews conducted, thirty were translated from Spanish into English. These thirty, from which we draw our findings, were chosen for translation based on heterogeneity in depressive symptomology and educational attainment. (SHI127).

Finally, the pre-determination of sample size on the basis of sampling requirements was stated by one article though this was not used to justify the number of interviews (SHI10).

Sample size guidelines

Five BJHP articles (BJHP28; BJHP38 – see extract in section Qualities of the analysis ; BJHP46; BJHP47; BJHP50 – see extract in section Saturation ) and one SHI paper (SHI73) relied on citing existing sample size guidelines or norms within research traditions to determine and subsequently defend their sample size (7.2% of all justifications).

Sample size guidelines suggested a range between 20 and 30 interviews to be adequate (Creswell, 1998). Interviewer and note taker agreed that thematic saturation, the point at which no new concepts emerge from subsequent interviews (Patton, 2002), was achieved following completion of 20 interviews. (BJHP28). Interviewing continued until we deemed data saturation to have been reached (the point at which no new themes were emerging). Researchers have proposed 30 as an approximate or working number of interviews at which one could expect to be reaching theoretical saturation when using a semi-structured interview approach (Morse 2000), although this can vary depending on the heterogeneity of respondents interviewed and complexity of the issues explored. (SHI73).

In line with existing research

Sample sizes of published literature in the area of the subject matter under investigation (3.5% of all justifications) were used by 2 BMJ articles as guidance and a precedent for determining and defending their own sample size (BMJ08; BMJ15 – see extract in section Pragmatic considerations ).

We drew participants from a list of prisoners who were scheduled for release each week, sampling them until we reached the target of 35 cases, with a view to achieving data saturation within the scope of the study and sufficient follow-up interviews and in line with recent studies [8–10]. (BMJ08).

Similarly, BJHP38 (see extract in section Qualities of the analysis ) claimed that its sample size was within the range of sample sizes of published studies that use its analytic approach.

Richness and volume of data

BMJ21 (see extract in section Qualities of the analysis ) and SHI32 referred to the richness, detailed nature, and volume of data collected (2.3% of all justifications) to justify the sufficiency of their sample size.

Although there were more potential interviewees from those contacted by postcode selection, it was decided to stop recruitment after the 10th interview and focus on analysis of this sample. The material collected was considerable and, given the focused nature of the study, extremely detailed. Moreover, a high degree of consensus had begun to emerge among those interviewed, and while it is always difficult to judge at what point ‘theoretical saturation’ has been reached, or how many interviews would be required to uncover exception(s), it was felt the number was sufficient to satisfy the aims of this small in-depth investigation (Strauss and Corbin 1990). (SHI32).

Meet research design requirements

Determining the sample size so that it was in line with, and served the requirements of, the research design adopted by the study (2.3% of all justifications) was another justification used by two BMJ papers (BMJ16; BMJ08 – see extract in section In line with existing research ).

We aimed for diverse, maximum variation samples [20] totalling 80 respondents from different social backgrounds and ethnic groups and those bereaved due to different types of suicide and traumatic death. We could have interviewed a smaller sample at different points in time (a qualitative longitudinal study) but chose instead to seek a broad range of experiences by interviewing those bereaved many years ago and others bereaved more recently; those bereaved in different circumstances and with different relations to the deceased; and people who lived in different parts of the UK; with different support systems and coroners’ procedures (see Tables 1 and 2 for more details). (BMJ16).

Researchers’ previous experience

The researchers’ previous experience (possibly referring to experience with qualitative research) was invoked by BMJ15 (see extract in section Pragmatic considerations ) as a justification for the determination of sample size.

Nature of study

One BJHP paper argued that the sample size was appropriate for the exploratory nature of the study (BJHP38).

A sample of eight participants was deemed appropriate because of the exploratory nature of this research and the focus on identifying underlying ideas about the topic. (BJHP38).

Further sampling to check findings consistency

Finally, SHI112 argued that once it had achieved saturation of discursive patterns, further sampling was decided and conducted to check for consistency of the findings.

Within each of the age-stratified groups, interviews were randomly sampled until saturation of discursive patterns was achieved. This resulted in a sample of 67 interviews. Once this sample had been analysed, one further interview from each age-stratified group was randomly chosen to check for consistency of the findings. Using this approach it was possible to more carefully explore children’s discourse about the ‘I’, agency, relationality and power in the thematic areas, revealing the subtle discursive variations described in this article. (SHI112).

Thematic analysis of passages discussing sample size

This analysis resulted in two overarching thematic areas; the first concerned the variation in the characterisation of sample size sufficiency, and the second related to the perceived threats deriving from sample size insufficiency.

Characterisations of sample size sufficiency

The analysis showed that there were three main characterisations of the sample size in the articles that provided relevant comments and discussion: (a) the vast majority of these qualitative studies ( n  = 42) considered their sample size as ‘small’ and this was seen and discussed as a limitation; only two articles viewed their small sample size as desirable and appropriate; (b) a minority of articles ( n  = 4) proclaimed that their achieved sample size was ‘sufficient’; and (c) finally, a small group of studies ( n  = 5) characterised their sample size as ‘large’. Whilst achieving a ‘large’ sample size was sometimes viewed positively because it led to richer results, there were also occasions when a large sample size was problematic rather than desirable.

‘Small’ but why and for whom?

A number of articles which characterised their sample size as ‘small’ did so against an implicit or explicit quantitative frame of reference. Interestingly, three studies that claimed to have achieved data saturation or ‘theoretical sufficiency’ with their sample size nonetheless noted their ‘small’ sample size as a limitation in their discussion, raising the question of why, or for whom, the sample size was considered small given that the qualitative criterion of saturation had been satisfied.

The current study has a number of limitations. The sample size was small (n = 11) and, however, large enough for no new themes to emerge. (BJHP39). The study has two principal limitations. The first of these relates to the small number of respondents who took part in the study. (SHI73).

Other articles appeared to accept and acknowledge that their sample was flawed because of its small size (as well as other compositional ‘deficits’ e.g. non-representativeness, biases, self-selection) or anticipated that they might be criticized for their small sample size. It seemed that the imagined audience – perhaps reviewer or reader – was one inclined to hold the tenets of quantitative research, and certainly one to whom it was important to indicate the recognition that small samples were likely to be problematic. That one’s sample might be thought small was often construed as a limitation couched in a discourse of regret or apology.

Very occasionally, the articulation of the small size as a limitation was explicitly aligned against an espoused positivist framework and quantitative research.

This study has some limitations. Firstly, the 100 incidents sample represents a small number of the total number of serious incidents that occurs every year. 26 We sent out a nationwide invitation and do not know why more people did not volunteer for the study. Our lack of epidemiological knowledge about healthcare incidents, however, means that determining an appropriate sample size continues to be difficult. (BMJ20).

Indicative of an apparent oscillation of qualitative researchers between the different requirements and protocols demarcating the quantitative and qualitative worlds, there were a few instances of articles which briefly recognised their ‘small’ sample size as a limitation, but then defended their study on more qualitative grounds, such as their ability and success at capturing the complexity of experience and delving into the idiographic, and at generating particularly rich data.

This research, while limited in size, has sought to capture some of the complexity attached to men’s attitudes and experiences concerning incomes and material circumstances. (SHI35). Our numbers are small because negotiating access to social networks was slow and labour intensive, but our methods generated exceptionally rich data. (BMJ21). This study could be criticised for using a small and unrepresentative sample. Given that older adults have been ignored in the research concerning suntanning, fair-skinned older adults are the most likely to experience skin cancer, and women privilege appearance over health when it comes to sunbathing practices, our study offers depth and richness of data in a demographic group much in need of research attention. (SHI57).

‘Good enough’ sample sizes

Only four articles expressed some degree of confidence that their achieved sample size was sufficient. For example, SHI139, in line with the justification of thematic saturation that it offered, expressed trust in its sample size sufficiency despite the poor response rate. Similarly, BJHP04, which did not provide a sample size justification, argued that it targeted a larger sample size in order to eventually recruit a sufficient number of interviewees, due to anticipated low response rate.

Twenty-three people with type I diabetes from the target population of 133 ( i.e. 17.3%) consented to participate but four did not then respond to further contacts (total N = 19). The relatively low response rate was anticipated, due to the busy life-styles of young people in the age range, the geographical constraints, and the time required to participate in a semi-structured interview, so a larger target sample allowed a sufficient number of participants to be recruited. (BJHP04).

Two other articles (BJHP35; SHI32) linked the claimed sufficiency to the scope (i.e. ‘small, in-depth investigation’), aims and nature (i.e. ‘exploratory’) of their studies, thus anchoring their numbers to the particular context of their research. Nevertheless, claims of sample size sufficiency were sometimes undermined when they were juxtaposed with an acknowledgement that a larger sample size would be more scientifically productive.

Although our sample size was sufficient for this exploratory study, a more diverse sample including participants with lower socioeconomic status and more ethnic variation would be informative. A larger sample could also ensure inclusion of a more representative range of apps operating on a wider range of platforms. (BJHP35).

‘Large’ sample sizes - Promise or peril?

Three articles (BMJ13; BJHP05; BJHP48) which all provided the justification of saturation, characterised their sample size as ‘large’ and narrated this oversufficiency in positive terms as it allowed richer data and findings and enhanced the potential for generalisation. The type of generalisation aspired to (BJHP48) was not further specified however.

This study used rich data provided by a relatively large sample of expert informants on an important but under-researched topic. (BMJ13). Qualitative research provides a unique opportunity to understand a clinical problem from the patient’s perspective. This study had a large diverse sample, recruited through a range of locations and used in-depth interviews which enhance the richness and generalizability of the results. (BJHP48).

And whilst a ‘large’ sample size was endorsed and valued by some qualitative researchers, within the psychological tradition of IPA, a ‘large’ sample size was counter-normative and therefore needed to be justified. Four BJHP studies, all adopting IPA, expressed the appropriateness or desirability of ‘small’ sample sizes (BJHP41; BJHP45) or hastened to explain why they included a larger than typical sample size (BJHP32; BJHP47). For example, BJHP32 below provides a rationale for how an IPA study can accommodate a large sample size and how this was indeed suitable for the purposes of the particular research. To strengthen the explanation for choosing a non-normative sample size, previous IPA research citing a similar sample size approach is used as a precedent.

Small scale IPA studies allow in-depth analysis which would not be possible with larger samples (Smith et al. , 2009). (BJHP41). Although IPA generally involves intense scrutiny of a small number of transcripts, it was decided to recruit a larger diverse sample as this is the first qualitative study of this population in the United Kingdom (as far as we know) and we wanted to gain an overview. Indeed, Smith, Flowers, and Larkin (2009) agree that IPA is suitable for larger groups. However, the emphasis changes from an in-depth individualistic analysis to one in which common themes from shared experiences of a group of people can be elicited and used to understand the network of relationships between themes that emerge from the interviews. This large-scale format of IPA has been used by other researchers in the field of false-positive research. Baillie, Smith, Hewison, and Mason (2000) conducted an IPA study, with 24 participants, of ultrasound screening for chromosomal abnormality; they found that this larger number of participants enabled them to produce a more refined and cohesive account. (BJHP32).

The IPA articles found in the BJHP were the only instances where a ‘small’ sample size was advocated and a ‘large’ sample size problematized and defended. These IPA studies illustrate that the characterisation of sample size sufficiency can be a function of researchers’ theoretical and epistemological commitments rather than the result of an ‘objective’ sample size assessment.

Threats from sample size insufficiency

As shown above, the majority of articles that commented on their sample size simultaneously characterised it as small and problematic. On those occasions when authors did not simply cite their ‘small’ sample size as a study limitation but went on to provide an account of how and why a small sample size was problematic, two important scientific qualities of the research seemed to be threatened: the generalizability and the validity of the results.

Generalizability

Those who characterised their sample as ‘small’ connected this to the limited potential for generalisation of the results. Other features related to the sample – often some kind of compositional particularity – were also linked to limited potential for generalisation. Though the articles did not always explicitly articulate what form of generalisation they referred to (see BJHP09), generalisation was mostly conceived in nomothetic terms, that is, as concerning the potential to draw inferences from the sample to the broader study population (‘representational generalisation’ – see BJHP31) and, less often, to other populations or cultures.

It must be noted that samples are small and whilst in both groups the majority of those women eligible participated, generalizability cannot be assumed. (BJHP09). The study’s limitations should be acknowledged: Data are presented from interviews with a relatively small group of participants, and thus, the views are not necessarily generalizable to all patients and clinicians. In particular, patients were only recruited from secondary care services where COFP diagnoses are typically confirmed. The sample therefore is unlikely to represent the full spectrum of patients, particularly those who are not referred to, or who have been discharged from dental services. (BJHP31).

Without explicitly using the term generalisation, two SHI articles noted how their ‘small’ sample size imposed limits on ‘the extent that we can extrapolate from these participants’ accounts’ (SHI114) or to the possibility ‘to draw far-reaching conclusions from the results’ (SHI124).

Interestingly, only a minority of articles alluded to, or invoked, a type of generalisation that is aligned with qualitative research, that is, idiographic generalisation (i.e. generalisation that can be made from and about cases [ 5 ]). These articles, all published in the discipline of sociology, defended their findings in terms of the possibility of drawing logical and conceptual inferences to other contexts and of generating understanding that has the potential to advance knowledge, despite their ‘small’ size. One article (SHI139) clearly contrasted nomothetic (statistical) generalisation to idiographic generalisation, arguing that the lack of statistical generalizability does not nullify the ability of qualitative research to still be relevant beyond the sample studied.

Further, these data do not need to be statistically generalisable for us to draw inferences that may advance medicalisation analyses (Charmaz 2014). These data may be seen as an opportunity to generate further hypotheses and are a unique application of the medicalisation framework. (SHI139). Although a small-scale qualitative study related to school counselling, this analysis can be usefully regarded as a case study of the successful utilisation of mental health-related resources by adolescents. As many of the issues explored are of relevance to mental health stigma more generally, it may also provide insights into adult engagement in services. It shows how a sociological analysis, which uses positioning theory to examine how people negotiate, partially accept and simultaneously resist stigmatisation in relation to mental health concerns, can contribute to an elucidation of the social processes and narrative constructions which may maintain as well as bridge the mental health service gap. (SHI103).

Only one article (SHI30) used the term transferability to argue for the potential of wider relevance of the results which was thought to be more the product of the composition of the sample (i.e. diverse sample), rather than the sample size.

Internal validity

The second major concern that arose from a ‘small’ sample size pertained to the internal validity of findings (i.e. here the term is used to denote the ‘truth’ or credibility of research findings). Authors expressed uncertainty about the degree of confidence in particular aspects or patterns of their results, primarily those that concerned some form of differentiation on the basis of relevant participant characteristics.

The information source preferred seemed to vary according to parents’ education; however, the sample size is too small to draw conclusions about such patterns. (SHI80). Although our numbers were too small to demonstrate gender differences with any certainty, it does seem that the biomedical and erotic scripts may be more common in the accounts of men and the relational script more common in the accounts of women. (SHI81).

In other instances, articles expressed uncertainty about whether their results accounted for the full spectrum and variation of the phenomenon under investigation. In other words, a ‘small’ sample size (alongside compositional ‘deficits’ such as a not statistically representative sample) was seen to threaten the ‘content validity’ of the results which in turn led to constructions of the study conclusions as tentative.

Data collection ceased on pragmatic grounds rather than when no new information appeared to be obtained ( i.e. , saturation point). As such, care should be taken not to overstate the findings. Whilst the themes from the initial interviews seemed to be replicated in the later interviews, further interviews may have identified additional themes or provided more nuanced explanations. (BJHP53). …it should be acknowledged that this study was based on a small sample of self-selected couples in enduring marriages who were not broadly representative of the population. Thus, participants may not be representative of couples that experience postnatal PTSD. It is therefore unlikely that all the key themes have been identified and explored. For example, couples who were excluded from the study because the male partner declined to participate may have been experiencing greater interpersonal difficulties. (BJHP03).

In other instances, articles attempted to preserve a degree of credibility of their results, despite the recognition that the sample size was ‘small’. Clarity and sharpness of emerging themes and alignment with previous relevant work were the arguments employed to warrant the validity of the results.

This study focused on British Chinese carers of patients with affective disorders, using a qualitative methodology to synthesise the sociocultural representations of illness within this community. Despite the small sample size, clear themes emerged from the narratives that were sufficient for this exploratory investigation. (SHI98).

The present study sought to examine how qualitative sample sizes in health-related research are characterised and justified. In line with previous studies [ 22 , 30 , 33 , 34 ], the findings demonstrate that reporting of sample size sufficiency is limited; just over 50% of articles in the BMJ and BJHP and 82% in the SHI did not provide any sample size justification. Providing a sample size justification was not related to the number of interviews conducted, but it was associated with the journal that the article was published in, indicating the influence of disciplinary or publishing norms, also reported in prior research [ 30 ]. This lack of transparency about sample size sufficiency is problematic given that most qualitative researchers would agree that it is an important marker of quality [ 56 , 57 ]. Moreover, with the rise of qualitative research in the social sciences, efforts to synthesise existing evidence and assess its quality are obstructed by poor reporting [ 58 , 59 ].

When authors justified their sample size, our findings indicate that sufficiency was mostly appraised with reference to features that were intrinsic to the study, in agreement with general advice on sample size determination [ 4 , 11 , 36 ]. The principle of saturation was the most commonly invoked argument [ 22 ], accounting for 55% of all justifications. A wide range of variants of saturation was evident, corroborating the proliferation of the meaning of the term [ 49 ] and reflecting different underlying conceptualisations or models of saturation [ 20 ]. Nevertheless, claims of saturation were never substantiated in relation to procedures conducted in the study itself, endorsing similar observations in the literature [ 25 , 30 , 47 ]. Claims of saturation were sometimes supported with citations of other literature, suggesting a detachment of the concept from the characteristics of the study at hand. Pragmatic considerations, such as resource constraints or participant response rate and availability, were the second most frequently used argument, accounting for approximately 10% of justifications, and another 23% of justifications also represented intrinsic-to-the-study characteristics (i.e. qualities of the analysis, meeting sampling or research design requirements, richness and volume of the data obtained, nature of study, further sampling to check findings consistency).

Only 12% of mentions of sample size justification pertained to arguments that were external to the study at hand, in the form of existing sample size guidelines and prior research that sets precedents. Whilst community norms and prior research can establish useful rules of thumb for estimating sample sizes [ 60 ] – and reveal what sizes are more likely to be acceptable within research communities – researchers should avoid adopting these norms uncritically, especially when such guidelines [e.g. 30 , 35 ] might be based on research that does not provide adequate evidence of sample size sufficiency. Similarly, whilst methodological research that seeks to demonstrate the achievement of saturation is invaluable since it explicates the parameters upon which saturation is contingent and indicates when a research project is likely to require a smaller or a larger sample [e.g. 29 ], specific numbers at which saturation was achieved within these projects cannot be routinely extrapolated to other projects. We concur with existing views [ 11 , 36 ] that the consideration of the characteristics of the study at hand, such as the epistemological and theoretical approach, the nature of the phenomenon under investigation, the aims and scope of the study, the quality and richness of data, or the researcher’s experience and skills in conducting qualitative research, should be the primary guide in determining sample size and assessing its sufficiency.

Moreover, although numbers in qualitative research are not unimportant [ 61 ], sample size should not be considered alone but be embedded in the more encompassing examination of data adequacy [ 56 , 57 ]. Erickson’s [ 62 ] dimensions of ‘evidentiary adequacy’ are useful here. He explains the concept in terms of adequate amounts of evidence, adequate variety in kinds of evidence, adequate interpretive status of evidence, adequate disconfirming evidence, and adequate discrepant case analysis. All dimensions might not be relevant across all qualitative research designs, but this illustrates the thickness of the concept of data adequacy, taking it beyond sample size.

The present research also demonstrated that sample sizes were commonly seen as ‘small’ and insufficient and were discussed as a limitation. These characterisations were often unjustified (and in two cases incongruent with the articles’ own claims of saturation), which implies that sample size in qualitative health research is often adversely judged (or expected to be judged) against an implicit, yet omnipresent, quasi-quantitative standpoint. Indeed, there were a few instances in our data where authors appeared, possibly in response to reviewers, to resist some sort of quantification of their results. This implicit reference point became more apparent when authors discussed the threats deriving from an insufficient sample size. Whilst the concerns about internal validity might be legitimate to the extent that qualitative research projects, which are broadly related to realism, are set to examine phenomena in sufficient breadth and depth, the concerns around generalizability revealed a conceptualisation that is not compatible with purposive sampling. The limited potential for generalisation, as a result of a small sample size, was often discussed in nomothetic, statistical terms. Only occasionally was analytic or idiographic generalisation invoked to warrant the value of the study’s findings [ 5 , 17 ].

Strengths and limitations of the present study

We note, first, the limited number of health-related journals reviewed, so that only a ‘snapshot’ of qualitative health research has been captured. Examining additional disciplines (e.g. nursing sciences) as well as inter-disciplinary journals would add to the findings of this analysis. Nevertheless, our study is the first to provide some comparative insights on the basis of disciplines that are differently attached to the legacy of positivism, and it analysed literature published over a lengthy period of time (15 years). Guetterman [ 27 ] also examined health-related literature, but that analysis was restricted to the 26 most highly cited articles published over a period of five years, whilst Carlsen and Glenton’s [ 22 ] study concentrated on focus group health research. Moreover, although it was our intention to examine sample size justification in relation to the epistemological and theoretical positions of articles, this proved to be challenging, largely due to the absence of relevant information, the difficulty of clearly discerning articles’ positions [ 63 ], and the challenge of classifying them under specific approaches (e.g. studies often combined elements from different theoretical and epistemological traditions). We believe that such an analysis would yield useful insights as it links the methodological issue of sample size to the broader philosophical stance of the research. Despite these limitations, the analysis of the characterisation of sample size and of the threats seen to accrue from an insufficient sample size enriches our understanding of sample size (in)sufficiency argumentation by linking it to other features of the research. As the peer-review process becomes increasingly public, future research could usefully examine how reporting around sample size sufficiency and data adequacy might be influenced by the interactions between authors and reviewers.

Conclusions

The past decade has seen a growing appetite in qualitative research for an evidence-based approach to sample size determination and to evaluations of the sufficiency of sample size. Despite the conceptual and methodological developments in the area, the findings of the present study confirm previous studies in concluding that appraisals of sample size sufficiency are either absent or poorly substantiated. To ensure and maintain high-quality research that will encourage greater appreciation of qualitative work in health-related sciences [ 64 ], we argue that qualitative researchers should be more transparent and thorough in their evaluation of sample size as part of their appraisal of data adequacy. We would encourage the practice of appraising sample size sufficiency with close reference to the study at hand and would thus caution against responding to the growing methodological research in this area with a decontextualised application of numerical sample size guidelines, norms and principles. Although researchers might find that sample size community norms serve as useful rules of thumb, we recommend that methodological knowledge be used to consider critically how saturation and the other parameters that affect sample size sufficiency pertain to the specifics of the particular project. Those reviewing papers have a vital role in encouraging transparent, study-specific reporting. The review process should support authors in exercising nuanced judgements about sample size determination in the context of the range of factors that influence sample size sufficiency and the specifics of a particular study. In light of the growing methodological evidence in the area, transparent presentation of such evidence-based judgement is crucial and, in time, should obviate the seemingly routine practice of citing the ‘small’ size of qualitative samples among the study limitations.

A non-parametric test of difference for independent samples was performed, since the variable ‘number of interviews’ violated the assumption of normality according to the standardized scores of skewness and kurtosis (BMJ: z skewness = 3.23, z kurtosis = 1.52; BJHP: z skewness = 4.73, z kurtosis = 4.85; SHI: z skewness = 12.04, z kurtosis = 21.72) and the Shapiro–Wilk test of normality (p < .001).
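For readers less familiar with this kind of check, the sketch below illustrates how a normality screen (standardized skewness and kurtosis, Shapiro–Wilk) followed by a non-parametric group comparison might be run. The interview counts are invented placeholders, and the Kruskal–Wallis test is only one plausible choice for the unnamed ‘non-parametric test of difference’ across the three journals; nothing here reproduces the paper’s actual analysis.

```python
# Hedged illustration only: the interview counts below are invented placeholders,
# and the Kruskal-Wallis test is one plausible choice of non-parametric test for
# comparing an interview-count variable across three independent journal groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical numbers of interviews per article in each journal.
bmj = rng.poisson(25, 50) + 1
bjhp = rng.poisson(15, 50) + 1
shi = rng.poisson(30, 50) + 1

for name, sample in [("BMJ", bmj), ("BJHP", bjhp), ("SHI", shi)]:
    z_skew, _ = stats.skewtest(sample)      # standardized skewness (z-score)
    z_kurt, _ = stats.kurtosistest(sample)  # standardized kurtosis (z-score)
    _, p_sw = stats.shapiro(sample)         # Shapiro-Wilk test of normality
    print(f"{name}: z_skew={z_skew:.2f}, z_kurt={z_kurt:.2f}, Shapiro-Wilk p={p_sw:.3f}")

# If normality is violated, compare the three groups with a rank-based test.
h, p = stats.kruskal(bmj, bjhp, shi)
print(f"Kruskal-Wallis H={h:.2f}, p={p:.3f}")
```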

Abbreviations

BJHP: British Journal of Health Psychology

BMJ: British Medical Journal

IPA: Interpretative Phenomenological Analysis

SHI: Sociology of Health & Illness

References

1. Spencer L, Ritchie J, Lewis J, Dillon L. Quality in qualitative evaluation: a framework for assessing research evidence. National Centre for Social Research; 2003. https://www.heacademy.ac.uk/system/files/166_policy_hub_a_quality_framework.pdf . Accessed 11 May 2018.
2. Fusch PI, Ness LR. Are we there yet? Data saturation in qualitative research. Qual Rep. 2015;20(9):1408–16.
3. Robinson OC. Sampling in interview-based qualitative research: a theoretical and practical guide. Qual Res Psychol. 2014;11(1):25–41.
4. Sandelowski M. Sample size in qualitative research. Res Nurs Health. 1995;18(2):179–83.
5. Sandelowski M. One is the liveliest number: the case orientation of qualitative research. Res Nurs Health. 1996;19(6):525–9.
6. Luborsky MR, Rubinstein RL. Sampling in qualitative research: rationale, issues, and methods. Res Aging. 1995;17(1):89–113.
7. Marshall MN. Sampling for qualitative research. Fam Pract. 1996;13(6):522–6.
8. Patton MQ. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: Sage; 1990.
9. van Rijnsoever FJ. (I Can’t Get No) Saturation: a simulation and guidelines for sample sizes in qualitative research. PLoS One. 2017;12(7):e0181689.
10. Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9.
11. Morse JM. Determining sample size. Qual Health Res. 2000;10(1):3–5.
12. Gergen KJ, Josselson R, Freeman M. The promises of qualitative inquiry. Am Psychol. 2015;70(1):1–9.
13. Borsci S, Macredie RD, Barnett J, Martin J, Kuljis J, Young T. Reviewing and extending the five-user assumption: a grounded procedure for interaction evaluation. ACM Trans Comput Hum Interact. 2013;20(5):29.
14. Borsci S, Macredie RD, Martin JL, Young T. How many testers are needed to assure the usability of medical devices? Expert Rev Med Devices. 2014;11(5):513–25.
15. Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. Chicago, IL: Aldine; 1967.
16. Kerr C, Nixon A, Wild D. Assessing and demonstrating data saturation in qualitative inquiry supporting patient-reported outcomes research. Expert Rev Pharmacoecon Outcomes Res. 2010;10(3):269–81.
17. Lincoln YS, Guba EG. Naturalistic inquiry. London: Sage; 1985.
18. Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. 2015;26:1753–60.
19. Nelson J. Using conceptual depth criteria: addressing the challenge of reaching saturation in qualitative research. Qual Res. 2017;17(5):554–70.
20. Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2017. https://doi.org/10.1007/s11135-017-0574-8 .
21. Caine K. Local standards for sample size at CHI. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM; 2016. p. 981–92.
22. Carlsen B, Glenton C. What about N? A methodological study of sample-size reporting in focus group studies. BMC Med Res Methodol. 2011;11(1):26.
23. Constantinou CS, Georgiou M, Perdikogianni M. A comparative method for themes saturation (CoMeTS) in qualitative interviews. Qual Res. 2017;17(5):571–88.
24. Dai NT, Free C, Gendron Y. Interview-based research in accounting 2000–2014: a review. November 2016. https://ssrn.com/abstract=2711022 or https://doi.org/10.2139/ssrn.2711022 . Accessed 17 May 2018.
25. Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.
26. Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.
27. Guetterman TC. Descriptions of sampling practices within five approaches to qualitative research in education and the health sciences. Forum Qual Soc Res. 2015;16(2):25. http://nbn-resolving.de/urn:nbn:de:0114-fqs1502256 . Accessed 17 May 2018.
28. Hagaman AK, Wutich A. How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on Guest, Bunce, and Johnson’s (2006) landmark study. Field Methods. 2017;29(1):23–41.
29. Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591–608.
30. Marshall B, Cardon P, Poddar A, Fontenot R. Does sample size matter in qualitative research? A review of qualitative interviews in IS research. J Comput Inform Syst. 2013;54(1):11–22.
31. Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum Qual Soc Res. 2010;11(3):8. http://nbn-resolving.de/urn:nbn:de:0114-fqs100387 . Accessed 17 May 2018.
32. Safman RM, Sobal J. Qualitative sample extensiveness in health education research. Health Educ Behav. 2004;31(1):9–21.
33. Saunders MN, Townsend K. Reporting and justifying the number of interview participants in organization and workplace research. Br J Manag. 2016;27(4):836–52.
34. Sobal J. Sample extensiveness in qualitative nutrition education research. J Nutr Educ. 2001;33(4):184–92.
35. Thomson SB. Sample size and grounded theory. JOAAG. 2010;5(1). http://www.joaag.com/uploads/5_1__Research_Note_1_Thomson.pdf . Accessed 17 May 2018.
36. Baker SE, Edwards R. How many qualitative interviews is enough? Expert voices and early career reflections on sampling and cases in qualitative research. National Centre for Research Methods Review Paper; 2012. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf . Accessed 17 May 2018.
37. Ogden J, Cornwell D. The role of topic, interviewee, and question in predicting rich interview data in the field of health research. Sociol Health Illn. 2010;32(7):1059–71.
38. Green J, Thorogood N. Qualitative methods for health research. London: Sage; 2004.
39. Ritchie J, Lewis J, Elam G. Designing and selecting samples. In: Ritchie J, Lewis J, editors. Qualitative research practice: a guide for social science students and researchers. London: Sage; 2003. p. 77–108.
40. Britten N. Qualitative research: qualitative interviews in medical research. BMJ. 1995;311(6999):251–3.
41. Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 2nd ed. London: Sage; 2007.
42. Fugard AJ, Potts HW. Supporting thinking on sample sizes for thematic analyses: a quantitative tool. Int J Soc Res Methodol. 2015;18(6):669–84.
43. Emmel N. Themes, variables, and the limits to calculating sample size in qualitative research: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):685–6.
44. Braun V, Clarke V. (Mis)conceptualising themes, thematic analysis, and other problems with Fugard and Potts’ (2015) sample-size tool for thematic analysis. Int J Soc Res Methodol. 2016;19(6):739–43.
45. Hammersley M. Sampling and thematic analysis: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):687–8.
46. Charmaz K. Constructing grounded theory: a practical guide through qualitative analysis. London: Sage; 2006.
47. Bowen GA. Naturalistic inquiry and the saturation concept: a research note. Qual Res. 2008;8(1):137–52.
48. Morse JM. Data were saturated. Qual Health Res. 2015;25(5):587–8.
49. O’Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. Qual Res. 2013;13(2):190–7.
50. van Manen M, Higgins I, van der Riet P. A conversation with Max van Manen on phenomenology in its original sense. Nurs Health Sci. 2016;18(1):4–7.
51. Dey I. Grounding grounded theory. San Francisco, CA: Academic Press; 1999.
52. Hays DG, Wood C, Dahl H, Kirk-Jenkins A. Methodological rigor in Journal of Counseling & Development qualitative research articles: a 15-year review. J Couns Dev. 2016;94(2):172–83.
53. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
54. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.
55. Boyatzis RE. Transforming qualitative information: thematic analysis and code development. Thousand Oaks, CA: Sage; 1998.
56. Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual Psychol. 2017;4(1):2–22.
57. Morrow SL. Quality and trustworthiness in qualitative research in counseling psychology. J Couns Psychol. 2005;52(2):250–60.
58. Barroso J, Sandelowski M. Sample reporting in qualitative studies of women with HIV infection. Field Methods. 2003;15(4):386–404.
59. Glenton C, Carlsen B, Lewin S, Munthe-Kaas H, Colvin CJ, Tunçalp Ö, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 5: how to assess adequacy of data. Implement Sci. 2018;13(Suppl 1):14.
60. Onwuegbuzie AJ, Leech NL. A call for qualitative power analyses. Qual Quant. 2007;41(1):105–21.
61. Sandelowski M. Real qualitative researchers do not count: the use of numbers in qualitative research. Res Nurs Health. 2001;24(3):230–40.
62. Erickson F. Qualitative methods in research on teaching. In: Wittrock M, editor. Handbook of research on teaching. 3rd ed. New York: Macmillan; 1986. p. 119–61.
63. Bradbury-Jones C, Taylor J, Herber O. How theory is used and articulated in qualitative research: development of a new typology. Soc Sci Med. 2014;120:135–41.
64. Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;352:i563.


Acknowledgments

We would like to thank Dr. Paula Smith and Katharine Lee for their comments on a previous draft of this paper as well as Natalie Ann Mitchell and Meron Teferra for assisting us with data extraction.

Funding

This research was initially conceived of and partly conducted with financial support from the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) programme (EP/F063822/1 and EP/G012393/1). The research was continued and completed independently of any support. The funding body did not have any role in the study design; the collection, analysis and interpretation of the data; the writing of the paper; or the decision to submit the manuscript for publication. The views expressed are those of the authors alone.

Availability of data and materials

Supporting data can be accessed in the original publications. Additional File 2 lists all eligible studies that were included in the present analysis.

Author information

Authors and affiliations

Department of Psychology, University of Bath, Building 10 West, Claverton Down, Bath, BA2 7AY, UK

Konstantina Vasileiou & Julie Barnett

School of Psychology, Newcastle University, Ridley Building 1, Queen Victoria Road, Newcastle upon Tyne, NE1 7RU, UK

Susan Thorpe

Department of Computer Science, Brunel University London, Wilfred Brown Building 108, Uxbridge, UB8 3PH, UK

Terry Young


Contributions

JB and TY conceived the study; KV, JB, and TY designed the study; KV identified the articles and extracted the data; KV and JB assessed eligibility of articles; KV, JB, ST, and TY contributed to the analysis of the data, discussed the findings and early drafts of the paper; KV developed the final manuscript; KV, JB, ST, and TY read and approved the manuscript.

Corresponding author

Correspondence to Konstantina Vasileiou .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Terry Young is an academic who undertakes research and occasional consultancy in the areas of health technology assessment, information systems, and service design. He is unaware of any direct conflict of interest with respect to this paper. All other authors have no competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional Files

Additional File 1:

Editorial positions on qualitative research and sample considerations (where available). (DOCX 12 kb)

Additional File 2:

List of eligible articles included in the review (N = 214). (DOCX 38 kb)

Additional File 3:

Data Extraction Form. (DOCX 15 kb)

Additional File 4:

Citations used by articles to support their position on saturation. (DOCX 14 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Vasileiou, K., Barnett, J., Thorpe, S. et al. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med Res Methodol 18 , 148 (2018). https://doi.org/10.1186/s12874-018-0594-7


Received : 22 May 2018

Accepted : 29 October 2018

Published : 21 November 2018

DOI : https://doi.org/10.1186/s12874-018-0594-7


Keywords

  • Sample size
  • Sample size justification
  • Sample size characterisation
  • Data adequacy
  • Qualitative health research
  • Qualitative interviews
  • Systematic analysis



InterviewPrep

Top 20 Qualitative Research Interview Questions & Answers

Master your responses to Qualitative Research related interview questions with our example questions and answers. Boost your chances of landing the job by learning how to effectively communicate your Qualitative Research capabilities.


Diving into the intricacies of human behavior, thoughts, and experiences is the lifeblood of qualitative research. As a professional in this nuanced field, you are well-versed in the art of gathering rich, descriptive data that can provide deep insights into complex issues. Now, as you prepare to take on new challenges in your career, it’s time to demonstrate not only your expertise in qualitative methodologies but also your ability to think critically and adapt to various research contexts.

Whether you’re interviewing for an academic position, a role within a market research firm, or any other setting where qualitative skills are prized, being prepared with thoughtful responses to potential interview questions can set you apart from other candidates. In this article, we will discuss some of the most common questions asked during interviews for qualitative research roles, offering guidance on how best to articulate your experience and approach to prospective employers.

Common Qualitative Research Interview Questions

1. How do you ensure the credibility of your data in qualitative research?

Ensuring credibility in qualitative research is crucial for the trustworthiness of the findings. By asking about methodological rigor, the interviewer is assessing a candidate’s understanding of strategies such as triangulation, member checking, and maintaining a detailed audit trail, which are essential for substantiating the integrity of qualitative data.

When responding to this question, you should articulate a multi-faceted approach to establishing credibility. Begin by highlighting your understanding of the importance of a well-defined research design and data collection strategy. Explain how you incorporate methods like triangulation, using multiple data sources or perspectives to confirm the consistency of the information obtained. Discuss your process for member checking—obtaining feedback on your findings from the participants themselves—to add another layer of validation. Mention your dedication to keeping a comprehensive audit trail, documenting all stages of the research process, which enables peer scrutiny and adds to the transparency of the study. Emphasize your ongoing commitment to reflexivity, where you continually examine your biases and influence on the research. Through this detailed explanation, you demonstrate a conscientious and systematic approach to safeguarding the credibility of your qualitative research.

Example: “ To ensure the credibility of data in qualitative research, I employ a rigorous research design that is both systematic and reflective. Initially, I establish clear protocols for data collection, which includes in-depth interviews, focus groups, and observations, ensuring that each method is well-suited to the research questions. To enhance the validity of the findings, I apply triangulation, drawing on various data sources, theoretical frameworks, and methodologies to cross-verify the information and interpretations.

During the analysis phase, member checking is a critical step, where I return to participants with a summary of the findings to validate the accuracy and resonance of the interpreted data with their experiences. This not only strengthens the credibility of the results but also enriches the data by incorporating participant insights. Furthermore, I maintain a comprehensive audit trail, meticulously documenting the research process, decisions made, and data transformations. This transparency allows for peer review and ensures that the research can be followed and critiqued by others in the field.

Lastly, reflexivity is integral to my practice. I continuously engage in self-reflection to understand and articulate my biases and assumptions and how they may influence the research process. By doing so, I can mitigate potential impacts on the data and interpretations, ensuring that the findings are a credible representation of the phenomenon under investigation.”

2. Describe a situation where you had to adapt your research methodology due to unforeseen challenges.

When unexpected variables arise, adaptability in research design is vital to maintain the integrity and validity of the study. This question seeks to assess a candidate’s problem-solving skills, flexibility, and resilience in the face of research challenges.

When responding, share a specific instance where you encountered a challenge that impacted your research methodology. Detail the nature of the challenge, the thought process behind your decision to adapt, the steps you took to revise your approach, and the outcome of those changes. Emphasize your critical thinking, your ability to consult relevant literature or peers if necessary, and how your adaptability contributed to the overall success or learning experience of the research project.

Example: “ In a recent qualitative study on community health practices, I encountered a significant challenge when the planned in-person interviews became unfeasible due to a sudden public health concern. The initial methodology was designed around face-to-face interactions to capture rich, detailed narratives. However, with participant safety as a priority, I quickly pivoted to remote data collection methods. After reviewing relevant literature on virtual qualitative research, I adapted the protocol to include video conferencing and phone interviews, ensuring I could still engage deeply with participants. This adaptation required a reevaluation of our ethical considerations, particularly around confidentiality and informed consent in digital formats.

The shift to remote interviews introduced concerns about potential biases, as the change might exclude individuals without access to the necessary technology. To mitigate this, I also offered the option of asynchronous voice recordings or email responses as a means to participate. This inclusive approach not only preserved the integrity of the study but also revealed an unexpected layer of data regarding digital literacy and access in the community. The study’s findings were robust, and the methodology adaptation was reflected upon in the final report, contributing to the discourse on the flexibility and resilience of qualitative research in dynamic contexts.”

3. What strategies do you employ for effective participant observation?

For effective participant observation, a balance between immersion and detachment is necessary to gather in-depth understanding without influencing the natural setting. This method allows the researcher to collect rich, contextual data that surveys or structured interviews might miss.

When responding to this question, highlight your ability to blend in with the participant group to minimize your impact on their behavior. Discuss your skills in active listening, detailed note-taking, and ethical considerations such as informed consent and maintaining confidentiality. Mention any techniques you use to reflect on your observations critically and how you ensure that your presence does not alter the dynamics of the group you are studying. It’s also effective to provide examples from past research where your participant observation led to valuable insights that informed your study’s findings.

Example: “ In participant observation, my primary strategy is to achieve a balance between immersion and detachment. I immerse myself in the environment to gain a deep understanding of the context and participants’ perspectives, while remaining sufficiently detached to observe and analyze behaviors and interactions objectively. To blend in, I adapt to the cultural norms and social cues of the group, which often involves a period of learning and adjustment to minimize my impact on their behavior.

Active listening is central to my approach, allowing me to capture the subtleties of communication beyond verbal exchanges. I complement this with meticulous note-taking, often employing a system of shorthand that enables me to record details without disrupting the flow of interaction. Ethically, I prioritize informed consent and confidentiality, ensuring participants are aware of my role and the study’s purpose. After observations, I engage in reflexive practice, critically examining my own biases and influence on the research setting. This reflexivity was instrumental in a past project where my awareness of my impact on group dynamics led to the discovery of underlying power structures that were not immediately apparent, significantly enriching the study’s findings.”

4. In what ways do you maintain ethical standards while conducting in-depth interviews?

Maintaining ethical standards during in-depth interviews involves respecting participant confidentiality, ensuring informed consent, and being sensitive to power dynamics. Ethical practice in this context is not only about adhering to institutional guidelines but also about fostering an environment where interviewees feel respected and understood.

When responding to this question, it’s vital to articulate a clear understanding of ethical frameworks such as confidentiality and informed consent. Describe specific strategies you employ, such as anonymizing data, obtaining consent through clear communication about the study’s purpose and the participant’s role, and ensuring the interviewee’s comfort and safety during the conversation. Highlight any training or certifications you’ve received in ethical research practices and give examples from past research experiences where you navigated ethical dilemmas successfully. This approach demonstrates your commitment to integrity in the research process and your ability to protect the well-being of your subjects.

Example: “ Maintaining ethical standards during in-depth interviews is paramount to the integrity of the research process. I ensure that all participants are fully aware of the study’s purpose, their role within it, and the ways in which their data will be used. This is achieved through a clear and comprehensive informed consent process. I always provide participants with the option to withdraw from the study at any point without penalty.

To safeguard confidentiality, I employ strategies such as anonymizing data and using secure storage methods. I am also attentive to the comfort and safety of interviewees, creating a respectful and non-threatening interview environment. In situations where sensitive topics may arise, I am trained to handle these with the necessary care and professionalism. For instance, in a past study involving vulnerable populations, I implemented additional privacy measures and worked closely with an ethics review board to navigate the complexities of the research context. My approach is always to prioritize the dignity and rights of the participants, adhering to ethical guidelines and best practices established in the field.”

5. How do you approach coding textual data without personal biases influencing outcomes?

When an interviewer poses a question about coding textual data free from personal biases, they are probing your ability to maintain objectivity and adhere to methodological rigor. This question tests your understanding of qualitative analysis techniques and your awareness of the researcher’s potential to skew data interpretation.

When responding, it’s essential to articulate your familiarity with established coding procedures such as open, axial, or thematic coding. Emphasize your systematic approach to data analysis, which might include multiple rounds of coding, peer debriefing, and maintaining a reflexive journal. Discuss the importance of bracketing your preconceptions during data analysis and how you would seek to validate your coding through methods such as triangulation or member checking. Your answer should convey a balance between a structured approach to coding and an openness to the data’s nuances, demonstrating your commitment to producing unbiased and trustworthy qualitative research findings.

Example: “ In approaching textual data coding, I adhere to a structured yet flexible methodology that mitigates personal bias. Initially, I engage in open coding to categorize data based on its manifest content, allowing patterns to emerge organically. This is followed by axial coding, where I explore connections between categories, and if applicable, thematic coding to identify overarching themes. Throughout this process, I maintain a reflexive journal to document my thought process and potential biases, ensuring transparency and self-awareness.

To ensure the reliability of my coding, I employ peer debriefing sessions, where colleagues scrutinize my coding decisions, challenging assumptions and offering alternative interpretations. This collaborative scrutiny helps to counteract any personal biases that might have crept into the analysis. Additionally, I utilize methods such as triangulation, comparing data across different sources, and member checking, soliciting feedback from participants on the accuracy of the coded data. These strategies collectively serve to validate the coding process and ensure that the findings are a credible representation of the data, rather than a reflection of my preconceptions.”
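Alongside peer debriefing and triangulation, some teams add a simple quantitative check of intercoder agreement, for example Cohen's kappa, when two coders have applied the same codebook independently. The answer above does not mention this technique; the sketch below is only an optional, hedged illustration, and the codes and transcript segments are hypothetical.

```python
# Hedged sketch: a simple intercoder-agreement check (Cohen's kappa) that some
# teams use alongside peer debriefing. All codes and segments are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Codes assigned independently by two coders to the same ten transcript segments.
coder_a = ["access", "trust", "cost", "trust", "access", "cost", "trust", "access", "cost", "trust"]
coder_b = ["access", "trust", "cost", "access", "access", "cost", "trust", "access", "cost", "trust"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement
```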

6. What is your experience with utilizing grounded theory in qualitative studies?

Grounded theory is a systematic methodology that operates almost in reverse compared with traditional hypothesis-driven research: rather than testing a predetermined theory, the theory is generated from the data. Employers ask about your experience with grounded theory to assess your ability to conduct research that is flexible and adaptable to the data.

When responding, you should outline specific studies or projects where you’ve applied grounded theory. Discuss the nature of the data you worked with, the process of iterative data collection and analysis, and how you developed a theoretical framework as a result. Highlight any challenges you faced and how you overcame them, as well as the outcomes of your research. This will show your practical experience and your ability to engage deeply with qualitative data to extract meaningful theories and conclusions.

Example: “ In applying grounded theory to my qualitative studies, I have embraced its iterative approach to develop a theoretical framework grounded in empirical data. For instance, in a project exploring the coping mechanisms of individuals with chronic illnesses, I conducted in-depth interviews and focus groups, allowing the data to guide the research process. Through constant comparative analysis, I coded the data, identifying core categories and the relationships between them. This emergent coding process was central to refining and saturating the categories, ensuring the development of a robust theory that encapsulated the lived experiences of the participants.

Challenges such as data saturation and ensuring theoretical sensitivity were navigated by maintaining a balance between openness to the data and guiding research questions. The iterative nature of grounded theory facilitated the identification of nuanced coping strategies that were not initially apparent, leading to a theory that emphasized the dynamic interplay between personal agency and social support. The outcome was a substantive theory that not only provided a deeper understanding of the participants’ experiences but also had practical implications for designing support systems for individuals with chronic conditions.”

7. Outline the steps you take when conducting a thematic analysis.

Thematic analysis is a method used to identify, analyze, and report patterns within data, and it requires a systematic approach to ensure validity and reliability. This question assesses whether a candidate can articulate a clear, methodical process that will yield insightful findings from qualitative data.

When responding, you should outline a step-by-step process that begins with familiarization with the data, whereby you immerse yourself in the details, taking notes and highlighting initial ideas. Proceed to generating initial codes across the entire dataset, which involves organizing data into meaningful groups. Then, search for themes by collating codes into potential themes and gathering all data relevant to each potential theme. Review these themes to ensure they work in relation to the coded extracts and the entire dataset, refining them as necessary. Define and name themes, which entails developing a detailed analysis of each theme and determining the essence of what each theme is about. Finally, report the findings, weaving the analytic narrative with vivid examples, within the context of existing literature and the research questions. This methodical response not only showcases your technical knowledge but also demonstrates an organized thought process and the ability to communicate complex procedures clearly.

Example: “ In conducting a thematic analysis, I begin by thoroughly immersing myself in the data, which involves meticulously reading and re-reading the content to gain a deep understanding of its breadth and depth. During this stage, I make extensive notes and begin to mark initial ideas that strike me as potentially significant.

Following familiarization, I generate initial codes systematically across the entire dataset. This coding process is both reflective and interpretative, as it requires me to identify and categorize data segments that are pertinent to the research questions. These codes are then used to organize the data into meaningful groups.

Next, I search for themes by examining the codes and considering how they may combine to form overarching themes. This involves collating all the coded data relevant to each potential theme and considering the interrelationships between codes, themes, and different levels of themes, which may include sub-themes.

The subsequent step is to review these themes, checking them against the dataset to ensure they accurately represent the data. This may involve collapsing some themes into each other, splitting others, and refining the specifics of each theme. The essence of this iterative process is to refine the themes so that they tell a coherent story about the data.

Once the themes are satisfactorily developed, I define and name them. This involves a detailed analysis of each theme and determining what aspect of the data each theme captures. I aim to articulate the nuances within each theme, identifying the story that each tells about the data, and considering how this relates to the broader research questions and literature.

Lastly, I report the findings, weaving together the thematic analysis narrative. This includes selecting vivid examples that compellingly illustrate each theme, discussing how the themes interconnect, and situating them within the context of existing literature and the research questions. This final write-up is not merely about summarizing the data but about telling a story that provides insights into the research topic.”
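Thematic analysis is an interpretive, largely manual process, typically supported by dedicated qualitative analysis software rather than scripts. Still, the mechanical step of collating coded extracts under candidate themes can be illustrated with a minimal data structure, as in the hedged sketch below; all codes, themes and extracts are hypothetical.

```python
# Minimal sketch of the "collating coded extracts under candidate themes" step.
# All codes, themes and extracts are hypothetical; real projects typically use
# dedicated qualitative analysis software for this.
from collections import defaultdict

# Each coded extract: (code, verbatim extract).
coded_extracts = [
    ("waiting times", "I gave up after sitting there for three hours."),
    ("cost of travel", "The bus fare alone puts me off going."),
    ("trust in staff", "The nurse always remembers my name."),
    ("waiting times", "You never know how long you'll be."),
]

# Candidate themes and the codes currently grouped under them.
themes = {
    "Barriers to access": ["waiting times", "cost of travel"],
    "Relational continuity": ["trust in staff"],
}

# Gather all extracts relevant to each candidate theme for review and refinement.
collated = defaultdict(list)
for theme, codes in themes.items():
    for code, extract in coded_extracts:
        if code in codes:
            collated[theme].append(extract)

for theme, extracts in collated.items():
    print(theme)
    for extract in extracts:
        print("  -", extract)
```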

8. When is it appropriate to use focus groups rather than individual interviews, and why?

Choosing between focus groups and individual interviews depends on the research goals and the nature of the information sought. Focus groups excel in exploring complex behaviors, attitudes, and experiences through the dynamic interaction of participants.

When responding to this question, articulate the strengths of both methods, matching them to specific research scenarios. For focus groups, emphasize your ability to facilitate lively, guided discussions that leverage group dynamics to elicit a breadth of perspectives. For individual interviews, highlight your skill in creating a safe, confidential space where participants can share detailed, personal experiences. Demonstrate strategic thinking by discussing how you would decide on the most suitable method based on the research question, participant characteristics, and the type of data needed to achieve your research objectives.

Example: “ Focus groups are particularly apt when the research question benefits from the interaction among participants, as the group dynamics can stimulate memories, ideas, and experiences that might not surface in one-on-one interviews. They are valuable for exploring the range of opinions or feelings about a topic, allowing researchers to observe consensus formation, the diversity of perspectives, and the reasoning behind attitudes. This method is also efficient for gathering a breadth of data in a limited timeframe. However, it’s crucial to ensure that the topic is suitable for discussion in a group setting and that participants are comfortable speaking in front of others.

Conversely, individual interviews are more appropriate when the subject matter is sensitive or requires deep exploration of personal experiences. They provide a private space for participants to share detailed and nuanced insights without the influence of others, which can be particularly important when discussing topics that may not be openly talked about in a group. The method allows for a tailored approach, where the interviewer can adapt questions based on the participant’s responses, facilitating a depth of understanding that is harder to achieve in a group setting. The decision between the two methods ultimately hinges on the specific needs of the research, the nature of the topic, and the goals of the study.”

9. Detail how you would validate findings from a case study research design.

In case study research, validation is paramount to ensure that interpretations and conclusions are credible. A well-validated case study reinforces the rigor of the research method and bolsters the transferability of its findings to other contexts.

When responding to this question, detail your process, which might include triangulation, where you corroborate findings with multiple data sources or perspectives; member checking, which involves sharing your interpretations with participants for their input; and seeking peer debriefing, where colleagues critique the process and findings. Explain how these methods contribute to the dependability and confirmability of your research, showing that you are not just collecting data but actively engaging with it to construct a solid, defensible narrative.

Example: “ In validating findings from a case study research design, I employ a multi-faceted approach to ensure the dependability and confirmability of the research. Triangulation is a cornerstone of my validation process, where I corroborate evidence from various data sources, such as interviews, observations, and documents. This method allows for cross-validation and helps in constructing a robust narrative by revealing consistencies and discrepancies in the data.

Member checking is another essential step in my process. By sharing my interpretations with participants, I not only honor their perspectives but also enhance the credibility of the findings. This iterative process ensures that the conclusions drawn are reflective of the participants’ experiences and not solely based on my own interpretations.

Lastly, peer debriefing serves as a critical checkpoint. By engaging colleagues who critique the research process and findings, I open the study to external scrutiny, which helps in mitigating any potential biases and enhances the study’s rigor. These colleagues act as devil’s advocates, challenging assumptions and conclusions, thereby strengthening the study’s validity. Collectively, these strategies form a comprehensive approach to validating case study research, ensuring that the findings are well-substantiated and trustworthy.”

10. What measures do you take to ensure the transferability of your qualitative research findings?

When asked about ensuring transferability, the interviewer is assessing your ability to articulate the relevance of your findings beyond the specific context of your study. They want to know if you can critically appraise your research design and methodology.

To respond effectively, you should discuss the thoroughness of your data collection methods, such as purposive sampling, to gather diverse perspectives that enhance the depth of the data. Explain your engagement with participants and the setting to ensure a rich understanding of the phenomenon under study. Highlight your detailed documentation of the research process, including your reflexivity, to allow others to follow your footsteps analytically. Finally, speak about how you communicate the boundaries of your research applicability and how you encourage readers to consider the transferability of findings to their contexts through clear and comprehensive descriptions of your study’s context, participants, and assumptions.

Example: “ In ensuring the transferability of my qualitative research findings, I prioritize a robust and purposive sampling strategy that captures a wide range of perspectives relevant to the research question. This approach not only enriches the data but also provides a comprehensive understanding of the phenomenon across varied contexts. By doing so, I lay a foundation for the findings to resonate with similar situations, allowing others to judge the applicability of the results to their own contexts.

I meticulously document the research process, including the setting, participant interactions, and my own reflexivity, to provide a transparent and detailed account of how conclusions were reached. This level of documentation serves as a roadmap for other researchers or practitioners to understand the intricacies of the study and evaluate the potential for transferability. Furthermore, I ensure that my findings are presented with a clear delineation of the context, including any cultural, temporal, or geographic nuances, and discuss the assumptions underpinning the study. By offering this rich, contextualized description, I invite readers to engage critically with the findings and assess their relevance to other settings, thus facilitating a responsible and informed application of the research outcomes.”

11. How do you determine when data saturation has been reached in your study?

Determining data saturation is crucial because it signals when additional data does not yield new insights, ensuring efficient use of resources without compromising the depth of understanding. This question is posed to assess a candidate’s experience and judgment in qualitative research.

When responding to this question, one should highlight their systematic approach to data collection and analysis. Discuss the iterative process of engaging with the data, constantly comparing new information with existing codes and themes. Explain how you monitor for emerging patterns and at what point these patterns become consistent and repeatable, indicating saturation. Mention any specific techniques or criteria you employ, such as the use of thematic analysis or constant comparison methods, and how you document the decision-making process to ensure transparency and validity in your research findings.

Example: “ In determining data saturation, I employ a rigorous and iterative approach to data collection and analysis. As I engage with the data, I continuously compare new information against existing codes and themes, carefully monitoring for the emergence of new patterns or insights. Saturation is approached when the data begins to yield redundant information, and no new themes or codes are emerging from the analysis.

I utilize techniques such as thematic analysis and constant comparison methods to ensure a systematic examination of the data. I document each step of the decision-making process, noting when additional data does not lead to new theme identification or when existing themes are fully fleshed out. This documentation not only serves as a checkpoint for determining saturation but also enhances the transparency and validity of the research findings. Through this meticulous process, I can confidently assert that data saturation has been achieved when the collected data offers a comprehensive understanding of the research phenomenon, with a rich and well-developed thematic structure that accurately reflects the research scope.”
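One pragmatic way some researchers operationalise the idea of 'no new codes emerging' is to track how many new codes each successive interview contributes to the codebook, in the spirit of published stopping-criterion approaches. The sketch below is a hedged illustration with invented per-interview codebooks, not a substitute for the interpretive judgement described above.

```python
# Hedged sketch: tracking how many new codes each successive interview adds.
# The per-interview codebooks are invented; a run of interviews contributing no
# new codes is a pragmatic signal (not proof) that code saturation may be near.
interview_codes = [
    {"access", "cost", "trust"},     # interview 1
    {"access", "waiting", "trust"},  # interview 2
    {"cost", "waiting", "stigma"},   # interview 3
    {"trust", "stigma"},             # interview 4
    {"access", "cost"},              # interview 5
]

seen = set()
for i, codes in enumerate(interview_codes, start=1):
    new = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s), {len(seen)} codes in total")
```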

12. Relate an instance where member checking significantly altered your research conclusions.

Member checking serves as a vital checkpoint to ensure accuracy, credibility, and resonance of the data with those it represents. It can reveal misunderstandings or even introduce new insights that substantially shift the study’s trajectory or outcomes.

When responding, candidates should recount a specific project where member checking made a pivotal difference in their findings. They should detail the initial conclusions, how the process of member checking was integrated, what feedback was received, and how it led to a re-evaluation or refinement of the research outcomes. This response showcases the candidate’s methodological rigor, flexibility in incorporating feedback, and dedication to producing research that authentically reflects the voices and experiences of the study’s participants.

Example: “ In a recent qualitative study on community responses to urban redevelopment, initial findings suggested broad support for the initiatives among residents. However, during the member checking phase, when participants reviewed and commented on the findings, a nuanced perspective emerged. Several participants highlighted that their apparent support was, in fact, resignation due to a lack of viable alternatives, rather than genuine enthusiasm for the redevelopment plans.

This feedback prompted a deeper dive into the data, revealing a pattern of resigned acceptance across a significant portion of the interviews. The conclusion was substantially revised to reflect this sentiment, emphasizing the complexity of community responses to redevelopment, which included both cautious optimism and skeptical resignation. This critical insight not only enriched the study’s validity but also had profound implications for policymakers interested in understanding the true sentiment of the affected communities.”

13. What are the key considerations when selecting a sample for phenomenological research?

The selection of a sample in phenomenological research is not about quantity but about the richness and relevance of the data that participants can provide. It requires an intimate knowledge of the research question and a deliberate choice to include participants who have experienced the phenomenon in question.

When responding to this question, it’s essential to emphasize the need for a purposeful sampling strategy that aims to capture a broad spectrum of perspectives on the phenomenon under study. Discuss the importance of sample diversity to ensure the findings are robust and reflect varied experiences. Mention the necessity of establishing clear criteria for participant selection and the willingness to adapt as the research progresses. Highlighting your commitment to ethical considerations, such as informed consent and the respectful treatment of participants’ information, will also demonstrate your thorough understanding of the nuances in qualitative sampling.

Example: “ In phenomenological research, the primary goal is to understand the essence of experiences concerning a particular phenomenon. Therefore, the key considerations for sample selection revolve around identifying individuals who have experienced the phenomenon of interest and can articulate their lived experiences. Purposeful sampling is essential to ensure that the participants chosen can provide rich, detailed accounts that contribute to a deep understanding of the phenomenon.

The diversity of the sample is also crucial. It is important to select participants who represent a range of perspectives within the phenomenon, not just a homogenous group. This might involve considering factors such as age, gender, socio-economic status, or other relevant characteristics that could influence their experiences. While the sample size in phenomenological studies is often small to allow for in-depth analysis, it is vital to ensure that the sample is varied enough to uncover a comprehensive understanding of the phenomenon.

Lastly, ethical considerations are paramount. Participants must give informed consent, understanding the nature of the study and their role in it. The researcher must also be prepared to handle sensitive information with confidentiality and respect, ensuring the participants’ well-being is prioritized throughout the study. Adapting the sample selection criteria as the study progresses is also important, as initial interviews may reveal additional nuances that require the inclusion of further varied perspectives to fully grasp the phenomenon.”

14. Which software tools do you prefer for qualitative data analysis, and for what reasons?

The choice of software tools for qualitative data analysis reflects a researcher’s approach to data synthesis and interpretation. It also indicates their proficiency with technology and their ability to leverage sophisticated features to deepen insights.

When responding, it’s essential to discuss specific features of the software tools you prefer, such as coding capabilities, ease of data management, collaborative features, or the ability to handle large datasets. Explain how these features have enhanced your research outcomes in the past. For example, you might highlight the use of NVivo for its robust coding structure that helped you organize complex data efficiently or Atlas.ti for its intuitive interface and visualization tools that made it easier to detect emerging patterns. Your response should demonstrate your analytical thought process and your commitment to rigorous qualitative analysis.

Example: “ In my qualitative research endeavors, I have found NVivo to be an invaluable tool, primarily due to its advanced coding capabilities and its ability to manage large and complex datasets effectively. The node structure in NVivo facilitates a hierarchical organization of themes, which streamlines the coding process and enhances the reliability of the data analysis. This feature was particularly beneficial in a recent project where the depth and volume of textual data required a robust system to ensure consistency and comprehensiveness in theme development.

Another tool I frequently utilize is Atlas.ti, which stands out for its user-friendly interface and powerful visualization tools. These features are instrumental in identifying and illustrating relationships between themes, thereby enriching the interpretive depth of the analysis. The network views in Atlas.ti have enabled me to construct clear visual representations of the data interconnections, which not only supported my analytical narrative but also facilitated stakeholder understanding and engagement. The combination of these tools, leveraging their respective strengths, has consistently augmented the quality and impact of my qualitative research outcomes.”

15. How do you handle discrepancies between participants’ words and actions in ethnographic research?

Ethnographic research hinges on the researcher’s ability to interpret both verbal and non-verbal data to draw meaningful conclusions. This question allows the interviewer to assess a candidate’s methodological rigor and analytical skills.

When responding, it’s essential to emphasize your systematic approach to reconciling such discrepancies. Discuss the importance of context, the use of triangulation to corroborate findings through multiple data sources, and the strategies you employ to interpret and integrate conflicting information. Highlight your commitment to ethical research practices, the ways you ensure participant understanding and consent, and your experience with reflective practice to mitigate researcher bias. Showcasing your ability to remain flexible and responsive to the data, while maintaining a clear analytical framework, will demonstrate your proficiency in qualitative research.

Example: “ In ethnographic research, discrepancies between participants’ words and actions are not only common but also a valuable source of insight. When I encounter such discrepancies, I first consider the context in which they occur, as it often holds the key to understanding the divergence. Cultural norms, social pressures, or even the presence of the researcher can influence participants’ behaviors and self-reporting. I employ triangulation, utilizing multiple data sources such as interviews, observations, and relevant documents to construct a more comprehensive understanding of the phenomena at hand.

I also engage in reflective practice to examine my own biases and assumptions that might influence data interpretation. By maintaining a stance of cultural humility and being open to the participants’ perspectives, I can better understand the reasons behind their actions and words. When integrating conflicting information, I look for patterns and themes that can reconcile the differences, often finding that they reveal deeper complexities within the social context being studied. Ethical research practices, including ensuring participant understanding and consent, are paramount throughout this process, as they help maintain the integrity of both the data and the relationships with participants.”

16. What role does reflexivity play in your research process?

Reflexivity is an ongoing self-assessment that ensures research findings are not merely a reflection of the researcher’s preconceptions, thereby increasing the credibility and authenticity of the work.

When responding, illustrate your understanding of reflexivity with examples from past research experiences. Discuss how you have actively engaged in reflexivity by questioning your assumptions, how this shaped your research design, and the methods you employed to ensure that your findings were informed by the data rather than your personal beliefs. Demonstrate your commitment to ethical research practice by highlighting how you’ve maintained an open dialogue with your participants and peers to challenge and refine your interpretations.

Example: “ Reflexivity is a cornerstone of my qualitative research methodology, as it allows me to critically examine my own influence on the research process and outcomes. In practice, I maintain a reflexive journal throughout the research process, documenting my preconceptions, emotional responses, and decision-making rationales. This ongoing self-analysis ensures that I remain aware of my potential biases and the ways in which my background and perspectives might shape the data collection and analysis.

For instance, in a recent ethnographic study, I recognized my own cultural assumptions could affect participant interactions. To mitigate this, I incorporated member checking and peer debriefing as integral parts of the research cycle. By actively seeking feedback on my interpretations from both participants and fellow researchers, I was able to challenge my initial readings of the data and uncover deeper, more nuanced insights. This reflexive approach not only enriched the research findings but also upheld the integrity and credibility of the study, fostering a more authentic and ethical representation of the participants’ experiences.”

17. Describe a complex qualitative dataset you’ve managed and how you navigated its challenges.

Managing a complex qualitative dataset requires meticulous organization, a strong grasp of research methods, and the ability to discern patterns and themes amidst a sea of words and narratives. This question evaluates the candidate’s analytical and critical thinking skills.

When responding to this question, you should focus on a specific project that exemplifies your experience with complex qualitative data. Outline the scope of the data, the methods you used for organization and analysis, and the challenges you encountered—such as data coding, thematic saturation, or ensuring reliability and validity. Discuss the strategies you implemented to address these challenges, such as iterative coding, member checking, or triangulation. By providing concrete examples, you demonstrate not only your technical ability but also your methodological rigor and dedication to producing insightful, credible research findings.

Example: “ In a recent project, I managed a complex qualitative dataset that comprised over 50 in-depth interviews, several focus groups, and field notes from participant observation. The data was rich with nuanced perspectives on community health practices, but it presented challenges in ensuring thematic saturation and maintaining a systematic approach to coding across multiple researchers.

To navigate these challenges, I employed a rigorous iterative coding process, utilizing NVivo software to facilitate organization and analysis. Initially, I conducted a round of open coding to identify preliminary themes, followed by axial coding to explore the relationships between these themes. As the dataset was extensive, I also implemented a strategy of constant comparison to refine and merge codes, ensuring thematic saturation was achieved. To enhance the reliability and validity of our findings, I organized regular peer debriefing sessions, where the research team could discuss and resolve discrepancies in coding and interpretation. Additionally, I conducted member checks with a subset of participants, which not only enriched the data but also validated our thematic constructs. This meticulous approach enabled us to develop a robust thematic framework that accurately reflected the complexity of the community’s health practices and informed subsequent policy recommendations.”
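The codebook housekeeping described in this answer is normally done inside a package such as NVivo rather than by hand, but the underlying constant-comparison step can be illustrated with a short, purely hypothetical sketch in plain Python (the interview IDs, excerpts, codes and keyword rule below are invented for illustration and are not an NVivo feature):

```python
from collections import defaultdict

# Hypothetical open codes assigned to interview excerpts during a first coding pass.
coded_excerpts = [
    ("INT01", "We walk to the clinic together every week", "peer support"),
    ("INT02", "My neighbour reminds me to take my tablets", "support from peers"),
    ("INT07", "The clinic is too far away when it rains", "access barriers"),
]

def merge_key(code: str) -> str:
    # Naive stand-in for constant comparison: codes that share a keyword are
    # grouped as candidates to merge into a single refined code.
    for keyword in ("support", "barrier"):
        if keyword in code.lower():
            return keyword
    return code.lower()

codebook = defaultdict(list)
for interview_id, excerpt, code in coded_excerpts:
    codebook[merge_key(code)].append((interview_id, excerpt))

for refined_code, excerpts in codebook.items():
    print(refined_code, "->", len(excerpts), "excerpt(s)")
# support -> 2 excerpt(s)
# barrier -> 1 excerpt(s)
```

In a real project the merging decision is an analytic judgement made by the team, not a string match; the sketch only shows why keeping excerpts attached to their codes makes that comparison traceable.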

18. How do you integrate quantitative data to enhance the richness of a primarily qualitative study?

Integrating quantitative data with qualitative research can add a layer of objectivity, enhance validity, and offer a scalable dimension to the findings. This mixed-methods approach can help in identifying outliers or anomalies in qualitative data.

When responding to this question, a candidate should articulate their understanding of both qualitative and quantitative research methodologies. They should discuss specific techniques such as triangulation, where quantitative data serves as a corroborative tool for qualitative findings, or embedded analysis, where quantitative data provides a backdrop for deep qualitative exploration. The response should also include practical examples of past research scenarios where the candidate successfully merged both data types to strengthen their study, highlighting their ability to create a symbiotic relationship between numbers and narratives for richer, more robust research outcomes.

Example: “ Integrating quantitative data into a qualitative study can significantly enhance the depth and credibility of the research findings. In my experience, I employ triangulation to ensure that themes emerging from qualitative data are not only rich in context but also empirically grounded. For instance, in a study exploring patient satisfaction, while qualitative interviews might reveal nuanced patient experiences, quantitative satisfaction scores can be used to validate and quantify the prevalence of these experiences across a larger population.

Furthermore, I often use quantitative data as a formative tool to guide the qualitative inquiry. By initially analyzing patterns in quantitative data, I can identify areas that require a deeper understanding through qualitative methods. For example, if a survey indicates a trend in consumer behavior, follow-up interviews or focus groups can explore the motivations behind that trend. This embedded analysis approach ensures that qualitative findings are not only contextually informed but also quantitatively relevant, leading to a more comprehensive understanding of the research question.”
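As a rough illustration of the triangulation step described in this answer, the sketch below assumes pandas is available and uses invented clinic-level figures; it simply places a quantitative satisfaction score next to the proportion of interviews raising a hypothetical 'waiting time' theme, so the researcher can see whether the two data sources point the same way:

```python
import pandas as pd

# Invented quantitative survey results and qualitative theme tallies per clinic.
survey = pd.DataFrame({
    "clinic": ["A", "B", "C"],
    "mean_satisfaction": [4.6, 3.1, 4.2],  # 1-5 Likert scale
})
themes = pd.DataFrame({
    "clinic": ["A", "B", "C"],
    "interviews_raising_wait_times": [1, 9, 3],
    "interviews_total": [10, 12, 11],
})

# Triangulation check: do clinics with lower satisfaction scores also have a
# higher proportion of interviews raising waiting times?
merged = survey.merge(themes, on="clinic")
merged["theme_rate"] = merged["interviews_raising_wait_times"] / merged["interviews_total"]

print(merged[["clinic", "mean_satisfaction", "theme_rate"]])
print("Correlation:", round(merged["mean_satisfaction"].corr(merged["theme_rate"]), 2))
```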

19. What is your rationale for choosing narrative inquiry over other qualitative methods in storytelling contexts?

Narrative inquiry delves into individual stories to find broader truths and patterns. This method captures the richness of how people perceive and make sense of their lives, revealing the interplay of various factors in shaping narratives.

When responding, articulate your understanding of narrative inquiry, emphasizing its strengths in capturing lived experiences and its ability to provide a detailed, insider’s view of a phenomenon. Highlight your knowledge of how narrative inquiry can uncover the nuances of storytelling, such as the role of language, emotions, and context, which are essential for a deep understanding of the subject matter. Demonstrate your ability to choose an appropriate research method based on the research question, objectives, and the nature of the data you aim to collect.

Example: “ Narrative inquiry is a powerful qualitative method that aligns exceptionally well with the exploration of storytelling contexts due to its focus on the richness of personal experience and the construction of meaning. By delving into individuals’ stories, narrative inquiry allows researchers to capture the complexities of lived experiences, which are often embedded with emotions, cultural values, and temporal elements that other methods may not fully grasp. The longitudinal nature of narrative inquiry, where stories can be collected and analyzed over time, also offers a dynamic perspective on how narratives evolve, intersect, and influence the storyteller’s identity and worldview.

In choosing narrative inquiry, one is committing to a methodological approach that honors the subjectivity and co-construction of knowledge between the researcher and participants. This approach is particularly adept at uncovering the layers of language use, symbolism, and the interplay of narratives with broader societal discourses. It is this depth and nuance that makes narrative inquiry the method of choice when the research aim is not just to catalog events but to understand the profound implications of storytelling on individual and collective levels. The method’s flexibility in accommodating different narrative forms – be it oral, written, or visual – further underscores its suitability for research that seeks to holistically capture the essence of storytelling within its natural context.”

20. How do you address potential power dynamics that may influence a participant’s responses during interviews?

Recognizing and mitigating the influence of power dynamics is essential to maintain the integrity of the data collected in qualitative research, ensuring that findings reflect the participants’ genuine perspectives.

When responding to this question, one should emphasize their awareness of such dynamics and articulate strategies to minimize their impact. This could include techniques like establishing rapport, using neutral language, ensuring confidentiality, and employing reflexivity—being mindful of one’s own influence on the conversation. Furthermore, demonstrating an understanding of how to create a safe space for open dialogue and acknowledging the importance of participant empowerment can convey a commitment to ethical and effective qualitative research practices.

Example: “ In addressing potential power dynamics, my approach begins with the conscious effort to create an environment of trust and safety. I employ active listening and empathetic engagement to establish rapport, which helps to level the conversational field. I am meticulous in using neutral, non-leading language to avoid inadvertently imposing my own assumptions or perspectives on participants. This is complemented by an emphasis on the voluntary nature of participation and the assurance of confidentiality, which together foster a space where participants feel secure in sharing their authentic experiences.

Reflexivity is a cornerstone of my practice; I continuously self-assess and acknowledge my positionality and its potential influence on the research process. By engaging in this critical self-reflection, I am better equipped to recognize and mitigate any power imbalances that may arise. Moreover, I strive to empower participants by validating their narratives and ensuring that the interview process is not just extractive but also offers them a platform to be heard and to contribute meaningfully to the research. This balanced approach not only enriches the data quality but also adheres to the ethical standards that underpin responsible qualitative research.”


Interviews and focus groups in qualitative research: an update for the digital age

P. Gill and J. Baillie

British Dental Journal 225, 668–672 (2018). https://doi.org/10.1038/sj.bdj.2018.815


Highlights that qualitative research is used increasingly in dentistry. Interviews and focus groups remain the most common qualitative methods of data collection.

Suggests the advent of digital technologies has transformed how qualitative research can now be undertaken.

Suggests interviews and focus groups can offer significant, meaningful insight into participants' experiences, beliefs and perspectives, which can help to inform developments in dental practice.

Qualitative research is used increasingly in dentistry, due to its potential to provide meaningful, in-depth insights into participants' experiences, perspectives, beliefs and behaviours. These insights can subsequently help to inform developments in dental practice and further related research. The most common methods of data collection used in qualitative research are interviews and focus groups. While these are primarily conducted face-to-face, the ongoing evolution of digital technologies, such as video chat and online forums, has further transformed these methods of data collection. This paper therefore discusses interviews and focus groups in detail, outlines how they can be used in practice, how digital technologies can further inform the data collection process, and what these methods can offer dentistry.


Introduction

Traditionally, research in dentistry has primarily been quantitative in nature. 1 However, in recent years, there has been a growing interest in qualitative research within the profession, due to its potential to further inform developments in practice, policy, education and training. Consequently, in 2008, the British Dental Journal (BDJ) published a four paper qualitative research series, 2 , 3 , 4 , 5 to help increase awareness and understanding of this particular methodological approach.

Since the papers were originally published, two scoping reviews have demonstrated the ongoing proliferation in the use of qualitative research within the field of oral healthcare. 1 , 6 To date, the original four paper series continue to be well cited and two of the main papers remain widely accessed among the BDJ readership. 2 , 3 The potential value of well-conducted qualitative research to evidence-based practice is now also widely recognised by service providers, policy makers, funding bodies and those who commission, support and use healthcare research.

Besides increasing standalone use, qualitative methods are now also routinely incorporated into larger mixed method study designs, such as clinical trials, as they can offer additional, meaningful insights into complex problems that simply could not be provided by quantitative methods alone. Qualitative methods can also be used to further facilitate in-depth understanding of important aspects of clinical trial processes, such as recruitment. For example, Ellis et al . investigated why edentulous older patients, dissatisfied with conventional dentures, decline implant treatment, despite its established efficacy, and frequently refuse to participate in related randomised clinical trials, even when financial constraints are removed. 7 Through the use of focus groups in Canada and the UK, the authors found that fears of pain and potential complications, along with perceived embarrassment, exacerbated by age, are common reasons why older patients typically refuse dental implants. 7

The last decade has also seen further developments in qualitative research, due to the ongoing evolution of digital technologies. These developments have transformed how researchers can access and share information, communicate and collaborate, recruit and engage participants, collect and analyse data and disseminate and translate research findings. 8 Where appropriate, such technologies are therefore capable of extending and enhancing how qualitative research is undertaken. 9 For example, it is now possible to collect qualitative data via instant messaging, email or online/video chat, using appropriate online platforms.

These innovative approaches to research are therefore cost-effective, convenient, reduce geographical constraints and are often useful for accessing 'hard to reach' participants (for example, those who are immobile or socially isolated). 8 , 9 However, digital technologies are still relatively new and constantly evolving and therefore present a variety of pragmatic and methodological challenges. Furthermore, given their very nature, their use in many qualitative studies and/or with certain participant groups may be inappropriate and should therefore always be carefully considered. While it is beyond the scope of this paper to provide a detailed explication regarding the use of digital technologies in qualitative research, insight is provided into how such technologies can be used to facilitate the data collection process in interviews and focus groups.

In light of such developments, it is perhaps therefore timely to update the main paper 3 of the original BDJ series. As with the previous publications, this paper has been purposely written in an accessible style, to enhance readability, particularly for those who are new to qualitative research. While the focus remains on the most common qualitative methods of data collection – interviews and focus groups – appropriate revisions have been made to provide a novel perspective, and should therefore be helpful to those who would like to know more about qualitative research. This paper specifically focuses on undertaking qualitative research with adult participants only.

Overview of qualitative research

Qualitative research is an approach that focuses on people and their experiences, behaviours and opinions. 10 , 11 The qualitative researcher seeks to answer questions of 'how' and 'why', providing detailed insight and understanding, 11 which quantitative methods cannot reach. 12 Within qualitative research, there are distinct methodologies influencing how the researcher approaches the research question, data collection and data analysis. 13 For example, phenomenological studies focus on the lived experience of individuals, explored through their description of the phenomenon. Ethnographic studies explore the culture of a group and typically involve the use of multiple methods to uncover the issues. 14

While methodology is the 'thinking tool', the methods are the 'doing tools'; 13 the ways in which data are collected and analysed. There are multiple qualitative data collection methods, including interviews, focus groups, observations, documentary analysis, participant diaries, photography and videography. Two of the most commonly used qualitative methods are interviews and focus groups, which are explored in this article. The data generated through these methods can be analysed in one of many ways, according to the methodological approach chosen. A common approach is thematic data analysis, involving the identification of themes and subthemes across the data set. Further information on approaches to qualitative data analysis has been discussed elsewhere. 1

Qualitative research is an evolving and adaptable approach, used by different disciplines for different purposes. Traditionally, qualitative data, specifically interviews, focus groups and observations, have been collected face-to-face with participants. In more recent years, digital technologies have contributed to the ongoing evolution of qualitative research. Digital technologies offer researchers different ways of recruiting participants and collecting data, and offer participants opportunities to be involved in research that is not necessarily face-to-face.

Research interviews are a fundamental qualitative research method 15 and are utilised across methodological approaches. Interviews enable the researcher to learn in depth about the perspectives, experiences, beliefs and motivations of the participant. 3 , 16 Examples include exploring patients' perspectives of fear/anxiety triggers in dental treatment, 17 patients' experiences of oral health and diabetes, 18 and dental students' motivations for their choice of career. 19

Interviews may be structured, semi-structured or unstructured, 3 according to the purpose of the study, with less structured interviews facilitating a more in-depth and flexible interviewing approach. 20 Structured interviews are similar to verbal questionnaires and are used if the researcher requires clarification on a topic; however, they produce less in-depth data about a participant's experience. 3 Unstructured interviews may be used when little is known about a topic and involve the researcher asking an opening question; 3 the participant then leads the discussion. 20 Semi-structured interviews are commonly used in healthcare research, enabling the researcher to ask predetermined questions, 20 while ensuring the participant discusses issues they feel are important.

Interviews can be undertaken face-to-face or using digital methods when the researcher and participant are in different locations. Audio-recording the interview, with the consent of the participant, is essential for all interviews regardless of the medium, as it enables accurate transcription: the process of turning the audio file into a word-for-word transcript. This transcript is the data, which the researcher then analyses according to the chosen approach.
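For researchers who handle transcripts programmatically, one possible (and entirely optional) way to hold the word-for-word transcript as analysable data is sketched below; the speaker turns, pseudonym and code label are hypothetical, and the structure is just one reasonable choice, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str                               # "Interviewer" or a participant pseudonym
    text: str                                  # verbatim, word-for-word content
    codes: list = field(default_factory=list)  # analytic codes attached later

# A hypothetical fragment of an anonymised transcript.
transcript = [
    Turn("Interviewer", "Can you tell me about your experience of cleaning your child's teeth?"),
    Turn("P01", "We started when she was about one, but most nights it is a battle."),
]

# Coding is applied to the transcript, not to the audio.
transcript[1].codes.append("brushing as a nightly struggle")
for turn in transcript:
    print(f"{turn.speaker}: {turn.text} {turn.codes}")
```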

Types of interview

Qualitative studies often utilise one-to-one, face-to-face interviews with research participants. This involves arranging a mutually convenient time and place to meet the participant, signing a consent form and audio-recording the interview. However, digital technologies have expanded the potential for interviews in research, enabling individuals to participate in qualitative research regardless of location.

Telephone interviews can be a useful alternative to face-to-face interviews and are commonly used in qualitative research. They enable participants from different geographical areas to participate and may be less onerous for participants than meeting a researcher in person. 15 A qualitative study explored patients' perspectives of dental implants and utilised telephone interviews due to the quality of the data that could be yielded. 21 The researcher needs to consider how they will audio record the interview, which can be facilitated by purchasing a recorder that connects directly to the telephone. One potential disadvantage of telephone interviews is the inability of the interviewer and participant to see each other. This can be resolved by using software for online audio and video calls – such as Skype – to conduct interviews with participants in qualitative studies. Advantages of this approach include being able to see the participant if video calls are used, enabling observation of non-verbal communication, and the software can be free to use. However, participants are required to have a device and internet connection, as well as being computer literate, potentially limiting who can participate in the study. One qualitative study explored the role of dental hygienists in reducing oral health disparities in Canada. 22 The researcher conducted interviews using Skype, which enabled dental hygienists from across Canada to be interviewed within the research budget, accommodating the participants' schedules. 22

A less commonly used approach to qualitative interviews is the use of social virtual worlds. A qualitative study accessed a social virtual world – Second Life – to explore the health literacy skills of individuals who use social virtual worlds to access health information. 23 The researcher created an avatar and interview room, and undertook interviews with participants using voice and text methods. 23 This approach to recruitment and data collection enables individuals from diverse geographical locations to participate, while remaining anonymous if they wish. Furthermore, for interviews conducted using text methods, transcription of the interview is not required as the researcher can save the written conversation with the participant, with the participant's consent. However, the researcher and participant need to be familiar with how the social virtual world works to engage in an interview this way.

Conducting an interview

Ensuring informed consent before any interview is a fundamental aspect of the research process. Participants in research must be afforded autonomy and respect; consent should be informed and voluntary. 24 Individuals should have the opportunity to read an information sheet about the study, ask questions, understand how their data will be stored and used, and know that they are free to withdraw at any point without reprisal. The qualitative researcher should take written consent before undertaking the interview. In a face-to-face interview, this is straightforward: the researcher and participant both sign copies of the consent form, keeping one each. However, this approach is less straightforward when the researcher and participant do not meet in person. A recent protocol paper outlined an approach for taking consent for telephone interviews, which involved: audio recording the participant agreeing to each point on the consent form; the researcher signing the consent form and keeping a copy; and posting a copy to the participant. 25 This process could be replicated in other interview studies using digital methods.

There are advantages and disadvantages of using face-to-face and digital methods for research interviews. Ultimately, for both approaches, the quality of the interview is determined by the researcher. 16 Appropriate training and preparation are thus required. Healthcare professionals can use their interpersonal communication skills when undertaking a research interview, particularly questioning, listening and conversing. 3 However, the purpose of an interview is to gain information about the study topic, 26 rather than to offer help and advice. 3 The researcher therefore needs to listen attentively to participants, enabling them to describe their experience without interruption. 3 The use of active listening skills also helps to facilitate the interview. 14 Spradley outlined elements and strategies for research interviews, 27 which are a useful guide for qualitative researchers:

Greeting and explaining the project/interview

Asking descriptive (broad), structural (explore response to descriptive) and contrast (difference between) questions

Asymmetry between the researcher and participant talking

Expressing interest and cultural ignorance

Repeating, restating and incorporating the participant's words when asking questions

Creating hypothetical situations

Asking friendly questions

Knowing when to leave.

For semi-structured interviews, a topic guide (also called an interview schedule) is used to guide the content of the interview – an example of a topic guide is outlined in Box 1. The topic guide, usually based on the research questions, existing literature and, for healthcare professionals, their clinical experience, is developed by the research team. The topic guide should include open-ended questions that elicit in-depth information, and offer participants the opportunity to talk about issues important to them. This is vital in qualitative research where the researcher is interested in exploring the experiences and perspectives of participants. It can be useful for qualitative researchers to pilot the topic guide with the first participants, 10 to ensure the questions are relevant and understandable, and to amend the questions if required.

Regardless of the medium of interview, the researcher must consider the setting of the interview. For face-to-face interviews, this could be in the participant's home, in an office or another mutually convenient location. A quiet location is preferable to promote confidentiality, enable the researcher and participant to concentrate on the conversation, and to facilitate accurate audio-recording of the interview. For interviews using digital methods the same principles apply: a quiet, private space where the researcher and participant feel comfortable and confident to participate in an interview.

Box 1: Example of a topic guide

Study focus: Parents' experiences of brushing their child's (aged 0–5) teeth

1. Can you tell me about your experience of cleaning your child's teeth?

How old was your child when you started cleaning their teeth?

Why did you start cleaning their teeth at that point?

How often do you brush their teeth?

What do you use to brush their teeth and why?

2. Could you explain how you find cleaning your child's teeth?

Do you find anything difficult?

What makes cleaning their teeth easier for you?

3. How has your experience of cleaning your child's teeth changed over time?

Has it become easier or harder?

Have you changed how often and how you clean their teeth? If so, why?

4. Could you describe how your child finds having their teeth cleaned?

What do they enjoy about having their teeth cleaned?

Is there anything they find upsetting about having their teeth cleaned?

5. Where do you look for information/advice about cleaning your child's teeth?

What did your health visitor tell you about cleaning your child's teeth? (If anything)

What has the dentist told you about caring for your child's teeth? (If visited)

Have any family members given you advice about how to clean your child's teeth? If so, what did they tell you? Did you follow their advice?

6. Is there anything else you would like to discuss about this?

Focus groups

A focus group is a moderated group discussion on a pre-defined topic, for research purposes. 28 , 29 While not aligned to a particular qualitative methodology (for example, grounded theory or phenomenology) as such, focus groups are used increasingly in healthcare research, as they are useful for exploring collective perspectives, attitudes, behaviours and experiences. Consequently, they can yield rich, in-depth data and illuminate agreement and inconsistencies 28 within and, where appropriate, between groups. Examples include public perceptions of dental implants and subsequent impact on help-seeking and decision making, 30 and general dental practitioners' views on patient safety in dentistry. 31

Focus groups can be used alone or in conjunction with other methods, such as interviews or observations, and can therefore help to confirm, extend or enrich understanding and provide alternative insights. 28 The social interaction between participants often results in lively discussion and can therefore facilitate the collection of rich, meaningful data. However, they are complex to organise and manage, due to the number of participants, and may also be inappropriate for exploring particularly sensitive issues that many participants may feel uncomfortable about discussing in a group environment.

Focus groups are primarily undertaken face-to-face but can now also be undertaken online, using appropriate technologies such as email, bulletin boards, online research communities, chat rooms, discussion forums, social media and video conferencing. 32 Using such technologies, data collection can also be synchronous (for example, online discussions in 'real time') or, unlike traditional face-to-face focus groups, asynchronous (for example, online/email discussions in 'non-real time'). While many of the fundamental principles of focus group research are the same, regardless of how they are conducted, a number of subtle nuances are associated with the online medium, 32 some of which are discussed further in the following sections.

Focus group considerations

Some key considerations associated with face-to-face focus groups are: how many participants are required; should participants within each group know each other (or not) and how many focus groups are needed within a single study? These issues are much debated and there is no definitive answer. However, the number of focus groups required will largely depend on the topic area, the depth and breadth of data needed, the desired level of participation required 29 and the necessity (or not) for data saturation.

The optimum group size is around six to eight participants (excluding researchers), but groups can work effectively with between three and 14 participants. 3 If the group is too small, it may limit discussion, but if it is too large, it may become disorganised and difficult to manage. It is, however, prudent to over-recruit for a focus group by approximately two to three participants, to allow for potential non-attenders. For many researchers, particularly novice researchers, group size may also be informed by pragmatic considerations, such as the type of study, resources available and moderator experience. 28 Similar size and mix considerations exist for online focus groups. Typically, synchronous online focus groups will have around three to eight participants but, as the discussion does not happen simultaneously, asynchronous groups may have as many as 10–30 participants. 33
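The over-recruitment advice above amounts to simple arithmetic; the sketch below uses hypothetical figures (four face-to-face groups aiming for seven participants each, with three extra invitees per group):

```python
def invitations_needed(groups: int, target_size: int, buffer: int = 3) -> int:
    """Total invitations when each group is over-recruited to allow for non-attenders."""
    return groups * (target_size + buffer)

print(invitations_needed(groups=4, target_size=7))  # 40 invitations in total
```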

The topic area and potential group interaction should guide group composition considerations. Pre-existing groups, where participants know each other (for example, work colleagues) may be easier to recruit, have shared experiences and may enjoy a familiarity, which facilitates discussion and/or the ability to challenge each other courteously. 3 However, if there is a potential power imbalance within the group or if existing group norms and hierarchies may adversely affect the ability of participants to speak freely, then 'stranger groups' (that is, where participants do not already know each other) may be more appropriate. 34 , 35

Focus group management

Face-to-face focus groups should normally be conducted by two researchers: a moderator and an observer. 28 The moderator facilitates group discussion, while the observer typically monitors group dynamics, behaviours, non-verbal cues, seating arrangements and speaking order, which is essential for transcription and analysis. The same principles of informed consent, as discussed in the interview section, also apply to focus groups, regardless of medium. However, the consent process for online discussions will probably be managed somewhat differently. For example, while an appropriate participant information leaflet (and consent form) would still be required, the process is likely to be managed electronically (for example, via email) and would need to specifically address issues relating to technology (for example, anonymity and use, storage and access to online data). 32

The venue in which a face-to-face focus group is conducted should be of a suitable size, private, quiet, free from distractions and in a collectively convenient location. It should also be conducted at a time appropriate for participants, 28 as this is likely to promote attendance. As with interviews, the same ethical considerations apply (as discussed earlier). However, online focus groups may present additional ethical challenges associated with issues such as informed consent, appropriate access and secure data storage. Further guidance can be found elsewhere. 8 , 32

Before the focus group commences, the researchers should establish rapport with participants, as this will help to put them at ease and result in a more meaningful discussion. Consequently, researchers should introduce themselves, provide further clarity about the study and how the process will work in practice and outline the 'ground rules'. Ground rules are designed to assist, not hinder, group discussion and typically include: 3 , 28 , 29

Discussions within the group are confidential to the group

Only one person can speak at a time

All participants should have sufficient opportunity to contribute

There should be no unnecessary interruptions while someone is speaking

Everyone can expect to be listened to and to have their views respected

Challenging contrary opinions is appropriate, but ridiculing is not.

Moderating a focus group requires considered management and good interpersonal skills to help guide the discussion and, where appropriate, keep it sufficiently focused. Moderators should therefore avoid participating, leading, expressing personal opinions or correcting participants' knowledge, 3 , 28 as this may bias the process. A relaxed, interested demeanour will also help participants to feel comfortable and promote candid discourse. Moderators should also prevent the discussion being dominated by any one person, ensure differences of opinion are discussed fairly and, if required, encourage reticent participants to contribute. 3 Asking open questions, reflecting on significant issues, inviting further debate, probing responses accordingly, and seeking further clarification, as and where appropriate, will help to obtain sufficient depth and insight into the topic area.

Moderating online focus groups requires comparable skills, particularly if the discussion is synchronous, as the discussion may be dominated by those who can type proficiently. 36 It is therefore important that sufficient time and respect are accorded to those who may not be able to type as quickly. Asynchronous discussions are usually less problematic in this respect, as interactions are less instant. However, moderating an asynchronous discussion presents additional challenges, particularly if participants are geographically dispersed, as they may be online at different times. Consequently, the moderator will not always be present and the discussion may therefore need to occur over several days, which can be difficult to manage and facilitate and invariably requires considerable flexibility. 32 It is also worth recognising that establishing rapport with participants via an online medium is often more challenging than face-to-face and may therefore require additional time, skills, effort and consideration.

As with research interviews, focus groups should be guided by an appropriate interview schedule, as discussed earlier in the paper. For example, the schedule will usually be informed by the review of the literature and study aims, and will merely provide a topic guide to help inform subsequent discussions. To provide a verbatim account of the discussion, focus groups must be recorded, using an audio-recorder with a good-quality multi-directional microphone. While videotaping is possible, some participants may find it obtrusive, 3 which may adversely affect group dynamics. The use (or not) of a video recorder should therefore be carefully considered.

At the end of the focus group, a few minutes should be spent rounding up and reflecting on the discussion. 28 Depending on the topic area, it is possible that some participants may have revealed deeply personal issues and may therefore require further help and support, such as a constructive debrief or possibly even referral on to a relevant third party. It is also possible that some participants may feel that the discussion did not adequately reflect their views and, consequently, may no longer wish to be associated with the study. 28 Such occurrences are likely to be uncommon, but should they arise, it is important to further discuss any concerns and, if appropriate, offer them the opportunity to withdraw (including any data relating to them) from the study. Immediately after the discussion, researchers should compile notes regarding thoughts and ideas about the focus group, which can assist with data analysis and, if appropriate, any further data collection.

Conclusion

Qualitative research is increasingly being utilised within dental research to explore the experiences, perspectives, motivations and beliefs of participants. The contributions of qualitative research to evidence-based practice are increasingly being recognised, both as standalone research and as part of larger mixed-method studies, including clinical trials. Interviews and focus groups remain commonly used data collection methods in qualitative research, and with the advent of digital technologies, their utilisation continues to evolve. However, digital methods of qualitative data collection present additional methodological, ethical and practical considerations, but also potentially offer considerable flexibility to participants and researchers. Consequently, regardless of format, qualitative methods have significant potential to inform important areas of dental practice, policy and further related research.

Gussy M, Dickson-Swift V, Adams J . A scoping review of qualitative research in peer-reviewed dental publications. Int J Dent Hygiene 2013; 11 : 174–179.


Burnard P, Gill P, Stewart K, Treasure E, Chadwick B . Analysing and presenting qualitative data. Br Dent J 2008; 204 : 429–432.

Gill P, Stewart K, Treasure E, Chadwick B . Methods of data collection in qualitative research: interviews and focus groups. Br Dent J 2008; 204 : 291–295.

Gill P, Stewart K, Treasure E, Chadwick B . Conducting qualitative interviews with school children in dental research. Br Dent J 2008; 204 : 371–374.

Stewart K, Gill P, Chadwick B, Treasure E . Qualitative research in dentistry. Br Dent J 2008; 204 : 235–239.

Masood M, Thaliath E, Bower E, Newton J . An appraisal of the quality of published qualitative dental research. Community Dent Oral Epidemiol 2011; 39 : 193–203.

Ellis J, Levine A, Bedos C et al. Refusal of implant supported mandibular overdentures by elderly patients. Gerodontology 2011; 28 : 62–68.

Macfarlane S, Bucknall T . Digital Technologies in Research. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . 7th edition. pp. 71–86. Oxford: Wiley Blackwell; 2015.


Lee R, Fielding N, Blank G . Online Research Methods in the Social Sciences: An Editorial Introduction. In Fielding N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 3–16. London: Sage Publications; 2016.

Creswell J . Qualitative inquiry and research design: Choosing among five designs . Thousand Oaks, CA: Sage, 1998.

Guest G, Namey E, Mitchell M . Qualitative research: Defining and designing In Guest G, Namey E, Mitchell M (editors) Collecting Qualitative Data: A Field Manual For Applied Research . pp. 1–40. London: Sage Publications, 2013.


Pope C, Mays N . Qualitative research: Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311 : 42–45.

Giddings L, Grant B . A Trojan Horse for positivism? A critique of mixed methods research. Adv Nurs Sci 2007; 30 : 52–60.

Hammersley M, Atkinson P . Ethnography: Principles in Practice . London: Routledge, 1995.

Oltmann S . Qualitative interviews: A methodological discussion of the interviewer and respondent contexts Forum Qualitative Sozialforschung/Forum: Qualitative Social Research. 2016; 17 : Art. 15.

Patton M . Qualitative Research and Evaluation Methods . Thousand Oaks, CA: Sage, 2002.

Wang M, Vinall-Collier K, Csikar J, Douglas G . A qualitative study of patients' views of techniques to reduce dental anxiety. J Dent 2017; 66 : 45–51.

Lindenmeyer A, Bowyer V, Roscoe J, Dale J, Sutcliffe P . Oral health awareness and care preferences in patients with diabetes: a qualitative study. Fam Pract 2013; 30 : 113–118.

Gallagher J, Clarke W, Wilson N . Understanding the motivation: a qualitative study of dental students' choice of professional career. Eur J Dent Educ 2008; 12 : 89–98.

Tod A . Interviewing. In Gerrish K, Lacey A (editors) The Research Process in Nursing . Oxford: Blackwell Publishing, 2006.

Grey E, Harcourt D, O'Sullivan D, Buchanan H, Kipatrick N . A qualitative study of patients' motivations and expectations for dental implants. Br Dent J 2013; 214 : 10.1038/sj.bdj.2012.1178.

Farmer J, Peressini S, Lawrence H . Exploring the role of the dental hygienist in reducing oral health disparities in Canada: A qualitative study. Int J Dent Hygiene 2017; 10.1111/idh.12276.

McElhinney E, Cheater F, Kidd L . Undertaking qualitative health research in social virtual worlds. J Adv Nurs 2013; 70 : 1267–1275.

Health Research Authority. UK Policy Framework for Health and Social Care Research. Available at https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/ (accessed September 2017).

Baillie J, Gill P, Courtenay P . Knowledge, understanding and experiences of peritonitis among patients, and their families, undertaking peritoneal dialysis: A mixed methods study protocol. J Adv Nurs 2017; 10.1111/jan.13400.

Kvale S . Interviews . Thousand Oaks (CA): Sage, 1996.

Spradley J . The Ethnographic Interview . New York: Holt, Rinehart and Winston, 1979.

Goodman C, Evans C . Focus Groups. In Gerrish K, Lathlean J (editors) The Research Process in Nursing . pp. 401–412. Oxford: Wiley Blackwell, 2015.

Shaha M, Wenzell J, Hill E . Planning and conducting focus group research with nurses. Nurse Res 2011; 18 : 77–87.

Wang G, Gao X, Edward C . Public perception of dental implants: a qualitative study. J Dent 2015; 43 : 798–805.

Bailey E . Contemporary views of dental practitioners' on patient safety. Br Dent J 2015; 219 : 535–540.

Abrams K, Gaiser T . Online Focus Groups. In Field N, Lee R, Blank G (editors) The Sage Handbook of Online Research Methods . pp. 435–450. London: Sage Publications, 2016.

Poynter R . The Handbook of Online and Social Media Research . West Sussex: John Wiley & Sons, 2010.

Kevern J, Webb C . Focus groups as a tool for critical social research in nurse education. Nurse Educ Today 2001; 21 : 323–333.

Kitzinger J, Barbour R . Introduction: The Challenge and Promise of Focus Groups. In Barbour R S K J (editor) Developing Focus Group Research . pp. 1–20. London: Sage Publications, 1999.

Krueger R, Casey M . Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks, California: SAGE; 2009.


Chapter 13: Interviews

Danielle Berkovic

Learning outcomes

Upon completion of this chapter, you should be able to:

  • Understand when to use interviews in qualitative research.
  • Develop interview questions for an interview guide.
  • Understand how to conduct an interview.

What are interviews?

Interviewing is the most commonly used data collection technique in qualitative research. 1 The purpose of an interview is to explore the experiences, understandings, opinions and motivations of research participants. 2 Interviews are conducted one-on-one between the researcher and the participant. Interviews are most appropriate when seeking to understand a participant’s subjective view of an experience and are also considered suitable for the exploration of sensitive topics.

What are the different types of interviews?

There are four main types of interviews:

  • Key stakeholder: A key stakeholder interview aims to explore one issue in detail with a person of interest or importance concerning the research topic. 3 Key stakeholder interviews seek the views of experts on some cultural, political or health aspects of the community, beyond their personal beliefs or actions. An example of a key stakeholder is the Chief Health Officer of Victoria (Australia’s second-most populous state) who oversaw the world’s longest lockdowns in response to the COVID-19 pandemic.
  • Dyad: A dyad interview aims to explore one issue in detail with a dyad (two people). This form of interviewing is used when one participant of the dyad may need some support or is not wholly able to articulate themselves (e.g. people with cognitive impairment, or children). Each participant’s independence is acknowledged, but the interview is analysed as a single unit. 4
  • Narrative: A narrative interview helps individuals tell their stories, and prioritises their own perspectives and experiences using the language that they prefer. 5 This type of interview has been widely used in social research but is gaining prominence in health research to better understand person-centred care, for example, negotiating exercise and food abstinence whilst living with Type 2 diabetes. 6,7
  • Life history: A life history interview allows the researcher to explore a person’s individual and subjective experiences within the historical context of their time. 8 Life history interviews challenge the researcher to understand how people’s current attitudes, behaviours and choices are influenced by previous experiences or trauma. Life history interviews have been conducted with Holocaust survivors 9 and youth who have been forcibly recruited to war. 10

Table 13.4 provides a summary of four studies, each adopting one of these types of interviews.

Interviewing techniques

There are two main interview techniques:

  • Semi-structured: Semi-structured interviewing aims to explore a few issues in moderate detail, to expand the researcher’s knowledge at some level. 11 Semi-structured interviews give the researcher the advantage of remaining reasonably objective while enabling participants to share their perspectives and opinions. The researcher should create an interview guide with targeted open questions to direct the interview. As examples, semi-structured interviews have been used to extend knowledge of why women might gain excess weight during pregnancy, 12 and to update guidelines for statin uptake. 13
  • In-depth: In-depth interviewing aims to explore a person’s subjective experiences and feelings about a particular topic. 14 In-depth interviews are often used to explore emotive (e.g. end-of-life care) 15 and complex (e.g. adolescent pregnancy) topics. 16 The researcher should create an interview guide with selected open questions to ask of the participant, but the participant should guide the direction of the interview more than in a semi-structured setting. In-depth interviews value participants’ lived experiences and are frequently used in phenomenology studies (as described in Chapter 6) .

When to use the different types of interviews

The type of interview a researcher uses should be determined by the study design, the research aims and objectives, and participant demographics. For example, if conducting a descriptive study, semi-structured interviews may be the best method of data collection. As explained in Chapter 5 , descriptive studies seek to describe phenomena, rather than to explain or interpret the data. A semi-structured interview, which seeks to expand upon some level of existing knowledge, will likely best facilitate this.

Similarly, if conducting a phenomenological study, in-depth interviews may be the best method of data collection. As described in Chapter 6 , the key concept of phenomenology is the individual. The emphasis is on the lived experience of that individual and the person’s sense-making of those experiences. Therefore, an in-depth interview is likely best placed to elicit that rich data.

While some interview types are better suited to certain study designs, there are no restrictions on the type of interview that may be used. For example, semi-structured interviews provide an excellent accompaniment to trial participation (see Chapter 11 about mixed methods), and key stakeholder interviews, as part of an action research study, can be used to define priorities, barriers and enablers to implementation.

How do I write my interview questions?

An interview aims to explore the experiences, understandings, opinions and motivations of research participants. The general rule is that the interviewee should speak for 80 per cent of the interview, and the interviewer should speak for only about 20 per cent, asking questions and clarifying responses. This percentage may differ depending on the interview type; for example, a semi-structured interview involves the researcher asking more questions than in an in-depth interview. Still, to facilitate free-flowing responses, it is important to use open-ended language to encourage participants to be expansive in their responses. Examples of open-ended terms include questions that start with ‘who’, ‘how’ and ‘where’.

The researcher should avoid closed-ended questions that can be answered with yes or no, and limit conversation. For example, asking a participant ‘Did you have this experience?’ can elicit a simple ‘yes’, whereas asking them to ‘Describe your experience’, will likely encourage a narrative response. Table 13.1 provides examples of terminology to include and avoid in developing interview questions.

Table 13.1. Interview question formats to use and avoid

Use:
  • Tell me about…
  • What happened when…
  • Why is this important?
  • How did you feel when…
  • How do you…
  • What are the…
  • What does…

Avoid:
  • Do you think that…
  • Will you do this…
  • Did you believe that…
  • Were there issues from your perspective…
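If a draft guide is stored electronically, the ‘avoid’ stems in Table 13.1 can even be screened for mechanically before piloting; the short sketch below is a hypothetical illustration in Python, using only the closed-ended stems listed in the table and an invented draft question list:

```python
# Question stems to avoid, taken from Table 13.1: they invite yes/no answers.
CLOSED_STEMS = ("do you", "will you", "did you", "were there")

def flag_closed_questions(draft_questions):
    """Return draft questions that open with a closed-ended stem and may need rewording."""
    return [q for q in draft_questions if q.lower().startswith(CLOSED_STEMS)]

draft = [
    "Tell me about your experience of cleaning your child's teeth.",
    "Do you think that brushing is important?",   # closed: invites a simple 'yes'
    "How did you feel when the dentist gave you advice?",
]
print(flag_closed_questions(draft))  # ['Do you think that brushing is important?']
```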

How long should my interview be?

There is no rule about how long an interview should take. Different types of interviews will likely run for different periods of time, but this also depends on the research question/s and the type of participant. For example, given that a semi-structured interview is seeking to expand on some previous knowledge, the interview may need no longer than 30 minutes, or up to one hour. An in-depth interview seeks to explore a topic in a greater level of detail and therefore, at a minimum, would be expected to last an hour. A dyad interview may be as short as 15 minutes (e.g. if the dyad is a person with dementia and a family member or caregiver) or longer, depending on the pairing.

Designing your interview guide

To figure out what questions to ask in an interview guide, the researcher may consult the literature, speak to experts (including people with lived experience) about the research and draw on their current knowledge. The topics and questions should be mapped to the research question/s, and the interview guide should be developed well in advance of commencing data collection. This enables time and opportunity to pilot-test the interview guide. The pilot interview provides an opportunity to explore the language and clarity of questions, the order and flow of the guide and to determine whether the instructions are clear to participants both before and after the interview. It can be beneficial to pilot-test the interview guide with someone who is not familiar with the research topic, to make sure that the language used is easily understood (and will be by participants, too). The study design should determine the number of questions asked, and the intended duration of the interview should guide the length of the interview guide. The participant type may also shape the length of the interview guide; for example, clinicians tend to be time-poor and therefore shorter, focused interviews are optimal. An interview guide is also likely to be shorter for a descriptive study than a phenomenological or ethnographic study, given the level of detail required. Chapter 5 outlined a descriptive study in which participants who had undergone percutaneous coronary intervention were interviewed. The interview guide consisted of four main questions and subsequent probing questions, linked to the research questions (see Table 13.2). 17

Table 13.2. Interview guide for a descriptive study

Research question 1: How does the patient feel, physically and psychologically, after their procedure?
  • Open questions: From your perspective, what would be considered a successful outcome of the procedure? How did you feel after the procedure? How did you feel one week after the procedure, and how does that compare with how you feel now?
  • Probing questions and topics: Did the procedure meet your expectations? How do you define whether the procedure was successful?

Research question 2: How does the patient function after their procedure?
  • Open questions: After your procedure, tell me about your ability to do your daily activities. Did you attend cardiac rehabilitation?
  • Probing questions and topics: Prompt for activities including gardening, housework, personal care, work-related and family-related tasks. Can you tell us about your experience of cardiac rehabilitation? What effect has medication had on your recovery?

Research question 3: What are the long-term effects of the procedure?
  • Open questions: What, if any, lifestyle changes have you made since your procedure?
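For teams that keep their study documents under version control, a guide such as Table 13.2 can also be stored as structured data so that pilot revisions are easy to track; the sketch below is a hypothetical Python representation with abridged wording, not a required format:

```python
# A hypothetical, structured version of an interview guide, mapping each
# research question to its open and probing questions (cf. Table 13.2).
interview_guide = {
    "How does the patient feel after their procedure?": {
        "open": [
            "From your perspective, what would be considered a successful outcome of the procedure?",
            "How did you feel after the procedure?",
        ],
        "probing": ["Did the procedure meet your expectations?"],
    },
    "How does the patient function after their procedure?": {
        "open": ["After your procedure, tell me about your ability to do your daily activities."],
        "probing": ["Prompt for gardening, housework, personal care, work and family tasks."],
    },
}

# Print the guide in interview order, ready for piloting.
for research_question, questions in interview_guide.items():
    print(research_question)
    for kind in ("open", "probing"):
        for q in questions[kind]:
            print(f"  [{kind}] {q}")
```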

Table 13.3 is an example of a larger and more detailed interview guide, designed for the qualitative component of a mixed-methods study aiming to examine the work and financial effects of living with arthritis as a younger person. The questions are mapped to the World Health Organization’s International Classification of Functioning, Disability, and Health, which measures health and disability at individual and population levels. 18

Table 13.3. Detailed interview guide

Research question 1: How do young people experience their arthritis diagnosis?

  • Open questions: Tell me about your experience of being diagnosed with arthritis. How did being diagnosed with arthritis make you feel? Tell me about your experience of arthritis flare-ups. What do they feel like? What impacts arthritis flare-ups or feeling like your arthritis is worse? What circumstances lead to these feelings? Based on your experience, what do you think causes symptoms of arthritis to become worse?

  • Probing questions: When were you diagnosed with arthritis? What type of arthritis were you diagnosed with? Does anyone else in your family have arthritis? What relation are they to you?

Research question 2: What are the work impacts of arthritis on younger people?

  • Open questions: What is your field of work, and how long have you been in this role? How frequently do you work (full-time/part-time/casual)?

  • Probing questions: How has arthritis affected your work-related demands or career? How so? Has arthritis led you to reconsider your career? How so? Has arthritis affected your usual working hours each week? How so? How have changes to work or career because of your arthritis impacted other areas of life, i.e. mental health or family role?

Research question 3: What are the financial impacts of living with arthritis as a younger person?

  • Open questions: Has your arthritis led to any financial concerns?

  • Probing questions: Financial concerns pertaining to:

    • Direct costs: rheumatologist, prescribed and non-prescribed medications (as well as supplements), allied health costs (rheumatology, physiotherapy, chiropractic, osteopathy, myotherapy), Pilates, gym/personal trainer fees, and complementary therapies.

    • Indirect costs: workplace absenteeism, productivity, loss of wages, informal care, and the cost of different types of insurance, e.g. health insurance (joint replacements).
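If you keep your interview guide in digital form, it can help to store it in a structure that preserves the link between each research question, its open questions and its probes, as in Tables 13.2 and 13.3. The short sketch below is purely illustrative: the field names are our own and the questions are abridged from Table 13.2. It shows one possible way to keep a guide organised for piloting and version control, not a required approach.

```python
# Illustrative sketch only: a minimal way to store an interview guide so that
# every open question and probe stays linked to its research question.
# Field names ("research_question", "open_questions", "probes") are our own;
# the abridged questions are adapted from Table 13.2.

interview_guide = [
    {
        "research_question": "How does the patient feel after their procedure?",
        "open_questions": [
            "From your perspective, what would be a successful outcome of the procedure?",
            "How did you feel after the procedure?",
        ],
        "probes": [
            "Did the procedure meet your expectations?",
            "How do you define whether the procedure was successful?",
        ],
    },
    {
        "research_question": "How does the patient function after their procedure?",
        "open_questions": [
            "After your procedure, tell me about your ability to do your daily activities.",
        ],
        "probes": [
            "Prompt for gardening, housework, personal care, work and family tasks.",
        ],
    },
]

def print_guide(guide):
    """Print the guide in the order it will be used during the interview."""
    for section in guide:
        print(f"Research question: {section['research_question']}")
        for q in section["open_questions"]:
            print(f"  Ask: {q}")
        for p in section["probes"]:
            print(f"    Probe: {p}")
        print()

if __name__ == "__main__":
    print_guide(interview_guide)
```

A structure like this also makes it straightforward to regenerate a clean printed copy of the guide after each round of pilot-test revisions.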

It is important to create an interview guide, for the following reasons:

  • Developing the guide requires the researcher to become thoroughly familiar with their research questions.
  • Using an interview guide will enable the incorporation of feedback from the piloting process.
  • It is difficult to predict how participants will respond to interview questions. They may answer in a way that is anticipated or they may provide unanticipated insights that warrant follow-up. An interview guide (a physical or digital copy) enables the researcher to note these answers and follow-up with appropriate inquiry.
  • Participants will likely provide heterogeneous answers to certain questions. The interview guide enables the researcher to note similarities and differences across interviews, which may be important in data analysis.
  • Even experienced qualitative researchers get nervous before an interview! The interview guide provides a safety net if the researcher forgets their questions or needs to anticipate the next question.

Setting up the interview

In the past, most interviews were conducted in person or by telephone. Emerging technologies promote easier access to research participation (e.g. by people living in rural or remote communities, or for people with mobility limitations). Even in metropolitan settings, many interviews are now conducted electronically (e.g. using videoconferencing platforms). Regardless of your interview setting, it is essential that the interview environment is comfortable for the participant. This process can begin as soon as potential participants express interest in your research. Following are some tips from the literature and our own experiences of leading interviews:

  • Answer questions and set clear expectations. Participating in research is not an everyday task. People do not necessarily know what to expect during a research interview, and this can be daunting. Give people as much information as possible, answer their questions about the research and set clear expectations about what the interview will entail and how long it is expected to last. Let them know that the interview will be recorded for transcription and analysis purposes. Consider sending the interview questions a few days before the interview. This gives people time and space to reflect on their experiences, consider their responses to questions and to provide informed consent for their participation.
  • Consider your setting. If conducting the interview in person, consider the location and room in which the interview will be held. For example, if in a participant’s home, be mindful of their private space. Ask if you should remove your shoes before entering their home. If they offer refreshments (which in our experience many participants do), accept them with gratitude where possible. These considerations apply beyond the participant’s home; if using a room in an office setting, consider privacy and confidentiality, accessibility and potential for disruption. Consider the temperature as well as the furniture in the room, who may be able to overhear conversations and who may walk past. Similarly, if interviewing by phone or online, take time to assess the space, and if in a house or office that is not quiet or private, use headphones as needed.
  • Build rapport. The research topic may be important to participants from a professional perspective, or they may have deep emotional connections to the topic of interest. Regardless of the nature of the interview, it is important to remember that participants are being asked to open up to an interviewer who is likely to be a stranger. Spend some time with participants before the interview, to make sure that they are comfortable. Engage in some general conversation, and ask if they have any questions before you start. Remember that it is not a normal part of someone’s day to participate in research. Make it an enjoyable and/or meaningful experience for them, and it will enhance the data that you collect.
  • Let participants guide you. Oftentimes, the ways in which researchers and participants describe the same phenomena are different. In the interview, reflect the participant’s language. Make sure they feel heard and that they are willing and comfortable to speak openly about their experiences. For example, our research involves talking to older adults about their experience of falls. We noticed early in this research that participants did not use the word ‘fall’ but would rather use terms such as ‘trip’, ‘went over’ and ‘stumbled’. As interviewers we adopted the participant’s language into our questions.
  • Listen consistently and express interest. An interview is more complex than a simple question-and-answer format. The best interview data comes from participants feeling comfortable and confident to share their stories. By the time you are completing the 20th interview, it can be difficult to maintain the same level of concentration as with the first interview. Try to stay engaged: nod along with your participants, maintain eye contact, murmur in agreement and sympathise where warranted.
  • The interviewer is both the data collector and the data collection instrument. The data received is only as good as the questions asked. In qualitative research, the researcher influences how participants answer questions. It is important to remain reflexive and aware of how your language, body language and attitude might influence the interview. Being rested and prepared will enhance the quality of the questions asked and hence the data collected.
  • Avoid excessive use of ‘why’. It can be challenging for participants to recall why they felt a certain way or acted in a particular manner. Try to avoid asking ‘why’ questions too often, and instead adopt some of the open language described earlier in the chapter.

After your interview

When you have completed your interview, thank the participant and let them know they can contact you if they have any questions or follow-up information they would like to provide. If the interview has covered sensitive topics or the participant has become distressed throughout the interview, make sure that appropriate referrals and follow-up are provided (see section 6).

Download the recording from your device and make sure it is saved in a secure location that can only be accessed by people on the approved research team (see Chapters 35 and 36).

It is important to know what to do immediately after each interview is completed. Interviews should be transcribed – that is, reproduced verbatim for data analysis. Transcribing data is an important step in the process of analysis, but it is very time-consuming; transcribing a 60-minute interview can take up to 8 hours. Data analysis is discussed in Section 4.
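Because transcription time accumulates quickly, it can be worth estimating the workload before finalising a sample size. The sketch below is a back-of-the-envelope planning aid only: it uses the 'up to 8 hours per recorded hour' figure mentioned above, and the interview durations in it are hypothetical.

```python
# Rough planning aid, not a rule: estimate transcription workload using the
# "up to 8 hours of transcription per recorded hour" figure mentioned above.

TRANSCRIPTION_RATIO = 8.0  # hours of transcription per hour of audio (upper estimate)

# Hypothetical interview durations in minutes.
interview_minutes = [45, 60, 55, 90, 40]

recorded_hours = sum(interview_minutes) / 60
transcription_hours = recorded_hours * TRANSCRIPTION_RATIO

print(f"Recorded audio: {recorded_hours:.1f} hours")
print(f"Estimated transcription time: {transcription_hours:.0f} hours "
      f"(at {TRANSCRIPTION_RATIO:.0f} h per recorded hour)")
```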

Table 13.4. Examples of the four types of interviews

Example 1: Key stakeholder interviews (Cuthbertson, 2019)

  • Interview guide: Appendix A
  • Study design: Convergent mixed-methods study
  • Number of participants: 30 key stakeholders (emergency management or disaster healthcare practitioners, academics specialising in disaster management in the Oceania region, and policy managers)
  • Aim: ‘To investigate threats to the health and well-being of societies associated with disaster impact in Oceania.’ [abstract]
  • Country: Australia, Fiji, Indonesia, Aotearoa New Zealand, Timor Leste and Tonga
  • Length of interview: 45–60 minutes
  • Sample of interview questions from the interview guide [Appendix A]:
    1. What do you believe are the top five disaster risks or threats in the Oceania region today?
    2. What disaster risks do you believe are emerging in the Oceania region over the next decade?
    3. Why do you think these are risks?
    4. What are the drivers of these risks?
    5. Do you have any suggestions on how we can improve disaster risk assessment?
    6. Are the current disaster risk plans and practices suited to the future disaster risks? If not, why? If not, what do you think needs to be done to improve them?
    7. What are the key areas of disaster practice that can enhance future community resilience to disaster risk?
    8. What are the barriers or inhibitors to facilitating this practice?
    9. What are the solutions or facilitators to enhancing community resilience?
  • Analysis: Thematic analysis guided by the Hazard and Peril Glossary for describing and categorising disasters applied by the Centre for Research on the Epidemiology of Disasters Emergency Events Database
  • Main themes [Results, Box 1]:
    1. Climate change is observed as a contemporary and emerging disaster risk.
    2. Risk is contextual to the different countries, communities and individuals in Oceania.
    3. Human development trajectories and their impact, along with perceptions of a changing world, are viewed as drivers of current and emerging risks.
    4. Current disaster risk plans and practices are not suited to future disaster risks.
    5. Increased education on risk and risk assessment at a local level to empower community risk ownership.

Example 2: Dyad interviews (Bannon, 2021)

  • Interview guide: eAppendix Supplement
  • Study design: Qualitative dyadic study
  • Number of participants: 23 dyads
  • Aim: ‘To explore the lived experiences of couples managing young-onset dementia using an integrated dyadic coping model.’ [abstract]
  • Country: United States
  • Length of interview: 60 minutes
  • Sample of interview questions from the interview guide [eAppendix Supplement]:
    1. We like to start by learning more about what you each first noticed that prompted the evaluations you went through to get to the diagnosis.
       • Can you each tell me about the earliest symptoms you noticed?
    2. What are the most noticeable or troubling symptoms that you have experienced since the time of diagnosis?
       • How have your changes in functioning impacted you?
       • Emotionally, how do you feel about your symptoms and the changes in functioning you are experiencing?
    3. Are you open with your friends and family about the diagnosis?
       • Have you experienced any stigma related to your diagnosis?
    4. What is your understanding of the diagnosis?
       • What is your understanding about how this condition will affect you both in the future? How are you getting information about this diagnosis?
  • Analysis: Thematic analysis guided by the Dyadic Coping Theoretical Framework
  • Main themes [abstract]:
    1. Stress communication
    2. Positive individual dyadic coping
    3. Positive conjoint dyadic coping
    4. Negative individual dyadic coping
    5. Negative conjoint dyadic coping

Example 3: Narrative interviews (McGranahan, 2020)

  • Interview guide: Not provided, but the text states that ‘qualitative semi-structured narrative interviews’ were conducted [methods]
  • Study design: Narrative interview study
  • Number of participants: 28
  • Aim: ‘To explore the experiences and views of people with psychotic experiences who have not received any treatment or other support from mental health services for the past 5 years.’ [abstract]
  • Country: England
  • Length of interview: 40–120 minutes
  • Sample of interview questions: Not provided
  • Analysis: Inductive thematic analysis as outlined by Braun and Clarke
  • Main themes [abstract]:
    1. Perceiving psychosis as positive
    2. Making sense of psychotic experiences
    3. Finding sources of strength
    4. Negative past experiences of mental health services
    5. Positive past experiences with individual clinicians

Example 4: Life history interviews (Gutierrez-Garcia, 2021)

  • Interview guide: Not provided, but the text states that ‘an open and semi-structured question guide was designed for use’ [methods]
  • Study design: Life history and lifeline techniques
  • Number of participants: 7
  • Aim: ‘To analyse the use of life histories and lifelines in the study of female genital mutilation in the context of cross-cultural research in participants with different languages.’ [abstract]
  • Country: Spain
  • Length of interview: 3 sessions. Session 1: life history interview. Session 2: lifeline activity, in which participants used drawings to complement or enhance their interview. Session 3: the researchers and participants worked together to finalise the lifeline. The life history interviews ran for 40–60 minutes; the timing for sessions 2 and 3 is not provided.
  • Sample of interview questions: Not provided
  • Analysis: Phenomenological method proposed by Giorgi (sense of the whole): (1) reading the entire description to obtain a general sense of the discourse; (2) the researcher goes back to the beginning and reads the text again, with the aim of distinguishing the meaning units by separating the perspective of the phenomenon of interest; (3) the researcher expresses the contents of the units of meaning more clearly by creating categories; (4) the researcher synthesises the units and categories of meaning into a consistent statement that takes into account the participant’s experience and language.
  • Main themes:
    1. Important moments and their relationship with female genital mutilation
    2. The ritual knife: how sharp or blunt it is at different stages, where and how women are subsequently held as a result
    3. Changing relationships with family: how being subject to female genital mutilation changed relationships with mothers
    4. Female genital mutilation increases the risk of future childbirth complications, which change relationships with family and healthcare systems
    5. Managing experiences with early exposure to physical and sexual violence across the lifespan

Interviews are the most common data collection technique in qualitative research. There are four main types of interviews; the one you choose will depend on your research question, aims and objectives. It is important to formulate open-ended interview questions that are understandable and easy for participants to answer. Key considerations in setting up the interview will enhance the quality of the data obtained and the experience of the interview for the participant and the researcher.

  • Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J . 2008;204(6):291-295. doi:10.1038/bdj.2008.192
  • DeJonckheere M, Vaughn LM. Semistructured interviewing in primary care research: a balance of relationship and rigour. Fam Med Community Health . 2019;7(2):e000057. doi:10.1136/fmch-2018-000057
  • Nyanchoka L, Tudur-Smith C, Porcher R, Hren D. Key stakeholders’ perspectives and experiences with defining, identifying and displaying gaps in health research: a qualitative study. BMJ Open . 2020;10(11):e039932. doi:10.1136/bmjopen-2020-039932
  • Morgan DL, Ataie J, Carder P, Hoffman K. Introducing dyadic interviews as a method for collecting qualitative data. Qual Health Res .  2013;23(9):1276-84. doi:10.1177/1049732313501889
  • Picchi S, Bonapitacola C, Borghi E, et al. The narrative interview in therapeutic education. The diabetic patients’ point of view. Acta Biomed . Jul 18 2018;89(6-S):43-50. doi:10.23750/abm.v89i6-S.7488
  • Stuij M, Elling A, Abma T. Negotiating exercise as medicine: Narratives from people with type 2 diabetes. Health (London) . 2021;25(1):86-102. doi:10.1177/1363459319851545
  • Buchmann M, Wermeling M, Lucius-Hoene G, Himmel W. Experiences of food abstinence in patients with type 2 diabetes: a qualitative study. BMJ Open .  2016;6(1):e008907. doi:10.1136/bmjopen-2015-008907
  • Jessee E. The Life History Interview. Handbook of Research Methods in Health Social Sciences . 2018:1-17:Chapter 80-1.
  • Sheftel A, Zembrzycki S. Only Human: A Reflection on the Ethical and Methodological Challenges of Working with “Difficult” Stories. The Oral History Review . 2019;37(2):191-214. doi:10.1093/ohr/ohq050
  • Harnisch H, Montgomery E. “What kept me going”: A qualitative study of avoidant responses to war-related adversity and perpetration of violence by former forcibly recruited children and youth in the Acholi region of northern Uganda. Soc Sci Med .  2017;188:100-108. doi:10.1016/j.socscimed.2017.07.007
  • Ruslin, Mashuri S, Rasak MSA, Alhabsyi M, Alhabsyi F, Syam H. Semi-structured Interview: A Methodological Reflection on the Development of a Qualitative Research Instrument in Educational Studies. IOSR-JRME . 2022;12(1):22-29. doi:10.9790/7388-1201052229
  • Chang T, Llanes M, Gold KJ, Fetters MD. Perspectives about and approaches to weight gain in pregnancy: a qualitative study of physicians and nurse midwives. BMC Pregnancy & Childbirth . 2013;13(47)doi:10.1186/1471-2393-13-47
  • DeJonckheere M, Robinson CH, Evans L, et al. Designing for Clinical Change: Creating an Intervention to Implement New Statin Guidelines in a Primary Care Clinic. JMIR Hum Factors .  2018;5(2):e19. doi:10.2196/humanfactors.9030
  • Knott E, Rao AH, Summers K, Teeger C. Interviews in the social sciences. Nature Reviews Methods Primers . 2022;2(1)doi:10.1038/s43586-022-00150-6
  • Bergenholtz H, Missel M, Timm H. Talking about death and dying in a hospital setting – a qualitative study of the wishes for end-of-life conversations from the perspective of patients and spouses. BMC Palliat Care . 2020;19(1):168. doi:10.1186/s12904-020-00675-1
  • Olorunsaiye CZ, Degge HM, Ubanyi TO, Achema TA, Yaya S. “It’s like being involved in a car crash”: teen pregnancy narratives of adolescents and young adults in Jos, Nigeria. Int Health . 2022;14(6):562-571. doi:10.1093/inthealth/ihab069
  • Ayton DR, Barker AL, Peeters G, et al. Exploring patient-reported outcomes following percutaneous coronary intervention: A qualitative study. Health Expect .  2018;21(2):457-465. doi:10.1111/hex.12636
  • World Health Organization. International Classification of Functioning, Disability and Health (ICF). WHO. https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health
  • Cuthbertson J, Rodriguez-Llanes JM, Robertson A, Archer F. Current and Emerging Disaster Risks Perceptions in Oceania: Key Stakeholders Recommendations for Disaster Management and Resilience Building. Int J Environ Res Public Health .  2019;16(3)doi:10.3390/ijerph16030460
  • Bannon SM, Grunberg VA, Reichman M, et al. Thematic Analysis of Dyadic Coping in Couples With Young-Onset Dementia. JAMA Netw Open .  2021;4(4):e216111. doi:10.1001/jamanetworkopen.2021.6111
  • McGranahan R, Jakaite Z, Edwards A, Rennick-Egglestone S, Slade M, Priebe S. Living with Psychosis without Mental Health Services: A Narrative Interview Study. BMJ Open .  2021;11(7):e045661. doi:10.1136/bmjopen-2020-045661
  • Gutiérrez-García AI, Solano-Ruíz C, Siles-González J, Perpiñá-Galvañ J. Life Histories and Lifelines: A Methodological Symbiosis for the Study of Female Genital Mutilation. Int J Qual Methods . 2021;20doi:10.1177/16094069211040969

Qualitative Research – a practical guide for health and social care researchers and practitioners Copyright © 2023 by Danielle Berkovic is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.


Preparing Questions for a Qualitative Research Interview

Updated on: June 22, 2024


A qualitative research interview is an invaluable tool for researchers. Whether one’s studying social phenomena, exploring personal narratives, or investigating complex issues, interviews offer a means to gain unique insights. 

“The quality of the data collected in a qualitative research interview is highly dependent on the quality and appropriateness of the questions asked.”

But how do you prepare the right questions to ensure your interviews yield rich data? In this guide, we’ll explore the types of qualitative research interviews and provide tips for crafting effective questions.


Types of Qualitative Research Interviews

Before diving into question preparation, it’s important to select the type of qualitative research interview that’s best suited for the study at hand.

There are three types of qualitative research interviews:

Structured Interviews 

Structured interviews involve asking the same set of pre-written questions to every participant. This approach ensures consistency, making it easier to compare data between participants or groups later.

When conducting structured interviews, keep these guidelines in mind:

  • Pre-written Questions: All questions, including probes, should be meticulously written in advance.
  • Detailed Questions: Questions should be detailed enough to be used verbatim during interviews.
  • Consistent Sequence: The sequence of questions should be pre-decided and consistent across interviews.

Example of a Structured Interview Question

Question: Thinking back to your childhood days in Chelsea, can you remember what kind of local music was popular at the time?

  • Why do you think it was so popular?
  • Where was it played?
  • Were there other popular genres?

Structured interviews are ideal when you need uniform data collection across all participants. They are common in large-scale studies or when comparing responses quantitatively.

Read more: Advantages & Disadvantages of Structured Interviews

Semi-structured Interviews 

The second type of qualitative interview is the semi-structured interview. In these interviews, the interview guide outlines the topics to be explored, but the actual questions are not pre-written.

This approach allows interviewers the freedom to phrase questions spontaneously and explore topics in more depth.

Example of a Semi-Structured Interview Question

Question: What problems did the participant face growing up in the community?

  • Education-related.
  • Related to their immediate family.
  • Related to the community in general.

Semi-structured interviews strike a balance between flexibility and structure. They offer a framework within which interviewers can adapt questions to participants’ responses, making them suitable for in-depth exploration.

Unstructured Interviews 

Unstructured interviews, often referred to as informal conversational interviews, are characterized by a lack of formal guidelines, predefined questions, or sequencing.

Questions emerge during the interview based on the conversation’s flow and the interviewee’s observations. Consequently, each unstructured interview is unique, and questions may evolve over time.

Unstructured interviews are highly exploratory and can lead to unexpected insights. They are particularly valuable when studying complex or novel phenomena where predefined questions may limit understanding.

Deciding What Information You Need

Once you’ve chosen the type of interview that suits your research study, the next step is to decide what information you need to collect.

Patton’s six types of questions offer a framework for shaping your inquiries:

  • Behavior or Experience: Explore participants’ actions and experiences.
  • Opinion or Belief: Probe participants’ beliefs, attitudes, and opinions.
  • Feelings: Delve into the emotional aspects of participants’ experiences.
  • Knowledge: Assess participants’ understanding and awareness of a topic.
  • Sensory: Investigate how participants perceive and interact with their environment.
  • Background or Demographic: Collect information about participants’ personal characteristics and histories.

Based on these categories, create a list of the specific information you aim to collect through the interview. This step ensures that your questions align with your research objectives.
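If you draft your questions electronically, you can also sort them under Patton's six types to check coverage before finalising the guide. The snippet below is only an illustration; the example items are invented and the category names simply mirror the list above.

```python
# Illustrative only: sorting draft interview items under Patton's six question
# types to check coverage before the guide is finalised. Example items are invented.

draft_items = {
    "behaviour_or_experience": ["What do you do when a flare-up starts?"],
    "opinion_or_belief": ["What do you think causes your symptoms to worsen?"],
    "feelings": ["How did the diagnosis make you feel?"],
    "knowledge": ["What information were you given about treatment options?"],
    "sensory": ["What does a flare-up feel like physically?"],
    "background_or_demographic": ["How long have you been in your current job?"],
}

# Flag any of Patton's categories that the draft guide does not yet cover.
empty_categories = [name for name, items in draft_items.items() if not items]
print("Categories still without questions:", empty_categories or "none")
```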

Writing the Qualitative Research Interview Questions

After deciding the type of interview and nature of information you’d like to gather, the next step is to write the actual questions. 

Using Open-Ended Questions

Open-ended questions are the backbone of qualitative research interviews. They encourage participants to share their experiences and thoughts in-depth, providing rich, detailed data.

Avoid ‘yes’ or ‘no’ questions, as they limit responses. Instead, use open-ended questions that grant participants the freedom to express themselves. Here are some examples:

Examples of Open-Ended Questions

How did you feel about working at ABC Corp. during your initial years there?

  • Encourages participants to share their emotions and experiences.

Can you describe the attitudes and approach to work of the other people working with you at the time?

  • Invites participants to reflect on their colleagues’ behaviors and attitudes.

Tell me more about your relationship with your peers.

  • Encourages participants to provide narrative insights into their relationships.

Read More: 100 Open-Ended Qualitative Interview Questions
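If you keep your draft guide in electronic form, a quick (and admittedly crude) screen is to flag questions that begin with closed, yes/no phrasing so you can reword them. The checker below is a rough heuristic of our own, not a validated tool, and it will miss many cases; use it only as a prompt for manual review.

```python
# Rough heuristic of our own (not a validated tool): flag draft questions that
# start with closed, yes/no phrasing so they can be reworded as open questions.

CLOSED_STARTERS = ("do you", "did you", "are you", "were you",
                   "have you", "is ", "was ")

def flag_closed_questions(questions):
    """Return the draft questions that begin with a closed (yes/no) starter."""
    flagged = []
    for q in questions:
        text = q.strip().lower()
        if text.startswith(CLOSED_STARTERS):
            flagged.append(q)
    return flagged

draft_questions = [
    "Do you like your neighbourhood?",                       # closed: invites yes/no
    "Tell me about your experience working at ABC Corp.",    # open
    "How did you feel about living in that neighbourhood?",  # open
]

for q in flag_closed_questions(draft_questions):
    print("Consider rewording as an open question:", q)
```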

Going from Unstructured to Structured Questions

Unstructured questions allow the interviewee to guide the conversation, letting them focus on what they think is most important.

These questions make the interview longer, but also provide richer and deeper insight.

Examples of Unstructured Questions

  • Tell me about your experience working at [xxx].
  • What did it feel like to live in that neighborhood?
  • What stood out to you as the defining characteristic of that neighborhood?

Examples of Structured Questions

  • What are some ways people dealt with the health issues caused by excessive chemical industries in the neighborhood?
  • As an employee at ABC Corp. during the time, did you observe any specific actions taken by the employers to address the issue?

Probing Questions

Probing questions are used to get more information about an answer or clarify something. They help interviewers dig deeper, clarify responses, and gain a more comprehensive understanding.

Examples of Probing Questions

Tell me more about that.

  • Encourages participants to elaborate on their previous response.

And how did you feel about that?

  • Invites participants to share their emotional reactions.

What do you mean when you say [xxx]?

  • Seeks clarification on ambiguous or complex statements.

Probing questions enhance the depth and clarity of the data collected; however, they should be used judiciously to avoid overwhelming participants.

A General Last Question

As your interview approaches its conclusion, it’s beneficial to have a general last question that allows the interviewee to share any additional thoughts or opinions they feel are relevant.

For instance, you might ask:

Thank you for all that valuable information. Is there anything else you’d like to add before we end?

This open-ended question provides participants with a final opportunity to express themselves fully, ensuring that no critical insights are left unshared.

Preparing questions for qualitative research interviews requires a thoughtful approach that considers the interview type, desired information, and the balance between structured and unstructured questioning.

Here’s a great guide from Harvard University on the subject.



Data analysis in qualitative research: Sample techniques


Qualitative analysis methods play a significant role in understanding the rich, nuanced data that emerges from qualitative research. These methods allow researchers to delve deeper into the participants' experiences, perceptions, and feelings, making sense of complex insights that quantitative data often overlooks. By focusing on context, emotions, and social dynamics, qualitative analysis methods provide a framework for interpreting data in a way that captures the essence of human experience.

Moreover, the choice of analysis technique can greatly influence the findings and implications of a study. Common methods include thematic analysis, grounded theory, and narrative analysis, each offering distinct approaches to extracting meaning from qualitative data. Selecting the right method depends on the research questions and the specific nature of the data. Understanding these qualitative analysis methods is essential for researchers aiming to derive meaningful interpretations and craft actionable insights.

Key Techniques for Qualitative Analysis

Qualitative analysis methods encompass various techniques that researchers use to interpret non-numerical data, such as interview transcripts and open-ended survey responses. One key technique is thematic analysis, which identifies patterns or themes within the data. This process allows researchers to capture the essence of participants' experiences and perspectives effectively.

Another significant method is grounded theory, which involves developing theories based on the data collected. This approach is particularly useful when exploring new areas where existing theories may not apply. Additionally, narrative analysis focuses on the stories that individuals share, allowing researchers to understand how people construct meaning from their experiences. Each of these techniques plays a crucial role in qualitative research, enabling a deeper understanding of complex social phenomena and enriching the insights derived from the data.

Coding as a Qualitative Analysis Method

Coding serves as a foundational qualitative analysis method that aids researchers in organizing and interpreting data. By assigning labels or codes to segments of qualitative data, researchers can systematically categorize and identify recurring themes or patterns. This process not only facilitates better data management but also enhances the depth of analysis. Codes can be derived from the data itself or established prior to analysis, leading to a more structured approach in qualitative research.

There are several crucial steps involved in coding for qualitative analysis. First, familiarization with the data is essential to identify key concepts. Next, the process of coding involves applying labels to relevant portions of the text. Researchers then refine these codes into broader categories, which aids in presenting findings clearly. Finally, reviewing and revising codes ensures that they accurately reflect the dataset. This structured approach to coding not only streamlines the analysis process but enriches the overall quality of qualitative insights.
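As a concrete illustration of what coding can look like in its simplest form, the sketch below attaches researcher-assigned codes to short data segments and groups quotes by code for review. It is illustrative only: the codes and quotes are invented, and most teams would perform this step in dedicated qualitative analysis software.

```python
# Illustration only: attaching codes to interview segments and reviewing code
# frequencies. Codes and quotes are invented; real projects typically use
# dedicated qualitative analysis software for this step.

from collections import defaultdict

# Each coded segment pairs a short quote with one or more researcher-assigned codes.
coded_segments = [
    {"quote": "I stopped driving at night after the second fall.",
     "codes": ["behaviour change", "fear of falling"]},
    {"quote": "My daughter checks in on me every evening now.",
     "codes": ["family support"]},
    {"quote": "I didn't tell the GP because I didn't want a fuss.",
     "codes": ["non-disclosure", "fear of losing independence"]},
]

# Group quotes under each code so recurring ideas are easy to review.
segments_by_code = defaultdict(list)
for segment in coded_segments:
    for code in segment["codes"]:
        segments_by_code[code].append(segment["quote"])

for code, quotes in sorted(segments_by_code.items()):
    print(f"{code} ({len(quotes)} segment(s))")
    for quote in quotes:
        print(f"  - {quote}")
```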

Thematic Analysis in Qualitative Research

Thematic analysis is a powerful qualitative analysis method that involves identifying and interpreting patterns or themes within qualitative data. This analysis helps researchers understand the underlying meanings in conversations, interviews, or textual responses, allowing for richer interpretations. Typically, it begins with familiarizing oneself with the data, followed by the generation of initial codes that capture important features relevant to the research questions.

Researchers often compile codes into potential themes and review these themes to ensure they accurately represent the dataset. This iterative process facilitates a deep dive into the data, revealing insights that can inform decision-making and strategy in various fields. By focusing on recurrent themes, researchers can provide a comprehensive understanding of the data, making thematic analysis crucial in qualitative research. It emphasizes the significance of qualitative analysis methods as a means of uncovering insightful narratives that go beyond mere statistics.
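To make the move from codes to themes more concrete, the sketch below groups invented codes under candidate themes and flags any codes not yet assigned to a theme, mirroring the reviewing step described above. It is an illustration under assumed data, not a prescribed workflow.

```python
# Sketch only: compiling codes into candidate themes for review.
# The theme names and code groupings are invented for illustration.

candidate_themes = {
    "Adapting daily life": ["behaviour change", "activity restriction"],
    "Relying on others": ["family support", "informal care"],
    "Protecting independence": ["non-disclosure", "fear of losing independence"],
}

def codes_missing_a_theme(all_codes, themes):
    """List codes that have not yet been assigned to any candidate theme."""
    assigned = {code for codes in themes.values() for code in codes}
    return sorted(set(all_codes) - assigned)

all_codes = ["behaviour change", "activity restriction", "family support",
             "informal care", "non-disclosure", "fear of losing independence",
             "fear of falling"]

print(codes_missing_a_theme(all_codes, candidate_themes))
# Prints ['fear of falling'] – a prompt to create or extend a theme.
```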

Applying Qualitative Analysis Methods in Practice

Applying qualitative analysis methods in practice involves understanding how to effectively gather, analyze, and interpret qualitative data. Researchers should begin by defining their research questions and objectives, ensuring clarity throughout the analysis process. Afterward, various data sources, such as interviews, observations, and focus groups, can be utilized to collect rich, descriptive information relevant to the study.

Next, the data can be organized using several approaches. Coding is a fundamental technique, where researchers categorize information into themes or patterns. This allows for deeper insights and connections to emerge from the data. Additionally, employing multiple analysis methods, including thematic analysis, content analysis, and narrative analysis, can enhance the richness of findings. Finally, it is crucial to validate results by seeking feedback from peers or participants, ensuring that the conclusions drawn are robust and credible.

Case Studies: Real-World Applications

Case studies illustrate the practical application of qualitative analysis methods in various settings. They provide concrete examples of how researchers can analyze data collected from interviews, focus groups, and observational studies. For instance, researchers may explore how consumers perceive a product by examining interview transcripts and identifying recurring themes. This process highlights the participants' motivations and experiences, providing rich insights into consumer behavior.

In another scenario, qualitative analysis methods can be used to assess educational programs. Researchers might collect feedback from students and educators to identify strengths and weaknesses in curriculum design. By analyzing this qualitative data, educators can make informed improvements. These case studies not only showcase the versatility of qualitative research but also emphasize the importance of adapting methods to specific contexts for deeper insights.

Leveraging Software Tools for Enhanced Analysis

In the realm of qualitative research, utilizing software tools transforms the analysis process significantly. With the right applications, researchers can streamline their workflows, allowing for faster and more accurate data interpretation. These tools often offer features like coding automation, which reduces the time spent on manual analysis. Consequently, this enables researchers to focus on deriving insights rather than getting bogged down in repetitive tasks.

By employing these software solutions, teams can improve collaboration, ensuring that all insights are centralized and easily accessible. This enhances not only the decision-making process but also minimizes the potential for bias often seen in manual coding. Ultimately, integrating software tools in qualitative analysis methods can lead to more reliable and actionable findings, empowering researchers to present informed conclusions efficiently.
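As a small, hedged illustration of what keyword-assisted coding automation might look like, the sketch below uses a simple keyword dictionary to suggest candidate codes for a new response. This is a toy example of our own rather than a feature of any particular product, and keyword matching cannot replace researcher judgement.

```python
# Toy example of keyword-assisted coding (our own sketch, not any product's API).
# A keyword dictionary suggests candidate codes; the researcher still reviews
# and revises every suggestion.

keyword_codes = {
    "cost": "financial burden",
    "afford": "financial burden",
    "work": "work impact",
    "career": "work impact",
    "worried": "emotional response",
    "scared": "emotional response",
}

def suggest_codes(text):
    """Return the candidate codes whose keywords appear in the text."""
    lowered = text.lower()
    return {code for keyword, code in keyword_codes.items() if keyword in lowered}

sample_response = "I was worried I couldn't afford the physiotherapy on top of missing work."
print(sorted(suggest_codes(sample_response)))
# Prints ['emotional response', 'financial burden', 'work impact']
```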

Conclusion: The Role of Qualitative Analysis Methods in Research

Qualitative Analysis Methods play a vital role in enhancing the depth and richness of research findings. By focusing on human experiences and perceptions, these methods allow researchers to gain insights that quantitative data often overlooks. This depth provides a nuanced understanding of complex phenomena, enabling researchers to unravel intricate social dynamics and individual behaviors.

In conclusion, incorporating qualitative analysis methods into research significantly enriches data interpretation. These methods not only help in identifying patterns and themes but also foster a more holistic understanding of the research subject. Ultimately, this comprehensive approach enhances the credibility and relevance of research outcomes, paving the way for informed decision-making.
