
Research Methods Guide: Interview Research


Interview as a Method for Qualitative Research


Goals of Interview Research

  • Interviews help you explain, better understand, and explore research subjects' opinions, behaviors, experiences, preferences, and other phenomena.
  • Interview questions are usually open-ended so that in-depth information can be collected.

Mode of Data Collection

There are several types of interviews, including:

  • Face-to-Face
  • Online (e.g., Skype, Google Hangouts)

FAQ: Conducting Interview Research

What are the important steps involved in interviews?

  • Think about who you will interview
  • Think about what kind of information you want to obtain from interviews
  • Think about why you want to pursue in-depth information around your research topic
  • Introduce yourself and explain the aim of the interview
  • Devise your questions so interviewees can help answer your research question
  • Have a sequence to your questions / topics by grouping them in themes
  • Make sure you can easily move back and forth between questions / topics
  • Make sure your questions are clear and easy to understand
  • Do not ask leading questions
  • Do you want to bring a second interviewer with you?
  • Do you want to bring a notetaker?
  • Do you want to record interviews? If so, do you have time to transcribe interview recordings?
  • Where will you interview people? Where is the setting with the least distraction?
  • How long will each interview take?
  • Do you need to address terms of confidentiality?

Do I have to choose either a survey or interviewing method?

No. In fact, many researchers use a mixed-methods approach: interviews can be useful as a follow-up with selected survey respondents, e.g., to investigate their responses in more depth.

Is training an interviewer important?

Yes. Since the interviewer can control the quality of the result, training the interviewer is crucial. If more than one interviewer is involved in your study, it is important that every interviewer understands the interviewing procedure and rehearses the interviewing process before beginning the formal study.


Chapter 11. Interviewing

Introduction

Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow and mundane, sitting down with a person for an hour or two and really listening to what they have to say is a profound and deep enterprise, one that can provide not only “data” for you, the interviewer, but also self-understanding and a feeling of being heard for the interviewee. I always approach interviewing with a deep appreciation for the opportunity it gives me to understand how other people experience the world. That said, there is not one kind of interview but many, and some of these are shallower than others. This chapter will provide you with an overview of interview techniques but with a special focus on the in-depth semistructured interview guide approach, which is the approach most widely used in social science research.

An interview can be variously defined as “a conversation with a purpose” (Lune and Berg 2018) and an attempt to understand the world from the point of view of the person being interviewed: “to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations” (Kvale 2007). It is a form of active listening in which the interviewer steers the conversation to subjects and topics of interest to their research but also manages to leave enough space for those interviewed to say surprising things. Achieving that balance is a tricky thing, which is why most practitioners believe interviewing is both an art and a science. In my experience as a teacher, there are some students who are “natural” interviewers (often they are introverts), but anyone can learn to conduct interviews, and everyone, even those of us who have been doing this for years, can improve their interviewing skills. This might be a good time to highlight the fact that the interview is a joint product of interviewer and interviewee and that this product is only as good as the rapport established between the two participants. Active listening is the key to establishing this necessary rapport.

Patton (2002) makes the argument that we use interviews because there are certain things that are not observable. In particular, “we cannot observe feelings, thoughts, and intentions. We cannot observe behaviors that took place at some previous point in time. We cannot observe situations that preclude the presence of an observer. We cannot observe how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things” (341).

Types of Interviews

There are several distinct types of interviews. Imagine a continuum (figure 11.1). On one side are unstructured conversations—the kind you have with your friends. No one is in control of those conversations, and what you talk about is often random—whatever pops into your head. There is no secret, underlying purpose to your talking—if anything, the purpose is to talk to and engage with each other, and the words you use and the things you talk about are a little beside the point. An unstructured interview is a little like this informal conversation, except that one of the parties to the conversation (you, the researcher) does have an underlying purpose, and that is to understand the other person. You are not friends speaking for no purpose, but it might feel just as unstructured to the “interviewee” in this scenario. That is one side of the continuum. On the other side are fully structured and standardized survey-type questions asked face-to-face. Here it is very clear who is asking the questions and who is answering them. This doesn’t feel like a conversation at all! A lot of people new to interviewing have this (erroneously!) in mind when they think about interviews as data collection. Somewhere in the middle of these two extreme cases is the “semistructured” interview, in which the researcher uses an “interview guide” to gently move the conversation to certain topics and issues. This is the primary form of interviewing for qualitative social scientists and will be what I refer to as interviewing for the rest of this chapter, unless otherwise specified.

Figure 11.1. Types of interviewing questions, from unstructured conversations and semistructured interviews to structured interviews and survey questions.

Informal (unstructured conversations). This is the most “open-ended” approach to interviewing. It is particularly useful in conjunction with observational methods (see chapters 13 and 14). There are no predetermined questions. Each interview will be different. Imagine you are researching the Oregon Country Fair, an annual event in Veneta, Oregon, that includes live music, artisan craft booths, face painting, and a lot of people walking through forest paths. It’s unlikely that you will be able to get a person to sit down with you and talk intensely about a set of questions for an hour and a half. But you might be able to sidle up to several people and engage with them about their experiences at the fair. You might have a general interest in what attracts people to these events, so you could start a conversation by asking strangers why they are here or why they come back every year. That’s it. Then you have a conversation that may lead you anywhere. Maybe one person tells a long story about how their parents brought them here when they were a kid. A second person talks about how this is better than Burning Man. A third person shares their favorite traveling band. And yet another enthuses about the public library in the woods. During your conversations, you also talk about a lot of other things—the weather, the utilikilts for sale, the fact that a favorite food booth has disappeared. It’s all good. You may not be able to record these conversations. Instead, you might jot down notes on the spot and then, when you have the time, write down as much as you can remember about the conversations in long fieldnotes. Later, you will have to sit down with these fieldnotes and try to make sense of all the information (see chapters 18 and 19).

Interview guide (semistructured interview). This is the primary type employed by social science qualitative researchers. The researcher creates an “interview guide” in advance, which she uses in every interview. In theory, every person interviewed is asked the same questions. In practice, every person interviewed is asked mostly the same topics but not always the same questions, as the whole point of a “guide” is that it guides the direction of the conversation but does not command it. The guide is typically between five and ten questions or question areas, sometimes with suggested follow-ups or prompts. For example, one question might be “What was it like growing up in Eastern Oregon?” with prompts such as “Did you live in a rural area? What kind of high school did you attend?” to help the conversation develop. These interviews generally take place in a quiet place (not a busy walkway during a festival) and are recorded. The recordings are transcribed, and those transcriptions then become the “data” that is analyzed (see chapters 18 and 19). The conventional length of one of these types of interviews is between one hour and two hours, optimally ninety minutes. Less than one hour doesn’t allow for much development of questions and thoughts, and two hours (or more) is a lot of time to ask someone to sit still and answer questions. If you have a lot of ground to cover, and the person is willing, I highly recommend two separate interview sessions, with the second session being slightly shorter than the first (e.g., ninety minutes the first day, sixty minutes the second). There are lots of good reasons for this, but the most compelling one is that this allows you to listen to the first day’s recording and catch anything interesting you might have missed in the moment and so develop follow-up questions that can probe further. This also allows the person being interviewed to have some time to think about the issues raised in the interview and go a little deeper with their answers.

Standardized questionnaire with open responses (structured interview). This is the type of interview a lot of people have in mind when they hear “interview”: a researcher comes to your door with a clipboard and proceeds to ask you a series of questions. These questions are all the same whoever answers the door; they are “standardized.” Both the wording and the exact order are important, as people’s responses may vary depending on how and when a question is asked. These are qualitative only in that the questions allow for “open-ended responses”: people can say whatever they want rather than select from a predetermined menu of responses. For example, a survey I collaborated on included this open-ended response question: “How does class affect one’s career success in sociology?” Some of the answers were simply one word long (e.g., “debt”), and others were long statements with stories and personal anecdotes. It is possible to be surprised by the responses. Although it’s a stretch to call this kind of questioning a conversation, it does allow the person answering the question some degree of freedom in how they answer.

Survey questionnaire with closed responses (not an interview!). Standardized survey questions with specific answer options (e.g., closed responses) are not really interviews at all, and they do not generate qualitative data. For example, if we included five options for the question “How does class affect one’s career success in sociology?”—(1) debt, (2) social networks, (3) alienation, (4) family doesn’t understand, (5) type of grad program—we leave no room for surprises at all. Instead, we would most likely look at patterns around these responses, thinking quantitatively rather than qualitatively (e.g., using regression analysis techniques, we might find that working-class sociologists were twice as likely to bring up alienation). It can sometimes be confusing for new students because the very same survey can include both closed-ended and open-ended questions. The key is to think about how these will be analyzed and to what level surprises are possible. If your plan is to turn all responses into a number and make predictions about correlations and relationships, you are no longer conducting qualitative research. This is true even if you are conducting this survey face-to-face with a real live human. Closed-response questions are not conversations of any kind, purposeful or not.
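To make the contrast concrete, here is a minimal sketch of what “thinking quantitatively” about closed responses looks like: tabulating how often each closed-response option was chosen within each group. The data, group labels, and response codes below are hypothetical illustrations, not results from the survey described above.

```python
from collections import Counter

# Hypothetical coded data: (class background, closed-response option chosen).
responses = [
    ("working-class", "alienation"),
    ("working-class", "alienation"),
    ("working-class", "debt"),
    ("middle-class", "social networks"),
    ("middle-class", "debt"),
    ("middle-class", "type of grad program"),
]

# Count each (group, answer) pair and the size of each group.
pair_counts = Counter(responses)
group_sizes = Counter(group for group, _ in responses)

# Report the share of each group choosing each option -- the kind of
# pattern (e.g., "twice as likely") that quantitative analysis looks for.
for (group, answer), n in sorted(pair_counts.items()):
    print(f"{group:>14} | {answer:<21} | {n / group_sizes[group]:.0%}")
```

A real analysis would of course use far more cases and an appropriate model (such as the regression mentioned above), but the logic is the same: responses become counts, and counts become comparisons.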

In summary, the semistructured interview guide approach is the predominant form of interviewing for social science qualitative researchers because it allows a high degree of freedom of responses from those interviewed (thus allowing for novel discoveries) while still maintaining some connection to a research question area or topic of interest. The rest of the chapter assumes the employment of this form.

Creating an Interview Guide

Your interview guide is the instrument used to bridge your research question(s) and what the people you are interviewing want to tell you. Unlike a standardized questionnaire, the questions actually asked do not need to be exactly what you have written down in your guide. The guide is meant to create space for those you are interviewing to talk about the phenomenon of interest, but sometimes you are not even sure what that phenomenon is until you start asking questions. A priority in creating an interview guide is to ensure it offers space. One of the worst mistakes is to create questions that are so specific that the person answering them will not stray. Relatedly, questions that sound “academic” will shut down a lot of respondents. A good interview guide invites respondents to talk about what is important to them rather than making them feel they are performing or being evaluated by you.

Good interview questions should not sound like your “research question” at all. For example, let’s say your research question is “How do patriarchal assumptions influence men’s understanding of climate change and responses to climate change?” It would be worse than unhelpful to ask a respondent, “How do your assumptions about the role of men affect your understanding of climate change?” You need to unpack this into manageable nuggets that pull your respondent into the area of interest without leading him anywhere. You could start by asking him what he thinks about climate change in general. Or, even better, whether he has any concerns about heatwaves or increased tornadoes or polar icecaps melting. Once he starts talking about that, you can ask follow-up questions that bring in issues around gendered roles, perhaps asking if he is married (to a woman) and whether his wife shares his thoughts and, if not, how they negotiate that difference. The fact is, you won’t really know the right questions to ask until he starts talking.

There are several distinct types of questions that can be used in your interview guide, either as main questions or as follow-up probes. If you remember that the point is to leave space for the respondent, you will craft a much more effective interview guide! You will also want to think about the place of time in both the questions themselves (past, present, future orientations) and the sequencing of the questions.

Researcher Note

Suggestion: As you read the next three sections (types of questions, temporality, question sequence), have in mind a particular research question, and try to draft questions and sequence them in a way that opens space for a discussion that helps you answer your research question.

Type of Questions

Experience and behavior questions ask about what a respondent does regularly (their behavior) or has done (their experience). These are relatively easy questions for people to answer because they appear more “factual” and less subjective. This makes them good opening questions. For the study on climate change above, you might ask, “Have you ever experienced an unusual weather event? What happened?” Or “You said you work outside? What is a typical summer workday like for you? How do you protect yourself from the heat?”

Opinion and values questions, in contrast, get inside the minds of those you are interviewing. “Do you think climate change is real? Who or what is responsible for it?” are two such questions. Note that you don’t have to literally ask, “What is your opinion of X?” but you can find a way to ask the specific question relevant to the conversation you are having. These questions are a bit trickier to ask because the answers you get may depend in part on how your respondent perceives you and whether they want to please you or not. We’ve talked a fair amount about being reflective. Here is another place where this comes into play. You need to be aware of the effect your presence might have on the answers you are receiving and adjust accordingly. If you are a woman who is perceived as liberal asking a man who identifies as conservative about climate change, there is a lot of subtext that can be going on in the interview. There is no one right way to resolve this, but you must at least be aware of it.

Feeling questions are questions that ask respondents to draw on their emotional responses. It’s pretty common for academic researchers to forget that we have bodies and emotions, but people’s understandings of the world often operate at this affective level, sometimes unconsciously or barely consciously. It is a good idea to include questions that leave space for respondents to remember, imagine, or relive emotional responses to particular phenomena. “What was it like when you heard your cousin’s house burned down in that wildfire?” doesn’t explicitly use any emotion words, but it allows your respondent to remember what was probably a pretty emotional day. And if they respond in an emotionally neutral way, that is pretty interesting data too. Note that asking someone “How do you feel about X?” is not always going to evoke an emotional response, as they might simply turn around and respond with “I think that…” It is better to craft a question that actually pushes the respondent into the affective category. This might be a specific follow-up to an experience and behavior question—for example, “You just told me about your daily routine during the summer heat. Do you worry it is going to get worse?” or “Have you ever been afraid it will be too hot to get your work accomplished?”

Knowledge questions ask respondents what they actually know about something factual. We have to be careful when we ask these types of questions so that respondents do not feel like we are evaluating them (which would shut them down), but, for example, it is helpful to know when you are having a conversation about climate change that your respondent does in fact know that unusual weather events have increased and that these have been attributed to climate change! Asking these questions can set the stage for deeper questions and can ensure that the conversation makes the same kind of sense to both participants. For example, a conversation about political polarization can be put back on track once you realize that the respondent doesn’t really have a clear understanding that there are two parties in the US. Instead of asking a series of questions about Republicans and Democrats, you might shift your questions to talk more generally about political disagreements (e.g., “people against abortion”). And sometimes what you do want to know is the level of knowledge about a particular program or event (e.g., “Are you aware you can discharge your student loans through the Public Service Loan Forgiveness program?”).

Sensory questions call on all senses of the respondent to capture deeper responses. These are particularly helpful in sparking memory. “Think back to your childhood in Eastern Oregon. Describe the smells, the sounds…” Or you could use these questions to help a person access the full experience of a setting they customarily inhabit: “When you walk through the doors to your office building, what do you see? Hear? Smell?” As with feeling questions , these questions often supplement experience and behavior questions . They are another way of allowing your respondent to report fully and deeply rather than remain on the surface.

Creative questions employ illustrative examples, suggested scenarios, or simulations to get respondents to think more deeply about an issue, topic, or experience. There are many options here. In The Trouble with Passion, Erin Cech (2021) provides a scenario in which “Joe” is trying to decide whether to stay at his decent but boring computer job or follow his passion by opening a restaurant. She asks respondents, “What should Joe do?” Their answers illuminate the attraction of “passion” in job selection. In my own work, I have used a news story about an upwardly mobile young man who no longer has time to see his mother and sisters to probe respondents’ feelings about the costs of social mobility. Jessi Streib and Betsy Leondar-Wright have used single-page cartoon “scenes” to elicit evaluations of potential racial discrimination, sexual harassment, and classism. Barbara Sutton (2010) has employed lists of words (“strong,” “mother,” “victim”) on notecards that she fans out and asks her female respondents to select from and discuss.

Background/Demographic Questions

You most definitely will want to know more about the person you are interviewing in terms of conventional demographic information, such as age, race, gender identity, occupation, and educational attainment. These are not questions that normally open up inquiry. [1] For this reason, my practice has been to include a separate “demographic questionnaire” sheet that I ask each respondent to fill out at the conclusion of the interview. Only include those aspects that are relevant to your study. For example, if you are not exploring religion or religious affiliation, do not include questions about a person’s religion on the demographic sheet. See the example provided at the end of this chapter.

Temporality

Any type of question can have a past, present, or future orientation. For example, if you are asking a behavior question about workplace routine, you might ask the respondent to talk about past work, present work, and ideal (future) work. Similarly, if you want to understand how people cope with natural disasters, you might ask your respondent how they felt then during the wildfire and now in retrospect and whether and to what extent they have concerns for future wildfire disasters. It’s a relatively simple suggestion—don’t forget to ask about past, present, and future—but it can have a big impact on the quality of the responses you receive.

Question Sequence

Having a list of good questions or good question areas is not enough to make a good interview guide. You will want to pay attention to the order in which you ask your questions. Even though any one respondent can derail this order (perhaps by jumping to answer a question you haven’t yet asked), a good advance plan is always helpful. When thinking about sequence, remember that your goal is to get your respondent to open up to you and to say things that might surprise you. To establish rapport, it is best to start with nonthreatening questions. Asking about the present is often the safest place to begin, followed by the past (they have to know you a little bit to get there), and lastly, the future (talking about hopes and fears requires the most rapport). To allow for surprises, it is best to move from very general questions to more particular questions only later in the interview. This ensures that respondents have the freedom to bring up the topics that are relevant to them rather than feel like they are constrained to answer you narrowly. For example, refrain from asking about particular emotions until these have come up previously—don’t lead with them. Often, your more particular questions will emerge only during the course of the interview, tailored to what is emerging in conversation.

Once you have a set of questions, read through them aloud and imagine you are being asked the same questions. Does the set of questions have a natural flow? Would you be willing to answer the very first question to a total stranger? Does your sequence establish facts and experiences before moving on to opinions and values? Did you include prefatory statements, where necessary; transitions; and other announcements? These can be as simple as “Hey, we talked a lot about your experiences as a barista while in college.… Now I am turning to something completely different: how you managed friendships in college.” That is an abrupt transition, but it has been softened by your acknowledgment of that.

Probes and Flexibility

Once you have the interview guide, you will also want to leave room for probes and follow-up questions. As in the sample probe included here, you can write out the obvious probes and follow-up questions in advance. You might not need them, as your respondent might anticipate them and include full responses to the original question. Or you might need to tailor them to how your respondent answered the question. Some common probes and follow-up questions include asking for more details (When did that happen? Who else was there?), asking for elaboration (Could you say more about that?), asking for clarification (Does that mean what I think it means or something else? I understand what you mean, but someone else reading the transcript might not), and asking for contrast or comparison (How did this experience compare with last year’s event?). “Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is being said and what is not said, and being sensitive to the feedback needs of the person being interviewed” (Patton 2002:374). It takes work! And energy. I and many other interviewers I know report feeling emotionally and even physically drained after conducting an interview. You are tasked with active listening and rearranging your interview guide as needed on the fly. If you only ask the questions written down in your interview guide with no deviations, you are doing it wrong. [2]

The Final Question

Every interview guide should include a very open-ended final question that allows for the respondent to say whatever it is they have been dying to tell you but you’ve forgotten to ask. About half the time they are tired too and will tell you they have nothing else to say. But incredibly, some of the most honest and complete responses take place here, at the end of a long interview. You have to realize that the person being interviewed is often discovering things about themselves as they talk to you and that this process of discovery can lead to new insights for them. Making space at the end is therefore crucial. Be sure you convey that you actually do want them to tell you more, so that the offer of “anything else?” is not read as an empty convention where the polite response is no. Here is where you can pull from that active listening and tailor the final question to the particular person. For example, “I’ve asked you a lot of questions about what it was like to live through that wildfire. I’m wondering if there is anything I’ve forgotten to ask, especially because I haven’t had that experience myself” is a much more inviting final question than “Great. Anything you want to add?” It’s also helpful to convey to the person that you have the time to listen to their full answer, even if the allotted time is at an end. After all, there are no more questions to ask, so the respondent knows exactly how much time is left. Do them the courtesy of listening to them!

Conducting the Interview

Once you have your interview guide, you are on your way to conducting your first interview. I always practice my interview guide with a friend or family member. I do this even when the questions don’t make perfect sense for them, as it still helps me realize which questions make no sense, are poorly worded (too academic), or don’t follow sequentially. I also practice the routine I will use for interviewing, which goes something like this:

  • Introduce myself and reintroduce the study
  • Provide consent form and ask them to sign and retain/return copy
  • Ask if they have any questions about the study before we begin
  • Ask if I can begin recording
  • Ask questions (from interview guide)
  • Turn off the recording device
  • Ask if they are willing to fill out my demographic questionnaire
  • Collect questionnaire and, without looking at the answers, place in same folder as signed consent form
  • Thank them and depart

A note on remote interviewing: Interviews have traditionally been conducted face-to-face in a private or quiet public setting. You don’t want a lot of background noise, as this will make transcriptions difficult. During the recent global pandemic, many interviewers, myself included, learned the benefits of interviewing remotely. Although face-to-face is still preferable for many reasons, Zoom interviewing is not a bad alternative, and it does allow more interviews across great distances. Zoom also includes automatic transcription, which significantly cuts down on the time it normally takes to convert our conversations into “data” to be analyzed. These automatic transcriptions are not perfect, however, and you will still need to listen to the recording and clarify and clean up the transcription. Nor do automatic transcriptions include notations of body language or change of tone, which you may want to include. When interviewing remotely, you will want to collect the consent form before you meet: ask them to read, sign, and return it as an email attachment. I think it is better to ask for the demographic questionnaire after the interview, but because some respondents may never return it then, it is probably best to ask for this at the same time as the consent form, in advance of the interview.
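Since automatic transcriptions need cleaning anyway, a small script can give you a plain-text starting point to edit against the recording. The sketch below assumes a WebVTT (.vtt) transcript file, the format Zoom uses for its transcript export; the file name is hypothetical, and the parsing assumes the standard layout of cue number, timestamp line, then text.

```python
import re

def vtt_to_text(path):
    """Collapse a WebVTT transcript into plain lines of speech text."""
    kept = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # Skip the header, blank lines, and bare cue numbers.
            if not line or line == "WEBVTT" or line.isdigit():
                continue
            # Skip timestamp lines like "00:00:01.000 --> 00:00:04.000".
            if re.match(r"\d{2}:\d{2}:\d{2}\.\d{3} -->", line):
                continue
            kept.append(line)
    return "\n".join(kept)

# Hypothetical file name for the first interview's transcript.
print(vtt_to_text("INT001_zoom_transcript.vtt"))
```

You would still listen to the recording, correct the text, and add any notations of tone or body language by hand; the script only strips the machinery around the words.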

What should you bring to the interview? I would recommend bringing two copies of the consent form (one for you and one for the respondent), a demographic questionnaire, a manila folder in which to place the signed consent form and filled-out demographic questionnaire, a printed copy of your interview guide (I print with three-inch right margins so I can jot down notes on the page next to relevant questions), a pen, a recording device, and water.

After the interview, you will want to secure the signed consent form in a locked filing cabinet (if in print) or a password-protected folder on your computer. Using Excel or a similar program that allows tables/spreadsheets, create an identifying number for your interview that links to the consent form without using the name of your respondent. For example, let’s say that I conduct interviews with US politicians, and the first person I meet with is George W. Bush. I will assign the transcription the number “INT#001” and add it to the signed consent form. [3] The signed consent form goes into a locked filing cabinet, and I never use the name “George W. Bush” again. I take the information from the demographic sheet, open my Excel spreadsheet, and add the relevant information in separate columns for the row INT#001: White, male, Republican. When I interview Bill Clinton as my second interview, I include a second row: INT#002: White, male, Democrat. And so on. The only link to the actual name of the respondent and this information is the fact that the consent form (unavailable to anyone but me) has stamped on it the interview number.
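This bookkeeping can also be scripted rather than done by hand in Excel. Below is a minimal sketch of the same scheme, using the example respondents from the paragraph above; the field names and file name are my own choices, not a prescribed format. Note that the respondent’s name never enters the file: the only name-to-number link remains the signed consent form.

```python
import csv

def next_interview_id(rows, prefix="INT#"):
    """Return the next sequential ID: INT#001, INT#002, ... (three digits)."""
    return f"{prefix}{len(rows) + 1:03d}"

# Demographic information keyed by interview number only -- no names.
rows = []
for race, gender, party in [("White", "male", "Republican"),
                            ("White", "male", "Democrat")]:
    rows.append({"id": next_interview_id(rows),
                 "race": race, "gender": gender, "party": party})

# Store the log alongside (but separate from) the locked-away consent forms.
with open("participant_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "race", "gender", "party"])
    writer.writeheader()
    writer.writerows(rows)
```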

Many students get very nervous before their first interview. Actually, many of us are always nervous before the interview! But do not worry—this is normal, and it does pass. Chances are, you will be pleasantly surprised at how comfortable it begins to feel. These “purposeful conversations” are often a delight for both participants. This is not to say that things never go wrong. I often have my students practice several “bad scenarios” (e.g., a respondent that you cannot get to open up; a respondent who is too talkative and dominates the conversation, steering it away from the topics you are interested in; emotions that completely take over; or shocking disclosures you are ill-prepared to handle), but most of the time, things go quite well. Be prepared for the unexpected, but know that the reason interviews are so popular as a technique of data collection is that they are usually richly rewarding for both participants.

One thing that I stress to my methods students and remind myself about is that interviews are still conversations between people. If there’s something you might feel uncomfortable asking someone about in a “normal” conversation, you will likely also feel a bit of discomfort asking it in an interview. Maybe more importantly, your respondent may feel uncomfortable. Social research—especially about inequality—can be uncomfortable. And it’s easy to slip into an abstract, intellectualized, or removed perspective as an interviewer. This is one reason trying out interview questions is important. Another is that sometimes the question sounds good in your head but doesn’t work as well out loud in practice. I learned this the hard way when a respondent asked me how I would answer the question I had just posed, and I realized that not only did I not really know how I would answer it, but I also wasn’t quite as sure I knew what I was asking as I had thought.

—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life , and co-author of Geographies of Campus Inequality

How Many Interviews?

Your research design has included a targeted number of interviews and a recruitment plan (see chapter 5). Follow your plan, but remember that “ saturation ” is your goal. You interview as many people as you can until you reach a point at which you are no longer surprised by what they tell you. This means not that no one after your first twenty interviews will have surprising, interesting stories to tell you but rather that the picture you are forming about the phenomenon of interest to you from a research perspective has come into focus, and none of the interviews are substantially refocusing that picture. That is when you should stop collecting interviews. Note that to know when you have reached this, you will need to read your transcripts as you go. More about this in chapters 18 and 19.
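One rough way to operationalize saturation is to track, after coding each transcript (see chapters 18 and 19), how many codes appear that no earlier interview produced. The sketch below uses made-up codes purely for illustration; when the count of new codes stays at or near zero across several consecutive interviews, the picture has likely stopped refocusing.

```python
# Hypothetical sets of codes assigned to each successive transcript.
coded_interviews = [
    {"debt", "alienation", "mentorship"},
    {"debt", "culture shock"},
    {"alienation", "imposter feelings"},
    {"debt", "mentorship"},        # nothing new
    {"culture shock", "debt"},     # nothing new
]

seen = set()
for i, codes in enumerate(coded_interviews, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new_codes)} new code(s): {sorted(new_codes)}")
```

This is a heuristic, not a stopping rule: a late interview can still surprise you, which is why reading transcripts as you go matters.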

Your Final Product: The Ideal Interview Transcript

A good interview transcript will demonstrate a subtly controlled conversation by the skillful interviewer. In general, you want to see replies that are about one paragraph long, not short sentences and not running on for several pages. Although it is sometimes necessary to follow respondents down tangents, it is also often necessary to pull them back to the questions that form the basis of your research study. This is not really a free conversation, although it may feel like that to the person you are interviewing.

Final Tips from an Interview Master

Annette Lareau is arguably one of the masters of the trade. In Listening to People , she provides several guidelines for good interviews and then offers a detailed example of an interview gone wrong and how it could be addressed (please see the “Further Readings” at the end of this chapter). Here is an abbreviated version of her set of guidelines: (1) interview respondents who are experts on the subjects of most interest to you (as a corollary, don’t ask people about things they don’t know); (2) listen carefully and talk as little as possible; (3) keep in mind what you want to know and why you want to know it; (4) be a proactive interviewer (subtly guide the conversation); (5) assure respondents that there aren’t any right or wrong answers; (6) use the respondent’s own words to probe further (this both allows you to accurately identify what you heard and pushes the respondent to explain further); (7) reuse effective probes (don’t reinvent the wheel as you go—if repeating the words back works, do it again and again); (8) focus on learning the subjective meanings that events or experiences have for a respondent; (9) don’t be afraid to ask a question that draws on your own knowledge (unlike trial lawyers who are trained never to ask a question for which they don’t already know the answer, sometimes it’s worth it to ask risky questions based on your hypotheses or just plain hunches); (10) keep thinking while you are listening (so difficult…and important); (11) return to a theme raised by a respondent if you want further information; (12) be mindful of power inequalities (and never ever coerce a respondent to continue the interview if they want out); (13) take control with overly talkative respondents; (14) expect overly succinct responses, and develop strategies for probing further; (15) balance digging deep and moving on; (16) develop a plan to deflect questions (e.g., let them know you are happy to answer any questions at the end of the interview, but you don’t want to take time away from them now); and at the end, (17) check to see whether you have asked all your questions. You don’t always have to ask everyone the same set of questions, but if there is a big area you have forgotten to cover, now is the time to recover ( Lareau 2021:93–103 ).

Sample: Demographic Questionnaire

ASA Taskforce on First-Generation and Working-Class Persons in Sociology – Class Effects on Career Success

Supplementary Demographic Questionnaire

Thank you for your participation in this interview project. We would like to collect a few pieces of key demographic information from you to supplement our analyses. Your answers to these questions will be kept confidential and stored by ID number. All of your responses here are entirely voluntary!

What best captures your race/ethnicity? (please check any/all that apply)

  • White (non-Hispanic/Latina/o/x)
  • Black or African American
  • Hispanic, Latino/a/x, or Spanish origin
  • Asian or Asian American
  • American Indian or Alaska Native
  • Middle Eastern or North African
  • Native Hawaiian or Pacific Islander
  • Other (please write in: ________________)

What is your current position?

  • Grad Student
  • Full Professor

Please check any and all of the following that apply to you:

  • I identify as a working-class academic
  • I was the first in my family to graduate from college
  • I grew up poor

What best reflects your gender?

  • Transgender female/Transgender woman
  • Transgender male/Transgender man
  • Gender queer/gender nonconforming

Anything else you would like us to know about you?

Example: Interview Guide

In this example, follow-up prompts are italicized. Note the sequence of questions. That second question often elicits an entire life history, answering several later questions in advance.

Introduction Script/Question

Thank you for participating in our survey of ASA members who identify as first-generation or working-class. As you may have heard, ASA has sponsored a taskforce on first-generation and working-class persons in sociology, and we are interested in hearing from those who so identify. Your participation in this interview will help advance our knowledge in this area.

  • The first thing we would like to ask you is why you volunteered to be part of this study. What does it mean to you to be first-gen or working class? Why were you willing to be interviewed?
  • How did you decide to become a sociologist?
  • Can you tell me a little bit about where you grew up? ( prompts: what did your parent(s) do for a living?  What kind of high school did you attend?)
  • Has this identity been salient to your experience? (how? How much?)
  • How welcoming was your grad program? Your first academic employer?
  • Why did you decide to pursue sociology at the graduate level?
  • Did you experience culture shock in college? In graduate school?
  • Has your FGWC status shaped how you’ve thought about where you went to school? debt? etc?
  • Were you mentored? How did this work (not work)?  How might it?
  • What did you consider when deciding where to go to grad school? Where to apply for your first position?
  • What, to you, is a mark of career success? Have you achieved that success?  What has helped or hindered your pursuit of success?
  • Do you think sociology, as a field, cares about prestige?
  • Let’s talk a little bit about intersectionality. How does being first-gen/working class work alongside other identities that are important to you?
  • What do your friends and family think about your career? Have you had any difficulty relating to family members or past friends since becoming highly educated?
  • Do you have any debt from college/grad school? Are you concerned about this?  Could you explain more about how you paid for college/grad school?  (here, include assistance from family, fellowships, scholarships, etc.)
  • (You’ve mentioned issues or obstacles you had because of your background.) What could have helped?  Or, who or what did? Can you think of fortuitous moments in your career?
  • Do you have any regrets about the path you took?
  • Is there anything else you would like to add? Anything that the Taskforce should take note of, that we did not ask you about here?

Further Readings

Britten, Nicky. 1995. “Qualitative Interviews in Medical Research.” BMJ: British Medical Journal 311(6999):251–253. A good basic overview of interviewing, particularly useful for students of public health and medical research generally.

Corbin, Juliet, and Janice M. Morse. 2003. “The Unstructured Interactive Interview: Issues of Reciprocity and Risks When Dealing with Sensitive Topics.” Qualitative Inquiry 9(3):335–354. Weighs the potential benefits and harms of conducting interviews on topics that may cause emotional distress. Argues that the researcher’s skills and code of ethics should ensure that the interviewing process provides more of a benefit to both participant and researcher than a harm to the former.

Gerson, Kathleen, and Sarah Damaske. 2020. The Science and Art of Interviewing . New York: Oxford University Press. A useful guidebook/textbook for both undergraduates and graduate students, written by sociologists.

Kvale, Steinar. 2007. Doing Interviews. London: SAGE. An easy-to-follow guide to conducting and analyzing interviews, written by a psychologist.

Lamont, Michèle, and Ann Swidler. 2014. “Methodological Pluralism and the Possibilities and Limits of Interviewing.” Qualitative Sociology 37(2):153–171. Written as a response to various debates surrounding the relative value of interview-based studies and ethnographic studies defending the particular strengths of interviewing. This is a must-read article for anyone seriously engaging in qualitative research!

Pugh, Allison J. 2013. “What Good Are Interviews for Thinking about Culture? Demystifying Interpretive Analysis.” American Journal of Cultural Sociology 1(1):42–68. Another defense of interviewing written against those who champion ethnographic methods as superior, particularly in the area of studying culture. A classic.

Rapley, Timothy John. 2001. “The ‘Artfulness’ of Open-Ended Interviewing: Some Considerations in Analyzing Interviews.” Qualitative Research 1(3):303–323. Argues for the importance of the “local context” of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.

Weiss, Robert S. 1995. Learning from Strangers: The Art and Method of Qualitative Interview Studies . New York: Simon and Schuster. A classic and well-regarded textbook on interviewing. Because Weiss has extensive experience conducting surveys, he contrasts the qualitative interview with the survey questionnaire well; particularly useful for those trained in the latter.

  • I say “normally” because how people understand their various identities can itself be an expansive topic of inquiry. Here, I am merely talking about collecting otherwise unexamined demographic data, similar to how we ask people to check boxes on surveys.
  • Again, this applies to “semistructured in-depth interviewing.” When conducting standardized questionnaires, you will want to ask each question exactly as written, without deviations!
  • I always include “INT” in the number because I sometimes have other kinds of data with their own numbering: FG#001 would mean the first focus group, for example. I also always include three-digit spaces, as this allows for up to 999 interviews (or, more realistically, allows for me to interview up to one hundred persons without having to reset my numbering system).

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews: see also semistructured interview, structured interview, and unstructured interview.

A document listing key questions and question areas for use during an interview.  It is used most often for semi-structured interviews.  A good interview guide may have no more than ten primary questions for two hours of interviewing, but these ten questions will be supplemented by probes and relevant follow-ups throughout the interview.  Most IRBs require the inclusion of the interview guide in applications for review.  See also interview and  semi-structured interview .

A data-collection method that relies on casual, conversational, and informal interviewing.  Despite its apparent conversational nature, the researcher usually has a set of particular questions or question areas in mind but allows the interview to unfold spontaneously.  This is a common data-collection technique among ethnographers.  Compare to the semi-structured or in-depth interview .

A form of interview that follows a standard guide of questions asked, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview.  The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences.  It is sometimes referred to as an “in-depth” interview.  See also interview and  interview guide .

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context. Observational methods are the key tools employed by ethnographers and in Grounded Theory.

Follow-up questions used in a semi-structured interview  to elicit further elaboration.  Suggested prompts can be included in the interview guide  to be used/deployed depending on how the initial question was answered or if the topic of the prompt does not emerge spontaneously.

A form of interview that follows a strict set of questions, asked in a particular order, for all interview subjects. The questions are also the kind that elicit short answers, and the data are more “informative” than probing. This is often used in mixed-methods studies, accompanying a survey instrument. Because there is no room for nuance or the exploration of meaning in structured interviews, qualitative researchers tend to employ semi-structured interviews instead. See also interview.

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

An interview variant in which a person’s life story is elicited in a narrative form.  Turning points and key themes are established by the researcher and used as data points for further analysis.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

J Basic Clin Pharm, vol. 5(4), September–November 2014

Qualitative research method: interviewing and observation

Shazia Jamshed

Department of Pharmacy Practice, Kulliyyah of Pharmacy, International Islamic University Malaysia, Kuantan Campus, Pahang, Malaysia

Buckley and Chiang define research methodology as “a strategy or architectural design by which the researcher maps out an approach to problem-finding or problem-solving.”[1] According to Crotty, research methodology is a comprehensive strategy “that silhouettes our choice and use of specific methods relating them to the anticipated outcomes,”[2] but the choice of research methodology is based upon the type and features of the research problem.[3] According to Johnson et al., mixed-method research is “a class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, theories and or language into a single study.”[4] In order to have diverse opinions and views, qualitative findings need to be supplemented with quantitative results.[5] Therefore, these research methodologies are considered to be complementary to each other rather than incompatible.[6]

Qualitative research methodology is considered suitable when the researcher or investigator either investigates a new field of study or intends to ascertain and theorize prominent issues.[6,7] There are many qualitative methods developed to yield an in-depth and extensive understanding of issues by means of textual interpretation; the most common types are interviewing and observation.[7]

Interviewing

This is the most common format of data collection in qualitative research. According to Oakley, the qualitative interview is a type of framework in which practices and standards are not only recorded but also achieved, challenged, and reinforced.[8] As no research interview entirely lacks structure,[9] most qualitative research interviews are either semi-structured, lightly structured, or in-depth.[9] Unstructured interviews are generally suggested for long-term fieldwork; they allow respondents to express themselves in their own way and at their own pace, with minimal hold on their responses.[10]

Pioneers of ethnography developed the use of unstructured interviews with local key informants, that is, by collecting data through observation and field notes as well as by involving themselves with the study participants. To be precise, the unstructured interview resembles a conversation more than an interview and has been described as a “controlled conversation” that is skewed towards the interests of the interviewer.[11] Non-directive interviews, a form of unstructured interview, aim to gather in-depth information and usually do not have a pre-planned set of questions.[11] Another type of unstructured interview is the focused interview, in which the interviewer is well aware of the respondent and, when the conversation deviates from the main issue, generally refocuses the respondent towards the key subject.[11] Yet another type of unstructured interview is the informal, conversational interview, based on an unplanned set of questions that are generated instantaneously during the interview.[11]

In contrast, semi-structured interviews are in-depth interviews in which respondents answer preset open-ended questions; they are widely employed by healthcare professionals in their research. Semi-structured, in-depth interviews are utilized extensively as an interviewing format, possibly with an individual or sometimes even with a group.[6] These interviews are conducted once, with an individual or with a group, and generally last from 30 minutes to more than an hour.[12] Semi-structured interviews are based on a semi-structured interview guide, which is a schematic presentation of the questions or topics that need to be explored by the interviewer.[12] To achieve optimum use of interview time, interview guides serve the useful purpose of exploring many respondents more systematically and comprehensively, as well as keeping the interview focused on the desired line of action.[12] The questions in the interview guide comprise the core question and many associated questions related to the central question, which in turn improve through pilot testing of the interview guide.[7] To capture the interview data more effectively, recording of the interviews is considered an appropriate choice, though sometimes a matter of controversy between the researcher and the respondent. Handwritten notes taken during the interview are relatively unreliable, and the researcher might miss some key points. Recording the interview makes it easier for the researcher to focus on the interview content and the verbal prompts, and it enables the transcriptionist to generate a “verbatim transcript” of the interview.

Similarly, in focus groups, an invited group of people is interviewed in a discussion setting in the presence of a session moderator; these discussions generally last about 90 minutes.[7] Like every research technique, group discussions have their own merits and demerits: they give participants the chance to express their opinions openly, but only a limited set of issues can be covered in such settings, which may lead to fewer initiatives and suggestions about the research topic.

Observation

Observation is a qualitative research method that includes not only participant observation but also ethnography and fieldwork. Observational research designs often involve multiple study sites, and observational data can be integrated as auxiliary or confirmatory research.[11]

Research can be visualized and perceived as a painstaking, methodical effort to examine, investigate, and restructure realities, theories, and applications. Research methods reflect the approach to tackling the research problem. Depending upon the need, the research method can be an amalgam of qualitative and quantitative approaches, or either one independently. By adopting a qualitative methodology, a prospective researcher fine-tunes preconceived notions and extrapolates the thought process, analyzing and estimating the issues from an in-depth perspective. This can be carried out through one-to-one interviews or issue-directed discussions. Observational methods are, sometimes, supplemental means for corroborating research findings.

Grad Coach

Qualitative Research 101: Interviewing

5 Common Mistakes To Avoid When Undertaking Interviews

By: David Phair (PhD) and Kerryn Warren (PhD) | March 2022

Undertaking interviews is potentially the most important step in the qualitative research process. If you don’t collect useful, useable data in your interviews, you’ll struggle through the rest of your dissertation or thesis.  Having helped numerous students with their research over the years, we’ve noticed some common interviewing mistakes that first-time researchers make. In this post, we’ll discuss five costly interview-related mistakes and outline useful strategies to avoid making these.

Overview: 5 Interviewing Mistakes

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind

1. Not having a clear interview strategy

The first common mistake that we’ll look at is that of starting the interviewing process without having first come up with a clear interview strategy or plan of action. While it’s natural to be keen to get started engaging with your interviewees, a lack of planning can result in a mess of data and inconsistency between interviews.

There are several design choices to decide on and plan for before you start interviewing anyone. Some of the most important questions you need to ask yourself before conducting interviews include:

  • What are the guiding research aims and research questions of my study?
  • Will I use a structured, semi-structured or unstructured interview approach?
  • How will I record the interviews (audio or video)?
  • Who will be interviewed and by whom?
  • What ethics and data law considerations do I need to adhere to?
  • How will I analyze my data? 

Let’s take a quick look at some of these.

The core objective of the interviewing process is to generate useful data that will help you address your overall research aims. Therefore, your interviews need to be conducted in a way that directly links to your research aims, objectives and research questions (i.e. your “golden thread”). This means that you need to carefully consider the questions you’ll ask to ensure that they align with and feed into your golden thread. If any question doesn’t align with this, you may want to consider scrapping it.

Another important design choice is whether you’ll use an unstructured, semi-structured or structured interview approach . For semi-structured interviews, you will have a list of questions that you plan to ask and these questions will be open-ended in nature. You’ll also allow the discussion to digress from the core question set if something interesting comes up. This means that the type of information generated might differ a fair amount between interviews.

Contrasted to this, a structured approach to interviews is more rigid: a specific set of closed questions is developed and asked of each interviewee in exactly the same order. Closed questions have a limited set of possible answers, often single-word answers. Therefore, you need to think about what you’re trying to achieve with your research project (i.e. your research aims) and decide which approach would be best suited to your case.

It is also important to plan ahead with regard to who will be interviewed and how. You need to think about how you will approach potential interviewees to secure their cooperation, who will conduct the interviews, when to conduct the interviews and how to record them. For each of these decisions, it’s also essential to make sure that all ethical considerations and data protection laws are taken into account.

Finally, you should think through how you plan to analyze the data (i.e., your qualitative analysis method) generated by the interviews. Different types of analysis rely on different types of data, so you need to ensure you’re asking the right types of questions and correctly guiding your respondents.

Simply put, you need to have a plan of action regarding the specifics of your interview approach before you start collecting data. If not, you’ll end up drifting in your approach from interview to interview, which will result in inconsistent, unusable data.


2. Not having good interview technique

While you’re generally not expected to be an expert interviewer for a dissertation or thesis, it is important to practice good interview technique and develop basic interviewing skills.

Let’s go through some basics that will help the process along.

Firstly, before the interview, make sure you know your interview questions well and have a clear idea of what you want from the interview. Naturally, the specificity of your questions will depend on whether you’re taking a structured, semi-structured or unstructured approach, but you still need a consistent starting point. Ideally, you should develop an interview guide beforehand (more on this later) that details your core questions and links these to the research aims, objectives and research questions.

Before you undertake any interviews, it’s a good idea to do a few mock interviews with friends or family members. This will help you get comfortable with the interviewer role, prepare for potentially unexpected answers and give you a good idea of how long the interview will take to conduct. In the interviewing process, you’re likely to encounter two kinds of challenging interviewees: the two-word respondent and the respondent who meanders and babbles. Therefore, you should prepare yourself for both and come up with a plan to respond to each in a way that will allow the interview to continue productively.

To begin the formal interview , provide the person you are interviewing with an overview of your research. This will help to calm their nerves (and yours) and contextualize the interaction. Ultimately, you want the interviewee to feel comfortable and be willing to be open and honest with you, so it’s useful to start in a more casual, relaxed fashion and allow them to ask any questions they may have. From there, you can ease them into the rest of the questions.

As the interview progresses, avoid asking leading questions (i.e., questions that assume something about the interviewee or their response). Make sure that you speak clearly and slowly, using plain language and being ready to paraphrase questions if the person you are interviewing misunderstands. Be particularly careful when interviewing people who speak English as a second language, to ensure that you’re both on the same page.

Engage with the interviewee by listening to them carefully and acknowledging that you are listening to them by smiling or nodding. Show them that you’re interested in what they’re saying and thank them for their openness as appropriate. This will also encourage your interviewee to respond openly.


3. Not securing a suitable location and quality equipment

Where you conduct your interviews and the equipment you use to record them both play an important role in how the process unfolds. Therefore, you need to think carefully about each of these variables before you start interviewing.

Poor location: A bad location can result in the quality of your interviews being compromised, interrupted, or cancelled. If you are conducting physical interviews, you’ll need a location that is quiet, safe, and welcoming . It’s very important that your location of choice is not prone to interruptions (the workplace office is generally problematic, for example) and has suitable facilities (such as water, a bathroom, and snacks).

If you are conducting online interviews, you need to consider a few other factors. Importantly, you need to make sure that both you and your respondent have access to a good, stable internet connection and electricity. Check ahead of time that both of you know how to use the relevant software and that it is accessible (sometimes meeting platforms are blocked by workplace policies or firewalls). It’s also good to have alternative platforms in place (such as WhatsApp, Zoom, or Teams) to cater for these types of issues.

Poor equipment: Using poor-quality recording equipment or using equipment incorrectly means that you will have trouble transcribing, coding, and analyzing your interviews. This can be a major issue, as some of your interview data may go completely to waste if not recorded well. So, make sure that you use good-quality recording equipment and that you know how to use it correctly.

To avoid issues, you should always conduct test recordings before every interview to ensure that you can use the relevant equipment properly. It’s also a good idea to spot check each recording afterwards, just to make sure it was recorded as planned. If your equipment uses batteries, be sure to always carry a spare set.


4. Not having a basic risk management plan

Many possible issues can arise during the interview process. Not planning for these issues can mean that you are left with compromised data that might not be useful to you. Therefore, it’s important to map out some sort of risk management plan ahead of time, considering the potential risks, how you’ll minimize their probability and how you’ll manage them if they materialize.

Common potential issues related to the actual interview include cancellations (people pulling out), delays (such as getting stuck in traffic), language and accent differences (made worse by poor internet connections), and problems with internet connections and power supply. Other issues can also occur in the interview itself. For example, the interviewee could drift off-topic, or you might encounter an interviewee who does not say much at all.

You can prepare for these potential issues by considering possible worst-case scenarios and preparing a response for each scenario. For instance, it is important to plan a backup date just in case your interviewee cannot make it to the first meeting you scheduled with them. It’s also a good idea to factor in a 30-minute gap between your interviews for the instances where someone might be late, or an interview runs overtime for other reasons. Make sure that you also plan backup questions that could be used to bring a respondent back on topic if they start rambling, or questions to encourage those who are saying too little.

In general, it’s best practice to plan to conduct more interviews than you think you need (this is called oversampling). Doing so will allow you some room for error if there are interviews that don’t go as planned, or if some interviewees withdraw. If you need 10 interviews, it is a good idea to plan for 15. Likely, a few will cancel, delay, or not produce useful data.


5. Not keeping your golden thread front of mind

We touched on this a little earlier, but it is a key point that should be central to your entire research process. You don’t want to end up with pages and pages of data after conducting your interviews and realize that it is not useful to your research aims. Your research aims, objectives and research questions – i.e., your golden thread – should influence every design decision and should guide the interview process at all times.

A useful way to avoid this mistake is by developing an interview guide before you begin interviewing your respondents. An interview guide is a document that contains all of your questions with notes on how each of the interview questions is linked to the research question(s) of your study. You can also include your research aims and objectives here for a more comprehensive linkage. 

You can easily create an interview guide by drawing up a table with one column containing your core interview questions. Then add another column with your research questions, another with expectations that you may have in light of the relevant literature and another with backup or follow-up questions. As mentioned, you can also bring in your research aims and objectives to help you connect them all together.
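If it helps to see this concretely, here is a minimal sketch in Python of how such a four-column guide table might be drafted and saved to a CSV file. The column names and all question text below are invented for illustration, not taken from any particular template:

```python
import csv

# Illustrative rows: each core interview question is linked to the
# research question it serves, an expectation drawn from the literature,
# and a backup/follow-up question. All content here is hypothetical.
guide = [
    {
        "core_question": "How did you decide to change careers?",
        "research_question": "RQ1: What drives mid-career transitions?",
        "expectation": "Literature suggests financial push factors dominate.",
        "backup_question": "Can you walk me through the week you decided?",
    },
    {
        "core_question": "Who did you talk to before deciding?",
        "research_question": "RQ2: What role do social networks play?",
        "expectation": "Expect family to feature more than colleagues.",
        "backup_question": "Was there anyone whose opinion you avoided?",
    },
]

# Write the guide out so it can be refined in any spreadsheet tool.
with open("interview_guide.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=guide[0].keys())
    writer.writeheader()
    writer.writerows(guide)
```

Opening the resulting file in a spreadsheet then gives you a working document you can keep refining as you pilot your questions.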

Recap: Qualitative Interview Mistakes

In this post, we’ve discussed 5 common costly mistakes that are easy to make in the process of planning and conducting qualitative interviews.

To recap, these include:

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind


Published: 15 September 2022

Interviews in the social sciences

  • Eleanor Knott   ORCID: orcid.org/0000-0002-9131-3939 1 ,
  • Aliya Hamid Rao   ORCID: orcid.org/0000-0003-0674-4206 1 ,
  • Kate Summers   ORCID: orcid.org/0000-0001-9964-0259 1 &
  • Chana Teeger   ORCID: orcid.org/0000-0002-5046-8280 1  

Nature Reviews Methods Primers volume 2, Article number: 73 (2022)


In-depth interviews are a versatile form of qualitative data collection used by researchers across the social sciences. They allow individuals to explain, in their own words, how they understand and interpret the world around them. Interviews represent a deceptively familiar social encounter in which people interact by asking and answering questions. They are, however, a very particular type of conversation, guided by the researcher and used for specific ends. This dynamic introduces a range of methodological, analytical and ethical challenges, for novice researchers in particular. In this Primer, we focus on the stages and challenges of designing and conducting an interview project and analysing data from it, as well as strategies to overcome such challenges.


Introduction

In-depth interviews are a qualitative research method that follow a deceptively familiar logic of human interaction: they are conversations where people talk with each other, interact and pose and answer questions 1 . An interview is a specific type of interaction in which — usually and predominantly — a researcher asks questions about someone’s life experience, opinions, dreams, fears and hopes and the interview participant answers the questions 1 .

Interviews will often be used as a standalone method or combined with other qualitative methods, such as focus groups or ethnography, or quantitative methods, such as surveys or experiments. Although interviewing is a frequently used method, it should not be viewed as an easy default for qualitative researchers 2 . Interviews are also not suited to answering all qualitative research questions, but instead have specific strengths that should guide whether or not they are deployed in a research project. Whereas ethnography might be better suited to trying to observe what people do, interviews provide a space for extended conversations that allow the researcher insights into how people think and what they believe. Quantitative surveys also give these kinds of insights, but they use pre-determined questions and scales, privileging breadth over depth and often overlooking harder-to-reach participants.

In-depth interviews can take many different shapes and forms, often with more than one participant or researcher. For example, interviews might be highly structured (using an almost survey-like interview guide), entirely unstructured (taking a narrative and free-flowing approach) or semi-structured (using a topic guide ). Researchers might combine these approaches within a single project depending on the purpose of the interview and the characteristics of the participant. Whatever form the interview takes, researchers should be mindful of the dynamics between interviewer and participant and factor these in at all stages of the project.

In this Primer, we focus on the most common type of interview: one researcher taking a semi-structured approach to interviewing one participant using a topic guide. Focusing on how to plan research using interviews, we discuss the necessary stages of data collection. We also discuss the stages and thought-process behind analysing interview material to ensure that the richness and interpretability of interview material is maintained and communicated to readers. The Primer also tracks innovations in interview methods and discusses the developments we expect over the next 5–10 years.

We wrote this Primer as researchers from sociology, social policy and political science. We note our disciplinary background because we acknowledge that there are disciplinary differences in how interviews are approached and understood as a method.

Experimentation

Here we address research design considerations and data collection issues focusing on topic guide construction and other pragmatics of the interview. We also explore issues of ethics and reflexivity that are crucial throughout the research project.

Research design

Participant selection

Participants can be selected and recruited in various ways for in-depth interview studies. The researcher must first decide what defines the people or social groups being studied. Often, this means moving from an abstract theoretical research question to a more precise empirical one. For example, the researcher might be interested in how people talk about race in contexts of diversity. Empirical settings in which this issue could be studied could include schools, workplaces or adoption agencies. The best research designs should clearly explain why the particular setting was chosen. Often there are both intrinsic and extrinsic reasons for choosing to study a particular group of people at a specific time and place 3 . Intrinsic motivations relate to the fact that the research is focused on an important specific social phenomenon that has been understudied. Extrinsic motivations speak to the broader theoretical research questions and explain why the case at hand is a good one through which to address them empirically.

Next, the researcher needs to decide which types of people they would like to interview. This decision amounts to delineating the inclusion and exclusion criteria for the study. The criteria might be based on demographic variables, like race or gender, but they may also be context-specific, for example, years of experience in an organization. These should be decided based on the research goals. Researchers should be clear about what characteristics would make an individual a candidate for inclusion in the study (and what would exclude them).

The next step is to identify and recruit the study’s sample. Usually, many more people fit the inclusion criteria than can be interviewed. In cases where lists of potential participants are available, the researcher might want to employ stratified sampling, dividing the list by characteristics of interest before sampling.

When there are no lists, researchers will often employ purposive sampling. Many researchers consider purposive sampling the most useful mode for interview-based research since the number of interviews to be conducted is too small to aim to be statistically representative 4 . Instead, the aim is not breadth, via representativeness, but depth, via rich insights about a set of participants. In addition to purposive sampling, researchers often use snowball sampling. Both purposive and snowball sampling can be combined with quota sampling. All three types of sampling aim to ensure a variety of perspectives within the confines of a research project. A goal for in-depth interview studies can be to sample for range, being mindful of recruiting a diversity of participants fitting the inclusion criteria.
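As a small illustration of the mechanics, the sketch below shows stratified sampling from a recruitment list in Python. The list, the stratifying characteristic and the quota per stratum are all invented assumptions; purposive or snowball recruitment would of course replace the random draw with researcher judgement or participant referrals:

```python
import random
from collections import defaultdict

# Hypothetical recruitment list; 'role' is the characteristic of interest.
candidates = [
    {"name": "P01", "role": "teacher"}, {"name": "P02", "role": "teacher"},
    {"name": "P03", "role": "principal"}, {"name": "P04", "role": "teacher"},
    {"name": "P05", "role": "principal"}, {"name": "P06", "role": "counsellor"},
]

def stratified_sample(people, stratum_key, per_stratum, seed=42):
    """Divide the list by a characteristic of interest, then sample
    up to `per_stratum` people from each stratum."""
    random.seed(seed)
    strata = defaultdict(list)
    for person in people:
        strata[person[stratum_key]].append(person)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, min(per_stratum, len(group))))
    return sample

print(stratified_sample(candidates, "role", per_stratum=2))
```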

Study design

The total number of interviews depends on many factors, including the population studied, whether comparisons are to be made and the duration of interviews. Studies that rely on quota sampling where explicit comparisons are made between groups will require a larger number of interviews than studies focused on one group only. Studies where participants are interviewed over several hours, days or even repeatedly across years will tend to have fewer participants than those that entail a one-off engagement.

Researchers often stop interviewing when new interviews confirm findings from earlier interviews with no new or surprising insights (saturation) 4 , 5 , 6 . As a criterion for research design, saturation assumes that data collection and analysis are happening in tandem and that researchers will stop collecting new data once there is no new information emerging from the interviews. This is not always possible. Researchers rarely have time for systematic data analysis during data collection and they often need to specify their sample in funding proposals prior to data collection. As a result, researchers often draw on existing reports of saturation to estimate a sample size prior to data collection. These suggest between 12 and 20 interviews per category of participant (although researchers have reported saturation with samples that are both smaller and larger than this) 7 , 8 , 9 . The idea of saturation has been critiqued by many qualitative researchers because it assumes that meaning inheres in the data, waiting to be discovered — and confirmed — once saturation has been reached 7 . In-depth interview data are often multivalent and can give rise to different interpretations. The important consideration is, therefore, not merely how many participants are interviewed, but whether one’s research design allows for collecting rich and textured data that provide insight into participants’ understandings, accounts, perceptions and interpretations.

Sometimes, researchers will conduct interviews with more than one participant at a time. Researchers should consider the benefits and shortcomings of such an approach. Joint interviews may, for example, give researchers insight into how caregivers agree or debate childrearing decisions. At the same time, they may be less adaptive to exploring aspects of caregiving that participants may not wish to disclose to each other. In other cases, there may be more than one person interviewing each participant, such as when an interpreter is used, and so it is important to consider during the research design phase how this might shape the dynamics of the interview.

Data collection

Semi-structured interviews are typically organized around a topic guide comprised of an ordered set of broad topics (usually 3–5). Each topic includes a set of questions that form the basis of the discussion between the researcher and participant (Fig.  1 ). These topics are organized around key concepts that the researcher has identified (for example, through a close study of prior research, or perhaps through piloting a small, exploratory study) 5 .

Figure 1 | a: Elaborated topics the researcher wants to cover in the interview and example questions. b: An example topic arc. Using such an arc, one can think flexibly about the order of topics. Considering the main question for each topic will help to determine the best order for the topics. After conducting some interviews, the researcher can move topics around if a different order seems to make sense.

Topic guide

One common way to structure a topic guide is to start with relatively easy, open-ended questions (Table  1 ). Opening questions should be related to the research topic but broad and easy to answer, so that they help to ease the participant into conversation.

After these broad, opening questions, the topic guide may move into topics that speak more directly to the overarching research question. The interview questions will be accompanied by probes designed to elicit concrete details and examples from the participant (see Table  1 ).

Abstract questions are often easier for participants to answer once they have been asked more concrete questions. In our experience, for example, questions about feelings can be difficult for some participants to answer, but when following probes concerning factual experiences these questions can become less challenging. After the main themes of the topic guide have been covered, the topic guide can move onto closing questions. At this stage, participants often repeat something they have said before, although they may sometimes introduce a new topic.

Interviews are especially well suited to gaining a deeper insight into people’s experiences. Getting these insights largely depends on the participants’ willingness to talk to the researcher. We recommend designing open-ended questions that are more likely to elicit an elaborated response and extended reflection from participants rather than questions that can be answered with yes or no.

Questions should avoid foreclosing the possibility that the participant might disagree with the premise of the question. Take for example the question: “Do you support the new family-friendly policies?” This question minimizes the possibility of the participant disagreeing with the premise of this question, which assumes that the policies are ‘family-friendly’ and asks for a yes or no answer. Instead, asking more broadly how a participant feels about the specific policy being described as ‘family-friendly’ (for example, a work-from-home policy) allows them to express agreement, disagreement or impartiality and, crucially, to explain their reasoning 10 .

For an uninterrupted interview that will last between 90 and 120 minutes, the topic guide should be one to two single-spaced pages with questions and probes. Ideally, the researcher will memorize the topic guide before embarking on the first interview. It is fine to carry a printed-out copy of the topic guide but memorizing the topic guide ahead of the interviews can often make the interviewer feel well prepared in guiding the participant through the interview process.
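For researchers who prefer to keep their guide in a structured file rather than prose, the topic guide described above (a handful of broad topics, each with open-ended questions and attached probes) can be sketched as a nested structure. The following is a minimal, hypothetical Python example; every topic, question and probe is invented for illustration:

```python
# A hypothetical semi-structured topic guide: an ordered set of broad
# topics, each holding open-ended questions and concrete probes.
topic_guide = {
    "Opening / background": {
        "Tell me a bit about your current job.": [
            "How long have you been in this role?",
        ],
    },
    "Experience of the policy": {
        "How do you feel about the new work-from-home policy?": [
            "Can you give me a concrete example?",
            "What happened next?",
        ],
    },
    "Closing": {
        "Is there anything we haven't covered that you'd like to add?": [],
    },
}

# Print the guide as a compact one-page crib sheet for the interview.
for topic, questions in topic_guide.items():
    print(topic.upper())
    for question, probes in questions.items():
        print(f"  - {question}")
        for probe in probes:
            print(f"      probe: {probe}")
```

Printing it produces the kind of one-to-two-page crib sheet recommended above, which can then be memorized before the first interview.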

Although the topic guide helps the researcher stay on track with the broad areas they want to cover, there is no need for the researcher to feel tied down by the topic guide. For instance, if a participant brings up a theme that the researcher intended to discuss later or a point the researcher had not anticipated, the researcher may well decide to follow the lead of the participant. The researcher’s role extends beyond simply stating the questions; it entails listening and responding, making split-second decisions about what line of inquiry to pursue and allowing the interview to proceed in unexpected directions.

Optimizing the interview

The ideal place for an interview will depend on the study and what is feasible for participants. Generally, a place where the participant and researcher can both feel relaxed, where the interview can be uninterrupted and where noise or other distractions are limited is ideal. But this may not always be possible and so the researcher needs to be prepared to adapt their plans within what is feasible (and desirable for participants).

Another key tool for the interview is a recording device (assuming that permission for recording has been given). Recording can be important to capture what the participant says verbatim. Additionally, it can allow the researcher to focus on determining what probes and follow-up questions they want to pursue rather than focusing on taking notes. Sometimes, however, a participant may not allow the researcher to record, or the recording may fail. If the interview is not recorded, we suggest that the researcher takes brief notes during the interview, if feasible, and then makes thorough notes immediately after the interview, trying to remember the participant’s facial expressions, gestures and tone of voice. Not having a recording of an interview need not prevent the researcher from getting analytical value from it.

As soon as possible after each interview, we recommend that the researcher write a one-page interview memo comprising three key sections. The first section should identify two to three important moments from the interview. What constitutes important is up to the researcher’s discretion 9 . The researcher should note down what happened in these moments, including the participant’s facial expressions, gestures, tone of voice and maybe even the sensory details of their surroundings. This exercise is about capturing ethnographic detail from the interview. The second part of the interview memo is the analytical section with notes on how the interview fits in with previous interviews, for example, where the participant’s responses concur or diverge from other responses. The third part consists of a methodological section where the researcher notes their perception of their relationship with the participant. The interview memo allows the researcher to think critically about their positionality and practice reflexivity — key concepts for an ethical and transparent research practice in qualitative methodology 11 , 12 .
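Because the memo always has the same three-part shape, some researchers keep a skeleton ready so that no section is skipped in the rush after an interview. Below is a minimal sketch, assuming a simple per-participant text-file convention of our own invention:

```python
from datetime import date

# Skeleton mirroring the three memo sections described above.
MEMO_TEMPLATE = """Interview memo — participant {pid} — {when}

1. KEY MOMENTS (2-3, with facial expressions, gestures, tone, setting):
-

2. ANALYTICAL NOTES (how this interview concurs with / diverges from others):
-

3. METHODOLOGICAL NOTES (my relationship with the participant, positionality):
-
"""

def write_memo_skeleton(pid: str) -> str:
    """Create an empty memo file for the given (hypothetical) participant id."""
    path = f"memo_{pid}.txt"
    with open(path, "w") as f:
        f.write(MEMO_TEMPLATE.format(pid=pid, when=date.today().isoformat()))
    return path

write_memo_skeleton("P07")
```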

Ethics and reflexivity

All elements of an in-depth interview can raise ethical challenges and concerns. Good ethical practice in interview studies often means going beyond the ethical procedures mandated by institutions 13 . While discussions and requirements of ethics can differ across disciplines, here we focus on the most pertinent considerations for interviews across the research process for an interdisciplinary audience.

Ethical considerations prior to interview

Before conducting interviews, researchers should consider harm minimization, informed consent, anonymity and confidentiality, and reflexivity and positionality. It is important for the researcher to develop their own ethical sensitivities and sensibilities by gaining training in interview and qualitative methods, reading methodological and field-specific texts on interviews and ethics and discussing their research plans with colleagues.

Researchers should map the potential harm to consider how this can be minimized. Primarily, researchers should consider harm from the participants’ perspective (Box  1 ). But, it is also important to consider and plan for potential harm to the researcher, research assistants, gatekeepers, future researchers and members of the wider community 14 . Even the most banal of research topics can potentially pose some form of harm to the participant, researcher and others — and the level of harm is often highly context-dependent. For example, a research project on religion in society might have very different ethical considerations in a democratic versus authoritarian research context because of how openly or not such topics can be discussed and debated 15 .

The researcher should consider how they will obtain and record informed consent (for example, written or oral), based on what makes the most sense for their research project and context 16 . Some institutions might specify how informed consent should be gained. Regardless of how consent is obtained, the participant must be made aware of the form of consent, the intentions and procedures of the interview and potential forms of harm and benefit to the participant or community before the interview commences. Moreover, the participant must agree to be interviewed before the interview commences. If, in addition to interviews, the study contains an ethnographic component, it is worth reading around this topic (see, for example, Murphy and Dingwall 17 ). Informed consent must also be gained for how the interview will be recorded before the interview commences. These practices are important to ensure the participant is contributing on a voluntary basis. It is also important to remind participants that they can withdraw their consent at any time during the interview and for a specified period after the interview (to be decided with the participant). The researcher should indicate that participants can ask for anything shared to be off the record and/or not disseminated.

In terms of anonymity and confidentiality, it is standard practice when conducting interviews to agree not to use (or even collect) participants’ names and personal details that are not pertinent to the study. Anonymizing can often be the safer option for minimizing harm to participants as it is hard to foresee all the consequences of de-anonymizing, even if participants agree. Regardless of what a researcher decides, decisions around anonymity must be agreed with participants during the process of gaining informed consent and respected following the interview.

Although not all ethical challenges can be foreseen or planned for 18 , researchers should think carefully — before the interview — about power dynamics, participant vulnerability, emotional state and interactional dynamics between interviewer and participant, even when discussing low-risk topics. Researchers may then wish to plan for potential ethical issues, for example by preparing a list of relevant organizations to which participants can be signposted. A researcher interviewing a participant about debt, for instance, might prepare in advance a list of debt advice charities, organizations and helplines that could provide further support and advice. It is important to remember that the role of an interviewer is as a researcher rather than as a social worker or counsellor because researchers may not have relevant and requisite training in these other domains.

Box 1 Mapping potential forms of harm

Social: researchers should avoid causing any relational detriment to anyone in the course of interviews, for example, by sharing information with other participants or causing interview participants to be shunned or mistreated by their community as a result of participating.

Economic: researchers should avoid causing financial detriment to anyone, for example, by expecting them to pay for transport to be interviewed or to potentially lose their job as a result of participating.

Physical: researchers should minimize the risk of anyone being exposed to violence as a result of the research both from other individuals or from authorities, including police.

Psychological: researchers should minimize the risk of causing anyone trauma (or re-traumatization) or psychological anguish as a result of the research; this includes not only the participant but importantly the researcher themselves and anyone that might read or analyse the transcripts, should they contain triggering information.

Political: researchers should minimize the risk of anyone being exposed to political detriment as a result of the research, such as retribution.

Professional/reputational: researchers should minimize the potential for reputational damage to anyone connected to the research (this includes ensuring good research practices so that any researchers involved are not harmed reputationally by being involved with the research project).

The task here is not to map exhaustively the potential forms of harm that might pertain to a particular research project (that is the researcher’s job and they should have the expertise most suited to mapping such potential harms relative to the specific project) but to demonstrate the breadth of potential forms of harm.

Ethical considerations post-interview

Researchers should consider how interview data are stored, analysed and disseminated. If participants have been offered anonymity and confidentiality, data should be stored in a way that does not compromise this. For example, researchers should consider removing names and any other unnecessary personal details from interview transcripts, password-protecting and encrypting files and using pseudonyms to label and store all interview data. It is also important to consider where interview data are taken (for example, across borders, particularly where interview data might be of interest to local authorities) and how this might affect their storage.
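As a toy illustration of the pseudonymization step described above (and only that step; encryption and secure storage of the mapping file would still be needed), the following sketch replaces names in a transcript before the file is stored. All names and the mapping are invented:

```python
import re

# Hypothetical name-to-pseudonym map. In practice, this key file must be
# stored separately from the transcripts, protected, and destroyed in
# line with the data management plan.
pseudonyms = {
    "Jane Dlamini": "Participant 12",
    "Northside Secondary": "School A",
}

def pseudonymize(text: str, mapping: dict) -> str:
    """Replace every real name in `text` with its pseudonym."""
    for real, alias in mapping.items():
        text = re.sub(re.escape(real), alias, text)
    return text

raw = "Jane Dlamini has taught at Northside Secondary since 2004."
print(pseudonymize(raw, pseudonyms))
# Participant 12 has taught at School A since 2004.
```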

Examining how the researcher will represent participants is a paramount ethical consideration both in the planning stages of the interview study and after it has been conducted. Dissemination strategies also need to consider questions of anonymity and representation. In small communities, even if participants are given pseudonyms, it might be obvious who is being described. Anonymizing not only the names of those participating but also the research context is therefore a standard practice 19 . With particularly sensitive data or insights about the participant, it is worth considering describing participants in a more abstract way rather than as specific individuals. These practices are important both for protecting participants’ anonymity but can also affect the ability of the researcher and others to return ethically to the research context and similar contexts 20 .

Reflexivity and positionality

Reflexivity and positionality mean considering the researcher’s role and assumptions in knowledge production 13 . A key part of reflexivity is considering the power relations between the researcher and participant within the interview setting, as well as how researchers might be perceived by participants. Further, researchers need to consider how their own identities shape the kind of knowledge and assumptions they bring to the interview, including how they approach and ask questions and their analysis of interviews (Box  2 ). Reflexivity is a necessary part of developing ethical sensibility as a researcher by adapting and reflecting on how one engages with participants. Participants should not feel judged, for example, when they share information that researchers might disagree with or find objectionable. How researchers deal with uncomfortable moments or information shared by participants is at their discretion, but they should consider how they will react both ahead of time and in the moment.

Researchers can develop their reflexivity by considering how they themselves would feel being asked these interview questions or represented in this way, and then adapting their practice accordingly. There might be situations where these questions are not appropriate in that they unduly centre the researchers’ experiences and worldview. Nevertheless, these prompts can provide a useful starting point for those beginning their reflexive journey and developing an ethical sensibility.

Reflexivity and ethical sensitivities require active reflection throughout the research process. For example, researchers should take care in interview memos and their notes to consider their assumptions, potential preconceptions, worldviews and own identities prior to and after interviews (Box  2 ). Checking in with assumptions can be a way of making sure that researchers are paying close attention to their own theoretical and analytical biases and revising them in accordance with what they learn through the interviews. Researchers should return to these notes (especially when analysing interview material), to try to unpack their own effects on the research process as well as how participants positioned and engaged with them.

Box 2 Aspects to reflect on reflexively

For reflexive engagement, and understanding the power relations being co-constructed and (re)produced in interviews, it is necessary to reflect, at a minimum, on the following.

Ethnicity, race and nationality, such as how does privilege stemming from race or nationality operate between the researcher, the participant and research context (for example, a researcher from a majority community may be interviewing a member of a minority community)

Gender and sexuality, see above on ethnicity, race and nationality

Social class, and in particular the issue of middle-class bias among researchers when formulating research and interview questions

Economic security/precarity, see above on social class and thinking about the researcher’s relative privilege and the source of biases that stem from this

Educational experiences and privileges, see above

Disciplinary biases, such as how the researcher’s discipline/subfield usually approaches these questions, possibly normalizing certain assumptions that might be contested by participants and in the research context

Political and social values

Lived experiences and other dimensions of ourselves that affect and construct our identity as researchers

Results

In this section, we discuss the next stage of an interview study, namely, analysing the interview data. Data analysis may begin while more data are being collected. Doing so allows early findings to inform the focus of further data collection, as part of an iterative process across the research project. Here, the researcher is ultimately working towards achieving coherence between the data collected and the findings produced to answer successfully the research question(s) they have set.

The two most common methods used to analyse interview material across the social sciences are thematic analysis 21 and discourse analysis 22 . Thematic analysis is a particularly useful and accessible method for those starting out in analysis of qualitative data and interview material as a method of coding data to develop and interpret themes in the data 21 . Discourse analysis is more specialized and focuses on the role of discourse in society by paying close attention to the explicit, implicit and taken-for-granted dimensions of language and power 22 , 23 . Although thematic and discourse analysis are often discussed as separate techniques, in practice researchers might flexibly combine these approaches depending on the object of analysis. For example, those intending to use discourse analysis might first conduct thematic analysis as a way to organize and systematize the data. The object and intention of analysis might differ (for example, developing themes or interrogating language), but the questions facing the researcher (such as whether to take an inductive or deductive approach to analysis) are similar.

Preparing data

Data preparation is an important step in the data analysis process. The researcher should first determine what comprises the corpus of material and in what form it will be analysed. The former refers to whether, for example, alongside the interviews themselves, analytic memos or observational notes taken during data collection will also be directly analysed. The latter refers to decisions about how the verbal/audio interview data will be transformed into a written form, making it suitable for processes of data analysis. Typically, interview audio recordings are transcribed to produce a written transcript. It is important to note that the process of transcription is one of transformation: the verbal interview data are transformed into a written transcript through a series of decisions that the researcher must make. The researcher should consider the effect of mishearing what has been said, or of punctuating a sentence in a particular way, on the final analysis.

Box 3 shows an example transcript excerpt from an interview with a teacher conducted by Teeger as part of her study of history education in post-apartheid South Africa 24 . Seeing both the questions and the responses means that the reader can contextualize what the participant (Ms Mokoena) has said. Throughout the transcript the researcher has used square brackets, for example to indicate a pause in speech, when Ms Mokoena says “it’s [pause] it’s a difficult topic”. The transcription choice made here means that we see that Ms Mokoena has taken time to pause, perhaps to search for the right words, or perhaps because she has a slight apprehension. Square brackets are also included as an overt act of communication to the reader. When Ms Mokoena says “ja”, the English translation (“yes”) of the Afrikaans word is placed in square brackets to ensure that the reader can follow the meaning of the speech.

Decisions about what to include when transcribing will be hugely important for the direction and possibilities of analysis. Researchers should decide what they want to capture in the transcript, based on their analytic focus. From a (post)positivist perspective 25 , the researcher may be interested in the manifest content of the interview (such as what is said, not how it is said). In that case, they may choose to transcribe intelligent verbatim. From a constructivist perspective 25 , researchers may choose to record more aspects of speech (including, for example, pauses, repetitions, false starts, talking over one another) so that these features can be analysed. Those working from this perspective argue that to recognize the interactional nature of the interview setting adequately and to avoid misinterpretations, features of interaction (pauses, overlaps between speakers and so on) should be preserved in transcription and therefore in the analysis 10 . Readers interested in learning more should consult Potter and Hepburn’s summary of how to present interaction through transcription of interview data 26 .

The process of analysing semi-structured interviews might be thought of as a generative rather than an extractive enterprise. Findings do not already exist within the interview data to be discovered. Rather, researchers create something new when analysing the data by applying their analytic lens or approach to the transcripts. At a high level, there are options as to what researchers might want to glean from their interview data. They might be interested in themes, whereby they identify patterns of meaning across the dataset 21 . Alternatively, they may focus on discourse(s), looking to identify how language is used to construct meanings and therefore how language reinforces or produces aspects of the social world 27 . Alternatively, they might look at the data to understand narrative or biographical elements 28 .

A further overarching decision to make is the extent to which researchers bring predetermined framings or understandings to bear on their data, or instead begin from the data themselves to generate an analysis. One way of articulating this is the extent to which researchers take a deductive approach or an inductive approach to analysis. One example of a truly inductive approach is grounded theory, whereby the aim of the analysis is to build new theory, beginning with one’s data 6 , 29 . In practice, researchers using thematic and discourse analysis often combine deductive and inductive logics and describe their process instead as iterative (referred to also as an abductive approach ) 30 , 31 . For example, researchers may decide that they will apply a given theoretical framing, or begin with an initial analytic framework, but then refine or develop these once they begin the process of analysis.

Box 3 Excerpt of interview transcript (from Teeger 24 )

Interviewer : Maybe you could just start by talking about what it’s like to teach apartheid history.

Ms Mokoena : It’s a bit challenging. You’ve got to accommodate all the kids in the class. You’ve got to be sensitive to all the racial differences. You want to emphasize the wrongs that were done in the past but you also want to, you know, not to make kids feel like it’s their fault. So you want to use the wrongs of the past to try and unite the kids …

Interviewer : So what kind of things do you do?

Ms Mokoena : Well I normally highlight the fact that people that were struggling were not just the blacks, it was all the races. And I give examples of the people … from all walks of life, all races, and highlight how they suffered as well as a result of apartheid, particularly the whites… . What I noticed, particularly my first year of teaching apartheid, I noticed that the black kids made the others feel responsible for what happened… . I had a lot of fights…. A lot of kids started hating each other because, you know, the others are white and the others were black. And they started saying, “My mother is a domestic worker because she was never allowed an opportunity to get good education.” …

Interviewer : I didn’t see any of that now when I was observing.

Ms Mokoena : … Like I was saying I think that because of the re-emphasis of the fact that, look, everybody did suffer one way or the other, they sort of got to see that it was everybody’s struggle … . They should now get to understand that that’s why we’re called a Rainbow Nation. Not everybody agreed with apartheid and not everybody suffered. Even all the blacks, not all blacks got to feel what the others felt . So ja [yes], it’s [pause] it’s a difficult topic, ja . But I think if you get the kids to understand why we’re teaching apartheid in the first place and you show the involvement of all races in all the different sides , then I think you have managed to teach it properly. So I think because of my inexperience then — that was my first year of teaching history — so I think I — maybe I over-emphasized the suffering of the blacks versus the whites [emphasis added].

Reprinted with permission from ref. 24 , Sage Publications.

From data to codes

Coding data is a key building block shared across many approaches to data analysis. Coding is a way of organizing and describing data, but is also ultimately a way of transforming data to produce analytic insights. The basic practice of coding involves highlighting a segment of text (this may be a sentence, a clause or a longer excerpt) and assigning a label to it. The aim of the label is to communicate some sort of summary of what is in the highlighted piece of text. Coding is an iterative process, whereby researchers read and reread their transcripts, applying and refining their codes, until they have a coding frame (a set of codes) that is applied coherently across the dataset and that captures and communicates the key features of what is contained in the data as it relates to the researchers’ analytic focus.

What one codes for is entirely contingent on the focus of the research project and the choices the researcher makes about the approach to analysis. At first, one might apply descriptive codes, summarizing what is contained in the interviews. It is rarely desirable to stop at this point, however, because coding is a tool to move from describing the data to interpreting the data. Suppose the researcher is pursuing some version of thematic analysis. In that case, it might be that the objects of coding are aspects of reported action, emotions, opinions, norms, relationships, routines, agreement/disagreement and change over time. A discourse analysis might instead code for different types of speech acts, tropes, linguistic or rhetorical devices. Multiple types of code might be generated within the same research project. What is important is that researchers are aware of the choices they are making in terms of what they are coding for. Moreover, through the process of refinement, the aim is to produce a set of discrete codes — in which codes are conceptually distinct, as opposed to overlapping. By using the same codes across the dataset, the researcher can capture commonalities across the interviews. This process of refinement involves relabelling codes and reorganizing how and where they are applied in the dataset.
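At its simplest, coded data can be represented as labelled segments of text, which makes it straightforward to ask which codes appear where and how they pattern across interviews. The sketch below is a hypothetical miniature of what dedicated packages such as NVivo or ATLAS.ti do at scale; all excerpts and code labels are invented:

```python
from collections import Counter

# Each coded segment: (interview id, excerpt, assigned code).
# Excerpts and codes here are hypothetical.
coded_segments = [
    ("int01", "I just felt it was my fault somehow", "self-blame"),
    ("int01", "the economy was falling apart that year", "economic framing"),
    ("int02", "everyone in my team was let go too", "economic framing"),
    ("int02", "I kept asking what I did wrong", "self-blame"),
    ("int03", "my manager never rated my work", "workplace devaluation"),
]

# How often each code appears across the dataset.
code_counts = Counter(code for _, _, code in coded_segments)
print(code_counts)

# Which interviews contain a given code — useful when asking how
# findings are patterned across participants.
def interviews_with(code):
    return sorted({iid for iid, _, c in coded_segments if c == code})

print(interviews_with("self-blame"))  # ['int01', 'int02']
```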

From coding to analysis and writing

Data analysis is also an iterative process in which researchers move closer to and further away from the data. As they move away from the data, they synthesize their findings, thus honing and articulating their analytic insights. As they move closer to the data, they ground these insights in what is contained in the interviews. The link should not be broken between the data themselves and higher-order conceptual insights or claims being made. Researchers must be able to show evidence for their claims in the data. Figure  2 summarizes this iterative process and suggests the sorts of activities involved at each stage more concretely.

Figure 2 | The iterative process of analysis. As well as going through steps 1 to 6 in order, the researcher will also go backwards and forwards between stages. Some stages will themselves involve a back-and-forth process of coding and refining when working across different interview transcripts.

At the stage of synthesizing, there are some common quandaries. When dealing with a dataset consisting of multiple interviews, there will be salient and minority statements across different participants, or consensus or dissent on topics of interest to the researcher. A strength of qualitative interviews is that we can build in these nuances and variations across our data as opposed to aggregating them away. When exploring and reporting data, researchers should be asking how different findings are patterned and which interviews contain which codes, themes or tropes. Researchers should think about how these variations fit within the longer flow of individual interviews and what these variations tell them about the nature of their substantive research interests.

A further consideration is how to approach analysis within and across interview data. Researchers may look at one individual code, to examine the forms it takes across different participants and what they might be able to summarize about this code in the round. Alternatively, they might look at how a code or set of codes pattern across the account of one participant, to understand the code(s) in a more contextualized way. Further analysis might be done according to different sampling characteristics, where researchers group together interviews based on certain demographic characteristics and explore these together.

When it comes to writing up and presenting interview data, key considerations tend to rest on what is often termed transparency. When presenting the findings of an interview-based study, the reader should be able to understand and trace what the stated findings are based upon. This process typically involves describing the analytic process, how key decisions were made and presenting direct excerpts from the data. It is important to account for how the interview was set up and to consider the active part that the researcher has played in generating the data 32 . Quotes from interviews should not be thought of as merely embellishing or adding interest to a final research output. Rather, quotes serve the important function of connecting the reader directly to the underlying data. Quotes, therefore, should be chosen because they provide the reader with the most apt insight into what is being discussed. It is good practice to report not just on what participants said, but also on the questions that were asked to elicit the responses.

Researchers have increasingly used specialist qualitative data analysis software to organize and analyse their interview data, such as NVivo or ATLAS.ti. It is important to remember that such software is a tool for, rather than an approach or technique of, analysis. That said, software also creates a wide range of possibilities in terms of what can be done with the data. As researchers, we should reflect on how the range of possibilities of a given software package might be shaping our analytical choices and whether these are choices that we do indeed want to make.

Applications

This section reviews how and why in-depth interviews have been used by researchers studying gender, education and inequality, nationalism and ethnicity and the welfare state. Although interviews can be employed as a method of data collection in just about any social science topic, the applications below speak directly to the authors’ expertise and cutting-edge areas of research.

Gender

When it comes to the broad study of gender, in-depth interviews have been invaluable in shaping our understanding of how gender functions in everyday life. In a study of the US hedge fund industry (an industry dominated by white men), Tobias Neely was interested in understanding the factors that enable white men to prosper in the industry 33 . The study comprised interviews with 45 hedge fund workers and oversampled women of all races and men of colour to capture a range of experiences and beliefs. Tobias Neely found that practices of hiring, grooming and seeding are key to maintaining white men’s dominance in the industry. In terms of hiring, the interviews clarified that white men in charge typically preferred to hire people like themselves, usually from their extended networks. When women were hired, they were usually hired to less lucrative positions. In terms of grooming, Tobias Neely identifies how older and more senior men in the industry who have power and status will select one or several younger men as their protégés, to include in their own elite networks. Finally, in terms of her concept of seeding, Tobias Neely describes how older men who are hedge fund managers provide the seed money (often in the hundreds of millions of dollars) for a hedge fund to men, often their own sons (but not their daughters). These interviews provided an in-depth look into gendered and racialized mechanisms that allow white men to flourish in this industry.

Research by Rao draws on dozens of interviews with men and women who had lost their jobs, interviews with some of the participants’ spouses, and follow-up interviews with about half the sample approximately six months after the initial interview 34. Rao used interviews to understand the gendered experience and understanding of unemployment. Through these interviews, she found that the very process of losing their jobs meant different things for men and women. Women often saw job loss as a personal indictment of their professional capabilities. The women interviewed often referenced how years of devaluation in the workplace coloured their interpretation of their job loss. Men were also saddened by their job loss, but they saw it as part and parcel of a weak economy rather than a personal failing. How these varied interpretations arose was tied to men’s and women’s very different experiences in the workplace. Further, through her analysis of these interviews, Rao showed how these gendered interpretations had implications for the kinds of jobs men and women sought after job loss. Whereas men remained tied to full-time paid work, job loss appeared to be a catalyst pushing some of the women to re-evaluate their ties to the labour force.

In a study of workers in the tech industry, Hart used interviews to explain how individuals respond to unwanted and ambiguously sexual interactions 35. Here, interviews allowed participants to describe how these interactions made them feel and act, and the logics by which they interpreted, classified and made sense of them 35. Through her analysis of these interviews, Hart showed that participants engaged in a process she termed “trajectory guarding”, whereby they monitored unwanted and ambiguously sexual interactions to keep them from escalating. Yet, as Hart’s analysis deftly demonstrates, these very strategies — which protect these workers sexually — also undermined their workplace advancement.

Drawing on interviews, these studies have helped us to understand better how gendered mechanisms, gendered interpretations and gendered interactions foster gender inequality when it comes to paid work. Methodologically, these studies illuminate the power of interviews to reveal important aspects of social life.

Nationalism and ethnicity

Traditionally, nationalism has been studied from a top-down perspective, through the lens of the state or using historical methods; in other words, in-depth interviews have not been a common way of collecting data to study nationalism. The methodological turn towards everyday nationalism has encouraged more scholars to go to the field and use interviews (and ethnography) to understand nationalism from the bottom up: how people talk about, give meaning to, understand, navigate and contest their relation to nation, national identification and nationalism 36, 37, 38, 39. This turn has also addressed the gap left by those studying national and ethnic identification via quantitative methods, such as surveys.

Surveys can enumerate how individuals ascribe to categorical forms of identification 40. However, interviews can question the usefulness of such categories and ask whether they are reflected, or resisted, by participants in terms of the meanings they give to identification 41, 42. Categories often pitch identification as a mutually exclusive choice, but identification might be more complex than such categories allow. For example, some people might hybridize these categories or see themselves as moving between and across categories 43. Hearing how people talk about themselves and their relation to nations, states and ethnicities therefore contributes substantially to the study of nationalism and of national and ethnic forms of identification.

One particular approach to studying these topics, whether via everyday nationalism or alternatives, is to use interviews to capture both articulations and narratives of identification, relations to nationalism and the boundaries people construct. For example, interviews can be used to gather self–other narratives by studying how individuals construct I–we–them boundaries 44: how participants talk about themselves, whom they include in their various ‘we’ groupings, and which ‘them’ groupings of others they create and how, inserting boundaries between ‘I/we’ and ‘them’. Overall, interviews hold great potential for listening to participants and understanding the nuances of identification and the construction of boundaries from their point of view.

Education and inequality

Scholars of social stratification have long noted that the school system often reproduces existing social inequalities. Carter explains that all schools have both material and sociocultural resources 45. When children from different backgrounds attend schools with different material resources, their educational and occupational outcomes are likely to vary. Such material resources are relatively easy to measure. They are operationalized as teacher-to-student ratios, access to computers and textbooks and the physical infrastructure of classrooms and playgrounds.

Drawing on Bourdieusian theory 46, Carter conceptualizes the sociocultural context as the norms, values and dispositions privileged within a social space 45. Scholars have drawn on interviews with students and teachers (as well as ethnographic observations) to show how schools confer advantages on students from middle-class families, for example, by rewarding their help-seeking behaviours 47. Focusing on race, researchers have revealed how schools can remain socioculturally white even as they enrol a racially diverse student population. In such contexts, for example, teachers often misrecognize the aesthetic choices made by students of colour, wrongly inferring that these students’ tastes in clothing and music reflect negative orientations to schooling 48, 49, 50. These assessments can result in disparate forms of discipline and may ultimately shape educators’ assessments of students’ academic potential 51.

Further, teachers and administrators tend to view the appropriate relationship between home and school in ways that resonate with white middle-class parents 52. These parents are then able to advocate effectively for their children in ways that non-white parents are not 53. In-depth interviews are particularly good at tapping into these understandings, revealing the mechanisms that confer privilege on certain groups of students and thereby reproduce inequality.

The welfare state

In-depth interviews have also proved to be an important method for studying various aspects of the welfare state. By welfare state, we mean the social institutions relating to the economic and social wellbeing of a state’s citizens. Interviews have been particularly useful for examining how policy design features are experienced and play out on the ground. They have often been paired with large-scale surveys in mixed-methods study designs, achieving both breadth and depth of insight.

In-depth interviews provide the opportunity to look behind policy assumptions, or behind how policies are designed from the top down, to examine how these play out in the lives of those affected by the policies and whose experiences might otherwise be obscured or ignored. For example, the Welfare Conditionality project used interviews to critique the assumption that conditionality (the withdrawal of social security benefits if recipients did not perform or meet certain criteria) improved employment outcomes, showing instead that conditionality harmed mental health and living standards and had many other negative consequences 56. Meanwhile, combining datasets from two small-scale interview studies with recipients allowed Summers and Young to critique the assumptions of simplicity that underpinned the design of Universal Credit in 2020, for example, showing that the apparently simple monthly payment design instead burdened recipients with additional money management decisions and responsibilities 57.

Similarly, the Welfare at a (Social) Distance project used a mixed-methods approach in a large-scale study that combined national surveys with case studies and in-depth interviews to investigate the experience of claiming social security benefits during the COVID-19 pandemic. The interviews allowed researchers to understand in detail the issues experienced by recipients of benefits, such as delays in the process of claiming, managing on a very tight budget and navigating stigma around claiming 58.

These applications demonstrate the multi-faceted topics and questions for which interviews can be a relevant method of data collection. They highlight not only the relevance of interviews but also their key added value, which might be missed by other methods (surveys in particular). Interviews can expose and question what is taken for granted and directly engage with communities and participants that might otherwise be ignored, obscured or marginalized.

Reproducibility and data deposition

There is a robust, ongoing debate about reproducibility in qualitative research, including interview studies. In some research paradigms, reproducibility can be a way of interrogating the rigour and robustness of research claims, by seeing whether these hold up when the research process is repeated. Some scholars have suggested that although reproducibility may be challenging, researchers can facilitate it by naming the place where the research was conducted, naming participants, sharing interview and fieldwork transcripts (anonymized and de-identified in cases where researchers are not naming people or places) and employing fact-checkers for accuracy 11, 59, 60.
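To give a sense of what sharing de-identified transcripts involves in practice, the sketch below illustrates only the mechanical step of applying an agreed replacement table; the names and pseudonyms are invented. In practice, de-identification requires careful manual review that no script can guarantee.

```python
import re

# Hypothetical replacement table, drawn up during manual review.
replacements = {
    "Jane Smith": "Participant 7",
    "Acme Capital": "a mid-sized hedge fund",
    "Boston": "a large city in the northeastern US",
}

def pseudonymize(transcript: str) -> str:
    """Swap each known identifier for its agreed pseudonym."""
    for real, pseudo in replacements.items():
        transcript = re.sub(re.escape(real), pseudo, transcript)
    return transcript

print(pseudonymize("Jane Smith joined Acme Capital after leaving Boston."))
# Participant 7 joined a mid-sized hedge fund after leaving a large city
# in the northeastern US.
```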

In addition to the ethical concerns of whether de-anonymization is ever feasible or desirable, it is also important to address whether the replicability of interview studies is meaningful. For example, the flexibility of interviews allows for the unexpected and the unforeseen to be incorporated into the scope of the research 61. However, this flexibility means that we cannot expect reproducibility in the conventional sense, given that different researchers will elicit different types of data from participants. Sharing interview transcripts with other researchers, for instance, downplays the contextual nature of an interview.

Drawing on Bauer and Gaskell, we propose several measures to enhance rigour in qualitative research: transparency, grounding interpretations and aiming for theoretical transferability and significance 62.

First, researchers should be transparent when describing their methodological choices. Transparency means documenting who was interviewed, where and when (without requiring de-anonymization, for example, by documenting participants’ characteristics), as well as the questions they were asked. It means carefully considering who was left out of the interviews and what that could mean for the researcher’s findings. It also means carefully considering who the researcher is and how their identity shaped the research process (integrating and articulating reflexivity into whatever is written up).
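One low-tech way to support this kind of transparency is to keep a running interview log that records non-identifying details for each interview, alongside the version of the guide used. The sketch below is hypothetical; the fields are illustrative rather than a prescribed standard.

```python
import csv

# Hypothetical log: enough detail for transparency, nothing identifying.
interview_log = [
    {"pseudonym": "P01", "age_band": "30-39", "gender": "woman",
     "mode": "video call", "date": "2022-03-14", "guide_version": "v2"},
    {"pseudonym": "P02", "age_band": "50-59", "gender": "man",
     "mode": "face-to-face", "date": "2022-03-18", "guide_version": "v2"},
]

with open("interview_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=interview_log[0].keys())
    writer.writeheader()
    writer.writerows(interview_log)
```

A table like this can then be summarized in the write-up to show who was (and was not) interviewed.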

Second, researchers should ground their interpretations in the data. Grounding means presenting the evidence upon which the interpretation relies. Quotes and extracts should be extensive enough to allow the reader to evaluate whether the researcher’s interpretations are grounded in the data. At each step, researchers should carefully compare their own explanations and interpretations with alternative explanations. Doing so systematically and frequently allows researchers to become more confident in their claims. Researchers should make the link between data and analysis explicit, using quotes to demonstrate the analytical point while making sure the analytical point offers an interpretation of the quotes (Box 4).

An important step in considering alternative explanations is to seek out disconfirming evidence 4, 63. This involves looking for instances where participants deviate from what the majority are saying and thus bring into question the theory (or explanation) that the researcher is developing. Careful analysis of such examples can often demonstrate the salience and meaning of what appears to be the norm (see Table 2 for examples) 54. Considering alternative explanations and paying attention to disconfirming evidence allows the researcher to refine their own theories in light of the data.

Finally, researchers should aim for theoretical transferability and significance in their discussions of findings. One way to think about this is to imagine what the findings would offer someone who is not interested in the empirical study itself. Articulating theoretical transferability and significance usually takes the form of broadening out from the specific findings to consider explicitly how the research has refined or altered prior theoretical approaches. This also means considering under what other conditions, aside from those of the study, the researcher thinks their theoretical revision would be supported, and why. Importantly, it also includes thinking about the limitations of one’s own approach and where the theoretical implications of the study might not hold.

Box 4 An example of grounding interpretations in data (from Rao 34)

In an article explaining how unemployed men frame their job loss as a pervasive experience, Rao writes the following: “Unemployed men in this study understood unemployment to be an expected aspect of paid work in the contemporary United States. Robert, a white unemployed communications professional, compared the economic landscape after the Great Recession with the tragic events of September 11, 2001:

Part of your post-9/11 world was knowing people that died as a result of terrorism. The same thing is true with the [Great] Recession, right? … After the Recession you know somebody who was unemployed … People that really should be working.

The pervasiveness of unemployment rendered it normal, as Robert indicates.”

Here, the link between the quote presented and the analytical point Rao is making is clear: the analytical point is grounded in a quote and an interpretation of the quote is offered 34.

Limitations and optimizations

When deciding which research method to use, the key question is whether the method provides a good fit for the research questions posed. In other words, researchers should consider whether interviews will allow them to successfully access the social phenomena necessary to answer their question(s) and whether the interviews will do so more effectively than other methods. Table 3 summarizes the major strengths and limitations of interviews. The text below is organized around some key issues, presenting relative strengths and weaknesses alongside each other, so that readers can think about how these can be balanced and optimized in relation to their own research.

Breadth versus depth of insight

Achieving an overall breadth of insight, in a statistically representative sense, is not something that is possible or indeed desirable when conducting in-depth interviews. Instead, the strength of conducting interviews lies in their ability to generate various sorts of depth of insight. The experiences or views of participants that can be accessed by conducting interviews help us to understand participants’ subjective realities. The challenge, therefore, is for researchers to be clear about why depth of insight is the focus and what they aim to glean from these types of insight.

Naturalistic or artificial interviews

Interviews make use of a form of interaction with which people are familiar 64. By replicating a naturalistic form of interaction as a tool to gather social science data, researchers can capitalize on people’s familiarity and expectations of what happens in a conversation. This familiarity can also be a challenge, as people come to the interview with preconceived ideas about what this conversation might be for or about. People may draw on experiences of other similar conversations when taking part in a research interview (for example, job interviews, therapy sessions, confessional conversations, chats with friends). Researchers should be aware of such potential overlaps and think through their implications both in how the aims and purposes of the research interview are communicated to participants and in how interview data are interpreted.

Further, some argue that a limitation of interviews is that they are an artificial form of data collection. By taking people out of their daily lives and asking them to stand back and pass comment, we are creating a distance that makes it difficult to use such data to say something meaningful about people’s actions, experiences and views. Other approaches, such as ethnography, might be more suitable for tapping into what people actually do, as opposed to what they say they do 65.

Dynamism and replicability

Interviews following a semi-structured format offer flexibility both to the researcher and the participant. As the conversation develops, the interlocutors can explore the topics raised in much more detail, if desired, or pass over ones that are not relevant. This flexibility allows for the unexpected and the unforeseen to be incorporated into the scope of the research.

However, this flexibility comes with a related challenge of replicability. Interviews cannot be reproduced because they are contingent upon the interaction between the researcher and the participant in that given moment of interaction. In some research paradigms, replicability can be a way of interrogating the robustness of research claims, by seeing whether they hold when they are repeated. This is not a useful framework to bring to in-depth interviews; instead, quality criteria such as transparency tend to be employed as markers of rigour.

Accessing the private and personal

Interviews have been recognized for their strength in accessing private, personal issues, which participants may feel more comfortable talking about in a one-to-one conversation. Furthermore, interviews are likely to take a more personable form with their extended questions and answers, perhaps making a participant feel more at ease when discussing sensitive topics in such a context. There is a similar, but separate, argument made about accessing what are sometimes referred to as vulnerable groups, who may be difficult to make contact with using other research methods.

There is an associated challenge of anonymity. Some types of in-depth interview make it particularly challenging to protect the identities of participants, such as interviewing within a small community or interviewing multiple members of the same household. The challenge of ensuring anonymity in such contexts is even more important, and more difficult, when the topic of research is sensitive or participants are vulnerable.

Outlook

Increasingly, researchers are collaborating in large-scale interview-based studies and integrating interviews into broader mixed-methods designs. At the same time, interviews can be seen as an old-fashioned (and perhaps outdated) mode of data collection. We review these debates and discussions and point to innovations in interview-based studies. These include the shift from face-to-face interviews to the use of online platforms, as well as integrating and adapting interviews towards more inclusive methodologies.

Collaborating and mixing

Qualitative researchers have long worked alone 66. Increasingly, however, researchers are collaborating with others for reasons such as efficiency, institutional incentives (for example, funding for collaborative research) and a desire to pool expertise (for example, studying similar phenomena in different contexts 67 or via different methods). Collaboration can occur across disciplines and methods, cases and contexts and between industry/business, practitioners and researchers. In many settings and contexts, collaboration has become an imperative 68.

Cheek notes how collaboration provides both advantages and disadvantages 68. For example, collaboration can be advantageous, saving time and building on the divergent knowledge, skills and resources of different researchers. Scholars with different theoretical or case-based knowledge (or contacts) can work together to build research that is comparative and/or more than the sum of its parts. But such endeavours also carry with them practical and political challenges in terms of how resources might actually be pooled, shared or accounted for. When undertaking such projects, as Morse notes, it is worth thinking about the nature of the collaboration and being explicit about such a choice, its advantages and its disadvantages 66.

A further tension, but also a motivation for collaboration, stems from integrating interviews as a method in a mixed-methods project, whether with other qualitative researchers (to combine with, for example, focus groups, document analysis or ethnography) or with quantitative researchers (to combine with, for example, surveys, social media analysis or big data analysis). Cheek and Morse both note the pitfalls of collaboration with quantitative researchers: that the quality of research may be sacrificed, qualitative interpretations watered down or not taken seriously, or tensions experienced over the pace and the different assumptions that come with different methods and approaches 66, 68.

At the same time, there can be real benefits to such mixed-methods collaboration, such as reaching different and more diverse audiences or testing assumptions and theories between research components in the same project (for example, testing insights from prior quantitative research via interviews, or vice versa), as long as the skillsets of collaborators are seen as equally beneficial to the project. Cheek provides a set of questions that, as a starting point, can be useful for guiding collaboration, whether mixed methods or otherwise. First, Cheek advises asking all collaborators about their assumptions and understandings concerning collaboration. Second, Cheek recommends discussing what each perspective highlights and focuses on (and conversely ignores or sidelines) 68.

A different way to engage with the idea of collaboration and mixed-methods research is by fostering greater collaboration between researchers in the Global South and Global North, thus reversing trends of researchers from the Global North extracting knowledge from the Global South 69. Such forms of collaboration also align with interview innovations, discussed below, that seek to transform traditional interview approaches into more participatory and inclusive forms (as part of participatory methodologies).

Digital innovations and challenges

The ongoing COVID-19 pandemic has centred the question of technology within interview-based fieldwork. Although conducting synchronous oral interviews online — for example, via Zoom, Skype or other such platforms — has been a method used by a small constituency of researchers for many years, it became (and remains) a necessity for many researchers wanting to continue or start interview-based projects while COVID-19 prevents face-to-face data collection.

In the past, online interviews were often framed as an inferior form of data collection for not providing the kinds of (often necessary) insights and forms of immersion that face-to-face interviews allow 70, 71. Online interviews do tend to be more decontextualized than interviews conducted face-to-face 72. For example, it is harder to recognize, engage with and respond to non-verbal cues 71. At the same time, they broaden participation to those who might not have been able to access or travel to sites where interviews would otherwise have been conducted, for example, people with disabilities. Online interviews also offer more flexibility in terms of scheduling and time requirements. For example, they provide more flexibility around precarious employment or caring responsibilities, without the need to travel and be away from home. In addition, online interviews might reduce discomfort between researchers and participants compared with face-to-face interviews, enabling more discussion of sensitive material 71. They can also provide participants with more control, enabling them to turn the microphone and video on and off as they choose, for example, to provide more time to reflect and disconnect if they so wish 72.

That said, online interviews can also introduce new biases based on access to technology 72. For example, in the Global South, there are often urban/rural and gender gaps between who has access to mobile phones and who does not, meaning that some population groups might be overlooked unless researchers sample mindfully 71. There are also important ethical considerations when deciding between online and face-to-face interviews. Online interviews might seem to imply lower ethical risks than face-to-face interviews (for example, they lower the chances of identification of participants or researchers), but they also present more barriers to building trust between researchers and participants 72. Interacting only online with participants might not provide the information needed to assess risk, for example, participants’ access to a private space to speak 71. Just because online interviews might be more likely to be conducted in private spaces does not mean that private spaces are safe, for example, for victims of domestic violence. Finally, online interviews prompt further questions about decolonizing research and engaging with participants if research is conducted from afar 72, such as how to include participants meaningfully and challenge dominant assumptions while doing so remotely.

A further digital innovation, modulating how researchers conduct interviews and the kinds of data collected and analysed, stems from the use and integration of (new) technology, such as WhatsApp text or voice notes, to conduct synchronous or asynchronous oral or written interviews 73. Such methods can provide more privacy, comfort and control to participants and make recruitment easier, allowing participants to share what they want when they want to, using technology that already forms part of their daily lives, especially for young people 74, 75. Such technology is also emerging in other qualitative methods, such as focus groups, with similar arguments around greater inclusivity compared with traditional offline modes. Here, the digital challenge might be higher for researchers than for participants if the researchers are less used to such technology 75. And while there might be concerns about the richness, depth and quality of written messages as a form of interview data, Gibson reports that the reams of transcripts that resulted from a study using written messaging were dense with meaning to be analysed 75.

As with online and face-to-face interviews, it is important to consider the ethical questions and challenges of using such technology, from gaining consent to ensuring participant safety and attending to their distress, without cues, like crying, that might be more obvious in a face-to-face setting 75, 76. Attention to the platform used for such interviews is also important, and researchers should be attuned to the local and national context. For example, in China, many platforms are neither legal nor available 76. There, more popular platforms — like WeChat — can be highly monitored by the government, posing potential risks to participants depending on the topic of the interview. Ultimately, researchers should consider the trade-offs between online and offline interview modalities, being attentive to the social context and power dynamics involved.

The next 5–10 years

Continuing to integrate such technology ethically will be among the major developments in interview-based research in the coming years, whether to offer more flexibility to researchers or participants, or to diversify who can participate and on what terms.

Pushing the idea of inclusion even further is the potential for integrating interview-based studies within participatory methods, which are themselves innovating via the integration of technology. There is no hard and fast line between researchers using in-depth interviews and participatory methods; many who employ participatory methods will use interviews at the beginning, middle or end phases of a research project to capture insights, perspectives and reflections from participants 77, 78. Participatory methods emphasize the need to resist existing power and knowledge structures. They broaden who has the right and ability to contribute to academic knowledge by including and incorporating participants not only as subjects of data collection, but as crucial voices in research design and data analysis 77. Participatory methods also seek to facilitate local change and to produce research materials, whether for academic or non-academic audiences, including films and documentaries, in collaboration with participants.

In responding to the challenges of COVID-19, capturing the fraught situation wrought by the pandemic and the momentum to integrate technology, participatory researchers have sought to continue data collection from afar. For example, Marzi has adapted an existing project to co-produce participatory videos via participants’ smartphones in Medellín, Colombia, alongside regular check-in conversations, meetings and interviews with participants 79. Integrating participatory methods into interview studies offers a route by which researchers can respond to the challenge of diversifying knowledge, challenging assumptions and power hierarchies and creating more inclusive and collaborative partnerships between participants and researchers in the Global North and South.

References

1. Brinkmann, S. & Kvale, S. Doing Interviews Vol. 2 (Sage, 2018). This book offers a good general introduction to the practice and design of interview-based studies.

2. Silverman, D. A Very Short, Fairly Interesting And Reasonably Cheap Book About Qualitative Research (Sage, 2017).

3. Yin, R. K. Case Study Research And Applications: Design And Methods (Sage, 2018).

4. Small, M. L. ‘How many cases do I need?’ On science and the logic of case selection in field-based research. Ethnography 10, 5–38 (2009). This article convincingly demonstrates how the logic of qualitative research differs from quantitative research and its goal of representativeness.

5. Gerson, K. & Damaske, S. The Science and Art of Interviewing (Oxford Univ. Press, 2020).

6. Glaser, B. G. & Strauss, A. L. The Discovery Of Grounded Theory: Strategies For Qualitative Research (Aldine, 1967).

7. Braun, V. & Clarke, V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual. Res. Sport Exerc. Health 13, 201–216 (2021).

8. Guest, G., Bunce, A. & Johnson, L. How many interviews are enough? An experiment with data saturation and variability. Field Methods 18, 59–82 (2006).

9. Vasileiou, K., Barnett, J., Thorpe, S. & Young, T. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med. Res. Methodol. 18, 148 (2018).

10. Silverman, D. How was it for you? The Interview Society and the irresistible rise of the (poorly analyzed) interview. Qual. Res. 17, 144–158 (2017).

11. Jerolmack, C. & Murphy, A. The ethical dilemmas and social scientific tradeoffs of masking in ethnography. Sociol. Methods Res. 48, 801–827 (2019).

12. Reyes, V. Ethnographic toolkit: strategic positionality and researchers’ visible and invisible tools in field research. Ethnography 21, 220–240 (2020).

13. Guillemin, M. & Gillam, L. Ethics, reflexivity and “ethically important moments” in research. Qual. Inq. 10, 261–280 (2004).

14. Summers, K. For the greater good? Ethical reflections on interviewing the ‘rich’ and ‘poor’ in qualitative research. Int. J. Soc. Res. Methodol. 23, 593–602 (2020). This article argues that, in qualitative interview research, a clearer distinction needs to be drawn between ethical commitments to individual research participants and the group(s) to which they belong, a distinction that is often elided in existing ethics guidelines.

15. Yusupova, G. Exploring sensitive topics in an authoritarian context: an insider perspective. Soc. Sci. Q. 100, 1459–1478 (2019).

16. Hemming, J. in Surviving Field Research: Working In Violent And Difficult Situations 21–37 (Routledge, 2009).

17. Murphy, E. & Dingwall, R. Informed consent, anticipatory regulation and ethnographic practice. Soc. Sci. Med. 65, 2223–2234 (2007).

18. Kostovicova, D. & Knott, E. Harm, change and unpredictability: the ethics of interviews in conflict research. Qual. Res. 22, 56–73 (2022). This article highlights how interviews need to be considered as ethically unpredictable moments where engaging with change among participants can itself be ethical.

19. Andersson, R. Illegality, Inc.: Clandestine Migration And The Business Of Bordering Europe (Univ. California Press, 2014).

20. Ellis, R. What do we mean by a “hard-to-reach” population? Legitimacy versus precarity as barriers to access. Sociol. Methods Res. https://doi.org/10.1177/0049124121995536 (2021).

21. Braun, V. & Clarke, V. Thematic Analysis: A Practical Guide (Sage, 2022).

22. Alejandro, A. & Knott, E. How to pay attention to the words we use: the reflexive review as a method for linguistic reflexivity. Int. Stud. Rev. https://doi.org/10.1093/isr/viac025 (2022).

23. Alejandro, A., Laurence, M. & Maertens, L. in International Organisations and Research Methods: An Introduction (eds Badache, F., Kimber, L. R. & Maertens, L.) (Michigan Univ. Press, in the press).

24. Teeger, C. “Both sides of the story”: history education in post-apartheid South Africa. Am. Sociol. Rev. 80, 1175–1200 (2015).

25. Crotty, M. The Foundations Of Social Research: Meaning And Perspective In The Research Process (Routledge, 2020).

26. Potter, J. & Hepburn, A. Qualitative interviews in psychology: problems and possibilities. Qual. Res. Psychol. 2, 281–307 (2005).

27. Taylor, S. What is Discourse Analysis? (Bloomsbury Publishing, 2013).

28. Riessman, C. K. Narrative Analysis (Sage, 1993).

29. Corbin, J. M. & Strauss, A. Grounded theory research: procedures, canons and evaluative criteria. Qual. Sociol. 13, 3–21 (1990).

30. Timmermans, S. & Tavory, I. Theory construction in qualitative research: from grounded theory to abductive analysis. Sociol. Theory 30, 167–186 (2012).

31. Fereday, J. & Muir-Cochrane, E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int. J. Qual. Meth. 5, 80–92 (2006).

32. Potter, J. & Hepburn, A. Eight challenges for interview researchers. Handb. Interview Res. 2, 541–570 (2012).

33. Tobias Neely, M. Fit to be king: how patrimonialism on Wall Street leads to inequality. Socioecon. Rev. 16, 365–385 (2018).

34. Rao, A. H. Gendered interpretations of job loss and subsequent professional pathways. Gend. Soc. 35, 884–909 (2021). This article used interview data from unemployed men and women to illuminate how job loss becomes a pivotal moment shaping men’s and women’s orientation to paid work, especially in terms of curtailing women’s participation in paid work.

35. Hart, C. G. Trajectory guarding: managing unwanted, ambiguously sexual interactions at work. Am. Sociol. Rev. 86, 256–278 (2021).

36. Goode, J. P. & Stroup, D. R. Everyday nationalism: constructivism for the masses. Soc. Sci. Q. 96, 717–739 (2015).

37. Antonsich, M. The ‘everyday’ of banal nationalism — ordinary people’s views on Italy and Italian. Polit. Geogr. 54, 32–42 (2016).

38. Fox, J. E. & Miller-Idriss, C. Everyday nationhood. Ethnicities 8, 536–563 (2008).

39. Yusupova, G. Cultural nationalism and everyday resistance in an illiberal nationalising state: ethnic minority nationalism in Russia. Nations National. 24, 624–647 (2018).

40. Kiely, R., Bechhofer, F. & McCrone, D. Birth, blood and belonging: identity claims in post-devolution Scotland. Sociol. Rev. 53, 150–171 (2005).

41. Brubaker, R. & Cooper, F. Beyond ‘identity’. Theory Soc. 29, 1–47 (2000).

42. Brubaker, R. Ethnicity Without Groups (Harvard Univ. Press, 2004).

43. Knott, E. Kin Majorities: Identity And Citizenship In Crimea And Moldova From The Bottom-Up (McGill Univ. Press, 2022).

44. Bucher, B. & Jasper, U. Revisiting ‘identity’ in international relations: from identity as substance to identifications in action. Eur. J. Int. Relat. 23, 391–415 (2016).

45. Carter, P. L. Stubborn Roots: Race, Culture And Inequality In US And South African Schools (Oxford Univ. Press, 2012).

46. Bourdieu, P. in Cultural Theory: An Anthology Vol. 1, 81–93 (eds Szeman, I. & Kaposy, T.) (Wiley-Blackwell, 2011).

47. Calarco, J. M. Negotiating Opportunities: How The Middle Class Secures Advantages In School (Oxford Univ. Press, 2018).

48. Carter, P. L. Keepin’ It Real: School Success Beyond Black And White (Oxford Univ. Press, 2005).

49. Carter, P. L. ‘Black’ cultural capital, status positioning and schooling conflicts for low-income African American youth. Soc. Probl. 50, 136–155 (2003).

50. Warikoo, N. K. Balancing Acts: Youth Culture in the Global City (Univ. California Press, 2011).

51. Morris, E. W. “Tuck in that shirt!” Race, class, gender and discipline in an urban school. Sociol. Perspect. 48, 25–48 (2005).

52. Lareau, A. Social class differences in family–school relationships: the importance of cultural capital. Sociol. Educ. 60, 73–85 (1987).

53. Warikoo, N. Addressing emotional health while protecting status: Asian American and white parents in suburban America. Am. J. Sociol. 126, 545–576 (2020).

54. Teeger, C. Ruptures in the rainbow nation: how desegregated South African schools deal with interpersonal and structural racism. Sociol. Educ. 88, 226–243 (2015). This article leverages ‘deviant’ cases in an interview study with South African high schoolers to understand why the majority of participants were reluctant to code racially charged incidents at school as racist.

55. Ispa-Landa, S. & Conwell, J. “Once you go to a white school, you kind of adapt”: black adolescents and the racial classification of schools. Sociol. Educ. 88, 1–19 (2015).

56. Dwyer, P. J. Punitive and ineffective: benefit sanctions within social security. J. Soc. Secur. Law 25, 142–157 (2018).

57. Summers, K. & Young, D. Universal simplicity? The alleged simplicity of Universal Credit from administrative and claimant perspectives. J. Poverty Soc. Justice 28, 169–186 (2020).

58. Summers, K. et al. Claimants’ Experiences Of The Social Security System During The First Wave Of COVID-19. https://www.distantwelfare.co.uk/winter-report (2021).

59. Desmond, M. Evicted: Poverty And Profit In The American City (Crown Books, 2016).

60. Reyes, V. Three models of transparency in ethnographic research: naming places, naming people and sharing data. Ethnography 19, 204–226 (2018).

61. Robson, C. & McCartan, K. Real World Research (Wiley, 2016).

62. Bauer, M. W. & Gaskell, G. Qualitative Researching With Text, Image And Sound: A Practical Handbook (Sage, 2000).

63. Lareau, A. Listening To People: A Practical Guide To Interviewing, Participant Observation, Data Analysis And Writing It All Up (Univ. Chicago Press, 2021).

64. Lincoln, Y. S. & Guba, E. G. Naturalistic Inquiry (Sage, 1985).

65. Jerolmack, C. & Khan, S. Talk is cheap. Sociol. Methods Res. 43, 178–209 (2014).

66. Morse, J. M. Styles of collaboration in qualitative inquiry. Qual. Health Res. 18, 3–4 (2008).

67. Lamont, M. et al. Getting Respect: Responding To Stigma And Discrimination In The United States, Brazil And Israel (Princeton Univ. Press, 2016).

68. Cheek, J. Researching collaboratively: implications for qualitative research and researchers. Qual. Health Res. 18, 1599–1603 (2008).

69. Botha, L. Mixing methods as a process towards indigenous methodologies. Int. J. Soc. Res. Methodol. 14, 313–325 (2011).

70. Howlett, M. Looking at the ‘field’ through a Zoom lens: methodological reflections on conducting online research during a global pandemic. Qual. Res. https://doi.org/10.1177/1468794120985691 (2021).

71. Reñosa, M. D. C. et al. Selfie consents, remote rapport and Zoom debriefings: collecting qualitative data amid a pandemic in four resource-constrained settings. BMJ Glob. Health 6, e004193 (2021).

72. Mwambari, D., Purdeková, A. & Bisoka, A. N. Covid-19 and research in conflict-affected contexts: distanced methods and the digitalisation of suffering. Qual. Res. https://doi.org/10.1177/1468794121999014 (2021).

73. Colom, A. Using WhatsApp for focus group discussions: ecological validity, inclusion and deliberation. Qual. Res. https://doi.org/10.1177/1468794120986074 (2021).

74. Kaufmann, K. & Peil, C. The mobile instant messaging interview (MIMI): using WhatsApp to enhance self-reporting and explore media usage in situ. Mob. Media Commun. 8, 229–246 (2020).

75. Gibson, K. Bridging the digital divide: reflections on using WhatsApp instant messenger interviews in youth research. Qual. Res. Psychol. 19, 611–631 (2020).

76. Lawrence, L. Conducting cross-cultural qualitative interviews with mainland Chinese participants during COVID: lessons from the field. Qual. Res. https://doi.org/10.1177/1468794120974157 (2020).

77. Ponzoni, E. Windows of understanding: broadening access to knowledge production through participatory action research. Qual. Res. 16, 557–574 (2016).

78. Kong, T. S. Gay and grey: participatory action research in Hong Kong. Qual. Res. 18, 257–272 (2018).

79. Marzi, S. Participatory video from a distance: co-producing knowledge during the COVID-19 pandemic using smartphones. Qual. Res. https://doi.org/10.1177/14687941211038171 (2021).

80. Kvale, S. & Brinkmann, S. InterViews: Learning The Craft Of Qualitative Research Interviewing (Sage, 2008).

81. Rao, A. H. The ideal job-seeker norm: unemployment and marital privileges in the professional middle-class. J. Marriage Fam. 83, 1038–1057 (2021).

82. Rivera, L. A. Ivies, extracurriculars and exclusion: elite employers’ use of educational credentials. Res. Soc. Stratif. Mobil. 29, 71–90 (2011).

Acknowledgements

The authors are grateful to the MY421 team and students for prompting how best to frame and communicate issues pertinent to in-depth interview studies.

Author information

Authors and affiliations

Department of Methodology, London School of Economics, London, UK

Eleanor Knott, Aliya Hamid Rao, Kate Summers & Chana Teeger


Contributions

The authors contributed equally to all aspects of the article.

Corresponding author

Correspondence to Eleanor Knott .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Reviews Methods Primers thanks Jonathan Potter and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Glossary

Topic guide

A pre-written interview outline for a semi-structured interview that provides both a topic structure and the ability to adapt flexibly to the content and context of the interview and the interaction between the interviewer and participant. Others may refer to the topic guide as an interview protocol.

Sample

Here we refer to the participants that take part in the study as the sample. Other researchers may refer to the participants as a participant group or dataset.

Stratified sampling

This involves dividing a population into smaller groups based on particular characteristics, for example, age or gender, and then sampling randomly within each group.
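As a minimal sketch of this logic (the population and strata are invented for illustration):

```python
import random

# Hypothetical population with an age-group characteristic.
population = [
    {"id": 1, "age_group": "18-34"}, {"id": 2, "age_group": "18-34"},
    {"id": 3, "age_group": "35-54"}, {"id": 4, "age_group": "35-54"},
    {"id": 5, "age_group": "55+"}, {"id": 6, "age_group": "55+"},
]

def stratified_sample(people, key, per_stratum):
    """Draw a random sample of per_stratum people from each subgroup."""
    strata = {}
    for person in people:
        strata.setdefault(person[key], []).append(person)
    return [p for group in strata.values()
            for p in random.sample(group, per_stratum)]

sample = stratified_sample(population, "age_group", per_stratum=1)
```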

Purposive sampling

A sampling method where the guiding logic when deciding who to recruit is to achieve the most relevant participants for the research topic, in terms of being rich in information or insights.

Snowball sampling

Researchers ask participants to introduce the researcher to others who meet the study’s inclusion criteria.

Quota sampling

Similar to stratified sampling, but participants are not necessarily randomly selected. Instead, the researcher determines how many people from each category of participants should be recruited. Recruitment can happen via snowball or purposive sampling.

Thematic analysis

A method for developing, analysing and interpreting patterns across data by coding in order to develop themes.

Discourse analysis

An approach that interrogates the explicit, implicit and taken-for-granted dimensions of language, as well as the contexts in which it is articulated, to unpack its purposes and effects.

Clean verbatim transcription

A form of transcription that simplifies what has been said by removing certain verbal and non-verbal details that add no further meaning, such as ‘ums and ahs’ and false starts.
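As a toy illustration of the mechanical part of this tidying (assuming English-language fillers; judgments about false starts and meaning cannot be reduced to a pattern match):

```python
import re

raw = "So, um, I think it was, uh, mostly about trust."

# Strip common fillers, then tidy any leftover spacing.
clean = re.sub(r"\b(um+|uh+|er+|ah+)\b[,.]?\s*", "", raw, flags=re.IGNORECASE)
clean = re.sub(r"\s{2,}", " ", clean).strip()
# clean == "So, I think it was, mostly about trust."
```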

Deductive approach

The analytic framework, theoretical approach and often hypotheses are developed prior to examining the data and then applied to the dataset.

Inductive approach

The analytic framework and theoretical approach are developed from analysing the data.

Abductive approach

An approach that combines deductive and inductive components to work recursively by going back and forth between data and existing theoretical frameworks (also described as an iterative approach). This approach is increasingly recognized not only as a more realistic but also a more desirable third alternative to the traditional binary choice between inductive and deductive approaches.

Bourdieusian theory

A theoretical apparatus that emphasizes the role of cultural processes and capital in (intergenerational) social reproduction.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Knott, E., Rao, A. H., Summers, K. & Teeger, C. Interviews in the social sciences. Nat. Rev. Methods Primers 2, 73 (2022). https://doi.org/10.1038/s43586-022-00150-6

Accepted: 14 July 2022

Published: 15 September 2022

DOI: https://doi.org/10.1038/s43586-022-00150-6




Qualitative study design: Interviews


Interviews are intended to find out about the experiences, understandings, opinions or motivations of participants. The relationship between the interviewer and interviewee is crucial to the success of the research interview; the interviewer builds an environment of trust with the interviewee/s, guiding the interviewee/s through a set of topics or questions to be discussed in depth.

Interviews are the most commonly used qualitative data gathering technique and are used with grounded theory, focus groups, and case studies.

  • Interviews are purposive conversations between the researcher and the interviewee, either alone or as part of a group
  • Interviews can be face to face, via telecommunications (Skype, Facetime, or phone), or via email (internet or email interview)
  • The length of an interview varies: it may be anywhere from thirty minutes to several hours, depending on your research approach
  • Structured interviews use a set list of questions which need to be asked in order, increasing the reliability and credibility of the data but decreasing responsiveness to interviewee/s. Structured interviews are like a verbal survey
  • Unstructured interviews are where the interviewer has a set list of topics to address but no predetermined questions. This increases the flexibility of the interview but decreases the reliability of the data. Unstructured interviews may be used in long-term field observation research
  • Semi-structured interviews are the middle ground. Semi-structured interviews require the interviewer to have a list of questions and topics pre-prepared, which can be asked in different ways with different interviewee/s. Semi-structured interviews increase the flexibility and the responsiveness of the interview while keeping the interview on track, increasing the reliability and credibility of the data. Semi-structured interviews are one of the most common interview techniques.
Advantages

  • Flexible – probing questions can be asked, and the order of questions changed, depending on the participant and how structured or unstructured the interview is
  • Quick way to collect data
  • Familiarity – most interviewees are familiar with the concept of an interview and are comfortable with this research approach

Limitations

  • Not all participants are equally articulate or perceptive
  • Questions must be worded carefully to reduce response bias
  • Transcription of interviews can be time and labour intensive

Example questions

  • What are the experiences of midwives in providing care to high-risk mothers, where there is a history of drug or alcohol use?

Example studies

Sandelin, A., Kalman, S. & Gustafsson, B. (2019). Prerequisites for safe intraoperative nursing care and teamwork – operating theatre nurses’ perspectives: a qualitative interview study. Journal of Clinical Nursing, 28, 2635–2643. doi: 10.1111/jocn.14850




How to carry out great interviews in qualitative research

An interview is one of the most versatile methods used in qualitative research. Here’s what you need to know about conducting great qualitative interviews.

What is a qualitative research interview?

Qualitative research interviews are a mainstay among qualitative research techniques, and have been in use for decades either as a primary data collection method or as an adjunct to a wider research process. A qualitative research interview is a one-to-one data collection session between a researcher and a participant. Interviews may be carried out face-to-face, over the phone or via video call using a service like Skype or Zoom.

There are three main types of qualitative research interview – structured, unstructured or semi-structured.

  • Structured interviews: based around a schedule of predetermined questions and talking points that the researcher has developed. At their most rigid, structured interviews may have a precise wording and question order, meaning that they can be replicated across many different interviewers and participants with relatively consistent results.
  • Unstructured interviews: these have no predetermined format, although that doesn’t mean they’re ad hoc or unplanned. An unstructured interview may outwardly resemble a normal conversation, but the interviewer will in fact be working carefully to make sure the right topics are addressed during the interaction while putting the participant at ease with a natural manner.
  • Semi-structured interviews: the most common type of qualitative research interview, combining the informality and rapport of an unstructured interview with the consistency and replicability of a structured interview. The researcher will come prepared with questions and topics, but will not need to stick to precise wording. This blended approach can work well for in-depth interviews.


What are the pros and cons of interviews in qualitative research?

As a qualitative research method, interviewing is hard to beat, with applications in social research, market research and even basic and clinical pharmacy. But like any aspect of the research process, it’s not without its limitations. Before choosing qualitative interviewing as your research method, it’s worth weighing up the pros and cons.

Pros of qualitative interviews:

  • provide in-depth information and context
  • can be used effectively when there are low numbers of participants
  • provide an opportunity to discuss and explain questions
  • useful for complex topics
  • rich in data – in the case of in-person or video interviews, the researcher can observe body language and facial expression as well as the answers to questions

Cons of qualitative interviews:

  • can be time-consuming to carry out
  • costly when compared to some other research methods
  • because of time and cost constraints, they often limit you to a small number of participants
  • difficult to standardize your data across different researchers and participants unless the interviews are very tightly structured
  • may take an emotional toll on interviewers, as the Open University of Hong Kong notes

Qualitative interview guides

Semi-structured interviews are based on a qualitative interview guide, which acts as a road map for the researcher. While conducting interviews, the researcher can use the interview guide to help them stay focused on their research questions and make sure they cover all the topics they intend to.

An interview guide may include a list of questions written out in full, or it may be a set of bullet points grouped around particular topics. It can prompt the interviewer to dig deeper and ask probing questions during the interview if appropriate.

Consider writing out the project’s research question at the top of your interview guide, ahead of the interview questions. This may help you steer the interview in the right direction if it threatens to head off on a tangent.
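
Where a research team wants to keep wording and coverage consistent across interviewers, the guide can also be held as structured data. Below is a minimal sketch in Python; the research question, topics, and question wording are all hypothetical placeholders, not a prescribed template.

```python
# A minimal sketch of an interview guide as a Python data structure.
# All topic and question text here is hypothetical.
interview_guide = {
    "research_question": "How do remote employees experience team communication?",
    "topics": [
        {
            "topic": "Daily communication",
            "questions": [
                "Can you walk me through how you communicate with your team on a typical day?",
                "What made you choose those channels?",  # open-ended follow-up
            ],
        },
        {
            "topic": "Challenges",
            "questions": [
                "Can you tell me more about a time communication broke down?",
            ],
        },
    ],
}

# Print the guide so it can be reviewed before an interview.
print(interview_guide["research_question"])
for block in interview_guide["topics"]:
    print(f"\n{block['topic']}")
    for q in block["questions"]:
        print(f"  - {q}")
```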


Avoid bias in qualitative research interviews

According to Duke University, bias can create significant problems in your qualitative interview.

  • Acquiescence bias is common to many qualitative methods, including focus groups. It occurs when the participant feels obliged to say what they think the researcher wants to hear. This can be especially problematic when there is a perceived power imbalance between participant and interviewer. To counteract this, Duke University’s experts recommend emphasizing the participant’s expertise in the subject being discussed, and the value of their contributions.
  • Interviewer bias is when the interviewer’s own feelings about the topic come to light through hand gestures, facial expressions or turns of phrase. Duke’s recommendation is to stick to scripted phrases where this is an issue, and to make sure researchers become very familiar with the interview guide or script before conducting interviews, so that they can hone their delivery.

What kinds of questions should you ask in a qualitative interview?

The interview questions you ask need to be carefully considered both before and during the data collection process. As well as considering the topics you’ll cover, you will need to think carefully about the way you ask questions.

Open-ended interview questions – which cannot be answered with a ‘yes’, ‘no’ or ‘maybe’ – are recommended by many researchers as a way to pursue in-depth information.

An example of an open-ended question is “What made you want to move to the East Coast?” This will prompt the participant to consider different factors and select at least one. Having thought about it carefully, they may give you more detailed information about their reasoning.

A closed-ended question, such as “Would you recommend your neighborhood to a friend?” can be answered without too much deliberation, and without giving much information about personal thoughts, opinions and feelings.

Follow-up questions can be used to delve deeper into the research topic and to get more detail from open-ended questions. Examples of follow-up questions include:

  • What makes you say that?
  • What do you mean by that?
  • Can you tell me more about X?
  • What did/does that mean to you?

As well as avoiding closed-ended questions, be wary of leading questions. As with other qualitative research techniques such as surveys or focus groups, these can introduce bias in your data. Leading questions presume a certain point of view shared by the interviewer and participant, and may even suggest a foregone conclusion.

An example of a leading question might be: “You moved to New York in 1990, didn’t you?” In answering the question, the participant is much more likely to agree than disagree. This may be down to acquiescence bias or a belief that the interviewer has checked the information and already knows the correct answer.

Other leading questions involve adjectival phrases or other wording that introduces negative or positive connotations about a particular topic. An example of this kind of leading question is: “Many employees dislike wearing masks to work. How do you feel about this?” It presumes a positive opinion and the participant may be swayed by it, or not want to contradict the interviewer.

Harvard University’s guidelines for qualitative interview research add that you shouldn’t be afraid to ask embarrassing questions – “if you don’t ask, they won’t tell.” Bear in mind, though, that too much probing around sensitive topics may cause the interview participant to withdraw. The Harvard guidelines recommend leaving sensitive questions until the later stages of the interview, when a rapport has been established.

More tips for conducting qualitative interviews

Observing a participant’s body language can give you important data about their thoughts and feelings. It can also help you decide when to broach a topic, and whether to use a follow-up question or return to the subject later in the interview.

Be conscious that the participant may regard you as the expert, not themselves. In order to make sure they express their opinions openly, use active listening skills like verbal encouragement and paraphrasing and clarifying their meaning to show how much you value what they are saying.

Remember that part of the goal is to leave the interview participant feeling good about volunteering their time and their thought process to your research. Aim to make them feel empowered, respected and heard.

Unstructured interviews can demand a lot of a researcher, both cognitively and emotionally. Be sure to leave time in between in-depth interviews when scheduling your data collection to make sure you maintain the quality of your data, as well as your own well-being.

Recording and transcribing interviews

Historically, recording qualitative research interviews and then transcribing the conversation manually would have represented a significant part of the cost and time involved in research projects that collect qualitative data.

Fortunately, researchers now have access to digital recording tools, and even speech-to-text technology that can automatically transcribe interview data using AI and machine learning. This type of tool can also be used to capture qualitative data from other qualitative research formats (focus groups, etc.), making this kind of social research or market research much less time-consuming.
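
As an illustration, here is a minimal transcription sketch in Python using the open-source Whisper speech-to-text library (one possible tool among several, chosen here as an assumption); the audio file name is a hypothetical placeholder.

```python
# A sketch of automatic interview transcription with the open-source
# "whisper" library. Install with: pip install openai-whisper
import whisper

model = whisper.load_model("base")             # small, general-purpose model
result = model.transcribe("interview_01.mp3")  # hypothetical recording path
print(result["text"])                          # full transcript as plain text

# Save the transcript for later coding and analysis.
with open("interview_01_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```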


Data analysis

Qualitative interview data is unstructured, rich in content and difficult to analyze without the appropriate tools. Fortunately, machine learning and AI can once again make things faster and easier when you use qualitative methods like the research interview.

Text analysis tools and natural language processing software can ‘read’ your transcripts and voice data and identify patterns and trends across large volumes of text or speech. They can also perform sentiment analysis (https://www.qualtrics.com/experience-management/research/sentiment-analysis/), which assesses overall trends in opinion and provides an unbiased overall summary of how participants are feeling.
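
A minimal sketch of what such sentiment analysis can look like in code, using NLTK’s VADER analyzer as one stand-in tool (an assumption, not the tool any particular platform uses); the quoted excerpt is hypothetical.

```python
# A minimal sentiment-analysis sketch with NLTK's VADER analyzer.
# Install with: pip install nltk
import nltk

nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
excerpt = "I really enjoy the flexibility, but the isolation can be hard."
scores = sia.polarity_scores(excerpt)
print(scores)  # e.g. {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```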


Another feature of text analysis tools is their ability to categorize information by topic, sorting it into groupings that help you organize your data according to the topic discussed.
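
As a rough sketch of how such topic grouping can work under the hood, the following Python example uses scikit-learn’s TF-IDF and non-negative matrix factorization as stand-ins for a commercial text analysis tool; the transcript snippets are hypothetical.

```python
# A sketch of automatic topic grouping over interview excerpts.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [
    "We mostly talk about workload and deadlines in our team meetings.",
    "Deadlines are stressful and the workload keeps growing.",
    "I value the support I get from my manager and colleagues.",
    "Colleagues support each other when someone is struggling.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(transcripts)

nmf = NMF(n_components=2, random_state=0)  # assume two broad topics
nmf.fit(X)

# Show the most heavily weighted words for each discovered topic.
terms = vectorizer.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top = component.argsort()[-4:][::-1]
    print(f"Topic {i}:", ", ".join(terms[j] for j in top))
```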

All in all, interviews are a valuable technique for qualitative research in business, yielding rich and detailed unstructured data. Historically, they have only been limited by the human capacity to interpret and communicate results and conclusions, which demands considerable time and skill.

When you combine this data with AI tools that can interpret it quickly and automatically, it becomes easy to analyze and structure, dovetailing perfectly with your other business data. An additional benefit of natural language analysis tools is that they apply the same approach consistently across as much data as you choose, making them less prone to the subjective biases of human coders. By combining human research skills with machine analysis, qualitative research methods such as interviews are more valuable than ever to your business.

Qualitative Interviewing


2 Research Design in Interview Studies

Published: May 2013

This chapter discusses several issues that should be taken into account when considering research designs in qualitative interviewing. It outlines a generic step-wise approach to design that clarifies what to do, when to do it, and why to do this rather than that in the research process. It explains three broad aims for qualitative interviewing: discovery, construction, and understanding. The chapter also discusses some conceptual distinctions between inductive, deductive, and abductive designs. It refers to three examples from paradigmatic interview studies to demonstrate how different designs enable different research processes and results.

Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common qualitative designs, such as case studies and ethnographies, often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
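
To make the contrast concrete, here is a minimal Python sketch of a simple random (probability) sample versus a convenience (non-probability) sample, using a hypothetical list of participant IDs.

```python
# Contrasting probability and non-probability sampling on a
# hypothetical population of 500 participant IDs.
import random

population = [f"participant_{i:03d}" for i in range(1, 501)]

# Probability sampling: every member has an equal, known chance of selection.
random.seed(42)  # fixed seed so the draw is reproducible
probability_sample = random.sample(population, k=50)

# Non-probability (convenience) sampling: simply taking the first 50 people
# who were easiest to reach. Quick, but prone to bias.
convenience_sample = population[:50]

print(probability_sample[:5])
print(convenience_sample[:5])
```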

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
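
As a small worked example, the sketch below operationalises “satisfaction” as the mean of three hypothetical Likert-scale items; the 1–5 scale mapping is one common convention, not the only option.

```python
# Operationalising an abstract concept ("satisfaction") as a numeric
# score from hypothetical Likert-scale survey responses.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# One participant's answers to three satisfaction items.
responses = ["agree", "strongly agree", "neutral"]

scores = [LIKERT[r] for r in responses]
satisfaction_score = sum(scores) / len(scores)  # simple mean of the items
print(f"Satisfaction score: {satisfaction_score:.2f} (on a 1-5 scale)")
```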

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
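
One common internal-consistency check after a pilot is Cronbach’s alpha, computed as k/(k-1) * (1 - (sum of item variances) / (variance of total scores)) for k items. The sketch below computes it for a hypothetical pilot response matrix; the data and the 0.7 rule of thumb are illustrative assumptions.

```python
# Checking internal-consistency reliability with Cronbach's alpha.
# Rows are participants, columns are questionnaire items (hypothetical data).
import numpy as np

responses = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])

k = responses.shape[1]                         # number of items
item_vars = responses.var(axis=0, ddof=1)      # sample variance of each item
total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # ~0.7+ is often taken as acceptable
```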

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.
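
As an illustration of one small piece of such a plan, here is a minimal pseudonymisation sketch in Python; the names and lookup table are hypothetical, and a real data management plan would also cover secure storage, access control, and backups.

```python
# Pseudonymising transcript text: known participant names (a hypothetical
# lookup you would keep securely, separate from the data) are replaced
# with neutral codes before analysis.
import re

name_to_code = {"Maria": "P01", "James": "P02"}  # hypothetical names

def pseudonymise(text: str) -> str:
    """Replace each known name with its participant code."""
    for name, code in name_to_code.items():
        text = re.sub(rf"\b{re.escape(name)}\b", code, text)
    return text

transcript = "Maria said she often works late, and James agreed."
print(pseudonymise(transcript))
# -> "P01 said she often works late, and P02 agreed."
```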

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
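
To make these measures concrete, here is a short Python sketch computing them for a hypothetical set of test scores using only the standard library.

```python
# Descriptive statistics for a hypothetical set of test scores.
from collections import Counter
from statistics import mean, stdev

scores = [72, 85, 85, 90, 64, 78, 85, 90, 70, 82]

print("Distribution:", Counter(scores))             # frequency of each score
print(f"Mean: {mean(scores):.1f}")                  # central tendency
print(f"Standard deviation: {stdev(scores):.1f}")   # variability (sample SD)
```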

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
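
As a concrete example of a comparison test, the sketch below runs an independent-samples t test with SciPy on two hypothetical groups of scores.

```python
# An independent-samples t test comparing two hypothetical groups.
from scipy import stats

group_a = [72, 85, 78, 90, 64, 82]
group_b = [60, 70, 65, 75, 68, 66]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the difference in group means is unlikely
# to be due to sampling variation alone.
```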

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.
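
Software can help with the bookkeeping once codes have been assigned by hand. The sketch below tallies hypothetical codes across transcripts with Python’s standard library; the interpretive work of reading and coding the data itself remains manual.

```python
# Tallying manually assigned codes across transcripts after thematic
# analysis. The codes and transcript names are hypothetical.
from collections import Counter

coded_transcripts = {
    "interview_01": ["flexibility", "isolation", "workload"],
    "interview_02": ["workload", "support"],
    "interview_03": ["flexibility", "support", "workload"],
}

all_codes = [code for codes in coded_transcripts.values() for code in codes]
print(Counter(all_codes).most_common())
# -> [('workload', 3), ('flexibility', 2), ('support', 2), ('isolation', 1)]
```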

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Research-Methodology

Interviews can be defined as a qualitative research technique which involves “conducting intensive individual interviews with a small number of respondents to explore their perspectives on a particular idea, program or situation.” [1]

There are three different formats of interviews: structured, semi-structured and unstructured.

Structured interviews consist of a series of pre-determined questions that all interviewees answer in the same order. Data analysis usually tends to be more straightforward because the researcher can compare and contrast different answers given to the same questions.

Unstructured interviews are usually the least reliable from a research viewpoint, because no questions are prepared prior to the interview and data collection is conducted in an informal manner. Unstructured interviews can be associated with a high level of bias, and comparison of answers given by different respondents tends to be difficult due to the differences in the formulation of questions.

Semi-structured interviews combine components of both structured and unstructured interviews. In semi-structured interviews, the interviewer prepares the same set of questions to be answered by all interviewees. At the same time, additional questions might be asked during interviews to clarify and/or further expand certain issues.

Advantages of interviews include the possibility of collecting detailed information about research questions. Moreover, in this type of primary data collection the researcher has direct control over the flow of the process and has a chance to clarify certain issues if needed. Disadvantages, on the other hand, include longer time requirements and difficulties associated with arranging an appropriate time with prospective sample group members to conduct interviews.

When conducting interviews you should keep an open mind and refrain from displaying disagreement in any form when viewpoints expressed by interviewees contradict your own ideas. Moreover, the timing and environment for interviews need to be scheduled effectively. Specifically, interviews need to be conducted in a relaxed environment, free of any form of pressure for interviewees whatsoever.

Respected scholars warn that “in conducting an interview the interviewer should attempt to create a friendly, non-threatening atmosphere. Much as one does with a cover letter, the interviewer should give a brief, casual introduction to the study; stress the importance of the person’s participation; and assure anonymity, or at least confidentiality, when possible.” [2]

There is a risk of interviewer bias during the primary data collection process, and this would seriously compromise the validity of the project findings. Some interviewer bias can be avoided by ensuring that the interviewer does not overreact to responses of the interviewee. Other steps that can be taken to help avoid or reduce interviewer bias include having the interviewer dress inconspicuously and appropriately for the environment and holding the interview in a private setting. [3]

John Dudovskiy

References

[1] Boyce, C. & Neale, P. (2006) “Conducting In-Depth Interviews: A Guide for Designing and Conducting In-Depth Interviews”, Pathfinder International Tool Series.

[2] Connaway, L. S. & Powell, R. P. (2010) “Basic Research Methods for Librarians”, ABC-CLIO.

[3] Connaway, L. S. & Powell, R. P. (2010) “Basic Research Methods for Librarians”, ABC-CLIO.


Appendix: Qualitative Interview Design

Daniel W. Turner III and Nicole Hagstrom-Schmidt

Qualitative Interview Design: A Practical Guide for Novice Investigators

Qualitative research design can be complicated depending upon the level of experience a researcher may have with a particular type of methodology. As researchers, many aspire to grow and expand their knowledge and experiences with qualitative design in order to better utilize a variety of research paradigms. One of the more popular areas of interest in qualitative research design is that of the interview protocol. Interviews provide in-depth information pertaining to participants’ experiences and viewpoints of a particular topic. Oftentimes, interviews are coupled with other forms of data collection in order to provide the researcher with a well-rounded collection of information for analyses. This paper explores the effective ways to conduct in-depth, qualitative interviews for novice investigators by expanding upon the practical components of each interview design.

Categories of Qualitative Interview Design

As is common with quantitative analyses, there are various forms of interview design that can be developed to obtain thick, rich data utilizing a qualitative investigational perspective. [1] For the purpose of this examination, there are three formats for interview design that will be explored, which are summarized by Gall, Gall, and Borg:

  • Informal conversational interview,
  • General interview guide approach,
  • Standardized open-ended interview. [2]

In addition, I will expand on some suggestions for conducting qualitative interviews, including the construction of research questions as well as the analysis of interview data. These suggestions come from both my personal experiences with interviewing and recommendations from the literature to assist novice interviewers.

Informal Conversational Interview

The informal conversational interview is outlined by Gall, Gall, and Borg for the purpose of relying “…entirely on the spontaneous generation of questions in a natural interaction, typically one that occurs as part of ongoing participant observation fieldwork.” [3] I am curious when it comes to other cultures or religions and I enjoy immersing myself in these environments as an active participant. I ask questions in order to learn more about these social settings without having a predetermined set of structured questions. Primarily the questions come from “in the moment experiences” as a means for further understanding or clarification of what I am witnessing or experiencing at a particular moment. With the informal conversational approach, the researcher does not ask any specific types of questions, but rather relies on the interaction with the participants to guide the interview process. [4] Think of this type of interview as an “off the top of your head” style of interview where you really construct questions as you move forward. Many consider this type of interview beneficial because of the lack of structure, which allows for flexibility in the nature of the interview. However, many researchers view this type of interview as unstable or unreliable because of the inconsistency in the interview questions, thus making it difficult to code data. [5] If you choose to conduct an informal conversational interview, it is critical to understand the need for flexibility and originality in the questioning as a key for success.

General Interview Guide Approach

The general interview guide approach is more structured than the informal conversational interview, although there is still quite a bit of flexibility in its composition. [6] How the questions are worded depends upon the researcher conducting the interview. One obvious issue with this type of interview is therefore a lack of consistency in how research questions are posed, because researchers can vary the wording from one interview to the next. Respondents may not answer the same question(s) consistently, depending on how the interviewer posed them. [7] During research for my doctoral dissertation, I was able to interact with alumni participants in a relaxed and informal manner and learn about their in-depth experiences through guided interviews. This informal environment allowed me to develop rapport with the participants, so that I could ask follow-up or probing questions based on their responses to pre-constructed questions. I found this quite useful because I could adapt or reorder questions based on participants' responses to previous questions. The questions were structured, but adapting them allowed me to take a more personal approach in each alumni interview.

According to McNamara, the strength of the general interview guide approach is the ability of the researcher “…to ensure that the same general areas of information are collected from each interviewee; this provides more focus than the conversational approach, but still allows a degree of freedom and adaptability in getting information from the interviewee.” [8] The researcher remains in the driver’s seat with this type of interview approach, but flexibility takes precedence based on perceived prompts from the participants.

You might ask, "What does this mean anyway?" The easiest way to answer that question is to think about your own experiences at a job interview. When you were invited to a job interview in the past, you might have prepared for all sorts of curveball questions, wanting an answer ready for every possibility. If the interviewer were using a general interview guide approach, he or she would pose questions in his or her own unique style, which might differ from the way the questions were originally written. You, as the interviewee, would respond to the questions as asked, and that would shape how the interview continued. Depending on how the interviewer worded the question(s), you might have been able to provide more or less information than other job candidates. It is easy to see how this could positively or negatively influence a candidate's prospects when an interviewer uses a general interview guide approach.

Standardized Open-Ended Interviews

The standardized open-ended interview is extremely structured in terms of the wording of the questions. Participants are always asked identical questions, but the questions are worded so that responses are open-ended. [9] This open-endedness allows participants to contribute as much detailed information as they desire, and it also allows the researcher to ask probing questions as a means of follow-up. Standardized open-ended interviews are likely the most popular form of interviewing in research studies because the open-ended questions allow participants to fully express their viewpoints and experiences. The chief weakness of open-ended interviewing is the difficulty of coding the data. [10] Because open-ended interviews ask participants to express their responses in as much detail as they wish, it can be quite difficult for researchers to extract common themes or codes from the interview transcripts, as they could with more constrained responses. Although the data participants provide are rich and thick, sifting through the narrative responses to accurately reflect an overall perspective across all interviews can be a cumbersome coding process. However, according to Gall, Gall, and Borg, this approach reduces researcher bias within the study, particularly when the interviewing process involves many participants. [11]

Suggestions for Conducting Qualitative Interviews

Now that we have covered a few of the more popular interview designs available to qualitative researchers, we can examine various suggestions for conducting qualitative interviews based on the available research. These suggestions are designed to give the researcher the tools needed to conduct a well-constructed, professional interview. According to Creswell, [12] some of the most common topics in the literature on interviewing are:

  • Preparation for the interview,
  • Construction of effective research questions,
  • Implementation of the interview(s). [13]

Preparation for the Interview

Probably the most helpful tip for the interview process concerns preparation. Preparation can make or break a study: it can either alleviate or exacerbate the problems that may occur once the research is implemented. McNamara stresses the importance of the preparation stage for maintaining an unambiguous focus on how the interviews will be constructed so that they provide maximum benefit to the proposed research study. [14] Along these lines, Chenail provides a number of pre-interview exercises researchers can use to improve their instrumentality and address potential biases. [15] McNamara applies eight principles to the preparation stage of interviewing, which include the following ingredients:

  • Choose a setting with little distraction;
  • Explain the purpose of the interview;
  • Address terms of confidentiality;
  • Explain the format of the interview;
  • Indicate how long the interview usually takes;
  • Tell them how to get in touch with you later if they want to;
  • Ask them if they have any questions before you both get started with the interview;
  • Don’t count on your memory to recall their answers. [16]

Selecting Participants

Creswell discusses the importance of selecting appropriate candidates for interviews. He asserts that the researcher should use one of the various sampling strategies, such as criterion-based sampling or critical case sampling (among many others), to obtain qualified candidates who will provide the most credible information for the study. [17] Creswell also notes the importance of recruiting participants who will be willing to share information, or "their story," openly and honestly. [18] It is often easier to conduct interviews in a comfortable environment where participants do not feel restricted or uncomfortable about sharing information.

Pilot Testing

Another important element of interview preparation is the implementation of a pilot test. The pilot test assists the researcher in determining whether there are flaws, limitations, or other weaknesses within the interview design, and it allows him or her to make necessary revisions prior to the implementation of the study. [19] A pilot test should be conducted with participants who have interests similar to those who will participate in the implemented study. The pilot test will also assist researchers with the refinement of research questions, which are discussed in the next section.

Constructing Effective Research Questions

Creating effective research questions for the interview process is one of the most crucial components of interview design. Researchers should take care that each question allows the examiner to dig deep into the experiences and/or knowledge of the participants in order to gain maximum data from the interviews. McNamara offers several recommendations for creating effective interview questions, which include the following elements (a small sketch after this list shows how a few of these wording checks might be automated):

  • Wording should be open-ended (respondents should be able to choose their own terms when answering questions);
  • Questions should be as neutral as possible (avoid wording that might influence answers, e.g., evocative, judgmental wording);
  • Questions should be asked one at a time;
  • Questions should be worded clearly (this includes knowing any terms particular to the program or the respondents’ culture); and
  • Be careful asking “why” questions. [20]
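Some of these wording criteria lend themselves to simple automated screening. The sketch below is a minimal illustration only, under the assumption that a researcher keeps draft questions as plain text; it is not part of McNamara's guidelines, and every name, word list, and threshold in it is a hypothetical choice. A human reviewer, not a script, should make the final call.

```python
# Hypothetical helper: screen draft interview questions against a few
# of the wording criteria listed above. Word lists and rules are
# illustrative assumptions, not McNamara's own.

CLOSED_STARTERS = ("do ", "did ", "is ", "are ", "was ", "were ", "have ", "has ")
LOADED_WORDS = {"obviously", "clearly", "terrible", "wonderful"}  # example judgmental terms

def flag_question(question: str) -> list[str]:
    """Return a list of possible wording problems for one draft question."""
    flags = []
    q = question.strip().lower()
    if q.startswith(CLOSED_STARTERS):
        flags.append("may be closed-ended; consider rewording to 'how' or 'what'")
    if q.count("?") > 1 or " and " in q:
        flags.append("may ask more than one question at a time")
    if q.startswith("why"):
        flags.append("'why' questions can feel accusatory; use with care")
    if LOADED_WORDS & set(q.replace("?", "").split()):
        flags.append("contains potentially evocative or judgmental wording")
    return flags

for problem in flag_question("Why did you quit, and do you regret it?"):
    print("-", problem)
```

Run on the sample question, the sketch would flag the "why" opening and the double-barreled "and"; whether those flags matter is still a judgment call for the interviewer.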

Examples of Useful and Not-So-Useful Research Questions

To assist the novice interviewer with the preparation of research questions, I will propose one useful research question and one not-so-useful research question. Following McNamara's suggestion, it is important to ask an open-ended question. [21] For the useful question, I propose the following: "How have your experiences as a kindergarten teacher influenced, or not influenced, the decisions you have made in raising your children?" This question allows the respondent to discuss how his or her experiences as a kindergarten teacher have or have not affected decision-making with his or her own children, without assuming that those experiences have had an influence. A less useful version of the same question might be worded this way: "How have your experiences as a kindergarten teacher affected you as a parent?" The question is still open-ended, but it assumes that the experiences have indeed affected the respondent as a parent. As researchers, we cannot build this assumption into the wording of our questions.

Follow-Up Questions

Creswell also suggests being flexible with the research questions being constructed. [22] He observes that respondents will not necessarily answer the question the researcher asks and may, in fact, answer a question that is posed later in the interview. Creswell believes the researcher must construct questions in a way that keeps participants focused in their responses. In addition, the researcher must be prepared with follow-up questions or prompts to ensure optimal responses from participants. When I was an Assistant Director for a large division at my university a couple of years ago, I was tasked with hiring student affairs coordinators at our off-campus educational centers. Throughout the interviewing process, I found that interviewees did indeed get off topic with certain questions, either because they misunderstood the question(s) or because they did not wish to answer directly. Following Creswell's suggestion, [23] I reworded questions so that they were clearly assembled to reduce misunderstanding, and I constructed effective follow-up prompts to further understanding. This alleviated many of the problems I had and helped me extract the information I needed through follow-up questioning.

Implementation of Interviews

As with other sections of interview design, McNamara makes some excellent recommendations for the implementation stage of the interview process. He includes the following tips for interview implementation:

  • Occasionally verify that the tape recorder (if used) is working;
  • Ask one question at a time;
  • Attempt to remain as neutral as possible (that is, don't show strong emotional reactions to responses);
  • Encourage responses with occasional nods of the head, "uh huh"s, etc.;
  • Be careful about your appearance when taking notes (that is, if you jump to take a note, it may appear as if you're surprised or very pleased by an answer, which may influence answers to future questions);
  • Provide transitions between major topics, e.g., "We've been talking about (some topic), and now I'd like to move on to (another topic)";
  • Don't lose control of the interview (this can occur when respondents stray to another topic, take so long to answer a question that time begins to run out, or even begin asking questions of the interviewer). [24]

Interpreting Data

The final constituent of the interview design process is interpreting the data gathered during the interviews. During this phase, the researcher must make "sense" of what was uncovered and compile the data into sections or groups of information, also known as themes or codes. [25] These themes or codes are consistent phrases, expressions, or ideas that were common among research participants. [26] How researchers formulate themes or codes varies. Many researchers suggest employing a third-party consultant who can review the codes or themes and evaluate their quality and effectiveness against the interview transcripts. [27] This helps alleviate researcher bias and guards against over-analysis of the data. Many researchers choose an iterative review process in which a committee of nonparticipating researchers provides constructive feedback and suggestions to the researcher(s) primarily involved with the study.
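The mechanical side of this compiling step can be illustrated with a short sketch. The participant IDs, excerpts, and code labels below are invented for illustration; the tallying shown is only the bookkeeping that surrounds interpretation, not the interpretive work itself.

```python
# Hypothetical bookkeeping for hand-coded interview data: tally how
# often each code appears and how many participants contributed to it.
from collections import Counter, defaultdict

# (participant_id, excerpt, code) triples produced by a human coder
coded_excerpts = [
    ("P1", "I never felt heard in meetings", "voice"),
    ("P2", "My manager listened but nothing changed", "voice"),
    ("P2", "The workload doubled overnight", "workload"),
    ("P3", "I stopped raising concerns", "voice"),
]

code_counts = Counter(code for _, _, code in coded_excerpts)
participants_by_code = defaultdict(set)
for pid, _, code in coded_excerpts:
    participants_by_code[code].add(pid)

for code, count in code_counts.most_common():
    print(f"{code}: {count} excerpt(s) from {len(participants_by_code[code])} participant(s)")
```

Counts like these can help a third-party reviewer see at a glance which themes rest on a single participant and which recur across the sample.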

From choosing the appropriate interview design through the interpretation of interview data, this guide proposes a practical way to conduct qualitative research interviews based on the recommendations and experiences of qualified researchers in the field and on my own personal experiences. Although qualitative investigation offers a myriad of approaches, interview design has remained one of the more popular. As qualitative research methods become more widely utilized across research institutions, we will continue to see more practical guides for protocol implementation outlined in peer-reviewed journals across the world.

This text was derived from

Turner, Daniel W., III. “Qualitative Interview Design: A Practical Guide for Novice Investigators.” The Qualitative Report 15, no. 3 (2010): 754–760. https://doi.org/10.46743/2160-3715/2010.1178. Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

It is edited and reformatted by Nicole Hagstrom-Schmidt.

  • John W. Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 2nd ed. (Thousand Oaks, CA: Sage, 2007). ↵
  • M. D. Gall, Walter R. Borg, and Joyce P. Gall, Educational Research: An Introduction, 7th ed. (Boston, MA: Pearson, 2003). ↵
  • M. D. Gall, Walter R. Borg, and Joyce P. Gall, Educational Research: An Introduction, 7th ed. (Boston, MA: Pearson, 2003), 239. ↵
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm. ↵
  • M. D. Gall, Walter R. Borg, and Joyce P. Gall, Educational Research: An Introduction, 7th ed. (Boston, MA: Pearson, 2003). ↵
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm. ↵
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, “Types of Interviews” section, para. 1, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm. ↵
  • John W. Creswell, Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 3rd ed. (Thousand Oaks, CA: Sage, 2003); John W. Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 2nd ed. (Thousand Oaks, CA: Sage, 2007). ↵
  • Ronald J. Chenail, “Interviewing the Investigator: Strategies for Addressing Instrumentation and Researcher Bias Concerns in Qualitative Research,” The Qualitative Report 16, no. 1 (2011): 255–262, https://nsuworks.nova.edu/tqr/vol16/iss1/16/. ↵
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, “Preparation for Interview” section, para. 1, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm. ↵
  • John W. Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 2nd ed. (Thousand Oaks, CA: Sage, 2007), 133. ↵
  • Steinar Kvale, Doing Interviews (London and Thousand Oaks, CA: Sage, 2007), https://doi.org/10.4135/9781849208963. ↵
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, “Wording of Questions” section, para. 1, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm. ↵
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, “Conducting Interview” section, para. 1, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm. ↵
  • Steinar Kvale, Doing Interviews (London and Thousand Oaks, CA: Sage, 2007), https://doi.org/10.4135/9781849208963. ↵

Appendix: Qualitative Interview Design Copyright © 2022 by Daniel W. Turner III and Nicole Hagstrom-Schmidt is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


9.4 Types of qualitative research designs

Learning Objectives

  • Define focus groups and outline how they differ from one-on-one interviews
  • Describe how to determine the best size for focus groups
  • Identify the important considerations in focus group composition
  • Discuss how to moderate focus groups
  • Identify the strengths and weaknesses of focus group methodology
  • Describe grounded theory, case study research, ethnography, and phenomenology.

There are various types of approaches to qualitative research. This chapter presents information about focus groups, which are often used in social work research. It also introduces grounded theory, case studies, ethnography, and phenomenology.

Focus Groups

Focus groups resemble qualitative interviews in that a researcher may prepare a guide in advance and interact with participants by asking them questions. But anyone who has conducted both one-on-one interviews and focus groups knows that each is unique. In an interview, usually one member (the research participant) is most active while the other (the researcher) plays the role of listener, conversation guider, and question-asker. Focus groups, on the other hand, are planned discussions designed to elicit group interaction and “obtain perceptions on a defined area of interest in a permissive, nonthreatening environment” (Krueger & Casey, 2000, p. 5). In focus groups, the researcher plays a different role than in a one-on-one interview: her aim is to get participants talking to one another, to observe interactions among participants, and to moderate the discussion.


There are numerous examples of focus group research. Amy Slater and Marika Tiggemann (2010), for example, conducted six focus groups with 49 adolescent girls between the ages of 13 and 15 to learn more about girls' attitudes toward participation in sports. In order to get focus group participants to speak with one another rather than with the group facilitator, the focus group interview guide contained just two questions: “Can you tell me some of the reasons that girls stop playing sports or other physical activities?” and “Why do you think girls don't play as much sport/physical activity as boys?” In another focus group study, Virpi Ylanne and Angie Williams (2009) held nine focus group sessions with adults of different ages to gauge their perceptions of how older characters are represented in television commercials. Among other considerations, the researchers were interested in discovering how focus group participants position themselves and others in terms of age stereotypes and identities during the group discussion. In both examples, the researchers' core interest in group interaction could not have been assessed had interviews been conducted on a one-on-one basis, making the focus group method an ideal choice.

Who should be in your focus group?

In some ways, focus groups require more planning than other qualitative methods of data collection, such as one-on-one interviews, in which a researcher may be better able to guide the dialogue. Researchers must take care to form focus groups whose members will want to interact with one another and must control the timing of the event so that participants are neither asked nor expected to stay longer than they agreed to. The researcher should also be prepared to inform focus group participants of their responsibility to maintain the confidentiality of what is said in the group. But while the researcher can and should encourage all focus group members to maintain confidentiality, she should also clarify to participants that the unique nature of the group setting prevents her from being able to promise that confidentiality will be maintained by other participants. Once focus group members leave the research setting, researchers cannot control what they say to other people.


Group size should be determined in part by the topic of the interview and your sense of the likelihood that participants will have much to say without much prompting. If the topic is one about which you think participants feel passionately and will have much to say, a group of 3–5 could make sense. Groups larger than that, especially for heated topics, can easily become unmanageable. Some researchers say that a group of about 6–10 participants is the ideal size for focus group research (Morgan, 1997); others recommend that groups should include 3–12 participants (Adler & Clark, 2008).  The size of the focus group is ultimately the decision of the researcher. When forming groups and deciding how large or small to make them, take into consideration what you know about the topic and participants’ potential interest in, passion for, and feelings about the topic. Also consider your comfort level and experience in conducting focus groups. These factors will help you decide which size is right in your particular case.

It may seem counterintuitive, but in general it is better to form focus groups of participants who do not know one another than to create groups of friends, relatives, or acquaintances (Agar & MacDonald, 1995). The reason is that group members who know each other may leave shared, taken-for-granted knowledge and assumptions unspoken, and in research it is precisely the taken-for-granted that is often of interest. The focus group researcher should therefore avoid setting up interactions in which participants may be discouraged from questioning or raising issues they take for granted. At the same time, group members should not be so different from one another that they are unlikely to feel comfortable talking with each other.

Focus group researchers must carefully consider the composition of the groups they put together. In his text on conducting focus groups, Morgan (1997) suggests that “homogeneity in background and not homogeneity in attitudes” (p. 36) should be the goal, since participants must feel comfortable speaking up but must also have enough differences to facilitate a productive discussion.  Whatever composition a researcher designs for her focus groups, the important point to keep in mind is that focus group dynamics are shaped by multiple social contexts (Hollander, 2004). Participants’ silences as well as their speech may be shaped by gender, race, class, sexuality, age, or other background characteristics or social dynamics—all of which might be suppressed or exacerbated depending on the composition of the group. Hollander (2004) suggests that researchers must pay careful attention to group composition, must be attentive to group dynamics during the focus group discussion, and should use multiple methods of data collection in order to “untangle participants’ responses and their relationship to the social contexts of the focus group” (p. 632).

The role of the moderator

In addition to the importance of group composition, focus groups also require skillful moderation. A moderator is the researcher tasked with facilitating the conversation in the focus group. Participants may ask each other follow-up questions, agree or disagree with one another, display body language that tells us something about their feelings about the conversation, or even come up with questions not previously conceived of by the researcher. It is just these sorts of interactions and displays that are of interest to the researcher. A researcher conducting focus groups thus collects data on more than just people's direct responses to her questions, as she would in a one-on-one interview.

The moderator’s job is not to ask questions to each person individually, but to stimulate conversation between participants. It is important to set ground rules for focus groups at the outset of the discussion. Remind participants you’ve invited them to participate because you want to hear from all of them. Therefore, the group should aim to let just one person speak at a time and avoid letting just a couple of participants dominate the conversation. One way to do this is to begin the discussion by asking participants to briefly introduce themselves or to provide a brief response to an opening question. This will help set the tone of having all group members participate. Also, ask participants to avoid having side conversations; thoughts or reactions to what is said in the group are important and should be shared with everyone.

As the focus group gets rolling, the moderator will play a less active role as participants talk to one another. There may be times when the conversation stagnates or when you, as moderator, wish to guide the conversation in another direction. In these instances, it is important to demonstrate that you’ve been paying attention to what participants have said. Being prepared to interject statements or questions such as “I’d really like to hear more about what Sunil and Joe think about what Dominick and Jae have been saying” or “Several of you have mentioned X. What do others think about this?” will be important for keeping the conversation going. It can also help redirect the conversation, shift the focus to participants who have been less active in the group, and serve as a cue to those who may be dominating the conversation that it is time to allow others to speak. Researchers may choose to use multiple moderators to make managing these various tasks easier.

Moderators are often too busy working with participants to take diligent notes during a focus group. It is helpful to have a note-taker who can record participants’ responses (Liamputtong, 2011). The note-taker creates, in essence, the first draft of interpretation for the data in the study. They note themes in responses, nonverbal cues, and other information to be included in the analysis later on. Focus groups are analyzed in a similar way as interviews; however, the interactive dimension between participants adds another element to the analytical process. Researchers must attend to the group dynamics of each focus group, as “verbal and nonverbal expressions, the tactical use of humour, interruptions in interaction, and disagreement between participants” are all data that are vital to include in analysis (Liamputtong, 2011, p. 175). Note-takers record these elements in field notes, which allows moderators to focus on the conversation.

Strengths and weaknesses of focus groups

Focus groups share many of the strengths and weaknesses of one-on-one qualitative interviews. Both methods can yield very detailed, in-depth information; are excellent for studying social processes; and provide researchers with an opportunity not only to hear what participants say but also to observe what they do in terms of their body language. Focus groups offer the added benefit of giving researchers a chance to collect data on human interaction by observing how group participants respond and react to one another. Like one-on-one qualitative interviews, focus groups can also be quite expensive and time-consuming. However, there may be some savings with focus groups as it takes fewer group events than one-on-one interviews to gather data from the same number of people. Another potential drawback of focus groups, which is not a concern for one-on-one interviews, is that one or two participants might dominate the group, silencing other participants. Careful planning and skillful moderation on the part of the researcher are crucial for avoiding, or at least dealing with, such possibilities. The various strengths and weaknesses of focus group research are summarized in Table 9.1.

Grounded Theory

Grounded theory has been widely used since its development in the late 1960s (Glaser & Strauss, 1967). Largely derived from schools of sociology, grounded theory involves immersion of the researcher in the field and in the data. Researchers follow a systematic set of procedures and a simultaneous approach to data collection and analysis. Grounded theory is most often used to generate rich explanations of complex actions, processes, and transitions. The primary mode of data collection is one-on-one participant interviews. Sample sizes tend to range from 20 to 30 individuals, sampled purposively (Padgett, 2016), though samples can be larger or smaller depending on data saturation. Data saturation is the point in the qualitative data collection process when no new information is being discovered. Researchers use a constant comparative approach, analyzing previously collected data during the same time frame in which new data are being collected. This allows researchers to determine when new information is no longer being gleaned from data collection and analysis, signaling that saturation has been reached and the data collection phase can conclude.
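As a rough illustration of how a researcher might track saturation during constant comparison, the sketch below records the set of codes each new interview yields and stops once a chosen number of consecutive interviews add nothing new. The code sets and the two-interview stopping rule are assumptions made for illustration, not a rule taken from Glaser and Strauss or Padgett.

```python
# Hypothetical saturation tracker: declare saturation after `patience`
# consecutive interviews contribute no previously unseen codes.

codes_per_interview = [
    {"trust", "access"},           # interview 1
    {"trust", "stigma"},           # interview 2
    {"access", "stigma", "cost"},  # interview 3
    {"trust", "cost"},             # interview 4 (nothing new)
    {"stigma"},                    # interview 5 (nothing new)
]

def saturation_point(interviews, patience=2):
    """Return the 1-based interview at which saturation is declared, or None."""
    seen, quiet = set(), 0
    for i, codes in enumerate(interviews, start=1):
        new_codes = codes - seen
        seen |= codes
        quiet = 0 if new_codes else quiet + 1
        if quiet >= patience:
            return i
    return None

print(saturation_point(codes_per_interview))  # prints 5 under these assumptions
```

In practice the saturation judgment is qualitative; a tally like this only makes that judgment easier to document.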

Rather than apply or test existing grand theories, or “Big T” theories, grounded theory focuses on “small t” theories (Padgett, 2016). Grand theories, or “Big T” theories, are systems of principles, ideas, and concepts used to predict phenomena. These theories are backed up by facts and tested hypotheses. “Small t” theories are speculative and contingent upon specific contexts. In grounded theory, these “small t” theories are grounded in events and experiences and emerge from the analysis of the data collected.

One notable application of grounded theory produced a “small t” theory of acceptance following cancer diagnoses (Jakobsson, Horvath, & Ahlberg, 2005). Using grounded theory, the researchers interviewed nine patients in western Sweden, stopping data collection and analysis when saturation was reached. They found that action and knowledge, given with respect and continuity, led to confidence, which in turn led to acceptance. This “small t” theory continues to be applied and explored in other contexts.

Case study research

Case study research is an intensive longitudinal study of a phenomenon at one or more research sites for the purpose of deriving detailed, contextualized inferences and understanding the dynamic process underlying a phenomenon of interest. Case research is a unique research design in that it can be used in an interpretive manner to build theories or in a positivist manner to test theories. The previous chapter on case research discusses both techniques in depth and provides illustrative exemplars. Furthermore, the case researcher is a neutral observer (direct observation) in the social setting rather than an active participant (participant observation). As with any other interpretive approach, drawing meaningful inferences from case research depends heavily on the observational skills and integrative abilities of the researcher.

Ethnography

The ethnographic research method, derived largely from the field of anthropology, emphasizes studying a phenomenon within the context of its culture. The researcher must be deeply immersed in the social culture over an extended period of time (usually 8 months to 2 years) and should engage, observe, and record the daily life of the studied culture and its social participants within their natural setting. The primary mode of data collection is participant observation, and data analysis involves a “sense-making” approach. In addition, the researcher must take extensive field notes, and narrate her experience in descriptive detail so that readers may experience the same culture as the researcher. In this method, the researcher has two roles: rely on her unique knowledge and engagement to generate insights (theory), and convince the scientific community of the trans-situational nature of the studied phenomenon.

The classic example of ethnographic research is Jane Goodall's study of primate behaviors, in which she lived with chimpanzees in their natural habitat at Gombe National Park in Tanzania, observed their behaviors, interacted with them, and shared their lives. During that process, she learnt and chronicled how chimpanzees seek food and shelter, how they socialize with each other, their communication patterns, their mating behaviors, and so forth. A more contemporary example of ethnographic research is Myra Bluebond-Langner's (1996) study of decision making in families with children suffering from life-threatening illnesses, and the physical, psychological, environmental, ethical, legal, and cultural issues that influence such decision-making. The researcher followed the experiences of approximately 80 children with incurable illnesses and their families for a period of over two years. Data collection involved participant observation and formal/informal conversations with children, their parents and relatives, and health care providers to document their lived experience.

Phenomenology

Phenomenology is a research method that emphasizes the study of conscious experiences as a way of understanding the reality around us. Phenomenology is concerned with the systematic reflection and analysis of phenomena associated with conscious experiences, such as human judgment, perceptions, and actions, with the goal of (1) appreciating and describing social reality from the diverse subjective perspectives of the participants involved, and (2) understanding the symbolic meanings (“deep structure”) underlying these subjective experiences. Phenomenological inquiry requires that researchers eliminate any prior assumptions and personal biases, empathize with the participant's situation, and tune into existential dimensions of that situation, so that they can fully understand the deep structures that drive the conscious thinking, feeling, and behavior of the studied participants.

Some researchers view phenomenology as a philosophy rather than as a research method. In response to this criticism, Giorgi and Giorgi (2003) developed an existential phenomenological research method to guide studies in this area. This method can be grouped into data collection and data analysis phases. In the data collection phase, participants embedded in a social phenomenon are interviewed to capture their subjective experiences and perspectives regarding the phenomenon under investigation. Examples of questions that may be asked include “Can you describe a typical day?” or “Can you describe that particular incident in more detail?” These interviews are recorded and transcribed for further analysis. During data analysis, the researcher reads the transcripts to (1) get a sense of the whole and (2) establish “units of significance” that can faithfully represent participants' subjective experiences. Examples of such units of significance are concepts such as “felt space” and “felt time,” which are then used to document participants' psychological experiences. For instance, did participants feel safe, free, trapped, or joyous when experiencing a phenomenon (“felt-space”)? Did they feel that their experience was pressured, slow, or discontinuous (“felt-time”)? Phenomenological analysis should take into account the participants' temporal landscape (i.e., their sense of past, present, and future), and the researcher must transpose herself in an imaginary sense into the participant's situation (i.e., temporarily live the participant's life). The participants' lived experience is described in the form of a narrative or using emergent themes. The analysis then delves into these themes to identify multiple layers of meaning while retaining the fragility and ambiguity of subjects' lived experiences.

Key Takeaways

  • In terms of focus group composition, homogeneity of background among participants is recommended, while diverse attitudes within the group are ideal.
  • The goal of a focus group is to get participants to talk with one another rather than with the researcher.
  • Like one-on-one qualitative interviews, focus groups can yield very detailed information, are excellent for studying social processes, and provide researchers with an opportunity to observe participants' body language; they also allow researchers to observe social interaction.
  • Focus groups can be expensive and time-consuming, as are one-on-one interviews; there is also the possibility that a few participants will dominate the group and silence others.
  • Other types of qualitative research include grounded theory, case studies, ethnography, and phenomenology.
  • Data saturation: the point in the qualitative research data collection process when no new information is being discovered.
  • Focus groups: planned discussions designed to elicit group interaction and “obtain perceptions on a defined area of interest in a permissive, nonthreatening environment” (Krueger & Casey, 2000, p. 5).
  • Moderator: the researcher tasked with facilitating the conversation in the focus group.


Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study.

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible. In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following:

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper. You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods. The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

Action research design follows a characteristic cycle. Initially, an exploratory stance is adopted: an understanding of the problem is developed, and plans are made for some form of interventionary strategy. The intervention is then carried out [the "action" in action research], during which time pertinent observations are collected in various forms. New interventional strategies are implemented, and the cyclic process repeats until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What don't these studies tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory or model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single case or small number of cases offers little basis for establishing reliability or for generalizing the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem, then your interpretation of the findings can apply only to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services. Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges. Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J., Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research. Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research. Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Methods. Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable (a brief sketch after this list illustrates such a check).
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable.
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of Winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.
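Of the three conditions above, only empirical association is directly testable with a simple statistic. The sketch below, using invented counts and the chi-square test from SciPy, checks that one condition; it says nothing about time order or nonspuriousness, which is exactly why association alone cannot prove causation.

```python
# Check empirical association only, with invented data.
from scipy.stats import chi2_contingency

# rows: exposed / not exposed; columns: outcome present / outcome absent
observed = [[30, 70],
            [15, 85]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Association found, but time order and nonspuriousness remain unexamined.")
```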

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences but also found in the applied social sciences, a cohort study is conducted over a period of time with members of a population who are united by some commonality or similarity relevant to the research problem. Using a quantitative framework, a cohort study notes statistical occurrence within this specialized subgroup rather than within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population defined simply by being part of the study in question (and being monitored for the outcome). Dates of entry into and exit from the study are individually defined, so the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof (see the sketch after this list).
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter the study at one defining point in time; it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs in which the researcher randomly assigns participants.
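The rate-based data mentioned in the open-cohort item above reduce to a simple calculation: events divided by the person-time contributed by all members of the cohort. The follow-up records below are invented for illustration.

```python
# Incidence rate for an open cohort: events / person-time (invented data).
participants = [
    {"id": "A", "years_followed": 4.0, "event": True},
    {"id": "B", "years_followed": 2.5, "event": False},
    {"id": "C", "years_followed": 6.0, "event": True},
    {"id": "D", "years_followed": 1.5, "event": False},
]

events = sum(p["event"] for p in participants)
person_years = sum(p["years_followed"] for p in participants)

print(f"{events} events over {person_years} person-years")
print(f"incidence rate = {events / person_years:.3f} events per person-year")
```

Because entry and exit dates vary in an open cohort, each participant contributes a different amount of person-time, which is why the denominator is person-years rather than a simple head count.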

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and, groups are selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences between or from among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies are capable of using data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate the prevalence of an outcome of interest because the sample is usually taken from the whole population [a sketch of this estimate follows the list below].
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
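
As a concrete illustration of the prevalence estimate mentioned above, the sketch below computes point prevalence from a single cross-sectional sample, with a normal-approximation confidence interval to convey uncertainty. The counts are hypothetical:

    import math

    cases, sample_size = 120, 1000   # hypothetical survey results
    prevalence = cases / sample_size  # 0.12, i.e., 12%

    # 95% confidence interval via the normal approximation
    se = math.sqrt(prevalence * (1 - prevalence) / sample_size)
    low, high = prevalence - 1.96 * se, prevalence + 1.96 * se
    print(f"prevalence = {prevalence:.3f} (95% CI {low:.3f} to {high:.3f})")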

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Barratt, Helen and Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009; Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.

  • The subject is being observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the observer effect, whereby measurements of certain systems cannot be made without affecting them].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If its limitations are understood, descriptive research can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation [a simple randomization sketch follows the list below].

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.
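
To make the randomization requirement concrete, here is a minimal sketch of random assignment to experimental and control groups; the participant identifiers are hypothetical, and the fixed seed is used only so the assignment can be reproduced:

    import random

    def randomize(participants, seed=42):
        """Shuffle participants, then split them into two groups."""
        rng = random.Random(seed)
        pool = list(participants)
        rng.shuffle(pool)
        midpoint = len(pool) // 2
        return {"experimental": pool[:midpoint], "control": pool[midpoint:]}

    groups = randomize(["P01", "P02", "P03", "P04", "P05", "P06"])

Random assignment, unlike the self-selection seen in cohort and cross-sectional designs, is what allows the experimenter to attribute differences on the dependent variable to the manipulation.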

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp. 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation; it is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

Exploratory research is intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; such studies provide insight but not definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while they inhabit their natural environment, as opposed to using survey instruments or other impersonal methods of data gathering. Information acquired from observational research takes the form of “field notes” that document what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time; a sketch follows the list below].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.
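
A minimal sketch of the change measurement described above, using hypothetical scores for the same three subjects at two waves of data collection:

    # Hypothetical measurements of one variable at two distinct time periods
    wave1 = {"A": 54, "B": 61, "C": 47}
    wave2 = {"A": 58, "B": 59, "C": 55}

    # Within-subject change is the quantity a longitudinal design is built to capture
    change = {subject: wave2[subject] - wave1[subject] for subject in wave1}
    print(change)  # {'A': 4, 'B': -2, 'C': 8}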

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the researcher's ability to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated [a pooling sketch follows the list below]. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to difficult-to-interpret and/or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
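
To illustrate how pooling increases precision, here is a minimal sketch of the standard fixed-effect (inverse-variance) estimator; the effect sizes and standard errors are hypothetical, and a real analysis would also need to assess heterogeneity before trusting a fixed-effect summary:

    import math

    def pooled_effect(effects, std_errors):
        """Fixed-effect meta-analysis: weight each study by the inverse of its variance."""
        weights = [1.0 / se ** 2 for se in std_errors]
        pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, pooled_se

    # Three hypothetical studies reporting standardized mean differences
    estimate, se = pooled_effect([0.30, 0.45, 0.25], [0.10, 0.15, 0.12])

Larger studies (smaller standard errors) receive proportionally more weight, which is why the pooled standard error is smaller than that of any single contributing study.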

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

A mixed-methods design integrates quantitative and qualitative approaches within a single study of the same underlying phenomenon.

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem, as well as in designing a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakkori, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhang, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is a time-consuming task, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, on what does knowledge and understanding depend, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

A sequential design gathers data in a series of discrete stages rather than all at once, with each stage informing the next.

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be made during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis.
  • The sampling method is not representative of the entire population. The only way to approach representativeness is to use a sample large enough to capture a substantial portion of the entire population; in that case, however, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Creswell, John W. et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research . Abbas Tashakkori and Charles Teddlie, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Ivankova, Nataliya V. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses; recognize and avoid hidden problems in prior studies; and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provide reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process prior to publication. Examples may include conference presentations or proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods . David A. Buchanan and Alan Bryman, editors. (Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research." Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.



30 Design Researcher Interview Questions and Answers

Common Design Researcher interview questions, how to answer them, and example answers from a certified career coach.


In the intricate world of design, a Design Researcher plays a critical role in understanding user needs and translating these into innovative solutions. You’ve honed your skills, built an impressive portfolio, and now you’re ready to take on new challenges at your dream job. But before you can start influencing design strategies, you first have to navigate through the interview process.

Interviews for design research positions often go beyond generic questions; they delve deep into your thought processes, creativity, analytical skills, and ability to empathize with users. In this article, we’ll explore common interview questions that you may encounter during your quest for a Design Researcher position. We will also provide insightful tips and sample answers to help you confidently demonstrate your unique approach to design research.

1. Can you describe a project where your research significantly influenced the design direction?

Designers are the architects of user experience, and research plays a key role in ensuring that the design is user-centric. When interviewers ask this question, they are looking to evaluate your ability to effectively translate research findings into actionable design insights. They want to understand how your research methodology, findings, and recommendations have shaped the design process and contributed to the success of a project.

Example: “One significant project was a mobile app redesign for an e-commerce platform. My research involved user interviews, surveys, and usability testing of the existing app.

The findings revealed that users were frustrated with the complex checkout process and lack of personalized recommendations. This insight led to a design overhaul focused on simplifying the checkout process and incorporating AI-driven product suggestions.

This shift in design direction resulted in increased conversions and improved user satisfaction scores post-launch. The success of this project reinforced the value of thorough user research in guiding design decisions.”

2. How do you ensure your research findings are effectively communicated to the design team?

Communication is the key that unlocks the potential of design research. Without effective communication, even the most insightful research findings can become lost or misunderstood. Therefore, hiring managers are eager to understand your communication skills and strategies. They want to ensure that you can not only gather important data but also present it in a way that inspires and informs the design process.

Example: “Effective communication with the design team is crucial. I ensure this by using clear, jargon-free language and visual aids like infographics or flowcharts to present research findings.

I also use collaborative tools for real-time sharing of information and feedback. Regular meetings are scheduled to discuss progress and address any queries.

Lastly, I believe in tailoring my communication style to suit the needs of the design team. This ensures everyone understands the research insights and can apply them effectively in their work.”

3. What is your approach to conducting ethnographic research for design purposes?

Ethnographic research is a critical tool in a design researcher’s toolkit. It provides the insights needed to truly understand user needs, behaviors, and motivations, which in turn, informs the design process. By asking this question, hiring managers are aiming to gauge your familiarity with this method, as well as your ability to practically apply it in a design context.

Example: “In conducting ethnographic research for design purposes, my primary focus is on understanding the user’s context and needs. I start by identifying the target population and defining the scope of the study.

I then engage in participant observation where I immerse myself in their environment to gain a deeper understanding of their behavior, motivations, and challenges. This often involves interviews, surveys, or even shadowing users as they interact with existing designs.

The data collected is analyzed qualitatively to identify patterns and insights that can inform the design process. The goal is to create solutions that are not only functional but also culturally relevant and meaningful to the end-users.”

4. Can you discuss a time when the results of your research were not what you expected? How did you handle it?

This question is rooted in a desire to understand your problem-solving abilities and resilience in the face of unexpected results. Research is not always predictable, especially in design, and the ability to adapt and pivot based on surprising outcomes is a key skill. Employers want to know that you can handle surprises and use them to learn and grow, rather than getting stuck or discouraged.

Example: “In a recent project, I was researching user interaction with a new app interface. The data suggested users were struggling with navigation – contrary to my hypothesis that the design would be intuitive.

I didn’t let this deter me. Instead, I saw it as an opportunity to delve deeper into understanding our users’ needs and behaviors.

I conducted further research through interviews and usability testing. This helped me uncover the root cause of the problem and propose solutions. It was a valuable lesson in not making assumptions about user behavior and always validating ideas with solid research.”

5. How do you incorporate quantitative data into your design research process?

Because design research often involves understanding and interpreting human behavior, it’s important to balance the qualitative (anecdotal, observational) data with quantitative (measurable, statistical) data. Interviewers want to know if you can combine these two types of data to create a comprehensive understanding of user needs and behaviors. This can help inform and validate design decisions, ensuring they are effective and user-centric.

Example: “Incorporating quantitative data into design research is crucial for making informed decisions. I usually start by defining clear, measurable objectives to guide the data collection process.

For instance, if we’re designing a new website layout, we might track metrics like click-through rates or time spent on each page. This provides concrete evidence of user behavior and preferences.

I also use A/B testing to compare different design elements and their impact on user engagement. The results from these tests provide valuable data that can directly influence our design choices.

Quantitative data gives us an objective basis for evaluating design performance and helps in refining designs based on user response. It ensures our design decisions are rooted in fact rather than assumption.”
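
For readers who want to see what the A/B comparison described in this answer might look like in practice, here is a minimal sketch of a two-proportion z-test on hypothetical conversion counts (the figures and variable names are illustrative, not part of the coach's answer):

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical results: conversions / visitors for design variants A and B
    conv_a, n_a = 130, 2000
    conv_b, n_b = 171, 2100

    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

A small p-value here would suggest that the difference in conversion rates between the two variants is unlikely to be due to chance alone.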

6. Describe a project where you had to pivot your research strategy midway. What prompted the change?

This question is posed to assess your adaptability and problem-solving skills. In a field like design research, it’s common for initial plans to evolve as new insights and information emerge. Employers want to know that you can navigate these changes effectively, making necessary adjustments without losing sight of the overarching goals.

Example: “In a recent project, we were designing an app for elderly users. Our initial strategy was to focus on simplicity and ease of use. However, after conducting user interviews, we realized that our target audience valued personalization more than simplicity.

This prompted us to pivot our research towards understanding how personalization could be incorporated without compromising usability. We conducted further interviews and surveys, focusing on preferences and needs related to customization.

The change in strategy led to valuable insights that significantly influenced the final design. It reinforced the importance of continuous user feedback and flexibility in research strategies.”

7. How do you ensure the user’s voice is central to your research and the subsequent design process?

In the realm of design research, user-centricity is the golden rule. It’s not just about creating beautiful and functional products, but about creating solutions that truly meet the needs and wants of the users. By asking this question, hiring managers want to see that you have a user-centric mindset, that you value their input, and that you have strategies in place to ensure their feedback is incorporated effectively into the design process.

Example: “To ensure the user’s voice is central to my research and design process, I employ several strategies.

I start with a thorough understanding of the users through methods like surveys, interviews, and observations. This helps in identifying their needs, preferences, and pain points.

Next, I use these insights to create user personas and journey maps which serve as constant reminders of who we are designing for throughout the project.

I also advocate for regular usability testing at various stages of the design process. This allows us to validate our designs against actual user feedback and make necessary adjustments.

Lastly, I believe in maintaining an open line of communication with the users even post-launch. Their ongoing feedback can provide valuable insights for future improvements.”

8. Can you explain your process for conducting competitive analysis in a design context?

The heart of this question lies in your ability to gather, analyze, and apply information about competitors in a way that benefits your company’s design strategies. A comprehensive competitive analysis allows you to understand the strengths and weaknesses of rival products, anticipate market trends, and provide insights that can help shape the direction of your own design work. It’s a critical skill for a design researcher—hence, why interviewers want to hear about your process.

Example: “In conducting competitive analysis in a design context, I start by identifying key competitors and their products. This involves understanding their strengths, weaknesses, opportunities and threats (SWOT).

Next, I analyze the user interface and user experience of these products, focusing on aspects like usability, functionality, aesthetics, and overall user satisfaction.

Then, I gather data through methods such as surveys, interviews, or focus groups to gain insights into users’ perceptions and experiences with these products.

Finally, I synthesize all this information into a comprehensive report that highlights areas where our product can differentiate and excel. The goal is not just to imitate what others are doing but to understand gaps and opportunities for innovation.”

9. How do you handle bias in your research process to ensure objective results?

As a design researcher, your role is to provide reliable, objective data that can guide the design process and ensure the final product effectively meets user needs. However, bias—both your own and that of your participants—can creep into the research process and skew your results. Hiring managers want to know that you’re aware of these potential pitfalls and have strategies in place to avoid them.

Example: “To handle bias in my research process, I use a few key strategies.

I ensure diversity in sample selection to represent various perspectives. This helps prevent skewed data and results.

Triangulation is another method I employ. By using multiple sources of data, any bias inherent in one source can be mitigated by the others.

Blind testing is also useful. It involves withholding information that might lead to bias from those interpreting the data.

Lastly, peer reviews offer an external check on potential biases. Other researchers can spot biases that may have been overlooked.

These methods help maintain objectivity throughout the research process.”

10. How have you used participatory design methods in your research?

Inviting users into the design process helps ensure the final product is user-friendly and meets their needs. Participatory design methods are a great way to gain insights into user needs, expectations, and behaviors. Therefore, hiring managers want to know if you have experience with these methods and how effectively you can incorporate them into your research.

Example: “In my research, I’ve used participatory design methods to ensure that the end product is user-centric. For example, in a project aimed at redesigning an e-commerce website’s checkout process, we included real users from the very beginning.

We conducted workshops where users could map out their ideal checkout process. We then used these maps as a guide for our initial prototypes. This method allowed us to gain valuable insights into what users wanted and needed, which greatly informed our design decisions.

This approach not only increased user satisfaction with the final design but also saved time and resources by reducing the need for extensive revisions later on.”

11. Discuss an instance where your research findings were at odds with the design team’s ideas. How did you resolve this?

This inquiry is designed to gauge your ability to handle conflict and manage differing perspectives within a team setting. As a design researcher, your role is pivotal in providing insights that guide the design process. However, there might be times when your findings clash with the design team’s vision or ideas. The interviewer wants to understand how you navigate such situations, ensuring that the final output is not compromised and that team harmony is maintained.

Example: “In a recent project, my research suggested that users preferred a more minimalist design approach. However, the design team was keen on incorporating numerous features and visuals. This discrepancy led to some initial disagreements.

To resolve this, I presented my findings in detail, explaining how user experience could be impacted negatively by an overly complex design. We then had a brainstorming session where we discussed ways to incorporate essential features without compromising simplicity.

Ultimately, we found a balance between functionality and aesthetics, resulting in a product that was well-received by users. It was a valuable lesson in effective communication and collaboration.”

12. How do you balance the need for quick results with the need for thorough research in the design process?

Design research involves a constant tug-of-war between the desire for comprehensive data and the constraints of project timelines. Employers want to know that you can navigate this tension and deliver valuable insights without holding up the design process. They’re eager to see if you can strike the right balance, prioritizing the most critical research tasks, while also moving swiftly to keep projects on track.

Example: “Balancing quick results with thorough research in design is a matter of prioritizing and strategizing. Understanding the project’s scope, goals, and timeline helps to identify critical areas requiring immediate attention.

For rapid results, I employ lean methodologies such as iterative prototyping and user feedback sessions. This allows for early detection and rectification of design flaws.

However, for comprehensive research, I ensure deep dives into user behavior, market trends, and competitor analysis. While this takes time, it guarantees informed decisions that enhance product longevity and relevance.

The key is integrating both approaches: using quick methods to guide initial stages while allowing findings from detailed research to refine the final design. It’s about working smart, not just hard.”

13. Can you describe a project where you used a unique or innovative research method to inform design?

In the world of design, being innovative and thinking outside the box is more than just a desirable trait—it’s a necessity. Hiring managers want to know if you can put theory into practice, and more importantly, if you can come up with unique solutions to problems. Demonstrating your ability to use innovative research methods to inform design decisions is a way to showcase your critical thinking skills, creativity, and your ability to drive a project in new and exciting directions.

Example: “In a recent project, I used a method called ‘Cultural Probes’ to gather insights. This involved creating packages with tasks for participants to complete over time, such as taking photos or keeping journals.

This qualitative approach allowed us to gain deep, personal insights into the users’ lives and experiences that we wouldn’t have obtained through traditional surveys or interviews. The data collected was invaluable in informing our design process, leading to more empathetic and user-centered solutions.”

14. How do you approach research for designing inclusive and accessible products?

Inclusivity and accessibility are extremely important in modern product design. A design researcher is responsible for making products that are usable by as many people as possible, regardless of their abilities or backgrounds. By asking this question, hiring managers want to know if you understand the value of inclusive design and if you have strategies for conducting research that will help make products more accessible and inclusive.

Example: “In designing inclusive and accessible products, I start with understanding the diverse needs of users. This involves conducting in-depth user research that includes people from different demographics, abilities, and backgrounds.

I also utilize guidelines such as WCAG for accessibility standards to ensure our designs meet these criteria. Collaboration is key; working closely with developers, UX writers, and other stakeholders helps us address potential issues early on.

Finally, usability testing is crucial. It allows us to validate our design decisions, ensuring they work well for all users. Inclusivity and accessibility are not afterthoughts, but integral parts of my design process.”

15. What strategies do you use to recruit participants for your research studies?

Recruiting participants for research studies is often a key aspect of a design researcher’s role, and the strategies you use can greatly impact the quality and relevance of your findings. Interviewers want to ensure you’re adept at identifying and reaching out to potential participants in a way that’s effective, ethical, and aligned with the study’s objectives.

Example: “To recruit participants for research studies, I employ a multi-pronged approach.

I use social media platforms and online forums related to the study’s subject matter as they are effective tools in reaching out to potential participants. I also leverage existing networks of contacts within relevant industries or communities.

Incentives can be an effective strategy too; these could range from financial compensation to exclusive access to the results of the study.

However, it’s crucial to ensure that the recruitment process is ethical and unbiased. Therefore, I always strive to maintain transparency about the purpose of the study and what participation involves.”

16. Describe a situation where you had to defend your research findings to stakeholders. How did you handle it?

Stakeholders might not always agree with your findings, and in some cases, they might have their own set of data that they believe contradicts yours. Because of this, they might need you to explain your methodology, findings, and reasoning. Hiring managers want to know you can handle these situations professionally, tactfully, and effectively, ensuring your research is understood and applied correctly.

Example: “In one project, our research suggested a new user interface design that was contrary to the stakeholders’ initial preference. They were skeptical about the change.

I handled it by presenting the data and methodology we used in reaching our conclusion. I explained how this approach would improve user experience based on behavioral patterns observed during testing.

To address their concerns further, I proposed A/B testing for both designs. This allowed us to gather real-time feedback from users, which ultimately validated our recommendation. It demonstrated my commitment to evidence-based decision making and reassured them of the robustness of our research process.”

17. Can you discuss a project where you used data visualization to communicate your research findings?

Being able to communicate complex data in a visually appealing and easy-to-understand manner is a key skill for design researchers. This question is asked to gain insight into your ability to translate research into actionable insights that can be easily digested by a wide range of stakeholders, from designers and developers to executives. Your answer will reveal your proficiency in data visualization tools, your creative thinking, and your ability to make data-driven decisions.

Example: “One project involved analyzing customer feedback for a retail company. I used sentiment analysis to sort comments into positive, negative, and neutral categories.

I then created an interactive dashboard using Tableau that displayed these sentiments over time. This allowed stakeholders to easily identify trends and patterns in customer satisfaction.

The visualization was instrumental in driving decisions about product improvements and customer service strategies. It helped the team understand complex data sets and make informed decisions based on customers’ needs and preferences.”
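An answer like this lands even better if you can speak to the mechanics of the categorization step. Below is a minimal Python sketch, assuming NLTK's VADER analyzer; the feedback strings are invented and the 0.05 cutoffs are the conventional defaults for VADER's compound score. (The dashboard itself is a separate, visual step and is not shown.)

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # fetch the VADER lexicon once
sia = SentimentIntensityAnalyzer()

def categorize(comment):
    # VADER's compound score runs from -1 (most negative) to +1 (most positive)
    score = sia.polarity_scores(comment)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

feedback = [
    "Checkout was fast and painless.",
    "The returns process is confusing and slow.",
    "Order arrived on Tuesday.",
]
for comment in feedback:
    print(categorize(comment), "-", comment)

The labeled comments, aggregated by week or month, are what a dashboard tool would then plot over time.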

18. How do you ensure your research methods are ethical, especially when dealing with sensitive user data?

Hiring managers want to know that you understand the importance of ethics in research. As a Design Researcher, you will often handle sensitive user data, and it’s critical that this information is treated with the utmost respect and confidentiality. The way you gather, use, and store data should adhere to the highest ethical standards to maintain users’ trust and comply with regulations.

Example: “Ensuring ethical research methods, particularly when dealing with sensitive user data, requires a multi-faceted approach.

One key aspect is obtaining informed consent from users before collecting or using their data. This involves clearly explaining the purpose of the research, what data will be collected, how it will be used and stored, and any potential risks involved.

Another crucial element is maintaining confidentiality and privacy. This can be achieved by anonymizing data, storing it securely, and only sharing it on a need-to-know basis.

Finally, I believe in conducting regular audits to ensure compliance with these practices and staying updated on evolving ethical standards and regulations related to data handling and user research.”
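To make the anonymization point concrete, here is a minimal Python sketch of one common approach: replacing direct identifiers with salted, non-reversible pseudonyms before data is stored or shared. The field names and records are hypothetical, and a real project would also need secure salt management, access controls, and a retention policy.

import hashlib
import hmac
import secrets

SALT = secrets.token_bytes(16)  # store separately from the data; never publish

def pseudonymize(identifier):
    # A salted HMAC gives a stable pseudonym that cannot be reversed to the email
    digest = hmac.new(SALT, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

records = [
    {"email": "p1@example.com", "response": "Found the checkout confusing."},
    {"email": "p2@example.com", "response": "Liked the new progress bar."},
]
anonymized = [
    {"participant_id": pseudonymize(r["email"]), "response": r["response"]}
    for r in records
]
print(anonymized)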

19. How do you approach the challenge of translating abstract research data into concrete design solutions?

The crux of a design researcher’s role is to utilize their research findings to implement effective, user-friendly design solutions. Interviewers ask this question to understand your ability to interpret and apply research data, demonstrating your analytical skills, creativity, and practical problem-solving abilities. They want to see that you can bridge the gap between data and design, bringing valuable insights to life in a tangible way.

Example: “Translating abstract research data into design solutions requires a systematic approach. I start by thoroughly understanding the data, identifying key insights and trends.

Next, I map these findings to user needs and business goals, ensuring alignment. This often involves creating personas or journey maps to visualize the information.

Then, through brainstorming sessions and iterative prototyping, I translate these insights into tangible design concepts. These are tested and refined based on feedback until they effectively address the identified needs.

Throughout this process, clear communication with stakeholders is crucial for aligning expectations and facilitating effective decision-making.”

20. What is your experience with remote user testing and research?

In today’s digital-first landscape, the ability to conduct user testing and research remotely is a critical skill for a design researcher. This question is asked to gauge your experience and comfort level with various remote research methodologies, tools, and platforms. It also helps determine whether you can adapt to a remote work environment, which is becoming increasingly prevalent in many industries.

Example: “I have extensive experience with remote user testing and research. In my past projects, I’ve used tools like UserZoom and Lookback to conduct usability tests, interviews, and surveys.

One key aspect of remote research is clear communication. To ensure this, I always prepare a detailed test plan and script. This helps in setting expectations for participants and minimizing potential confusion during the session.

Another important element is being adaptable. Remote sessions can be unpredictable due to technical issues or participant availability. Therefore, having contingency plans and being flexible with scheduling are crucial.

Through these experiences, I’ve learned how to effectively gather and analyze data remotely, which has been invaluable in informing design decisions.”

21. Can you share an example of a project where your research led to a significant design breakthrough?

This question is rooted in understanding your ability to transform research findings into actionable design insights. It’s all about seeing how you apply your knowledge and skills in the real world. They want to know if you can take complex information and distill it into clear, usable design principles that lead to tangible improvements in the user experience. Your answer will demonstrate your analytical skills, creativity, and capacity to contribute to innovative design solutions.

Example: “In a recent project, we were tasked with redesigning an e-commerce website to increase user engagement. My research involved studying user behavior patterns and conducting surveys.

The data revealed that users found the checkout process too complicated, leading to cart abandonment. I proposed simplifying the process by reducing the number of steps and introducing a progress bar for better visibility.

Post-implementation, there was a 30% decrease in cart abandonment rates and a significant boost in conversions. This example underscores how research can directly impact design decisions and improve user experience.”

22. How do you manage and prioritize multiple research projects with overlapping deadlines?

Design research is a field where juggling multiple projects at once is often the norm rather than the exception. Hiring managers want to see that you can handle this kind of environment without missing deadlines or sacrificing the quality of your work. Demonstrating your project management skills in your answer can reassure them that you’re up to the task.

Example: “Managing and prioritizing multiple research projects requires a strategic approach. I use project management tools to track tasks, deadlines, and progress. This allows me to visualize the workload and allocate resources effectively.

Prioritization is crucial in managing overlapping deadlines. I prioritize based on urgency, importance, and the project’s impact. Regular communication with stakeholders helps align expectations and keep everyone informed about the project status.

In addition, flexibility is key. Unexpected issues can arise, so it’s essential to be adaptable and ready to adjust plans when necessary. By maintaining an organized workflow and staying proactive, I ensure all projects are completed efficiently without compromising quality.”

23. What is your process for developing user personas based on your research?

User personas are a critical tool in design research for understanding and empathizing with the end users. Therefore, hiring managers want to see if you have a systematic approach to creating these personas. They’re interested in knowing if you can identify user needs, behaviors, and motivations, and translate those insights into actionable tools for the design team. This question also reveals your ability to communicate complex user data in understandable, relatable ways.

Example: “Creating user personas starts with comprehensive research. I gather qualitative and quantitative data through methods like surveys, interviews, and observation to understand user behavior, needs, and motivations.

From the collected data, I identify common patterns and group similar behaviors to form a rough persona grouping. Each group represents a unique user type that shares common characteristics.

Then, I refine these groups into well-defined personas by adding specific details such as demographic information, goals, pain points, etc., making them relatable and realistic.

Finally, I validate these personas with real users to ensure their accuracy and make any necessary adjustments.

This process is iterative and requires regular updates as we learn more about our users or when business objectives change.”

24. How have you used A/B testing in your research to inform design decisions?

A/B testing is a valuable tool in a design researcher’s arsenal. It’s a way to compare two variants of a design to see which performs better. By asking this question, hiring managers are looking for evidence of your ability to use A/B testing effectively. This includes setting up the test, interpreting the results, and applying those insights to improve the design. They also want to ensure you can make data-informed decisions that lead to a better user experience.

Example: “In my research, A/B testing has been a critical tool for making data-driven design decisions. For instance, in one project aimed at improving user engagement, I created two versions of a landing page with different layouts.

After collecting and analyzing the data, it was clear that one layout outperformed the other significantly in terms of click-through rates and time spent on the page. This informed our decision to adopt that particular layout across the site.

A/B testing not only helps validate design choices but also provides insights into user behavior and preferences, which are invaluable when creating user-centric designs.”
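When you talk about “collecting and analyzing the data” in an A/B test, be ready to name the actual test. Below is a minimal Python sketch of a two-proportion z-test on click-through rates, assuming statsmodels; the counts are invented for illustration.

from statsmodels.stats.proportion import proportions_ztest

clicks = [430, 512]      # conversions observed for layouts A and B (invented)
visitors = [5000, 5000]  # visitors randomly assigned to each layout

z_stat, p_value = proportions_ztest(count=clicks, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Click-through rates differ significantly; adopt the winner.")
else:
    print("No significant difference; keep collecting data or iterate.")

A real test would also fix the sample size and significance level before launch, to avoid the bias that comes from peeking at results early.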

25. Can you discuss a time when you had to adjust your research methods due to budget constraints?

The real-world research process is often constrained by budget, timeline, or other resources. Employers want to know that you can still deliver valuable insights even when you can’t use the most ideal methods. It’s about demonstrating your flexibility, creativity, and problem-solving skills in finding alternative ways to conduct research.

Example: “In one project, we were researching user interaction with a new app. Initially, we planned to conduct extensive in-person usability testing. However, due to budget cuts, we had to rethink our approach.

We decided to leverage digital tools for remote usability testing which was more cost-effective. We also shifted from individual interviews to focus groups to gather more data at once.

Despite the constraints, the adjustments led to rich insights and ultimately contributed to a successful product launch. It taught me that creativity can often lead to even better outcomes when faced with limitations.”

26. How do you stay updated with the latest design research methods and tools?

This question is a way for potential employers to gauge your commitment to ongoing professional development and your ability to stay current with evolving industry trends. The world of design research is dynamic and rapidly changing, and employers want to ensure you’re someone who takes the initiative to keep their skills and knowledge fresh. This not only benefits you as an individual but also contributes to the overall competitiveness and success of the team and organization.

Example: “I stay updated with the latest design research methods and tools by regularly attending industry-specific webinars, workshops, and conferences. I also subscribe to several professional newsletters and journals such as UX Design Weekly and the Journal of Design Research.

Moreover, I actively participate in online communities like Behance and Dribbble where designers share their work and discuss new trends and techniques.

Lastly, I take advantage of online learning platforms like Coursera and Udemy to enhance my skills and knowledge about emerging tools and methodologies in design research.”

27. Can you describe a project where you had to collaborate with cross-functional teams during the research process?

Collaboration is a key skill for a design researcher, as the role often involves working with teams from different departments such as product, marketing, and user experience. The interviewer wants to understand how you navigate these relationships and facilitate a collaborative atmosphere. Your answer will reveal your ability to communicate, cooperate, and integrate insights from various perspectives, which is crucial for a holistic design approach.

Example: “In a recent project, I was part of a team developing an innovative kitchen appliance. The goal was to create a user-friendly product that met market demands and regulatory requirements.

My role involved collaborating with the engineering, marketing, and legal teams. With engineers, we conducted usability tests to ensure design functionality. We worked with marketing to understand consumer needs and trends. Legal collaboration ensured our design complied with safety regulations.

This cross-functional collaboration was key in creating a successful product that not only met user needs but also adhered to market and legal standards. It demonstrated the importance of diverse perspectives in the research process.”

28. How do you handle negative feedback or criticism of your research findings?

As a design researcher, you’re in a field that thrives on feedback and iteration. Negative feedback or criticism is not a sign of failure, but an opportunity to refine and improve your work. Interviewers want to see if you can accept criticism positively and constructively, and use it as a catalyst for growth and improvement. They are interested in how you respond to challenges and setbacks, as well as your ability to work collaboratively with others who may not always agree with your findings.

Example: “Negative feedback or criticism is a crucial part of the research process. I view it as an opportunity to improve and refine my work. When faced with such situations, I ensure that I understand the critique fully by asking for clarification if needed.

I then objectively analyze the feedback against my findings, considering its validity and how it can enhance the results. If the criticism is valid, I revise my approach accordingly. However, if I believe in my methodology, I am prepared to defend it respectfully while providing supporting evidence.

Remember, research is about learning and growth, and constructive criticism plays a vital role in this journey.”

29. Can you discuss a project where you had to use research to design for a complex user journey?

This question is designed to test your practical application of research principles in the design process. Designing for complex user journeys requires an in-depth understanding of user needs, pain points, and behavior. By asking for a specific example, hiring managers can assess your ability to conduct effective research, synthesize findings, and apply these insights to create a thoughtful and effective design solution.

Example: “One project that stands out is when I was tasked with designing a digital platform for an international non-profit organization. The user journey was complex due to the diverse audience which included donors, volunteers, and beneficiaries across different countries and cultures.

I started by conducting extensive research using methods like surveys, interviews, and usability testing to understand each user group’s needs, motivations, and pain points. This helped me create detailed personas and map out their unique journeys.

The design process involved creating wireframes and prototypes that were iteratively tested and refined based on user feedback. The end result was a user-centric platform that effectively catered to the distinct needs of each user group, improving engagement and satisfaction rates significantly.”

30. How do you measure the success or impact of your research on the final design?

The key to effective design research is not just conducting the research, but also ensuring it informs the design process and leads to a successful final product. This question is asked to evaluate if you can effectively translate data into actionable design insights, and if you understand how to measure the impact of your research on the final product. It helps interviewers assess your analytical skills and your ability to think critically about the design process.

Example: “Measuring the success of research in design involves both qualitative and quantitative methods. User feedback is crucial; it provides insights into how well the design meets user needs and expectations.

Quantitative data such as usage statistics, conversion rates, and time spent on specific tasks can also indicate the effectiveness of a design.

A successful design not only meets its functional objectives but also enhances the overall user experience. If users find value in the product and continue to use it over time, that’s a strong indicator of impactful research.”

Design Research Methods: In-Depth Interviews

In our new three-part blog series, we introduce our favourite qualitative research methods and strategies that you can immediately start applying to your human-centered design projects.

We cover the following design research methods:

In-Depth Interviews (in this post)

Contextual Observations , and

Diary Studies

Do you want to conduct better interviews? 

We help you navigate in-depth interviews for your users and customers. We’ll explore how to plan and execute a stellar interview, and we’ll outline our Top 7 Tips for In-Depth Interviewers.

What are in-depth interviews?

In-depth interviews are one of the most common qualitative research methods used in design thinking and human-centered design processes. They allow you to gather a lot of information at once, with relative logistical ease. Interviews have roots in ethnographic research, where researchers study participants in their real-life environment, and they are most effective when conducted in a one-on-one setting.

How and when can you use interviews?

In-depth interviews are best deployed during the Discovery phase of the Human-Centered Design (HCD) process . They are an opportunity to explore and a chance to uncover your user’s needs and challenges. Do you want to find out where they are struggling the most with your service? Now is the time to ask.


Logistics for In-Depth Interviews

Here are our top tips for planning out the logistics for your interviews:

Recruiting: Properly recruiting for interviews is a crucial step, and it can sometimes be the most challenging part of the process. Recruitment can be handled by the client, in-house, or by an external recruiting firm. As a first step, you’ll identify the demographics and characteristics of your different user groups (e.g., gender, age, occupation), and then you’ll ideally find 4-6 interview participants who match your recruiting criteria.

Scheduling: Outwitly uses a scheduling tool called Calendly to schedule all of our interviews. This handy platform syncs directly with our internal calendars, and it will even hook up to our web-conferencing tool to send call information directly to the participant.

Format: Interviews can be conducted in-person or remotely over the phone, or a combination of the two. An advantage to conducting in-person interviews is that they allow for easier rapport-building, and you’re able to more fully understand the context of how your participant may interact with the product, service, or organization, as well as a holistic picture of their lives. The advantage to remote interviews is that they are easier to schedule and recruit for, and they can really be conducted from anywhere with a cell signal or a WiFi connection. Ideally, you are able to do a mix of both interview types, or you’re able to use remote interviewing in conjunction with another research method, like observations.

Duration: The sweet spot for in-depth interview length is 45–90 minutes. This depends on how many research themes and questions you have, and of course, your participant’s schedule. Anything over 90 minutes can be very draining for both you and the participant.

Note-Taking: When possible (and with the participant’s consent), it’s best to audio record interviews. This way you are not scrambling to keep up with your hand-written notes, and you are able to fully engage with the participant and listen closely. At Outwitly, we use dedicated audio recorders, but the iPhone Voice Record Pro app is also an option for in-person interviews. For remote interviewing, you might opt to use call recording software; we like to use the built-in recording feature of GoToMeeting, which is our preferred web-conference platform. Once audio recordings have been collected, we typically get the recordings transcribed using services like Rev.com. This saves a lot of time during the data analysis phase.

Interview Protocol: Before running a set of interviews, it’s important to prepare an ‘interview protocol.’ A protocol is the combination of two things:

1) An introductory script about the research and what the participant can expect from the interview. This is also the time to ask for consent to record and to assure participants that their names and everything they say will be kept confidential.

2) The list of interview questions and sub-questions (probes) you plan to ask, grouped by research theme.

7 Tips for In-Depth Interviewers

Interviewing is an art form, and it requires a high level of emotional intelligence. You need to be in tune with how comfortable your interviewee/research participant feels, and enable them to open up to you–a complete stranger–about their challenges. Research can sometimes involve particularly sensitive subjects like weight management, divorce, personal finances, and more, so rapport-building (Tip #4) is especially crucial for successful interviewing. Here are our Top 7 best practices for interviewers.

Active Listening: The best skill an interviewer can foster is their listening ability. In a strong interview, the interviewer is not interrupting, bringing up their own anecdotes, or asking too many questions. While some of these “what-not-to-do’s” can actually be helpful to make the participant feel comfortable, too many can derail the interview and also lead the participant to certain answers (as discussed in Tip #3). The interview should flow naturally, and you should mostly allow for the participant to lead the conversation. You’ll want to be listening to them, and when appropriate, repeating key points back to them to reiterate that you are actively listening. Asking a question like “I heard you say your biggest challenges are XYZ. Is there anything else?” shows the participants that you are interested in what they are saying, and it encourages them to keep sharing.

Probing: ‘Probing’ in the context of in-depth interviews refers to diving deeper on a particular response or topic. Typically, you will have prepared your interview protocol with a list of questions and sub-questions–the latter are your probing questions. For example, you might begin with an open-ended, general question, and as your participant replies, you might ask subsequent questions that encourage them to keep digging into the subject. A good interviewer also knows when to continue probing on a subject–and when to move on.

Non-Leading: Learn not to ask leading questions. A leading question is one in which you are making an assumption in the way your question is phrased. This can influence how your participant answers the question. For example, if you ask a participant “What challenges do you have with XYZ?”, you are assuming there are challenges, which may skew the participant’s response. They may not have any challenges to begin with, but they might report challenges anyway to fit the question. A better way to ask that question would be: “What challenges, if any, have you had with XYZ?” When prepping the interview protocol, be careful not to draft leading questions. And if you go off-script in the heat of the moment, you’ll need to think about how you’re phrasing your questions.

Building Rapport: Learning to build rapport is one of the most important skills to cultivate as an interviewer. When your participants feel comfortable, they are much more likely to open up to you. Remember to always be friendly and courteous in your communication prior to conducting the interview (e.g. in emails you send regarding scheduling). In the interview, use a tone of voice that is soft, inquisitive, and understanding. Introduce yourself as the researcher and explain the research to the participant. Emphasize that you are there to learn about them, and to understand their needs and how the product, service, or organization they are interacting with could be improved to suit them. During the interview, if you hear in their tone of voice that something in their experience was very frustrating, acknowledge that by saying “It sounds like that was very frustrating” or “I understand” to let them know that you are on their side. Also, reassure them throughout the interview that their feedback is very useful and helpful by saying things like “Thank you – that’s very interesting,” or “I’ve heard that before from others, you are not the only one!”

Agility & Go-with-the-Flow Attitude: You can prepare, rehearse, and write your interview protocol, but in every interview you will have to be agile. For example, if you’ve separated your interview questions into sections, and the participant naturally starts talking about a topic that you have written down for a later portion of the interview, you should freely move down to those questions and jump back to where you were afterwards. This way, the interview will feel more organic and conversational, and less robotic. Flexibility is also critical because some participants just do not have a lot to say. In these cases, you’ll be required to think of more “off the cuff” questions, or you’ll need to reconsider whether the interview is still a valuable use of your time and theirs. Knowing when to cut an interview short is also an important skill. For the most part, let the participant lead the conversation, feel comfortable jumping around a little in your protocol, and listen to them to know what other questions you could ask that might not be in the protocol. Also, know when to skip a question if you’ve already gotten a response elsewhere in the interview.

Facilitate & Guide: Sometimes interviews will be easy and they’ll naturally follow the flow of your interview protocol. And sometimes they’ll be more challenging, especially if an interviewee is particularly passionate about one topic. In this case, you’ll need to guide your participants as much as possible, so that you can move through more of your questions. This is a delicate balance of listening, finding a time to cut in, and using transitional phrases like “That’s very helpful. I’m mindful of the time, and I would like to ask you some questions about XYZ.”

Comfort with Discomfort: It can sometimes be difficult for participants to answer a question quickly in an interview. They might need to think about their answer before responding. Or they may be able to answer quickly, but there might be things in the back of their mind related to the question that take a minute to recall. It’s important to allow interviewees that space to think about the question. From a human perspective, leaving open silence can feel awkward, but it’s important to create space for the participant to remember anything else that might be important. So while you might be sitting there thinking “wow, this is awkward,” they are actually just thinking about their answer. On the flip side, you also don’t want to leave too much space if there is nothing else to add; this can in turn make participants feel insecure that they have not said enough. Perfecting this skill comes with a lot of experience, so for now, try counting to 10, or mention that you need a few seconds to catch up on your note-taking; this gives them space to think longer without feeling too much time pressure. Of course, if nothing more comes up, feel free to move on.


Next in our Research Methods blog series, we walk you through best practices for conducting observations and shadowing as part of your research and design process.

Resources we like…

Calendly for Scheduling

GoToMeeting for Remote Interviewing

iPhone Voice Record Pro app for Audio Recording

Rev for Audio Transcription


Trends and Motivations in Critical Quantitative Educational Research: A Multimethod Examination Across Higher Education Scholarship and Author Perspectives

  • Christa E. Winkler (ORCID: orcid.org/0000-0002-1700-5444)
  • Annie M. Wofford (ORCID: orcid.org/0000-0002-2246-1946)

Open access | Published: 04 June 2024
To challenge “objective” conventions in quantitative methodology, higher education scholars have increasingly employed critical lenses (e.g., quantitative criticalism, QuantCrit). Yet, specific approaches remain opaque. We use a multimethod design to examine researchers’ use of critical approaches and explore how authors discussed embedding strategies to disrupt dominant quantitative thinking. We draw data from a systematic scoping review of critical quantitative higher education research between 2007 and 2021 ( N  = 34) and semi-structured interviews with 18 manuscript authors. Findings illuminate (in)consistencies across scholars’ incorporation of critical approaches, including within study motivations, theoretical framing, and methodological choices. Additionally, interview data reveal complex layers to authors’ decision-making processes, indicating that decisions about embracing critical quantitative approaches must be asset-based and intentional. Lastly, we discuss findings in the context of their guiding frameworks (e.g., quantitative criticalism, QuantCrit) and offer implications for employing and conducting research about critical quantitative research.


Across the field of higher education and within many roles—including policymakers, researchers, and administrators—key leaders and educational partners have historically relied on quantitative methods to inform system-level and student-level changes to policy and practice. This reliance is rooted, in part, in the misconception that quantitative methods depict the objective state of affairs in higher education. This perception is not only inaccurate but also dangerous, as the numbers produced from quantitative methods are “neither objective nor color-blind” (Gillborn et al., 2018, p. 159). In fact, like all research, quantitative data collection and analysis are informed by theories and beliefs that are susceptible to bias. Further, such bias may come in multiple forms, such as researcher bias and bias within the statistical methods themselves (e.g., Bierema et al., 2021; Torgerson & Torgerson, 2003). Thus, if left unexamined from a critical perspective, quantitative research may inform policies and practices that fuel the engine of cultural and social reproduction in higher education (e.g., Bourdieu, 1977).

Largely, critical approaches to higher education research have been dominated by qualitative methods (McCoy & Rodricks, 2015 ). While qualitative approaches are vital, some have argued that a wider conceptualization of critical inquiry may propel our understanding of processes in higher education (Stage & Wells, 2014 ) and that critical research need not be explicitly qualitative (refer to Sablan, 2019 ; Stage, 2007 ). If scholars hope to embrace multiple ways of challenging persistent inequities and structures of oppression in higher education, such as racism, advancing critical quantitative work can help higher education researchers “expose and challenge hidden assumptions that frequently encode racist perspectives beneath the façade of supposed quantitative objectivity” (Gillborn et al., 2018 , p. 158).

Across professional networks in higher education, the perspectives of association leaders (e.g., Association for the Study of Higher Education [ASHE]) have often placed qualitative and quantitative research in opposition to each other, with qualitative research being a primary way to amplify the voices of systemically minoritized students, faculty, and staff (Kimball & Friedensen, 2019 ). Yet, given the vast growth of critical higher education research (e.g., Byrd, 2019 ; Espino, 2012 ; Martínez-Alemán et al., 2015 ), recent ASHE presidents have recognized how prior leaders planted transformative seeds of critical theory and praxis (Renn, 2020 ) and advocated for critical higher education scholarship as a disrupter (Stewart, 2022 ). With this shift in discourse, many members of the higher education research community have also grown their desire to expand upon the legacy of critical research—in both qualitative and quantitative forms.

Critical quantitative approaches hold promise as one avenue for meeting recent calls to embrace equity-mindedness and transform the future of higher education research, yet current structures of training and resources for quantitative methods lack guidance on engaging such approaches. For higher education scholars to advance critical inquiry via quantitative methods, we must first understand the extent to which such approaches have been adopted. Accordingly, this study sheds light on critical quantitative approaches used in higher education literature and provides storied insights from the experiences of scholars who have engaged critical perspectives with quantitative methods. We were guided by the following research questions:

1. To what extent do higher education scholars incorporate critical perspectives into quantitative research?

2. How do higher education scholars discuss specific strategies to leverage critical perspectives in quantitative research?

Contextualizing Existing Critical Approaches to Quantitative Research

To foreground our analysis of literature employing critical quantitative lenses to studies about higher education, we first must understand the roots of such framing. Broadly, the foundations of critical quantitative approaches align with many elements of equity-mindedness. Equity-mindedness prompts individuals to question divergent patterns in educational outcomes, recognize that racism is embedded in everyday practices, and invest in un/learning the effects of racial identity and racialized expectations (Bensimon, 2018 ). Yet, researchers’ commitments to critical quantitative approaches stand out as a unique thread in the larger fabric of opportunities to embrace equity-mindedness in higher education research. Below, we discuss three significant publications that have been widely applied as frameworks to engage critical quantitative approaches in higher education. While these publications are not the only ones associated with critical inquiry in quantitative research, their evolution, commonalities, and distinctions offer a robust background of epistemological development in this area of scholarship.

Quantitative Criticalism (Stage, 2007)

Although some higher education scholars have applied critical perspectives in their research for many years, Stage’s ( 2007 ) introduction of quantitative criticalism was a salient contribution to creating greater discourse related to such perspectives. Quantitative criticalism, as a coined paradigmatic approach for engaging critical questions using quantitative data, was among the first of several crucial publications on this topic in a 2007 edition of New Directions for Institutional Research . Collectively, this special issue advanced perspectives on how higher education scholars may challenge traditional positivist and post-positivist paradigms in quantitative inquiry. Instead, researchers could apply (what Stage referred to as) quantitative criticalism to develop research questions centering on social inequities in educational processes and outcomes as well as challenge widely accepted models, measures, and analytic practices.

Notably, Stage ( 2007 ) grounded the motivation for this new paradigmatic approach in the core concepts of critical inquiry (e.g., Kincheloe & McLaren, 1994 ). Tracing critical inquiry back to the German Frankfurt school, Stage discussed how the principles of critical theory have evolved over time and highlighted Kincheloe and McLaren’s ( 1994 ) definition of critical theory as most relevant to the principles of quantitative criticalism. Kincheloe and McLaren’s definition of critical describes how researchers applying critical paradigms in their scholarship center concepts such as socially and historically created power structures, subjectivity, privilege and oppression, and the reproduction of oppression in traditional research approaches. Perhaps most importantly, Kincheloe and McLaren urge scholars to be self-conscious in their decision making—a tall ask of quantitative scholars operating from positivist and post-positivist vantage points.

In advancing quantitative criticalism, Stage ( 2007 ) first argued that all critical scholars must center their outcomes on equity. To enact this core focus on equity in quantitative criticalism, Stage outlined two tasks for researchers. First, critical quantitative researchers must “use data to represent educational processes and outcomes on a large scale to reveal inequities and to identify social or institutional perpetuation of systematic inequities in such processes and outcomes” (p. 10). Second, Stage advocated for critical quantitative researchers to “question the models, measures, and analytic practices of quantitative research in order to offer competing models, measures, and analytic practices that better describe experiences of those who have not been adequately represented” (p. 10). Stage’s arguments and invitations for criticalism spurred crucial conversations, many of which led to the development of a two-part series on critical quantitative approaches in New Directions for Institutional Research (Stage & Wells, 2014 ; Wells & Stage, 2015 ). With nearly a decade of new perspectives to offer, manuscripts within these subsequent special issues expanded the concepts of quantitative criticalism. Specifically, these new contributions advanced the notion that quantitative criticalism should include all parts of the research process—instead of maintaining a focus on paradigm and research questions alone—and made inroads when it came to challenging the (default, dominant) process of quantitative research. While many scholars offered noteworthy perspectives in these special issues (Stage & Wells, 2014 ; Wells & Stage, 2015 ), we now turn to one specific article within these special issues that offered a conceptual model for critical quantitative inquiry.

Critical Quantitative Inquiry (Rios-Aguilar, 2014)

Building from and guided by the work of other criticalists (namely, Estela Bensimon, Sara Goldrick-Rab, Frances Stage, and Erin Leahey), Rios-Aguilar ( 2014 ) developed a complementary framework representing the process and application of critical quantitative inquiry in higher education scholarship. At the heart of Rios-Aguilar’s conceptualization lies the acknowledgment that quantitative research is a human activity that requires careful decisions. With this foundation comes the pressing need for quantitative scholars to engage in self-reflection and transparency about the processes and outcomes of their methodological choices—actions that could potentially disrupt traditional notions and deficit assumptions that maintain systems of oppression in higher education.

Rios-Aguilar ( 2014 ) offered greater specificity to build upon many principles from other criticalists. For one, methodologically, Rios-Aguilar challenged the notion of using “fancy” statistical methods just for the sake of applying advanced methods. Instead, she argued that critical quantitative scholars should engage “in a self-reflection of the actual research practices and statistical approaches (i.e., choice of centering approach, type of model estimated, number of control variables, etc.) they use and the various influences that affect those practices” (Rios-Aguilar, 2014 , p. 98). In this purview, scholars should ensure that all methodological choices advance their ability to reveal inequities; such choices may include those that challenge the use of reference groups in coding, the interpretation of statistics in ways that move beyond p -values for statistical significance, or the application and alignment of theoretical and conceptual frameworks that focus on the assets of systemically minoritized students. Rios-Aguilar also noted, in agreement with the foundations of equity-mindedness and critical theory, that quantitative criticalists have an obligation to translate findings into tangible changes in policy and practice that can redress inequities.

Ultimately, Rios-Aguilar’s ( 2014 ) framework focused on “the interplay between research questions, theory, method/research practices, and policy/advocacy” to identify how quantitative criticalists’ scholarship can be “relevant and meaningful” (p. 96). Specifically, Rios-Aguilar called upon quantitative criticalists to ask research questions that center on equity and power, engage in self-reflection about their data sources, analyses, and disaggregation techniques, attend to interpretation with practical/policy-related significance, and expand beyond field-level silos in theory and implications. Without challenging dominant approaches in quantitative higher education research, Rios-Aguilar noted that the field will continue to inaccurately capture the experiences of systemically minoritized students. In college access and success, for example, ignoring this need for evolving approaches and models would continue what Bensimon ( 2007 ) referred to as the Tintonian Dynasty, with scholars widely applying and citing Tinto’s work but failing to acknowledge the unique experiences of systemically minoritized students. These and other concrete recommendations have served as a springboard for quantitative criticalists, prompting scholars to incorporate critical approaches in more cohesive and congruent ways.

QuantCrit (Gillborn et al., 2018)

As an epistemologically different but related form of critical quantitative scholarship, QuantCrit—quantitative critical race theory—has emerged as a vital stream of inquiry that applies critical race theory to methodological approaches. Given that statistical methods were developed in support of the eugenics movement (Zuberi, 2001 ), QuantCrit researchers must consider how the “norms” of quantitative research support white supremacy (Zuberi & Bonilla-Silva, 2008 ). Fortunately, as Garcia et al. ( 2018 ) noted, “[t]he problems concerning the ahistorical and decontextualized ‘default’ mode and misuse of quantitative research methods are not insurmountable” (p. 154). As such, the goal of QuantCrit is to conduct quantitative research in a way that can contextualize and challenge historical, social, political, and economic power structures that uphold racism (e.g., Garcia et al., 2018 ; Gillborn et al., 2018 ).

In coining the term QuantCrit, Gillborn et al. ( 2018 ) provided five QuantCrit tenets adapted from critical race theory. First, the centrality of racism offers a methodological and political statement about how racism is complex, fluid, and rooted in social dynamics of power. Second, numbers are not neutral demonstrates an imperative for QuantCrit researchers—one that prompts scholars to understand how quantitative data have been collected and analyzed to prioritize interests rooted in white, elite worldviews. As such, QuantCrit researchers must reject numbers as “true” and as presenting a unidimensional truth. Third, categories are neither “natural” nor given prompts researchers to consider how “even the most basic decisions in research design can have fundamental consequences for the re/presentation of race inequity” (Gillborn et al., 2018 , p. 171). Notably, even when race is a focus, scholars must operationalize and interpret findings related to race in the context of racism. Fourth, prioritizing voice and insight advances the notion that data cannot “speak for itself” and numerous interpretations are possible. In QuantCrit, this tenet leverages experiential knowledge among People of Color as an interpretive tool. Finally, the fifth tenet explicates how numbers can be used for social justice but statistical research cannot be placed in a position of greater legitimacy in equity efforts relative to qualitative research. Collectively, although Gillborn et al. ( 2018 ) stated that they expect—much like all epistemological foundations—the tenets of QuantCrit to be expanded, we must first understand how these stated principles arise in critical quantitative research.

Bridging Critical Quantitative Concepts as a Guiding Framework

Guided by these framings (i.e., quantitative criticalism, critical quantitative inquiry, QuantCrit) as a specific stream of inquiry within the larger realm of equity-minded educational research, we explore the extent to which the primary elements of these critical quantitative frameworks are applied in higher education. Across the framings discussed, the commitment to equity-mindedness contributes to a shared underlying essence of critical quantitative approaches. Not only do Stage, Rios-Aguilar, and Gillborn et al. aim for researchers to center on inequities and commit to disrupting “neutral” decisions about and interpretations of statistics, but they also advocate for critical quantitative research (by any name) to serve as a tool for advocacy and praxis—creating structural changes to discriminatory policies and practices, rather than ceasing equity-based commitments with publications alone. Thus, the conceptual framework for the present study brings together alignments and distinctions in scholars’ motivations and actualizations of quantitative research through a critical lens.

Specifically, looking to Stage ( 2007 ), quantitative criticalists must center on inequity in their questions and actions to disrupt traditional models, methods, and practices. Second, extending critical inquiry through all aspects of quantitative research (Rios-Aguilar, 2014 ), researchers must interrogate how critical perspectives can be embedded in every part of research. The embedded nature of critical approaches should consider how study questions, frameworks, analytic practices, and advocacy are developed with intentionality, reflexivity, and the goal of unmasking inequities. Third, centering on the five known tenets of QuantCrit (Gillborn et al., 2018 ), QuantCrit researchers should adapt critical race theory for quantitative research. Although QuantCrit tenets are likely to be expanded in the future, the foundations of such research should continue to acknowledge the centrality of racism, advance critiques of statistical neutrality and categories that serve white racial interests, prioritize the lived experiences of People of Color, and complicate how statistics can be one—but not the lone—part of social justice endeavors.

Over many years, higher education scholars have advanced more critical research, as illustrated through publication trends of critical quantitative manuscripts in higher education (Wofford & Winkler, 2022 ). However, the application of critical quantitative approaches remains laced with tensions among paradigms and analytic strategies. Despite recent systematic examinations of critical quantitative scholarship across educational research broadly (Tabron & Thomas, 2023 ), there has yet to be a comprehensive, systematic review of higher education studies that attempt to apply principles rooted in quantitative criticalism, critical quantitative inquiry, and QuantCrit. Thus, much remains to be learned regarding whether and how higher education researchers have been able to apply the principles previously articulated. In order for researchers to fully (re)imagine possibilities for future critical approaches to quantitative higher education research, we must first understand the landscape of current approaches.

Study Aims and Role of the Researchers

Study Aims and Scope

For this study, we examined the extent to which authors adopted critical quantitative approaches in higher education research and the trends in tools and strategies they employed to do so. In other words, we sought to understand to what extent, and in what ways, authors—in their own perspectives—applied critical perspectives to quantitative research. We relied on the nomenclature used by the authors of each manuscript (e.g., whether they operated from the lens of quantitative criticalism, QuantCrit, or another approach determined by the authors). Importantly, our intent was not to evaluate the quality of authors’ applications of critical approaches to quantitative research in higher education.

Researcher Positionality

As with all research, our positions and motivations shape how we conceptualized and executed the present study. We come to this work as early career higher education faculty, drawn to the study of higher education as one way to rectify educational disparities, and thus are both deeply invested in understanding how critical quantitative approaches may advance such efforts. After engaging in initial discussions during an association-sponsored workshop on critical quantitative research in higher education, we were motivated to explore these perspectives, understand trends in our field, and inform our own empirical engagement. Throughout our collaboration, we were also reflexive about the social privileges we hold in the academy and society as white, cisgender women—particularly given how quantitative criticalism and QuantCrit create inroads for systemically minoritized scholars to combat the erasure of perspectives from their communities due to small sample sizes. As we work to understand prior critical quantitative endeavors, with the goal of creating opportunity for this work to flourish in the future, we continually reflect on how we can use our positions of privilege to be co-conspirators in the advancement of quantitative research for social justice in higher education.

This study employed a qualitatively driven multimethod sequential design (Hesse-Biber et al., 2015 ) to illuminate how critical quantitative perspectives and methods have been applied in higher education contexts over 15 years. Anguera et al. ( 2018 ) noted that the hallmark feature of multimethod studies is the coexistence of different methodologies. Unlike mixed-methods studies, which integrate both quantitative and qualitative methods, multimethod studies can be exclusively qualitative, exclusively quantitative, or a combination of qualitative and quantitative methods. A multimethod research design was also appropriate given the distinct research questions in this study—each answered using a different stream of data. Specifically, we conducted a systematic scoping review of existing literature and facilitated follow-up interviews with a subset of corresponding authors from included publications, as detailed below and in Fig.  1 . We employed a systematic scoping review to examine the extent to which higher education scholars incorporated critical perspectives into quantitative research (research question one), and we then conducted follow-up interviews to elucidate how those scholars discussed specific strategies for leveraging critical perspectives in their quantitative research (research question two).

Fig. 1. Sequential multimethod approach to data collection and analysis

Given the scope of our work—which examined the extent to which, and in what ways, authors applied critical perspectives to quantitative higher education research—we employed an exploratory approach with a constructivist lens. Using a constructivist paradigm allowed us to explore the many realities of doing critical quantitative research, with the authors themselves constructing truths from their worldviews (Magoon, 1977 ). In what follows, we contextualize both our methodological choices and the limitations of those choices in executing this study.

Data Sources

Systematic Scoping Review

First, we employed a systematic scoping review of published higher education literature. Consistent with the purpose of a scoping review, we sought to “examine the extent, range, and nature” of critical quantitative approaches in higher education that integrate quantitative methods and critical inquiry (Arksey & O’Malley, 2005, p. 6). We used a multi-stage scoping framework (Arksey & O’Malley, 2005; Levac et al., 2010) to identify studies that were (a) empirical, (b) conducted within a higher education context, and (c) guided by critical quantitative perspectives. We restricted our review to literature published in 2007 or later (i.e., since Stage’s formal introduction of quantitative criticalism in higher education). All studies considered for review were written in the English language.

The literature search spanned multiple databases, including Academic Search Premier, Scopus, ERIC, PsycINFO, Web of Science, SocINDEX, Psychological and Behavioral Sciences Collection, Sociological Abstracts, and JSTOR. To locate relevant works, we used independent and combined keywords reflecting the inclusion criteria; the initial search yielded 285 unique records. Both authors screened all records separately using the CADIMA online platform (Kohl et al., 2018). In total, 285 title/abstract records were screened, and 40 full-text records were subsequently assessed for eligibility. After separately screening all records, we discussed inconsistencies in title/abstract and full-text eligibility ratings to reach consensus. This strategy led us to a sample of 34 manuscripts that met all inclusion criteria (Fig. 2).

Fig. 2. Identification of systematic scoping review sample via literature search and screening

Systematic scoping reviews are particularly well-suited for initial examinations of emerging approaches in the literature (Munn et al., 2018), aligning with our goal to establish an initial understanding of the landscape of critical quantitative research applications in higher education. This approach also relies heavily on researcher-led qualitative review of the literature, which we viewed as a vital component of our study, as we sought to identify not just what researchers did (e.g., what topics they explored or in what outlets they published) but also how they articulated their decision-making process in the literature. Alternative methods of examining the literature, such as bibliometric analysis, supervised topic modeling, and network analysis, may reveal additional insights regarding the scope and structure of critical quantitative research in higher education not addressed in the current study. As noted by Munn et al. (2018), systematic scoping reviews can serve as a useful precursor to more advanced approaches to research synthesis.

Semi-structured Interviews

To understand how scholars navigated the opportunities and tensions of critical quantitative inquiry in their research, we then conducted semi-structured interviews with authors whose work was identified in the scoping review. For each article meeting the review criteria ( N  = 34), we compiled information about the corresponding author and their contact information as our sample universe (Robinson, 2014 ). Each corresponding author was contacted via email for participation in a semi-structured interview. There were 32 distinct corresponding authors for the 34 manuscripts, as two corresponding authors led two manuscripts each within our corpus of data. In the recruitment email, we provided corresponding authors with a link to a Qualtrics intake survey; this survey confirmed potential participants’ role as corresponding author on the identified manuscript, collected information about their professional roles and social identities, and provided information about informed consent in the study. Twenty-five authors responded to the Qualtrics survey, with 18 corresponding authors ultimately participating in an interview.

Individual semi-structured interviews were conducted via Zoom and lasted approximately 45–60 min. The interview protocol began with questions about corresponding authors’ backgrounds and then moved into questions regarding their motivations for engaging in critical approaches to quantitative methods, their navigation of the epistemological and methodological tensions that may arise when doing quantitative research with a critical lens, their approaches to research design, frameworks, and methods that challenged quantitative norms, and their experiences with the publication process for the manuscript included in the scoping review. In other words, we asked corresponding authors to explicitly relay the thought processes underlying their methodological choices in the article(s) from our scoping review. Importantly, given the semi-structured nature of these interviews, conversations also reflected participants’ broader trajectory to and through critical quantitative thinking as well as their general reflections about how the field of higher education has grappled with critical approaches to quantitative scholarship. To increase consistency in our data collection and the nature of these conversations, the first author conducted all interviews. With participants’ consent, we recorded each interview, had interviews professionally transcribed, and then de-identified data for subsequent analysis. All interview participants were compensated for their time and contributions with a $50 Amazon gift card.

At the conclusion of each interview, participants were given the opportunity to select their own pseudonym. A profile of interview participants, along with their self-selected pseudonyms, is provided in Table  1 . Although we invited all corresponding authors to participate in interviews, our sample may reflect some self-selection bias, as authors had to opt in to be represented in the interview data. Further, interview insights do not represent all perspectives from participants’ co-authors, some of which may diverge based on lived experiences, history with quantitative research, or engagement with critical quantitative approaches.

Data Analysis

After identifying the sample of 34 publications, we began data analysis for the scoping review by uploading manuscripts to Dedoose. Both researchers then independently applied a priori codes (Saldaña, 2015 ) from Stage’s ( 2007 ) conceptualization of quantitative criticalism, Rios-Aguilar’s ( 2014 ) framework for quantitative critical inquiry, and Gillborn et al.’s ( 2018 ) QuantCrit tenets (Table  2 ). While we applied codes in accordance with Stage’s and Rios-Aguilar’s conceptualizations to each article, codes relevant to Gillborn et al.’s tenets of QuantCrit were only applied to manuscripts where authors self-identified as explicitly employing QuantCrit. Given the distinct epistemological origin of QuantCrit from broader forms of critical quantitative scholarship, codes representing the tenets of QuantCrit reflect its origins in critical race theory and may not be appropriate to apply to broader streams of critical quantitative scholarship that do not center on racism (e.g., scholarship related to (dis)ability, gender identity, sexual identity and orientation). After individually completing a priori coding, we met to reconcile discrepancies and engage in peer debriefing (Creswell & Miller, 2000 ). Data synthesis involved tabulating and reporting findings to explore how each manuscript component aligned with critical quantitative frameworks in higher education research to date.

We analyzed interview data through a multiphase process that engaged deductive and inductive coding strategies. After interviews were transcribed and redacted, we uploaded the transcripts to Dedoose for collaborative qualitative coding. The second author read each transcript in full to holistically understand participants’ insights about generating critical quantitative research. During this initial read, the second author noted quotes that were salient to our question regarding the strategies that scholars use to employ critical quantitative approaches.

Then, using the a priori codes drawn from Stage’s ( 2007 ), Rios-Aguilar’s ( 2014 ) and Gillborn et al.’s ( 2018 ) conceptualizations relevant to quantitative criticalism, critical quantitative inquiry, and QuantCrit, we collaboratively established a working codebook for deductive coding by defining the a priori codes in ways that could capture how participants discussed their work. Although these a priori codes had been previously applied to the manuscripts in the scoping review, definitions and applications of the same codes for interview analysis were noticeably broader (to align with the nature of conversations during interviews). For example, we originally applied the code “policy/advocacy”—established from Rios-Aguilar's work—to components from the implications section of scoping review manuscripts. When (re)developed for deductive coding of interview data, however, we expanded the definition of “policy/advocacy” to include participants’ policy- and advocacy-related actions (beyond writing) that advanced critical inquiry and equity for their educational communities.

In the final phase of analysis, each research team member engaged in inductive coding of the interview data. Specifically, we relied on open coding (Saldaña, 2015 ) to analyze excerpts pertaining to participants’ strategies for employing critical quantitative approaches that were not previously captured by deductive codes. Through open coding, we used successive analysis to work in sequence from a single case to multiple cases (Miles et al., 2014 ). Then, as suggested by Saldaña ( 2015 ), we collapsed our initial codes into broader categories that allowed us insight regarding how participants’ strategies in critical quantitative research expanded beyond those which have been previously articulated. Finally, to draw cohesive interpretations from these data, we independently drafted analytic memos for each interview participant’s transcript, later bridging examples from the scoping review that mapped onto qualitative codes as a form of establishing greater confidence and trustworthiness in our multimethod design.

In introducing study findings through a synthesized lens that heeds our multimethod design, we organize the sections below to draw from both scoping review and interview data. Specifically, we organize findings into two primary areas that address authors’ (1) articulated motivations to adopt critical approaches to quantitative higher education research, and (2) methodological choices that they perceive to align with critical approaches to quantitative higher education research. Within these sections, we discuss several coherent areas where authors collectively grappled with tensions in motivation (i.e., broad motivations, using coined names of critical approaches, conveying positionality, leveraging asset-based frameworks) and method (i.e., using data sources and choosing variables, challenging coding norms, interpreting statistical results), all of which signal authors’ efforts to embody criticality in quantitative research about higher education. Given our sequential research questions, which first examined the landscape of critical quantitative higher education research and then asked authors to elucidate their thought processes and strategies underlying their approaches to these manuscripts, our findings primarily focus on areas of convergence across data sources; we do, however, highlight challenges and tensions authors faced in conducting such work.

Articulated Motivations in Critical Approaches to Quantitative Research

To date, critical quantitative researchers in higher education have heeded Stage’s ( 2007 ) call to use data to reveal the large-scale perpetuation of inequities in educational processes and outcomes. This emerged as a defining aspect of higher education scholars’ critical quantitative work, as all manuscripts ( N  = 34) in the scoping review articulated underlying motivations to identify and/or address inequities.

Often, these motivations were reflected in the articulated research questions ( n  = 31; 91.2%). For example, one manuscript sought to “critically examine […] whether students were differentially impacted” by an educational policy based on intersecting race/ethnicity, gender, and income (Article 29, p. 39). Others sought to challenge notions of homogeneity across groups of systemically minoritized individuals by “explor[ing] within-group heterogeneity” of constructs such as sense of belonging among Asian American students (Article 32, p. iii) and “challenging the assumption that [economically and educationally challenged] students are a monolithic group with the same values and concerns” (Article 31, p. 5). These underlying motivations for conducting critical quantitative research emerged most clearly in the named approaches, positionality statements, and asset-based frameworks articulated in manuscripts.

Adopting the Coined Names of Quantitative Criticalism, QuantCrit, and Related Approaches

Based on the inclusion criteria applied in the scoping review, we anticipated that all manuscripts would employ approaches that were explicitly critical and quantitative in nature. Accordingly, all manuscripts ( N  = 34; 100%) adopted approaches that were coined as quantitative criticalism , QuantCrit , critical policy analysis (CPA), critical quantitative intersectionality (CQI) , or some combination of those terms. Twenty-one manuscripts (61.8%) identified their approach as quantitative criticalism, nine manuscripts (26.5%) identified their approach as QuantCrit, two manuscripts (5.9%) identified their approach as CPA, and two manuscripts (5.9%) identified their approach as CQI.

One of the manuscripts that applied quantitative criticalism broadly described it as an approach that “seeks to quantitatively understand the predictors contributing to completion for a specific population of minority students” (Article 34, p. 62), noting that researchers have historically “attempted to explain the experiences of [minority] students using theories, concepts, and approaches that were initially designed for white, middle and upper class students” (Article 34, p. 62). Although this example speaks only to the limited context and outcomes of one study, it highlights a broader theme found across articles; that is, quantitative criticalism was often leveraged to challenge dominant theories, concepts, and approaches that failed to represent systemically minoritized individuals’ experiences. In challenging dominant theories, QuantCrit applications were most explicitly associated with critical race theory and issues of racism. One manuscript noted that “QuantCrit recognizes the limitations of quantitative data as it cannot fully capture individual experiences and the impact of racism” (Article 29, p. 9). However, these authors subsequently noted that “quantitative methodology can support CRT work by measuring and highlighting inequities” (Article 29, p. 9). Several scholars who employed QuantCrit explicitly identified tenets of QuantCrit that they aimed to address, with several authors making clear how they aligned decisions with two tenets establishing that categories are not given and numbers are not neutral.

Although authors broadly applied several of the coined names for critical realms of quantitative research, interview data revealed that several of them felt a palpable tension in labeling. Some participants, like Nathan, questioned the surface-level engagement that may come with coined names: “I don’t know, I think it’s the thinking and the thought processes and the intentionality that matters. How invested should we be in the label?” Nathan elaborated by noting how he has shied away from labeling some of his work as quantitative criticalist, given that he did not have a clear answer about “what would set it apart from the equity-minded, inequality-focused, structurally and systematically-oriented kind of work.” Similarly, Leo described how labels could (un)intentionally stop short of the true mission of the research, recalling that he felt “more inclined to say that I’m employing critical quantitative leanings or influences from critical quant” because a true application of critical epistemology should be apparent in each part of the research process. Although most interview participants remained comfortable with labeling, we also note that, within both interview data and the articles themselves, authors sometimes presented varied source attributions for labels and conflated some of the coined names, representing the messiness of this emerging body of research.

Challenging Objectivity by Conveying Researcher Positionality

Positionality statements acknowledge the influence of scholars’ identities and social positions on research decisions. Quantitative research has historically been viewed as an objective, value-neutral endeavor, with some researchers deeming positionality statements unnecessary and inconsistent with the positivist paradigm from which such work is often conducted. Several interviewed authors noted that positivist or post-positivist roots of quantitative research characterized their doctoral training, which often meant that their “original thinking around statistics and research was very post-positivist” (Carter) or that “there really wasn’t much of a discussion, as far as I can remember as a doc student, about epistemology or ontology” (Randall). Although positionality statements have generally been rare in quantitative research studies, half of the manuscripts in our sample (n = 17; 50.0%) included statements of researcher positionality. One interview participant, Gabrielle, discussed the importance of positionality statements as one way to challenge norms of quantitative research in saying:

It’s not objective, right? I think having more space to say, “This is why I chose the measures I chose. This is how I’m coming to this work. This is why it matters to me. This is my positioning, right?” I think that’s really important in quantitative work…that raises that level of consciousness to say these are not just passive, like every decision you make in your research is an active decision.

While Gabrielle, Carter, and Randall each came to advocate for positionality statements in quantitative scholarship through different pathways, it became clear through these and other interviews that positionality statements were one way to bring greater transparency to a traditionally value-neutral space.

As an additional source of contextual data, we reviewed submission guidelines for the peer-reviewed journals in which manuscripts were published. Not one of the 15 peer-reviewed outlets represented in our scoping review sample required that authors include positionality statements. One outlet, Journal of Diversity in Higher Education (where two scoping review articles were published), offered “inclusive reporting standards” that recommended authors include reflexivity and positionality statements in their submitted manuscripts (American Psychological Association, 2024). Another outlet, Teachers College Record (where one scoping review article was published), mentioned positionality statements in its author instructions. Yet, Teachers College Record neither required nor recommended the inclusion of author positionality statements; rather, it offered recommendations if authors chose to include them. Specifically, the journal suggested that a positionality statement, if included, should be “more than demographic information or abstract statements” (Sage Journals, 2024). The remaining 13 peer-reviewed outlets from the scoping review data made no mention of author reflexivity or positionality in their author guidelines.

When present, positionality statements varied in form and content. Some were embedded in manuscript narratives, while others appeared as separate tables, with each author’s positionality presented in its own row. In content, it was most common for authors to identify how their identities and experiences motivated their work. For example, one author noted their shared identity with their research participants as a low-income, first-generation Latina college student (Article 2, p. 25). Another author discussed the identity that they and their co-author shared as AAPI faculty, making the research “personally relevant for [them]” (Article 11, p. 344).

In interviews, participants recalled how their identities, lived experiences, and motivations for critical approaches to quantitative research were all intertwined. Leo mentioned, “naming who we are in a study helps us be very forthright with the pieces that we’re more likely to attend to.” Yet, Leo went on to say that “one of the most cosmetic choices that people see in critically oriented quantitative research is our positionality statements,” a concern other participants echoed regarding how information in positionality statements is presented. In several interviews, authors’ reflections on whether these statements should appear as lists of identities or as deeper statements about reflexivity presented a clear tension. For some, positionality statements were places to “identify ourselves and our social locations” (David) or “brand yourself” as a critical quantitative scholar to meet “trendy” writing standards in this area (Michelle). Yet, others felt such statements fall short in revealing “how this study was shaped by their background identities and perspectives” (Junco) or appear to “be written in response to the context of the research or people participating” (Ginger). Ultimately, many participants felt that shaping honest positionality statements that better convey “the assumptions, and the biases and experiences we’ve all had” (Randall) was one area where quantitative higher education scholars could significantly improve their writing to reflect a critical lens.

Some manuscripts also clarified how authors’ identities and social positions reshaped the research process and product. For instance, authors of one manuscript reported being “guided by [their] cultural intuition” throughout the research (Article 17, p. 218). Alternatively, another author described the narrative style of their manuscript as intentionally “autobiographical and personally reflexive” in order “to represent the connections [they] made between [their] own experiences and findings that emerged” from their work (Article 28, p. 56). Taken together, among the manuscripts that explicitly included positionality statements, these remarks make clear that authors had widely varying approaches to their reflexivity and writing processes.

Actualizing Asset-Based Frameworks

Notably, conceptual and theoretical frameworks emerged as a common way for critical quantitative scholars to pursue equitable educational processes and outcomes in higher education research. Nearly all ( n  = 32; 94.1%) manuscripts explicitly challenged dominant conceptual and theoretical models. Some authors enacted this challenge by countering canonical constructs and theories in the framing of their study. For example, several manuscripts addressed critiques of theoretical concepts such as integration and sense of belonging in building the conceptual framework for their own studies. Other manuscripts were constructed with the underlying goal to problematize and redefine frameworks, such as engagement for Latina/e/o/x students or the “leaky pipeline” discourse related to broadening participation in the sciences.

Across interviews, participants challenged deficit framings or “traditional” theoretical and conceptual approaches in many ways. Some frameworks, such as sense of belonging and Astin’s (1984) I-E-O model, are taken as a “truism in higher ed” (Leo), and participants sometimes purposefully used these frameworks in order to disrupt their normative assumptions. Randall, for one, recalled using a more normative higher education framework but opted to think about this framework “as more culturalized” than had previously been done. Further, Carter noted that “thinking about the findings in an anti-deficit lens” comprised a large portion of critical quantitative approaches. Using frameworks for asset-based interpretation was further exemplified by Caroline, who stated, “We found that Black students don’t do as well, but it’s not the fault of Black students.” Instead, Caroline challenged deficit understandings through the selected framework and implications for institutional policy. Collectively, challenging normative theoretical underpinnings in higher education was widely favored among participants, and Jackie hoped that “the field continues to turn a critical lens onto itself, to grow and incorporate new knowledges and even older forms of knowledge that maybe it hasn’t yet.”

Alternatively, some participants discussed rejecting widely used frameworks in higher education research in favor of adapting frameworks from other disciplines. For example, QuantCrit researchers drew from critical race theory (and related frameworks, such as intersectionality) to quantitatively examine higher education topics in ways that value the knowledge of People of Color. In using these frameworks, which have origins in critical legal and Black feminist theorization, interview participants noted how important it was “to put yourself out there with talking about race and racism” (Isabel) and connect the statistics “back to systems related to power, privilege, and oppression [because] it’s about connecting [results] to these systemic factors that shape experience, opportunities, barriers, all of that kind of stuff” (Jackie). Further, several authors related pulling theoretical lenses from sociology, gender studies, feminist studies, and queer studies to explore asset-based theorization in higher education contexts and potentially (re)build culturally relevant concepts for quantitative measurement in higher education.

Embodying Criticality in Methodological Sources, Approaches, and Interpretations

Moving beyond underlying motivations of critical quantitative higher education research, scoping review authors also frequently actualized the task of questioning and reconstructing “models, measures, and analytic practices [to] better describe experiences of those who have not been adequately represented” (Stage, 2007 , p. 10). Common across all manuscripts ( N  = 34) was the discussion of specific ways in which authors’ critical quantitative approaches informed their analytic decisions. In fact, “analytic practices” was by far the most prevalent code applied to the manuscripts in our dataset, with 342 total references across the 34 manuscripts. This amounted to 20.8% of the excerpts in the scoping review dataset being coded as reflecting critical quantitative approaches to analytic practices, specifically.

Interestingly, many analytic approaches reflected what some would consider “standard” quantitative methodological tools. For example, manuscripts employed factor analysis to assess measures, t-tests to examine differences between groups, and hierarchical linear regression to examine relationships in specific contexts. Some more advanced, though less commonly applied, methods included measurement invariance testing and latent class analysis. Thus, applying a critical quantitative lens tended not to involve applying a separate set of analytic tools; rather, the critical lens was reflected in authors’ selection of data sources and variables, approaches to data coding and (dis)aggregation, and interpretation of statistical results.

Selecting Data Sources and Variables

Although scholars were explicit in their underlying motivations and approaches to critical quantitative research, this did not often translate into explicitly critical data collection endeavors. Most manuscripts ( n  = 29; 85.3%) leveraged existing measures and data sources for quantitative analysis. Existing data sources included many national, large-scale datasets including the Educational Longitudinal Study (NCES), National Survey of Recent College Graduates (NSF), and the Current Population Survey (U.S. Census Bureau). Other large-scale data sources reflecting specific higher education contexts and populations included the HEDS Diversity and Equity Campus Climate Survey, Learning About STEM Student Outcomes (LASSO) platform, and National Longitudinal Survey of Freshmen. Only five manuscripts (14.7%) conducted analysis using original data collected and/or with newly designed measures.

It was apparent, however, that many authors grappled with challenges related to using existing data and measures. Interview participants’ stories crystallized the strengths and limitations of secondary data. Over half of the interview participants in our study spoke about their choices regarding quantitative data sources. Some participants noted that surveys “weren’t really designed to ask critical questions” (Sarah) and discussed the issues with survey data collected around sex and gender (Jessica). Still, Sarah and Jessica drew from existing survey data to complicate the higher education experiences they aimed to understand and tried to leverage critical framing to question “traditional” definitions of social constructs. In another discussion about data sources and the design of such sources, Carter expanded by saying:

I came in without [being] able to think through the sampling or data collection portion, but rather “this is what I have, how do I use it in a way that is applying critical frameworks but also staying true to the data themselves.” That is something that looks different for each study.

In discussing quantitative data source design, more broadly, Tyler added: “In a lot of ways, all quantitative methods are mixed methods. All of our measures should be developed with a qualitative component to them.” In the scoping review articles, one example of this qualitative component is evident within the cognitive interviews that Sablan ( 2019 ) employed to validate survey items. Finally, several participants noted how crucial it is to “just be honest and acknowledge the [limitations of secondary data] in the paper” (Caroline) and “not try to hide [the limitations]” (Alexis), illustrating the value of increased transparency when it comes to the selection and use of existing quantitative data in manuscripts advancing critical perspectives.

Regardless of data source, attention to power, oppression, and systemic inequities was apparent in the selection of variables across manuscripts. Many variables, and thus the associated models, captured institutional contexts and conditions. The multilevel nature of variables, which extended beyond individual experiences, aligned with authors’ articulated motivations to disrupt inequitable educational processes and outcomes, which are often systemic and institutionalized in nature. For one, David explained key motivations behind his analytic process: “We could have controlled for various effects, but we really wanted to see how are [the outcomes] differing by these different life experiences?” David’s focus on moving past “controlling” for different effects shows a deep level of intentionality that was reflected among many participants. Carter expanded on this notion by recalling how variable selection required, “thinking through how I can account for systemic oppression in my model even though it’s not included in the survey…I’ve never seen it measured.” Further, Leo discussed how reflexivity shaped variable selection and shared: “Ultimately, it’s thinking about how do these environments not function in value-neutral ways, right? It’s not just selecting X, Y, and Z variable to include. It’s being able to interrogate [how] these variables represent environments that are not power neutral.” The process of selecting quantitative data sources and variables was perhaps best summed up by Nick, who concisely shared, “it’s been very iterative.” Indeed, most participants recalled how their methodological processes necessitated reflexivity—an iterative process of continually revisiting assumptions one brings to the quantitative research process (Jamieson et al., 2023 )—and a willingness to lean into innovative ways of operationalizing data for critical purposes.
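
To make the modeling logic participants described more concrete, the sketch below shows one way institutional context can enter a quantitative model: a multilevel regression with a random intercept for each institution. This is our own illustrative sketch, not code from any reviewed manuscript; the data are synthetic, and all variable names (persistence, belonging, msi_status, institution_id) are hypothetical.

```python
# Illustrative sketch (synthetic data; hypothetical variable names):
# modeling students nested within institutions so that campus environments,
# not only individual attributes, enter the model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_inst, per_inst = 20, 50
n = n_inst * per_inst

df = pd.DataFrame({
    "institution_id": np.repeat(np.arange(n_inst), per_inst),
    "belonging": rng.normal(0.0, 1.0, n),  # student-level measure
    "msi_status": np.repeat(rng.integers(0, 2, n_inst), per_inst),  # institution-level
})
campus_effect = np.repeat(rng.normal(0.0, 0.4, n_inst), per_inst)
df["persistence"] = 0.3 * df["belonging"] + campus_effect + rng.normal(0.0, 1.0, n)

# A random intercept per institution separates campus-environment variation
# from individual variation, rather than attributing all differences to students.
model = smf.mixedlm(
    "persistence ~ belonging + msi_status",
    data=df,
    groups=df["institution_id"],
).fit()
print(model.summary())
```

Attributing part of the outcome variance to institutions, rather than to students alone, echoes participants’ insistence that educational environments are not power-neutral.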

Challenging the Norms of Coding

An especially common way of enacting critical principles in quantitative research was to challenge traditional norms of coding. This emerged in three primary ways: (1) disaggregation of categories to reflect heterogeneity in individuals’ experiences, (2) alternative approaches to identifying reference groups, and (3) efforts to capture individuals’ intersecting identities. Across manuscripts, authors often intentionally disaggregated identity subgroups (e.g., race/ethnicity, gender) and ran distinct analytical models for each subgroup separately. In interviews, Junco expressed that running separate models was one way that analyses could cultivate a different way of thinking about racial equity. Specifically, Junco challenged colleagues’ analytic processes by asking whether their research questions “really need to focus on racial comparison?” Junco then pushed her colleagues by asking, “can we make a different story when we look at just the Black groups? Or when we look at only Asian groups, can we make a different story that people have not really heard?” Isabel added that focusing on measurement for People of Color allowed for them (Isabel and her research collaborators) to “apply our knowledge and understanding about minoritized students to understand what the nuances were.” In nearly one third of the manuscripts ( n  = 11; 32.4%), focusing on single group analyses emerged as one way that QuantCrit scholars disrupted the perceived neutrality of numbers and how categories have previously been established to serve white, elite interests. Five of those manuscripts (14.7%) explicitly focused on understanding heterogeneity within systemically minoritized subpopulations, including Asian American, Latina/e/o/x, and Black students.
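
As a concrete illustration of the single-group strategy described above, the sketch below fits a separate model for each racial/ethnic group rather than pooling all groups under one coefficient with a dominant reference category. It is our own hedged example: the data are synthetic, and the variable names (gpa, belonging, first_gen, race_ethnicity) are hypothetical rather than drawn from any reviewed manuscript.

```python
# Illustrative sketch (synthetic data; hypothetical variable names):
# fitting one model per racial/ethnic group instead of pooling groups
# under a single "race" covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "race_ethnicity": rng.choice(["Asian American", "Black", "Latine"], n),
    "belonging": rng.normal(0.0, 1.0, n),
    "first_gen": rng.integers(0, 2, n),
})
df["gpa"] = 3.0 + 0.2 * df["belonging"] + rng.normal(0.0, 0.3, n)

# Each group's associations are estimated on their own terms, supporting
# within-group stories rather than between-group gap narratives.
models = {
    group: smf.ols("gpa ~ belonging + first_gen", data=subset).fit()
    for group, subset in df.groupby("race_ethnicity")
}
print(models["Black"].params)
```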

It was not the case, however, that authors avoided group comparisons altogether. For example, one team of authors used separate principal components analysis (PCA) models for Indigenous and non-Indigenous students with the explicit intent of comparing models between groups. The authors noted that “[t]ypically, monolithic comparisons between racial groups perpetuate deficit thinking and marginalization.” However, they sought to “highlight the nuance in belonging for Indigenous community college students as it differs from the White-centric or normative standards” by comparing groups from an asset-driven perspective (Article 5, p. 7). Thus, in cases where critical quantitative scholars included group comparisons, the intentionality underlying those choices as a mechanism to highlight inequities and/or contribute to asset-based narratives was apparent.
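
The group-specific PCA strategy can be sketched briefly as well. The example below, with synthetic data and hypothetical measures, fits a separate principal components model per group so that the structure of a construct can be compared across groups; it illustrates the general technique, not the cited manuscript’s actual analysis.

```python
# Illustrative sketch (synthetic data; hypothetical measures): fitting separate
# PCA models per group to compare the structure of a construct, rather than
# evaluating all groups against one normative factor solution.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Six hypothetical belonging survey items (columns) per student group.
groups = {
    "Indigenous": rng.normal(0.0, 1.0, size=(120, 6)),
    "non-Indigenous": rng.normal(0.0, 1.0, size=(400, 6)),
}

loadings = {}
for name, X in groups.items():
    Z = StandardScaler().fit_transform(X)  # standardize items within group
    pca = PCA(n_components=2).fit(Z)
    loadings[name] = pca.components_       # group-specific structure

# Diverging loadings across groups would suggest the construct is organized
# differently, rather than assuming a white-centric structure fits everyone.
print(loadings["Indigenous"].round(2))
```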

Four manuscripts (11.8%) were explicit in their efforts to identify analytic alternatives to normative reference groups. Reference groups are often required when building quantitative models with categorical variables such as racial/ethnic and gender identity. Often, dominant identities (e.g., respondents who are white and/or men) comprise the largest portion of a research sample and are selected as the comparison group, typifying the experiences of individuals with those dominant identities. To counter the traditional practice of reference groups, some manuscript authors reported using effect coding, often referencing the work of Mayhew and Simonoff (2015), and dynamic centering as two alternatives. Effect coding (used in three manuscripts) removes the need for a reference group; instead, all groups are compared to the overall sample mean. Dynamic centering (used in one manuscript), on the other hand, uses a reference group, but one that is intentionally selected based on the construct in question, as opposed to relying on sample size or dominant identities.
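
The difference between these coding schemes is easiest to see side by side. The sketch below, written with synthetic data and hypothetical column names, contrasts dummy (treatment) coding with effect (sum-to-zero) coding using the contrast options in statsmodels’ formula interface; it demonstrates the general technique rather than any specific manuscript’s analysis.

```python
# Illustrative sketch (synthetic data; hypothetical column names): contrasting
# dummy (treatment) coding with effect (sum-to-zero) coding.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "race_ethnicity": rng.choice(["Asian American", "Black", "Latine", "white"], n),
    "outcome": rng.normal(3.0, 0.5, n),
})

# Dummy coding: every coefficient compares a group to one reference level,
# implicitly installing that group as the norm.
dummy = smf.ols("outcome ~ C(race_ethnicity, Treatment)", data=df).fit()

# Effect coding: coefficients are deviations from the grand mean across groups,
# so no single group serves as the benchmark.
effect = smf.ols("outcome ~ C(race_ethnicity, Sum)", data=df).fit()

print(dummy.params, effect.params, sep="\n\n")
```

In the effect-coded model, each coefficient reports a group’s deviation from the grand mean, aligning with authors’ stated aim of avoiding a white or dominant-group benchmark.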

Interview participants also discussed navigating alternative coding practices, with several authors raising key points about their exposure to and capacity building for effect coding. As Angela described, effect coding necessitates that “you don’t choose a specific group as your benchmark to do the comparison. And you instead compare to the group.” Angela then stated that this approach made more sense than choosing benchmarks, as she felt uncomfortable identifying one group as a comparison group. Junco, however, noted that “effect coding was much more complicated than what I thought,” as she reflected on unlearning positivist strategies in favor of equity-focused approaches that could elucidate greater nuance.

Importantly, using alternative coding practices was not universal among manuscripts or interview participants. One manuscript utilized traditional dummy coding for race in regression models, with white students as the reference group to which all other groups were compared. The authors explicated that “using white students as the reference [was] not a result of ‘privileging’ them or maintaining the patterns of power related to racial categorizations” (Article 8, p. 1282). Instead, they argued that the comparison was a deliberate choice to “reveal patterns of racial or ethnic educational inequality compared to the privileged racial group” (Article 8, p. 1282). Another author maintained the use of reference groups purely for ease of interpretation. David shared, “it’s easier for the person to just look at it and compare magnitudes.” However, by prioritizing easy interpretation with traditional reference groups, authors may incur other costs (such as sustaining unnecessary comparisons to white students).

Additionally, several manuscripts (n = 13; 38.2%) employed analytic coding practices that aimed to account for intersectionality. While authors identified these practices by various names (e.g., interaction terms, mediating variables, conditional effects), they all afforded similar opportunities. The most common practice among authors in our sample (n = 8; 23.5%) was computing interaction terms to account for intersecting identities, such as race and gender. Specifically pertaining to intersectionality, Alexis summarized many researchers’ tensions well in sharing, “I know what Kimberlé Crenshaw says. But how do I operationalize that mathematically into something that’s relevant?” In offering one way that intersectionality could be realized with quantitative data, Tyler stated that “being able to keep in these variables that are interacting [via interaction terms] and showing differences” may align with the core ideas of intersectionality. Yet, participants also recognized that statistics would inherently fall short of representing respondents’ lived experiences, as discussed by Nick: “We disaggregate as far as we can, but you could only go so far, and like, how do we deal with tension.” Several other participants reflected on bringing in open-text response data about individuals’ social identities, categorizing racial and ethnic groups by continent (while recognizing that this did not necessarily attend to the complexities of diasporas), or making decisions about which groups qualify as “minoritized” based on disciplinary and social movements.
Collectively, the disparate approaches that authors used and discussed directly speak to critical higher education scholars’ movement away from normative comparisons that did not meaningfully answer questions related to (in)equity and/or intersectionality in higher education.
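
To ground the interaction-term strategy discussed above, the following sketch shows a regression with a race-by-gender interaction. As with the other sketches, the data are synthetic and the variable names hypothetical; and, as participants noted, interaction terms remain only a partial, contested proxy for intersectionality.

```python
# Illustrative sketch (synthetic data; hypothetical variable names): an
# interaction term as one partial quantitative proxy for intersecting identities.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 800
df = pd.DataFrame({
    "race_ethnicity": rng.choice(["Black", "Latine", "white"], n),
    "gender": rng.choice(["man", "woman"], n),
    "belonging": rng.normal(0.0, 1.0, n),
})
df["outcome"] = 0.25 * df["belonging"] + rng.normal(0.0, 1.0, n)

# The race x gender interaction estimates joint patterns (e.g., for Black
# women) directly, rather than assuming race and gender effects simply add up.
model = smf.ols(
    "outcome ~ C(race_ethnicity) * C(gender) + belonging", data=df
).fit()
print(model.params)
```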

Interpreting Statistical Results

One notable, albeit less common, way higher education scholars enacted critical quantitative approaches through analytic methods was by challenging traditional ways of reporting and interpreting statistical results. The dominant approach to statistical methods aligns with null hypothesis significance testing (NHST), whereby p-values, used as indicators of statistically significant effects, serve to identify meaningful results. NHST practices were prevalent in nearly all scoping review manuscripts; yet, there were some exceptions. For example, three manuscripts (8.8%) cautioned against reliance on statistical significance due to its dependence on large sample size (i.e., statistical power), which is often at odds with centering research on systemically minoritized populations. One of those manuscripts (2.9%) even chose to interpret nonsignificant results from their quantitative analyses. In a similar vein, two manuscripts (5.9%) also questioned and adapted common statistical practices related to model selection (e.g., using the corrected Akaike information criterion (AICc) instead of p-values) and variable selection (e.g., avoiding use of variance explained so as not to “[exclude] marginalized students from groups with small representations in the data”; Article 23, p. 7). Meanwhile, others attended to raw numeric data and the uncertainty associated with quantitative results. The resources needed to enact these alternative methodological practices were briefly discussed by Tyler in his interview, in which he shared: “The use of p-values is so poorly done that the American Statistical Association has released a statement on p-values, an entire special collection [and people in my field] don’t know those things exist.” Tyler went on to share that this knowledge barrier was tied to the siloed nature of academia, and that such siloes may inhibit the generation of critical quantitative research that draws from different disciplinary origins.
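
The alternatives named here, information criteria and interval estimates rather than a significance binary, can be illustrated briefly. The sketch below compares two candidate models by AIC and reports a confidence interval; it uses synthetic data and hypothetical variable names, and is meant to show the mechanics only, not to reproduce any reviewed analysis.

```python
# Illustrative sketch (synthetic data; hypothetical variable names): comparing
# candidate models with AIC and reporting confidence intervals, rather than
# reducing evidence to a significant/nonsignificant binary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 120  # a deliberately modest sample, where significance tests lose power
df = pd.DataFrame({
    "belonging": rng.normal(0.0, 1.0, n),
    "campus_climate": rng.normal(0.0, 1.0, n),
})
df["persistence"] = 0.3 * df["belonging"] + rng.normal(0.0, 1.0, n)

m1 = smf.ols("persistence ~ belonging", data=df).fit()
m2 = smf.ols("persistence ~ belonging + campus_climate", data=df).fit()

# Lower AIC indicates the better-supported model given the data.
print({"m1_aic": round(m1.aic, 1), "m2_aic": round(m2.aic, 1)})
# Intervals convey the uncertainty around each estimate.
print(m1.conf_int())
```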

Among interviewed authors, many also viewed interpretation as a stage of quantitative research that required a high level of responsibility and awareness of worldview. Nick related that using a QuantCrit approach changed how he was interpreting results, in “talking about educational debts instead of gaps, talking about racism instead of race.” As demonstrated by Nick, critical interpretations of statistics necessitate congruence with theoretical or conceptual framing, as well, given the explicit call to interrogate structures of inequity and power in research adopting a critical lens. Leo described this responsibility as a necessary challenge:

It’s very easy to look at results and interpret them—I don’t wanna say ‘as is’ because I don’t think that there is an ‘as is’—but interpret them in ways that they’re traditionally interpreted and to keep them there. But, if we’re truly trying to accomplish these critical quantitative themes, then we need to be able to reference these larger structures to make meaning of the results that are put in front of us.

Nick, Leo, and several other participants all emphasized how crucial interpretation is in critical quantitative research in ways that expanded beyond statistical practices; ultimately, the perspective that “behind every number is a human” served as a primary motivation for many authors in fulfilling the call toward ethical and intentional interpretation of statistics.

Leveraging a multimethod approach with 15 years of published manuscripts ( N  = 34) and 18 semi-structured interviews with corresponding authors, this study identifies the extent to which principles of quantitative criticalism, critical quantitative inquiry, and QuantCrit have been applied in higher education research. While scholars are continuing to develop strategies to enact a critical quantitative lens in their studies—a path we hope will continue, as continued questioning, creativity, and exploration of new possibilities underscore the foundations of critical theory (Bronner, 2017 )—our findings do suggest that higher education researchers may benefit from intentional conversations regarding specific analytic practices they use to advance critical quantitative research (e.g., confidence intervals versus p -values, finite mixture models versus homogeneous distribution models).

Our interviews with higher education scholars who produced such work also fill a need for guidance on strategies to enact critical perspectives in quantitative research, addressing a gap left by most quantitative training and resources. By drawing on the work and insights of higher education researchers engaging critical quantitative approaches, we provide a foundation on which future scholars can imagine and implement a fuller range of possibilities for critical inquiry via quantitative methods in higher education. In what follows, we discuss the findings of this study alongside the frameworks from which they drew inspiration. Then, we offer implications for research and practice to catalyze continued exploration and application of critical quantitative approaches in higher education scholarship.

Synthesizing Key Takeaways

First, scoping review data revealed several commonalities across manuscripts regarding authors’ underlying motivations to identify and/or address inequities for systemically minoritized populations, speaking to how critical quantitative approaches can fall within the larger umbrella of equity-mindedness in higher education research. Such motivations were reflected in authors’ research questions and frameworks (consistent with Stage’s (2007) initial guidance). Most manuscripts identified their approach as quantitative criticalism broadly, although there were sometimes blurred boundaries between approaches termed quantitative criticalism, QuantCrit, critical policy analysis, and critical quantitative intersectionality. Notably, authors’ decisions about which framing their work invoked also shaped how they enacted a specified critical quantitative approach. For example, the tenets of QuantCrit, offered by Gillborn et al. (2018), were specifically heeded by researchers seeking to take up a QuantCrit lens. Scholars who noted inspiration from Rios-Aguilar (2014) often drew specifically from the framework for critical quantitative inquiry. While the key ingredients of these critical quantitative approaches were offered in the foundational framings we introduced, the field has lacked an understanding of how scholars take up these considerations. Thus, the present findings create inroads to a conversation about applying and extending the articulated components associated with critical quantitative higher education research.

Second, our multimethod approach illuminated general agreement (in manuscripts and interviews) that quantitative research in higher education, whether explicitly critical or not, is neither neutral nor objective. However, despite positionality being a key part of Rios-Aguilar’s (2014) critical quantitative inquiry framework, only half of the manuscripts included researcher positionality. Thus, while educational researchers may agree that, without challenging objectivity, quantitative methods serve to uphold inequity (e.g., Arellano, 2022; Castillo & Babb, 2024), higher education scholars may not have yet established consensus on how these principles materialize. To be clear, consensus need not be the goal of critical quantitative approaches, given that critical theory demands constant questioning for new ways of thinking and being (Bronner, 2017); yet, greater solidarity among critical quantitative higher education researchers may be beneficial, so that community-based discussions can drive the actualization of equity-minded motivations. Interview data also revealed complications in how scholars choose if, and how, to define and label critical quantitative approaches. Some participants struggled with whether their work was “critical enough” to be labeled as such. Those conversations raise concerns that critical quantitative research in higher education could become (or potentially has become) an exclusionary space where level of criticality is measured by an arbitrary barometer (refer to Garvey & Huynh, 2024). Meanwhile, other participants worried that attaching such a label to their work was irrelevant (i.e., that it was the motivations and intentionality underlying the work that mattered, not the label). Although the field remains in disagreement regarding if/how labeling should be implemented for critical quantitative approaches, “it is the naming of experience and ideologies of power that initiates the process [of transformation] in its critical form” (Hanley, 2004, p. 55). As such, we argue that naming critical quantitative approaches can serve as a lever for transforming quantitative higher education research and create power in related dialogue.

Implications for Future Studies on Critical Quantitative Higher Education Research

As with any empirical approach, and especially those that are gaining traction (as critical quantitative approaches are in higher education; Wofford & Winkler, 2022 ), there is utility in conducting research about the research . First, in the context of higher education as a broad field of applied research, there is a need to illustrate what critical quantitative scholars focus on when they conceptualize higher education in the first place. For example, is higher education viewed as a possibility for social mobility? Or are critical quantitative scholars viewing postsecondary institutions as engines of inequity? Second, it was notable that—among the manuscripts including positionality statements—it was common for such statements to read as biographies (i.e., lists of social identities) rather than as reflexive accounts about the roles/commitments of the researcher(s). Future research would benefit from a deeper understanding of the enactment of positionality in critical quantitative higher education research. Third, given the productive tensions associated with naming and understanding the (dis)agreed upon ingredients between quantitative criticalism, critical quantitative inquiry, QuantCrit, as well as additional known and unknown conceptualizations, further research regarding how higher education scholars grapple with definitions, distinctions, and adaptations of these related approaches will clarify how scholars can advance their critical commitments with quantitative postsecondary data.

Implications for Employing Critical Quantitative Higher Education Research

Emerging Analytical Tools for Critical Quantitative Research

In terms of employing critical quantitative approaches in higher education research, there is significant room for scholars to explore emerging quantitative methodological tools. We agree with López et al.’s ( 2018 ) assessment that critical quantitative work tends to remain demographic and/or descriptive in its methodological nature, and there is great potential for more advanced inferential quantitative methods to serve critical aims. While there are some examples in the literature—for example, Sablan’s ( 2019 ) work in the realm of quantitative measurement and Malcom-Piqueux’s (2015) work related to latent class analysis and other person-centered modeling approaches—additional examples of advanced and innovative analytical tools were limited in our findings. Thus, integrating more advanced quantitative methodological tools into critical quantitative higher education research, such as finite mixture modeling (as noted by Malcom-Piqueux, 2015), measurement invariance testing, and multi-group structural equation modeling, may advance the ways in which scholars address questions related to heterogeneity in the experiences and outcomes of college students, faculty, and staff.

Traditional quantitative analytical tools have historically highlighted between-group differences that perpetuate deficit narratives for systemically minoritized students, faculty, and staff on college campuses; for example, comparing the educational outcomes of Black students to white students. Emerging approaches such as finite mixture modeling hold promise in unearthing more nuanced understandings. Of growing interest to many critical quantitative scholars is heterogeneity within minoritized populations; finite mixture modeling approaches such as growth mixture modeling, latent class analysis, and latent profile analysis are particularly well suited to reveal within-group differences that are otherwise obfuscated in most quantitative analyses. Although we found a few examples in our scoping review of authors who leveraged more traditional group comparisons for equity-minded aims, these emerging analytical approaches may be better suited for the questions asked by future critical quantitative scholars.
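
As a brief illustration of the person-centered logic described above, the sketch below fits a simple finite mixture model, akin to a latent profile analysis, and selects the number of profiles by BIC. The data are synthetic and two-dimensional for clarity; applied analyses would use validated scales, and the example is ours rather than drawn from any reviewed manuscript.

```python
# Illustrative sketch (synthetic data): a finite mixture model, akin to latent
# profile analysis, that recovers within-group heterogeneity instead of
# assuming one homogeneous distribution for a population.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Two hypothetical survey scales (e.g., belonging, engagement) for one focal
# population, generated with two latent profiles baked in.
X = np.vstack([
    rng.normal(loc=[3.5, 2.0], scale=0.4, size=(150, 2)),
    rng.normal(loc=[2.0, 3.8], scale=0.4, size=(100, 2)),
])

# Fit 1-4 profile solutions and let BIC select the supported heterogeneity.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 5)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
profiles = fits[best_k].predict(X)  # each respondent's most likely profile
print(best_k, np.bincount(profiles))
```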

One Size Does Not Fit All

Many emerging analytical tools demonstrate promise in advancing conversations about inequity, particularly related to heterogeneity in subpopulations on college and university campuses. However, as noted previously, Rios-Aguilar (2014) cautioned that critical quantitative research need not rely solely on “fancy” or advanced analytical tools; in fact, our findings did not lead us to conclude that higher education scholars have established a set of analytical approaches that are explicitly critical in nature. Rather, our results revealed a common theme: critical quantitative scholarship in higher education necessitates an elevated degree of intentionality in the selection, application, and interpretation of whichever analytical approaches, advanced or not, scholars choose.

As noted, there were several instances in our data where commonly critiqued analytical approaches were still applied in the critical quantitative literature. For example, we found manuscripts that conducted a monolithic comparison of Indigenous and non-Indigenous students and manuscripts that utilized traditional dummy coding with white students as a normative reference group. What made these manuscripts distinct from non-critical quantitative research was the thoughtfulness and intentionality with which those approaches were selected to serve equity-minded goals, an intentionality that was explicitly communicated to readers in the methods sections of manuscripts. Just as the inclusion of positionality statements in half of the manuscripts suggests that researcher objectivity was generally not assumed by higher education scholars conducting critical quantitative scholarship, choices that often otherwise go unquestioned were interrogated and discussed in manuscripts.

Cokley and Awad (2013) share several recommendations for advancing social justice research via quantitative methods. One of their recommendations addresses the utilization of racial group comparisons in quantitative analyses. They do not suggest that researchers avoid comparisons between groups altogether, but rather that they avoid “unnecessary” comparisons between groups (p. 35). They elaborate that “[t]here should be a clear research question that necessitates the use of the comparison” if utilized in quantitative research with critical aims (Cokley & Awad, 2013, p. 35). Our findings suggested that, in the current state of critical quantitative scholarship in higher education, it is not so much a specific set of approaches that deems scholarship critical (or not), but rather the asking of critical questions (as Stage initially called us to do in 2007) and the selection of methods that align with those goals.

Opportunities for Training and Collaboration

Notably, many of the emerging analytical approaches mentioned require a significant degree of methodological training. The limited use of such tools, which are otherwise well suited for critical quantitative applications, points to a potential disconnect in the training of higher education scholars. Some structured opportunities for partnership between disciplinary and methodological scholars have emerged via training programs such as the Quantitative Research Methods (QRM) for STEM Education Scholars Program (funded by National Science Foundation Award 1937745) and the Institute on Mixture Modeling for Equity-Oriented Researchers, Scholars and Educators (IMMERSE) fellowship (funded by Institute of Education Sciences Award R305B220021). These grant-funded training opportunities connect quantitative methodological experts with applied researchers across educational contexts.

We must consider additional ways, both formal and informal, to expand training opportunities for higher education scholars with interest in both advanced quantitative methods and equity-focused research; until then, expertise in quantitative methods and critical frameworks will likely inhabit two distinct communities of scholars. For higher education scholars to fully embrace the potential of critical quantitative research, we will be well served by intentional partnerships across methodological (e.g., quantitative and qualitative) and disciplinary (e.g., higher education scholars and methodologists) boundaries. In addition to expanding applied researchers’ analytical skillsets, training and collaboration opportunities also prepare potential critical quantitative scholars in higher education to select methodological approaches, whether introductory or advanced, that most closely align with their research aims.

Historically, critical inquiry has been viewed primarily as an endeavor for qualitative research. Recently, educational scholars have begun considering the possibilities for quantitative research to be leveraged in support of critical inquiry. However, there remains limited work evaluating whether and to what extent principles from quantitative criticalism, critical quantitative inquiry, and QuantCrit have been applied in higher education research. By drawing on the work and insights of scholars engaging in critical quantitative work, we provide a foundation on which future scholars can imagine and implement a vast range of possibilities for critical inquiry via quantitative methods in higher education. Ultimately, this work will allow scholars to realize the potential for research methodologies to directly support critical aims.

Data Availability

The list of manuscripts generated from the scoping review analysis is available via the Online Supplemental Materials Information link. Given the nature of our sample and topics discussed, interview data will not be shared publicly to protect participant anonymity.

References

American Psychological Association. (2024). Journal of Diversity in Higher Education. https://www.apa.org/pubs/journals/dhe

Anguera, M. T., Blanco-Villaseñor, A., Losada, J. L., Sánchez-Algarra, P., & Onwuegbuzie, A. J. (2018). Revisiting the difference between mixed methods and multimethods: Is it all in the name? Quality & Quantity, 52, 2757–2770.

Arellano, L. (2022). Questioning the science: How quantitative methodologies perpetuate inequity in higher education. Education Sciences, 12(2), 116.

Arksey, H., & O'Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32.

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25, 297–308.

Bensimon, E. M. (2007). The underestimated significance of practitioner knowledge in the scholarship on student success. The Review of Higher Education, 30(4), 441–469.

Bensimon, E. M. (2018). Reclaiming racial justice in equity. Change: The Magazine of Higher Learning, 50(3–4), 95–98.

Bierema, A., Hoskinson, A. M., Moscarella, R., Lyford, A., Haudek, K., Merrill, J., & Urban-Lurain, M. (2021). Quantifying cognitive bias in educational researchers. International Journal of Research & Method in Education, 44(4), 395–413.

Bourdieu, P. (1977). Cultural reproduction and social reproduction. In J. Karabel & A. H. Halsey (Eds.), Power and ideology in education (pp. 487–511). Oxford University Press.

Bronner, S. E. (2017). Critical theory: A very short introduction (Vol. 263). Oxford University Press.

Byrd, D. (2019). The diversity distraction: A critical comparative analysis of discourse in higher education scholarship. Review of Higher Education, 42, 135–172.

Castillo, W., & Babb, N. (2024). Transforming the future of quantitative educational research: A systematic review of enacting QuantCrit. Race Ethnicity and Education, 27(1), 1–21.

Cokley, K., & Awad, G. H. (2013). In defense of quantitative methods: Using the "master's tools" to promote social justice. Journal for Social Action in Counseling and Psychology, 5(2), 26–41.

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39(3), 124–130.

Espino, M. M. (2012). Seeking the "truth" in the stories we tell: The role of critical race epistemology in higher education research. The Review of Higher Education, 36(1), 31–67.

Garcia, N. M., López, N., & Vélez, V. N. (Eds.). (2018). QuantCrit: Rectifying quantitative methods through critical race theory [Special issue]. Race Ethnicity and Education, 21(2).

Garvey, J. C., & Huynh, J. (2024). Quantitative criticalism in education research. Critical Education, 15(1), 74–90.

Gillborn, D., Warmington, P., & Demack, S. (2018). QuantCrit: Education, policy, 'Big Data' and principles for a critical race theory of statistics. Race Ethnicity and Education, 21(2), 158–179.

Hanley, M. S. (2004). The name game: Naming in culture, critical theory, and the arts. Journal of Thought, 39(4), 53–74.

Hesse-Biber, S., Rodriguez, D., & Frost, N. A. (2015). A qualitatively driven approach to multimethod and mixed methods research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry. Oxford University Press.

Jamieson, M. K., Govaart, G. H., & Pownall, M. (2023). Reflexivity in quantitative research: A rationale and beginner's guide. Social and Personality Psychology Compass, 17(4), 1–15.

Kimball, E., & Friedensen, R. E. (2019). The search for meaning in higher education research: A discourse analysis of ASHE presidential addresses. The Review of Higher Education, 42(4), 1549–1574.

Kincheloe, J. L., & McLaren, P. L. (1994). Rethinking critical theory and qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 138–157). Sage.

Levac, D., Colquhoun, H., & O'Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5, 69.

López, N., Erwin, C., Binder, M., & Javier Chavez, M. (2018). Making the invisible visible: Advancing quantitative methods in higher education using critical race theory and intersectionality. Race Ethnicity and Education, 21(2), 180–207.

Magoon, A. J. (1977). Constructivist approaches in educational research. Review of Educational Research, 47(4), 651–693.

Martínez-Alemán, A. M., Pusser, B., & Bensimon, E. M. (Eds.). (2015). Critical approaches to the study of higher education: A practical introduction. Johns Hopkins University Press.

Mayhew, M. J., & Simonoff, J. S. (2015). Non-White, no more: Effect coding as an alternative to dummy coding with implications for higher education researchers. Journal of College Student Development, 56(2), 170–175.

McCoy, D. L., & Rodricks, D. J. (2015). Critical race theory in higher education: 20 years of theoretical and research innovations. ASHE Higher Education Report, 41(3), 1.

Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook. Sage.

Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18, 1–7.

Renn, K. A. (2020). Reimagining the study of higher education: Generous thinking, chaos, and order in a low consensus field. The Review of Higher Education, 43(4), 917–934.

Rios-Aguilar, C. (2014). The changing context of critical quantitative inquiry. New Directions for Institutional Research, 158, 95–107.

Robinson, O. C. (2014). Sampling in interview-based qualitative research: A theoretical and practical guide. Qualitative Research in Psychology, 11(1), 25–41.

Sablan, J. R. (2019). Can you really measure that? Combining critical race theory and quantitative methods. American Educational Research Journal, 56(1), 178–203.

Sage Journals. (2024). Teachers College Record: The voice of scholarship in education. https://journals.sagepub.com/author-instructions/TCZ

Saldaña, J. (2015). The coding manual for qualitative researchers. Sage.

Stage, F. K. (Ed.). (2007). New directions for institutional research: No. 133. Using quantitative data to answer critical questions. Jossey-Bass.

Stage, F. K., & Wells, R. S. (Eds.). (2014). New directions for institutional research: No. 158. New scholarship in critical quantitative research—Part 1: Studying institutions and people in context. Jossey-Bass.

Stewart, D. L. (2022). Spanning and unsettling the borders of critical scholarship in higher education. The Review of Higher Education, 45(4), 549–563.

Tabron, L. A., & Thomas, A. K. (2023). Deeper than wordplay: A systematic review of critical quantitative approaches in education research (2007–2021). Review of Educational Research, 93, 756. https://doi.org/10.3102/00346543221130017

Torgerson, D. J., & Torgerson, C. J. (2003). Avoiding bias in randomised controlled trials in educational research. British Journal of Educational Studies, 51(1), 36–45.

Wells, R. S., & Stage, F. K. (Eds.). (2015). New directions for institutional research: No. 163. New scholarship in critical quantitative research—Part 2: New populations, approaches, and challenges. Jossey-Bass.

Wofford, A. M., & Winkler, C. E. (2022). Publication patterns of higher education research using quantitative criticalism and QuantCrit perspectives. Innovative Higher Education, 47(6), 967–988. https://doi.org/10.1007/s10755-022-09628-3

Zuberi, T. (2001). Thicker than blood: How racial statistics lie. University of Minnesota Press.

Zuberi, T., & Bonilla-Silva, E. (2008). White logic, White methods: Racism and methodology. Rowman & Littlefield.


Acknowledgements

This research was supported by a grant from the American Educational Research Association, Division D. The authors gratefully thank Dr. Jason (Jay) Garvey for his support as an early thought partner with regard to this project, and Dr. Christopher Sewell for his helpful feedback on an earlier version of this manuscript, which was presented at the 2022 Association for the Study of Higher Education meeting.

Funding

This research was supported by a grant from the American Educational Research Association, Division D.

Author information

Authors and affiliations

Department of Counseling, Higher Education Leadership, Educational Psychology, & Foundations, Mississippi State University, 175 President’s Circle, 536 Allen Hall, Mississippi State, MS, 39762, USA

Christa E. Winkler

Department of Educational Leadership & Policy Studies, Florida State University, Tallahassee, FL, USA

Annie M. Wofford


Corresponding author

Correspondence to Christa E. Winkler.

Ethics declarations

Competing interests

The authors have no competing interests to declare relevant to the content of this article.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 26 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Winkler, C.E., Wofford, A.M. Trends and Motivations in Critical Quantitative Educational Research: A Multimethod Examination Across Higher Education Scholarship and Author Perspectives. Res High Educ (2024). https://doi.org/10.1007/s11162-024-09802-w


Received: 25 June 2023

Accepted: 14 May 2024

Published: 04 June 2024

DOI: https://doi.org/10.1007/s11162-024-09802-w


Keywords

  • Critical quantitative
  • Quantitative criticalism
  • Scoping review
  • Multimethod study
  • Open access
  • Published: 05 June 2024

Current status and ongoing needs for the teaching and assessment of clinical reasoning – an international mixed-methods study from the students' and teachers' perspective

  • F. L Wagner 1 ,
  • M. Sudacka 2 ,
  • A. A Kononowicz 3 ,
  • M. Elvén 4 , 5 ,
  • S. J Durning 6 ,
  • I. Hege 7 &
  • S. Huwendiek 1  

BMC Medical Education volume 24, Article number: 622 (2024)


Abstract

Background

Clinical reasoning (CR) is a crucial ability that can prevent errors in patient care. Despite its important role, CR is often not taught explicitly and, even when it is taught, typically not all aspects of this ability are addressed in health professions education. Recent research has shown the need for explicit teaching of CR for both students and teachers. To further develop the teaching and learning of CR we need to improve the understanding of students' and teachers' needs regarding content as well as teaching and assessment methods for a student and trainer CR curriculum.

Methods

A parallel mixed-methods design used web surveys and semi-structured interviews to gather data from both students (survey: n = 100; interviews: n = 13) and teachers (survey: n = 112; interviews: n = 28). The interviews and surveys contained similar questions to allow for triangulation of the results. This study was conducted as part of the EU-funded project DID-ACT ( https://did-act.eu ).

Results

Both the survey and interview data emphasized the need for content in a clinical reasoning (CR) curriculum such as "gathering, interpreting and synthesizing patient information", "generating differential diagnoses", "developing a diagnostic and a treatment plan", and "collaborative and interprofessional aspects of CR". There was high agreement that case-based learning and simulations are most useful for teaching CR. Clinical and oral examinations were favored for the assessment of CR. The preferred format for a train-the-trainer (TTT) course was blended learning. There was also some agreement between the survey and interview participants regarding the contents of a TTT course (e.g., teaching and assessment methods for CR). The interviewees placed special importance on interprofessional aspects for the TTT course as well.

Conclusions

We found some consensus on needed content, teaching and assessment methods for a student and TTT-course in CR. Future research could investigate the effects of CR curricula on desired outcomes, such as patient care.


Introduction

Clinical reasoning (CR) is a universal ability that mobilizes the integration of necessary fundamental knowledge while delivering high-quality patient care in a variety of contexts in a timely and effective way [1, 2]. Daniel et al. [3] define it as a "skill, process or outcome wherein clinicians observe, collect, and interpret data to diagnose and treat patients". CR encompasses health professionals' thinking and acting in patient assessment, diagnostic, and management processes in clinical situations, taking into account the patient's specific circumstances and preferences [4]. How CR is defined can vary between health professions, but there are also similarities [5]. Poor CR is associated with low-quality patient care and increases the risk of medical errors [6]. Berner and Graber [7] suggested that the rate of diagnostic error is around 15%, underlining the threat that insufficient CR ability poses to patient safety, as well as its contribution to increasing healthcare costs [8]. Despite the importance of CR, it appears to be rarely taught or assessed explicitly; often only parts of the CR process are covered in existing curricula, and there seems to be a lack of progression throughout curricula (e.g., [9, 10, 11, 12, 13, 14]). Moreover, teachers are often not trained to explicitly teach CR, including explaining their own reasoning to others [10, 11, 12], although this appears to be an important factor in the implementation of a CR curriculum [15]. Some teachers even question whether CR can be explicitly taught [16]. Considering these findings, efforts should be made to incorporate the explicit teaching of CR into health care professions curricula, and training for teachers should be established based on best evidence. However, to date, little is known about what a longitudinal CR curriculum should incorporate to meet the needs of teachers and students.

Insights regarding the teaching of CR were provided by a global survey by Kononowicz et al. [10], who reported a need for a longitudinal CR curriculum. However, the participants in their study were mainly health professions educators, leaving the needs of students for a CR curriculum largely unknown. As students are the future participants of a CR curriculum, their needs should also be investigated. Kononowicz et al. [10] also identified a lack of qualified faculty to teach CR. A train-the-trainer course for CR could help reduce this barrier to teaching CR. To the best of our knowledge, beyond the work by Kononowicz et al. [10], no research exists yet that addresses the needs of teachers for such a course, and Kononowicz et al. [10] did not investigate their needs beyond course content. Recently, Gupta et al. [12] and Gold et al. [13] conducted needs analyses regarding clinical reasoning instruction from the perspective of course directors at United States medical schools, yet a European perspective is missing. Thus, our research questions were the following:

1. What aspects of clinical reasoning are currently taught and how important are they in a clinical reasoning curriculum according to teachers and students?

2. What methods are currently used to teach and assess clinical reasoning and which methods would be ideal according to teachers and students?

3. In what study year does the teaching of clinical reasoning currently begin and when should it ideally begin according to teachers and students?

4. How should a train-the-trainer course for teachers of clinical reasoning be constructed regarding content and format?

Methods

In this study, we used a convergent parallel mixed-methods design [17] within a pragmatic constructivist case study approach [18]. We simultaneously collected data from students and educators using online questionnaires and semi-structured interviews to gain deeper insight into their needs in one particular situation [19], the development of a clinical reasoning curriculum, to address our research questions. To help ensure that the results of the survey and the interviews could be compared and integrated, we constructed the questions for the survey and the interviews similarly, with the exception that in the interviews, the questions were first asked openly. The design was parallel both in that we collected data simultaneously and in that we constructed the survey and interviews to cover similar topics. We chose this approach to obtain comprehensive answers to the research questions and to facilitate later triangulation [17] of the results.

Context of this study

We conducted this study within the EU-funded (Erasmus+ program) project DID-ACT ("Developing, implementing, and disseminating an adaptive clinical reasoning curriculum for healthcare students and educators"; https://did-act.eu ). Institutions from six European countries (Augsburg University, Germany; Jagiellonian University in Kraków, Poland; Maribor University, Slovenia; Örebro University, Sweden; University of Bern, Switzerland; EDU, a higher medical education institution based in Malta; Instruct GmbH, Munich, Germany), with the support of associate partners (e.g., Prof. Steven Durning, Uniformed Services University of the Health Sciences, USA; Mälardalen University, Sweden), were part of this project. For further information, see https://did-act.eu/team-overview/team/ . In this project, we developed an interprofessional longitudinal clinical reasoning curriculum for students in healthcare education and a train-the-trainer course for health professions educators. The current curriculum (for a description, see Hege et al. [20]) was also informed by this study. This study was part of the Erasmus+ Knowledge Alliance DID-ACT (612454-EPP-1-2019-1-DE-EPPKA2-KA).

Target groups

We identified two relevant target groups for this study, teachers and students, who are the potential future users of a train-the-trainer (TTT) course and participants of a clinical reasoning curriculum, respectively. The teacher group also included individuals who were considered knowledgeable regarding the current status of clinical reasoning teaching and assessment at their institutions (e.g., curriculum managers). These specific participants were individually selected by the DID-ACT project team to help ensure that they had the desired level of expertise. The target groups included different health professions from a large number of countries (see Table 1), as we wanted to gather insights that are not restricted to one profession.

Development of data collection instruments

Development of questions

The questions in this study addressed the current status and needs regarding content, teaching, and assessment of clinical reasoning (CR). They were based on the questions used by Kononowicz et al. [10] and were expanded to obtain more detailed information. Specifically, regarding CR content, we added additional aspects (see Table 8 in the Appendix for details). The contents covered in this part of the study also align with the five domains of CR education (clinical reasoning concepts, history and physical examination, choosing and interpreting diagnostic tests, problem identification and management, and shared decision-making) that were reported by Cooper et al. [14]. It has been shown that there are similarities between professions regarding the definition of CR (e.g., history taking or an emphasis on clinical skills), while nurses placed greater importance on a patient-centered approach [5]. We aimed to cover as many aspects of CR in the contents as possible to represent these findings. We expanded the questions on CR teaching formats to cover a broader range of formats. Furthermore, two additional assessment methods were added to the respective questions. Finally, one aspect was added to the content questions for a train-the-trainer course (see Table 8 in the Appendix). As a lack of qualified faculty to teach CR was identified in the study by Kononowicz et al. [10], we added additional questions on the specific needs for the design of a CR train-the-trainer course beyond content. Table 8 in the Appendix shows the adaptations that we made in detail.

We discussed the questions within the interprofessional DID-ACT project team and adapted them in several iterative cycles until the final versions of the survey questionnaire and the interview guide were obtained and agreed upon. We tested the pre-final versions with think-alouds [21] to ensure that the questions were understandable and interpreted as intended, which led to a few changes. The survey questionnaires and interview guides can be found at https://did-act.eu/results/ and accessed via the links in table sections D1.1a (survey questions) and D1.1b (interview guides), respectively. Of these questions, we included only those relevant to the research questions addressed in this study. The questions included in this study can be found in Table 8 in the Appendix.

Teachers were asked questions about all content areas, but only the expert subgroup was asked to answer questions on the current situation regarding the teaching and assessment of clinical reasoning at their institutions, as they were considered the best-informed group on the matter. Furthermore, students were not asked questions on the train-the-trainer course. Using the abovementioned procedures, we also hoped to improve the response rate, as longer surveys have been found to be associated with lower response rates [22].

We created two different versions of the interview guide, one for teachers and one for students. The student interview guide did not contain questions on the current status of clinical reasoning teaching and assessment or questions about the train-the-trainer course. The interview guides were prepared with detailed instructions to ensure that the interviews were conducted in a comparable manner at all locations. By using interviews, we intended to obtain a broad picture of existing needs. Individual interviews further allowed participants to speak their own languages and thus to express themselves naturally and as precisely as possible.

Reflexivity statement

Seven researchers representing different perspectives and professions formed the study team. MS, a PhD candidate, represents the junior researcher perspective, while the remaining team members (SD, SH, AK, IH, ME, FW) are experienced researchers with broad backgrounds in clinical reasoning and in both qualitative and quantitative research. ME represents the physiotherapist perspective; SD, SH, and MS represent the medical perspective. We discussed all steps of the study as a team and made joint decisions.

Data collection and analysis

The survey was created using LimeSurvey software (LimeSurvey GmbH). The survey links were distributed via e-mail (individual invitations, posts to institutional mailing lists, newsletters) by the DID-ACT project team and associate partners (the target groups received specific links to the online survey). The e-mail contained information on the project and its goals. By individually contacting persons in the local language, we hoped to increase the likelihood of participation. The survey was anonymous. The data were collected from March to July 2020.

Potential interview participants were contacted personally by the DID-ACT project team members in their respective countries. We used a convenience sampling approach, personally contacting potential interview partners in the local language to motivate as many participants as possible and to increase the likelihood of participation. The interviews were conducted in the local languages, also to avoid language barriers, and were audio-recorded to help with the analysis and for documentation purposes. Most interviews were conducted using online meeting services (e.g., Skype or Zoom) because of restrictions due to the coronavirus pandemic, which was ongoing when data collection began at the start of the DID-ACT project. The data were collected from March to July 2020. All interview partners provided informed consent.

Ethics approval and consent to participate

We asked the Bern Ethics Committee to approve this multi-institutional study. This type of study was regarded as exempt from formal ethical approval according to the regulations of the Bern Ethics Committee (‘Kantonale Ethikkommission Bern’, decision Req-2020–00074). All participants voluntarily participated and provided informed consent before taking part in this study.

Data analysis

Descriptive analyses were performed using SPSS statistics software (version 28, 2021). Independent samples t-tests were computed for comparisons between teachers and students. When the variances of the two groups were unequal, Welch's test was used. Bonferroni correction of significance levels was applied to counteract alpha error accumulation in repeated tests. The answers to the free-text questions were screened for recurring themes; however, there were very few free-text comments, and they typically repeated aspects from the closed questions, so no meaningful analysis was possible. For this reason, the survey comments are mentioned only where they made a unique contribution to the results.
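A minimal sketch of this testing procedure, under the assumption of hypothetical rating data (SPSS was the software actually used; SciPy merely reproduces the logic): an independent samples t-test that switches to Welch's variant when variances appear unequal, a Bonferroni adjustment across repeated tests, and the pooled-standard-deviation Cohen's d of the kind reported with the results below.

```python
# Sketch: Welch-aware t-test with Bonferroni correction and Cohen's d.
# Ratings are simulated placeholders for teacher vs. student importance ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
teachers = rng.normal(4.2, 0.5, size=112)  # hypothetical 5-point scale ratings
students = rng.normal(3.9, 0.8, size=100)

# Check the equal-variance assumption (Levene's test is one common choice);
# fall back to Welch's t-test (equal_var=False) when variances differ.
_, p_levene = stats.levene(teachers, students)
t_stat, p_raw = stats.ttest_ind(teachers, students, equal_var=p_levene >= 0.05)

# Bonferroni correction: multiply each raw p-value by the number of tests run.
n_tests = 10  # hypothetical number of repeated comparisons
p_corrected = min(p_raw * n_tests, 1.0)

# Cohen's d with a pooled standard deviation, as commonly reported alongside t.
na, nb = len(teachers), len(students)
pooled_sd = np.sqrt(((na - 1) * teachers.var(ddof=1) +
                     (nb - 1) * students.var(ddof=1)) / (na + nb - 2))
d = (teachers.mean() - students.mean()) / pooled_sd

print(f"t = {t_stat:.3f}, raw p = {p_raw:.4f}, "
      f"Bonferroni p = {p_corrected:.4f}, d = {d:.3f}")
```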

The interviews were translated into English by the partners. An overarching summarizing qualitative content analysis [ 23 ] of the data was conducted. A summarizing content analysis is particularly useful when the content level of the material is of interest. Its goal is to reduce the material to manageable short texts in a way that retains the essential meaning [ 23 ]. The analysis was conducted first by two of the authors of the study (FW, SH) and then discussed by the entire author team. The analysis was carried out as an iterative process until a complete consensus was reached within the author team.

The results from the surveys and interviews were compared and are presented together in the results section. The qualitative data are reported in accordance with the standards for reporting qualitative research (SRQR, O’Brien et al. [ 24 ]).

Results

Table 1 shows the professional backgrounds and countries of the interview and survey samples. The survey was opened by 857 persons, 212 (25%) of whom answered the questions included in this study. The expert subgroup of teachers who answered the questions on the current status of clinical reasoning teaching and assessment comprised 45 individuals.

Content of a clinical reasoning curriculum for students

The survey results show that "Gathering, interpreting, and synthesizing patient information" is currently the most extensively taught content area, while "Theories of clinical reasoning" are rarely taught (see Table 2). In accordance with these findings, "Gathering, interpreting, and synthesizing patient information" received the highest mean importance rating for a clinical reasoning curriculum, while "Theories of clinical reasoning" received the lowest importance rating. Full results can be found in Table 9 in the Appendix.

Teachers and students differed significantly in their importance ratings of two content areas, "Gathering, interpreting, and synthesizing patient information" (t(148.32) = 4.294, p < 0.001, d = 0.609) and "Developing a problem formulation/hypothesis" (t(202) = 4.006, p < 0.001, d = 0.561), with teachers assigning greater importance to both of these content areas.

The results from the interviews are in line with those from the survey. Details can be found in Table 12 in the Appendix .

Clinical reasoning teaching methods

The survey participants reported that case-based learning is currently the most frequently applied format for teaching clinical reasoning (CR). This format was also rated as the most important for teaching CR (see Table 3). Full results can be found in Table 10 in the Appendix.

Teachers and students differed significantly in their importance ratings of Team-based learning (t(202) = 3.079, p = 0.002, d = 0.431), with teachers assigning greater importance to this teaching format.

Overall, the interviewees provided judgements very similar to those of the survey participants. In addition to the teaching formats shown in Table 3, some interviewees would employ blended learning, and clinical teaching formats such as bedside teaching and internships were also mentioned. Details can be found in Table 13 in the Appendix. Beyond the importance of each individual teaching format, it was also argued that all of the formats can be useful because they serve different objectives, and that there is no single best format for teaching CR.

Start of clinical reasoning teaching in curricula

Most teachers (52.5%) reported that the teaching of clinical reasoning (CR) currently starts in the first year of study. Most often (46.4%), the participants also chose the first study year as the optimal year for starting the teaching of CR. In accordance with the survey results, the interviewees also advocated for an early start to the teaching of CR. Some interview participants who advocated for a later start suggested that students first need a solid knowledge base and that explicit teaching of CR should begin once clinical/practical education starts.

Assessment of clinical reasoning

The survey results suggest that, currently, written tests or clinical examinations are most often used, while Virtual Patients are used least often (see Table 4). Despite written tests being the most common current assessment format, they received the lowest importance rating for a future longitudinal CR curriculum. Full results can be found in Table 11 in the Appendix.

Teachers and students differed significantly in their importance ratings of clinical examinations (t(161.81) = 2.854, p = 0.005, d = 0.413) and workplace-based assessments (t(185) = 2.640, p = 0.009, d = 0.386), with teachers assigning greater importance to both of these assessment formats.

The interviewees also placed importance on all assessment methods but found it difficult to assess CR with written formats. The students seemed to associate clinical examinations more with practical skills than with CR. Details can be found in Table 14 in the Appendix. Two of the interview participants mentioned that CR is currently not assessed at their institutions, and one person mentioned that students are asked to self-reflect on their interactions with patients and on potential improvements.

Train-the-trainer course

The following sections highlight the results from the needs analysis regarding a train-the-trainer (TTT) course. The questions presented here were posed only to the teachers.

Most survey participants reported that there is currently no TTT course on clinical reasoning at their institution but that they think such a course is necessary (see Table 5). The same was true for the interviewees (no existing TTT course on clinical reasoning, but a need for one).

In the interviews, 22 participants (78.6%) answered that a TTT course is necessary for healthcare educators, two participants answered that no such course was necessary, and two others were undecided. A TTT course for teaching clinical reasoning does not exist at any of the institutions represented by the interviewees.

When asked what the best format for a clinical reasoning TTT course would be (single-answer question), the majority of the survey participants favored a blended learning / flipped classroom approach, a combination of e-learning and face-to-face meetings (see Table 6).

In the survey comments, it was noted that blended learning combines the benefits of self-directed learning with those of discussion and learning from others. It would further allow teachers to first gather knowledge about CR in an online learning phase, where they can take the time they need, before coming to a face-to-face meeting.

The interviewees also found a blended-learning approach particularly suitable for a TTT course. An e-learning-only course was viewed more critically because teachers may lack motivation to participate in an online-only setting, while a one-time face-to-face meeting would not provide enough time. In some interviews, it was emphasized that teachers should experience for themselves what they are supposed to teach to the students, and also that the trainers of the teachers need a solid education and knowledge of clinical reasoning.

Table 7 shows the importance ratings of potential content of a TTT course generated from the survey. To elaborate on this content, comments by the interviewees were added. On average, all content was seen as (somewhat) important, with teaching methods on the ward and/or in the clinic receiving the highest ratings. Some interviewees also mentioned the importance of interprofessional aspects and an interdisciplinary understanding of CR. In the survey comments, some participants further expressed their interest in such a course.

Finally, the interviewees were asked about the ideal length of a clinical reasoning TTT course. The answers varied greatly, from 2–3 hours to a two-year educational program, with a tendency toward 1–2 days. Several interviewees commented that the time teachers are able to spend on a TTT course is limited, which should be considered in the planning of such a course to make participation feasible.

Discussion

In this study, we investigated the current status of and suggestions for the teaching and assessment of clinical reasoning (CR) in a longitudinal curriculum, as well as suggestions for a train-the-trainer (TTT) course for CR. Teachers and students were invited to participate in online surveys as well as semi-structured interviews to derive answers to our research questions. Regarding the contents of a CR curriculum for students, the results of the surveys and interviews were comparable and favoured content such as gathering, interpreting, and synthesizing patient information, generating differential diagnoses, and developing a diagnostic and a treatment plan. In the interviews, high importance was additionally placed on collaborative and interprofessional aspects of CR. Case-based learning and simulations were seen as the most useful methods for teaching CR, and clinical and oral examinations were favoured for the assessment of CR. The preferred format for a TTT course was blended learning; in terms of course content, teaching and assessment methods for CR were emphasized. In addition to research from the North American region [11], this study provides results from predominantly European countries that support the existing findings.

Content of a clinical reasoning curriculum

Our results revealed that there are still aspects of clinical reasoning (CR), such as "Errors in the clinical reasoning process and strategies to avoid them" or "Interprofessional aspects of CR", that are rarely taught despite their high importance, corroborating the findings of Kononowicz et al. [10]. According to the interviewees, students should have basic knowledge of CR before they are taught about errors in the CR process and strategies to avoid them. The lack of teaching about errors in CR may also stem from a lack of institutional culture regarding how to manage failures in a constructive way (e.g., [16, 25]), making it difficult to explicitly address errors and strategies to avoid them. Although highly relevant in the everyday practice of healthcare professions and underpinned by CR theoretical frameworks (e.g., distributed cognition [26]), interprofessional and collaborative aspects of CR are currently rarely considered in the teaching of CR. The interviews suggested that hierarchical distance and cultural barriers may contribute to this finding. Sudacka et al. [16] also reported cultural barriers as one reason for a lack of CR teaching. Generally, the interviewees seemed to place greater importance on interprofessional and collaborative aspects than did the survey participants. This may have been due to differences in the professions represented in the two modalities (e.g., a greater percentage of nurses among the interview participants, who tend to define CR more broadly than physicians [5]).

"Self-reflection on clinical reasoning performance and strategies for future improvement", "Developing a problem formulation/hypothesis", and "Aspects of patient participation in CR" were rated as important but are currently rarely taught, a finding not previously reported. The aspect "Self-reflection on clinical reasoning performance and strategies for future improvement" received high importance ratings, but only 25% of the survey participants answered that it is currently taught to a great extent. The interviewees agreed that self-reflection is important and added that, ideally, it should be guided by specific questions. Ogdie et al. [27] found that reflective writing exercises helped students identify errors in their reasoning and the biases that contributed to these errors.

"Gathering, interpreting, and synthesizing patient information" and "Developing a problem formulation/hypothesis" were rated as significantly more important by teachers than by students. Students may not yet be fully aware of the importance of gathering, interpreting, and synthesizing patient information in the clinical reasoning process. There was some indication in the interviews that the students may not have had enough experience yet with "Developing a problem formulation/hypothesis", or that they associate this aspect with research, possibly contributing to the observed difference.

Overall, our results on the contents of a CR curriculum suggest that all content is important and should be included in a CR curriculum, starting with basic theoretical knowledge and data gathering and progressing to more advanced aspects such as errors in CR and collaboration. Two other recent surveys conducted in the United States, among pre-clerkship clinical skills course directors [12] and members of clerkship organizations [13], came to similar conclusions regarding the inclusion of clinical reasoning content at various stages of medical curricula. How to fit this content into already dense study programs, however, can still be a challenge [16].

Clinical reasoning teaching methods

In addition to case-based learning and clinical teaching, human simulated patients and Team-based learning also received high importance ratings for teaching clinical reasoning (CR), a finding not previously reported. Lectures, on the other hand, are seen as the least important format for teaching CR (see also Kononowicz et al. [10]), as, according to the interviewees, they mainly deliver factual knowledge. High-fidelity simulations (mannequins) and Virtual Patients (VPs) are rarely used to teach CR at the moment and are rated as less important compared with other teaching formats. Some interviewees see high-fidelity simulations as more useful for teaching practical skills. The lower importance rating of VPs was surprising given that this format is case-based, provides a safe environment for learning, and is described in the literature as a well-suited tool for teaching CR [28, 29]. Considering that VPs seemed to be used less often at the institutions involved in this study, a lack of experience with this format may have led to this result.

Teachers rated Team-based learning as significantly more important for teaching clinical reasoning than did students. In the interviews, many students seemed unfamiliar with Team-based learning, possibly explaining the lower ratings students gave this format in the survey.

Taken together, our results suggest that there is not one best format for teaching all aspects of clinical reasoning but rather that the use of all teaching formats is justified depending on the specific content to be taught and goals to be achieved. However, there was agreement that a safe learning environment where no patients can be harmed is preferred for teaching clinical reasoning, and that discussions should be possible.

Start of clinical reasoning teaching in curricula

There was wide agreement that clinical reasoning (CR) teaching should start in the first year of study. However, a few participants in this study argued that students first need to develop some general knowledge before CR is taught. Rencic et al. [11] reported that, according to internal medicine clerkship directors, CR should be taught throughout all years of medical school, with a particular focus during the clinical teaching years. A similar remark was made by participants in a survey among pre-clerkship clinical skills course directors by Gupta et al. [12], where the current structure of some curricula (e.g., the late introduction of pathophysiology) was regarded as a barrier to introducing CR from the first year of study onward.

Assessment of clinical reasoning

Our results show that clinical examinations (e.g., OSCEs) are both the most important format for assessing clinical reasoning (CR) and the format currently used to the greatest extent, consistent with Kononowicz et al. [10]. The interviewees emphasized that CR should ideally be assessed in a conversation or discussion in which the learners can explain their reasoning. Given this argument, all assessment formats enabling a conversation are suitable for assessing CR. This is reflected in our survey results, where assessment formats that allow for a discussion with the learner, including oral examinations, received the most favourable importance ratings. In agreement with Kononowicz et al. [10], we also found that written tests are currently used most often to assess CR but are rated as least important and suitable only for the assessment of some aspects of CR. Daniel et al. [3] argued that written exams such as MCQs, where correct answers have to be selected from a list of choices, are not the best representation of real practical CR ability. Thus, there still seems to be potential for improvement in the way CR is assessed.

Teachers rated clinical examinations and workplace-based assessments significantly higher than did students. Based on the interviews, the students seemed to associate clinical examinations such as OSCEs more with practical skills than with CR, potentially explaining their lower ratings of this format.

What a clinical reasoning train-the-trainer course should look like

Our results show a clear need for a clinical reasoning (CR) train-the-trainer course (see also Singh et al. [15]), which currently does not exist at most institutions represented in this study, corroborating the findings of Kononowicz et al. [10]. A lack of adequately trained teachers is a common barrier to the introduction of CR content into curricula [12, 16]. According to our results, such a course should follow a blended learning/flipped classroom approach or consist of a series of face-to-face meetings. A blended-learning course would combine the benefits of self-directed learning with the possibility for trainers to discuss with and learn from their peers, which could also increase their motivation to participate in such a course. An e-learning-only course or a one-time face-to-face meeting were considered insufficient. The contents "Clinical reasoning strategies" and "Common errors in the clinical reasoning process" were given greater importance for the trainer curriculum than for the student curriculum, possibly reflecting higher expectations of trainers as "CR experts" compared with students. There was some agreement in the interviews that, ideally, the course should not be too time-consuming, with participants tending toward an overall duration of 1–2 days, considering that most teachers have many duties and may not be able or willing to attend a course that is too long. Lack of time was also identified as a barrier to attending teacher training [12, 13, 16].

Strengths and limitations

The strengths of this study include its international and interprofessional participants. Furthermore, we explicitly included teachers and students as target groups in the same study, which enables a comparison of different perspectives. Members of the target groups not only participated in a survey but were also interviewed to gain in-depth knowledge. A distinct strength of this study is its mixed-methods design. The two data collection methods employed in parallel provided convergent results, with responses from the web survey indicating global needs and semi-structured interviews contributing to a deeper understanding of the stakeholder groups’ nuanced expectations and perspectives on CR education.

This study is limited in that most answers came from physicians, making the results potentially less generalizable to other professions. Furthermore, there were participants from a great variety of countries, with some countries overrepresented. Because of the way the survey invitations were distributed, the exact number of recipients is unknown, making it impossible to compute an exact response rate; moreover, the completion rate among individuals who opened the survey was rather low. Because the survey was anonymous, it cannot be completely ruled out that some individuals participated in both the interviews and the survey. Finally, there could have been some language issues in the interview analysis, as the data were translated into English at the local partner institutions before they were submitted for further analysis.

Conclusions

Our study provides evidence of an existing need for explicit longitudinal clinical reasoning (CR) teaching and dedicated CR teacher training. More specifically, there are rarely taught aspects of CR that our participants believe should be given priority, such as self-reflection on clinical reasoning performance and strategies for future improvement, and aspects of patient participation in CR, which have not been previously reported. Case-based learning and clinical teaching methods were again identified as the most important formats for teaching CR, while lectures were considered relevant only for certain aspects of CR. To assess CR, students should have to explain their reasoning, and assessment formats should be chosen accordingly. There is also a clear need for a CR train-the-trainer course. In addition to existing research, our results show that such a course should ideally have a blended-learning format and should not be too time-consuming. The most important contents of the train-the-trainer course were confirmed to be teaching methods, CR strategies, and strategies to avoid errors in the CR process. Examples exist of what a longitudinal CR curriculum for students and a corresponding train-the-trainer course could look like and how these components could be integrated into existing curricula (e.g., the DID-ACT curriculum [20], https://did-act.eu/integration-guide/ , or the curriculum described by Singh et al. [15]). Further research should focus on whether and to what extent the intended outcomes of such a curriculum are actually reached, including the potential impact on patient care.

Availability of data and materials

All materials described in this manuscript that were generated during the current study are available from the corresponding author on reasonable request without breaching participant confidentiality.

References

1. Connor DM, Durning SJ, Rencic JJ. Clinical reasoning as a core competency. Acad Med. 2020;95:1166–71.

2. Young M, Szulewski A, Anderson R, Gomez-Garibello C, Thoma B, Monteiro S. Clinical reasoning in CanMEDS 2025. Can Med Educ J. 2023;14:58–62.

3. Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, Gruppen LD. Clinical reasoning assessment methods: a scoping review and practical guidance. Acad Med. 2019;94:902–12.

4. Scott IA. Errors in clinical reasoning: causes and remedial strategies. BMJ. 2009. https://doi.org/10.1136/bmj.b1860

5. Huesmann L, Sudacka M, Durning SJ, Georg C, Huwendiek S, Kononowicz AA, Schlegel C, Hege I. Clinical reasoning: what do nurses, physicians, and students reason about. J Interprof Care. 2023;37:990–8.

6. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44:94–100.

7. Berner E, Graber M. Overconfidence as a cause of diagnostic error in medicine. Am J Med. 2008;121(5 Suppl):S2–S23.

8. Cooper N, Da Silva AL, Powell S. Teaching clinical reasoning. In: Cooper N, Frain J, editors. ABC of clinical reasoning. 1st ed. Hoboken, NJ: John Wiley & Sons; 2016. p. 44–50.

9. Elvén M, Welin E, Wiegleb Edström D, Petreski T, Szopa M, Durning SJ, Edelbring S. Clinical reasoning curricula in health professions education: a scoping review. J Med Educ Curric Dev. 2023. https://doi.org/10.1177/23821205231209093

10. Kononowicz AA, Hege I, Edelbring S, Sobocan M, Huwendiek S, Durning SJ. The need for longitudinal clinical reasoning teaching and assessment: results of an international survey. Med Teach. 2020;42:457–62.

11. Rencic J, Trowbridge RL, Fagan M, Szauter K, Durning SJ. Clinical reasoning education at US medical schools: results from a national survey of internal medicine clerkship directors. J Gen Intern Med. 2017;32:1242–6.

12. Gupta S, Jackson JM, Appel JL, Ovitsh RK, Oza SK, Pinto-Powell R, Chow CJ, Roussel D. Perspectives on the current state of pre-clerkship clinical reasoning instruction in United States medical schools: a survey of clinical skills course directors. Diagnosis. 2021;9:59–68.

13. Gold JG, Knight CL, Christner JG, Mooney CE, Manthey DE, Lang VJ. Clinical reasoning education in the clerkship years: a cross-disciplinary national needs assessment. PLoS One. 2022;17:e0273250.

14. Cooper N, Bartlett M, Gay S, Hammond A, Lillicrap M, Matthan J, Singh M; UK Clinical Reasoning in Medical Education (CReME) consensus statement group. Consensus statement on the content of clinical reasoning curricula in undergraduate medical education. Med Teach. 2021;43:152–9.

15. Singh M, Collins L, Farrington R, Jones M, Thampy H, Watson P, Grundy J. From principles to practice: embedding clinical reasoning as a longitudinal curriculum theme in a medical school programme. Diagnosis. 2021;9:184–94.

16. Sudacka M, Adler M, Durning SJ, Edelbring S, Frankowska A, Hartmann D, Hege I, Huwendiek S, Sobočan M, Thiessen N, Wagner FL, Kononowicz AA. Why is it so difficult to implement a longitudinal clinical reasoning curriculum? A multicenter interview study on the barriers perceived by European health professions educators. BMC Med Educ. 2021. https://doi.org/10.1186/s12909-021-02960-w

17. Hingley A, Kavaliova A, Montgomery J, O'Barr G. Mixed methods designs. In: Creswell JW, editor. Educational research: planning, conducting, and evaluating quantitative and qualitative research. 4th ed. Boston: Pearson; 2012. p. 534–75.

18. Merriam SB. Qualitative research and case study applications in education. Revised and expanded from "Case study research in education". San Francisco, CA: Jossey-Bass; 1998.

19. Cleland J, MacLeod A, Ellaway RH. The curious case of case study research. Med Educ. 2021;55:1131–41.

20. Hege I, Adler M, Donath D, Durning SJ, Edelbring S, Elvén M, Wiegleb Edström D. Developing a European longitudinal and interprofessional curriculum for clinical reasoning. Diagnosis. 2023;10:218–24.

21. Collins D. Pretesting survey instruments: an overview of cognitive methods. Qual Life Res. 2003;12:229–38.

22. Liu M, Wronski L. Examining completion rates in web surveys via over 25,000 real-world surveys. Soc Sci Comput Rev. 2018;36:116–24.

23. Mayring P, Fenzl T. Qualitative Inhaltsanalyse. In: Baur N, Blasius J, editors. Handbuch Methoden der empirischen Sozialforschung. Wiesbaden: Springer VS; 2019. p. 633–48.

24. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89:1245–51.

25. Edmondson AC. Learning from failure in health care: frequent opportunities, pervasive barriers. BMJ Qual Saf. 2004;13 Suppl 2:ii3–ii9.

26. Merkebu J, Battistone M, McMains K, McOwen K, Witkop C, Konopasky A, Durning SJ. Situativity: a family of social cognitive theories for understanding clinical reasoning and diagnostic error. Diagnosis. 2020;7:169–76.

27. Ogdie AR, Reilly JB, Pang WG, Keddem S, Barg FK, Von Feldt JM, Myers JS. Seen through their eyes: residents' reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med. 2012;87:1361–7.

28. Berman NB, Durning SJ, Fischer MR, Huwendiek S, Triola MM. The role for virtual patients in the future of medical education. Acad Med. 2016;91:1217–22.

29. Plackett R, Kassianos AP, Mylan S, Kambouri M, Raine R, Sheringham J. The effectiveness of using virtual patient educational tools to improve medical students' clinical reasoning skills: a systematic review. BMC Med Educ. 2022. https://doi.org/10.1186/s12909-022-03410-x

Download references

Acknowledgements

We want to thank all participants of the interviews and survey who took their time to contribute to this study despite the ongoing pandemic in 2020. Furthermore, we thank the members of the DID-ACT project team who supported collection and analysis of survey and interview data.

The views expressed herein are those of the authors and not necessarily those of the Department of Defense, the Uniformed Services University or other Federal Agencies.

Funding

This study was partially supported by the Erasmus+ Knowledge Alliance DID-ACT (612454-EPP-1-2019-1-DE-EPPKA2-KA).

Author information

Authors and Affiliations

Institute for Medical Education, Department for Assessment and Evaluation, University of Bern, Bern, Switzerland

F. L. Wagner & S. Huwendiek

Center of Innovative Medical Education, Department of Medical Education, Jagiellonian University, Kraków, Poland

Faculty of Medicine, Department of Bioinformatics and Telemedicine, Jagiellonian University, Kraków, Poland

A. A. Kononowicz

School of Health, Care and Social Welfare, Mälardalen University, Västerås, Sweden

Faculty of Medicine and Health, School of Health Sciences, Örebro University, Örebro, Sweden

Uniformed Services University of the Health Sciences, Bethesda, MD, USA

S. J. Durning

Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany

Contributions

FW and SH wrote the first draft of the manuscript. All authors critically revised the manuscript in several rounds and approved the final manuscript.

Corresponding author

Correspondence to F. L. Wagner.

Ethics declarations

This type of study was regarded as exempt from formal ethical approval according to the regulations of the Bern Ethics Committee (‘Kantonale Ethikkommission Bern’, decision Req-2020–00074). All participants voluntarily participated and provided informed consent before taking part in this study.

Consent for publication

All authors consent to publication of this manuscript.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Wagner, F., Sudacka, M., Kononowicz, A. et al. Current status and ongoing needs for the teaching and assessment of clinical reasoning – an international mixed-methods study from the students' and teachers' perspective. BMC Med Educ 24, 622 (2024). https://doi.org/10.1186/s12909-024-05518-8

Received: 16 January 2024

Accepted: 06 May 2024

Published: 05 June 2024

DOI: https://doi.org/10.1186/s12909-024-05518-8

Keywords

  • Clinical reasoning


  • Open access
  • Published: 29 May 2024

The implementation of person-centred plans in the community-care sector: a qualitative study of organizations in Ontario, Canada

  • Samina Idrees 1 ,
  • Gillian Young 1 ,
  • Brian Dunne 2 ,
  • Donnie Antony 2 ,
  • Leslie Meredith 1 &
  • Maria Mathews 1  

BMC Health Services Research volume 24, Article number: 680 (2024)

Background

Person-centred planning refers to a model of care in which programs and services are developed in collaboration with persons receiving care (i.e., persons-supported) and tailored to their unique needs and goals. In recent decades, governments around the world have enacted policies requiring community-care agencies to adopt an individualized or person-centred approach to service delivery. Although regional mandates provide a framework for directing care, it is unclear how this guidance is implemented in practice given the diversity and range of organizations within the sector. This study aims to address a gap in the literature by describing how person-centred care plans are implemented in community-care organizations.

Methods

We conducted semi-structured interviews with administrators from community-care organizations in Ontario, Canada. We asked participants about their organization's approach to developing and updating person-centred care plans, including relevant supports and barriers. We analyzed the data thematically using a pragmatic, qualitative, descriptive approach.

Results

We interviewed administrators from 12 community-care organizations. We identified three overarching categories or processes related to organizational characteristics and person-centred planning: (1) organizational context, (2) organizational culture, and (3) the design and delivery of person-centred care plans. The context of care and the types of services offered by the organization were directly informed by the needs and characteristics of the population served. The culture of the organization (e.g., its values, attitudes, and beliefs surrounding persons-supported) was a key influence in the development and implementation of person-centred care plans. Participants described the person-centred planning process as iterative and collaborative, involving initial and continued consultations with persons-supported and their close family and friends. They also cited implementation challenges in cases where persons-supported had difficulty communicating or preferred not to have a formal plan in place.

Conclusions

The person-centred planning process is largely informed by organizational context and culture. There are ongoing challenges in the implementation of person-centred care plans, highlighting a gap between policy and practice and suggesting a need for comprehensive guidance and enhanced adaptability in current regulations. Policymakers, administrators, and service providers can leverage these insights to refine policies, advocating for inclusive, flexible approaches that better align with diverse community needs.

Background

The community-care sector facilitates the coordination and administration of in-home and community-based health and social services. Community-care services include supports for independent living, residential services, complex medical care, and community-participation services to support personal and professional goals (e.g., education, employment, and recreation-based supports) [1]. There is substantial heterogeneity in the clinical and demographic characteristics of the community-care population, including individuals with physical and developmental disabilities, and complex medical needs [2]. We refer to the individuals served by these organizations as 'persons-supported' in line with person-first language conventions [3, 4].

In recent decades, governments across the world have enacted policies requiring community-care agencies to adopt an individualized or person-centred approach to service delivery [ 5 , 6 , 7 , 8 ]. Person-centred care encompasses a broad framework designed to direct care delivery, as opposed to a singular standardized process. In the context of community-care, person-centred planning refers to a model of care provision in which programs and services are developed in collaboration with persons-supported and tailored to their unique needs and desired outcomes [ 9 , 10 ].

In Ontario, Canada, community-care services are funded by the Ministry of Health (MOH) and the Ministry of Children, Community and Social Services (MCCSS). Service agreements between these ministries and individual agencies can be complex and contingent on different factors including compliance with a number of regulatory items and policies [ 7 , 11 ]. MOH provides funding for health-based services including in-home physiotherapy, respiratory therapy, and personal support services, among several others. MOH funds Home and Community Care Support Services (HCCSS), a network of organizations responsible for coordinating the delivery of in-home and community-based care in the province. MCCSS funds social service agencies including those providing community participation and residential support for people with intellectual and developmental disabilities (IDDs).

Several tools and resources have been developed to aid organizations in providing person-centred care, and organizations may differ in their use of these tools and in their specific approach. Although regional mandates provide a framework for directing care delivery, it is unclear how this guidance is implemented in practice given the diversity and range of organizations within the sector. In addition, as noted by a recent scoping review, there is limited literature on the implementation process and impact of person-centred planning on individual outcomes [12]. Using a pragmatic, qualitative, descriptive approach [13], we outline how community-care organizations enact a person-centred approach to care and the factors that shape their enactment. By describing existing practices in the context of the community-care sector, we aim to provide insight on how to optimize care delivery to improve outcomes and inform current policy. This study is part of a larger, multi-methods project examining the implementation of person-centred care plans in the community-care sector. The project encompasses qualitative interviews with representatives from different community-care organizations, as well as staff and persons-supported at a partner community-care organization. This paper focuses on the interviews with representatives from different community-care organizations.

Methods

We conducted semi-structured interviews with administrators from community-care organizations in Southwestern Ontario (roughly the Ontario Health West Region) between October 2022 and January 2023. We included community-care organizations funded by MOH or MCCSS. We excluded organizations that did not provide services in Southwestern Ontario. We identified eligible organizations and participants by searching online databases, including community resource lists, as well as through consultation with members of the research team.

We used maximum variation sampling [14] to recruit participants from organizations with a wide range of characteristics, including location (i.e., urban, rural), organization type (i.e., for-profit, not-for-profit), and types of services provided (e.g., residential, recreation, transportation). We contacted eligible organizations via email, providing them with study information and inviting them to participate. We recruited until the data reached saturation, defined as the point at which there was sufficient data to enable rigorous analysis [14, 15].

In each interview, we asked participants about their organization's approach to developing and updating individual service agreements or person-centred care plans, and the supports and barriers (e.g., organizational, funding, staffing) that facilitate or hinder the implementation of these plans (Supplementary Material 1: Interview Guide). We also collected information on relevant participant and organizational characteristics, including participant gender, position, years of experience, organization location, type (i.e., for-profit, not-for-profit), services offered, years in operation, and client load. The interviews were approximately one hour in length and were conducted virtually via Zoom (Zoom Video Communications Inc.) or by telephone. The interviews were audio-recorded and transcribed verbatim. Interviewer field notes were also used in data analysis.

We analyzed the data thematically [16]. The coding process followed a collaborative, multi-step approach. Initially, three members of the research team independently reviewed and coded a selection of transcripts to identify key ideas and patterns in the data and to form a preliminary coding template. We then met to consolidate individual coding efforts, comparing the coding of each transcript and resolving conflicts through discussion and consensus. In coding subsequent transcripts and through a series of meetings, we worked together to finalize the codebook to reflect more analytic codes. We used the finalized template to code all interview transcripts in NVivo (QSR International), software designed to facilitate qualitative data analysis. We refined the codebook on an as-needed basis by incorporating novel insights gleaned from the coding of additional transcripts, reflecting the iterative nature of the analysis.

We increased the robustness of our methodology by pre-testing interview questions, documenting interview and transcription protocols, using experienced interviewers, and confirming meaning with participants in interviews [ 14 , 15 , 16 ]. We kept detailed records of interviews, field notes, and drafts of the coding template. We made efforts to identify negative cases and provided rich descriptions and illustrative quotes [ 17 ]. We included individuals directly involved in the administration of community-care services on our research team. These individuals provided important context and feedback at each stage of the research process.

This study was approved by the research ethics board at Western University. We obtained informed consent from participants prior to the onset of interviews. We maintained confidentiality through secure storage of interview data (e.g., audio recordings), password-protection of sensitive documents, and the de-identification of transcripts.

Positionality

The authors represent a multidisciplinary team of researchers, clinicians, and community-care leaders. The community-care leaders and clinicians on our team provided key practical expertise to inform the development of interview questions and the analysis of study findings.

Results

We interviewed administrators across 12 community-care organizations in Southwestern Ontario. The sample included representatives from seven organizations that received funding from MCCSS, three that received funding from MOH, and two that received funding from both MCCSS and MOH (Table 1). Eleven organizations were not-for-profit; one was a for-profit agency. The organizations served rural (n = 3), urban (n = 4), or both rural and urban (n = 5) populations. Seven of the 12 participants were women, nine had been working with their organization for more than 11 years, and all had been working in the community-care sector for more than 12 years (Table 2).

We identified three key categories or processes relating to organizational characteristics and their impact on the design and delivery of person-centred care plans: (1) organizational context, (2) organizational culture, and (3) the development and implementation of person-centred care plans.

Organizational context

Organizational context refers to the characteristics of persons-supported and the nature of services provided. It accounts for the considerable heterogeneity across organizations in the community-care sector and in their approaches to person-centred care plans.

Populations served

The majority of organizations included in the study supported individuals with IDDs: “all of the people have been identified as having a developmental disability. That’s part of the eligibility criteria for any funded developmental service in Ontario.” [P10]. Participants described how eligibility was ascertained through the referral process: “ the DSO [Developmental Services Ontario] figures all of that out and then refers them to us .” [P08]. These descriptions highlighted a common access point for publicly-funded adult developmental services in the province. Accordingly, these organizations were primarily funded by MCCSS. Other organizations focused on medically complex individuals including those with acquired brain injuries or those unable to access out-patient services due to physical disabilities: “the typical reason for referral is going to be around a physical impairment… But, with this medically complex population, you’re often seeing comorbidities where there may be some cognitive impairment, early dementia.” [P04]. In these organizations, eligibility and referral were usually coordinated by HCCSS. These insights highlighted the diverse characteristics of community-care populations, emphasizing the need to consider both physical and cognitive health challenges in care provision approaches.

Services offered

The characteristics of persons-supported informed the context of care and the types of services offered by the organization. The dimensions of service delivery in this sector include social and medical care, short- and long-term care, in-home and community-based care, and full- and part-time support.

Nature of care: social vs. medical

Many organizations serving individuals with IDDs employed a holistic, psychosocial model of care, designed to support all areas of an individual’s life including supports for independent-living, and community-based education, employment, and recreation services to support personal and professional goals: “we support people in their homes, so residential supports. We also support people in the community, to be a part of the community, participate in the community and also to work in the community.” [P06]. These descriptions reflect a comprehensive approach to care, aiming to address needs within and beyond residential settings to promote active participation within the broader community. In contrast, some organizations followed a biomedical model of care, designed to support specific health needs: “We provide all five therapies… physiotherapy, occupational therapy, speech, social work, and nutrition. In some locations we provide visiting nursing, at some locations shift nursing. We have some clinic-nursing… and we provide personal support and home-making services in a number of locations as well.” [P04]. These organizations adopted a more clinically-focused approach to care. In either instance, the care model and the nature of services offered were largely determined by an organization’s mandate including which gaps they aimed to fill within the community. Many organizations described providing a mixture of social and medical care for individuals with complex needs. However, the implementation of care plans could be impacted by the lack of integration between social and medical care sectors, as some participants spoke to the importance of “[integrating] all of the different healthcare sector services… [including] acute care and public health and home and community care and primary care, and mental health and addictions.” [P04].

Duration of care: short-term vs. long-term

The duration of care also varied based on the needs of persons-supported. Organizations serving individuals with IDDs usually offered support across the lifespan: “We support adults with developmental disabilities and we support them from 18 [years] up until the end of their life.” [P06]. Some organizations provided temporary supports aimed at addressing specific health needs: “For therapies – these are all short-term interventions and typically they’re very specific and focused on certain goals. And so, you may get a referral for physiotherapy that is authorized for three visits or five visits” [P04], or crisis situations (e.g., homelessness): “Our services are then brought in to help provide some level of support, guidance, stabilization resource, and once essentially sustainability and positive outcomes are achieved—then our services are immediately withdrawn.” [P12]. One organization employed a model of care with two service streams, an initial rehabilitation stream that was intended to be short-term and an ongoing service stream for individuals requiring continuing support.

In-home vs. community-based care

Many organizations provided in-home care and community-based supports, where residential supports were designed to help individuals lead independent lives, and community-based supports encouraged participation in community activities to further inclusion and address personal and professional goals. One participant spoke about the range of services offered in the home and community:

“There’s probably two big categories of [services we offer]: community support services—so that includes things like adult day programs, assisted living, meals on wheels, transportation, friendly visiting … and things like blood pressure clinics, exercise programs… and then on the other side we do home care services. In the home care basket, we provide personal support, and we also provide social work support.” [P05].

Likewise, another participant spoke in further detail on the types of services that allow individuals to live independently within their homes, or in community-based residential settings (e.g., long-term care facilities):

“We provide accommodation supports to about 100 people living in our community—which means that we will provide support to them in their own homes. So, anywhere from an hour a week to 24 hours a day. And that service can include things from personal care to home management to money management, cooking, cleaning, and being out and about in communities—so community participation. We also provide supports for about 50 people living in long-term care facilities and that is all community participation support. So, minus the last 2 and a half years because of the pandemic, what that means is that a person living in a long-term care facility with a developmental disability can have our support to get out and about for 2 or 3 hours a week, on average.” [P10].

Full-time vs. part-time support

The person-supported's needs also determined whether they would receive care within their home and whether they would be supported on a full-time (i.e., 24 hours a day, 7 days a week) or part-time basis:

“ It really does range from that intensive 24- hour/7 day a week support, which we actually do provide that level of intense support in the family home, if that’s needed. And then, all the way through to just occasional advocacy support and phone check-in.” [P01].

Organizational culture

Organizational culture was described as a key influence in the development and implementation of person-centred care plans. The culture of an organization includes its perceptions, attitudes, and beliefs surrounding persons-supported; its model of care provision; and its willingness to evolve and adapt service provision to optimize care delivery.

Perceptions, attitudes, and beliefs regarding persons-supported

Participants described their organization’s view of persons-supported, with many organizations adopting an inclusionary framework where persons-supported were afforded the same rights and dignities as others in the community. This organizational philosophy was described as being deeply intertwined with an organization’s approach to personalizing programs and services:

“…an organization needs to be able to listen to the people who are receiving the service… and support them, to learn more, figure out, articulate, whatever it is, the service or the supports that they need in order to get and move forward with their life.” [P10].

The focus on the person-supported, their needs, likes, and dislikes, was echoed across organizations, with an emphasis on the impact of “culture and trying to embed for each person who delivers service the importance of understanding the individual.” [P05]. Participants also described their organization’s approach to allowing persons-supported to take risks, make mistakes, and live life on their own terms:

“You have to go and venture out and take some [risks]… We try to exercise that philosophy - people with disabilities should have the same rights and responsibilities as other people in the community. Whether that’s birthing or education, getting a job, having a house they can be proud of, accessing community supports, whether that be [a] library or community centre, or service club, whatever that is.” [P03].

Model of care provision

The model of care provision was heavily influenced by the organization’s values and philosophy. Several organizations employed a flexible model of care where supports were developed around the needs, preferences, and desired outcomes of the person-supported:

“…if we don’t offer [the program they want], we certainly build it. Honestly, most of our programs were either created or built by someone coming to us [and] saying ‘I want to do this with my life,’ or …‘my son would like to do art.’” [P02].

Although there were similarities in models across the different organizations, one participant noted that flexibility can be limited in the congregate care setting as staff must tend to the needs of a group as opposed to an individual:

“Our typical plan of operation outside of the congregate setting is we design services around the needs of the person. We don’t ask them to fit into what we need, we build services for what they need. Within the congregate care setting, we have a specific set of rules and regulations for safety and well-being of the other people that are here.” [P11].

Evolving service orientation

In organizations serving individuals with IDDs, many participants described shifting from program-based services to more individualized and community-based supports: "The goal was always to get people involved in their community and build in some of those natural supports … [we] are looking to support people in their own communities based on their individual plans." [P07]. One participant described this model as person-directed rather than person-centred, citing the limitations of program-based services in meeting individual needs:

“[Persons-supported] couldn’t [do] what they wanted because they were part of a bigger group. We would listen to the bigger group, but if one person didn’t want to go bowling … we couldn’t support them because everybody had to go bowling.” [P06].

The focus on individualized support could potentially lead to increased inclusion for persons-supported in their communities:

“… people go to Tim Horton’s, and if they go every day at 9 they probably, eventually will meet other people that go at 9 o’clock and maybe strike up a conversation and get to know somebody and join a table … and meet people in the community.” [P02].

By creating routines centred on individual preferences, the person-supported becomes a part of a community with shared interests and values.

Person-centred care plans

Community-care organizations enacted a person-centred approach by creating person-centred care plans for each person-supported. Although all participants said their organization provided person-centred services, there was considerable variation in the specific processes for developing, implementing, and updating care plans.

Developing a person-centred care plan

The development of a care plan includes assessment, consultation, and prioritization. The initial development of the care plan usually involved an assessment of an individual's needs and goals. Participants described agency-specific assessment processes that often incorporated information from service referrals: "In addition to the material we get from the DSO [Developmental Services Ontario] we facilitate the delivery of an intake package specifically for our services. And that intake package helps to further understand the nature and needs of an individual." [P12]. Agency-specific assessment processes differed by the nature of services provided and the characteristics of the population. However, most organizations included assessments of "not only physical functioning capabilities, but also cognitive." [P01]. Assessment also included an appraisal of the suitability of the organization's services. In instances where persons-supported were seeking residential placements or independent-living support, organizations assessed their ability to carry out the activities of daily living:

“[Our internal assessment] is an overview of all areas of their life. From, ‘do they need assistance with baking, cooking, groceries, cleaning, laundry? Is there going to be day program opportunities included in that residential request for placement? What the medical needs are?’” [P02].

In contrast, the person-supported’s community-based activities were primarily informed by their interests and desired outcomes: “We talk about what kinds of goals they want to work on. What kind of outcomes we’re looking for…” [P06].

The development of the care plan also included a consultation phase, involving conversations with the person-supported, their family members, and potentially external care providers: “We would use the application information, we’d use the supports intensity scale, but we’d also spend time with the person and their connections, their family and friends, in their home to figure out what are the kinds of things that this person needs assistance with.” [P10]. Participants described the person-supported’s view as taking precedence in these meetings: “We definitely include the family or [alternate] decision-maker in that plan, but the person-supported ultimately has the final stamp of approval.” [P08]. Many participants also acknowledged the difficulty of identifying and incorporating the person-supported’s view in cases where opinions clash and the person-supported has difficulty communicating and/or is non-verbal: “Some of the people we support are very good at expressing what they want. Some people are not. Some of our staff are really strong in expressing what they support. …And some of the family members are very strong. So you have to be very careful that the [person-supported] is not being lost in the middle of it.” [P06].

Participants also noted that some persons-supported preferred not to have a care plan:

“Some of the people say ‘I hate [the plans] I don’t want to do them’…. we look at it in a different way then. We’ll use graphic art, we’ll use video, we’ll think outside the box to get them to somehow—because at the end of the day when we’re audited by MCCSS every [person-supported] either has to have [a plan]… or there has to be [an approval of] why it wasn’t completed.” [P02].

Plan development may also include a prioritization process, particularly in cases where resources are limited. A person-supported's goals could be prioritized using different schemas. One participant noted that "the support coordinator takes the cue from the person-supported - … what they've identified as 'have to have' and 'nice to have'. … because the 'have to haves' are prioritized." [P09]. Likewise, the person-supported's preference could also be identified through "[an] exercise, called 'what's important for and what's important to.'" [P06]. This model, based on a Helen Sanderson approach [18], was described as being helpful in highlighting what is important to the person-supported, as opposed to what others (i.e., friends, family, staff, etc.) feel is important for them.

Several organizations updated care plans throughout the year, to document progress towards goals, adapt to changing needs and plan for future goals: “We revisit the plan periodically through the year. And if they say the goal is done, we may set another goal.” [P06]. Organizations may also change plans to adapt to the person-supported’s changing health status or personal capacity.

Implementing a person-centred care plan

The implementation of care plans differed based on the nature of services provided by the organization. The delivery of health-based or personal support services often involved matching the length and intensity of care with the individual’s needs and capacity:

“Sometimes that is a long time, sometimes it’s a short time, sometimes it’s an intervention that’s needed for a bit, and then the person is able to function.” [P05].

In contrast, the delivery of community-based services involved matching activities and staff by interests: “[if] a person-supported wants to go out and be involved in the music community, then we pull the staff pool in and match them up according to interest.” [P06].

Broad personal goals were broken down into smaller, specific activities. For example, one participant described their organization’s plan in helping a person-supported achieve his professional goal of securing employment:

“[The person-supported] said ‘Okay, I want a job.’ So for three weeks he was matched up with a facilitator. They came up with an action plan in terms of how to get a job, what kind of job he’s looking for, where he wants to go, where he wants to apply, how to conduct an interview. And after three weeks he got a job.” [P09].

Organizations that provided residential services focused on developing independent-living skills. One participant described their organization's approach to empowering persons-supported by allowing them to make their own financial decisions:

“If one month they’re looking after their own finances, and they’ve overspent. Well, maybe we help them out with a grocery card or something and say ‘okay, next month how are you going to do this?’ [The person-supported may say], ‘well, maybe I’ll put so much money aside each week rather than doing a big grocery shop the first week and not having enough money left at the end of the month.’” [P03].

The participant noted that “a tremendous amount of learning [happens] when a person is allowed to [take] risks and make their own decisions.” [P03].

Likewise, participants representing organizations that provided residential services described tailoring care to the person-supported's sleeping schedule and daily routine:

“We develop a plan and tweak it as we go. With [the person-supported] coming to the home, what worked well was, we found that he wanted to sleep in, so we adjusted the [staff] time. We took a look at his [medication] times in the morning… and [changed] his [medication] times. We found that he wanted to sleep [until] later in the day, so he would get up at 10 o’clock, so then instead of having breakfast, lunch, and supper he would just have a bigger brunch. Just really tailoring the plan around the person-supported, and it’s worked out well.” [P08].

These examples highlight how organizational context and culture influence how organizations operationalize person-centred care plans; the same individual may experience different approaches to care and engage in different activities depending on the organization they receive services from.

Discussion

In this paper, we described key elements of the person-centred planning process across different community-care organizations in Southwestern Ontario. We also identified that the context and culture of an organization play a central role in informing the process by which services are personalized to an individual's needs. These findings shed light on the diversity of factors that influence the implementation of person-centred care plans and the degree to which organizations are able to address medical and social needs in an integrated fashion. They also inform future evaluations of person- and system-related outcomes of person-centred planning.

There are regulations around individualizing services delivered by community-care organizations, whereby care providers must allow persons-supported to participate in the development and evaluation of their care plans. HCCSS or MOH-funded services are largely focused on in-home rehabilitation or medical care. In contrast, MCCSS-funded organizations often focus on developing independent living skills or promoting community participation, thus highlighting the role of the funding agency in determining organizational context as well as the nature of services and personalization of care plans.

We also identified organizational culture as a key influence in the person-centred planning process. Previous reports indicate that organizational culture, specifically the way in which staff perceive persons-supported and their decision-making capabilities, can impact the effective delivery of person-centred care [19]. Staff support, including commitment to persons-supported and to the person-centred process, has been regarded as one of the most powerful predictors of positive outcomes and goal attainment in the developmental services sector [20, 21]. Moreover, to be successful, commitment to this process should extend across all levels of the organization, be fully integrated into organizational service delivery, and be reflected in organizational philosophy, values, and views of persons-supported [22, 23, 24].

MCCSS mandates that agencies serving individuals with IDDs develop an individual service plan (ISP) for each person-supported, one "that address[es] the person's goals, preferences and needs." [7]. We refer to ISPs as person-centred care plans, in line with how participants described them in interviews. A series of checklists is designed to measure compliance with these policies, and the process is iterative, with mandated annual reviews of care plans and active participation by the person-supported [25]. In our study, the agencies funded by MCCSS adhered to the general framework outlined by these regulations and informed service delivery accordingly. However, participants also described areas for improvement with respect to the implementation of these policies in practice. These policies, while well-intentioned, may imply a one-size-fits-all approach and can read more as an administrative exercise than as a meaningful endeavor designed to optimize care. Participants spoke about individuals who preferred not to have an ISP, and how respecting that wish is itself a person-centred approach. Additionally, we heard that the goal-setting process may not always be realistic, as it can feel unnatural to have defined goals at every point in one's life. Moreover, participants noted challenges in implementing person-centred care in shared residential settings (e.g., group homes) or in cases where persons-supported had difficulty communicating.

Prior research indicates that individuals living in semi-independent settings fare better across several quality-of-life measures relative to individuals living in group homes, including decreased social dissatisfaction, increased community participation, increased participation in activities of daily living, and increased empowerment [ 26 ]. Furthermore, a recent study by İsvan et al. (2023) found that individuals living in the community (e.g., own home, family home, or foster home) exhibit greater autonomy in making everyday and life decisions, and greater satisfaction with their inclusion in the community [ 27 ]. These findings may be indicative of a reduced focus on person-centred care plan development and implementation in congregate care settings, where limited staff capacity can make it difficult to tend to the needs of everyone in the home. However, poor outcomes may also be explained by potentially more complex health challenges or more severe disability in persons-supported living in congregate care settings. The challenges described in our study are consistent with calls to improve the quality of care provided in residential group home settings [ 28 , 29 ].

In line with our findings, previous literature also describes challenges in implementing person-centred planning for individuals who have difficulty communicating or are non-verbal [19, 30, 31, 32]. Communication has also been identified as a barrier to patient-centred care for adults with IDDs in healthcare settings [33, 34]. Other reports have identified a need for increased training and awareness of diverse communication styles (including careful observation of non-verbal cues) to aid staff in including persons-supported in the development of care plans [35, 36, 37]. Importantly, these methods take substantial time, which is often limited, a challenge compounded by staffing shortages that are widespread across the sector [38]. Similar barriers were identified in interviews with staff and persons-supported at a partner community-care agency within our larger project [39]; other papers from the project examine strategies used by the organization to overcome these barriers.

Limitations

The findings from this study should be interpreted in the context of the following limitations. There is a risk of social desirability bias, whereby participants may feel pressure to present their care plan process in a more positive light due to societal norms and expectations [40]. Additionally, the experiences and views of community-care organizations may vary by region and organization type (i.e., for-profit vs. not-for-profit). In this study, we limited participation to agencies providing services in Southwestern Ontario, and we were only able to interview one for-profit agency despite concerted recruitment efforts. Consequently, we may not have fully captured how financial pressures, or different contextual and cultural components of an organization, impact the implementation of care plans.

Conclusions

The person-centred planning process in community-care organizations is largely informed by the characteristics of the population served and the nature of services offered (i.e., organizational context). This process usually involves initial and continued consultations with persons-supported to tailor plans to their specific needs and desired outcomes. There are ongoing challenges in the implementation of person-centred planning, including a need for increased adaptability and clarity in current regulations. In some areas, there may be benefit to incorporating nuance in the application of policies (e.g., in cases where a person-supported does not want to have a formal plan in place). In other areas, it may be helpful to have increased guidance on how to optimize care delivery to improve outcomes (e.g., in cases where a person-supported has difficulty communicating, or is residing in a group home). Policymakers, administrators, and service providers can leverage these insights to refine policies, advocating for inclusive, flexible approaches that better align with diverse community needs.

Data availability

The datasets generated and analyzed in the current study are not publicly available in order to maintain participant confidentiality; however, access may be granted by the corresponding author upon reasonable request.

Abbreviations

ABI: Acquired Brain Injury

DSO: Developmental Services Ontario

HCCSS: Home and Community Care Support Services

IDD: Intellectual and Developmental Disabilities

ISP: Individual Service Plan

MCCSS: Ministry of Children, Community and Social Services

MOH: Ministry of Health

Purbhoo D, Wojtak A. Patient and family-centred home and community care: realizing the opportunity. Nurs Leadersh Tor Ont. 2018;31(2):40–51.

Lin E, Balogh RS, Durbin A, Holder L, Gupta N, Volpe T et al. Addressing gaps in the health care services used by adults with developmental disabilities in Ontario. ICES. 2019 [cited 2023 Aug 30]. https://www.ices.on.ca/publications/research-reports/addressing-gaps-in-the-health-care-services-used-by-adults-with-developmental-disabilities-in-ontario/

American Psychological Association. APA Guidelines for Assessment and Intervention with Persons with Disabilities: (502822022-001). 2022 [cited 2023 Aug 30]; http://doi.apa.org/get-pe-doi.cfm?doi=10.1037/e502822022-001

Dunn DS, Andrews EE. Person-first and identity-first language: developing psychologists’ cultural competence using disability language. Am Psychol. 2015;70(3):255–64.

Burke C. Building a stronger system for people with developmental disabilities: a six-month progress report from Commissioner Courtney Burke. New York Office for People with Developmental Disabilities; 2011. https://opwdd.ny.gov/system/files/documents/2019/12/6_month_progress_report_0.pdf

Government of Manitoba. Agency service coordination manual: 5.1 person-centred planning. 2021. https://www.gov.mb.ca/fs/clds/asc-manual/pubs/5.1-person-centred-planning.pdf

Government of Ontario. Services and Supports to Promote the Social Inclusion of Persons with Developmental Disabilities Act, 2008. 2008 [cited 2023 Aug 30]. https://www.ontario.ca/laws/regulation/100299

State of Michigan. Person centered planning. 2018.

O’Brien CL, O’Brien J. The origins of person-centered planning: a community of practice perspective.

Sanderson H. Person centred planning. York: Joseph Rowntree Foundation; 2000.

Government of Ontario. Connecting Care Act, 2019: home and community care services. 2019 [cited 2023 Sep 4]. https://www.ontario.ca/laws/regulation/220187#BK17

Dong M. Examining individualized participatory approaches to care for individuals with intellectual and developmental disabilities. University of Western Ontario; 2023. https://ir.lib.uwo.ca/etd/9517

Doyle L, McCabe C, Keogh B, Brady A, McCann M. An overview of the qualitative descriptive design within nursing research. J Res Nurs. 2020;25(5):443–55.

Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches. SAGE; 2014. p. 305.

Berg BL. Qualitative research methods for the social sciences. 2nd ed. Boston, MA: Allyn and Bacon; 1995. p. 421.

Guest G, MacQueen KM, Namey EE. Applied Thematic Analysis. SAGE; 2012. p. 321.

Yin RK. Case study research design and methods. 5th ed. Thousand Oaks, CA: SAGE; 2014. p. 282.

Helen Sanderson Associates. Sorting important to/for. [cited 2023 Oct 31]. http://helensandersonassociates.co.uk/person-centred-practice/person-centred-thinking-tools/sorting-important-tofor/

Hughes CA. The benefits and barriers to person centered planning for adults with developmental disabilities. Saint Paul, MN: St. Catherine University; 2013. https://sophia.stkate.edu/msw_papers/191

Heller T, Miller AB, Hsieh K, Sterns H. Later-life planning: promoting knowledge of options and choice-making. Ment Retard. 2000;38(5):395–406.

Ratti V, Hassiotis A, Crabtree J, Deb S, Gallagher P, Unwin G. The effectiveness of person-centred planning for people with intellectual disabilities: a systematic review. Res Dev Disabil. 2016;57:63–84.

Kaehne A, Beyer S. Person-centred reviews as a mechanism for planning the post-school transition of young people with intellectual disability. J Intellect Disabil Res. 2014;58(7):603–13.

Parley FF. Person-centred outcomes: are outcomes improved where a person-centred care model is used? J Learn Disabil. 2001;5(4):299–308.

Sanderson H, Thompson J, Kilbane J. The emergence of person-centred planning as evidence‐based practice. J Integr Care. 2006;14(2):18–25.

Government of Ontario. Ministry of Children, Community and Social Services. Developmental service (DS) compliance inspection: indicator list. 2021 [cited 2023 Sep 4]. https://www.mcss.gov.on.ca/documents/en/mcss/developmental/EN_DS_Indicator_List.pdf

Stancliffe RJ, Keane S. Outcomes and costs of community living: a matched comparison of group homes and semi-independent living. J Intellect Dev Disabil. 2000;25(4):281–305.

İsvan N, Bonardi A, Hiersteiner D. Effects of person-centred planning and practices on the health and well-being of adults with intellectual and developmental disabilities: a multilevel analysis of linked administrative and survey data. J Intellect Disabil Res. 2023. https://doi.org/10.1111/jir.13015

Office of the Auditor General of Ontario. 3.10: residential services for people with developmental disabilities. 2014. https://www.auditor.on.ca/en/content/annualreports/arreports/en14/310en14.pdf

Office of the Auditor General of Ontario. 1.10 residential services for people with developmental disabilities. 2016. https://www.auditor.on.ca/en/content/annualreports/arreports/en16/v2_110en16.pdf

Robertson J, Emerson E, Hatton C, Elliott J, McIntosh B, Swift P, et al. Person-centred planning: factors associated with successful outcomes for people with intellectual disabilities. J Intellect Disabil Res JIDR. 2007;51(Pt 3):232–43.

Everson JM, Zhang D. Person-centered planning: characteristics, inhibitors, and supports. Educ Train Ment Retard Dev Disabil. 2000;35(1):36–43.

Claes C, Van Hove G, Vandevelde S, van Loon J, Schalock RL. Person-centered planning: analysis of research and effectiveness. Intellect Dev Disabil. 2010;48(6):432–53.

Stringer K, Terry AL, Ryan BL, Pike A. Patient-centred primary care of adults with severe and profound intellectual and developmental disabilities. Can Fam Physician. 2018;64(Suppl 2):S63–9.

Badcock E, Sakellariou D. "Treating him… like a piece of meat": poor communication as a barrier to care for people with learning disabilities. Disabil Stud Q. 2022;42(1). https://dsq-sds.org/index.php/dsq/article/view/7408

Mansell J, Beadle-Brown J. Person-centred planning or person-centred action? Policy and practice in intellectual disability services. J Appl Res Intellect Disabil. 2004;17(1):1–9.

Bigby C, Frawley P. Social work practice and intellectual disability: working to support change. Bloomsbury Publishing; 2018. p. 253.

Taylor JE, Taylor JA. Person-centered planning: evidence-based practice, challenges, and potential for the 21st century. J Soc Work Disabil Rehabil. 2013;12(3):213–35.

Zijlstra R, Vlaskamp C, Buntinx W. Direct-care staff turnover: an indicator of the quality of life of individuals with profound multiple disabilities. Eur J Mental Disabil. 2001;(22):39–55.

Canadian Institutes of Health Research (CIHR), CIHR-Institute of Health Services and Policy Research (CIHR-IHSPR) and partners. Evidence brief booklet: quadruple aim & equity catalyst grants. 2024 [cited 2024 Apr 2]. https://face2face.events/wp-content/uploads/2024/01/Evidence-Brief-Booklet-Quadruple-Aim-and-Equity-EN.pdf

Bergen N, Labonté R. Everything is perfect, and we have no problems: detecting and limiting social desirability bias in qualitative research. Qual Health Res. 2020;30(5):783–92.

Acknowledgements

The authors thank Ruth Armstrong, from PHSS - Medical & Complex Care in Community, for her valuable feedback and support throughout the research process.

Funding

This research was funded by the Canadian Institutes of Health Research. The funding agency had no role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

Department of Family Medicine, Schulich School of Medicine & Dentistry, Western University, 1151 Richmond St, London, ON, N6A 5C1, Canada

Samina Idrees, Gillian Young, Leslie Meredith & Maria Mathews

PHSS - Medical & Complex Care in Community, 620 Colborne St, London, ON, N6B 3R9, Canada

Brian Dunne & Donnie Antony

Contributions

S.I. conducted the interviews, developed the coding template, coded the data, thematically analyzed the data, and prepared the manuscript. G.Y. helped develop the coding template, and reviewed and approved the final manuscript. B.D. and D.A. helped conceptualize the study, aided in the interpretation and analysis of study findings, and reviewed and approved the final manuscript. L.M. coordinated research activities, aided in the interpretation and analysis of study findings, and reviewed and approved the final manuscript. M.M. conceptualized the study, supervised its implementation, and was a major contributor in reviewing and editing the manuscript. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Maria Mathews.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the research ethics board at Western University. We obtained informed consent from participants prior to the onset of interviews.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Idrees, S., Young, G., Dunne, B. et al. The implementation of person-centred plans in the community-care sector: a qualitative study of organizations in Ontario, Canada. BMC Health Serv Res 24, 680 (2024). https://doi.org/10.1186/s12913-024-11089-7

Received: 14 February 2024

Accepted: 08 May 2024

Published: 29 May 2024

DOI: https://doi.org/10.1186/s12913-024-11089-7

Keywords

  • Person-centred planning
  • Community-based care
  • Integrated care
  • Social services
  • Health services
  • Organizational culture
  • Qualitative study


COMMENTS

  1. Types of Interviews in Research

    There are several types of interviews, often differentiated by their level of structure. Structured interviews have predetermined questions asked in a predetermined order. Unstructured interviews are more free-flowing. Semi-structured interviews fall in between. Interviews are commonly used in market research, social science, and ethnographic ...

  2. How to Conduct an Effective Interview; A Guide to Interview Design in

    Abstract. Interviews are one of the most promising ways of collecting qualitative data through the establishment of communication between the researcher and the interviewee. ...

  3. Research Methods Guide: Interview Research


  4. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  5. Chapter 11. Interviewing


  6. "Qualitative Interview Design: A Practical Guide for Novice ...

    Qualitative research design can be complicated depending upon the level of experience a researcher may have with a particular type of methodology. As researchers, many aspire to grow and expand their knowledge and experiences with qualitative design in order to better utilize diversified research paradigms for future investigations. One of the more popular areas of interest in qualitative ...

  7. Qualitative research method-interviewing and observation

    Qualitative research method-interviewing and observation. Buckley and Chiang define research methodology as "a strategy or architectural design by which the researcher maps out an approach to problem-finding or problem-solving." [1] According to Crotty, research methodology is a comprehensive strategy 'that silhouettes our choice and ...

  8. How To Do Qualitative Interviews For Research

    If you need 10 interviews, it is a good idea to plan for 15, since a few will likely cancel, delay, or not produce useful data (see the sample-size sketch after this list). 5. Not keeping your golden thread front of mind. We touched on this a little earlier, but it is a key point that should be central to your entire research process.

  9. Interviews in the social sciences

    Here we address research design considerations and data collection issues focusing on topic guide construction and other pragmatics of the interview.

  10. Interviews

    Interviews are the most commonly used qualitative data-gathering technique and are used with grounded theory, focus groups, and case studies. The length of an interview varies: anywhere from thirty minutes to several hours, depending on your research approach. Structured interviews use a set list of questions which need to ...

  11. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  12. How to Carry Out Great Interviews in Qualitative Research

    A qualitative research interview is a one-to-one data collection session between a researcher and a participant. Interviews may be carried out face-to-face, over the phone or via video call using a service like Skype or Zoom. There are three main types of qualitative research interview: structured, unstructured, or semi-structured.

  13. Email Interviews: A Guide to Research Design and Implementation

    This article adds to the existing body of literature on electronic research methods by zooming in on email interviewing, and outlining a strategy for how email interviews can be used to generate in-depth and rich qualitative data, specifically in explorative studies. The argument of the article is that email interviewing can fruitfully be combined with explorative interviewing, offering the ...

  14. Qualitative Interview Design: A Practical Guide for Novice Investigators

    … qualitative interviews for novice investigators by employing a step-by-step process for implementation. Key Words: Informal Conversational Interview, General Interview Guide, and Open-Ended Interviews. Qualitative research design can be complicated depending upon the level of experience a researcher may have with a particular type of methodology.

  15. Structured Interview

    Structured Interview | Definition, Guide & Examples. Published on January 27, 2022 by Tegan George and Julia Merkus. Revised on June 22, 2023. A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is one of four types of interviews. In research, structured interviews are often quantitative in nature.

  16. Research Design in Interview Studies

    It outlines a generic step-wise approach to design that is informative of what to do, when to do it, and why do this rather than that in the research process. It explains three broad aims for qualitative interviewing: discovery, construction, and understanding. The chapter also discusses some conceptual distinctions between inductive, deductive ...

  17. Research Design

    Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies. Frequently asked questions.

  18. interviews

    Interviews can be defined as a qualitative research technique which involves "conducting intensive individual interviews with a small number of respondents to explore their perspectives on a particular idea, program or situation." There are three different formats of interviews: structured, semi-structured and unstructured. A small data-structure sketch of a semi-structured guide appears after this list.

  19. How to Conduct an Effective Interview; A Guide to Interview Design in

    However, more details can be added to the protocols, including literature reviews and summaries of data-analysis methods. Design the protocol (including abstract and questions), choose the participants, and choose the interviewers. Plan the interview: consider time, consider location, and conduct the interview.

  20. Appendix: Qualitative Interview Design

    One of the more popular areas of interest in qualitative research design is that of the interview protocol. Interviews provide in-depth information pertaining to participants' experiences and viewpoints of a particular topic. Oftentimes, interviews are coupled with other forms of data collection in order to provide the researcher with a well ...

  21. 9.4 Types of qualitative research designs

    Focus Groups. Focus groups resemble qualitative interviews in that a researcher may prepare a guide in advance and interact with participants by asking them questions. But anyone who has conducted both one-on-one interviews and focus groups knows that each is unique. In an interview, usually one member (the research participant) is most active ...

  22. Types of Research Designs

    The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process. Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition.

  23. 30 Design Researcher Interview Questions and Answers

    Interviews for design research positions often go beyond generic questions; they delve deep into your thought processes, creativity, analytical skills, and ability to empathize with users. In this article, we'll explore common interview questions that you may encounter during your quest for a Design Researcher position. We will also provide ...

  24. Design Research Methods: In-Depth Interviews

    In-depth interviews are one of the most common qualitative research methods used in design thinking and human-centered design processes. They allow you to gather a lot of information at once, with relative logistical ease. In-depth interviews are a form of ethnographic research, where researchers observe participants in their real-life environment.

  25. Trends and Motivations in Critical Quantitative Educational Research: A

    A multimethod research design was also appropriate given the distinct research questions in this study—each answered using a different stream of data. Specifically, we conducted a systematic scoping review of existing literature and facilitated follow-up interviews with a subset of corresponding authors from included publications, as detailed ...

  26. Current status and ongoing needs for the teaching and assessment of

    Design. In this study, we used a convergent parallel mixed-methods design within a pragmatic constructivist case study approach. We simultaneously collected data from students and educators using online questionnaires and semi-structured interviews to gain deeper insight into their needs in one particular situation, the development of a clinical reasoning curriculum, to address our ...


  28. "It's like your days are empty and yet there's life all around": A

    Abstract. Purpose To identify experiences of boredom and associations with psychosocial well-being during and following homelessness. Methods Using a convergent, mixed-methods explanatory design, we conducted quantitative interviews with 164 participants) (n = 102 unhoused; n = 62 housed following homelessness) using a 92-item protocol involving demographic components and seven standardized ...

  28. Types of Research Designs Compared

    Types of Research Designs Compared | Guide & Examples. Published on June 20, 2019 by Shona McCombes. Revised on June 22, 2023. When you start planning a research project, developing research questions and creating a research design, you will have to make various decisions about the type of research you want to do. There are many ways to categorize different types of research.

  29. The implementation of person-centred plans in the community-care sector

    Background Person-centred planning refers to a model of care in which programs and services are developed in collaboration with persons receiving care (i.e., persons-supported) and tailored to their unique needs and goals. In recent decades, governments around the world have enacted policies requiring community-care agencies to adopt an individualized or person-centred approach to service ...
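
To make the over-recruitment rule of thumb in item 8 concrete ("if you need 10 interviews, plan for 15"), here is a minimal Python sketch. The function name and the one-third attrition rate are illustrative assumptions, not taken from any of the sources above.

```python
import math

def recruitment_target(interviews_needed: int, expected_attrition: float) -> int:
    """Return how many participants to recruit so that, after cancellations,
    no-shows, and unusable recordings, roughly `interviews_needed` usable
    interviews remain. `expected_attrition` is the fraction of recruits
    expected to fall through, in the range [0, 1)."""
    if not 0.0 <= expected_attrition < 1.0:
        raise ValueError("expected_attrition must be in [0, 1)")
    return math.ceil(interviews_needed / (1.0 - expected_attrition))

# Needing 10 usable interviews and assuming roughly a third of recruits
# fall through reproduces the "plan for 15" rule of thumb above.
print(recruitment_target(10, 1 / 3))  # -> 15
```

Rounding up with math.ceil errs on the side of recruiting one extra participant rather than coming up short.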
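
Similarly, the distinction between structured and semi-structured formats (items 12, 15, and 18) can be made concrete with a small data-structure sketch. Every name here (the classes, fields, and sample questions) is hypothetical; the point is only that a semi-structured guide groups open-ended questions into themes with optional probes, rather than fixing a single question order.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    probes: list[str] = field(default_factory=list)  # optional follow-up prompts

@dataclass
class Theme:
    name: str
    questions: list[Question]

# Unlike a structured interview's fixed question order, a semi-structured
# guide lets the interviewer move back and forth between themes.
guide = [
    Theme("Background", [Question("Can you tell me about your role?")]),
    Theme("Experiences", [
        Question(
            "Walk me through a recent experience with the service.",
            probes=["What happened next?", "How did that affect you?"],
        ),
    ]),
]

for theme in guide:
    print(theme.name)
    for question in theme.questions:
        print("  -", question.text)
```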