Interview protocol design

On this page you will find our recommendations for creating an interview protocol for both structured and semi-structured interviews. Your protocol is a guide for the interview: what to say at the beginning to introduce yourself and the topic, how to collect participant consent, the interview questions themselves, and what to say when you close the interview. These tips have been adapted from Jacob and Furgerson’s (2012) guide to writing interview protocols and conducting interviews for those new to qualitative research. Your protocol may have more questions if you are planning a structured interview; it may have fewer, more open-ended questions if you are planning a semi-structured interview, in order to allow participants more time to elaborate on their responses and you more room to ask follow-up questions.


Use a script to open and close the interview

This allows you to share all of the relevant information about your study, including critical details about informed consent, before you begin the interview. It also gives you a clear way to close the interview and offer the participant an opportunity to share additional thoughts that have not yet come up.

Collect informed consent

The most common (and encouraged) means of gaining informed consent is to give the participant a participant information sheet and an informed consent form to read through and sign before you begin the interview. You can find templates for participant information sheets and informed consent forms on the Imperial College London Education Ethics Review Process (EERP) webpage, along with other resources for the EERP process.

Start with the basics

To help build rapport and a comfortable space for the participant, start with questions that ask for some basic background information. This could include their name, their course year, how they are doing, whether anything interesting is happening for them at the moment, their likes and interests, and so on (although be careful not to come across as inauthentic). This will help both you and the participant to have an open conversation throughout the interview.

Create open-ended questions

Open-ended questions give the participant more time and space to open up and share detail about their experiences. Phrases like “Tell me about…” are less likely than “Did you ever experience X?” to elicit only “yes” or “no” answers, which do not provide rich data. If a participant does give a “yes” or “no” answer but you would like to know more, you can ask, “Can you tell me why?” or “Could you please elaborate on that answer a bit more?” For example, if you are interviewing a student about their sense of belonging at Imperial, you could ask, “Can you tell me about a time when you felt a real sense that you belonged at Imperial College London?”

Ensure your questions are informed by existing research

Before creating your interview questions, conduct a thorough review of the literature about the topic you are investigating through interviews. For example, research on the topic of “students’ sense of belonging” has emphasised the importance of students feeling respected by other members of the university. Therefore, it would be a good idea to include a question about “respect” if you are interested in your students’ sense of belonging at Imperial or within their departments and study areas (e.g. the classroom). See our sense of belonging interview protocol for an idea.

Begin with questions that are easier to answer, then move to more difficult or abstract questions

Be aware that even if you have explained your topic to the participant, you should not assume that they have the same understanding of it as you. Resist the temptation to ask your research questions directly, particularly at the beginning of the interview, as these will often be too conceptual and abstract for participants to answer easily. Asking abstract questions too early can alienate your participant; asking concrete questions that participants can answer easily builds rapport and trust more quickly. Start by asking about concrete experiences, preferably ones that are very recent or ongoing. For example, if you are interested in students’ sense of belonging, do not start by asking whether a student “belongs” or how they perceive their “belonging.” Rather, ask how they have felt in recent modules, giving them the opportunity to raise positive or negative experiences themselves. Later, you can ask questions that specifically address concepts related to sense of belonging, for example whether they always feel “respected” (to follow on from our earlier example). Then, at the end of the interview, you could ask your participant to reflect more directly and generally on your topic, for instance by summarising the extent to which they feel they “belong” and what the main factors are. This advice is particularly important for topics that are difficult to form an opinion on, that require students to remember things from the distant past, or that are controversial.

Use prompts

If you are asking open-ended questions, the intention is that the participant will use that as an opportunity to provide you with rich qualitative detail about their experiences and perceptions. However, participants sometimes need prompts to get them going. Try to anticipate what prompts you could give to help someone answer each of your open-ended questions (Jacob & Furgerson, 2012). For example, if you are investigating sense of belonging and the participant is struggling to respond to the question “What could someone see about you that would show them that you felt like you belonged?”, you might prompt them to think about their clothes or accessories (for example do they wear or carry anything with the Imperial College London logo) or their activities (for example membership in student groups), and what meaning they attach to these. 

Be prepared to revise your protocol during and after the interview

During the interview, you may notice that some additional questions might pop into your mind, or you might need to re-order the questions, depending on the response of the participant and the direction in which the interview is going. This is fine, as it probably means the interview is flowing like a natural conversation. You might even find that this new order of questions should be adopted for future interviews, and you can adjust the protocol accordingly.

Be mindful of how much time the interview will take

When designing the protocol, keep in mind that six to ten well-written questions may make for an interview lasting approximately one hour. Consider who you are interviewing, and remember that you are asking people to share their experiences and their time with you, so be mindful of how long you expect the interview to last.

Pilot test your questions with a colleague

Pilot testing your interview protocol will help you to assess whether your interview questions make sense. Pilot testing gives you the chance to familiarise yourself with the order and flow of the questions out loud, which will help you to feel more comfortable when you begin conducting the interviews for your data collection.

Jacob, S. A., & Furgerson, S. P. (2012). Writing Interview Protocols and Conducting Interviews: Tips for Students New to the Field of Qualitative Research. The Qualitative Report, 17(2), 1–10.

Welch, C., & Piekkari, R. (2006). Crossing Language Boundaries: Qualitative Interviewing in International Business. Management International Review, 46(4), 417–437. Retrieved from https://link.springer.com/content/pdf/10.1007%2Fs11575-006-0099-1.pdf


5 - Designing the interview guide

Eva Magnusson (Umeå Universitet, Sweden) and Jeanne Marecek (Swarthmore College, Pennsylvania), Doing Interview-based Qualitative Research. Published online by Cambridge University Press: 05 October 2015. Chapter DOI: https://doi.org/10.1017/CBO9781107449893.005

This chapter shows you how to prepare a comprehensive interview guide. You need to prepare such a guide before you start interviewing. The interview guide serves many purposes. Most important, it is a memory aid to ensure that the interviewer covers every topic and obtains the necessary detail about the topic. For this reason, the interview guide should contain all the interview items in the order that you have decided. The exact wording of the items should be given, although the interviewer may sometimes depart from this wording. Interviews often contain some questions that are sensitive or potentially offensive. For such questions, it is vital to work out the best wording of the question ahead of time and to have it available in the interview.

To study people's meaning-making, researchers must create a situation that enables people to tell about their experiences and that also foregrounds each person's particular way of making sense of those experiences. Put another way, the interview situation must encourage participants to tell about their experiences in their own words and in their own way without being constrained by categories or classifications imposed by the interviewer. The type of interview that you will learn about here has a conversational and relaxed tone. However, the interview is far from extemporaneous. The interviewer works from the interview guide that has been carefully prepared ahead of time. It contains a detailed and specific list of items that concern topics that will shed light on the researchable questions.

Often researchers are in a hurry to get into the field and gather their material. It may seem obvious to them what questions to ask participants. Seasoned interviewers may feel ready to approach interviewing with nothing but a laundry list of topics. But it is always wise to move slowly at this point. Time spent designing and refining interview items – polishing the wording of the items, weighing language choices, considering the best sequence of topics, and then pretesting and revising the interview guide – will always pay off in producing better interviews. Moreover, it will also provide you with a deep knowledge of the elements of the interview and a clear idea of the intent behind each of the items. This can help you to keep the interviews on track.



Chapter 11. Interviewing

Introduction

Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow and mundane, sitting down with a person for an hour or two and really listening to what they have to say is a profound and deep enterprise, one that can provide not only “data” for you, the interviewer, but also self-understanding and a feeling of being heard for the interviewee. I always approach interviewing with a deep appreciation for the opportunity it gives me to understand how other people experience the world. That said, there is not one kind of interview but many, and some of these are shallower than others. This chapter will provide you with an overview of interview techniques but with a special focus on the in-depth semistructured interview guide approach, which is the approach most widely used in social science research.

An interview can be variously defined as “a conversation with a purpose” (Lune and Berg 2018) and an attempt to understand the world from the point of view of the person being interviewed: “to unfold the meaning of peoples’ experiences, to uncover their lived world prior to scientific explanations” (Kvale 2007). It is a form of active listening in which the interviewer steers the conversation to subjects and topics of interest to their research but also manages to leave enough space for those interviewed to say surprising things. Achieving that balance is a tricky thing, which is why most practitioners believe interviewing is both an art and a science. In my experience as a teacher, there are some students who are “natural” interviewers (often they are introverts), but anyone can learn to conduct interviews, and everyone, even those of us who have been doing this for years, can improve their interviewing skills. This might be a good time to highlight the fact that the interview is a joint product of interviewer and interviewee and that this product is only as good as the rapport established between the two participants. Active listening is the key to establishing this necessary rapport.

Patton (2002) makes the argument that we use interviews because there are certain things that are not observable. In particular, “we cannot observe feelings, thoughts, and intentions. We cannot observe behaviors that took place at some previous point in time. We cannot observe situations that preclude the presence of an observer. We cannot observe how people have organized the world and the meanings they attach to what goes on in the world. We have to ask people questions about those things” (341).

Types of Interviews

There are several distinct types of interviews. Imagine a continuum (figure 11.1). On one side are unstructured conversations—the kind you have with your friends. No one is in control of those conversations, and what you talk about is often random—whatever pops into your head. There is no secret, underlying purpose to your talking—if anything, the purpose is to talk to and engage with each other, and the words you use and the things you talk about are a little beside the point. An unstructured interview is a little like this informal conversation, except that one of the parties to the conversation (you, the researcher) does have an underlying purpose, and that is to understand the other person. You are not friends speaking for no purpose, but it might feel just as unstructured to the “interviewee” in this scenario. That is one side of the continuum. On the other side are fully structured and standardized survey-type questions asked face-to-face. Here it is very clear who is asking the questions and who is answering them. This doesn’t feel like a conversation at all! A lot of people new to interviewing have this (erroneously!) in mind when they think about interviews as data collection. Somewhere in the middle of these two extreme cases is the “semistructured” interview, in which the researcher uses an “interview guide” to gently move the conversation to certain topics and issues. This is the primary form of interviewing for qualitative social scientists and will be what I refer to as interviewing for the rest of this chapter, unless otherwise specified.

[Figure 11.1. A continuum of interview types: unstructured conversations, semi-structured interview, structured interview, survey questions.]

Informal (unstructured conversations). This is the most “open-ended” approach to interviewing. It is particularly useful in conjunction with observational methods (see chapters 13 and 14). There are no predetermined questions. Each interview will be different. Imagine you are researching the Oregon Country Fair, an annual event in Veneta, Oregon, that includes live music, artisan craft booths, face painting, and a lot of people walking through forest paths. It’s unlikely that you will be able to get a person to sit down with you and talk intensely about a set of questions for an hour and a half. But you might be able to sidle up to several people and engage with them about their experiences at the fair. You might have a general interest in what attracts people to these events, so you could start a conversation by asking strangers why they are here or why they come back every year. That’s it. Then you have a conversation that may lead you anywhere. Maybe one person tells a long story about how their parents brought them here when they were a kid. A second person talks about how this is better than Burning Man. A third person shares their favorite traveling band. And yet another enthuses about the public library in the woods. During your conversations, you also talk about a lot of other things—the weather, the utilikilts for sale, the fact that a favorite food booth has disappeared. It’s all good. You may not be able to record these conversations. Instead, you might jot down notes on the spot and then, when you have the time, write down as much as you can remember about the conversations in long fieldnotes. Later, you will have to sit down with these fieldnotes and try to make sense of all the information (see chapters 18 and 19).

Interview guide (semistructured interview). This is the primary type employed by social science qualitative researchers. The researcher creates an “interview guide” in advance, which she uses in every interview. In theory, every person interviewed is asked the same questions. In practice, every person interviewed is asked about mostly the same topics but not always the same questions, as the whole point of a “guide” is that it guides the direction of the conversation but does not command it. The guide is typically between five and ten questions or question areas, sometimes with suggested follow-ups or prompts. For example, one question might be “What was it like growing up in Eastern Oregon?” with prompts such as “Did you live in a rural area? What kind of high school did you attend?” to help the conversation develop. These interviews generally take place in a quiet place (not a busy walkway during a festival) and are recorded. The recordings are transcribed, and those transcriptions then become the “data” that is analyzed (see chapters 18 and 19). The conventional length of one of these types of interviews is between one hour and two hours, optimally ninety minutes. Less than one hour doesn’t allow for much development of questions and thoughts, and two hours (or more) is a lot of time to ask someone to sit still and answer questions. If you have a lot of ground to cover, and the person is willing, I highly recommend two separate interview sessions, with the second session being slightly shorter than the first (e.g., ninety minutes the first day, sixty minutes the second). There are lots of good reasons for this, but the most compelling one is that this allows you to listen to the first day’s recording and catch anything interesting you might have missed in the moment and so develop follow-up questions that can probe further. This also allows the person being interviewed to have some time to think about the issues raised in the interview and go a little deeper with their answers.

Standardized questionnaire with open responses (structured interview). This is the type of interview a lot of people have in mind when they hear “interview”: a researcher comes to your door with a clipboard and proceeds to ask you a series of questions. These questions are all the same whoever answers the door; they are “standardized.” Both the wording and the exact order are important, as people’s responses may vary depending on how and when a question is asked. These are qualitative only in that the questions allow for “open-ended responses”: people can say whatever they want rather than select from a predetermined menu of responses. For example, a survey I collaborated on included this open-ended response question: “How does class affect one’s career success in sociology?” Some of the answers were simply one word long (e.g., “debt”), and others were long statements with stories and personal anecdotes. It is possible to be surprised by the responses. Although it’s a stretch to call this kind of questioning a conversation, it does allow the person answering the question some degree of freedom in how they answer.

Survey questionnaire with closed responses (not an interview!). Standardized survey questions with specific answer options (e.g., closed responses) are not really interviews at all, and they do not generate qualitative data. For example, if we included five options for the question “How does class affect one’s career success in sociology?”—(1) debt, (2) social networks, (3) alienation, (4) family doesn’t understand, (5) type of grad program—we leave no room for surprises at all. Instead, we would most likely look at patterns around these responses, thinking quantitatively rather than qualitatively (e.g., using regression analysis techniques, we might find that working-class sociologists were twice as likely to bring up alienation). It can sometimes be confusing for new students because the very same survey can include both closed-ended and open-ended questions. The key is to think about how these will be analyzed and to what level surprises are possible. If your plan is to turn all responses into a number and make predictions about correlations and relationships, you are no longer conducting qualitative research. This is true even if you are conducting this survey face-to-face with a real live human. Closed-response questions are not conversations of any kind, purposeful or not.

In summary, the semistructured interview guide approach is the predominant form of interviewing for social science qualitative researchers because it allows a high degree of freedom of responses from those interviewed (thus allowing for novel discoveries) while still maintaining some connection to a research question area or topic of interest. The rest of the chapter assumes the employment of this form.

Creating an Interview Guide

Your interview guide is the instrument used to bridge your research question(s) and what the people you are interviewing want to tell you. Unlike a standardized questionnaire, the questions actually asked do not need to be exactly what you have written down in your guide. The guide is meant to create space for those you are interviewing to talk about the phenomenon of interest, but sometimes you are not even sure what that phenomenon is until you start asking questions. A priority in creating an interview guide is to ensure it offers space. One of the worst mistakes is to create questions that are so specific that the person answering them will not stray. Relatedly, questions that sound “academic” will shut down a lot of respondents. A good interview guide invites respondents to talk about what is important to them rather than making them feel they are performing or being evaluated by you.

Good interview questions should not sound like your “research question” at all. For example, let’s say your research question is “How do patriarchal assumptions influence men’s understanding of climate change and responses to climate change?” It would be worse than unhelpful to ask a respondent, “How do your assumptions about the role of men affect your understanding of climate change?” You need to unpack this into manageable nuggets that pull your respondent into the area of interest without leading him anywhere. You could start by asking him what he thinks about climate change in general. Or, even better, whether he has any concerns about heatwaves or increased tornadoes or polar icecaps melting. Once he starts talking about that, you can ask follow-up questions that bring in issues around gendered roles, perhaps asking if he is married (to a woman) and whether his wife shares his thoughts and, if not, how they negotiate that difference. The fact is, you won’t really know the right questions to ask until he starts talking.

There are several distinct types of questions that can be used in your interview guide, either as main questions or as follow-up probes. If you remember that the point is to leave space for the respondent, you will craft a much more effective interview guide! You will also want to think about the place of time in both the questions themselves (past, present, future orientations) and the sequencing of the questions.

Researcher Note

Suggestion: As you read the next three sections (types of questions, temporality, question sequence), have in mind a particular research question, and try to draft questions and sequence them in a way that opens space for a discussion that helps you answer your research question.

Types of Questions

Experience and behavior questions ask about what a respondent does regularly (their behavior) or has done (their experience). These are relatively easy questions for people to answer because they appear more “factual” and less subjective. This makes them good opening questions. For the study on climate change above, you might ask, “Have you ever experienced an unusual weather event? What happened?” Or “You said you work outside? What is a typical summer workday like for you? How do you protect yourself from the heat?”

Opinion and values questions, in contrast, get inside the minds of those you are interviewing. “Do you think climate change is real? Who or what is responsible for it?” are two such questions. Note that you don’t have to literally ask, “What is your opinion of X?”; you can find a way to ask the specific question relevant to the conversation you are having. These questions are a bit trickier to ask because the answers you get may depend in part on how your respondent perceives you and whether they want to please you or not. We’ve talked a fair amount about being reflective. Here is another place where this comes into play. You need to be aware of the effect your presence might have on the answers you are receiving and adjust accordingly. If you are a woman who is perceived as liberal asking a man who identifies as conservative about climate change, there is a lot of subtext that can be going on in the interview. There is no one right way to resolve this, but you must at least be aware of it.

Feeling questions ask respondents to draw on their emotional responses. It’s pretty common for academic researchers to forget that we have bodies and emotions, but people’s understandings of the world often operate at this affective level, sometimes unconsciously or barely consciously. It is a good idea to include questions that leave space for respondents to remember, imagine, or relive emotional responses to particular phenomena. “What was it like when you heard your cousin’s house burned down in that wildfire?” doesn’t explicitly use any emotion words, but it allows your respondent to remember what was probably a pretty emotional day. And if they respond in an emotionally neutral way, that is pretty interesting data too. Note that asking someone “How do you feel about X?” is not always going to evoke an emotional response, as they might simply turn around and respond with “I think that…” It is better to craft a question that actually pushes the respondent into the affective register. This might be a specific follow-up to an experience and behavior question, for example, “You just told me about your daily routine during the summer heat. Do you worry it is going to get worse?” or “Have you ever been afraid it will be too hot to get your work accomplished?”

Knowledge questions ask respondents what they actually know about something factual. We have to be careful when we ask these types of questions so that respondents do not feel like we are evaluating them (which would shut them down), but, for example, it is helpful to know when you are having a conversation about climate change that your respondent does in fact know that unusual weather events have increased and that these have been attributed to climate change! Asking these questions can set the stage for deeper questions and can ensure that the conversation makes the same kind of sense to both participants. For example, a conversation about political polarization can be put back on track once you realize that the respondent doesn’t really have a clear understanding that there are two parties in the US. Instead of asking a series of questions about Republicans and Democrats, you might shift your questions to talk more generally about political disagreements (e.g., “people against abortion”). And sometimes what you do want to know is the level of knowledge about a particular program or event (e.g., “Are you aware you can discharge your student loans through the Public Service Loan Forgiveness program?”).

Sensory questions call on all senses of the respondent to capture deeper responses. These are particularly helpful in sparking memory. “Think back to your childhood in Eastern Oregon. Describe the smells, the sounds…” Or you could use these questions to help a person access the full experience of a setting they customarily inhabit: “When you walk through the doors to your office building, what do you see? Hear? Smell?” As with feeling questions, these questions often supplement experience and behavior questions. They are another way of allowing your respondent to report fully and deeply rather than remain on the surface.

Creative questions employ illustrative examples, suggested scenarios, or simulations to get respondents to think more deeply about an issue, topic, or experience. There are many options here. In The Trouble with Passion, Erin Cech (2021) provides a scenario in which “Joe” is trying to decide whether to stay at his decent but boring computer job or follow his passion by opening a restaurant. She asks respondents, “What should Joe do?” Their answers illuminate the attraction of “passion” in job selection. In my own work, I have used a news story about an upwardly mobile young man who no longer has time to see his mother and sisters to probe respondents’ feelings about the costs of social mobility. Jessi Streib and Betsy Leondar-Wright have used single-page cartoon “scenes” to elicit evaluations of potential racial discrimination, sexual harassment, and classism. Barbara Sutton (2010) has employed lists of words (“strong,” “mother,” “victim”) on notecards she fans out and asks her female respondents to select and discuss.

Background/Demographic Questions

You most definitely will want to know more about the person you are interviewing in terms of conventional demographic information, such as age, race, gender identity, occupation, and educational attainment. These are not questions that normally open up inquiry. [1] For this reason, my practice has been to include a separate “demographic questionnaire” sheet that I ask each respondent to fill out at the conclusion of the interview. Only include those aspects that are relevant to your study. For example, if you are not exploring religion or religious affiliation, do not include questions about a person’s religion on the demographic sheet. See the example provided at the end of this chapter.

Temporality

Any type of question can have a past, present, or future orientation. For example, if you are asking a behavior question about workplace routine, you might ask the respondent to talk about past work, present work, and ideal (future) work. Similarly, if you want to understand how people cope with natural disasters, you might ask your respondent how they felt then during the wildfire and now in retrospect and whether and to what extent they have concerns for future wildfire disasters. It’s a relatively simple suggestion—don’t forget to ask about past, present, and future—but it can have a big impact on the quality of the responses you receive.

Question Sequence

Having a list of good questions or good question areas is not enough to make a good interview guide. You will want to pay attention to the order in which you ask your questions. Even though any one respondent can derail this order (perhaps by jumping to answer a question you haven’t yet asked), a good advance plan is always helpful. When thinking about sequence, remember that your goal is to get your respondent to open up to you and to say things that might surprise you. To establish rapport, it is best to start with nonthreatening questions. Asking about the present is often the safest place to begin, followed by the past (they have to know you a little bit to get there), and lastly, the future (talking about hopes and fears requires the most rapport). To allow for surprises, it is best to move from very general questions to more particular questions only later in the interview. This ensures that respondents have the freedom to bring up the topics that are relevant to them rather than feel like they are constrained to answer you narrowly. For example, refrain from asking about particular emotions until these have come up previously—don’t lead with them. Often, your more particular questions will emerge only during the course of the interview, tailored to what is emerging in conversation.

Once you have a set of questions, read through them aloud and imagine you are being asked the same questions. Does the set of questions have a natural flow? Would you be willing to answer the very first question to a total stranger? Does your sequence establish facts and experiences before moving on to opinions and values? Did you include prefatory statements, where necessary; transitions; and other announcements? These can be as simple as “Hey, we talked a lot about your experiences as a barista while in college.… Now I am turning to something completely different: how you managed friendships in college.” That is an abrupt transition, but it has been softened by your acknowledgment of that.

Probes and Flexibility

Once you have the interview guide, you will also want to leave room for probes and follow-up questions. As in the sample probe included here, you can write out the obvious probes and follow-up questions in advance. You might not need them, as your respondent might anticipate them and include full responses to the original question. Or you might need to tailor them to how your respondent answered the question. Some common probes and follow-up questions include asking for more details (When did that happen? Who else was there?), asking for elaboration (Could you say more about that?), asking for clarification (Does that mean what I think it means or something else? I understand what you mean, but someone else reading the transcript might not), and asking for contrast or comparison (How did this experience compare with last year’s event?). “Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is being said and what is not said, and being sensitive to the feedback needs of the person being interviewed” (Patton 2002:374). It takes work! And energy. I and many other interviewers I know report feeling emotionally and even physically drained after conducting an interview. You are tasked with active listening and rearranging your interview guide as needed on the fly. If you only ask the questions written down in your interview guide with no deviations, you are doing it wrong. [2]

The Final Question

Every interview guide should include a very open-ended final question that allows the respondent to say whatever it is they have been dying to tell you but you’ve forgotten to ask. About half the time they are tired too and will tell you they have nothing else to say. But incredibly, some of the most honest and complete responses take place here, at the end of a long interview. You have to realize that the person being interviewed is often discovering things about themselves as they talk to you and that this process of discovery can lead to new insights for them. Making space at the end is therefore crucial. Be sure you convey that you actually do want them to tell you more, so that the offer of “anything else?” is not read as an empty convention to which the polite response is no. Here is where you can pull from that active listening and tailor the final question to the particular person. For example, “I’ve asked you a lot of questions about what it was like to live through that wildfire. I’m wondering if there is anything I’ve forgotten to ask, especially because I haven’t had that experience myself” is a much more inviting final question than “Great. Anything you want to add?” It’s also helpful to convey to the person that you have the time to listen to their full answer, even though you are at the end of the allotted time. After all, there are no more questions to ask, so the respondent knows exactly how much time is left. Do them the courtesy of listening to them!

Conducting the Interview

Once you have your interview guide, you are on your way to conducting your first interview. I always practice my interview guide with a friend or family member. I do this even when the questions don’t make perfect sense for them, as it still helps me realize which questions make no sense, are poorly worded (too academic), or don’t follow sequentially. I also practice the routine I will use for interviewing, which goes something like this:

  • Introduce myself and reintroduce the study
  • Provide consent form and ask them to sign and retain/return copy
  • Ask if they have any questions about the study before we begin
  • Ask if I can begin recording
  • Ask questions (from interview guide)
  • Turn off the recording device
  • Ask if they are willing to fill out my demographic questionnaire
  • Collect questionnaire and, without looking at the answers, place in same folder as signed consent form
  • Thank them and depart

A note on remote interviewing: Interviews have traditionally been conducted face-to-face in a private or quiet public setting. You don’t want a lot of background noise, as this will make transcriptions difficult. During the recent global pandemic, many interviewers, myself included, learned the benefits of interviewing remotely. Although face-to-face is still preferable for many reasons, Zoom interviewing is not a bad alternative, and it does allow more interviews across great distances. Zoom also includes automatic transcription, which significantly cuts down on the time it normally takes to convert our conversations into “data” to be analyzed. These automatic transcriptions are not perfect, however, and you will still need to listen to the recording and clarify and clean up the transcription. Nor do automatic transcriptions include notations of body language or change of tone, which you may want to include. When interviewing remotely, you will want to collect the consent form before you meet: ask them to read, sign, and return it as an email attachment. I think it is better to ask for the demographic questionnaire after the interview, but because some respondents may never return it then, it is probably best to ask for this at the same time as the consent form, in advance of the interview.

What should you bring to the interview? I would recommend bringing two copies of the consent form (one for you and one for the respondent), a demographic questionnaire, a manila folder in which to place the signed consent form and filled-out demographic questionnaire, a printed copy of your interview guide (I print with three-inch right margins so I can jot down notes on the page next to relevant questions), a pen, a recording device, and water.

After the interview, you will want to secure the signed consent form in a locked filing cabinet (if in print) or a password-protected folder on your computer. Using Excel or a similar program that allows tables/spreadsheets, create an identifying number for your interview that links to the consent form without using the name of your respondent. For example, let’s say that I conduct interviews with US politicians, and the first person I meet with is George W. Bush. I will assign the transcription the number “INT#001” and add it to the signed consent form. [3] The signed consent form goes into a locked filing cabinet, and I never use the name “George W. Bush” again. I take the information from the demographic sheet, open my Excel spreadsheet, and add the relevant information in separate columns for the row INT#001: White, male, Republican. When I interview Bill Clinton as my second interview, I include a second row: INT#002: White, male, Democrat. And so on. The only link between the respondent’s actual name and this information is the interview number stamped on the consent form, which is unavailable to anyone but me.
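If you prefer to keep this log in code rather than in a spreadsheet program, the short Python sketch below illustrates the same bookkeeping. The file name, column names, and helper functions here are illustrative assumptions, not prescribed by any particular tool or by the chapter; the point is simply that each row carries only the interview number and demographic details, and the respondent’s name is never written to the file.

```python
import csv
import os

LOG_PATH = "interview_log.csv"  # hypothetical file; keep it in a protected folder
FIELDS = ["interview_id", "race", "gender", "party"]

def next_interview_id(path: str = LOG_PATH) -> str:
    """Generate the next sequential ID: INT#001, INT#002, ..."""
    count = 0
    if os.path.exists(path):
        with open(path, newline="") as f:
            count = sum(1 for _ in csv.DictReader(f))  # count existing data rows
    return f"INT#{count + 1:03d}"

def log_interview(race: str, gender: str, party: str) -> str:
    """Append one de-identified row; the respondent's name is never stored."""
    interview_id = next_interview_id()
    new_file = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"interview_id": interview_id, "race": race,
                         "gender": gender, "party": party})
    return interview_id  # stamp this ID on the signed consent form

# The chapter's example: two interviews, identified only by number.
print(log_interview("White", "male", "Republican"))  # INT#001
print(log_interview("White", "male", "Democrat"))    # INT#002
```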

Many students get very nervous before their first interview. Actually, many of us are always nervous before the interview! But do not worry—this is normal, and it does pass. Chances are, you will be pleasantly surprised at how comfortable it begins to feel. These “purposeful conversations” are often a delight for both participants. This is not to say that things never go wrong. I often have my students practice several “bad scenarios” (e.g., a respondent that you cannot get to open up; a respondent who is too talkative and dominates the conversation, steering it away from the topics you are interested in; emotions that completely take over; or shocking disclosures you are ill-prepared to handle), but most of the time, things go quite well. Be prepared for the unexpected, but know that the reason interviews are so popular as a technique of data collection is that they are usually richly rewarding for both participants.

One thing that I stress to my methods students and remind myself about is that interviews are still conversations between people. If there’s something you might feel uncomfortable asking someone about in a “normal” conversation, you will likely also feel a bit of discomfort asking it in an interview. Maybe more importantly, your respondent may feel uncomfortable. Social research—especially about inequality—can be uncomfortable. And it’s easy to slip into an abstract, intellectualized, or removed perspective as an interviewer. This is one reason trying out interview questions is important. Another is that sometimes the question sounds good in your head but doesn’t work as well out loud in practice. I learned this the hard way when a respondent asked me how I would answer the question I had just posed, and I realized that not only did I not really know how I would answer it, but I also wasn’t quite as sure I knew what I was asking as I had thought.

—Elizabeth M. Lee, Associate Professor of Sociology at Saint Joseph’s University, author of Class and Campus Life, and co-author of Geographies of Campus Inequality

How Many Interviews?

Your research design has included a targeted number of interviews and a recruitment plan (see chapter 5). Follow your plan, but remember that “saturation” is your goal. You interview as many people as you can until you reach a point at which you are no longer surprised by what they tell you. This means not that no one after your first twenty interviews will have surprising, interesting stories to tell you but rather that the picture you are forming about the phenomenon of interest to you from a research perspective has come into focus, and none of the interviews are substantially refocusing that picture. That is when you should stop collecting interviews. Note that to know when you have reached this point, you will need to read your transcripts as you go. More about this in chapters 18 and 19.

Your Final Product: The Ideal Interview Transcript

A good interview transcript will demonstrate a subtly controlled conversation by the skillful interviewer. In general, you want to see replies that are about one paragraph long, not short sentences and not running on for several pages. Although it is sometimes necessary to follow respondents down tangents, it is also often necessary to pull them back to the questions that form the basis of your research study. This is not really a free conversation, although it may feel like that to the person you are interviewing.

Final Tips from an Interview Master

Annette Lareau is arguably one of the masters of the trade. In Listening to People, she provides several guidelines for good interviews and then offers a detailed example of an interview gone wrong and how it could be addressed (please see the “Further Readings” at the end of this chapter). Here is an abbreviated version of her set of guidelines: (1) interview respondents who are experts on the subjects of most interest to you (as a corollary, don’t ask people about things they don’t know); (2) listen carefully and talk as little as possible; (3) keep in mind what you want to know and why you want to know it; (4) be a proactive interviewer (subtly guide the conversation); (5) assure respondents that there aren’t any right or wrong answers; (6) use the respondent’s own words to probe further (this both allows you to accurately identify what you heard and pushes the respondent to explain further); (7) reuse effective probes (don’t reinvent the wheel as you go—if repeating the words back works, do it again and again); (8) focus on learning the subjective meanings that events or experiences have for a respondent; (9) don’t be afraid to ask a question that draws on your own knowledge (unlike trial lawyers, who are trained never to ask a question for which they don’t already know the answer, sometimes it’s worth it to ask risky questions based on your hypotheses or just plain hunches); (10) keep thinking while you are listening (so difficult…and important); (11) return to a theme raised by a respondent if you want further information; (12) be mindful of power inequalities (and never ever coerce a respondent to continue the interview if they want out); (13) take control with overly talkative respondents; (14) expect overly succinct responses, and develop strategies for probing further; (15) balance digging deep and moving on; (16) develop a plan to deflect questions (e.g., let them know you are happy to answer any questions at the end of the interview, but you don’t want to take time away from them now); and at the end, (17) check to see whether you have asked all your questions. You don’t always have to ask everyone the same set of questions, but if there is a big area you have forgotten to cover, now is the time to recover (Lareau 2021:93–103).

Sample: Demographic Questionnaire

ASA Taskforce on First-Generation and Working-Class Persons in Sociology – Class Effects on Career Success

Supplementary Demographic Questionnaire

Thank you for your participation in this interview project. We would like to collect a few pieces of key demographic information from you to supplement our analyses. Your answers to these questions will be kept confidential and stored by ID number. All of your responses here are entirely voluntary!

What best captures your race/ethnicity? (please check any/all that apply)

  • White (Non Hispanic/Latina/o/x)
  • Black or African American
  • Hispanic, Latino/a/x, or Spanish origin
  • Asian or Asian American
  • American Indian or Alaska Native
  • Middle Eastern or North African
  • Native Hawaiian or Pacific Islander
  • Other: (Please write in: ________________)

What is your current position?

  • Grad Student
  • Full Professor

Please check any and all of the following that apply to you:

  • I identify as a working-class academic
  • I was the first in my family to graduate from college
  • I grew up poor

What best reflects your gender?

  • Transgender female/Transgender woman
  • Transgender male/Transgender man
  • Gender queer/Gender nonconforming

Anything else you would like us to know about you?

Example: Interview Guide

In this example, follow-up prompts accompany the main questions. Note the sequence of questions. The second question often elicits an entire life history, answering several later questions in advance.

Introduction Script/Question

Thank you for participating in our survey of ASA members who identify as first-generation or working-class. As you may have heard, ASA has sponsored a taskforce on first-generation and working-class persons in sociology, and we are interested in hearing from those who so identify. Your participation in this interview will help advance our knowledge in this area.

  • The first thing we would like to ask you is why you volunteered to be part of this study. What does it mean to you to be first-gen or working class? Why were you willing to be interviewed?
  • How did you decide to become a sociologist?
  • Can you tell me a little bit about where you grew up? (prompts: What did your parent(s) do for a living? What kind of high school did you attend?)
  • Has this identity been salient to your experience? (How? How much?)
  • How welcoming was your grad program? Your first academic employer?
  • Why did you decide to pursue sociology at the graduate level?
  • Did you experience culture shock in college? In graduate school?
  • Has your FGWC status shaped how you’ve thought about where you went to school? Debt? Etc.?
  • Were you mentored? How did this work (or not work)? How might it?
  • What did you consider when deciding where to go to grad school? Where to apply for your first position?
  • What, to you, is a mark of career success? Have you achieved that success? What has helped or hindered your pursuit of success?
  • Do you think sociology, as a field, cares about prestige?
  • Let’s talk a little bit about intersectionality. How does being first-gen/working class work alongside other identities that are important to you?
  • What do your friends and family think about your career? Have you had any difficulty relating to family members or past friends since becoming highly educated?
  • Do you have any debt from college/grad school? Are you concerned about this? Could you explain more about how you paid for college/grad school? (Here, include assistance from family, fellowships, scholarships, etc.)
  • (You’ve mentioned issues or obstacles you had because of your background.) What could have helped? Or who or what did? Can you think of fortuitous moments in your career?
  • Do you have any regrets about the path you took?
  • Is there anything else you would like to add? Anything that the Taskforce should take note of that we did not ask you about here?

Further Readings

Britten, Nicky. 1995. “Qualitative Interviews in Medical Research.” BMJ: British Medical Journal 311(6999):251–253. A good basic overview of interviewing, particularly useful for students of public health and medical research generally.

Corbin, Juliet, and Janice M. Morse. 2003. “The Unstructured Interactive Interview: Issues of Reciprocity and Risks When Dealing with Sensitive Topics.” Qualitative Inquiry 9(3):335–354. Weighs the potential benefits and harms of conducting interviews on topics that may cause emotional distress. Argues that the researcher’s skills and code of ethics should ensure that the interviewing process provides more of a benefit to both participant and researcher than a harm to the former.

Gerson, Kathleen, and Sarah Damaske. 2020. The Science and Art of Interviewing. New York: Oxford University Press. A useful guidebook/textbook for both undergraduates and graduate students, written by sociologists.

Kvale, Steinar. 2007. Doing Interviews. London: SAGE. An easy-to-follow guide to conducting and analyzing interviews by a psychologist.

Lamont, Michèle, and Ann Swidler. 2014. “Methodological Pluralism and the Possibilities and Limits of Interviewing.” Qualitative Sociology 37(2):153–171. Written as a response to various debates surrounding the relative value of interview-based and ethnographic studies, defending the particular strengths of interviewing. This is a must-read article for anyone seriously engaging in qualitative research!

Pugh, Allison J. 2013. “What Good Are Interviews for Thinking about Culture? Demystifying Interpretive Analysis.” American Journal of Cultural Sociology 1(1):42–68. Another defense of interviewing written against those who champion ethnographic methods as superior, particularly in the area of studying culture. A classic.

Rapley, Timothy John. 2001. “The ‘Artfulness’ of Open-Ended Interviewing: Some Considerations in Analyzing Interviews.” Qualitative Research 1(3):303–323. Argues for the importance of the “local context” of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.

Weiss, Robert S. 1995. Learning from Strangers: The Art and Method of Qualitative Interview Studies. New York: Simon and Schuster. A classic and well-regarded textbook on interviewing. Because Weiss has extensive experience conducting surveys, he contrasts the qualitative interview with the survey questionnaire well; particularly useful for those trained in the latter.

  • I say “normally” because how people understand their various identities can itself be an expansive topic of inquiry. Here, I am merely talking about collecting otherwise unexamined demographic data, similar to how we ask people to check boxes on surveys. ↵
  • Again, this applies to “semistructured in-depth interviewing.” When conducting standardized questionnaires, you will want to ask each question exactly as written, without deviations! ↵
  • I always include “INT” in the number because I sometimes have other kinds of data with their own numbering: FG#001 would mean the first focus group, for example. I also always include three-digit spaces, as this allows for up to 999 interviews (or, more realistically, allows for me to interview up to one hundred persons without having to reset my numbering system). ↵

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews - see also semistructured interview , structured interview , and unstructured interview .

A document listing key questions and question areas for use during an interview. It is used most often for semi-structured interviews. A good interview guide may have no more than ten primary questions for two hours of interviewing, but these ten questions will be supplemented by probes and relevant follow-ups throughout the interview. Most IRBs require the inclusion of the interview guide in applications for review. See also interview and semi-structured interview.

A data-collection method that relies on casual, conversational, and informal interviewing. Despite its apparent conversational nature, the researcher usually has a set of particular questions or question areas in mind but allows the interview to unfold spontaneously. This is a common data-collection technique among ethnographers. Compare to the semi-structured or in-depth interview.

A form of interview that follows a standard guide of questions, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview. The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences. It is sometimes referred to as an “in-depth” interview. See also interview and interview guide.

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context. Observational methods are the key tools employed by ethnographers and in Grounded Theory.

Follow-up questions used in a semi-structured interview to elicit further elaboration. Suggested prompts can be included in the interview guide, to be deployed depending on how the initial question was answered or if the topic of the prompt does not emerge spontaneously.

A form of interview that follows a strict set of questions, asked in a particular order, for all interview subjects. The questions are also the kind that elicit short answers, and the data are more “informative” than probing. This format is often used in mixed-methods studies, accompanying a survey instrument. Because there is no room for nuance or the exploration of meaning in structured interviews, qualitative researchers tend to employ semi-structured interviews instead. See also interview.

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.

An interview variant in which a person’s life story is elicited in a narrative form.  Turning points and key themes are established by the researcher and used as data points for further analysis.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.


Appendix: Qualitative Interview Design

Daniel W. Turner III and Nicole Hagstrom-Schmidt

Qualitative Interview Design: A Practical Guide for Novice Investigators

Qualitative research design can be complicated, depending upon the level of experience a researcher has with a particular type of methodology. Many researchers aspire to grow and expand their knowledge of and experience with qualitative design in order to better utilize a variety of research paradigms. One of the more popular areas of interest in qualitative research design is the interview protocol. Interviews provide in-depth information pertaining to participants’ experiences and viewpoints on a particular topic. Oftentimes, interviews are coupled with other forms of data collection in order to provide the researcher with a well-rounded collection of information for analysis. This paper explores effective ways for novice investigators to conduct in-depth qualitative interviews by expanding upon the practical components of each interview design.

Categories of Qualitative Interview Design

As is common with quantitative analyses, there are various forms of interview design that can be developed to obtain thick, rich data from a qualitative investigational perspective. [1] For the purpose of this examination, three formats for interview design will be explored, as summarized by Gall, Gall, and Borg:

  • Informal conversational interview,
  • General interview guide approach,
  • Standardized open-ended interview. [2]

In addition, I will expand on some suggestions for conducting qualitative interviews, including the construction of research questions and the analysis of interview data. These suggestions draw on both my personal experiences with interviewing and recommendations from the literature, and are intended to assist novice interviewers.

Informal Conversational Interview

The informal conversational interview is described by Gall, Gall, and Borg as relying “…entirely on the spontaneous generation of questions in a natural interaction, typically one that occurs as part of ongoing participant observation fieldwork.” [3] I am curious about other cultures and religions, and I enjoy immersing myself in these environments as an active participant. I ask questions in order to learn more about these social settings without having a predetermined set of structured questions. Primarily, the questions come from “in the moment” experiences, as a means of further understanding or clarifying what I am witnessing or experiencing at a particular moment. With the informal conversational approach, the researcher does not ask any specific types of questions, but rather relies on the interaction with the participants to guide the interview process. [4] Think of this type of interview as an “off the top of your head” style of interview, where you construct questions as you move forward. Many consider this type of interview beneficial because of the lack of structure, which allows for flexibility in the nature of the interview. However, many researchers view this type of interview as unstable or unreliable because of the inconsistency in the interview questions, which makes it difficult to code the data. [5] If you choose to conduct an informal conversational interview, it is critical to understand that flexibility and originality in questioning are key to success.

General Interview Guide Approach

The general interview guide approach is more structured than the informal conversational interview, although there is still quite a bit of flexibility in its composition. [6] The way questions are worded depends upon the researcher conducting the interview. Therefore, one of the obvious issues with this type of interview is the lack of consistency in the way research questions are posed, because researchers can vary the way they pose them. With that in mind, respondents may not consistently answer the same question(s) as posed by the interviewer. [7] During research for my doctoral dissertation, I was able to interact with alumni participants in a relaxed and informal manner, learning about their in-depth experiences through structured interview questions. This informal environment allowed me to develop rapport with the participants, so that I was able to ask follow-up or probing questions based on their responses to pre-constructed questions. I found this quite useful in my interviews because I could ask or change questions based on participant responses to previous questions. The questions were structured, but adapting them allowed me to take a more personal approach to each alumni interview.

According to McNamara, the strength of the general interview guide approach is the ability of the researcher “…to ensure that the same general areas of information are collected from each interviewee; this provides more focus than the conversational approach, but still allows a degree of freedom and adaptability in getting information from the interviewee.” [8] The researcher remains in the driver’s seat with this type of interview approach, but flexibility takes precedence based on perceived prompts from the participants.

You might ask, “What does this mean anyway?” The easiest way to answer that question is to think about your own experiences in job interviews. When you were invited to a job interview in the past, you might have prepared for all sorts of curveball questions to come your way. You wanted an answer ready for every potential question. If the interviewer were asking questions using a general interview guide approach, they would ask questions in their own unique style, which might differ from the way the questions were originally written. You as the interviewee would then respond to those questions as they were asked, which would dictate how the interview continued. Depending on how the interviewer posed the question(s), you might have been able to provide more or less information than other job candidates. Therefore, it is easy to see how a general interview guide approach could positively or negatively influence a job candidate.

Standardized Open-Ended Interviews

The standardized open-ended interview is extremely structured in terms of the wording of the questions. Participants are always asked identical questions, but the questions are worded so that responses are open-ended. [9] This open-endedness allows the participants to contribute as much detailed information as they desire, and it also allows the researcher to ask probing questions as a means of follow-up. Standardized open-ended interviews are likely the most popular form of interviewing utilized in research studies because the open-ended questions allow participants to fully express their viewpoints and experiences. If one were to identify a weakness of open-ended interviewing, it would likely be the difficulty of coding the data. [10] Since open-ended interviews call for participants to express their responses in as much detail as they desire, it can be quite difficult for researchers to extract similar themes or codes from the interview transcripts as they would with less open-ended responses. Although the data provided by participants are rich and thick with detail, it can be a cumbersome process for the researcher to sift through the narrative responses in order to fully and accurately reflect an overall perspective of all interview responses through the coding process. However, according to Gall, Gall, and Borg, this approach reduces researcher bias within the study, particularly when the interviewing process involves many participants. [11]

Suggestions for Conducting Qualitative Interviews

Now that we know a few of the more popular interview designs available to qualitative researchers, we can more closely examine various suggestions for conducting qualitative interviews based on the available research. These suggestions are designed to provide the researcher with the tools needed to conduct a well-constructed, professional interview with their participants. According to Creswell, [12] some of the most common topics found within the literature relating to interviews include:

  • The preparation for the interview,
  • The construction of effective research questions,
  • The actual implementation of the interview(s). [13]

Preparation for the Interview

Probably the most helpful tip for the interview process is interview preparation. Preparation can make or break the process, and can either alleviate or exacerbate the problematic circumstances that could occur once the research is implemented. McNamara stresses the importance of the preparation stage in order to maintain an unambiguous focus on how the interviews will be constructed so as to provide maximum benefit to the proposed research study. [14] Along these lines, Chenail provides a number of pre-interview exercises researchers can use to improve their instrumentality and address potential biases. [15] McNamara applies eight principles to the preparation stage of interviewing, which include the following:

  • Choose a setting with little distraction;
  • Explain the purpose of the interview;
  • Address terms of confidentiality;
  • Explain the format of the interview;
  • Indicate how long the interview usually takes;
  • Tell them how to get in touch with you later if they want to;
  • Ask them if they have any questions before you both get started with the interview;
  • Don’t count on your memory to recall their answers. [16]

Selecting Participants

Creswell discusses the importance of selecting the appropriate candidates for interviews. He asserts that the researcher should utilize one of the various sampling strategies, such as criterion-based sampling or critical case sampling (among many others), in order to obtain qualified candidates who will provide the most credible information for the study. [17] Creswell also suggests the importance of acquiring participants who will be willing to openly and honestly share information, or “their story.” [18] It may be easier to conduct the interviews in a comfortable environment where the participants do not feel restricted or uncomfortable sharing information.

Pilot Testing

Another important element of interview preparation is the implementation of a pilot test. The pilot test will assist the researcher in determining whether there are flaws, limitations, or other weaknesses within the interview design, and will allow him or her to make necessary revisions prior to the implementation of the study. [19] A pilot test should be conducted with participants who have similar interests to those who will participate in the implemented study. The pilot test will also assist the researcher with the refinement of research questions, which will be discussed in the next section.

Constructing Effective Research Questions

Creating effective research questions for the interview process is one of the most crucial components of interview design. Researchers conducting such an investigation should take care that each question allows the examiner to dig deep into the experiences and/or knowledge of the participants in order to gain maximum data from the interviews. McNamara offers several recommendations for creating effective research questions for interviews, which include the following elements:

  • Wording should be open-ended (respondents should be able to choose their own terms when answering questions);
  • Questions should be as neutral as possible (avoid wording that might influence answers, e.g., evocative, judgmental wording);
  • Questions should be asked one at a time;
  • Questions should be worded clearly (this includes knowing any terms particular to the program or the respondents’ culture); and
  • Be careful asking “why” questions. [20]

Examples of Useful and Not-So Useful Research Questions

To assist the novice interviewer with the preparation of research questions, I will propose a useful research question and a not-so-useful research question. Based on McNamara’s suggestion, it is important to ask open-ended questions. [21] For the useful question, I propose the following: “How have your experiences as a kindergarten teacher influenced or not influenced the decisions that you have made in raising your own children?” As you can see, the question allows the respondent to discuss how his or her experiences as a kindergarten teacher have or have not affected their decision-making with their own children, without assuming that the experience has influenced their decision-making. On the other hand, a less useful version of the same question might be constructed in this manner: “How have your experiences as a kindergarten teacher affected you as a parent?” This question is still open-ended, but it assumes that the experiences have indeed affected them as a parent. We as researchers cannot make this assumption in the wording of our questions.

Follow-Up Questions

Creswell also suggests being flexible with the research questions being constructed. [22] He asserts that respondents in an interview will not necessarily answer the question being asked by the researcher and, in fact, may answer a question that the researcher plans to ask later in the interview. Creswell believes that the researcher must construct questions in a manner that keeps participants focused in their responses. In addition, the researcher must be prepared with follow-up questions or prompts in order to ensure that they obtain optimal responses from participants. When I was an Assistant Director for a large division at my university a couple of years ago, I was tasked with hiring student affairs coordinators at our off-campus educational centers. Throughout the interviewing process, I found that interviewees did indeed get off topic with certain questions, because they either misunderstood the question(s) being asked or did not wish to answer the question(s) directly. Following Creswell’s suggestion, [23] I reconstructed questions so that they were clearly worded in a manner that reduced misunderstanding, and I developed effective follow-up prompts to further understanding. This alleviated many of the problems I had and helped me extract the information I needed through follow-up questioning.

Implementation of Interviews

As with other sections of interview design, McNamara makes some excellent recommendations for the implementation stage of the interview process. He includes the following tips for interview implementation:

  • Occasionally verify the tape recorder (if used) is working;
  • Ask one question at a time;
  • Attempt to remain as neutral as possible (that is, don’t show strong emotional reactions to their responses);
  • Encourage responses with occasional nods of the head, “uh huh”s, etc.;
  • Be careful about appearances when note-taking (that is, if you jump to take a note, it may appear as if you’re surprised or very pleased about an answer, which may influence answers to future questions);
  • Provide transition between major topics, e.g., “we’ve been talking about (some topic) and now I’d like to move on to (another topic);”
  • Don’t lose control of the interview (this can occur when respondents stray to another topic, take so long to answer a question that time begins to run out, or even begin asking questions of the interviewer). [24]

Interpreting Data

The final component of the interview design process is interpreting the data gathered during the interviews. During this phase, the researcher must make “sense” of what was just uncovered and compile the data into sections or groups of information, also known as themes or codes. [25] These themes or codes are consistent phrases, expressions, or ideas that were common among research participants. [26] How researchers formulate themes or codes varies. Many researchers suggest employing a third-party consultant who can review the codes or themes in order to determine their quality and effectiveness, based on an evaluation of the interview transcripts. [27] This helps alleviate researcher bias and can catch places where the data have been over-analyzed. Many researchers may choose to employ an iterative review process in which a committee of nonparticipating researchers provides constructive feedback and suggestions to the researcher(s) primarily involved with the study.
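To make the compiling step concrete, here is a minimal Python sketch of grouping coded excerpts into candidate themes. The codes and quotes are invented placeholders for illustration; they are not from Turner’s study:

```python
from collections import defaultdict

# Hypothetical coded excerpts: (participant ID, code, quote fragment).
excerpts = [
    ("INT#001", "peer support", "my colleagues talked me through the process"),
    ("INT#002", "time pressure", "there was never enough time to prepare"),
    ("INT#003", "peer support", "we leaned on each other constantly"),
]

# Group excerpts by code so that phrases and ideas recurring across
# participants surface as candidate themes, ready for review by a
# third-party consultant or an iterative review committee.
themes = defaultdict(list)
for participant, code, quote in excerpts:
    themes[code].append((participant, quote))

for code, quotes in sorted(themes.items()):
    print(f"{code}: {len(quotes)} excerpt(s)")
```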

From choosing the appropriate type of interview design through the interpretation of interview data, this guide for conducting qualitative research interviews proposes a practical way to perform an investigation, based on the recommendations and experiences of qualified researchers in the field and on my own personal experiences. Although qualitative investigation provides a myriad of opportunities for conducting research, interview design has remained one of the more popular approaches. As qualitative research methods become more widely utilized across research institutions, we will continue to see more practical guides for protocol implementation outlined in peer-reviewed journals across the world.

This text was derived from

Turner, Daniel W., III. “Qualitative Interview Design: A Practical Guide for Novice Investigators.” The Qualitative Report 15, no. 3 (2010): 754–760. https://doi.org/10.46743/2160-3715/2010.1178. Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

It is edited and reformatted by Nicole Hagstrom-Schmidt.

  • John W. Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 2nd ed. (Thousand Oaks, CA: Sage, 2007).
  • M.D. Gall, Walter R. Borg, and Joyce P. Gall, Educational Research: An Introduction, 7th ed. (Boston, MA: Pearson, 2003).
  • M.D. Gall, Walter R. Borg, and Joyce P. Gall, Educational Research: An Introduction, 7th ed. (Boston, MA: Pearson, 2003), 239.
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm.
  • M.D. Gall, Walter R. Borg, and Joyce P. Gall, Educational Research: An Introduction, 7th ed. (Boston, MA: Pearson, 2003).
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm.
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, “Types of Interviews” section, para. 1, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm.
  • John W. Creswell, Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 3rd ed. (Thousand Oaks, CA: Sage, 2003); John W. Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 2nd ed. (Thousand Oaks, CA: Sage, 2007).
  • Ronald J. Chenail, “Interviewing the Investigator: Strategies for Addressing Instrumentation and Researcher Bias Concerns in Qualitative Research,” The Qualitative Report 16, no. 1 (2011): 255–262, https://nsuworks.nova.edu/tqr/vol16/iss1/16/.
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, “Preparation for Interview” section, para. 1, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm.
  • John W. Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 2nd ed. (Thousand Oaks, CA: Sage, 2007), 133.
  • Steinar Kvale, Doing Interviews (London and Thousand Oaks, CA: Sage, 2007), https://doi.org/10.4135/9781849208963.
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, “Wording of Questions” section, para. 1, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm.
  • Carter McNamara, “General Guidelines for Conducting Interviews,” Free Management Library, “Conducting Interview” section, para. 1, accessed January 11, 2010, https://managementhelp.org/businessresearch/interviews.htm.
  • Steinar Kvale, Doing Interviews (London and Thousand Oaks, CA: Sage, 2007), https://doi.org/10.4135/9781849208963.

Appendix: Qualitative Interview Design Copyright © 2022 by Daniel W. Turner III and Nicole Hagstrom-Schmidt is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.


Types of Interviews in Research | Guide & Examples

Published on March 10, 2022 by Tegan George. Revised on June 22, 2023.

An interview is a qualitative research method that relies on asking questions in order to collect data. Interviews involve two or more people, one of whom is the interviewer asking the questions.

There are several types of interviews, often differentiated by their level of structure.

  • Structured interviews have predetermined questions asked in a predetermined order.
  • Unstructured interviews are more free-flowing.
  • Semi-structured interviews fall in between.

Interviews are commonly used in market research, social science, and ethnographic research.

Table of contents

  • What is a structured interview?
  • What is a semi-structured interview?
  • What is an unstructured interview?
  • What is a focus group?
  • Examples of interview questions
  • Advantages and disadvantages of interviews
  • Frequently asked questions about types of interviews

Structured interviews have predetermined questions in a set order. They are often closed-ended, featuring dichotomous (yes/no) or multiple-choice questions. While open-ended structured interviews exist, they are much less common. The types of questions asked make structured interviews a predominantly quantitative tool.

Asking set questions in a set order can help you see patterns among responses, and it allows you to easily compare responses between participants while keeping other factors constant. This can mitigate research biases and lead to higher reliability and validity. However, structured interviews can be overly formal, as well as limited in scope and flexibility.

A structured interview is a good choice when:

  • You feel very comfortable with your topic. This will help you formulate your questions most effectively.
  • You have limited time or resources. Structured interviews are a bit more straightforward to analyze because of their closed-ended nature, and can be a doable undertaking for an individual.
  • Your research question depends on holding environmental conditions between participants constant.


Semi-structured interviews are a blend of structured and unstructured interviews. While the interviewer has a general plan for what they want to ask, the questions do not have to follow a particular phrasing or order.

Semi-structured interviews are often open-ended, allowing for flexibility, but follow a predetermined thematic framework, giving a sense of order. For this reason, they are often considered “the best of both worlds.”

However, if the questions differ substantially between participants, it can be challenging to look for patterns, lessening the generalizability and validity of your results.

A semi-structured interview is a good choice when:

  • You have prior interview experience. It’s easier than you think to accidentally ask a leading question when coming up with questions on the fly. Overall, spontaneous questions are much more difficult than they may seem.
  • Your research question is exploratory in nature. The answers you receive can help guide your future research.

An unstructured interview is the most flexible type of interview. The questions and the order in which they are asked are not set. Instead, the interview can proceed more spontaneously, based on the participant’s previous answers.

Unstructured interviews are by definition open-ended. This flexibility can help you gather detailed information on your topic, while still allowing you to observe patterns between participants.

However, so much flexibility means that they can be very challenging to conduct properly. You must be very careful not to ask leading questions, as biased responses can lead to lower reliability or even invalidate your research.

An unstructured interview is a good choice when:

  • You have a solid background in your research topic and have conducted interviews before.
  • Your research question is exploratory in nature, and you are seeking descriptive data that will deepen and contextualize your initial hypotheses.
  • Your research necessitates forming a deeper connection with your participants, encouraging them to feel comfortable revealing their true opinions and emotions.

A focus group brings together a group of participants to answer questions on a topic of interest in a moderated setting. Focus groups are qualitative in nature and often study the group’s dynamic and body language in addition to their answers. Responses can guide future research on consumer products and services, human behavior, or controversial topics.

Focus groups can provide more nuanced and unfiltered feedback than individual interviews and are easier to organize than experiments or large surveys. However, their small size leads to low external validity and the temptation as a researcher to “cherry-pick” responses that fit your hypotheses.

A focus group is a good choice when:

  • Your research focuses on the dynamics of group discussion or real-time responses to your topic.
  • Your questions are complex and rooted in feelings, opinions, and perceptions that cannot be answered with a “yes” or “no.”
  • Your topic is exploratory in nature, and you are seeking information that will help you uncover new questions or future research ideas.


Depending on the type of interview you are conducting, your questions will differ in style, phrasing, and intention. Structured interview questions are set and precise, while the other types of interviews allow for more open-endedness and flexibility.

Here are some examples.

Structured
  • Do you like dogs? Yes/No
  • Do you associate dogs with feeling: happy; somewhat happy; neutral; somewhat unhappy; unhappy

Semi-structured
  • If yes, name one attribute of dogs that you like.
  • If no, name one attribute of dogs that you don’t like.

Unstructured
  • What feelings do dogs bring out in you?

Focus group
  • When you think more deeply about this, what experiences would you say your feelings are rooted in?

Interviews are a great research tool. They allow you to gather rich information and draw more detailed conclusions than other research methods, taking into consideration nonverbal cues, off-the-cuff reactions, and emotional responses.

However, they can also be time-consuming and deceptively challenging to conduct properly. Smaller sample sizes can cause their validity and reliability to suffer, and there is an inherent risk of interviewer effect arising from accidentally leading questions.

Here are some advantages and disadvantages of each type of interview that can help you decide if you’d like to utilize this research method.

Advantages and disadvantages of interviews

(Table summarizing the advantages and disadvantages of structured interviews, semi-structured interviews, unstructured interviews, and focus groups.)


The four most common types of interviews are:

  • Structured interviews: The questions are predetermined in both topic and order.
  • Semi-structured interviews: A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews: None of the questions are predetermined.
  • Focus group interviews: The questions are presented to a group instead of one individual.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing high-quality interview questions.

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys, but is most common in semi-structured interviews, unstructured interviews, and focus groups.

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

A focus group is a research method that brings together a small group of people to answer questions in a moderated setting. The group is chosen due to predefined demographic traits, and the questions are designed to shed light on a topic of interest. It is one of four types of interviews.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Cite this Scribbr article


George, T. (2023, June 22). Types of Interviews in Research | Guide & Examples. Scribbr. Retrieved September 11, 2024, from https://www.scribbr.com/methodology/interviews-research/


Sample Interview Protocol Form

Faculty Interview Protocol

Institutions: _____________________________________________________

Interviewee (Title and Name): ______________________________________

Interviewer: _____________________________________________________

Survey Section Used:

_____ A: Interview Background
_____ B: Institutional Perspective
_____ C: Assessment
_____ D: Department and Discipline
_____ E: Teaching and Learning
_____ F: Demographics (no specific questions)

Other Topics Discussed:____________________________________________

________________________________________________________________

Documents Obtained: _____________________________________________

Post Interview Comments or Leads:

Teaching, Learning, and Assessment Interviews

Introductory Protocol

To facilitate our note-taking, we would like to audio-tape our conversations today. Please sign the release form. For your information, only researchers on the project will be privy to the tapes, which will eventually be destroyed after they are transcribed. In addition, you must sign a form devised to meet our human subject requirements. Essentially, this document states that: (1) all information will be held confidential, (2) your participation is voluntary and you may stop at any time if you feel uncomfortable, and (3) we do not intend to inflict any harm. Thank you for agreeing to participate.

We have planned this interview to last no longer than one hour. During this time, we have several questions that we would like to cover. If time begins to run short, it may be necessary to interrupt you in order to push ahead and complete this line of questioning.

Introduction

You have been selected to speak with us today because you have been identified as someone who has a great deal to share about teaching, learning, and assessment on this campus. Our research project as a whole focuses on the improvement of teaching and learning activity, with particular interest in understanding how faculty in academic programs are engaged in this activity, how they assess student learning, and whether we can begin to share what we know about making a difference in undergraduate education. Our study does not aim to evaluate your techniques or experiences. Rather, we are trying to learn more about teaching and learning, and hopefully learn about faculty practices that help improve student learning on campus.

A. Interviewee Background

How long have you been …

_______ in your present position? _______ at this institution?

Interesting background information on interviewee:

What is your highest degree? ___________________________________________

What is your field of study? ____________________________________________

1. Briefly describe your role (office, committee, classroom, etc.) as it relates to student learning and assessment (if appropriate).

Probes: How are you involved in teaching, learning, and assessment here?

How did you get involved?

2. What motivates you to use innovative teaching and/or assessment techniques in your teaching?

B. Institutional Perspective

1. What is the strategy at this institution for improving teaching, learning, and assessment?

Probes: Is it working – why or why not?

Purpose, development, administration, recent initiatives

2. What resources are available to faculty for improving teaching and assessment techniques?

3. What rewards do faculty receive from the institution for engaging in innovative teaching/learning and assessment strategies?

Probe: Do you see a widening of the circle of participants here on campus?

4. What is changing about teaching, learning, and assessment on this campus?

Probe: What is being accomplished through campus-based initiatives?

What kinds of networks do you see developing surrounding teaching/learning reforms?

5. Have you or your colleagues encountered resistance to these reforms in your department? . . . on campus?

C. Assessment

1. How do you go about assessing whether students grasp the material you present in class?

Probe: Do you use evidence of student learning in your assessment of classroom strategies?

2. What kinds of assessment techniques tell you the most about what students are learning?

Probe: What kinds of assessment most accurately capture what students are learning?

3. Are you involved in evaluating teaching, learning, and assessment practices at either the department or campus level? How is this achieved?

4. How is the assessment of student learning used to improve teaching/learning in your department? . . . on campus?

D. Department and Discipline

1. What are some of the major challenges your department faces in attempting to change teaching, learning, and assessment practices? What are the major opportunities?

Probes: How can barriers be overcome?

How can opportunities be maximized?

2. To what extent are teaching-related activities evaluated at your institution? . . . in your department?

Probe: How is “good teaching” rewarded?

3. To what extent is teaching and assessment valued within your discipline?

E. Teaching and Learning

1. Describe how teaching, learning, and assessment practices are improving on this campus.

Probe: How do you know? (criteria, evidence)

2. Is the assessment of teaching and learning a major focus of attention and discussion here?

Probe: why or why not? (reasons, influences)

3. What specific new teaching or assessment practices have you implemented in your classes?

4. Are there any particular characteristics that you associate with faculty who are interested in innovative teaching/learning initiatives?

5. What types of faculty development opportunities do you see emerging on your campus that focus on teaching and learning strategies for the classroom? (Institutional or disciplinary?)

Probes: What motivates you to participate in instructional development programs on campus?

How frequently do you attend such programs?

How are these programs advertised to faculty?

F. Demographics

Post Interview Comments and/or Observations:


Jacob, S. A., and S. Furgerson. 2012. “Writing Interview Protocols and Conducting Interviews: Tips for Students New to the Field of Qualitative Research.” The Qualitative Report.



Qualitative Research 101: Interviewing

5 Common Mistakes To Avoid When Undertaking Interviews

By: David Phair (PhD) and Kerryn Warren (PhD) | March 2022

Undertaking interviews is potentially the most important step in the qualitative research process. If you don’t collect useful, useable data in your interviews, you’ll struggle through the rest of your dissertation or thesis.  Having helped numerous students with their research over the years, we’ve noticed some common interviewing mistakes that first-time researchers make. In this post, we’ll discuss five costly interview-related mistakes and outline useful strategies to avoid making these.

Overview: 5 Interviewing Mistakes

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind

1. Not having a clear interview strategy

The first common mistake that we’ll look at is that of starting the interviewing process without having first come up with a clear interview strategy or plan of action. While it’s natural to be keen to get started engaging with your interviewees, a lack of planning can result in a mess of data and inconsistency between interviews.

There are several design choices to decide on and plan for before you start interviewing anyone. Some of the most important questions you need to ask yourself before conducting interviews include:

  • What are the guiding research aims and research questions of my study?
  • Will I use a structured, semi-structured or unstructured interview approach?
  • How will I record the interviews (audio or video)?
  • Who will be interviewed and by whom?
  • What ethics and data law considerations do I need to adhere to?
  • How will I analyze my data? 

Let’s take a quick look at some of these.

The core objective of the interviewing process is to generate useful data that will help you address your overall research aims. Therefore, your interviews need to be conducted in a way that directly links to your research aims, objectives and research questions (i.e. your “golden thread”). This means that you need to carefully consider the questions you’ll ask to ensure that they align with and feed into your golden thread. If any question doesn’t align with this, you may want to consider scrapping it.

Another important design choice is whether you’ll use an unstructured, semi-structured or structured interview approach. For semi-structured interviews, you will have a list of questions that you plan to ask and these questions will be open-ended in nature. You’ll also allow the discussion to digress from the core question set if something interesting comes up. This means that the type of information generated might differ a fair amount between interviews.

In contrast, a structured approach to interviews is more rigid: a specific set of closed questions is developed and asked of each interviewee in exactly the same order. Closed questions have a limited set of possible answers, often single-word answers. Therefore, you need to think about what you’re trying to achieve with your research project (i.e. your research aims) and decide which approach is best suited to your case.

It is also important to plan ahead with regards to who will be interviewed and how. You need to think about how you will approach the possible interviewees to get their cooperation, who will conduct the interviews, when to conduct the interviews and how to record the interviews. For each of these decisions, it’s also essential to make sure that all ethical considerations and data protection laws are taken into account.

Finally, you should think through how you plan to analyze the data (i.e., your qualitative analysis method) generated by the interviews. Different types of analysis rely on different types of data, so you need to ensure you’re asking the right types of questions and correctly guiding your respondents.

Simply put, you need to have a plan of action regarding the specifics of your interview approach before you start collecting data. If not, you’ll end up drifting in your approach from interview to interview, which will result in inconsistent, unusable data.

Your interview questions need to directly link to your research aims, objectives and research questions – your “golden thread”.

2. Not having good interview technique

While you’re generally not expected to be an expert interviewer for a dissertation or thesis, it is important to practice good interview technique and develop basic interviewing skills.

Let’s go through some basics that will help the process along.

Firstly, before the interview, make sure you know your interview questions well and have a clear idea of what you want from the interview. Naturally, the specificity of your questions will depend on whether you’re taking a structured, semi-structured or unstructured approach, but you still need a consistent starting point. Ideally, you should develop an interview guide beforehand (more on this later) that details your core questions and links these to the research aims, objectives and research questions.

Before you undertake any interviews, it’s a good idea to do a few mock interviews with friends or family members. This will help you get comfortable with the interviewer role, prepare for potentially unexpected answers and give you a good idea of how long the interview will take to conduct. In the interviewing process, you’re likely to encounter two kinds of challenging interviewees: the two-word respondent and the respondent who meanders and babbles. You should prepare yourself for both and come up with a plan to respond to each in a way that will allow the interview to continue productively.

To begin the formal interview, provide the person you are interviewing with an overview of your research. This will help to calm their nerves (and yours) and contextualize the interaction. Ultimately, you want the interviewee to feel comfortable and be willing to be open and honest with you, so it’s useful to start in a more casual, relaxed fashion and allow them to ask any questions they may have. From there, you can ease them into the rest of the questions.

As the interview progresses, avoid asking leading questions (i.e., questions that assume something about the interviewee or their response). Make sure that you speak clearly and slowly, using plain language and being ready to paraphrase questions if the person you are interviewing misunderstands. Be particularly careful when interviewing English-second-language speakers to ensure that you’re both on the same page.

Engage with the interviewee by listening to them carefully and acknowledging that you are listening to them by smiling or nodding. Show them that you’re interested in what they’re saying and thank them for their openness as appropriate. This will also encourage your interviewee to respond openly.


3. Not securing a suitable location and quality equipment

Where you conduct your interviews and the equipment you use to record them both play an important role in how the process unfolds. Therefore, you need to think carefully about each of these variables before you start interviewing.

Poor location: A bad location can result in your interviews being compromised, interrupted, or cancelled. If you are conducting physical interviews, you’ll need a location that is quiet, safe, and welcoming. It’s very important that your location of choice is not prone to interruptions (the workplace office is generally problematic, for example) and has suitable facilities (such as water, a bathroom, and snacks).

If you are conducting online interviews, you need to consider a few other factors. Importantly, you need to make sure that both you and your respondent have access to a good, stable internet connection and electricity. Always check beforehand that both of you know how to use the relevant software and that it’s accessible (sometimes meeting platforms are blocked by workplace policies or firewalls). It’s also good to have alternatives in place (such as WhatsApp, Zoom, or Teams) to cater for these types of issues.

Poor equipment: Using poor-quality recording equipment or using equipment incorrectly means that you will have trouble transcribing, coding, and analyzing your interviews. This can be a major issue, as some of your interview data may go completely to waste if not recorded well. So, make sure that you use good-quality recording equipment and that you know how to use it correctly.

To avoid issues, you should always conduct test recordings before every interview to ensure that you can use the relevant equipment properly. It’s also a good idea to spot check each recording afterwards, just to make sure it was recorded as planned. If your equipment uses batteries, be sure to always carry a spare set.

Where you conduct your interviews and the equipment you use to record them play an important role in how the process unfolds.

4. Not having a basic risk management plan

Many possible issues can arise during the interview process. Not planning for these issues can mean that you are left with compromised data that might not be useful to you. Therefore, it’s important to map out some sort of risk management plan ahead of time, considering the potential risks, how you’ll minimize their probability and how you’ll manage them if they materialize.

Common potential issues related to the actual interview include cancellations (people pulling out), delays (such as getting stuck in traffic), language and accent differences (especially in the case of poor internet connections), issues with internet connections and power supply. Other issues can also occur in the interview itself. For example, the interviewee could drift off-topic, or you might encounter an interviewee who does not say much at all.

You can prepare for these potential issues by considering possible worst-case scenarios and preparing a response for each scenario. For instance, it is important to plan a backup date just in case your interviewee cannot make it to the first meeting you scheduled with them. It’s also a good idea to factor in a 30-minute gap between your interviews for the instances where someone might be late, or an interview runs overtime for other reasons. Make sure that you also plan backup questions that could be used to bring a respondent back on topic if they start rambling, or questions to encourage those who are saying too little.

In general, it’s best practice to plan to conduct more interviews than you think you need (this is called oversampling). Doing so will allow you some room for error if there are interviews that don’t go as planned, or if some interviewees withdraw. If you need 10 interviews, it is a good idea to plan for 15. Likely, a few will cancel, delay, or not produce useful data.
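As a rough sketch of the arithmetic behind oversampling, the following Python snippet assumes a 30% attrition rate purely for illustration; your own estimate may differ:

```python
import math

def interviews_to_schedule(needed: int, expected_attrition: float) -> int:
    """Schedule enough interviews that cancellations and unusable
    recordings still leave `needed` usable interviews."""
    if not 0 <= expected_attrition < 1:
        raise ValueError("expected_attrition must be in [0, 1)")
    return math.ceil(needed / (1 - expected_attrition))

# Needing 10 usable interviews while expecting roughly 30% to fall
# through gives the "plan for 15" rule of thumb mentioned above.
print(interviews_to_schedule(10, 0.3))  # 15
```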

You should consider all the potential risks, how you’ll reduce their probability and how you'll respond if they do indeed materialize.

5. Not keeping your golden thread front of mind

We touched on this a little earlier, but it is a key point that should be central to your entire research process. You don’t want to end up with pages and pages of data after conducting your interviews and realize that it is not useful to your research aims. Your research aims, objectives and research questions – i.e., your golden thread – should influence every design decision and should guide the interview process at all times.

A useful way to avoid this mistake is by developing an interview guide before you begin interviewing your respondents. An interview guide is a document that contains all of your questions with notes on how each of the interview questions is linked to the research question(s) of your study. You can also include your research aims and objectives here for a more comprehensive linkage. 

You can easily create an interview guide by drawing up a table with one column containing your core interview questions. Then add another column with your research questions, another with expectations that you may have in light of the relevant literature, and another with backup or follow-up questions. As mentioned, you can also bring in your research aims and objectives to help you connect them all together. If you’d like, you can download a copy of our free interview guide here.
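A minimal Python sketch of such a guide written out as a CSV table follows; the column names and the example row are invented placeholders rather than Grad Coach’s actual template:

```python
import csv

# Each row links one core interview question back to the research
# question it serves, plus an expectation from the literature and a
# backup follow-up question.
guide = [
    {
        "core_question": "How do you plan a typical working day?",
        "research_question": "RQ1: How do staff manage workload?",
        "expectation_from_literature": "Routines are reported to reduce stress.",
        "follow_up": "Could you walk me through yesterday morning?",
    },
]

with open("interview_guide.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(guide[0].keys()))
    writer.writeheader()
    writer.writerows(guide)
```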

Recap: Qualitative Interview Mistakes

In this post, we’ve discussed 5 common costly mistakes that are easy to make in the process of planning and conducting qualitative interviews.

To recap, these include:

  • Not having a clear interview strategy/plan
  • Not having good interview techniques/skills
  • Not securing a suitable location and equipment
  • Not having a basic risk management plan
  • Not keeping your “golden thread” front of mind


Open access | Published: 13 September 2024

A qualitative analysis of health service problems and the strategies used to manage them in the COVID-19 pandemic: exploiting generic and context-specific approaches

Hania Rahimi-Ardabili, Farah Magrabi, Brenton Sanderson, Thilo Schuler & Enrico Coiera

BMC Health Services Research volume  24 , Article number:  1067 ( 2024 ) Cite this article


Abstract

Background

The COVID-19 pandemic disrupted health systems around the globe. Lessons from health system responses to these challenges may help design effective and sustainable health system responses for future challenges. This study aimed to: (1) identify the broad types of health system challenges faced during the pandemic; and (2) develop a typology of health system responses to these challenges.

Methods

Semi-structured one-on-one online interviews explored the experiences of 19 health professionals during COVID-19 in a large state health system in Australia. Data were analysed using constant comparative analysis, utilising a sociotechnical system lens.

Results

Participants described four overarching challenges: (1) system overload; (2) barriers to decision-making; (3) education or training gaps; and (4) limitations of existing services. The limited time often available to respond meant that specific and well-designed strategies were often not possible, and more generic strategies that relied on the workforce to modify solutions and repair unexpected gaps were common. For example, generic responses to system overload included working longer hours, whilst specific strategies utilised pre-existing technical resources (e.g. converting non-emergency wards into COVID-19 wards).

Conclusions

During the pandemic, it was often not possible to rely on mature strategies to frame responses, and more generic, emergent approaches were commonly required when urgent responses were needed. The degree to which specific strategies were ready-to-hand appeared to dictate how much a strategy relied on such generic approaches. The workforce played a pivotal role in enabling emergent responses that required dealing with uncertainties.


Background

The COVID-19 pandemic has posed a significant challenge to health systems worldwide, and many have struggled to cope, especially in the early stages [1]. The global consequences of COVID-19 on health systems are measured in loss or impairment of lives [2], healthcare professional burnout [3], reduced services, and delayed care [4, 5].

Unfortunately, it is highly probable that health systems will confront many more such crises, with climate change risks amongst these [ 6 ]. Understanding what was common to successful COVID-19 strategies, and what was shared amongst failed ones could be instructive as we prepare for the future. The pandemic affected every aspect of operations from planning and procurement to care delivery [ 7 , 8 ]. Services, processes and tools were repurposed or created ad hoc, often from the ground up [ 9 , 10 ]. Hospitals for instance, responded by repurposing existing facilities and wards, and implementing strategies to cope with sudden rises in patient numbers that overwhelmed existing critical care services such as intensive care units [ 11 , 12 ]. The initial phase of the pandemic witnessed immediate actions, some of which succeeded such as the development of mRNA vaccines [ 13 ] and others that failed such as certain COVID-19 contact tracing applications [ 14 ].

The challenges healthcare professionals experienced during the COVID-19 pandemic have received some attention in the research literature. For example, a 2021 systematic review examining the COVID-19 burden on healthcare workers from nine different countries identified four main challenges: inadequate preparedness, emotional challenges, insufficient equipment and information, and work burnout [15].

This study goes beyond describing the challenges faced, and examines the responses to these problems using the lens of sociotechnical system theory (STS) [ 16 ]. STS thinking sees system processes as the emergent outcome of interactions between people and technology [ 16 ].

Using first-hand stories from healthcare professionals, this study first describes the different health service problems experienced by health professionals during the pandemic. Next, we attempt to categorise the different strategies they employed to deal with these problems, exploring how people and technologies came together to craft responses to these problems during the pandemic. We develop a typology of responses that identifies the different roles of generic (general-purpose) and specific (local or health service-specific) approaches. Identifying the circumstances in which each of these strategy types was used may assist in preparedness and guide future crisis responses.

Methods

A series of semi-structured interviews explored the firsthand experiences of healthcare professionals in developing or enacting COVID-19 pandemic responses. We utilised a qualitative and interpretive approach, which aims to generate new hypotheses by exploring emergent relationships between descriptions of phenomena [17, 18]. This manuscript follows the COREQ (Consolidated Criteria for Reporting Qualitative Research) guidelines (see Additional file 1 for the checklist).

Participants and setting

Health system staff from a variety of professional groups and levels of seniority were recruited. Health professionals who had been involved in the pandemic response in New South Wales (NSW) were eligible for interviews. These included medical specialists (e.g. respiratory physicians), nurses and midwives, general practitioners (GPs), allied health workers (e.g. physiotherapists working in ICUs), health service executives and administrative staff, and paramedics. Participants were selected from a diverse range of health professions and services, including hospitals, public health organisations, and laboratories, in both public and private sectors as well as rural and urban settings. Our target sample size of 20 was informed by a systematic review of 14 qualitative studies that explored the experiences of healthcare professionals during the pandemic and concluded that on average, past studies reached data saturation with approximately 15 participants [ 19 ].

NSW is an Australian state with over eight million people. It includes about 9,600 full-time equivalent GPs [20] and 2,000 registered pharmacies [21], both governed by the federal government [22]. Further, NSW Health is the public health system for the state and includes NSW Ambulance, NSW Health Pathology, eHealth NSW, Health Protection NSW (public health legislation and surveillance), and Local Health Districts (LHDs) [23]. LHDs encompass hospitals, home hospitals, hospital pharmacies, aged health and disabilities, mental health, Aboriginal health, drug health, and public health including immunisation [24]. During 2020–21, NSW had a total of 228 public hospitals and 210 private hospitals [25], and over 150 pathology collection centres [26]. Participants in this study were from general practices and community pharmacies, as well as NSW Health, including NSW Ambulance, Health Pathology (including COVID-19 testing centres), eHealth NSW, hospitals, hospital pharmacies, and immunisation services.

The research team (E.C., B.S., T.S., F.M.) initiated purposive recruitment with a convenience sample [27, 28], identifying potential participants within their health system networks. Once participants were enrolled, we used snowball sampling, asking them to forward the study invitation email to others who might be interested. Participants did not have any pre-existing relationship with the interviewer (H.R.-A.), who invited them via email. Transcripts were deidentified by H.R.-A. before being shared with the core analysis team (E.C., F.M.).

Ethics and consent

Ethics approval was obtained from the Macquarie University Ethics Committee (ID: 11187) prior to commencing the study. Participants provided written informed consent prior to data collection.

Data collection

Data were collected between April and September 2022. At the time of the interviews, the COVID-19 vaccine was freely available to the community, and health services in NSW were providing in-person services in addition to tele-consultations. One-on-one interviews were conducted online using videoconferencing software (Zoom Video Communications, Inc. 2023), with each session lasting an average of 51 min (range: 27–73 min). One of the researchers (H.R.-A.), with experience in qualitative interviews, was responsible for conducting the interviews. Interviews were transcribed using an AI-based transcription tool (rev.com). A subset of four transcripts was manually checked for transcription accuracy (H.R.-A.). Data collection and preliminary analysis were concurrent, with emerging themes from the initial analysis reshaping subsequent interview questions and recruitment. Emerging themes about the use of different types of strategies led to new probe questions about strategy and whether such responses were new to the setting. The bulk of the analysis was conducted after data collection.

After the interviewer introduced herself and the reasons for conducting the research (identifying potential approaches for a crisis-ready health system), participants were asked about: (1) the challenges they faced while providing clinical services across all stages of the pandemic; (2) specific health service responses that they were involved with; and (3) what they did differently compared with pre-COVID-19 practices (see Additional file 2 for the initial version of the interview guide).

Data analysis

Data were analysed using constant comparative analysis [29]. Two early transcripts were open-coded line-by-line to identify emerging concepts and themes (by H.R.-A.). To ensure generalisability, these early codes were discussed and refined with a second analyst (E.C.). Codes were further refined and extended during the study by comparing similar categories across participants. An axial coding approach was taken, looking at connections between categories in terms of causation, strategies, consequences, context, and related conditions [29]. This process continued until all transcripts were coded. Both inductive and deductive approaches were utilised for coding and conceptualising the themes and frameworks.

Data coding was supported by QSR International NVivo® 12 software. Visualisation of codes, code connections and data was undertaken using Microsoft Excel. Some codes were grouped into more general constructs, and others were separated into several distinct codes. H.R.-A. created memos for each transcript, including key quotes cross-indexed back to the transcripts, and documented all process changes in an audit trail.
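To illustrate what spreadsheet-based visualisation of code connections can involve, here is a minimal sketch in Python. It is not the authors' actual workflow, and the codes and participants are invented; it simply shows one way a code co-occurrence matrix could be built before being explored in Excel.

```python
# A minimal sketch (illustrative only) of cross-tabulating coded interview
# data and exporting a code co-occurrence matrix to a spreadsheet.
import pandas as pd

# Hypothetical coding records: one row per coded passage.
coded = pd.DataFrame([
    {"participant": "P01", "code": "system overload"},
    {"participant": "P01", "code": "longer hours"},
    {"participant": "P02", "code": "system overload"},
    {"participant": "P02", "code": "ward conversion"},
    {"participant": "P03", "code": "longer hours"},
])

# Code-by-participant incidence matrix (counts of coded passages).
incidence = pd.crosstab(coded["code"], coded["participant"])

# Code co-occurrence: how often two codes appear for the same participant.
# The diagonal holds each code's own frequency.
cooccurrence = incidence @ incidence.T

cooccurrence.to_excel("code_connections.xlsx")  # requires openpyxl
print(cooccurrence)
```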

Reflexivity

Authors E.C., B.S. and T.S. (males) have a clinical background (medical doctors), and two are currently in clinical practice (B.S., T.S.). E.C. (PhD), F.M. (PhD, female) and H.R.-A. (PhD, female) were academic researchers at the time of the study. All authors are experienced health system researchers with prior experience in qualitative research. The interviewer and principal analyst (H.R.-A.), who had no previous contact with any of the participants, deidentified the transcripts before sharing them with other team members. Three participants were willing to provide feedback on the initial analyses.

Analytic framework

We analysed data to identify (1) the types of problems faced by participants or their health services during the COVID-19 pandemic; and (2) the types of health service responses employed to manage these problems. The analysis of health service responses was undertaken using the lens of STS theory, which emphasises that system processes are the inevitable consequence of interaction between people and technology, and that studying either in isolation leads to reductionism that fails to adequately explain how the real world works [16]. Thus, technological processes were analysed alongside human processes, each shaping the other in a continuous process of human-technology interaction [30, 31]. For example, if a participant discussed technology, we probed for human processes related to the technology. We sought to understand the context that led to different social and technical response patterns, with specific attention to human-technology interactions. Two researchers (H.R.-A. and E.C.) analysed the health service responses reported by interviewees, and differences in interpretation were resolved by discussion.

Results

Participant characteristics

Of 28 invited health professionals, 19 participated in our study. Participants who were involved in the pandemic response were GPs (n = 2), pharmacists (n = 2), specialists (e.g. emergency physician and respiratory physician) (n = 3), nurses and midwives (n = 3), allied health workers (e.g. physiotherapist and social worker working in ICU) (n = 3), pathologists (n = 2), a paramedic (n = 1), a clerical officer (n = 1) and public health implementation officers/managers (n = 2).

Health service problem types

Participants identified four broad classes of challenges faced by their health services during COVID-19. A summary of the challenges is provided below, and a detailed description with example quotes from participants is available in Additional file 3.

Health system overload: The ability of health services to meet the needs of the population as the pandemic unfolded was often compromised by an imbalance between the supply of and demand for resources. System overload was often the result.

Barriers to decision-making: In the rapidly unfolding pandemic, evidence was not being generated and distributed as quickly as health services required, and the communication pathways to share information were sometimes suboptimal.

Education and training gaps: The need to train the public and health service staff as services responded to the pandemic was triggered both by the arrival of new evidence and best-practice guidance needing to be shared widely, and by staff working in roles that were new to them.

Limitations of existing services: Faced with multiple and concurrent challenges, many existing services or care models were found to be inadequate.

Health service response types

Respondents provided a rich account of the different strategies employed to meet the problems faced during the early years of the pandemic, with multiple examples across all four problem types (Additional file 4 ).

High-level analysis of these responses identified that human organisational responses were apparently shaped by the degree of technology maturity and availability. We observed differences in the use of generic responses (applicable to many settings) and specific responses (designed to serve a given service, its unique characteristics and the problems it faced). In the sections below, we contrast examples of generic and specific responses for each problem type, to explore why these strategic differences might have been adopted. Example responses are cross-referenced to relevant quote IDs in brackets, indicating each code's cell address and item number in the Excel sheet (Additional file 4) as "([Cell address]#[item number when available])".
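As an aside, the quote-ID convention above is regular enough to be parsed mechanically. The sketch below is illustrative only (the study's real cross-references live in the Excel sheet in Additional file 4); it simply splits an ID such as "H03#11" or "H35#2,6,7" into its cell address and optional item numbers.

```python
# A minimal, illustrative parser for the quote-ID convention described above.
import re

QUOTE_ID = re.compile(r"^(?P<cell>[A-Z]+\d+)(?:#(?P<items>\d+(?:,\d+)*))?$")

def parse_quote_id(quote_id: str) -> dict:
    """Split a quote ID into its cell address and optional item numbers."""
    match = QUOTE_ID.match(quote_id)
    if match is None:
        raise ValueError(f"Unrecognised quote ID: {quote_id!r}")
    items = match.group("items")
    return {
        "cell": match.group("cell"),
        "items": [int(i) for i in items.split(",")] if items else [],
    }

print(parse_quote_id("H03#11"))     # {'cell': 'H03', 'items': [11]}
print(parse_quote_id("H35#2,6,7"))  # {'cell': 'H35', 'items': [2, 6, 7]}
print(parse_quote_id("H18"))        # {'cell': 'H18', 'items': []}
```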

Health system overload

Generic overload management strategies: Respondents described increasing the hours worked by staff (quote IDs H03#1, H20, H21), redeploying staff to critical services (quote ID H03#2), hiring new staff (quote IDs H03#3, H20#2, H21#4) or retraining existing staff (quote IDs H14#3, H15#2) to address imbalances between service supply and demand. Work pattern changes included delaying non-urgent care (quote IDs H03#9, H13#7, H66), altering staff/patient ratios in hospitals (quote IDs H03#12, H35#2,6,7), and fast-tracking patient discharge in tandem with home monitoring and support packs for COVID-19 patients (quote ID H03#11). Clinical staff working under difficult circumstances or longer hours were supported with access to accommodation, peer and mental health support (quote IDs H15#7, H21#3, H27#2, H35#5).

The choice of generic responses appeared to be driven by time constraints necessitating immediate solutions (quote ID H20#2). For example, outsourcing recruitment was more expedient than developing new internal processes: "they hired an external company to I guess source more [staff who] didn't have the experience that we had it was yeah that's what effectively led for those long [vaccine] lines… the expectation was the training would come in the same day… the workforce was ignored… it would be much helpful to know that like in two months we're wrapping up to be 1500 [vaccinations] yeah we would have tried extra hard to train more people [Pharmacist – 14]."

Specific overload management strategies: Overload strategies were sometimes quite specific to the health service experiencing stress. Batch testing of pooled samples for polymerase chain reaction (PCR) tests was undertaken to improve the throughput of otherwise overloaded laboratory services (quote ID H07#1). Rapid antigen tests (RATs) were used in hospitals to reduce the number of PCR tests for likely-negative individuals and for symptomatic positive patients, and allow ill patients to receive COVID-19 treatment without delay (quote ID H07#2):

What a rapid test would do with someone who is symptomatic would be that if you turn positive on a RAT you are COVID positive, so what that would end up doing was then that would decrease the amount of PCR that we were doing… If we had access to them [RATs] in Delta [variant phase] a testing capacity for PCRs would have dropped, identification of COVID positive patients would have been much faster, and that would have changed our treatment or discharge plans for these patients a lot quicker [ICU Nurse – 13].

Other specific responses were increasing hospital capacity by converting non-emergency wards into COVID-19 wards (quote ID H08#1), creating temporary wards (e.g. tents in hospital car parks) (quote ID H18), and facilitating hospital discharge by providing bus services to take patients home (quote ID H55). Emergency co-ordination centres assisted in identifying beds for patients across a region (quote ID H03#10), and respiratory clinics were set up in the community to support keeping patients at home (quote ID H60#2).

Barriers to decision-making

Generic decision-making strategies: Health services adopted several generic strategies to improve data capture and the dissemination of new evidence and local data. A respondent explained how a generic electronic medical record system (EMR) was customised to capture COVID-19-specific information (quote ID H56): "We had to make EMR kind of work for us [Emergency physician – 09]." The respondent and their colleagues "had to sort of come up with a process… to mark that you've had COVID and then not test you." General-purpose strategies required staff to be vigilant for problems during their application: "people were good at that. It was just realising that it [problem] was coming. So sort of working out. Oh hang on this is going to be a problem as we go forward. So what do we do? [Emergency physician – 09]"

Non-specific technologies such as email, Zoom, and Microsoft Teams were often used to enhance team communication. Communication processes were also enhanced by scheduling regular daily staff meetings at hospitals (quote IDs H09#1, H14#11), and weekly meetings for GPs to speak directly with those involved in pandemic management from the public health system (quote ID H09#4). Microsoft SharePoint was used to gather information about staff activities, such as where and when they treated COVID-19 patients, to assist with infection control and patient management (quote IDs H47, H48#2, H49).

Specific decision-making strategies: To provide local best practice guidance, expert support teams were created to assist with troubleshooting (quote ID H09#3), local protocols were developed and updated (potentially daily) (quote IDs H09#2, H68#1), and interdisciplinary collaborations (e.g. pharmacists working with nurses) developed local workflow models (quote ID H17#1). Such activities required significant effort (quote ID H09#2): "a working group that met like daily seven days a week for months and months and months to put together the [local protocol and updates] response [Transplant nephrologist – 18]."

Education and training gaps

Generic training strategies: Virtual training packages were used to maximise the dissemination of educational materials where local training was not feasible (quote ID H57). Peer support networks were developed to support information sharing where training was not available (quote IDs H03#4, H34#5, H15#2). Adaptations of such solutions required significant human effort, e.g. peer support meant senior staff had to be "there every step of the way [Emergency nurse – 13]."

Specific training strategies: Many of the responses designed to educate the health system workforce and the community were highly targeted (quote ID H15#2). Specific training programs were instituted to meet urgent needs, e.g. training clinicians in the use of PPE and hand hygiene. Consumers received highly targeted educational messages, such as requests to avoid unnecessary calling of ambulances, and simple social distancing rules and masking advice (quote IDs H22, H25, H26). Pharmacies provided in-house RATs for members of the public who did not understand the testing process (quote ID H25).

Limitations of existing services

Generic service strategies: The early stages of the pandemic saw a flurry of new or extended health services, often implemented under significant time and resource limitations. Periods of public health mandated lockdowns and work-from-home arrangements relied upon general-purpose technologies (quote IDs H3#13, H14#5). Virtual consultations were delivered over channels of varying sophistication, from telephone to online telecare products (quote IDs H13#3, H44#2, H52#1, H62#1). When there was a lack of supply or limited access to manufactured PCR kits for COVID-19, specialised experts used general PCR techniques to "try and put together a rapid PCR type of [solution/reagent] which they didn't have [Pathology manager – 26]" (quote ID H64#2).

Specific service strategies: Context-specific responses to service limitations included massive expansion of contact tracing capabilities, new measures such as routine COVID-19 surveillance of clinical staff (quote ID H14#6), and the use of QR (quick response) codes in public venues to support rapid contact tracing (quote ID H14#15). COVID-19 focussed respiratory clinics (quote ID H60#2) and PCR testing facilities appeared in the community for the first time. Specialist vaccination hubs and expanded community pharmacy services such as home delivery of medications were other specific responses. Hospital emergency services expanded their triage functions by creating specialised COVID-19 assessment areas with staff in full PPE, either using repurposed hospital space or in carparks outside the emergency departments or clinics (quote ID H03#11, H13#1,2, H14#13). Laboratories took advantage of manufactured PCR kits when available (quote ID H064#4): “you just opened the box and you put it together and you go [Pathology manager – 26].”

General to specific strategies. Many early responses to the pandemic involved the use of general strategies that sought to optimise responses from existing services (such as reconfiguring rostering or using general-purpose software):

The use of general solutions seemed to coincide with urgency and lack of time or resources to craft a more specific local solution (e.g. quote ID H20#2).

General solutions could thus also be seen to "buy time" whilst uncertainty remained about the best way forward and better, more specific solutions were being developed (e.g. quote IDs H03#13, H14#5). Pre-existing SARS infection control protocols were widely used early on and adapted to local circumstances or evolving knowledge. Generic information and communication tools were used to patch together information processes whilst more sophisticated solutions could be developed (e.g. quote IDs H13#3, H44#2, H52#1, H62#1) [32].

It is the nature of such generic responses that they are never a perfect fit to a specific task or context. Consequently, some adaptation or localisation is required to better meet these local needs. Such “fitting work” [ 33 ] often fell to local staff, and could take the form of workarounds (e.g. to make standard computer systems work in a new setting) or the addition of local changes (e.g. to a PPE protocol [ 34 ]) (e.g. quote IDs H47, H48#2, H49, H56).

The need for fitting work imposes additional load on staff (e.g. quote IDs H03#1, H20, H21) to "make things work here right now", and could be a contributor to the high levels of staff burnout reported throughout the pandemic.

Specific to general strategies. Highly local solutions to pandemic challenges were often needed where services were highly specialised. For example, the details of changes to the workflow for laboratory processing of high volumes of PCR tests would not have wide applicability beyond the laboratory setting.

The use of specific solutions appeared to coincide with unique local problems, or some capacity to develop new specific solutions whilst generic solutions "held the fort" (quote ID H64#2).

Nonetheless, general lessons from such specific responses can sometimes be drawn, e.g. in the approach taken to agree upon a specific solution and how it is subsequently communicated. For example, public health services had to rapidly expand their workforce in support of contact tracing, and their use of external recruitment agencies could be adopted by very different parts of the health system.

Discussion

This study has examined the challenges faced during the COVID-19 pandemic, and health system responses to those challenges, in Australia.

Clearly, the challenges faced during the pandemic were not uniform, and different health services found themselves better or worse prepared and capable of responding than others [35]. Our analysis of these responses identified what appeared to be two quite different response pathways that played distinct roles in crisis management: the adoption of general strategies that could be used across a wide variety of settings, and the use or creation of highly targeted, context-specific responses.

What lessons can be learned from these broad responses? Given the nature of crises, each will bring novel and likely unanticipated challenges.

When faced with requirements to dramatically alter the duties and workflows of existing health services, especially when constrained by time, resources or knowledge, health services can turn to general-purpose strategies to reconfigure their existing workforce and adopt ready-to-hand general-purpose technologies. Whilst not ideal, these strategies support quick responses and buy time for more targeted solutions to emerge.

Crisis preparedness could thus focus on understanding the range of general-purpose tools and processes that can quickly be brought to hand. Adaptation protocols might provide guidance on localisation processes that optimise speed, quality, impact on staff, or cost. For example, protocols might describe processes of problem identification, workaround development, and team communication approaches that facilitate these tasks. In developing such protocols, we should not forget that while some services must develop highly localised solutions, they nevertheless can be a rich source of lessons about general approaches to identifying issues, designing solutions, and enacting them effectively. During the pandemic, innovations commonly involved combining pre-existing services.

Theoretical frameworks for system resilience describe the importance of flexibility and adaptability to respond to unexpected and escalating situations [ 36 , 37 ]. Generic competencies are often team-based and include information management, communication and coordination, decision-making, and effect control [ 36 ]. Responses when managing the early phase of health emergencies should be simple and generic, such as using generic international guidance [ 38 ]. The Interactive Systems Framework (ISF) for dissemination and implementation distinguishes innovation-specific capacity and general capacity [ 39 ]. Various implementation frameworks suggest general organisational capacity building is an essential step in the early phase of implementation [ 40 ]. Such approaches emphasise that stabilising a situation and maintaining organizational function are key to managing uncertainty while developing specific responses.

Limitations

The problems and system responses reported in this study may lack representativeness because of the small sample size of interviewees, the focus on a single (albeit large) health system in Australia, and the potential for recruitment biases introduced by convenience and snowball sampling. Different nations had distinct experiences during COVID-19, such as variations in public health measures adopted, access to vaccines, lockdowns, government policy, and the health impacts of the virus on their populations. Thus, these findings may not be generalisable to other health system settings. Respondents detailed challenges and system responses with many examples. We anticipated achieving theoretical saturation with 20 participants but, during the analysis phase, did not do so. This may be due to the richness of innovations during COVID-19 or the diverse selection of participants [41, 42]. The failure to saturate suggests that interviewing additional participants would likely have identified further examples and issues. However, the concept of data saturation in qualitative studies is currently under debate [43].

Conclusions

Health services have a range of different response strategies available to them when faced with novel challenges, and selection of a strategy can be guided by the circumstances and the availability of ready-to-hand specific strategies. The workforce is pivotal in enabling emergent responses that require dealing with uncertainties. Recognising the important role that general-purpose strategies play when time is short (e.g. emergencies) and specific solutions are not yet available suggests that health services can invest in formalising protocols for solution design and focus on workforce support, including team communication and supporting solution implementation. Such capabilities should enhance health system preparedness for crises such as new pandemics or climate-change-triggered events. Much can also be learnt about the construction of context-specific solutions, including a deeper exploration of when to employ such approaches and how to support them, to best prepare for future crises.

Data availability

The complete datasets generated and analysed during the current study are not publicly available, because consent for public release was not obtained from study participants, but they are available from the corresponding author on reasonable request, subject to approval from the Macquarie University Ethics Committee. Part of the deidentified data is provided as a supplementary file.

Abbreviations

EMR: Electronic medical record system

GP: General practitioner

ICU: Intensive care unit

PCR: Polymerase chain reaction

PPE: Personal protective equipment

RAT: Rapid antigen test

STS: Sociotechnical system

References

1. Sheehan MC, Fox MA. Early warnings: the lessons of COVID-19 for public health climate preparedness. Int J Health Serv. 2020;50(3):264–70.
2. Silva S, Goosby E, Reid MJA. Assessing the impact of one million COVID-19 deaths in America: economic and life expectancy losses. Sci Rep. 2023;13(1):3065.
3. Lluch C, Galiana L, Doménech P, Sansó N. The impact of the COVID-19 pandemic on burnout, compassion fatigue, and compassion satisfaction in healthcare personnel: a systematic review of the literature published during the first year of the pandemic. Healthcare. 2022;10(2).
4. Schmidt AE, Rodrigues R, Simmons C, Steiber N. A crisis like no other? Unmet needs in healthcare during the first wave of the COVID-19 crisis in Austria. Eur J Pub Health. 2022;32(6):969–75.
5. Gonzalez J-P, Souris M, Valdivia-Granda W. Global spread of hemorrhagic fever viruses: predicting pandemics. Methods Mol Biol (Clifton, NJ). 2018;1604:3–31.
6. Coiera E, Braithwaite J. Turbulence health systems: engineering a rapidly adaptive health system for times of crisis. BMJ Health Care Inform. 2021;28(1).
7. Turner S, Niño N. Qualitative analysis of the coordination of major system change within the Colombian health system in response to COVID-19: study protocol. Implement Sci Commun. 2020;1:75.
8. Wensing M, Sales A, Armstrong R, Wilson P. Implementation science in times of Covid-19. Implement Sci. 2020;15(1):42.
9. Milella F, Minelli EA, Strozzi F, Croce D. Change and innovation in healthcare: findings from literature. Clinicoecon Outcomes Res. 2021;13:395–408.
10. Legido-Quigley H, Asgari N, Teo YY, Leung GM, Oshitani H, Fukuda K, et al. Are high-performing health systems resilient against the COVID-19 epidemic? Lancet. 2020;395(10227):848–50.
11. World Health Organization. Strengthening the health systems response to COVID-19: technical guidance #2: creating surge capacity for acute and intensive care, 6 April 2020. World Health Organization Regional Office for Europe; 2020.
12. Winkelmann J, Webb E, Williams GA, Hernández-Quevedo C, Maier CB, Panteli D. European countries' responses in ensuring sufficient physical infrastructure and workforce capacity during the first COVID-19 wave. Health Policy. 2022;126(5):362–72.
13. Hanisch M, Rake B. Repurposing without purpose? Early innovation responses to the COVID-19 crisis: evidence from clinical trials. R&D Manage. 2021;51(4):393–409.
14. White L, van Basshuysen P. Without a trace: why did corona apps fail? J Med Ethics. 2021;47(12):e83.
15. Koontalay A, Suksatan W, Prabsangob K, Sadang JM. Healthcare workers' burdens during the COVID-19 pandemic: a qualitative systematic review. J Multidiscip Healthc. 2021;14:3015–25.
16. Cooper R, Foster M. Sociotechnical systems. Am Psychol. 1971;26(5):467–74.
17. Thompson Burdine J, Thorne S, Sandhu G. Interpretive description: a flexible qualitative methodology for medical education research. Med Educ. 2021;55(3):336–43.
18. Hunt MR. Strengths and challenges in the use of interpretive description: reflections arising from a study of the moral experience of health professionals in humanitarian work. Qual Health Res. 2009;19(9):1284–92.
19. Billings J, Ching BCF, Gkofa V, Greene T, Bloomfield M. Experiences of frontline healthcare workers and their views about support during COVID-19 and previous pandemics: a systematic review and qualitative meta-synthesis. BMC Health Serv Res. 2021;21(1):923.
20. Australian Government Productivity Commission. Report on Government Services 2022. 2022.
21. Pharmacy Council of NSW. Annual Report 2018–2019. 2019.
22. Gordon J, Britt H, Miller GC, Henderson J, Scott A, Harrison C. General practice statistics in Australia: pushing a round peg into a square hole. Int J Environ Res Public Health. 2022;19(4).
23. NSW Ministry of Health. Our structure. 2023. https://www.health.nsw.gov.au/about/nswhealth/Pages/structure.aspx
24. NSW Government. Hospitals & services, Sydney Local Health District. 2024. https://slhd.health.nsw.gov.au/hospitals-services
25. NSW Health. Snapshot, Annual Report 2020–21. 2021.
26. NSW Health Pathology. https://pathology.health.nsw.gov.au/
27. Andrade C. The inconvenient truth about convenience and purposive samples. Indian J Psychol Med. 2020;43(1):86–8.
28. Campbell S, Greenwood M, Prior S, Shearer T, Walkem K, Young S, et al. Purposive sampling: complex or simple? Research case examples. J Res Nurs. 2020;25(8):652–61.
29. Chun Tie Y, Birks M, Francis K. Grounded theory research: a design framework for novice researchers. SAGE Open Med. 2019;7:2050312118822927.
30. Fox WM. Sociotechnical system principles and guidelines: past and present. J Appl Behav Sci. 1995;31(1):91–105.
31. Hoholm T, La Rocca A, Aanestad M. Controversies in healthcare innovation: service, technology and organization. Springer; 2018.
32. Coiera E. When conversation is better than computation. J Am Med Inform Assoc. 2000;7(3):277–86.
33. Coiera E. The standard problem. J Am Med Inform Assoc. 2023;30(12):2086–97.
34. Coiera E. Communication spaces. J Am Med Inform Assoc. 2014;21(3):414–22.
35. Mustafa S, Zhang Y, Zibwowa Z, Seifeldin R, Ako-Egbe L, McDarby G, et al. COVID-19 preparedness and response plans from 106 countries: a review from a health systems resilience perspective. Health Policy Plan. 2022;37(2):255–68.
36. Bergström J, Dahlström N, Dekker S, Petersen K. Training organisational resilience in escalating situations. In: Resilience engineering in practice. CRC Press; 2017. pp. 45–57.
37. Bhamra R, Dani S, Burnard K. Resilience: the concept, a literature review and future directions. Int J Prod Res. 2011;49:5375–93.
38. Crick M, McKenna T, Buglova E, Winkler G, Martincic R. Emergency management in the early phase. Radiat Prot Dosimetry. 2004;109(1–2):7–17.
39. Flaspohler P, Duffy J, Wandersman A, Stillman L, Maras MA. Unpacking prevention capacity: an intersection of research-to-practice models and community-centered models. Am J Community Psychol. 2008;41(3–4):182–96.
40. Meyers DC, Durlak JA, Wandersman A. The quality implementation framework: a synthesis of critical steps in the implementation process. Am J Community Psychol. 2012;50(3–4):462–80.
41. Sebele-Mpofu FY. Saturation controversy in qualitative research: complexities and underlying assumptions. A literature review. Cogent Social Sci. 2020;6(1):1838706.
42. Aldiabat KM, Le Navenec C-L. Data saturation: the mysterious step in grounded theory methodology. Qualitative Report. 2018;23(1):245–61.
43. Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualitative Res Sport Exerc Health. 2021;13(2):201–16.


Acknowledgements

The authors thank K-lynn Smith and Yvonne Zurynski for their valuable feedback on the manuscript.

Funding

The project was conducted with funding from the National Health and Medical Research Council: Partnership Centre for Health System Sustainability and Centre of Research Excellence in Digital Health (APP1134919).

Author information

Authors and affiliations

Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW, 2109, Australia

Hania Rahimi-Ardabili, Farah Magrabi, Brenton Sanderson, Thilo Schuler & Enrico Coiera

Department of Anaesthesia and Perioperative Medicine, Westmead Hospital, Sydney, NSW, Australia

Brenton Sanderson

Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, NSW, Australia

Thilo Schuler


Contributions

E.C., B.S., T.S. and F.M. conceptualised the study. H.R.-A. developed the study protocol and collected data; E.C. and H.R.-A. analysed the data. E.C. and H.R.-A. prepared the original draft, and all authors contributed to the final drafts of the manuscript.

Corresponding author

Correspondence to Enrico Coiera.

Ethics declarations

Ethics approval and consent to participate.

Ethics approval was obtained from the Macquarie University Ethics Committee prior to commencing the study (ID: 11187). All participants provided written informed consent prior to data collection.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Rahimi-Ardabili, H., Magrabi, F., Sanderson, B. et al. A qualitative analysis of health service problems and the strategies used to manage them in the COVID-19 pandemic: exploiting generic and context-specific approaches. BMC Health Serv Res 24, 1067 (2024). https://doi.org/10.1186/s12913-024-11499-7

Download citation

Received: 07 March 2024

Accepted: 28 August 2024

Published: 13 September 2024

DOI: https://doi.org/10.1186/s12913-024-11499-7


Keywords

  • Health services
  • Sociotechnical systems


BMJ Open, Volume 14, Issue 9

Barriers and facilitators to implementing imaging-based diagnostic artificial intelligence-assisted decision-making software in hospitals in China: a qualitative study using the updated Consolidated Framework for Implementation Research

  • Xiwen Liao 1, 2 (http://orcid.org/0000-0002-2349-8775)
  • Chen Yao 1, 2
  • Feifei Jin 3, 4 (http://orcid.org/0000-0002-4991-0158)
  • Jun Zhang 5
  • Larry Liu 6, 7
  • 1 Peking University First Hospital, Beijing, China
  • 2 Clinical Research Institute, Institute of Advanced Clinical Medicine, Peking University, Beijing, China
  • 3 Trauma Medicine Center, Peking University People's Hospital, Beijing, China
  • 4 Key Laboratory of Trauma Treatment and Neural Regeneration (Peking University), Ministry of Education, Beijing, China
  • 5 MSD R&D (China) Co., Ltd, Beijing, China
  • 6 Merck & Co Inc, Rahway, New Jersey, USA
  • 7 Weill Cornell Medical College, New York City, New York, USA
  • Correspondence to Chen Yao; yaochen.pucri@foxmail.com

Objectives To identify the barriers and facilitators to the successful implementation of imaging-based diagnostic artificial intelligence (AI)-assisted decision-making software in China, using the updated Consolidated Framework for Implementation Research (CFIR) as a theoretical basis to develop strategies that promote effective implementation.

Design This qualitative study involved semistructured interviews with key stakeholders from both clinical settings and industry. Interview guide development, coding, analysis and reporting of findings were thoroughly informed by the updated CFIR.

Setting Four healthcare institutions in Beijing and Shanghai and two vendors of AI-assisted decision-making software for lung nodules detection and diabetic retinopathy screening were selected based on purposive sampling.

Participants A total of 23 healthcare practitioners, 6 hospital informatics specialists, 4 hospital administrators and 7 vendors of the selected AI-assisted decision-making software were included in the study.

Results Within the 5 CFIR domains, 10 constructs were identified as barriers, 8 as facilitators and 3 as both barriers and facilitators. Major barriers included unsatisfactory clinical performance (innovation); lack of a collaborative network between primary and tertiary hospitals, and lack of information security measures and certification (outer setting); suboptimal data quality, and misalignment between software functions and goals of healthcare institutions (inner setting); and unmet clinical needs (individuals). Key facilitators were strong empirical evidence of effectiveness and improved clinical efficiency (innovation); national guidelines related to AI, and deployment of AI software in peer hospitals (outer setting); integration of AI software into existing hospital systems (inner setting); and involvement of clinicians (implementation process).

Conclusions The study findings contributed to the ongoing exploration of AI integration in healthcare from the perspective of China, emphasising the need for a comprehensive approach considering both innovation-specific factors and the broader organisational and contextual dynamics. As China and other developing countries continue to advance in adopting AI technologies, the derived insights could further inform healthcare practitioners, industry stakeholders and policy-makers, guiding policies and practices that promote the successful implementation of imaging-based diagnostic AI-assisted decision-making software in healthcare for optimal patient care.

  • Clinical Decision-Making
  • Implementation Science
  • Information technology

Data availability statement

Data are available on reasonable request. Study protocol and interview transcripts are available on request by contacting the corresponding author.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjopen-2024-084398


STRENGTHS AND LIMITATIONS OF THIS STUDY

Used the updated Consolidated Framework for Implementation Research to systematically identify barriers and facilitators.

Conducted semistructured interviews with a wide range of key stakeholders, both from clinical settings and industry.

Potential generalisability limitations due to purposive sampling of artificial intelligence software and the cluster of healthcare institutions and study participants in big cities.

The inclusion of perspectives from patients should be addressed in future research.

Introduction

Clinical decision-making (CDM) is a challenging and complex process. 1 Effective and informed CDM requires a delicate balance between the best available evidence, environmental and organisational factors, knowledge of the patient and comprehensive professional capabilities, such as clinical skills and experiences. 2 3 However, the ability of healthcare professionals to make such decisions is often restricted by the dynamic and uncertain nature of clinical practice. 3 In response, decision-making tools have been developed to enhance and streamline CDM for optimal healthcare outcomes. Traditional decision-making tools depend heavily on computerised clinical knowledge bases, supporting CDM by matching individual patient data with the knowledge base to provide patient-specific assessments or recommendations. 4 5 However, relying solely on knowledge-based tools has become insufficient to fulfil the growing need for accessible, efficient and personalised healthcare services, owing to inherent limitations such as time-consuming processes, disruptions to routine clinical workflow and challenges in constructing complex queries. 6 7

Non-knowledge-based decision support tools, on the other hand, harness artificial intelligence (AI) algorithms to analyse large and complex datasets and learn continuously for more accurate and individualised recommendations. 5 8 AI technology has been rapidly advancing since 2000, unleashing substantial potential to revolutionise the conventional CDM process and driving a fundamental shift in the healthcare paradigm. 9 The development and extensive growth of clinical real-world data (RWD) have made integrating AI technology into the healthcare sector a priority for both the healthcare industry and regulatory agencies.

The US Food and Drug Administration (FDA) has well recognised the use of AI techniques combined with clinical RWD for both drug and medical device development. In drug development, the FDA has reported a marked increase in the number of drug and biological application submissions with AI components across different stages of the life cycle. 10 AI algorithms have been actively integrated into biomarker discovery, eligible population identification and prescreening, clinical drug repurposing, and adverse event (AE) detection, ranging from drug discovery and premarket clinical studies to postmarket safety surveillance. 11 Notably, the number of studies on AE detection using natural language processing increased from 1 between 2005 and 2018 to 32 between 2017 and 2020. 11

In addition to the pharmaceutical industry, AI medical devices, whether intended for decision-making or other purposes, experienced rapid development between 2000 and 2018. 12 During this period, the growing sophistication of imaging equipment, such as CT scanners, led to an exponential increase in the volume of high-dimensional imaging data. This surge gradually shifted the focus of AI medical devices towards imaging-based diagnostic decision-making software, making lung nodule detection and diabetic retinopathy screening popular areas of research. 13 These AI systems, classified as software as medical devices, are designed to support diagnostic decision-making using clinical RWD, particularly imaging data generated by medical devices, leveraging AI technology to perform functions independently of hardware medical devices. 14–16 Notably, the FDA's approval of the first imaging-based AI-assisted medical device for detecting diabetes-related eye diseases in 2018 marked major progress towards the implementation of imaging-based diagnostic AI-assisted decision-making software. 17 In China, a major milestone was achieved with the approval of the first AI-related software for coronary artery fractional flow reserve by the National Medical Products Administration (NMPA) in 2020, 18 highlighting the continuous development and integration of AI technologies in healthcare practices. Following this breakthrough, AI-assisted decision-making software has risen in popularity in China: the number of regulatory approvals for AI software expanded from 1 in 2020 to 62 in 2022. 12 Key functions include disease identification, lesion segmentation, risk prediction and risk classification, covering therapeutic areas from cardiovascular diseases to various types of cancers. 12 19

Evidence from randomised controlled trials has demonstrated the safety and effectiveness of AI-assisted decision-making software across various therapeutic areas. For disease detection, these tools have facilitated the early identification of patients with low ejection fraction, improved detection rates for actionable lung nodules and increased the identification of easily missed polyps. 20–23 Furthermore, AI software has significantly reduced diagnostic times compared with senior consultants in diagnosing childhood cataracts, improving clinical efficiency. 24 In disease management, AI-assisted decision-making software has decreased treatment delays for cardiovascular diseases and lowered in-hospital mortality for severe sepsis, 25 26 contributing to improved healthcare quality. Additionally, studies have indicated that AI tools are effective across diverse populations, significantly enhancing follow-up rates for diabetic retinopathy in paediatric populations and improving referral adherence in adult patients within low-resource settings. 27 28 However, a prior scoping review found a disparity between substantial research investment and limited real-world clinical implementation. 19 Further, employing stratified cluster sampling from six provinces across China, a study revealed that only 23.75% of the surveyed hospitals had implemented AI-assisted decision-making software. 29 Accordingly, existing literature has emphasised the deficiency in implementation science expertise for understanding AI implementation efforts in clinical settings. 30 To bridge this gap, the current study aimed to explore the barriers and facilitators to implementing existing imaging-based diagnostic AI-assisted decision-making software in China through qualitative interviews, using the updated Consolidated Framework for Implementation Research (CFIR). The key strength of using the updated CFIR lay in its adaptability to capture both the breadth and depth of qualitative data, as well as its applicability to exploring the implementation of technology across various healthcare settings, further enhancing rigour and comprehensiveness. 31 32 In addition, the study provides tailored implementation strategies to address key barriers, exploiting the full potential of imaging-based diagnostic AI-assisted decision-making software for improved quality of care in China.

Methods

Innovation selection

A recent scoping review identified imaging-based diagnostic decision-making software using medical imaging data as the predominant type of AI-assisted decision-making software in China, especially software designed for lung nodule detection and diabetic retinopathy screening. 19 To manage the increasing submissions in these areas, the Center for Medical Device Evaluation of the NMPA issued two corresponding guidelines in 2022, delineating specific regulatory requirements. 33 34 Therefore, the current study purposively chose to investigate AI-assisted decision-making software for lung nodule detection and diabetic retinopathy screening, serving as two representative imaging-based diagnostic AI applications. Two lists were obtained from the NMPA database, from which one software product for lung nodules and another for diabetic retinopathy were ultimately selected (online supplemental tables 1 and 2). The vendors were selected through convenience sampling and their voluntary consent to participate. Characteristics of the selected AI-assisted decision-making software are shown in table 1.

Table 1 Characteristics of the selected AI-assisted decision-making software

Study setting and participants

With the aim of reflecting diversified perspectives from a wide range of stakeholders, participants from both clinical settings and industry were included. Specifically, clinical stakeholders comprised three different roles: healthcare practitioners, hospital informatics specialists and hospital administrators. Industry stakeholders were vendors of the selected AI-assisted decision-making software, who were further divided into three subroles: data scientists, database experts and algorithm engineers. While the perspectives of patients are valuable, the current study aimed to gather in-depth insights regarding practical and systemic challenges from stakeholders who are directly involved in the implementation, deployment and development of the selected AI-assisted decision-making software. The included stakeholders possessed either the operational or technical knowledge necessary to identify specific barriers and facilitators related to the implementation of the selected AI software in clinical settings. Thus, patients were not included as a stakeholder group in this study.

The selection of study participants involved a two-stage process, wherein initial screening occurred at the institutional level, followed by individual-level selection. For clinical stakeholders, two lists of hospitals that had implemented the selected AI-assisted decision-making software were acquired from the corresponding software vendors. Employing stratified purposive sampling, one tertiary hospital and one primary or secondary hospital were selected from each of the lists. All selected healthcare institutions were located in big cities, and included the Cancer Hospital of the Chinese Academy of Medical Sciences, Beijing Hospital, Beijing Shuili Hospital and the community healthcare centre of Qingpu, Shanghai. The snowball sampling technique was subsequently used to select study participants, by asking the Scientific Research Division to identify relevant clinical departments and related healthcare practitioners, health informatics specialists and hospital administrators. 35 Additionally, healthcare practitioners were stratified by their professional titles: junior, intermediate and senior. A similar two-stage process was applied to select stakeholders related to the AI-assisted software vendors. CY was responsible for contacting the hospitals and AI-assisted software vendors for study participation and interview arrangements.
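To make the stage-one logic concrete, here is a minimal sketch. Everything in it is hypothetical: the hospital names are invented, and `random.choice` merely stands in for the purposive judgement the research team actually applied when choosing sites.

```python
# A minimal, illustrative sketch of the two-stage institutional selection;
# the real lists came from the software vendors and selection was purposive.
import random

random.seed(7)  # fixed seed so the illustration is reproducible

vendor_lists = {
    "lung_nodule_vendor": [
        {"hospital": "Hospital A", "tier": "tertiary"},
        {"hospital": "Hospital B", "tier": "tertiary"},
        {"hospital": "Hospital C", "tier": "primary/secondary"},
    ],
    "retinopathy_vendor": [
        {"hospital": "Hospital D", "tier": "tertiary"},
        {"hospital": "Hospital E", "tier": "primary/secondary"},
        {"hospital": "Hospital F", "tier": "primary/secondary"},
    ],
}

# Stage 1: from each vendor's list, pick one tertiary and one
# primary/secondary hospital (the stratified element of the sampling).
selected_sites = []
for vendor, hospitals in vendor_lists.items():
    for tier in ("tertiary", "primary/secondary"):
        candidates = [h for h in hospitals if h["tier"] == tier]
        selected_sites.append({"vendor": vendor, **random.choice(candidates)})

for site in selected_sites:
    print(site)

# Stage 2 (not shown): snowball sampling of individuals within each site,
# with healthcare practitioners stratified by professional title
# (junior / intermediate / senior).
```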

Eligibility criteria for participant selection were as follows:

Inclusion criteria

Participants from clinical settings should either have user experience with the selected AI-assisted decision-making software or have experience in deploying or managing such software at the hospital level.

Participants from the industry should have working experience in the development of AI-assisted CDM software.

Participants should be formal staff at the stakeholder’s institution.

Participants should be at least 18 years old.

Participants should be able to sign the informed consent form voluntarily.

Exclusion criteria

Participants were excluded from the study if:

Participants could not sign the informed consent form.

Participants could not provide at least 15 min for the interview.

No sensitive information was collected during the interviews. To maintain confidentiality, all qualitative data were anonymised, and each participant who signed the informed consent voluntarily was assigned a unique identification number. Deidentified transcriptions and audio recordings were stored securely on a protected research drive with access restricted to the research team. The research data will be destroyed 5 years after the study's conclusion.

Theoretical framework: the CFIR

The CFIR, a well-established conceptual framework in implementation science, was originally developed in 2009 to systematically assess complex and multilevel implementation contexts for the identification of determinants impacting the successful implementation of an innovation. 36 The original CFIR is an exhaustive and standardised meta-theoretical framework synthesised from 19 pre-existing implementation theories, models and frameworks, which was modified in 2022 in response to user feedback and critiques. 37 The updated CFIR consists of 48 constructs and 19 subconstructs across 5 domains: innovation, outer setting, inner setting, individuals and implementation process. 37 It provided a fundamental structure for the exploratory evaluation of the barriers and facilitators to implementing imaging-based diagnostic AI-assisted decision-making software in China. Studies employing the CFIR in healthcare settings have extensively explored various technological areas, including the implementation of electronic health record systems, 38 telemedicine 39 and various innovative tools, such as the frailty identification tool and decision-support systems in emergency care settings. 40 41 The wide adoption of the CFIR across diverse healthcare contexts emphasised its value in capturing the complex dynamics involved in implementing technology innovations. The thorough and flexible application of the updated CFIR in data collection, analysis and reporting within the current study aimed to increase study efficiency, produce generalisable research findings to inform AI implementation practice and build a scientifically sound evidence base for tailoring implementation strategies to address key barriers.

Data collection procedures

Semistructured in-person interviews were conducted. Study-related data were collected after obtaining informed consent from the participants. Guided by the updated CFIR, different interview guides were developed (by CY, XL and FJ) for four distinct stakeholder roles: healthcare practitioners, hospital informatics specialists, hospital administrators and vendors of AI-assisted decision-making software ( online supplemental appendix 1 ). The interview guides were designed specifically to elicit participants' perspectives, experiences and insights in implementing or delivering AI-assisted decision-making software. Prior to initiating data collection, the interview guides were pilot tested with four non-study participants to ensure clarity and reliability, and necessary modifications were made based on the feedback. The interviews were conducted by two interviewers with extensive training and experience in qualitative interviewing (XL and FJ). Interview time, location, stakeholder role and basic demographic information were collected. Interviews continued until the constructs of the updated CFIR were adequately represented in the data, indicating data saturation. 42
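
Saturation judged against framework constructs can be made auditable by tracking, interview by interview, whether newly coded CFIR constructs still appear. Below is a minimal Python sketch of such a coverage check; it illustrates the general approach only, is not code used in the study, and the interview IDs and construct labels are invented.

# Illustrative sketch: judging data saturation by CFIR construct coverage.
# Interview IDs and construct labels are hypothetical, not study data.
coded_constructs = {
    "HP01": {"innovation evidence base", "innovation cost"},
    "HP02": {"innovation cost", "compatibility"},
    "VEN01": {"compatibility", "mission alignment"},
    "HP03": {"innovation cost", "compatibility"},  # yields nothing new
}

seen: set[str] = set()
for interview, constructs in coded_constructs.items():
    new = constructs - seen  # constructs not coded in any earlier interview
    seen |= new
    print(f"{interview}: {len(new)} new construct(s) {sorted(new)}")
# Saturation is suggested once successive interviews stop contributing
# previously unseen constructs.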

Data analysis

The interviews were audio recorded, transcribed verbatim in Chinese and coded independently by two coders (XL and FJ). Deductive content analysis was primarily used for data analysis. As a systematic and objective qualitative approach, content analysis is used to describe and quantify phenomena by deriving replicable and reliable inferences from qualitative data within the relevant context. 43 In deductive content analysis, data are coded against an existing theory or framework defined a priori. However, the current study also allowed new themes that did not fit any of the pre-existing CFIR constructs to emerge through inductive analysis of the data.

Steps of deductive data analysis were as follows 44 :

Selecting the unit of analysis

Each interview served as the unit of analysis, wherein conversational turns that contributed to the understanding of the research questions were identified as meaning units. A turn consisted of an uninterrupted segment, which could be a single word or a few sentences. Following independent transcription of the audio recordings by the two coders (XL and FJ), CY reviewed and compared the transcriptions, finalising the transcript to be analysed. To become immersed in the data, the two coders (XL and FJ) engaged in a thorough reading of the transcripts and made relevant annotations.

Developing a structured codebook

Before coding, a standardised, publicly available codebook template based on the original CFIR was employed and adapted to the study context collectively (XL, FJ and CY) ( online supplemental appendix 2 ). 45 Adaptations were multifaceted, including aligning the original CFIR domains and constructs with those of the updated CFIR, tailoring language to the implementation of imaging-based diagnostic AI-assisted decision-making software in China, refining operational definitions and developing eligibility criteria for each construct.
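
To make the shape of such a codebook concrete, the following is a minimal Python sketch of one possible programmatic representation. The five domain names follow the updated CFIR as described above; the example construct entry, its operational definition and its criteria are hypothetical and included purely for illustration.

# Illustrative sketch of a structured codebook adapted from the CFIR.
# Domain names follow the updated CFIR; the construct entry shown is a
# hypothetical adaptation to the imaging-based diagnostic AI context.
from dataclasses import dataclass, field

@dataclass
class Construct:
    name: str
    operational_definition: str  # tailored to the study context
    include_if: list[str] = field(default_factory=list)
    exclude_if: list[str] = field(default_factory=list)

CODEBOOK: dict[str, list[Construct]] = {
    "innovation": [
        Construct(
            name="innovation cost",
            operational_definition=("Statements about purchase, maintenance "
                                    "or opportunity costs of the AI software."),
            include_if=["mentions budget, pricing or insurance coverage"],
            exclude_if=["cost remarks unrelated to the AI software"],
        ),
    ],
    "outer setting": [],
    "inner setting": [],
    "individuals": [],
    "implementation process": [],
}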

Data coding

Two coders (XL and FJ), who were trained rigorously in using the codebook, performed the coding independently. To ensure reliability, 10% of the transcripts were randomly selected for pilot testing. The two coders independently applied the codebook to generate preliminary codes for each meaning unit and subsequently categorised them within the updated CFIR framework. On completion, a group discussion with CY or JZ was held, involving a comprehensive review and comparison of coding discrepancies to ensure consistency in the interpretation and categorisation of units. Disagreements were resolved through consensus, and any necessary adjustments to the operational definitions and eligibility criteria were promptly made.

The main coding process was then structured into several iterative rounds to ensure coding consistency. In each round, each coder coded four distinct transcripts individually and a fifth transcript collaboratively, addressing any inconsistencies through comprehensive discussion until a consensus was reached. ATLAS.ti (V.23.1.1) was used to identify, label and categorise themes and patterns within the qualitative data. 46 Additionally, it facilitated data management, ensuring the storage, systematic organisation and retrieval of interview transcripts.
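
The study resolved coding discrepancies through discussion and consensus; a chance-corrected agreement statistic such as Cohen's kappa is a common complementary check on a pilot sample. The sketch below illustrates that optional step only; it is not an analysis reported by the study, and the construct labels are hypothetical.

# Illustrative sketch: Cohen's kappa for two coders on a pilot sample.
# Not an analysis reported by the study; construct labels are hypothetical.
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of units coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Construct labels the two coders assigned to the same four meaning units:
coder_x = ["innovation cost", "compatibility", "need", "compatibility"]
coder_y = ["innovation cost", "compatibility", "need", "mission alignment"]
print(f"kappa = {cohens_kappa(coder_x, coder_y):.2f}")  # kappa = 0.67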

Reporting the data by category

Identified categories across the five domains of the updated CFIR were reported descriptively with direct quotes from participants.

Trustworthiness

The study employed several methodological strategies to ensure rigour and reliability. Multiple data sources and perspectives were incorporated to achieve triangulation, including distinct stakeholders directly involved in the implementation, deployment and development of the selected AI-assisted decision-making software. Throughout the data coding process, peer debriefing was employed. Two coders independently analysed the transcripts and collaboratively discussed the interpretations and coding decisions to reach a consensus. Moreover, an audit trail was conducted to ensure transparency, with thorough documentation of study processes such as the prespecified research protocol, deidentified transcriptions, informed consent forms, interview codebooks, typed notes, audio recordings and analyses of qualitative data. External audits were further performed to validate credibility. Experts independent of the study reviewed the study protocol, interview guides and findings, providing objective suggestions.

Patient and public involvement

There was no patient or public involvement in this study. Participants were only invited to participate in qualitative interviews.

Characteristics of study setting and participants

Interviews were conducted between May and August 2023. Table 2 provides an overview of the characteristics of the selected healthcare institutions. A total of 43 participants were invited for study enrolment, and 40 (93.0%) agreed to participate, including 23 healthcare practitioners, 6 hospital informatics specialists, 4 hospital administrators and 7 vendors of the selected AI-assisted decision-making software ( table 3 ). Non-participants included two senior healthcare practitioners and one vendor of AI-assisted decision-making software. Most participants held at least a master’s degree, and 57.5% of them were male.

Table 2 Characteristics of the selected healthcare institutions

Table 3 Demographic characteristics of study participants

Barriers and facilitators to implementing imaging-based diagnostic AI-assisted decision-making software

Among the 48 CFIR constructs and 19 subconstructs, 21 across 5 domains were found to be relevant in the context of implementing imaging-based diagnostic AI-assisted decision-making software in China ( figure 1 ). Specifically, 10 were identified as barriers, 8 as facilitators and 3 as both barriers and facilitators ( tables 4 and 5 ).


Figure 1 Identified CFIR constructs and their impact on the implementation of imaging-based diagnostic AI-assisted decision-making software in China. '−' indicates barriers; '+' indicates facilitators. AI, artificial intelligence; CFIR, Consolidated Framework for Implementation Research.

Table 4 Barriers to implementing imaging-based diagnostic AI-assisted decision-making software using the updated CFIR

Table 5 Facilitators to implementing imaging-based diagnostic AI-assisted decision-making software using the updated CFIR

Innovation evidence base (+): strong empirical evidence of effectiveness

The innovation evidence base was suggested to be a key determinant facilitating implementation. Participants in this study reported evidence that the clinical performance of the AI software was comparable to, or even surpassed, that of human clinicians, leading to decreased diagnostic time, reduced risk of medical errors and enhanced patient outcomes. As the AI software supported healthcare practitioners in making critical judgements regarding patient care, the robust clinical findings of efficacy and accuracy were instrumental in fostering trust in, and acceptance of, implementation among participants.

Before we started using the AI software in our department, I checked out articles published in some highly respected peer-reviewed journals. They reported that clinical performance of AI was comparable to human performance. This encourages me to start using the software.—intermediate clinician

Innovation relative advantage (±)

Improved clinical efficiency (+).

One of the crucial benefits of using AI-assisted decision-making software was improved clinical efficiency. Key functions of the AI-assisted software included the detection of anomalies and at-risk lesions, automated volumetric measurements and classification of disease severity. Study participants noted that the average interpretation time for a human reader was markedly longer than that of the AI-assisted software alone or of concurrent reading with the software, regardless of the level of clinical experience and the complexity of diseases.

The AI software makes decisions very quickly, as compared to my decision-making time. It greatly supports my clinical judgement and improves routine clinical efficacy.—junior clinician

Unsatisfactory clinical performance (−)

However, the real-world clinical performance of the AI-assisted decision-making software remained suboptimal. Despite a strong evidence base, compromised accuracies, high false-positive rates, overestimation of lesion size and misclassification of lesion types were commonly reported by study participants. The participating healthcare practitioners highlighted the need to improve clinical performance of the AI-assisted software, particularly under complex real-world clinical conditions.

In my daily practice, I consider my clinical judgment as the gold standard. The AI software’s performance, especially in distinguishing between part-solid and solid lung nodules, doesn’t meet my expectations. The performance of the software should be improved for better usability.—intermediate clinician

Innovation adaptability (−): lack of adaptability of generated reports

The generation of a diagnostic report was recognised as a pivotal function of AI-assisted decision-making software, providing diagnostic recommendations based on the analysis of patient data. The direct and automatic integration of findings into the diagnostic report was a time-saving aspect for healthcare practitioners in terms of medical documentation. However, study participants perceived that diagnostic reports generated by the AI software lacked customisation options necessary to align with the standard documentation practices of healthcare practitioners. This limitation, along with the software’s insufficient flexibility to fully comply with the hospital’s documentation standards, hindered its seamless incorporation into clinical workflow.

Personally, I don’t use the reports generated by the software. The automatically generated repots don’t align with my documentation style or the hospital’s requirement, and it doesn’t allow me to change any elements within the report. I prefer to write the reports by myself.—senior clinician

Innovation trialability (+): AI software trialability

The ability to test or experiment with AI-assisted decision-making software before full implementation was determined as a pivotal factor facilitating successful implementation. Trialability allowed participating healthcare practitioners to assess the AI software on a smaller scale, supporting their familiarity with the new innovation. More importantly, a trial period enabled evaluation of the software’s compatibility with existing workflows, identification of potential implementation barriers, and assessment of the clinical performance and reliability in real-world settings.

As we prepared for the official implementation, our department pilot tested the AI software for several weeks. This allowed me to personally experience the software, making comparisons with our standard clinical practices.—intermediate clinician

Innovation complexity (+): ease of use

The perceived ease of use promoted successful integration of AI-assisted decision-making software into healthcare institutions. According to the participants, the AI software featured user-friendly interfaces designed in a straightforward manner for easy navigation. In addition, the AI software generated automated, clear and comprehensible output to support medical decisions, contributing to a smooth learning curve that was conducive to quick adoption and acceptance by study participants.

The good thing is that AI software is straightforward and easy to use. I learned how to use it with minimal hassle because it provided clear and understandable output with just a few mouse clicks.—intermediate clinician

Innovation cost (−): financial burden of AI software

Currently, the cost of AI-assisted decision-making software is not covered by any insurance plans. Healthcare institutions sometimes face financial constraints when acquiring the software and managing ongoing maintenance costs. Participating hospital administrators, especially those from primary and secondary hospitals, expressed the need to reallocate budgetary resources from other areas, such as staff resources and infrastructure, to accommodate the high cost associated with the AI software. The perceived lack of cost-effectiveness discouraged further investment.

The insurance plans don’t cover the cost of AI software now, and patients are not paying for it either. Cost-effectiveness is one of our top priorities, and we won’t spend a lot money on the software.—hospital administrator

Outer setting

Partnerships and connections (−): lack of a collaborative network between primary/secondary and tertiary hospitals

AI-assisted decision-making software was valuable in early disease detection and intervention. However, study participants from primary care reported that the absence of partnerships and communication channels with tertiary hospitals created challenges for patients with positive diagnoses. These challenges included delays in receiving informed referrals to tertiary hospitals, potentially resulting in late medical intervention and discontinuity of care. Further, the lack of established connections impeded the sharing and exchange of patient data between hospitals. Tertiary hospitals received incomplete or insufficient patient profiles from primary hospitals, contributing to an inadequate understanding of the patient's condition and history. In such scenarios, patients might undergo redundant diagnostic tests at different facilities, leading to both patient inconvenience and increased healthcare costs.

For the efficient and effective utilisation of AI-assisted decision-making software in healthcare settings and optimal patient care, participants highlighted the importance of establishing a mechanism to refer and follow up with patients who have positive or indeterminate disease findings from primary hospitals to tertiary hospitals.

Patients diagnosed at our hospital with positive or indeterminate results usually need to be referred to a tertiary or specialized hospital for further treatment. However, ensuring patient compliance is a challenge. Partnering with those hospitals and establishing some referral and follow-up mechanisms will be beneficial.—intermediate clinician

Policies and laws (±)

National guidelines related to AI (+)

With the rapid advancements in AI technology, China released a series of national policies and guidelines to rigorously promote the interdisciplinary integration of AI into the healthcare sector. 47–49 In response, clinical institutions took necessary steps forward, proactively incorporating AI-assisted decision-making software into conventional healthcare practices. To date, well-established regulatory frameworks have clearly outlined and regulated the development, approval and classification of AI-assisted decision-making software as a medical device. Compliance with these regulations increased the confidence of study participants in the implementation of AI-assisted software.

We decided to bring this software in our hospital because our country is promoting the widespread adoption of AI, and it’s also the trend across different economic sectors nationwide. There are several national guidelines supporting its development and use in healthcare system, which increased our confidence in implementing the software.—hospital administrator

Lack of information security measures and certification (−)

Conversely, to ensure data security and protect patient privacy, legislation such as the cybersecurity law mandated a multilevel protection scheme. Accordingly, the 'Information Security Technology—Baseline for Classified Protection of Cybersecurity' defined four levels of security requirements, providing baseline guidance for securing platforms and systems handling personal information. Information systems in healthcare institutions must comply with level 3 standards, the highest for non-banking institutions, given the sensitivity of patient electronic data. Consequently, participating vendors of AI software seeking collaboration with hospitals were required to have robust information security measures and level 3 security certification as prerequisites to fulfil safety obligations. The absence of such measures and certification, not uncommon among innovative technology companies, posed barriers to successful implementation.

In order to ensure the confidentiality of patient electronic data and comply with cybersecurity protection requirements, our hospital can’t implement AI decision-making software without robust data security measures or Level 3 security certification.—hospital informatics specialist

External pressure–market pressure (+): deployment of AI software in peer hospitals

Study participants noted that the implementation of AI-assisted decision-making software in peer hospitals, such as those within the same academic affiliation, fostered a competitive atmosphere and exerted a form of peer pressure, facilitating its widespread implementation. This was especially evident in the context of China’s medical informatisation development, where hospitals without AI software implementation felt compelled to stay competitive with their peers to gain a strategic advantage.

We’ve learned that some peer hospitals have already been using such software for quite some time. We are late adopters.—hospital informatics specialist

Inner setting

Relational connections (−): lack of collaboration between specialised and non-specialised clinical departments

When patient care involved multiple clinical departments, ambiguity arose regarding the authorisation of reports generated by AI-assisted decision-making software. These reports were intended to complement the clinical decisions made by human clinicians who ultimately held the responsibility. However, non-specialists faced challenges in endorsing automated reports due to differences in clinical expertise, varying criteria for report validation, and concerns regarding liability. On the other hand, specialists often regarded AI-generated reports as less reliable than their own specialised assessments, potentially leading to reluctance in signing the reports.

Study participants emphasised the importance of establishing an interdepartmental collaborative network between specialty and non-specialty clinical departments and providing clear definitions of the roles and responsibilities within these departments to address this barrier.

As doctors without specialized expertise in ophthalmology, my colleague and I may not be authorized to sign the clinical report produced by the AI software. A collaborative network or mechanism with the department of ophthalmology will be helpful.—intermediate clinician

Communications (+): regular communication channels within department

Participants suggested that establishing regular communication channels, like weekly meetings, ensured that all members of the clinical department stayed informed about AI-assisted decision-making software. It also provided a platform for educational opportunities, such as workshops, to keep healthcare practitioners well informed and up to date. Open communication effectively addressed concerns and questions related to the implementation of AI software, fostering confidence and competence among healthcare practitioners.

I’m glad that I have the opportunity to discuss personal experience with my colleague during our weekly meetings, where case studies are shared and insights are exchanged. The open dialogue enhances our knowledge and improves my proficiency and confidence.—junior clinician

Compatibility (±)

Suboptimal data quality (−)

The effective performance of AI-assisted decision-making software relied on the availability of high-quality and accurate source data. During software development, the machine learning and deep learning algorithms used to analyse and interpret imaging data were trained on datasets that underwent meticulous cleaning and curation, ensuring the removal of poor-quality data containing imaging noise and artefacts before analysis. However, in real-world clinical settings, various factors, such as equipment limitations, patient motion and the varying proficiency of technicians, potentially introduced imperfections in imaging data. As a result, participants pointed out that the AI-assisted software was not highly compatible with, nor adequately trained on, data collected during routine clinical practice.

The real-world imaging quality is often less than optimal, which can lead to inaccuracy or failure of AI diagnosis. We can’t always ask the patient to redo the examination for better quality data, in consideration of their time and healthcare cost.—senior clinician

Integration of AI software into existing hospital systems (+)

In contrast, the integration of AI-assisted software into established hospital systems, such as the picture archiving and communication system (PACS), streamlined clinical workflows and facilitated the effective implementation. The compatibility with PACS enabled interoperability between AI-assisted software and healthcare information systems, providing healthcare practitioners with a familiar working environment and mitigating interruptions in workflow.

Our AI software integrates with the PACS. Clinicians don’t have to learn a new standalone system; instead, they can access AI-generated insights directly within their existing PACS environment, minimizing any disruptions to their workflow.—vendor of AI-assisted software
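
Integration details differ by vendor and hospital system, but one widespread interoperability pattern is to package AI findings as standard observation messages that a hospital integration engine can route alongside PACS. The Python sketch below illustrates that general pattern with a minimal HL7 v2 ORU^R01 message; it is not the study vendors' actual interface, and every identifier and field value is hypothetical.

# Illustrative sketch of one common interoperability pattern: wrapping an
# AI finding as a minimal HL7 v2 ORU^R01 observation message. This is not
# the study vendors' interface; all identifiers and values are hypothetical.
from datetime import datetime

def build_oru_message(patient_id: str, study_id: str,
                      finding: str, probability: float) -> str:
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|AI_CAD|VENDOR|PACS|HOSPITAL|{ts}||ORU^R01|{study_id}|P|2.5",
        f"PID|1||{patient_id}",  # deidentified patient reference
        f"OBR|1|{study_id}||CHEST_CT^Chest CT lung nodule analysis",
        f"OBX|1|TX|FINDING||{finding}||||||F",  # textual AI finding
        f"OBX|2|NM|PROBABILITY||{probability:.2f}||||||F",  # numeric confidence
    ]
    return "\r".join(segments)  # HL7 v2 segment separator

print(build_oru_message("P000123", "ST-2023-0456",
                        "Solid nodule, right upper lobe, 6 mm", 0.87))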

Mission alignment (−): misalignment between software functions and goals of healthcare institution

A misalignment between the functions of AI-assisted decision-making software and core hospital missions, especially for comprehensive tertiary hospitals, was revealed by study participants. Currently, diagnostic AI-assisted software predominantly supports the diagnosis of general and non-complicated diseases, which diverges from the main strategic objectives of tertiary hospitals dedicated to managing complex medical conditions and delivering high-level care through specialised expertise. Alternatively, AI software appeared more suitable for primary hospitals, where it could be used for general disease diagnosis and population-level screening. Tertiary hospitals prioritised other initiatives perceived to be more critical to their mission. This prioritisation further contributed to the reluctance among healthcare practitioners to embrace AI-assisted software, as they regarded the introduction of AI software as a distraction from the hospital's core mission.

At times, it’s difficult for us to establish collaborations with high-level tertiary hospitals. These hospitals often have highly experienced clinicians, focusing on the improvement of care quality for complex diseases and rare conditions. They have the perception that our AI software may not perform well in their setting. Instead, they suggest that our software may be better suited for primary hospitals where initial diagnoses take place.—vendor of AI-assisted decision-making software

Available resources–materials and equipment (−): lack of necessary medical supplies

The availability of essential medical supplies was integral to the successful implementation of AI-assisted decision-making software that relied on medical imaging data as the primary data source for accurate assessment. In primary and secondary hospitals, where resources were relatively scarce, the limited access to equipment, like CT scanners, hindered the implementation of AI-assisted software.

The implementation of AI decision-making software is not possible at hospitals without necessary medical supplies like CT scanners. —hospital informatics specialist

Access to knowledge and information (−): lack of adequate training

For advanced technologies like AI-assisted CDM software, participating healthcare practitioners sometimes lacked the knowledge and information required for effective use. Inadequate training possibly contributed to a reluctance to adopt the technology, owing to unfamiliarity with the software's complete functionality and challenges in its practical application.

I believe that I haven’t received thorough training on using the AI software. In fact, I’ve explored it on my own, and I’m not completely aware of all its functions.—intermediate clinician

Individuals

High-level and mid-level leaders (+): engagement of hospital administrators (+)

Participants in the study indicated that effective implementation of AI-assisted decision-making software was facilitated by the hospital’s active leadership engagement and promotional initiatives from the hospital to the department level. Hospital administrators took proactive steps to align the AI-assisted software with the institution’s long-term strategic goals through the initiation and oversight of pilot programmes. The endorsement and active support at the hospital level greatly fostered a collaborative environment among the clinical department, information technology department and vendor of AI-assisted decision-support software, positioning the AI-assisted software as an integral component of the hospital.

Strong support from the top level, especially from our hospital administrators, really makes a difference in introducing AI software and running it smoothly. They ensure its fit with the hospital through a pilot program and rigorously and effectively promote multi-stakeholder communication and collaboration.—hospital informatics specialist

Engagement of department head (+)

At the departmental level, leaders, such as department heads, who supported AI-assisted software, actively championed its implementation. They cultivated an atmosphere of support and knowledge-sharing within the department through the organisation of workshops and seminars, stressing the prospective clinical benefits of implementation. Beyond intradepartmental communication, they facilitated efficient interdepartmental communication with the information technology department to ensure seamless integration of AI-assisted decision-making software into the real-world clinical setting.

Our department head actively supports the implementation of AI software by integrating discussions about relevant knowledge and experiences into our weekly meetings, shedding light on the potential clinical benefits. In fact, they play a very important role in facilitating the integration of AI software into the existing PACS, making the entire implementation process much more efficient and effective.—junior clinician

Need (−): unmet clinical needs

Study participants revealed that AI-assisted decision-making software failed to meet the diverse clinical needs of healthcare practitioners. Currently, the underlying AI algorithm was predominantly designed and trained to address general and non-personalised clinical needs. Clinicians perceived AI-assisted software as insufficient in cases that were complex and multifaceted, requiring a comprehensive approach and an in-depth understanding of the patient’s medical history. Incorporating customisation options, enhancing adaptability in AI algorithms and demonstrating a commitment to ongoing improvement were essential to ensure that AI-assisted decision-making software aligned with the disparate needs of healthcare practitioners across various specialties and clinical settings.

The AI software we have is good for the basics, but we definitely expect more. Currently, its functions are too simplified, and it struggles in tricky and complex situations where you need a deep dive into the patient’s history.—senior clinician

Capability (−): limited capability in understanding the AI reasoning mechanism

Participating healthcare practitioners faced challenges in implementing AI-assisted decision-making software in clinical practice due to a limited capability in understanding AI algorithms. The deficiency in the necessary knowledge and expertise led to difficulties in comprehending the rationale behind the AI's recommendations and decisions. This lack of clarity contributed to distrust of, and reluctance towards, the implementation of AI-assisted software. Participants suggested that addressing case-specific reasoning and providing global transparency, such as on the algorithm's functionality, strengths and limitations, would help open the 'black box' of AI technology.

I sometimes find it hard to trust and embrace the software’s recommendations. I struggle with the complexity of the underlying rationale, since the software provides recommendations based on these algorithms. It’s not clear to me what’s inside the black box, like how it works, what its weakness and strengths are, etc. Clarifications on those factors would be helpful.—senior clinician

Implementation process

Engaging–innovation recipients (+): involvement of clinicians

It was reported that active engagement substantially facilitated the implementation of AI-assisted decision-making software, particularly through the active involvement of key stakeholders during the implementation process. Specifically, the pilot testing phase was used to collect valuable insights and suggestions from users, determining limitations and identifying areas for improvement that closely aligned with their clinical needs and workflow. Healthcare practitioners, in turn, were empowered by actively shaping the software's functionality and streamlining the implementation process.

During the pilot testing phase, we collaborated with the entire department to answer any questions and gather suggestions. This active engagement of the clinicians was helpful not only for us to continuously improve the software, but also for the clinicians to feel involved and make an impact; it’s a win-win situation.—vendor of AI-assisted decision-making software

Reflecting and evaluating (−): lack of feedback incorporation

Reflecting and evaluating were central components of the continuous feedback-driven improvements that promoted the seamless integration of AI-assisted decision-making software into clinical settings. However, study participants noted that their suggestions and qualitative feedback, shared during the pilot testing phase, were not adequately reflected and implemented for process enhancement. Furthermore, there was a notable absence of quantitative assessment of the clinical performance of the AI-assisted software following its implementation. The absence of informative reflection on provided feedback and a structured evaluation process contributed to unaddressed challenges and the frustration of participating healthcare practitioners who felt that their inputs were not sufficiently valued.

My colleague and I provided feedback and suggestions about this AI software during the pilot testing phase, but we see no corresponding actions taken by the vendors, which is disappointing.—intermediate clinician

I don't think there is any systematic evaluation mechanism related to the clinical performance of AI software at our hospital. It is, however, important to periodically and systematically evaluate the performance of the software to make it more accurate and usable.—hospital informatics specialist

Comparison of stakeholder perspectives

It should be noted that the perceptions regarding the selected AI-assisted decision-making software varied considerably among different stakeholder roles. Recognising these unique perspectives is essential to the development of effective implementation strategies that address the varied concerns and priorities of each stakeholder group.

Clinicians, as the primary users, weighed the software's potential benefits against its limitations. Junior clinicians, who had limited clinical experience, generally held positive attitudes towards the implementation, highlighting the software's ability to support clinical judgement and enhance routine clinical efficiency. While recognising the value of AI implementation, intermediate clinicians, drawing on more extensive hands-on use, offered practical perspectives and emphasised the need for strong interdepartmental collaboration, adequate training and referral mechanisms to tertiary or specialised hospitals for patients with positive or indeterminate disease findings. Senior clinicians provided the most critical feedback, expecting higher standards and improved performance in clinical effectiveness, reliability and transparency, particularly in complex clinical scenarios. On the other hand, hospital administrators focused on financial implications, such as cost-effectiveness and budgetary constraints, and informatics specialists highlighted the importance of robust information security measures. Moreover, the selected vendors underscored the necessity of aligning the AI functions with the mission of the healthcare institution to ensure successful implementation.

To the best of our knowledge, this study was the first qualitative assessment to leverage a well-established implementation framework to systematically guide the identification of barriers and facilitators of AI-assisted decision-making software in China's healthcare system. The implementation of AI-assisted decision-making software in clinical practice is characterised by the inherent complexity and dynamic nature of both AI technology and the healthcare environment. Previous literature attempted to synthesise and understand relevant determinants, with minimal application of theories or frameworks from implementation science, particularly in developing countries. 30 50 The use of the updated CFIR played a fundamental role in understanding the context of implementation and establishing a strategic roadmap, consistently and efficiently producing collective and generalisable knowledge for the development of context-specific implementation strategies tailored to China's healthcare system. 51 The dynamic and continuous interaction among the five domains of the updated CFIR collectively shaped the outcome and effectiveness of imaging-based diagnostic AI-assisted software implementation. 36 52 The current study validated several barriers identified in prior research across diverse clinical settings, including suboptimal clinical performance, 53–55 compromised real-world data quality, 56–58 insufficient training, 54 59 60 deficits in transparency and trust, 60–62 financial constraints, 54 57 insufficiency of necessary equipment, 59 60 and limited interdepartmental communication. 54 More importantly, the study findings contributed novel insights to the continuous exploration of the implementation of imaging-based diagnostic AI-assisted decision-making software from the unique perspective of China's healthcare system, establishing a theoretical foundation to guide the development of practical recommendations and implementation strategies for future improvement efforts.

Given the different perspectives of various stakeholder roles, the prioritisation of barriers, as well as the feasibility and cost-effectiveness of recommendations, the following three barriers and their corresponding suggestions were discussed in further detail ( figure 2 ).

Figure 2 Barriers and suggestions for implementing imaging-based diagnostic AI-assisted decision-making software in China. AI, artificial intelligence.

Barrier: misalignment between software functions and goals of healthcare institutions.

Suggestion: shift the focus of imaging-based diagnostic AI-assisted decision-making software implementation towards primary and secondary healthcare settings, where the AI software’s strengths in diagnosing generalised and non-complex conditions can be leveraged effectively.

The AI-assisted decision-making software has been disproportionately implemented in tertiary hospitals in China. 29 However, a notable misalignment between the functionality of imaging-based diagnostic AI-assisted decision-making software and the strategic goals of tertiary hospitals was found in the current study. Specifically, the implementation of AI-assisted decision-making software demonstrated its effectiveness in diagnosing generalised and non-complex medical conditions. Tertiary hospitals, in contrast, mainly serve as hubs that provide specialised and advanced healthcare services, particularly for complex medical conditions. Despite the great potential of AI technologies to revolutionise healthcare, it has become evident that the complex conditions frequently encountered by high-level healthcare institutions have not been adequately addressed by existing AI capabilities. 55 60 Given the pivotal role that tertiary hospitals play in China's healthcare system, it is necessary for imaging-based diagnostic AI-assisted decision-making software to advance further to meet the multifaceted clinical needs of tertiary hospitals in the near future. On the other hand, to effectively promote the implementation of existing imaging-based diagnostic AI-assisted software, a shift in the focus of implementation towards the primary or secondary level of healthcare, such as primary hospitals, physical examination centres or secondary hospitals, would offer a more cohesive fit. This shift would create a more suitable context in which to implement imaging-based diagnostic AI-assisted decision-making software effectively, leveraging its strengths while accommodating the unique challenges faced by healthcare institutions at the primary or secondary level. Primary healthcare in China typically addresses a broader spectrum of clinical needs and medical cases. However, it is often not the initial point of medical contact due to the suboptimal quality of care. 63 With substantial disparities between primary and tertiary care, residents in China perceived primary healthcare as being of poor quality, a perception reflected in the low doctor-to-patient ratio in tertiary care. 64 65 Various contributing factors were reported, including insufficient knowledge among healthcare professionals, a gap between knowledge and practice, disproportionate distribution of the health workforce and inadequate continuity of care across the entire healthcare system. 63 66 Implementing imaging-based diagnostic AI-assisted decision-making software at the primary level, aligning its functionality with the overarching goals of primary care, holds promise for addressing these challenges and bridging gaps, thereby potentially diverting patients with common medical needs towards primary healthcare facilities. AI technologies have the potential to facilitate a diagnosis at least equivalent to that of an intermediate-level clinician, complement clinical expertise and optimise medical resource allocation, enhancing early disease detection and ultimately promoting the quality of patient care across the healthcare hierarchy in China. 67–69

Barrier: lack of a collaborative network between primary/secondary and tertiary hospitals.

Suggestion: establish an integrated healthcare ecosystem driven by a hub-and-spoke model to promote the sharing of clinical data and improve patient referrals, ensuring seamless coordination between primary/secondary and tertiary healthcare institutions.

As mentioned above, one of the key challenges in China's healthcare system lay in the fragmentation of healthcare delivery. The integration of imaging-based diagnostic AI-assisted decision-making software into primary care underscored the absence of a comprehensive collaborative network connecting primary and tertiary healthcare institutions. This deficiency exacerbated inefficiencies in patient referrals, with positive or indeterminate AI-assisted diagnoses at primary hospitals not being effectively referred to tertiary hospitals. As a result, patients could experience delays in receiving specialised treatment. Furthermore, the scattered and isolated electronic medical systems in China posed substantial challenges to joint healthcare initiatives. 63 The sharing and transfer of clinical data related to disease diagnosis were hindered by the heterogeneity of systems, potentially leading to unnecessary and duplicated medical examinations. To address this issue, establishing an integrated healthcare ecosystem driven by the hub-and-spoke model would be a promising solution to promote the sharing of clinical data and medical knowledge, as well as to facilitate best medical practices. This model, when applied in a healthcare system, enhances peripheral services by connecting them with resource-replete centres. 70 In this context, basic medical needs are met through spokes, such as primary healthcare institutions, while medical resources and investments are centralised at hubs, such as tertiary healthcare institutions. The utilisation of imaging-based diagnostic AI-assisted decision-making software in a hub-and-spoke network of stroke care showed improved clinical efficiency, including decreased time to notification and shorter transfer time from spokes to hubs, leading to a shorter length of stay. 71 72 Currently, a similar network tailored to the healthcare system in China has yet to be implemented, and its potential clinical benefits remain unclear. To fully leverage the capabilities of imaging-based diagnostic AI-assisted decision-making software, it is essential to seamlessly refer patients with positive or indeterminate diagnoses at spoke sites to the hub sites for specialised care, minimising delays in early treatment.

Barrier: lack of information security measures and certification.

Suggestion: establish an independent information platform with robust data security measures to ensure the protection of clinical data privacy and facilitate the integration and data exchange across primary/secondary and tertiary healthcare institutions.

The cybersecurity and data protection regulations in China are undergoing rapid changes, positioning them among the most stringent globally. The 'Information security technology—baseline for classified protection of cybersecurity (GB/T 22239-2019)', jointly issued by the State Market Regulatory Administration and the Standardization Administration of China, came into effect in December 2019. 73 The standard defined level 1 to level 4 security requirements and specified the baseline guidance for information security technology to protect information platforms and systems responsible for collecting, storing, transmitting and processing personal information. 73 As China's healthcare informatisation continues to advance, the security of hospital information infrastructures is becoming increasingly critical, given that any disruptions could have substantial consequences for individuals and society as a whole. To address this concern, the National Health Commission released the 'Guidelines for Information Security Level Protection in the Healthcare Industry', stipulating that core systems in healthcare institutions should adhere to level 3 information security protection standards, the highest level for non-banking institutions. 74 Level 3 security protection mainly covers 5 aspects of security technical requirements and 5 aspects of security management requirements. The multidimensional assessment involves 73 categories with nearly 300 specific requirements, covering aspects such as information protection, security auditing and communication confidentiality. 73 While AI technology software may not be explicitly covered by this specific requirement, it is often a mandatory administrative step for healthcare institutions, particularly tertiary hospitals, to request level 3 security certification to ensure the protection of clinical data privacy during the integration and data exchange of AI software. As an integrated approach to addressing the unique challenges faced by China's healthcare system, an independent information platform with robust data security measures or level 3 security qualification should be established to facilitate the implementation of imaging-based diagnostic AI-assisted decision-making software. This platform would act as a vital link connecting the AI software with primary and tertiary healthcare institutions. It is designed to collect basic demographic information and medical history while transmitting deidentified imaging data to the AI-assisted software for an initial diagnosis in primary care settings. In cases where patients receive positive or indeterminate reports, referral to the collaborating tertiary hospital within the hub-and-spoke network is warranted. Relevant clinical information, including collected demographic data, medical history, clinical reports and referral forms, is seamlessly transferred to enhance overall efficiency. The successful establishment of this platform requires multistakeholder engagement for proficient and collective design and management, addressing the interinstitutional data sharing, security and governance challenges stemming from legal, technical and operational requirements. 70 75
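
The data flow described above (collect demographics and history, transmit deidentified imaging for an initial AI diagnosis, then refer positive or indeterminate cases to the hub) reduces to a simple routing rule. The Python sketch below is a conceptual illustration of that flow only; the class names, statuses and example values are hypothetical, and a real platform would additionally have to satisfy level 3 protection requirements.

# Conceptual sketch of the spoke-to-hub referral flow described above.
# All class names, statuses and example values are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Result(Enum):
    NEGATIVE = "negative"
    POSITIVE = "positive"
    INDETERMINATE = "indeterminate"

@dataclass
class CaseRecord:
    patient_ref: str      # deidentified reference, no direct identifiers
    demographics: dict
    medical_history: str
    ai_result: Result
    ai_report: str

def route_case(case: CaseRecord, hub: str) -> str:
    """Refer positive or indeterminate cases from the spoke to the hub,
    transferring the clinical record to avoid duplicate examinations."""
    if case.ai_result in (Result.POSITIVE, Result.INDETERMINATE):
        # A real platform would transmit the record over a secured channel
        # meeting level 3 classified-protection requirements.
        return f"Referred {case.patient_ref} to {hub} with report attached"
    return f"{case.patient_ref} managed at the spoke site"

case = CaseRecord("anon-0042", {"age": 58, "sex": "F"},
                  "Type 2 diabetes, 8 years", Result.INDETERMINATE,
                  "Moderate non-proliferative diabetic retinopathy suspected")
print(route_case(case, "tertiary hub hospital"))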

Implications for policy-makers and healthcare practitioners

Globally, there are marked differences in the implementation of AI in healthcare systems. These variations are associated with factors including the type of AI software, healthcare infrastructure, existing policies and technological advancements. Despite ongoing criticism and multiple implementation challenges, AI-assisted decision-making software and health information technologies have demonstrated substantial potential for enhancing diagnostic procedures in primary care, especially with strong regulatory support, in both resource-rich and under-resourced settings. 76–79 Primary care is an ideal setting for AI tools to improve clinical efficiency and reduce medical errors due to its role in managing a large number of patients and making decisions under uncertainty. 76

In Germany, the Federal Ministry of Health has been proactive in supporting AI integration in healthcare. The 'Smart Physician Portal for Patients with Unclear Disease' project provided ongoing support to general practitioners (GPs) using an AI-based tool to diagnose uncertain cases. 80 This user-centred decision support tool was designed specifically to address GPs' essential clinical needs through interviews and workshops, ensuring a seamless fit into the routine workflow while allowing for more efficient patient diagnosis. Similarly, the National Health Service in the UK has actively incorporated AI technologies into primary care through the use of Babylon's Triage and Diagnostic system, streamlining the diagnostic process. 81 Furthermore, the European Union's 2021 Proposal sought to establish a global standard for safe, reliable and ethical AI by creating a comprehensive legal framework designed to enhance trust and encourage broad implementation, ensuring that AI systems are both technically and ethically sound. 82 83 Therefore, regulatory support to increase trust, robust technical infrastructure, strong ethical standards and user-centred design are crucial for the extensive integration of AI in healthcare, ultimately improving patient outcomes and clinical efficiency.

In contrast, China faces particular difficulties due to its large and heterogeneous healthcare system, regional disparities in healthcare infrastructure and a rapidly evolving regulatory environment. The successful integration of AI is hindered by the disparity in healthcare resources and the critical need for interoperability among various hospital information systems, especially in primary care settings. In particular, primary healthcare facilities in rural areas, constrained by financial and resource limitations, often lack access to the advanced AI technologies that are more readily available in tertiary hospitals in large cities. 84 More importantly, while electronic medical record (EMR) systems are widely adopted in primary hospitals, their average level and functionality are typically lower than those in tertiary hospitals. 85 According to the National Health Commission of the People's Republic of China, as of 2022, the average level of EMR systems in tertiary public hospitals was 4.0, indicating a medium stage of EMR development that enabled basic clinical decision support. 86 However, to fully facilitate intelligent clinical decision support, EMR systems need to reach at least level 5, posing an even greater challenge to the systematic integration of AI. 87

Financial incentives and policy support for EMR infrastructure facilitating the use of AI in primary healthcare settings could drive broader implementation and improve care quality. Guidelines for the thorough assessment of AI-assisted decision-making software for cost-effectiveness, efficacy and safety are also urgently needed. 88 To improve care coordination across different levels of healthcare institutions, policies should also support collaborative networks and data-sharing platforms. To increase healthcare practitioners’ familiarity with and confidence in AI-assisted decision-making software, major implementation barriers must be addressed and overall trust in AI technologies must be increased through thorough training and continued regulatory support.

Strengths and limitations

The current study has several strengths and limitations. To ensure scientific rigour and validity, the updated CFIR was employed throughout, from the design of the interview guides to the reporting of results. Although primarily descriptive, the study extended beyond identifying barriers and facilitators by providing practical suggestions tailored to China's healthcare system. To capture a broad spectrum of perspectives, a wide range of key stakeholders, from healthcare practitioners to industry vendors, were involved, allowing for a qualitative exploration of various roles and the provision of comprehensive insights. However, patient perspectives warrant inclusion in future research, particularly in the context of doctor–patient shared decision-making. Given the extensive impact of AI technology on the professional autonomy of healthcare practitioners, existing literature suggested a negative perception among patients towards physicians using AI-assisted software. 89 90 While the current study specifically focused on two representative AI applications, namely imaging-based diagnostic AI-assisted decision-making software for lung nodules and diabetic retinopathy, the findings may extend more broadly to diagnostic AI-assisted decision-making software using medical imaging as source data. Despite employing the updated CFIR as a systematic approach to understanding barriers and facilitators in the implementation process for enhanced generalisability, it is important to acknowledge potential variations across different types of diagnostic AI software. The current study might not fully capture certain software-specific differences and contextual factors associated with implementation. Moreover, purposive sampling was adopted, and all selected healthcare institutions were located in well-resourced areas, potentially limiting the generalisability of findings beyond the selected healthcare institutions and software. The results should be interpreted within this context, emphasising the strong need for cross-comparisons of the findings and validation of the recommendations in other settings, particularly in rural areas.

The rapid advancement of AI techniques is fuelling a global shift in the conventional medical decision-making paradigm. By using the updated CFIR, the current study contributed to a comprehensive understanding of the barriers and facilitators in implementing imaging-based diagnostic AI-assisted decision-making software in China's evolving healthcare landscape. The findings serve as a solid theoretical foundation, providing a possible roadmap for future efforts aimed at optimising the effective implementation of imaging-based diagnostic AI-assisted decision-making software. The tangible suggestions could further inform healthcare practitioners, industry stakeholders and policy-makers in both China and other developing countries, helping unleash the full potential of imaging-based diagnostic AI-assisted decision-making software for optimal patient care.

Ethics statements

Patient consent for publication

Not applicable.

Ethics approval

This study involves human participants and was approved by the Institutional Review Board of Peking University (IRB00001052-22138). Participants gave informed consent to participate in the study before taking part.

Acknowledgments

The authors thank all individuals who took the time to participate in the interviews and those who provided constructive suggestions on the manuscript.


Contributors CY, JZ and LL designed the study and developed the eligibility criteria. CY contacted the respondents. CY, XL and FJ designed the interview guides. JZ and LL reviewed and made critical comments. XL and FJ conducted interviews and analysed the data. JZ and CY contributed to the review of qualitative analysis through discussion with XL and FJ. XL completed the first draft of the manuscript. CY, JZ and LL reviewed and revised the manuscript. All authors read and approved the final manuscript. CY is the guarantor, has full access to all data in the study and has final responsibility for the decision to submit for publication.

Funding This work was supported by Merck Sharp & Dohme, a subsidiary of Merck & Co., Rahway, New Jersey, USA. The sponsor participated in the design and development of the study, as well as the revision and editing of this manuscript.

Competing interests JZ is an employee of MSD R&D (China). LL is affiliated with Merck Sharp & Dohme, a subsidiary of Merck & Co., Rahway, New Jersey, USA, which funded the study and monitored the conduct of the study.

Patient and public involvement Patients and/or the public were not involved in the design, conduct, reporting, or dissemination plans of this research.

Provenance and peer review Not commissioned; externally peer reviewed.

Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

  • Study Protocol
  • Open access
  • Published: 13 September 2024

Canadian beach cohort study: protocol of a prospective study to assess the burden of recreational water illness

  • Ian Young 1,
  • Binyam N. Desta 1,
  • J. Johanna Sanchez 1,
  • Shannon E. Majowicz 2,
  • Thomas A. Edge 3,
  • Sarah Elton 4,
  • David L. Pearl 5,
  • Teresa Brooks 6,
  • Andrea Nesbitt 7,
  • Mahesh Patel 8,
  • Michael Schwandt 9,
  • Dylan Lyng 10,
  • Brandon Krupa 11,
  • Herb E. Schellhorn 3,
  • Elizabeth Montgomery 12 &
  • Jordan Tustin 1

BMC Public Health volume 24, Article number: 2502 (2024)

Recreational water activities at beaches are popular among Canadians. However, these activities can increase the risk of recreational water illnesses (RWI) among beachgoers. Few studies have been conducted in Canada to determine the risk of these illnesses. This protocol describes the methodology for a study to determine the risk and burden of RWI due to exposure to fecal pollution at beaches in Canada.

This study will use a mixed-methods approach, consisting of a prospective cohort study of beachgoers with embedded qualitative research. The cohort study involves recruiting and enrolling participants at public beaches across Canada, ascertaining their water and sand contact exposure status, then following up after seven days to determine the incidence of acute RWI outcomes. We will test beach water samples each recruitment day for culture-based E. coli, for enterococci using rapid molecular methods, and for microbial source tracking biomarkers. The study started in 2023 and will continue to 2025 at beaches in British Columbia, Manitoba, Ontario, and Nova Scotia. The target enrollment is 5000 beachgoers. Multilevel logistic regression models will be fitted to examine the relationships between water and sand contact and RWI among beachgoers. We will also examine differences in risks by beachgoer age, gender, and beach location, and the influence of fecal indicator bacteria and other water quality parameters on these relationships. Sensitivity analyses will be conducted to examine the impact of alternative exposure and outcome definitions on these associations. The qualitative research phase will include focus groups with beachgoers and key informant interviews to provide additional contextual insights into the study findings. The study will use an integrated knowledge translation approach.

Initial implementation of the study at two Toronto, Ontario, beaches in 2023 confirmed that recruitment is feasible and that a high completion rate (80%) can be achieved for the follow-up survey. While recall bias could be a concern for the self-reported RWI outcomes, we will examine the impact of this bias in a negative control analysis. Study findings will inform future recreational water quality guidelines, policies, and risk communication strategies in Canada.

Going to the beach is a popular seasonal activity among Canadians. For example, the percentage of Canadian households that reported swimming, going to the beach, surfing, scuba diving, or snorkeling close to home in the past 12 months more than tripled (6% to 21%) from 2011 to 2021 [ 1 ]. These activities are associated with numerous health benefits, including improved cardiovascular health, mental health, quality of life, and well-being [ 2 , 3 ]. In addition, beach tourism contributes substantially to local economies. For example, in Ontario, Canada, there were an estimated 6.5 million beach visits in 2014, contributing $1.6 billion in beachgoer spending – representing 6.6% of all visitor spending in the province that year [ 4 ]. Despite these benefits, recreational water activities can expose beachgoers to pathogens that cause recreational water illness (RWI) [ 5 , 6 , 7 , 8 , 9 , 10 , 11 ]. Water contact at beaches impacted by fecal pollution sources primarily increases the risk of acute gastrointestinal illness (AGI), but can also lead to an increased risk of respiratory, skin, ear, and eye infections [ 5 , 7 , 8 , 9 , 11 , 12 ].

In the U.S., approximately 90 million cases of RWI occur each year, resulting in annual costs of US$2.2–3.7 billion [ 12 ]. The costs of AGI specifically due to swimming or wading in beach water are > US$1,600 (range of $425–2,743) per 1,000 beachgoers, with lost productivity (e.g., having to stay home from school or work due to illness) the major driver of these costs [ 13 ]. Additionally, recreational water quality is a health equity issue, presenting disproportionate risks for different sociodemographic groups. For example, children and youth experience more severe health outcomes from beach water exposures [ 12 ]. They tend to spend more time in the water and playing in the sand, swallow more water, and have developing immune and digestive systems that place them at greater risk of illness [ 5 , 10 , 14 , 15 , 16 ].

In Canada, Health Canada publishes guidelines for recreational water quality at beaches. The most recent updates to the guidelines were published in 2024 [ 17 ]. These guidelines are used by local and provincial public and environmental health authorities across Canada who are responsible for local beach water monitoring, risk management, and risk communication. The guidelines recommend that authorities take public health action (e.g., issue a swimming advisory, investigate pollution sources) if ‘beach action values’ (BAV) are exceeded [ 17 ]. The BAVs are derived from epidemiological studies conducted in the U.S. from the 1990s to 2010, and correspond to an AGI rate of ~36 illnesses/1,000 bathers [ 18 , 19 ]. The direct applicability and suitability of these BAVs for Canadian settings are unclear, because RWI incidence and fecal indicator bacteria levels are driven by diverse and varying pollution sources, environmental and weather patterns, and beachgoer activities [ 8 , 20 , 21 , 22 ]. There is a need to conduct prospective research on RWI in Canadian settings to better inform these guidelines and public health risk management decisions with Canada-specific data.

While extensive data are available from the U.S. and other countries on the risk and burden of RWI in beachgoers [ 8 ], the last prospective cohort study in Canada to estimate the risk of RWI was conducted in 1980 [ 23 ]. Timely, updated information is needed on the incidence of AGI and other types of RWI in Canada. This manuscript describes a protocol for a national, prospective beach cohort study that will provide important baseline data on RWI risks at popular freshwater and marine beaches across multiple Canadian provinces and settings. The study will aim to identify RWI risks among demographic groups under different environmental and water quality conditions to inform beach water management policies and local risk management strategies.

Study objectives

The purpose of this study is to determine the burden of RWI among beachgoers at beaches across five regional sites in Canada. The specific objectives are to:

Measure the risk and burden of five different RWI outcomes (AGI, respiratory, eye, ear, and skin infections) in beachgoers who engage in different levels of water and sand contact;

Identify differences in RWI risks by beachgoer gender, age, and beach location;

Determine relationships between various fecal indicator bacteria measures, environmental parameters, and the risk of AGI among beachgoers; and

Understand beachgoer risk perceptions and behaviours related to recreational water quality and socio-political issues that may impact RWI risks among beachgoers.

Study design

We will use a mixed-methods approach in this study: a quantitative prospective cohort study will be conducted to address the first three objectives, with embedded qualitative research to address the fourth objective [ 24 ]. This approach combines the strengths of both methods to provide a more comprehensive understanding of this complex public health issue [ 24 ]. A prospective cohort study design was selected for consistency and comparability with prior studies, including the U.S. National Epidemiological and Environmental Assessment of Recreational water (NEEAR) study [ 5 , 8 , 11 , 14 , 25 , 26 , 27 , 28 ]. The qualitative research phase will include focus groups with beachgoers during and after the cohort study and key informant interviews to provide additional contextual insights into the study findings. The study is registered at ClinicalTrials.gov (ID: NCT06413485; registration date: May 9, 2024).

Study settings

The study will take place at beaches across five regional sites in British Columbia, Manitoba, Ontario, and Nova Scotia (Fig. 1). In British Columbia, recruitment will be conducted at two City of Vancouver marine water beaches in 2024: English Bay Beach and Kitsilano Beach. In Manitoba, recruitment will be conducted at Grand Beach East and West in 2024, both located on the east side of Lake Winnipeg. Two sites will be included from Ontario: Toronto and Niagara Region. In Toronto, recruitment was conducted in the summer of 2023 at Sunnyside and Marie Curtis Park East beaches on Lake Ontario. In Niagara Region, recruitment will be conducted in 2025 at Bay Beach in Fort Erie and Nickel Beach in Port Colborne, both located on Lake Erie. In Nova Scotia, recruitment will be conducted in 2025 at Birch Cove Beach on Lake Banook. The sites were selected to provide a diverse cross-section of beach types (i.e., marine and freshwater), pollution sources, and geographic regions. The specific beaches at each site were identified in consultation with regional collaborators as the best candidates for this study because they are popular, frequently used for water and sand contact activities by families with young children, and have variable water quality reflecting one or more persistent or recurring sources of fecal contamination [ 29 ].

Figure 1. Locations of the targeted beach sites for this prospective cohort study examining the burden of recreational water illness in Canada.

Participant enrolment and eligibility

Trained data collectors will recruit beachgoers at one to two beaches per site for ~35 days throughout the summer months (June to September). Data collectors will approach as many beachgoers as possible each day for enrolment, prioritizing families with children given their higher risk of RWI. Based on initial study recruitment in 2023 and prior studies [ 5 , 25 , 30 ], we anticipate recruiting ~15 households per day, with ~1.7 individuals per household. This corresponds to a total sample size of ~5000 beachgoers (~3000 households), with an average of ~1000 per site. Household members will be recruited and surveyed together. Each household member will be considered a separate study participant. Eligibility criteria will include: (a) ability to provide informed consent for the study and complete the surveys in English or French; (b) home address in Canada or the U.S.; and (c) not having participated in the study in the past 21 days. Given the acute and self-limiting outcomes, individuals will be allowed to participate again after a 21-day washout period [ 14 , 25 , 30 , 31 ].
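As a rough arithmetic check (a sketch only; every figure is one of the approximations stated above), the enrolment targets can be reproduced as follows:

# Back-of-envelope check of the enrolment targets described above.
households_per_day = 15      # ~15 households recruited per day
people_per_household = 1.7   # ~1.7 individuals per household
days_per_site = 35           # ~35 recruitment days per site
sites = 5                    # five regional sites

per_site = households_per_day * people_per_household * days_per_site
total = per_site * sites
print(f"~{per_site:.0f} beachgoers per site, ~{total:.0f} overall")
# Prints ~892 per site and ~4462 overall, broadly consistent with the
# stated targets of ~1000 per site and ~5000 in total.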

Survey process

Two surveys will be conducted with each enrolled participant: (1) a beach survey, and (2) a follow-up survey (see Additional file 1). Upon recruitment, beachgoers will be advised to visit our study tables, set up near the beach entrances, to complete the beach survey before they leave. The beach survey will collect contact information, sociodemographic characteristics, other pre-identified confounding variables, and exposures at the beach [ 14 , 25 , 26 , 27 , 30 , 31 , 32 ]. The beach survey will be implemented on tablets using a web-based survey platform. The follow-up survey will be completed seven days after participants’ beach visits. Participants will have the option of completing the follow-up survey online or by telephone. This survey will ask about RWI outcomes experienced since the beach visit. Questionnaires were adapted from the U.S. NEEAR study [ 14 , 25 ], and initial pre-testing was conducted with 10 individuals using a cognitive interviewing approach [ 33 ]. We then piloted the questionnaires and study feasibility at a Toronto beach in 2022 [ 34 ], made enhancements, and implemented the initial round of recruitment at the Toronto study site in 2023.

Exposures and outcomes of interest

The primary exposure of interest is level of recreational water contact activities (vs. no water contact) among beachgoers. Specifically, we will examine a graded classification of this exposure based on individuals’ minimum level of water contact: 1) no water contact; 2) minimal contact; 3) body immersion; and 4) swallowed water [ 5 , 10 , 28 , 30 ]. This classification will allow us to determine a possible dose–response relationship between water contact and illness. Minimal contact is defined as water contact that does not result in body immersion (e.g., wading below one’s waist, boating, fishing). Body immersion is defined as entering the water above one’s waist (e.g., swimming, surfing, snorkelling), and swallowing water as ingestion of any amount of water [ 5 , 28 , 30 ]. A secondary exposure of interest is sand contact activities (e.g., digging in the sand) [ 35 ].
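To make the graded classification concrete, the minimal sketch below assigns each participant the highest level of contact they report, so that the four categories are mutually exclusive; the boolean field names are hypothetical stand-ins for the actual beach survey items (Additional file 1):

from enum import IntEnum

class WaterContact(IntEnum):
    NONE = 1       # no water contact
    MINIMAL = 2    # contact without body immersion (e.g., wading, boating, fishing)
    IMMERSION = 3  # entering the water above the waist
    SWALLOWED = 4  # ingestion of any amount of water

def classify_exposure(any_contact: bool, immersed: bool, swallowed: bool) -> WaterContact:
    # Higher levels take precedence, so each participant falls into exactly
    # one category of the graded classification.
    if swallowed:
        return WaterContact.SWALLOWED
    if immersed:
        return WaterContact.IMMERSION
    if any_contact:
        return WaterContact.MINIMAL
    return WaterContact.NONE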

The primary outcome measure is AGI in the 7-day period following beach water contact, which corresponds with incubation periods of viral and bacterial enteric pathogens of concern [ 36 , 37 ]. We define AGI using an internationally accepted definition as one or more of: (a) diarrhea (≥ 3 loose stools in 24 h); (b) vomiting; (c) nausea with stomach cramps; or (d) nausea or stomach cramps that interfere with regular daily activities (e.g., missed work or school) [ 5 , 14 , 25 , 26 , 27 ]. Secondary outcome measures include: acute respiratory illness (fever with sore throat, fever with nasal congestion, or cough with phlegm); skin infection (rash or itchy skin); ear infection or earache; and eye infection or irritation [ 14 , 25 , 26 , 27 , 31 ]. Days of missed work, school, or vacation, use of medications to treat symptoms, and medical consultations related to AGI will also be collected as indicators of illness severity [ 5 , 38 ].
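Encoded directly, the AGI case definition above reduces to a simple predicate; the symptom variables are hypothetical stand-ins for the follow-up survey items:

def is_agi(loose_stools_24h: int, vomiting: bool, nausea: bool,
           cramps: bool, interfered_with_activities: bool) -> bool:
    # (a) diarrhea: >= 3 loose stools in 24 h
    diarrhea = loose_stools_24h >= 3
    # (c) nausea with stomach cramps
    nausea_with_cramps = nausea and cramps
    # (d) nausea or stomach cramps that interfere with daily activities
    disruptive = (nausea or cramps) and interfered_with_activities
    # (b) any vomiting; AGI if any one criterion is met
    return diarrhea or vomiting or nausea_with_cramps or disruptive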

Water quality and environmental parameters

We will measure various water quality and environmental indicators of fecal contamination as effect modifiers of the water contact–illness relationships. In Canada, culture-based E. coli is still primarily used as the fecal indicator bacteria of interest for beach water quality. We will collect water samples each recruitment day to test for E. coli following standard culture-based methods [ 39 ]. We will also test water samples for enterococci using the U.S. EPA-validated qPCR method and for microbial source tracking (MST) biomarkers using digital PCR analysis [ 40 , 41 , 42 , 43 , 44 ]. The molecular approach to testing for enterococci was added to the most recent version of the Canadian guidelines for recreational water quality as a recommended, rapid approach for fecal indicator bacteria monitoring [ 18 ]. The MST analysis will detect and quantify host-specific DNA biomarkers to characterize the contribution of different sources of fecal contamination at study beaches [ 40 , 44 ]. This analysis will use standard probes and primers for human sewage (HF183), seagulls (Gull4), Canada geese (mitochondrial DNA), dogs (DG3), and ruminants (Rum-2-Bac) [ 40 , 44 ]. Additionally, we will collect information on beach environmental conditions, including air temperature and precipitation from the nearest weather station and water turbidity using a turbidimeter. Together, these data will support future decision-making about which fecal indicator and environmental measures are most strongly associated with AGI in Canada under different conditions.

Confounding variables

We will measure and adjust for important confounding variables in our analyses to mitigate the impact of possible confounding bias. We have compiled all potentially confounding variables into directed acyclic graphs (DAG) to determine the minimal sufficient adjustment sets for each exposure–outcome pair of interest [ 45 , 46 ]. The DAG for the water contact and AGI relationship is shown in Fig. 2. The key participant-level confounding variables that require adjustment include age, gender, ethno-racial identity, education level, pre-existing or chronic conditions, other beach activities engaged in before or after the beach visit, sand contact, and consumption of food on the beach. We will also adjust for beach site, year, and beach water fecal indicator bacteria levels. Other listed variables do not require adjustment in the main-effects model. Confounders for other outcomes are summarized in Additional file 2.

Figure 2. Directed acyclic graph of the minimal adjustment set of confounding variables for the relationship between level of water contact and AGI. The water contact variable is the exposure of interest; variables shown with white circles require adjustment in the analysis to obtain unbiased estimates of effect.
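The DAG logic can be illustrated with a small sketch. The edges below are an illustrative fragment only (the full graph and the minimal sufficient adjustment sets were derived with the dagitty R package, per the citations above), and the common-parent rule used here is a simplification of the general backdoor criterion that happens to suffice for this toy graph:

import networkx as nx

# Illustrative fragment of the DAG in Figure 2 (not the full graph).
dag = nx.DiGraph([
    ("age", "water_contact"), ("age", "AGI"),
    ("gender", "water_contact"), ("gender", "AGI"),
    ("sand_contact", "water_contact"), ("sand_contact", "AGI"),
    ("beach_site", "water_contact"), ("beach_site", "AGI"),
    ("food_on_beach", "AGI"),   # affects the outcome only in this fragment
    ("water_contact", "AGI"),   # exposure -> outcome
])
assert nx.is_directed_acyclic_graph(dag)

# In this simple fragment, confounders are the common parents of the
# exposure and the outcome (each opens a backdoor path).
confounders = set(dag.predecessors("water_contact")) & set(dag.predecessors("AGI"))
print(sorted(confounders))  # ['age', 'beach_site', 'gender', 'sand_contact']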

Data analysis plan

We will construct multilevel logistic regression models under a Bayesian analysis framework to determine the causal effects of water and sand contact exposures on our five binary health outcomes of interest [ 47 , 48 , 49 ]. Each of the five health outcomes, as well as missed activities, medication use, and medical consultations due to AGI, will be evaluated in separate models. Varying intercepts will be included to adjust for clustering of participants by household, recruitment date, and beach location [ 47 , 49 ]. All models will adjust for a minimal set of confounding variables specific to the exposure-outcome relationship. Differences in risks by beachgoer age and gender will be examined through their influence as effect modifiers. We will examine gender-specific differences in RWI, following recommended categories: boy/man, girl/woman, transgender, gender fluid [ 50 ]. We will examine risk differences between the following age groups (0–4, 5–9, 10–14, 15–19, and 20+) [ 20 ]. The relationship between different measures of beach water fecal indicator bacteria ( E. coli , enterococci, and MST biomarkers) and AGI will be determined by examining their interaction with the exposure.

Weakly informative prior probability distributions for model parameters will be determined from prior research and evaluated to ensure the model is skeptical of highly unlikely or implausible values [ 47 , 48 ]. The appropriateness of priors will be assessed via prior predictive checking, and the impact of priors on each model will be assessed via sensitivity analysis [ 51 ]. Causal effects will be determined by examining and contrasting posterior probability distributions of parameters and their summary measures (mean and 95% credible intervals) [ 47 , 48 , 49 ]. Marginal effects plots will be created to visualize the effects of each predictor of interest, as well as interactions, on the predicted probability of each outcome [ 52 ]. Sensitivity analyses will be conducted to evaluate the impact of alternate exposure and outcome definitions (e.g., time spent in the water vs. level of water contact, diarrhea vs. AGI) and post-exposure illness timeframes (e.g., 3 and 5 vs. 7 days). Alternative models and variable specifications will be compared and selected using leave-one-out cross-validation [ 53 ]. Additionally, we will conduct a negative control analysis by examining the association between fecal indicator bacteria levels and AGI rates among the unexposed participants to identify possible residual confounding or differential outcome reporting bias [ 5 , 54 , 55 ]. Based on prior research, we expect findings will be robust to these possible biases [ 5 , 14 , 26 , 27 ].
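In notation, the primary analysis described above corresponds to a multilevel logistic regression of roughly the following form (a schematic sketch consistent with the text, not the authors’ exact parameterisation):

\begin{aligned}
Y_i &\sim \mathrm{Bernoulli}(p_i) \\
\operatorname{logit}(p_i) &= \alpha + \beta_{c[i]} + \mathbf{x}_i^{\top}\boldsymbol{\gamma} + u_{h[i]} + v_{d[i]} + w_{b[i]} \\
u_h &\sim \mathcal{N}(0, \sigma_u^2), \quad v_d \sim \mathcal{N}(0, \sigma_v^2), \quad w_b \sim \mathcal{N}(0, \sigma_w^2)
\end{aligned}

where c[i] is the water-contact level of participant i (with the no-contact coefficient fixed at zero as the reference), x_i is the minimal sufficient adjustment set of confounders for that outcome, and u, v and w are varying intercepts for household h, recruitment date d and beach location b. Weakly informative priors are placed on alpha, beta, gamma and the standard deviations, as described above.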

Power and precision analysis

We conducted a power and precision analysis using the planned approach described above for the main-effect relationship between level of water contact and AGI incidence [ 56 ]. For this analysis, we assumed that ~60% of beachgoers will have water contact, of which ~35% will immerse their body and ~10% will swallow water [ 13 , 20 , 34 ]. We specified weakly informative prior distributions for each water contact parameter (minimal contact, body immersion, and swallowing water compared to no water contact), based on previously reported measures of effect [ 8 , 10 , 25 ]. These mean effects correspond to odds ratios of approximately 1.35, 1.5, and 1.8 for each respective exposure level compared to no water contact. Across 500 simulated datasets and analyses, the anticipated sample size of 5000 beachgoers (assuming a 20% attrition rate for the follow-up survey) should yield precise credible intervals, with an average power of 87% and 94%, respectively, to detect a positive association for body immersion and for swallowing water compared to no water contact (at a posterior probability > 0.95). See Additional file 3 for full details and results of this simulation.
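The structure of such a simulation can be illustrated with the simplified sketch below. It is not the authors’ simulation: it ignores the household/date/beach clustering, assumes a hypothetical 5% baseline 7-day AGI risk, interprets the immersion and swallowing percentages as fractions of those with any water contact, and substitutes a frequentist significance test for the posterior-probability criterion:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, n_sims = 4000, 500                # ~5000 enrolled minus ~20% attrition
p_levels = [0.40, 0.33, 0.21, 0.06]  # none, minimal, immersion, swallowed
log_or = [0.0, np.log(1.35), np.log(1.5), np.log(1.8)]
base_logit = np.log(0.05 / 0.95)     # assumed 5% baseline AGI risk (hypothetical)

hits = np.zeros(3)
for _ in range(n_sims):
    level = rng.choice(4, size=n, p=p_levels)
    p = 1 / (1 + np.exp(-(base_logit + np.take(log_or, level))))
    y = rng.binomial(1, p)
    X = sm.add_constant(np.eye(4)[level][:, 1:])  # dummy-code levels vs. no contact
    fit = sm.Logit(y, X).fit(disp=0)
    hits += (fit.params[1:] > 0) & (fit.pvalues[1:] < 0.05)

# Proportion of simulations detecting each positive association ~ power.
print(dict(zip(["minimal", "immersion", "swallowed"], (hits / n_sims).round(2))))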

Embedded qualitative research

Embedded qualitative research will be conducted to inform, enrich, and enhance the value of the cohort study [ 24 , 57 , 58 ]. We will approach the qualitative research through a pragmatist paradigm that aims to produce policy-relevant outcomes that can prevent and mitigate RWI [ 58 ]. Focus groups will be conducted with parents and guardians of beachgoing children and youth to identify recreational water quality risk perceptions and behaviours [ 59 ]. Participants will be recruited through online panels or advertisements. We will recruit participants with diverse identity characteristics to capture a range of perspectives. Key informant interviews will also be conducted with public health inspectors and managers who oversee beach water quality programs in Canada and with representatives of other stakeholder groups (e.g., beach operators, beachgoer associations). We anticipate that these interviews will identify and highlight additional socio-political factors relevant to our data interpretation and recommendations. Interview participants will be recruited through professional listservs, stakeholder referrals, online searches, and policy documents.

We will conduct six to eight focus groups with 6–8 participants per group, and 15–20 key informant interviews, which should provide adequate saturation of themes [ 59 , 60 , 61 ]. Focus groups and interviews will be conducted virtually and will follow semi-structured question guides. The qualitative data will be analyzed using thematic analysis [ 59 , 62 ]. Analysis will be guided by the Health Belief Model and Theoretical Domains Framework [ 63 , 64 ]. Two independent analysts will conduct the analysis by applying theory-based and inductive codes to the transcribed audio recordings. Codes will be sorted and grouped to determine key experiences, priorities, opportunities, and challenges for improving beach water quality risk management and communication.

The focus group and interview results will be integrated with the cohort study at the interpretation level [ 57 ]. This will include narrative integration, where findings of both components will be jointly discussed by thematic area [ 57 ]. Additionally, we will develop joint displays, where cohort study results will be compared visually in tables or figures alongside the focus group results [ 57 , 65 ]. For example, if we find sociodemographic differences in RWI risks, the focus group results may provide additional contextual insights that can explain and enrich the findings.

Knowledge translation plan

The study uses an integrated knowledge translation approach. Knowledge users helped conceive the study, set the objectives, and create the protocol. They will continue to be involved throughout the study through a stakeholder steering group that meets 2–3 times per year. Results of the study will be submitted for presentations at relevant conferences and published in peer-reviewed journals. Additional knowledge translation techniques will include but are not limited to presentations at knowledge-user events or webinars, dissemination of infographics and policy briefs, and implementation of a project website.

Our 2022 feasibility pilot study and the 2023 implementation at the first study site achieved very high household participation rates (60–70%), similar to prior studies [ 5 , 26 , 27 , 29 , 30 ], suggesting that recruitment will be feasible. In our 2022 limited-resource feasibility study at a Toronto beach, we had a very high attrition rate (65%), likely due to the lack of a monetary incentive [ 34 ]. We therefore made several enhancements that have been incorporated into this protocol and were implemented successfully at the 2023 initial recruitment site, leading to a much lower attrition rate (20%). The enhancements included: a $10 gift card incentive for each participating household (given after completion of the first, beach survey); an additional prize draw for households that complete the follow-up survey; more frequent follow-up reminders sent by multiple modes (email, text message, phone); and enhanced training for data collectors to emphasize the importance of the follow-up survey to participants upon enrolment [ 66 , 67 , 68 ]. Our 2023 data and prior research found that those who were lost to follow-up had characteristics similar to those who completed the follow-up survey [ 14 , 25 , 30 , 31 ], suggesting attrition should not affect our estimates of association or inferences [ 69 ]. If our recruitment targets are not met after the 2025 season, additional recruitment may be conducted at one or more other beaches in 2026, depending on available resources.

Conducting a feasibility pilot and full implementation of the study at the first site has provided field experience to anticipate and adapt to other logistical issues that may arise over the course of the study. For example, recruitment days will be flexible, scheduled each week depending on weather conditions to avoid days with expected low attendance (e.g., rainy days). While recall bias could be a concern for follow-up measures, prior studies have found that associations were robust to this possible bias [ 14 , 26 , 27 , 30 ]. Use of non-specific, symptom-based, and internationally comparable outcome measures allows a pragmatic and cost-effective approach to assess numerous possible etiological causes of RWI. Further, self-reported RWI outcomes have been shown to be strongly associated with laboratory-confirmed infection due to common etiological agents of concern [ 70 , 71 ]. Our negative control analysis will assess possible impacts of this bias, as the water fecal indicator bacteria levels are unlikely to be related to participants’ recall of RWI.

This study will provide Canadian-specific data that may be used to support updates to the Canadian recreational water quality guidelines and inform policies and risk communication strategies [ 17 ]. The results can also be used to inform targeted surveillance strategies, exposure and risk assessments, and source attribution activities of Canadian enteric disease initiatives [ 72 ]. Local and provincial public and environmental health authorities can use the results to guide risk management at public beaches, including development of site-specific strategies for fecal indicator bacteria monitoring and swimming advisories. In addition, the risk perception and behaviour data from the embedded qualitative research can enhance beach water risk communication and public messaging strategies. This will help beachgoers, particularly families with young children, to make more informed decisions about when to visit the beach and how to reduce their risk of RWI (e.g., lower vs. higher risk activities).

Availability of data and materials

Upon completion of the study, an anonymized version of the dataset will be made publicly available for non-commercial purposes.

Data availability

No datasets were generated or analysed during the current study.

Statistics Canada. Table 38–10–0121–01: participation in outdoor activities. 2023. https://doi.org/10.25318/3810012101-eng . Accessed 17 June 2024.

Denton H, Aranda K. The wellbeing benefits of sea swimming. Is it time to revisit the sea cure? Qual Res Sport Exerc Health. 2019;12:647–63.

Lazar JM, Khanna N, Chesler R, Salciccioli L. Swimming and the heart. Int J Cardiol. 2013;168:19–26.

Ontario Ministry of Tourism, Culture and Sport. Ontario beach tourism statistics 2014. 2017. https://rto12.ca/wp-content/uploads/2014/04/Ontario-Beach-Tourism-2014.pdf . Accessed 17 June 2024.

Arnold BF, Wade TJ, Benjamin-Chung J, Schiff KC, Griffith JF, Dufour AP, et al. Acute gastroenteritis and recreational water: highest burden among young US children. Am J Public Health. 2016;106:1690–7.

Wade TJ, Pai N, Eisenberg JNS, Colford JM. Do U.S. Environmental Protection Agency water quality guidelines for recreational waters prevent gastrointestinal illness? A systematic review and meta-analysis. Environ Health Perspect. 2003;111:1102–9.

Mannocci A, Torre GL, Spagnoli A, Solimini AG, Palazzo C, De Giusti M, et al. Is swimming in recreational water associated with the occurrence of respiratory illness? A systematic review and meta-analysis. J Water Health. 2016;14:590–9.

Russo GS, Eftim SE, Goldstone AE, Dufour AP, Nappier SP, Wade TJ. Evaluating health risks associated with exposure to ambient surface waters during recreational activities: a systematic review and meta-analysis. Water Res. 2020;176:115729.

Yau V, Wade TJ, de Wilde CK, Colford JM. Skin-related symptoms following exposure to recreational water: a systematic review and meta-analysis. Water Qual Expo Health. 2009;1:79–103.

Wade TJ, Arnold BF, Schiff K, Colford JM, Weisberg SB, Griffith JF, et al. Health risks to children from exposure to fecally-contaminated recreational water. PLoS One. 2022;17:e0266749.

Leonard AF, Singer A, Ukoumunne OC, Gaze WH, Garside R, et al. Is it safe to go back into the water? A systematic review and meta-analysis of the risk of acquiring infections from recreational exposure to seawater. Int J Epidemiol. 2018;47:572–86.

DeFlorio-Barker S, Wing C, Jones RM, Dorevitch S. Estimate of incidence and cost of recreational waterborne illness on United States surface waters. Environ Health. 2018;17:3.

DeFlorio-Barker S, Wade TJ, Jones RM, Friedman LS, Wing C, Dorevitch S. Estimated costs of sporadic gastrointestinal illness associated with surface water recreation: a combined analysis of data from NEEAR and CHEERS studies. Environ Health Perspect. 2017;125:215–22.

Wade TJ, Calderon RL, Brenner KP, Sams E, Beach M, Haugland R, et al. High sensitivity of children to swimming-associated gastrointestinal illness: results using a rapid assay of recreational water quality. Epidemiol. 2008;19:375–83.

Dufour AP, Behymer TD, Cantú R, Magnuson M, Wymer LJ. Ingestion of swimming pool water by recreational swimmers. J Water Health. 2017;15:429–37.

Deflorio-Barker S, Arnold BF, Sams EA, Dufour AP, Colford JM, Weisberg SB, et al. Child environmental exposures to water and sand at the beach: findings from studies of over 68,000 subjects at 12 beaches. J Expo Sci Environ Epidemiol. 2018;28:93–100.

Health Canada. Guidelines for Canadian recreational water quality: summary document. 2024. https://www.canada.ca/en/health-canada/services/publications/healthy-living/guidelines-canadian-recreational-water-quality-summary-document.html . Accessed 17 June 2024.

Health Canada. Guidelines for Canadian recreational water quality: indicators of fecal contamination. 2023. https://www.canada.ca/en/health-canada/services/publications/healthy-living/recreational-water-quality-guidelines-indicators-fecal-contamination.html . Accessed 17 June 2024.

US EPA. Recreational water quality criteria. EPA 820-F-12-058. 2012. https://www.epa.gov/sites/default/files/2015-10/documents/rwqc2012.pdf . Accessed 17 June 2024.

Collier SA, Wade TJ, Sams EA, Hlavsa MC, Dufour AP, Beach MJ. Swimming in the USA: beachgoer characteristics and health outcomes at US marine and freshwater beaches. J Water Health. 2015;13:531–43.

Nevers MB, Whitman RL. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches. Water Res. 2011;45:1659–68.

Francy DS, Stelzer EA, Duris JW, Brady AMG, Harrison JH, Johnson HE, et al. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection. Appl Environ Microbiol. 2013;79:1676–88.

Seyfried PL, Tobin RS, Brown NE, Ness PF. A prospective study of swimming-related illness I. Swimming-associated health risk. Am J Public Health. 1985;75:1068–70.

Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. Los Angeles: SAGE Publications, Inc.; 2018.

Wade TJ, Calderon RL, Sams E, Beach M, Brenner KP, Williams AH, et al. Rapidly measured indicators of recreational water quality are predictive of swimming-associated gastrointestinal illness. Environ Health Perspect. 2006;114:24–8.

Wade TJ, Sams E, Brenner KP, Haugland R, Chern E, Beach M, et al. Rapidly measured indicators of recreational water quality and swimming-associated illness at marine beaches: a prospective cohort study. Environ Health. 2010;9:66.

Dorevitch S, Pratap P, Wroblewski M, Hryhorczuk DO, Li H, Liu LC, et al. Health risks of limited-contact water recreation. Environ Health Perspect. 2012;120:192–7.

Colford JM, Schiff KC, Griffith JF, Yau V, Arnold BF, Wright CC, et al. Using rapid indicators for Enterococcus to assess the risk of illness after exposure to urban runoff contaminated marine water. Water Res. 2012;46:2176–86.

Wade TJ, Sams EA, Haugland RA, Brenner KP, Li Q, Wymer L, et al. Report on 2009 national epidemiologic and environmental assessment of recreational water epidemiology studies. Washington, DC: U.S. Environmental Protection Agency; 2010.

Arnold BF, Schiff KC, Griffith JF, Gruber JS, Yau V, Wright CC, et al. Swimmer illness associated with marine water exposure and water quality indicators: impact of widely used assumptions. Epidemiol. 2013;24:845–53.

Colford JM, Wade TJ, Schiff KC, Wright CC, Griffith JF, Sandhu SK, et al. Water quality indicators and the risk of illness at beaches with nonpoint sources of fecal contamination. Epidemiol. 2007;18:27–35.

Dorevitch S, DeFlorio-Barker S, Jones RM, Liu L. Water quality as a predictor of gastrointestinal illness following incidental contact water recreation. Water Res. 2015;83:94–103.

Peterson CH, Peterson NA, Powell KG. Cognitive interviewing for item development: validity evidence based on content and response processes. Meas Eval Couns Dev. 2017;50:217–23.

Young I, Sanchez JJ, Desta BN, Heasley C, Tustin J. Recreational water exposures and illness outcomes at a freshwater beach in Toronto, Canada: a prospective cohort pilot study. PLoS ONE. 2023;18:e0286584.

Heaney CD, Sams E, Dufour AP, Brenner KP, Haugland RA, Chern E, et al. Fecal indicators in sand, sand contact, and risk of enteric illness among beachgoers. Epidemiol. 2012;23:95–106.

Lee RM, Lessler J, Lee RA, Rudolph KE, Reich NG, Perl TM, et al. Incubation periods of viral gastroenteritis: a systematic review. BMC Infect Dis. 2013;13:1–11.

Chai SJ, Gu W, O’Connor KA, Richardson LC, Tauxe RV. Incubation periods of enteric illnesses in foodborne outbreaks, United States, 1998–2013. Epidemiol Infect. 2019;147:e285.

Deflorio-Barker S, Wade TJ, Turyk M, Dorevitch S. Water recreation and illness severity. J Water Health. 2016;14:713–26.

Health Canada. Guidelines for Canadian recreational water quality: microbiological sampling and analysis. 2023. https://www.canada.ca/en/health-canada/programs/consultation-guidelines-canadian-recreational-water-quality-microbiological-sampling-analysis/document.html . Accessed 17 June 2024.

Edge TA, Boyd RJ, Shum P, Thomas JL. Microbial source tracking to identify fecal sources contaminating the Toronto Harbour and Don River watershed in wet and dry weather. J Great Lakes Res. 2021;47:366–77.

Saleem F, Edge TA, Schellhorn HE. Validation of qPCR method for enterococci quantification at Toronto beaches: application for rapid recreational water monitoring. J Great Lakes Res. 2022;48:707–16.

Shrestha A, Dorevitch S. Slow adoption of rapid testing: beach monitoring and notification using qPCR. J Microbiol Methods. 2020;174:105947.

Saleem F, Schellhorn HE, Simhon A, Edge TA. Same-day Enterococcus qPCR results of recreational water quality at two Toronto beaches provide added public health protection and reduced beach days lost. Can J Public Health. 2023;114:676–87.

Staley ZR, Boyd RJ, Shum P, Edge TA. Microbial source tracking using quantitative and digital PCR to identify sources of fecal contamination in stormwater, river water, and beach water in a Great Lakes area of concern. Appl Environ Microbiol. 2018;84:1634–52.

Tennant PWG, Murray EJ, Arnold KF, Berrie L, Fox MP, Gadd SC, et al. Use of directed acyclic graphs (DAGs) to identify confounders in applied health research: review and recommendations. Int J Epidemiol. 2021;50:620–32.

Textor J, van der Zander B, Gilthorpe MS, Liśkiewicz M, Ellison GT. Robust causal inference using directed acyclic graphs: the R package ‘dagitty.’ Int J Epidemiol. 2016;45:1887–94.

McElreath R. Statistical rethinking. Boca Raton: Chapman and Hall/CRC; 2020.

van de Schoot R, Depaoli S, King R, Kramer B, Märtens K, Tadesse MG, et al. Bayesian statistics and modelling. Nat Rev Methods Primers. 2021;1:1–26.

Bürkner PC. brms: an R package for Bayesian multilevel models using Stan. J Stat Softw. 2017;80:1–28.

Bauer GR, Braimoh J, Scheim AI, Dharma C. Transgender-inclusive measures of sex/gender for population surveys: mixed-methods evaluation and recommendations. PLoS ONE. 2017;12:e0178043.

Gabry J, Simpson D, Vehtari A, Betancourt M, Gelman A. Visualization in Bayesian workflow. J R Stat Soc Ser A Stat Soc. 2019;182:389–402.

Arel-Bundock V, Greifer N, Heiss A. How to interpret statistical models using marginaleffects for R and Python. 2024. https://marginaleffects.com . Accessed 28 Aug 2024.

Vehtari A, Gelman A, Gabry J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat Comput. 2017;27:1413–32.

Arnold BF, Ercumen A, Benjamin-Chung J, Colford JM. Brief report: negative controls to detect selection bias and measurement bias in epidemiologic studies. Epidemiol. 2016;27:637–41.

Lipsitch M, Tchetgen Tchetgen E, Cohen T. Negative controls: a tool for detecting confounding and bias in observational studies. Epidemiol. 2010;21:383–8.

Kruschke JK, Liddell TM. The Bayesian new statistics: hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychon Bull Rev. 2018;25:178–206.

Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs - principles and practices. Health Serv Res. 2013;48:2134–56.

Bishop FL. Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective. Br J Health Psychol. 2015;20:5–20.

Krueger RA, Casey MA. Focus groups: a practical guide for applied research. New Delhi: SAGE Publications Inc.; 2015.

Guest G, Namey E, McKenna K. How many focus groups are enough? Building an evidence base for nonprobability sample sizes. Field Methods. 2017;29:3–22.

Namey E, Guest G, McKenna K, Chen M. Evaluating bang for the buck: a cost-effectiveness comparison between individual interviews and focus groups based on thematic saturation levels. Am J Eval. 2016;37:425–40.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

Skinner CS, Tiro J, Champion VL. The health belief model. In: Glanz K, Rimer BK, Viswanath K, editors. Health behavior and health education: theory, research, and practice. San Francisco: Wiley; 2015.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:77.

Guetterman TC, Fetters MD, Creswell JW. Integrating quantitative and qualitative results in health science mixed methods research through joint displays. Ann Fam Med. 2015;13:554–61.

Booker CL, Harding S, Benzeval M. A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health. 2011;11:1–12.

Teague S, Youssef GJ, Macdonald JA, Sciberras E, Shatte A, Fuller-Tyszkiewicz M, et al. Retention strategies in longitudinal cohort studies: a systematic review and meta-analysis. BMC Med Res Methodol. 2018;18:1–22.

Abdelazeem B, Abbas KS, Amin MA, El-Shahat NA, Malik B, Kalantary A, et al. The effectiveness of incentives for research participation: a systematic review and meta-analysis of randomized controlled trials. PLoS One. 2022;17:e0267534.

Kristman V, Manno M, Côté P. Loss to follow-up in cohort studies: how much is too much? Eur J Epidemiol. 2004;19:751–60.

Egorov AI, Griffin SM, Ward HD, Reilly K, Fout GS, Wade TJ. Application of a salivary immunoassay in a prospective community study of waterborne infections. Water Res. 2018;142:289–300.

Egorov AI, Converse R, Griffin SM, Bonasso R, Wickersham L, Klein E, et al. Recreational water exposure and waterborne infections in a prospective salivary antibody study at a Lake Michigan beach. Sci Rep. 2021;11:1–10.

Public Health Agency of Canada. FoodNet Canada annual report 2018. 2019. https://www.canada.ca/en/public-health/services/surveillance/foodnet-canada/publications/foodnet-canada-annual-report-2018.html . Accessed 17 June 2024.

Acknowledgements

We acknowledge the anonymous peer reviewers that provided comments, feedback, and suggestions to earlier iterations of this study in multiple rounds of funding submissions to the CIHR Project Grant competition. We acknowledge and thank all the beachgoers that agreed to participate in the data collection for this study to date. We thank the student data collectors from Toronto Metropolitan University that recruited and surveyed beachgoers at the Toronto site in 2023: Jenice Mun, Mariam Ayub, and Emily Mullen.

This study protocol was peer reviewed and funded by the Canadian Institutes of Health Research (CIHR), grant numbers PJT 185894 and PJT 192023 (P.I. Young).

Author information

Authors and affiliations

School of Occupational and Public Health, Toronto Metropolitan University, Toronto, ON, Canada

Ian Young, Binyam N. Desta, J. Johanna Sanchez & Jordan Tustin

School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada

Shannon E. Majowicz

Department of Biology, McMaster University, Hamilton, ON, Canada

Thomas A. Edge & Herb E. Schellhorn

Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada

Sarah Elton

Department of Population Medicine, University of Guelph, Guelph, ON, Canada

David L. Pearl

Water and Air Quality Bureau, Health Canada, Ottawa, ON, Canada

Teresa Brooks

Centre for Food-Borne, Environmental and Zoonotic Infectious Diseases, Public Health Agency of Canada, Guelph, ON, Canada

Andrea Nesbitt

Toronto Public Health, Toronto, ON, Canada

Mahesh Patel

Vancouver Coastal Health, Vancouver, BC, Canada

Michael Schwandt

Water Science and Watershed Management Branch, Manitoba Environment and Climate, Winnipeg, MB, Canada

Dylan Lyng

Public Health Niagara Region, Thorold, ON, Canada

Brandon Krupa

Halifax Regional Municipality, Halifax, NS, Canada

Elizabeth Montgomery

Contributions

Conceptualization and study methodology: all authors. Data analysis plan: IY, BND, JS, SM, DP, and JT. Drafting of protocol: led by IY with contributions from all other co-authors. All authors reviewed and approved the final version for publication.

Corresponding author

Correspondence to Ian Young.

Ethics declarations

Ethics approval and consent to participate

This study protocol was reviewed and approved by the Toronto Metropolitan University Research Ethics Board (2023–043). Study participants will provide informed consent by completing a digitally submitted form. Assent will be sought from children and youth who do not have the capacity to provide informed consent, provided a parent or guardian is present and consents to their participation.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Beachgoer surveys. Canadian beach cohort study beach and follow-up survey questionnaires

Additional file 2. Confounding variables for each model. Confounding variables conceptually and/or empirically related to beach water contact exposure and each of the acute water-borne illness outcomes, and that were determined to be part of the minimal sufficient adjustment set in the directed acyclic graph assessment

Additional file 3. Power and precision analysis. Simulation parameters, details, and results summary for the Bayesian power and precision analysis of the primary relationship of interest

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article

Young, I., Desta, B.N., Sanchez, J.J. et al. Canadian beach cohort study: protocol of a prospective study to assess the burden of recreational water illness. BMC Public Health 24, 2502 (2024). https://doi.org/10.1186/s12889-024-19889-6

Download citation

Received: 26 June 2024

Accepted: 26 August 2024

Published: 13 September 2024

DOI: https://doi.org/10.1186/s12889-024-19889-6

Keywords


  • Cohort study
  • Beach water
  • Gastrointestinal illness
  • Epidemiology
  • Environmental health

BMC Public Health

ISSN: 1471-2458

interview protocol example qualitative research

IMAGES

  1. Qualitative Research Interview Protocol Template

    interview protocol example qualitative research

  2. 242 Assignment 1 Interview Protocol

    interview protocol example qualitative research

  3. Qualitative Research Interview Protocol Template

    interview protocol example qualitative research

  4. Qualitative interview protocol.

    interview protocol example qualitative research

  5. An Example Interview Protocol Form.docx

    interview protocol example qualitative research

  6. (PDF) Creating Qualitative Interview Protocols

    interview protocol example qualitative research

VIDEO

  1. SAMPLING PROCEDURE AND SAMPLE (QUALITATIVE RESEARCH)

  2. How to Prepare for Interview

  3. Designing Semi-Structured Interview Guides for Implementation Research

  4. Qualitative Data #qualitativeresearch #qualitative

  5. Mastering Research Interviews: Proven Techniques for Successful Data Collection

  6. How to Create a User Reseach Survey Using Google Forms

COMMENTS

  1. PDF Appendix 1: Semi-structured interview guide

    health research: a qualitative study protocol 2 Appendix 2: Participant Information Sheet Experiences with Methods for Identifying and Displaying Research Gaps We invite you to take part in our research study. Before you decide whether to participate, you should understand why the research is being done and what it will involve.

  2. PDF Writing Interview Protocols and Conducting Interviews: Tips for

    10. Be willing to make "on the spot" revisions to your interview protocol. Many times when you are conducting interviews a follow up question may pop into your mind. If a question occurs to you in the interview ask it. Sometimes the "ah-ha" question that makes a great project comes to you in the moment.

  3. (PDF) Creating Qualitative Interview Protocols

    From a Narrative Inquiry approach interview protocols were developed based upon the exploration of a research question. The technique may be applied when gathering qualitative data in one-on-one ...

  4. Preparing for Interview Research: The Interview Protocol Refinement

    The interview protocol framework is comprised of four-phases: Phase 1: Ensuring interview questions align with research questions, Phase 2: Constructing an inquiry-based conversation, Phase 3: Receiving feedback on interview protocols Phase 4: Piloting the interview protocol. Each phase helps the researcher take one step further toward ...

  5. PDF CONDUCTING IN-DEPTH INTERVIEWS: A Guide for Designing and Conducting In

    In-depth interviewing is a qualitative research technique that involves conducting intensive individual interviews with a small number of respondents to explore their perspectives on a particular idea, program, or situation. For example, we might ask participants, staff, and others associated with a program about their experiences and ...

  6. Interview protocol design

    Interview protocol design. On this page you will find our recommendations for creating an interview protocol for both structured and semi-structured interviews. Your protocol can be viewed as a guide for the interview: what to say at the beginning of the interview to introduce yourself and the topic of the interview, how to collect participant ...

  7. PDF TIPSHEET QUALITATIVE INTERVIEWING

    TIPSHEET QUALITATIVE INTERVIEWINGTIP. HEET - QUALITATIVE INTERVIEWINGQualitative interviewing provides a method for collecting rich and detailed information about how individuals experience, understand. nd explain events in their lives. This tipsheet offers an introduction to the topic and some advice on. arrying out eff.

  8. Prompts, Not Questions: Four Techniques for Crafting Better Interview

    We offer effective ways to write interview protocol "prompts" that are generative of the most critical types of information researchers wish to learn from interview respondents: salience of events, attributes, and experiences; the structure of what is normal; perceptions of cause and effect; and views about sensitive topics. We offer tips for writing and putting into practice protocol ...

  9. PDF Prompts, Not Questions: Four Techniques for Crafting Better Interview

    the practice of conducting interviews. We provide illustrative examples from our and others' research to show how generally minor tweaks to interview protocols can go a long way in accomplishing the purpose of interviewing no matter the overarching research question or population under study. Setting an Intention: Prompts Not Questions

  9. Twelve tips for conducting qualitative research interviews

    How do you conduct effective and ethical qualitative research interviews? This article provides twelve tips for researchers who want to collect rich and trustworthy data from their interviewees. The tips cover topics such as preparation, rapport, probing, recording, and analysis. The article draws on examples from various disciplines and contexts to illustrate the practical and ethical issues ...

  10. (PDF) How to Conduct an Effective Interview; A Guide to Interview

    Vancouver, Canada. Abstract: Interviews are one of the most promising ways of collecting qualitative data through the establishment of communication between researcher and interviewee. ...

  11. Designing the interview guide (Chapter 5)

    The interview guide serves many purposes. Most important, it is a memory aid to ensure that the interviewer covers every topic and obtains the necessary detail about the topic. For this reason, the interview guide should contain all the interview items in the order that you have decided. The exact wording of the items should be given, although ...

  12. Writing Interview Protocols and Conducting Interviews: Tips for Students New to the Field of Qualitative Research

    Students new to doing qualitative research in the ethnographic and oral traditions often have difficulty creating successful interview protocols. This article offers practical suggestions, for students new to qualitative research, both for writing interview protocols that elicit useful data and for conducting the interview. This piece was originally developed as a classroom tool and can be used ...

  13. Chapter 11. Interviewing

    Rapley, Timothy John. 2001. "The 'Artfulness' of Open-Ended Interviewing: Some Considerations in Analyzing Interviews." Qualitative Research 1(3):303-323. Argues for the importance of the "local context" of data production (the relationship built between interviewer and interviewee, for example) in properly analyzing interview data.

  14. Appendix: Qualitative Interview Design

    One of the more popular areas of interest in qualitative research design is that of the interview protocol. Interviews provide in-depth information pertaining to participants' experiences and viewpoints of a particular topic. ... Examples of Useful and Not-So-Useful Research Questions. To assist the ...

  15. [PDF] Preparing for Interview Research: The Interview Protocol Refinement Framework

    Interviews provide researchers with rich and detailed qualitative data for understanding participants' experiences, how they describe those experiences, and the meaning they make of those experiences (Rubin & Rubin, 2012). Given the centrality of interviews for qualitative research, books and articles on conducting research interviews abound. These existing resources typically focus on: the ...

  16. PDF Annex 1. Example of the semi-structured interview guide

    Example of the semi-structured interview guide. Viral hepatitis: semi-structured interview. M/F; provider/community member/both; age; region. 1. Qualitative interview introduction. Length: 45-60 minutes. Primary goal: to see things the way you see them ... more like a conversation with a focus on your experience, your opinions and what you ...

  17. Types of Interviews in Research

    There are several types of interviews, often differentiated by their level of structure. Structured interviews have predetermined questions asked in a predetermined order. Unstructured interviews are more free-flowing. Semi-structured interviews fall in between. Interviews are commonly used in market research, social science, and ethnographic ...

  18. (PDF) THE PROCESS OF QUALITATIVE INTERVIEW: PRACTICAL ...

    The main purpose of the current paper is to offer practical insights on the process of qualitative data collection through semi-structured interviews for novice researchers. The paper has ...

  19. Sample Interview Protocol Form

    Faculty Interview Protocol. Institutions: _____ Interviewee (Title and Name): _____ Interviewer: _____ ... Our research project as a whole focuses on the improvement of teaching and learning activity, with particular interest in understanding how faculty in academic programs are engaged in this activity, how they ...

  20. How To Do Qualitative Interviews For Research

    5. Not keeping your golden thread front of mind. We touched on this a little earlier, but it is a key point that should be central to your entire research process. You don't want to end up with pages and pages of data after conducting your interviews and realize that it is not useful to your research aims.

  21. Qualitative Protocol Guidance and Template

    Qualitative Protocol Guidance and Template ... in practice this means satisfying itself that the research protocol, research team and the research ... were reviewed, and a survey and interviews were undertaken to explore what was happening and how best to achieve the potential of federations. This work highlighted that there are various motivations for ...

  22. The Ultimate Guide to Transcribing Qualitative Research Interviews

    Dovetail is another robust platform that offers a suite of tools for qualitative researchers. It stands out for its focus on collaborative analysis and rich data visualization capabilities. Pricing: $29 per month. Features: import audio and video files with automatic, accurate transcription.

  23. A new patient-reported outcome measure for the evaluation of ankle instability

    The process begins with qualitative research based on face-to-face interviews with CAI individuals to explore the subjective experience of living with ankle instability. ... Table 2: Qualitative research interview guide. ... This study protocol describes the process of developing and validating a new disease-specific patient-reported tool ...

  24. A qualitative analysis of health service problems and the strategies

    Background: The COVID-19 pandemic disrupted health systems around the globe. Lessons from health systems' responses to these challenges may help design effective and sustainable health system responses for future challenges. This study aimed to (1) identify the broad types of health system challenges faced during the pandemic and (2) develop a typology of health system responses to these challenges ...

  25. A qualitative study on reasons for women's loss and ...

    An exploratory, descriptive qualitative study using 46 in-depth interviews was employed among purposively selected women who were lost from Option B+ care or resumed care after loss to follow-up (LTFU), health care ...

  26. Barriers and facilitators to implementing imaging-based diagnostic

    Objectives: To identify the barriers and facilitators to the successful implementation of imaging-based diagnostic artificial intelligence (AI)-assisted decision-making software in China, using the updated Consolidated Framework for Implementation Research (CFIR) as a theoretical basis to develop strategies that promote effective implementation. Design: This qualitative study involved ...

  27. Promoting sustainable behavior: addressing user clusters through

    While qualitative research can add an in-depth understanding of the results, quantitative research can provide numerical data, offering possibilities for statistical analyses and generalization ...

  28. Canadian beach cohort study: protocol of a prospective study to assess

    This protocol describes the methodology for a study to determine the risk and burden of RWI due to exposure to fecal pollution at beaches in Canada. This study will use a mixed-methods approach, consisting of a prospective cohort study of beachgoers with embedded qualitative research. ... The qualitative research phase will include focus groups ...