The life history interviews ran for 40–60 minutes. The timing for sessions 2 and 3 is not provided.
Interviews are the most common data collection technique in qualitative research. There are four main types of interviews; the one you choose will depend on your research question, aims and objectives. It is important to formulate open-ended interview questions that are understandable and easy for participants to answer. Key considerations in setting up the interview will enhance the quality of the data obtained and the experience of the interview for the participant and the researcher.
Qualitative Research – a practical guide for health and social care researchers and practitioners Copyright © 2023 by Danielle Berkovic is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
Published on January 27, 2022 by Tegan George and Julia Merkus. Revised on June 22, 2023.
A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. It is one of four types of interviews.
In research, structured interviews are often quantitative in nature. They can also be used in qualitative research if the questions are open-ended, but this is less common.
While structured interviews are often associated with job interviews, they are also common in marketing, social science, survey methodology, and other research fields.
Structured interviews are the most systematized type of interview. In contrast to semi-structured or unstructured interviews, the interviewer uses predetermined questions in a set order.
Structured interviews are often closed-ended. They can be dichotomous, which means asking participants to answer “yes” or “no” to each question, or multiple-choice. While open-ended structured interviews do exist, they are less common.
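In code terms, a structured interview schedule can be modeled as a fixed, ordered sequence of closed-ended items. A minimal Python sketch (the question wording and response options below are illustrative, not taken from any real instrument):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Question:
    text: str
    options: tuple[str, ...]  # ("yes", "no") makes an item dichotomous

# The schedule is a tuple: its contents and order cannot change mid-study.
SCHEDULE = (
    Question("Do you currently have health insurance?", ("yes", "no")),
    Question("How satisfied are you with campus healthcare?",
             ("very satisfied", "satisfied",
              "dissatisfied", "very dissatisfied")),
)

# Every participant is asked the same questions, in the same order.
for i, q in enumerate(SCHEDULE, 1):
    print(i, q.text, q.options)
```

Freezing the dataclass and storing the schedule as a tuple mirrors the defining property of a structured interview: the questions, their wording, and their order are immutable across participants.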
Asking set questions in a set order allows you to easily compare responses between participants in a uniform context. This can help you see patterns and highlight areas for further research, and it can be a useful explanatory or exploratory research tool.
Structured interviews are best used when:
A structured interview is straightforward to conduct and analyze. Asking the same set of questions mitigates potential biases and leads to fewer ambiguities in analysis. It is an undertaking you can likely handle as an individual, provided you remain organized.
Make sure to choose the type of interview that suits your research best. This table shows the most important differences between the four types.
| | Structured interview | Semi-structured interview | Unstructured interview | Focus group |
|---|---|---|---|---|
| Fixed questions | Yes | Partially | No | Partially |
| Fixed order of questions | Yes | No | No | No |
| Fixed number of questions | Yes | No | No | No |
| Option to ask additional questions | No | Yes | Yes | Yes |
Advantages of structured interviews include increased credibility, reliability, and validity; they are also simple, cost-effective, and efficient to administer. Disadvantages include their formal nature, limited flexibility, and limited scope.
It can be difficult to write structured interview questions that approximate exactly what you are seeking to measure. Here are a few tips for writing questions that contribute to high internal validity:
Structured interviews are among the most straightforward research methods to conduct and analyze. Once you’ve determined that they’re the right fit for your research topic, you can proceed with the following steps.
Start with brainstorming some guiding questions to help you conceptualize your research question, such as:
If you have a satisfactory rationale for proceeding with a structured interview, you can move on to designing your questions.
Pay special attention to the order and wording of your structured interview questions. Remember that in a structured interview, both must remain the same for every participant. Stick to closed-ended or very simple open-ended questions.
Depending on your topic, there are a few sampling methods you can use, such as:
Determine whether you will conduct your interviews live or in a pen-and-paper format. If conducted live, decide whether you prefer to talk with participants in person, over the phone, or via video conferencing.
As you conduct your interviews, be very careful that all conditions remain as constant as possible.
After you’re finished conducting your interviews, it’s time to analyze your results.
If you have audio-recorded your interviews, you will likely have to transcribe them prior to conducting your analysis. In some cases, your supervisor might ask you to add the transcriptions in the appendix of your paper.
First, you will have to decide whether to conduct verbatim transcription or intelligent verbatim transcription. Do pauses, laughter, or filler words like “umm” or “like” affect your analysis and research conclusions?
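The difference between the two styles can be sketched programmatically: intelligent verbatim transcription drops the fillers that verbatim transcription keeps. A rough Python illustration (the filler list is an assumption, and a real transcriber still needs human judgment, e.g. for "like" used as a verb):

```python
import re

# Illustrative filler list only; extend to suit your own transcripts.
# The pattern also eats an adjacent comma so the sentence stays readable.
FILLERS = re.compile(r",?\s*\b(umm?|uh+|like|you know)\b,?", re.IGNORECASE)

def intelligent_verbatim(text: str) -> str:
    """Approximate an intelligent-verbatim transcript from a verbatim one
    by stripping filler words and tidying the leftover whitespace."""
    cleaned = FILLERS.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(intelligent_verbatim("Umm, I think, like, the clinic was, uh, fine."))
# -> I think the clinic was fine.
```

If pauses and fillers matter to your analysis (as they do in the conversational-space study later in this reader), you would keep the verbatim text and skip this cleaning step entirely.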
The transcription process is a great opportunity for you to cleanse your data as well, spotting and resolving any inconsistencies or errors that come up as you listen.
After transcribing, it’s time to conduct your thematic or content analysis. This often involves “coding” words, patterns, or themes, separating them into categories for more robust analysis.
Due to the closed-ended nature of many structured interviews, you will most likely be conducting content analysis, rather than thematic analysis.
When conducting content analysis, you can take an inductive or a deductive approach. With an inductive approach, you allow the data to determine your themes. A deductive approach is the opposite, and involves investigating whether your data confirm preconceived themes or ideas.
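A deductive pass can be sketched as matching responses against a preconceived codebook. In this minimal Python sketch, the categories and keywords are hypothetical, invented for illustration; a real codebook would be grounded in your theory and refined against the data:

```python
# Hypothetical deductive codebook: categories are fixed before coding.
CODEBOOK = {
    "cost": ["expensive", "afford", "bill", "insurance"],
    "access": ["appointment", "wait", "distance", "hours"],
    "quality": ["thorough", "rushed", "listened", "staff"],
}

def code_response(response: str) -> list[str]:
    """Return every category whose keywords appear in the response."""
    text = response.lower()
    return [cat for cat, words in CODEBOOK.items()
            if any(w in text for w in words)]

responses = [
    "The bills are expensive compared to home.",
    "I had to wait three weeks for an appointment.",
]
for r in responses:
    print(code_response(r))
```

An inductive approach would invert this: you would read the responses first and let the category labels emerge, building the codebook as you go rather than fixing it up front.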
Content analysis has a systematic procedure that can easily be replicated, lending high reliability to your results. However, keep in mind that while this approach reduces bias, it doesn’t eliminate it. Be vigilant about remaining objective here, even if your analysis does not confirm your hypotheses.
After your data analysis, the next step is to combine your findings into a research paper.
If you conducted inferential statistics in addition to descriptive statistics, you would generally report the test statistic, p-value, and effect size in your results section. These values explain whether your results justify rejecting your null hypothesis and whether the result is practically significant.
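For a simple two-group comparison, these three values can be computed directly. A self-contained sketch for a 2×2 table of counts (the counts are made up for illustration; a real analysis would typically use a statistics package, and larger tables need a different p-value computation):

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table (no continuity correction),
    with its p-value (1 degree of freedom) and phi effect size."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2))   # chi-square(1) survival function
    phi = sqrt(chi2 / n)       # effect size for a 2x2 table
    return chi2, p, phi

# Hypothetical counts: (satisfied, dissatisfied) for two student groups
chi2, p, phi = chi2_2x2(8, 22, 18, 12)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, phi = {phi:.2f}")
```

You would then report something like "χ²(1) = 6.79, p = .009, φ = .34": the test statistic, whether it justifies rejecting the null hypothesis (p < .05 here), and whether the association is practically meaningful (a medium-sized φ).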
You can then conclude with the main takeaways and avenues for further research.
Let’s say you are interested in healthcare on your campus. You attend a large public institution with a lot of international students, and you think there may be a difference in perceptions based on country of origin.
Specifically, you hypothesize that students coming from countries with single-payer or socialized healthcare will find US options less satisfying.
There is a large body of research available on this topic, so you decide to conduct structured interviews of your peers to see if there’s a difference between international students and local students.
You are a member of a large campus club that brings together international students and local students, and you send a message to the club to ask for volunteers.
Here are some questions you could ask:
After conducting your interviews and transcribing your data, you can then conduct content analysis, coding responses into different categories. Since you began your research with the theory that international students may find US healthcare lacking, you would use the deductive approach to see if your hypotheses seem to hold true.
If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.
A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. Structured interviews are often quantitative in nature, and they are best used when:
More flexible interview options include semi-structured interviews, unstructured interviews, and focus groups.
The four most common types of interviews are structured interviews, semi-structured interviews, unstructured interviews, and focus groups.
The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.
There is a risk of an interviewer effect in all types of interviews, but it can be mitigated by writing high-quality interview questions.
George, T. & Merkus, J. (2023, June 22). Structured Interview | Definition, Guide & Examples. Scribbr. Retrieved July 1, 2024, from https://www.scribbr.com/methodology/structured-interview/
A research instrument is a tool you will use to help you collect, measure and analyze the data you use as part of your research. The choice of research instrument will usually be yours to make as the researcher and will be whichever best suits your methodology.
There are many different research instruments you can use in collecting data for your research:
These are the most common ways of carrying out research, but it is really dependent on your needs as a researcher and what approach you think is best to take. It is also possible to combine a number of research instruments if this is necessary and appropriate in answering your research problem.
Anne E. Pezalla
Pennsylvania State University, USA
Michelle Miller-Day
Because the researcher is the instrument in semistructured or unstructured qualitative interviews, unique researcher characteristics have the potential to influence the collection of empirical materials. This concept, although widely acknowledged, has garnered little systematic investigation. This article discusses the interviewer characteristics of three different interviewers who are part of a qualitative research team. The researcher/interviewers – and authors of this article – reflect on their own and each other’s interviews and explore the ways in which individual interview practices create unique conversational spaces. The results suggest that certain interviewer characteristics may be more effective than others in eliciting detailed narratives from respondents depending on the perceived sensitivity of the topic, but that variation in interviewer characteristics may benefit rather than detract from the goals of team-based qualitative inquiry. The authors call for the inclusion of enhanced self-reflexivity in interviewer training and development activities and argue against standardization of interviewer practices in qualitative research teams.
Inner Silence
Writing, Reflecting, Hoping
Slipping into Truth
Interviewing moments
Take me by surprise
Like Sunlight
(Janesick, 1998: 53)
The level of researcher involvement in qualitative interviewing – indeed, the embodiment of the unique researcher as the instrument for qualitative data collection – has been widely acknowledged (e.g. Cassell, 2005; Rubin and Rubin, 2005; Turato, 2005). Because the researcher is the instrument in semistructured or unstructured qualitative interviews, unique researcher attributes have the potential to influence the collection of empirical materials. Although it is common for scholars to advocate for interviewer reflexivity (Ellis and Berger, 2003; Pillow, 2003) and acknowledge the researcher as the primary instrument in qualitative interview studies (Guba and Lincoln, 1981; Merriam, 2002), with some notable exceptions (e.g. Pitts and Miller-Day, 2007; Watts, 2008) few have actually examined the qualitative interview as a collaborative enterprise, as an exchange between two parties, reflecting on the ways in which the interviewer affects the organization of this talk-in-interaction and the processes by which the talk is produced. Given this, the first aim of this study is to provide a reflexive account of how three different interviewers (authors Jonathan, Annie, and Michelle) individually facilitate unique conversational spaces in their qualitative interviews.
Understanding the qualitative interview as social interaction is important for any sole qualitative researcher, but as Miller-Day et al. (2009) pointed out, this may be particularly germane for qualitative research teams (QRTs). Herriott and Firestone (1983) argued that when there is more than one interviewer on a QRT, inconsistencies in interview style and approach may affect the quality of the research conversation and ultimately the study findings. Indeed, several published resources on QRTs suggest that interviewers should receive the same standard training with an eye toward producing consistent strategies and credible findings (Bergman and Coxon, 2005; United States Agency for International Development’s Center for Development Information and Evaluation, 1996). Unfortunately, current literature addressing QRTs has primarily focused on the relationship dynamics among research team members (e.g. Fernald and Duclos, 2005; Rogers-Dillon, 2005; Sanders and Cuneo, 2010; Treloar and Graham, 2003) and on group analytical procedures (e.g. Guest and MacQueen, 2007; MacQueen et al., 1999; Olesen et al., 1994) rather than on team member roles (e.g. interviewer, analyst) or data collection practices (e.g. strategies for building rapport). As QRTs become more prevalent, especially in funded research (Barry et al., 1999; Ferguson et al., 2009), there is a need for more information about how to maximize the use of multiple interviewers and maintain a focus on unified research goals while respecting the flexibility of the in-depth qualitative interview as talk-in-interaction (Mallozzi, 2009; Miller-Day et al., 2009). Toward that end, the second aim of this study is to reflect on and discuss implications of the study findings for qualitative research teams.
The phrase researcher-as-instrument refers to the researcher as an active respondent in the research process (Hammersley and Atkinson, 1995). Researchers ‘use their sensory organs to grasp the study objects, mirroring them in their consciousness, where they then are converted into phenomenological representations to be interpreted’ (Turato, 2005: 510). It is through the researcher’s facilitative interaction that a conversational space is created – that is, an arena where respondents feel safe to share stories on their experiences and life worlds (Owens, 2006).
Across the years, scholars have considered the nature of the researcher-as-instrument as interpreter of empirical materials and as involved in the construction of ideas (Janesick, 2001; Singer et al., 1983). This consideration began to grow after feminist UK scholars such as Oakley (1981) and Graham (1983) criticized quantitative research methods that assumed a detached and value-free researcher in the acquisition and interpretation of gathered data, and was further developed by feminist ethnographers such as Stack (1995), who offered seminal research on ‘dramatizing both writer and subject’ in fieldwork on neighborhoods and communities (p. 1). More recently, scholars have extended their interest in researcher-instruments to consider specific interviewing strategies. Conversation analysis tools have often been used to examine the intricacies of interview conversations, studying the ways in which the ‘how’ of a given interview shapes the ‘what’ that is produced (Holstein and Gubrium, 1995; Pillow, 2003).
While qualitative scholars agree that a conversational space must be created, they often disagree as to what that space should look like. Some scholars argue for a Rogerian interviewing space, where empathy, transparency, and unconditional positive regard are felt (Janesick, 2001; Mallozzi, 2009; Matteson and Lincoln, 2009). Pitts and Miller-Day (2007) documented specific trajectories experienced by qualitative interviewers when establishing rapport with research participants, arguing that a feeling of interpersonal connection was necessary for the qualitative interviewer and interviewee to develop a partnership. These claims are grounded in feminist and poststructuralist perspectives, which hold that ‘the essential self … is not automatically revealed in a neutral environment but can and might need to be benevolently coaxed out into a safe environment, where it can be actualized’ (Mallozzi, 2009: 1045).
Others advocate against a feminist approach to interviewing. Tanggaard (2007), for example, viewed empathy as a dangerous interviewer quality because it tends to create a superficial form of friendship between interviewer and respondent. Self-disclosure has been similarly critiqued (Abell et al., 2006). These critics hold that self-disclosure may actually distance the interviewer from the respondent when it portrays the interviewer as more knowledgeable than the respondent. These studies question the popular assumption that displays of empathy or acts of self-disclosure are naturally interpreted by the respondent as a means of establishing a conversational space of rapport and mutual understanding.
So where do these opposing viewpoints lead us as researchers? For the three of us who are authoring this article, the answer to that question is an unsatisfactory, ‘we are not sure.’ Working as part of a QRT, we were trained in a systematic manner, provided with clear procedures for carrying out our qualitative interviews, and educated in the ultimate goals of the research project. The interviewees in this team project were a fairly homogenous group – rural 6–7th grade students – and all three of us interviewed youth in both grades, both male and female, gregarious and stoic. Yet, the interviews we conducted all turned out to be very different. What stood out to us was that our individual attributes as researchers seemed to impact the manner in which we conducted our interviews and affected how we accomplished the primary objective of the interviews, which was to elicit detailed narratives from the adolescents. Hence, we set forth to better understand how we, as research instruments, individually facilitated unique conversational spaces in our interviews and determine if there were some researcher attributes or practices that were more effective than others in eliciting detailed narratives from the adolescent respondents. Additionally, we sought to reflect on the emergent findings and offer a discussion of how unique conversational spaces might impact QRTs.
The team-based qualitative research
Participants
The empirical materials for the current study came from a larger study designed to understand the social context of substance use for rural adolescents in two Mid-Atlantic states. A total of 113 participants between 12 and 19 years old (M = 13.68, SD = 1.37) were recruited from schools identified as rural based on one of two main criteria: (a) the school district being located in a ‘rural’ area as determined by the National Center for Education Statistics (NCES, n.d.); and (b) the school’s location in a county considered ‘Appalachian’ according to the Appalachian Regional Commission (ARC). Participating schools served a large population of economically disadvantaged students, identified by a family income at or below 180 percent of the United States Department of Agriculture federal poverty guidelines; these guidelines start at an annual income of $20,036 and increase by $6,919 for each additional household member (Ohio Department of Education, 2010).
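The quoted figures imply a simple linear eligibility rule. As a quick arithmetic sketch (the dollar amounts come from the passage above; the function name is ours):

```python
def eligibility_threshold(household_size: int) -> int:
    """Annual income cutoff implied by the quoted guidelines:
    $20,036 for a one-person household, plus $6,919 per additional member."""
    if household_size < 1:
        raise ValueError("household must have at least one member")
    return 20_036 + 6_919 * (household_size - 1)

print(eligibility_threshold(4))  # -> 40793
```

So, under the figures as quoted, a student from a four-person household would qualify if family income were at or below $40,793 per year.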
Eleven interviewers comprised the qualitative research team for this team-based study. All underwent at least four hours of interviewer training, which reviewed interview protocol and procedures, summarized guidelines for ethical research, and included interview practice and feedback. During training, interviewers were given a clear interview schedule. Because the interviews were semistructured, the interviewers were instructed to use the schedule as a guide. They were instructed not to read the questions word-for-word from the interview schedule, but instead to use their own phrasing for asking each question, use additional probes or prompts if necessary, and use a communication style that felt comfortable and natural to them. Interviewers were also instructed to interact with their participants as learners attempting to understand the participants’ experiences and realities from their perspectives ( Baxter and Babbie, 2004 ). All interviewers on the team participated in mock interview sessions and were provided with initial feedback about their interview skill.
The interviews themselves were conducted in private locations within the schools such as guidance counselors’ offices or unused classrooms or conference rooms. In most cases, either the adult school contact or the study liaison brought students to their interview site to ensure that the interviewer did not know the students’ names – only their unique identification number. Researchers assured all students their responses would remain confidential, in accordance with Institutional Review Board standards, and the interviewee was permitted to withdraw his/her data from the study at any time. All interviews were digitally recorded and ranged from 18–91 minutes in length. This length is typical of interviews dealing with sensitive topics such as drug use in a school-based setting ( Alberts et al., 1991 ; Botvin et al., 2000 ).
Interview sample
For the purpose of the present study we all agreed that self-reflexivity was necessary to ‘understand ourselves as part of the process of understanding others’ ( Ellis and Berger, 2003 : 486), increase the transparency of our findings, and increase the legitimacy and validity of our research. Therefore, we elected to limit our analysis to only those interviews that the three of us conducted, excluding transcripts from the other eight interviewers in the team-based study. Transcripts of the interviews were provided by a professional transcriptionist who was blind to the purpose of the study. A total of 18 interviews were transcribed (six per interviewer). Further refining the sample, we elected to analyze only interviews that we deemed to be of sufficient quality. Transcript quality was based on two indicators: (a) the level of transcription detail; and (b) the ability of the respondent to speak and understand English. Transcripts that were poorly done (i.e. that failed to include sufficient detail from the interview audio file) or that indicated that the respondent did not understand English were rated as low quality and were not included in final analyses. We took this step to ensure that all transcripts in the study sample were of sufficient quality and provided adequate detail to decipher our interviewer practices. From the 18 originally submitted transcripts, we found 13 to be of sufficient quality, and retained them for analysis.
Following Baptiste’s (2001) advice, the first step in our analysis was to acknowledge our interpretivist orientation and to honestly discuss among ourselves the risks involved with self-reflexively examining our own work. If you think it is difficult to listen to your own voice in an audio-recording, imagine listening to your own voice and simultaneously reading the text illustrating your own interview errors, dysfluencies, and awkward pauses! This first step was perhaps the most difficult, but it resulted in a shared agreement for honest self-reflection and analysis.
The next step involved restricting our analysis to three specifically selected topics from the research interview. The three discussion topics included rural living, identity and future selves, and risky behavior. We identified these topics of discussion because they each represented a different level of emotional risk for the respondents (Corbin and Morse, 2003), based on the assumptions that (a) respondents were all relatively similar in their emotional well-being – specifically, that none were too emotionally fragile to engage in a conversation with us, and (b) discussing topics of illegal or private activities would arouse more powerful emotions in respondents than would topics of legal and mundane activities. Across the entire sample of interviews, conversations on rural living were seen as fairly low-risk topics of discussion. The topic often served as a warm-up for many interview conversations because it was easy for respondents to discuss. Conversations on identity and future selves were typically perceived as moderately uncomfortable for respondents. Respondents were asked to talk about their personality characteristics and who they wanted to become in the future. Although some respondents appeared to enjoy the opportunity to talk about themselves, many appeared mildly uncomfortable doing so, perhaps because they were being asked to talk about themselves with someone they did not know. Conversations on risky behavior were often perceived to be more dangerous. Despite being reassured that their stories would remain confidential, respondents were nevertheless being asked to disclose information about potentially illegal activities in which they had taken part. These topics of discussion were not always mutually exclusive (e.g. respondents often talked about risky behavior when they discussed rural living); but, because every interview in the larger study included topics of discussion that were low, moderate, and highly sensitive, we believed that the three chosen topics of discussion represented an appropriate cross-section of the interview.
Dividing interviews into topics of discussion provided a way to organize long transcripts into relatively distinct topical areas. It also allowed us to examine interviewer practices across comparable topics of discussion, and to assess the ways in which particular characteristics facilitated different conversational spaces.
The next step involved identifying and labeling the discussion of each of the three topics within each of the 13 transcripts. As we labeled the related passages in the transcripts, each of us followed the same iterative analytic process, commencing with an analysis of our own individual transcripts and followed by a cross-case analysis of each other’s transcripts. Our individual, within-case analysis proceeded along four main steps: reading through our own transcripts 2–3 times before extracting the separate topics of discussion; then, within each topic of discussion across all of our own interviews, we inductively identified, interpreted, and labeled what we each saw as important in the utterances, sequencing, and details of the conversational interaction, assessing the ways in which interviewer practices seemed to facilitate and to inhibit respondent disclosure. For our purposes, we defined an interviewer practice as an action performed repeatedly. These practices were eventually categorized into groups of interviewer characteristics. We conceptually defined an interviewer characteristic as a distinguishing general feature or overall quality of the interviewer. Throughout this process we individually developed and refined our code lists, discussing our emergent codes with one another via weekly meetings and email correspondence. As part of this process, we coded our own transcripts and then shared and discussed our code list with the others. Next, each of us (re)coded a portion of each other’s transcripts and calculated the percentage of raw coding agreement. Disagreements were negotiated until we all reached consensus on a working list of codes. This cross-case analysis did not commence until we had reached a minimum coding agreement of .80.
Within the topic of rural living, for example, if two of us each generated five codes to describe one interviewer’s researcher-as-instrument characteristics, consensus was necessary on at least four of those codes before a trustworthy assessment could be made.
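Raw percentage agreement of this kind is simple to compute; a short sketch with illustrative code labels (note this is not chance-corrected, unlike Cohen's kappa):

```python
def raw_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Proportion of items assigned the same code by two coders
    (simple percentage agreement)."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same items")
    matches = sum(x == y for x, y in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Illustrative codes for five transcript passages; coders disagree on one.
a = ["affirming", "energetic", "neutral", "affirming", "interpretive"]
b = ["affirming", "energetic", "neutral", "energetic", "interpretive"]
print(raw_agreement(a, b))  # -> 0.8
```

Agreement of 4 out of 5 codes yields exactly the .80 threshold the authors describe; any lower, and the cross-case analysis would wait until disagreements were negotiated to consensus.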
During the cross-case analysis we compared and contrasted the coded material within and across the entire sample of transcripts to identify discrepancies and consistencies in our codes. From this process, we reduced the code list to a common set of researcher-as-instrument characteristics and interviewing practices that were present in the utterances, sequencing, and details of the conversational interactions. Throughout this process we explicitly identified evidence (excerpts from the interview transcripts) for any research claim to connect the empirical materials with any findings ( Maxwell, 1996 ). The three of us met periodically to conference, share ideas, and challenge and refine emergent findings. We used Nvivo 8 to manage and analyze the interview data. In the end, we were able to (a) identify and describe individual interviewer practices that served to characterize each of us as individual interviewers, and (b) compare and contrast our individual differences within and across the different topics in the interview conversation. During this comparison we paid special attention to the adolescent’s contribution to the conversation and his or her level of disclosure.
Annie’s general interviewer characteristics were coded as affirming, energetic, and interpretive. The affirming characteristic was defined as ‘showing support for a respondent’s idea or belief’ and is illustrated in the following excerpt:
Annie: What do you do?
Resp: I help the milkers, I help –
Annie: You know how to milk a cow? That’s so cool, that’s great.
Resp: Yeah, but you have to watch out ’cause they kick sometimes. ’Cause they don’t want you messing with their teats – they kick, it’s, uh …
Annie: Have you been kicked?
Resp: I got kicked in the arm, but I’m scared I’m gonna get kicked in the face one of these days.
Annie: Yeah, that would really hurt, huh? Oh, wow, that’s amazing.
Comments like ‘that’s so cool, that’s great,’ and ‘Oh, wow, that’s amazing’ illustrated the affirmation. Annie’s affirming characteristic could be seen in other transcript passages in phrases such as ‘great,’ ‘awesome,’ ‘amazing,’ and ‘excellent.’ Annie’s interviewer characteristics were also coded as energetic, defined as ‘showing wonder, astonishment, or confusion by something respondent said that was unexpected, or remarkable.’
Annie: So you like dirt bikes. Do you have one of your own?
Resp: Yeah, I have a, it’s a one, it’s a two-fifty. It’s like a, it’s a CRX 250, it’s like …
Annie: Oh, wow! Is it a pretty big bike? Wow, what do you like to do on it?
Resp: I just ride around in the fields and usually chase after deer on it.
Annie: Really!
Annie: Um, is your sister older or younger?
Resp: She’s younger, she’s ten.
Annie: So you kinda look out for her?
Resp: Yeah. She likes to feed the calves.
Annie: Oooooh!! Cute little baby calves. That’s neat. Wow! How unique. That’s really, really cool.
Annie : What’s a – dwarf bunny? What is that? Resp : Yeah, they’re like little bunnies – they’re about that big. Annie : Like real bunnies? Resp : Yeah, they’re about that big – Annie : Oh, dwarf bunnies. Oooh!
The sheer number of exclamation marks in Annie’s transcripts illustrated her energetic interviewer characteristic, but the words she used (wow, really, oooooh) also illustrated the lively quality of her interview approach.
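The marker counting described above — exclamation marks plus lively words as a rough proxy for the energetic characteristic — could be sketched as follows. The transcript lines and the marker list are illustrative assumptions, not the study's instrument:

```python
import re

# Toy interviewer turns modeled on the excerpts above (illustrative only).
turns = {
    "Annie": ["Oh, wow! Is it a pretty big bike?", "Really!", "That's amazing."],
    "Jonathan": ["Oh, okay.", "Mm hmm.", "Okay."],
}

# Exclamation marks or lively words; the word list is a hypothetical choice.
ENERGY_MARKERS = re.compile(r"!|\b(wow|really|amazing|cool)\b", re.IGNORECASE)

def energy_score(lines):
    """Count energy markers across one interviewer's turns —
    a crude quantitative proxy for the 'energetic' characteristic."""
    return sum(len(ENERGY_MARKERS.findall(line)) for line in lines)

print(energy_score(turns["Annie"]))     # prints 5
print(energy_score(turns["Jonathan"]))  # prints 0
```

A simple count like this cannot replace interpretive coding, but it illustrates why the exclamation marks alone made Annie's transcripts visibly distinct from Jonathan's.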
Lastly, Annie was also characterized as being interpretive, conceptually defined as ‘expressing a personal opinion or interpretation regarding something a respondent said.’ For example:
Resp : And I chugged it and like, I passed out. Annie : Did you have to go to the hospital? Resp : Oh no. We were in the middle of the woods and we weren’t saying anything ’cause we all would get busted. Annie : Oh my gosh, oh, you must have felt terrible.
Annie : Do you think that he drinks beer, or does chew or smokes cigarettes? Resp : He probably does, but – Annie : Do you think so? Um, and so when he offered this to you, were you, were you uncomfortable? Like, did you feel kind of weird?
In all of the above passages, Annie’s interpretive nature is evident in instances where she offers her own construal of the respondent’s story (e.g. ‘you must have felt terrible’), or when she creates a hypothetical scenario for the respondent to comment on (‘do you think he drinks beer?’). Such utterances illustrate her tendency to offer an opinion, either in response to a respondent’s story or before a conversation formally began.
Jonathan’s interviewer characteristics were coded as neutral and naive. The neutral interviewer characteristic, defined as ‘not engaged on one side of argument or another; neither affirming nor disapproving of respondent’s stories,’ was best illustrated by the lack of extensive commentary Jonathan provided in his interviews. In comparison to Annie’s transcripts, Jonathan’s transcripts were characterized by shorter utterances, fewer opinionated responses, and very few exclamation marks:
Jonathan : Who were you living with in [name of town]? Resp : My mom. But she, my grandma got custody of me, so. Jonathan : What, what happened to do that? Like, what, what brought you? Resp : Well, I got put in [the local in-patient treatment facility] ’cause I said I was gonna kill myself. Jonathan : Oh, okay.
Jonathan : Okay. What, um, so does your dad mind if you drink then? Like, if he found out that you were going to the bar party and that you had gotten drunk, what would he say? Resp : He probably wouldn’t do anything because, like, I used to have parties at his house, at my dad’s house. But then he got, then he went to jail, so we stopped [lowers tone, quieter] In case, like, ’cause they were keeping a good eye on him after he got out. Jonathan : Mm hmm. Resp : So we stopped having parties there, just so that, like, my dad wouldn’t get in trouble for, like, the underage drinking. Jonathan : Okay.
It was often difficult to even see evidence of Jonathan’s ‘footprint’ in his transcripts because he maintained a fairly minimal presence in his interviews. As seen from the illustrations above, Jonathan kept many of his responses or comments to single-word phrases: ‘Okay,’ or ‘Mm hmm,’ or ‘Yeah.’ When Jonathan did offer more extensive commentary, it was often to acknowledge his lack of understanding about a subject. His transcripts often included passages like ‘I’ve never been here before’ or ‘I don’t know anything about that.’ It was in these instances that Jonathan’s naive interviewer characteristic, defined as ‘showing a lack of knowledge or information about the respondent,’ was best illustrated:
Jonathan : Is it like illegal? Or is it like the whole town shuts down, they do racing down the streets? Resp : It’s illegal. Jonathan : Yes? I don’t know – you got tell me these things. I am learning.
These illustrations of naivety were most likely uttered to give the respondent a sense of mastery over the topics of discussion, and to elicit the respondent’s interpretations of the events under discussion.
Michelle’s interviewer characteristics illustrated different qualities than either Jonathan or Annie. Michelle’s qualities as an interviewer were coded as being high in affirmation and self-disclosure. Michelle’s transcripts were filled with encouragement and compliments toward her respondents. The following utterances from Michelle illustrate this characteristic:
My goodness, you are smart for a seventh grader … It sounds like you are very helpful … Yes, that is a skill that you have there, that not a lot of people do have …
These instances of affirmation, defined as ‘showing support for a respondent’s idea or belief,’ were found in almost every topic of discussion. Michelle’s transcripts were also filled with instances of self-disclosure. Michelle often used stories of her adolescent son when she was explaining a topic that she wanted to discuss with the adolescent respondents:
Resp : On Friday nights, tonight I’ll go to my gran’s and we usually have a get-together and just play cards, it’s just a thing we do. I like it. It’s just time to spend with family. Michelle : Absolutely. Well, that sounds really nice. And I have a 14-year old in eighth grade. And every Sunday night, we do the game night sort of thing and I look forward to it.
The passages above illustrate three distinct interviewer characteristics: one high in affirmations, energy, and interpretations; another characterized by neutrality and naivety; and another high in affirmations and self-disclosure. Although all three interviewers demonstrated other instrument qualities in their interviews, the few qualities associated with each interviewer above were found in nearly every topic of discussion (e.g. in almost every conversational topic for Annie, there was evidence of her affirming, energetic, and interpretive interviewer characteristics). These qualities seemed to characterize the unique style of the interviewers rather than reflect reactions to specific contexts. These qualities also persisted in our other interviews not included in these analyses.
In the following section, we compare our general interviewer characteristics across the three topics of discussion: rural living, identity and future selves, and risky behavior. We also examine the ways in which our respective interviewer characteristics appeared to influence the conversational space of our interviews. Specifically, we assess how the various interviewer characteristics seemed to facilitate or inhibit respondent disclosure.
Rural living was generally a low-risk topic. In her discussion of this topic with one adolescent, Michelle tended to utilize her self-disclosing characteristic:
Michelle : Are there groups or, like, not cliques, I don’t wanna say, but groups in school; kids who are more like you, who are more into the computers, versus the kids who are huntin’ and fishin’, versus the jocks? I know at my son’s school there are. Resp : There’s not really anybody like that here. Like all of my friends who are like that, they’re in a higher grade than me. But there are some people in my grade where I can relate to in a sense, yeah. Michelle : Okay, so most kids you can relate to are older but most o’ the kids, your peers and your age, are more into the four wheeling and hunting and fishing and kinda stuff like that? That must feel, well, I don’t know, I’m, I’m projecting now unto my own son because sometimes he feels like, that you know, it’s just ridiculous. Resp : Yeah. Michelle : It, eh, ya’ know – and you feel kinda stuck. Resp : Mmm hmm. Michelle : Yeah? Resp : Yeah. I just, like I’ll be sitting there in class and then they’ll start talking about hunting or fishing and I just wanna pull out my hair’ cause I, I don’t know how you can like that stuff. Like it’s just sitting there for a couple of hours doing nothing. Michelle : Right, right.
From the excerpt above, the respondent’s experience with school crowds did not appear to coincide with Michelle’s understanding of her son’s experience with school crowds. However, Michelle’s self-disclosure seemed to open up the conversational space for the respondent to respond in kind. In the final passage, the respondent offered a different perspective on the nature of crowds in his school.
Conversely, in his conversations with respondents about rural living, Jonathan tended to demonstrate his naive interviewer characteristic:
Jonathan : Is this [name of X town]? Is that where you live now? I don’t even know where I am. Okay, okay. I thought this was [name of Y town] is why, but it’s just the name of the High School. Resp : Well, this is [name of Y town], but [name of X town] is out near. Jonathan : Uh, I’m not, I don’t know this area so well … Resp : And then, like, when you hit, there’s this big huge fire station … and then there’s the [name of X town] Elementary School. And then if you go down there and then you turn and you go up, and then that, like, that whole area in there is [name of X town]. Jonathan : Okay. Resp : And then you go back and where there’s classified as [name of X town], but it’s actually [name of Z town]. Jonathan : Okay.
In response to Jonathan’s naivety (‘I don’t even know where I am’ and ‘I don’t know this area so well’), the respondent appeared to seize the opportunity to teach Jonathan about the area. The respondent did not simply answer Jonathan’s questions; he provided information about which Jonathan did not ask (e.g. the whereabouts of the fire station, elementary school, and nearby towns).
In contrast, Annie’s conversations about rural living were filled with her energetic interviewer characteristic:
Annie : What do you mean by hang out, like what do you ha-, what do you do when you hang out? Resp : We go four wheeler riding. Annie : Oh, four wheeler riding! Cool! Is that dangerous? Is it? Resp : Yeah, and we go up to our camp we built. Um … Annie : That you and your friends built? Resp : Mmm hmm. Annie : Wow! How did you know how to do all that? Resp : Um, my brother and a couple of his friends, that we’re really good friends with, helped us. And like, over the summer we camp out like every night. Like, I’m never home in the summertime, ever. Annie : Wow! Resp : There are three bedrooms and it’s, has a wood burner and it, yeah. Annie : That’s like, that sounds like a real house. That’s amazing. Resp : We built it out of trees. We had our, couple of our friends and our dads help us. We’ve had it for three years and it’s really nice.
After Annie’s lively reply to the respondent’s interest in four wheeler riding (‘Oh, four wheeler riding! Cool!’), the respondent opened up about a different, but related topic: her summer camp house. Moreover, Annie’s energetic comment about the house (‘Wow! How did you know how to do all that?’) seemed to open the conversational space even more, as the respondent explained the ways the house was built, the amenities of the house, and the amount of time she spent in the house during the summer.
Conversations about the adolescents’ identity and future selves were considered moderately uncomfortable for adolescents. The interview questions prompted the adolescents to talk about the qualities that described their personal and social identities, along with any hopes and aspirations they had for the future. Although the interview questions were designed to be as unobtrusive as possible, the topic was fairly personal. The interview questions required the adolescent respondent to be introspective with someone with whom they had no personal history:
Jonathan : After you’re all done with school, so you go through and you graduate from a high school. What do you want to do after that? Resp : Go back to Mexico and visit my family, and um get a job. Jonathan : Back in Mexico? Resp : It doesn’t really matter where, but just like get a job. Jonathan : Yes. What kind of job? Resp : Probably like a secretary or whatever job they give me, except prostitute. Jonathan : None a’ that. Is there anything you worry about in that transition of how you’re going to go get a job and what kind of job you’ll get, things like that? Resp : Not really, because like, you just have to like – I dunno, just like – just like – go on with life and whatever happens, just, take it.
Here again, Jonathan’s neutrality was demonstrated not by what he said, but what he did not say. Despite the fact that the adolescent shared a potentially troubling disclosure, that she would consider any job except prostitution, Jonathan kept his personal reactions to a minimum and provided only a short response (i.e. ‘None a’ that’). After this instance of neutrality, Jonathan moved on to a different topic (i.e. asking the respondent if she had any concerns about getting a job in the future), and the respondent moved on, as well, dutifully answering his questions. She provided no more information on her prostitution comment.
In comparison to Jonathan, Michelle and Annie’s utterances in their conversations on identity and future selves were replete with codes for affirmation:
Resp : I wanna be a pediatrician nurse or something. Like, I love kids to death. Like, I’ve, I learned how to change – I’ve been changing diapers – this is no lie – I’ve been changing diapers since I was like seven years old. ‘Cause my mom, step-mom, had a baby before my dad left again, and like I was always changing her diapers and stuff, and like, I babysit constantly. Annie : Aww, I bet you’re really good with kids. Resp : Oh, I’m amazing. Like, there’s this one little boy, like he goes to my church, he’s just like four, and I took him to my house one day and like he asked his mom to buy him a toy at the toy store, I cried, she’s like, she’s like, ‘Aww, I can’t sweetie, I don’t have the money’ and he was crying, he and he’s like ‘All my friends have toys. He was like two and he, like he, like he goes over to this daycare and he’s like ‘All my friends have these toys but I don’t have any.’ Like he had no toys at all and like my mom gave them, handed me a hundred dollars and she’s like ‘Go to, go, go buy toys. We gave him a hundred dollars, like we gave him all this money and they went out and bought like a b-, toys and stuff. It was really nice. Annie : That is, that’s really neat.
Michelle : So the first question that I have here is which of these things that you wrote down are you most proud of? Resp : Well, being helpful. Michelle : How are you helpful? Resp : Well. In school, there are some people that don’t like speak English that well. And I help them by translating. Michelle : Oh okay. Like you are doing for your teacher in there. You are helping do that. So how long have you been bilingual your whole life? Do both of your parents speak Spanish? Resp : Well, yes, they are Mexicans. They barely know English. Michelle : And they barely know English. And when did you come here? Resp : When I was nine months old. Michelle : When you were a baby. And before that you lived where? Resp : In Mexico. Michelle : Mexico. So you are 13, so that was when you were a year old. Okay, got it. Okay, so you learned here. So you speak English better than they do it, sounds like. Okay and then you translate. What’s that like translating for them? Resp : Well, for me it’s like sometimes difficult because I never went to school in Mexico and I know more English than Spanish and when I am translating it’s difficult for me. The big words my parents tell me to try to translate it in English. Michelle : Okay. So you’re doing both ways. You’re doing from English to and from Spanish to English. Both. Does that feel like a lot of responsibility for somebody your age? Resp : Yeah, especially when I got field trips stuff like that. I need to tell my parents, that my parents or if my parents needed something that comes in the mail, may be bills or something like that. Michelle : It sounds like you are very helpful. Who do you want to be when you are out of after high school? Resp : Since I like to help out people a lot, I mean, maybe be a translator and maybe in a hospital or in a school so – Michelle : Yes, that is a skill that you have there, that not a lot of people do have. So that’s – I’m glad you realized that, in terms of that.
Annie’s affirming characteristic could be seen in her affirmation of her respondent’s compassion for children (‘I bet you’re really good with kids’); for Michelle, the characteristic could be seen in her affirmations of her respondent’s willingness to help her parents, teacher, and classmates with their English or Spanish (‘… it sounds like you are very helpful’). Both Michelle and Annie’s affirmation seemed to foster a conversational space that was conducive to uninhibited self-disclosure. In response to Annie’s affirmation of her way with children, the respondent opened up to talk about her talents in working with children, and her compassion for the children in her community who were less fortunate than she was. In response to Michelle’s affirmations about the responsibilities of translating for so many people, the respondent expounded on the difficulties of such a responsibility, and the tasks she must perform for various people (e.g. helping her classmates on field trips, assisting her parents with bills).
Discussions about alcohol, tobacco, and other drug usage (ATOD) were considered highly sensitive topics of discussion, as adolescents were often encouraged to disclose information about their own or their peers’ drug use. Although the respondents were continually reassured that the information they provided was confidential, disclosing information about illegal activity to a stranger was likely a highly sensitive activity. When discussing ATOD with adolescents, each interviewer utilized a different interviewer characteristic. Jonathan’s dominant characteristic when discussing this topic was neutrality :
Resp : Her parents’, like, bar. Like, they own this big, huge bar. And then, like, in the back where the kids can go. Jonathan : Oh, okay. Resp : And her parents don’t really care if you drink. Jonathan : Oh, okay. Resp : Just as long as you do it in the bar. You don’t just go outside, or you don’t tell your parents. Jonathan : Okay. Resp : She doesn’t really know that we drink, but we usually crash in the van, in the RV. Jonathan : Uh huh. Resp : … or out in the yard. And we only do the RV in the summer or in the spring. And then at my other friend’s house who has the bar, we stay at, we do the, we have parties there all the time. Jonathan : Mm hmm. Resp : Just cause her parents don’t care. Jonathan : Yeah.
Even in the midst of some fairly controversial topics of discussion (e.g. underage binge drinking), Jonathan’s neutral characteristic was consistently demonstrated in his calm, even responses (‘okay,’ ‘uh huh’). These neutral responses seemed to provide an unobtrusive backdrop for the respondent to discuss her experiences. Indeed, Jonathan did not even need to ask the respondent any questions. With minimal prompting, the respondent shared her story.
In comparison to Jonathan, when discussing ATOD, Annie’s approach was coded as interpretive; she often interjected commentary about the respondents’ stories of risky behavior:
Annie : Do you think that he drinks beer, or does chew or smokes cigarettes? Resp : He probably does … Annie : Um, and so when he offered this to you, were you, were you uncomfortable? Like, did you feel kind of weird? Resp : Mm hmm. Annie : Um, and, and maybe that boy’s brother – like, that guy’s brother – he might smoke or drink from time to time, but, um, that’s about it? Resp : Mm hmm. Annie : It doesn’t seem like too many kids around here do that stuff. Resp : Not as I know.
Annie’s interpretive characteristic stands in stark contrast to Jonathan’s neutral characteristic. Whereas Jonathan’s responses were short and dispassionate, Annie’s responses were somewhat opinionated. These interpretive comments did not seem to generate a conversational space conducive for the respondent’s continued disclosure. Indeed, the transcript above shows that most of the commentary came from Annie, not the respondent.
In discussions on risky behavior, Michelle’s self-disclosing characteristic was evidenced by her stories of her 14-year-old son, and appeared to serve as a point of identification with respondents:
Resp : My parents get mad because I listen to music a lot and I don’t do anything than watch TV. Just hang out with my friends. Michelle : Then your parents get mad because that’s all you do. You know but the good thing about me is I’m not your parent and I don’t care. So I just want to know what kids are doing. It’s, you know, I have an eighth grader actually he’s 14. And that’s exactly what he does. And in the winter it stinks, though you are right because what else is there to do? You know it’s the question, um any way, okay. So, do you know my question to you is, and again, this is purely confidential, we don’t know names we don’t want names or anything. Has anybody ever offered you any alcohol or cigarettes or marijuana or any of those? And have you said yes or no to that? Resp : Yes, they offered me and I’d always told them ‘no’ and what it does. Michelle : Okay, so tell me … pretend that we’re shooting this video. Okay tell me the who when what where why and how. Right? Where were you, not who, not a name. But was it a friend who was older, younger, male, female? That kind of thing. Tell me the story of at least one of these offers. Resp : Okay. I was hanging out with my friends, just walking around, and there is this bigger kid that we know and he was joined by these smokers, and they would always, he would always tell me never to smoke and we just saw him … And then he offered us and we said no. This is not good for you and he plays soccer and he is not really good at soccer.
Michelle’s self-disclosure about her son experiencing similar challenges as the respondent was initially met by the respondent with a short response. However, Michelle’s subsequent question, framed as a hypothetical task (‘ pretend that we’re shooting this video ’), seemed to create an opening in the conversational space for the respondent to share a story.
In looking closely at the different practices we employed as interviewers, we were able to identify a variety of distinguishing features that seemed to characterize each of us uniquely. If we were characters in a novel or play, Annie’s character name would be energy , Jonathan’s neutrality , and Michelle’s self-disclosure . Across the different conversation topics in the interview, from low to high risk, these interviewer characteristics functioned differently in eliciting detail from adolescent respondents.
When the adolescents and researchers discussed the low-risk topic of rural living, all three interviewer characteristics (i.e. energy, neutrality, and self-disclosure) generated sufficiently detailed responses from the respondents. Variance across interviewers did not seem to have much impact on the quality of the responses obtained from the adolescent participants. This may have been due, in part, to the low-risk nature of the topic. This is a topic many adolescents can talk about easily, have talked about with others, and do not perceive as particularly threatening.
When the topic was moderately risky, as was the topic of identities and future selves, Jonathan’s neutral approach contrasted with Michelle and Annie’s affirming approach. Although neutrality appeared somewhat effective in facilitating an open conversational space for respondents, the affirming interviewer characteristic seemed to offer a more nurturing environment for conversation. Rich, detailed disclosures from adolescents about their identities occurred more often when the interviewer utilized an affirming approach and set a tone of acceptance for the respondents. Affirmation may be particularly important with adolescents, since adolescence is a notoriously vulnerable time in development.
When discussing a high-risk topic such as alcohol and other drug use, Annie’s interpretive approach appeared to be the least effective in providing a satisfying conversational space for respondents. Jonathan’s neutral characteristic and Michelle’s self-disclosing characteristic appeared to elicit detailed information from their respondents, while Annie’s interpretive characteristic only served to inhibit her respondent’s stories. Michelle’s disclosures, while also interpretive, did not appear to limit responses from the adolescents. Couching Michelle’s interpretive language within a personal narrative may have mitigated its presence, although it still presented leading information. Hence, it could be argued that neutrality (displayed in this context by Jonathan) may be most effective when discussing high-risk topics, because this neutrality provides the respondents with the most freedom to disclose what they want and how they want.
An important factor to note in this discussion is that of gender. While we did not explicitly study the role of gender in our analyses, our interviewing styles were rooted in traditional gender norms: Jonathan’s minimalist and neutral styles could be characterized as stereotypically masculine, and Annie and Michelle’s effusive and affirming interviewing styles could be characterized as traditionally feminine. These qualities suggest that interviewing styles cannot be disentangled from one’s gender, and that conversational spaces are influenced by more than simply an interviewer’s words. To this end, practices of reflexivity must acknowledge the implications of what an interviewer says and how it is said, as well as the ways in which those utterances are connected to one’s gender.
Although this study provides some intriguing findings, it was limited in a variety of ways. For one, we did not employ detailed conversation analysis procedures on each individual utterance in the interview. And despite the range of conversational segments in the interviews (i.e. introductions, research explanations, establishing rapport, soliciting honesty and openness, a period of questions and answers on six core topics, summarizing the discussion, and closings), for the purposes of this study, we elected to limit our analysis specifically to three topics in the question and answer segment. Nor did we examine other conversational features, such as the role of silence or turn-taking. Conversational features such as those, while certainly worth our attention, were beyond the scope of this exercise.
Learning about interviewing and doing interviews are different tasks. This lesson was highly relevant for us when conducting this study. Even though we were all trained in interviewing, we still found ourselves displaying the classic mistakes of a novice researcher: asking long, complicated questions, posing closed yes-or-no questions, and leading respondents (deMarrais, 2004). While humbling, these mistakes forced us to reflect on how to develop our skills and have guided our interviewing work since that time. Indeed, the kind of self-reflexivity involved in conducting an analysis of your own interviews, and then comparing and contrasting them with others, could be beneficial for individual interviewers as they hone their craft, and for QRTs desiring to identify unique characteristics of their resident interviewers.
In considering our findings, we agree that researchers are indeed the ‘instruments’ in qualitative interview research. After all, it is through the researcher’s facilitative interaction that a conversational space is created where respondents share rich information about their lives. Yet, we argue that qualitative researchers are differently calibrated instruments.
In QRTs, in particular, the goal is often to calibrate all instruments to one standard of accuracy. However, the results of this study illustrate that variation in interviewer characteristics may be a benefit rather than a detriment to team-based qualitative inquiry. All interviewers in this study were effective in conducting engaging conversations with participants and eliciting information, but we did these things employing different practices, and sometimes to different ends. Each interviewer demonstrated a relatively consistent interviewer style across all of his or her interviews – Jonathan was consistently neutral, Michelle consistently self-disclosive, and Annie consistently energetic. This finding leads us to suggest that QRTs might benefit from learning what ‘natural style’ characterizes a possible interviewer and then staffing their teams with interviewers who have complementary styles. Interviewers may then be assigned interview tasks commensurate with their strengths. For example, our team needed to learn both about rural identity and about alcohol and drug use, so Michelle and Annie could have been assigned to interview respondents about rural identity (a ‘safe’ topic) and future selves (a moderately risky topic), which both fit their affirming styles. This approach could have helped to engage participants in the research and to establish rapport between the participants and the research team. Then, Jonathan could be assigned to the task of summarizing the information learned about the less risky topics and bringing that information into a second interview to pursue the high-risk topic of drug use, implementing his neutral style for a non-evaluative conversational space. This suggestion is founded on a premise similar to utilizing information from personality inventories (e.g. Myers-Briggs) to establish work teams in organizations (Furlow, 2000).
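The staffing heuristic suggested above could be sketched, purely as a toy illustration, as a lookup that matches each interviewer's coded styles against the styles that seemed effective at each risk level in our findings. The mappings below are our own illustrative simplification, not a validated instrument:

```python
# Styles coded for each interviewer in this study (simplified).
INTERVIEWER_STYLES = {
    "Annie": {"affirming", "energetic", "interpretive"},
    "Jonathan": {"neutral", "naive"},
    "Michelle": {"affirming", "self-disclosing"},
}

# Styles that appeared effective at each topic risk level (illustrative
# reading of the findings: any style worked for low-risk topics, affirming
# suited moderate risk, and neutral or self-disclosing suited high risk).
SUITED_STYLES = {
    "low": {"affirming", "energetic", "interpretive",
            "neutral", "naive", "self-disclosing"},
    "moderate": {"affirming"},
    "high": {"neutral", "self-disclosing"},
}

def eligible(topic_risk):
    """Interviewers with at least one style suited to the topic's risk level."""
    return sorted(name for name, styles in INTERVIEWER_STYLES.items()
                  if styles & SUITED_STYLES[topic_risk])

print(eligible("high"))  # prints ['Jonathan', 'Michelle']
```

A real team would weigh many factors beyond style (availability, rapport already established, topic expertise), but the sketch shows how an explicit style-to-topic mapping could inform assignments.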
Since many interviews must occur during a single visit, however, interviewer ‘profiling’ may not be realistic for QRTs. Another suggestion would be to audio-record interview trainees in mock interviews, share those recordings among the team, then devote some time for team members to offer commentary on (a) the ways in which their teammates embodied similar or different instruments in their interviews and (b) how those instruments seemed to create different conversational spaces. This process need not involve detailed conversation analysis tools; nor should it be formal or performance-based. Instead, it should be congenial and constructive, driven by efforts to respect interviewer flexibility while maintaining fidelity to the research approach. These recommendations are in line with calls issued by Mallozzi (2009) and Miller-Day et al. (2009), who argued that consistency efforts be focused on research procedures (e.g. securing consent, managing empirical materials) and not on standardizing interviewer characteristics.
In carrying out these recommendations, more research will be needed to understand the complexities of how and under what conditions interviewer characteristics may impact respondent responses. More research will also be needed on the ways QRT practices may change if reflexivity were incorporated at other stages of the process (e.g. forming research questions and gaining access). Yet this study provides a running start toward that end. Through our exercise, we call for greater interviewer reflexivity and acknowledge that researchers are the primary instruments in qualitative interview studies – but differentially calibrated instruments. We disagree with claims that interviewers in qualitative research teams should receive the same standard training with an eye toward producing consistent interview strategies (Bergman and Coxon, 2005) and argue, instead, that diversity of approaches among members of a research team has the potential to strengthen the team through complementarity.
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Annie Pezalla is the Academic Skills Director at Walden University. Her research addresses identity development across adolescence and young adulthood.
Jonathan Pettigrew is a research analyst and project coordinator for the Drug Resistance Strategies project at Penn State University. His research examines how interpersonal and family interactions correspond with adolescent health.
Michelle Miller-Day is an Associate Professor of Communication Arts and Sciences at the Pennsylvania State University. She is the Founding Director of the Penn State Qualitative Research Interest Group, an interdisciplinary community of researchers involved in and supporting qualitative inquiry at Penn State University. Her research addresses human communication and health, including areas such as substance use prevention, suicide, and families and mental health. Her community-embedded research has involved numerous creative projects to translate research findings into social change. For the past 20 years she has served as the principal qualitative methodologist for a National Institute on Drug Abuse line of research.
Anne E Pezalla, Pennsylvania State University, USA.
Jonathan Pettigrew, Pennsylvania State University, USA.
Michelle Miller-Day, Pennsylvania State University, USA.
Last updated 7 March 2023. Reviewed by Miroslav Damyanov.
Interviews can help you understand the context of a subject, eyewitness accounts of an event, people's perceptions of a product, and more.
In some instances, semi-structured or unstructured interviews can be more helpful; in others, structured interviews are the right choice to obtain the information you seek.
In some cases, structured interviews can save time, making your research more efficient. Let’s dive into everything you need to know about structured interviews.
Structured interviews are also known as standardized interviews, patterned interviews, or planned interviews. They’re a research instrument that uses a standard sequence of questions to collect information about the research subject.
Often, you’ll use structured interviews when you need data that’s easy to categorize and quantify for a statistical analysis of responses.
Structured interviews are incredibly effective at helping researchers identify patterns and trends in response data. They’re great at minimizing the time and resources necessary for data collection and analysis.
Often, researchers use structured interviews for quantitative research . In these cases, they usually employ close-ended questions.
Close-ended questions have a fixed set of responses from which the interviewer can choose. Because of the limited response selection set, response data from close-ended questions is easy to aggregate and analyze.
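Because closed-ended answers come from a fixed set, tallying and summarizing them is mechanical. A minimal sketch in Python (the question and the responses are hypothetical):

```python
from collections import Counter

# Hypothetical responses to the dichotomous question
# "Do you exercise regularly? (yes/no)"
responses = ["yes", "no", "yes", "yes", "no", "yes"]

counts = Counter(responses)
total = len(responses)

# Report the count and share of each fixed response option
for option in ("yes", "no"):
    print(f"{option}: {counts[option]} ({counts[option] / total:.0%})")
```

The same pattern extends to multiple-choice items: because every answer is one of a known set of options, aggregation never requires interpreting free text.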
Researchers often employ multiple-choice or dichotomous close-ended questions in interviews.
For multiple-choice questions, interviewees choose between three or more possible answers. The interviewer will often restrict the options to four or five, since interviewees struggle to keep more in mind, which can slow down and complicate the interview process.
For dichotomous questions, the interviewee may choose between two possible options. Yes or no and true or false questions are examples of dichotomous questions.
Open-ended questions can also appear in structured interviews, though researchers mainly use them when conducting qualitative research and looking for in-depth information about the interviewee's perceptions or experiences.
These questions take longer for the interviewee to answer, and the answers take longer for the researcher to analyze. There's also a higher possibility of the researcher collecting irrelevant data. However, open-ended questions are more effective than close-ended questions in gathering in-depth information.
Sometimes, researchers use structured interviews in qualitative research. In this case, the research instrument contains open-ended questions in the same sequence. This usage is less common because it can be hard to compare feedback, especially with large sample sizes.
Researchers conduct structured interviews face-to-face, via telephone or videoconference, or through a survey instrument.
Face-to-face interviews help researchers collect data and gather more detailed information. They can capture and analyze facial expressions, body language, tone, and inflection more easily than they could through other interview methods.
However, face-to-face interviews are the most resource-intensive to arrange. You'll likely need to assume travel and other related logistical costs for a face-to-face interview.
These interviews also take more time and are more vulnerable to bias than some other formats. For these reasons, face-to-face interviews are best with a small sample size.
You can conduct interviews via an audio or video call. They are less resource-intensive than face-to-face interviews and can use a larger sample size.
However, it can be difficult for the interviewer to engage effectively with the interviewee within this format, which can inject bias or ambiguity into the responses. This is particularly true for audio calls, especially if the interviewer and interviewee have not met before the interview.
A video call can help the interviewer capture some data from body language and facial expressions, but less so than in a face-to-face interview. Technical issues are another thing to consider. If you’re studying a group of people that live in an area with limited Internet connectivity, this can make a video call challenging.
Survey questionnaires mirror the essential elements of structured interviews by containing a consistent sequence of standard questions. Surveys in quantitative research usually include close-ended questions. This data collection method can be beneficial if you need feedback from a large sample size.
Surveys are resource-efficient from a data administration standpoint but are more limited in the data they can gather. Further, if a survey question is ambiguous, you can’t clear up the ambiguity before someone responds.
By contrast, in a face-to-face or tele-interview, an interviewee may ask clarifying questions or exhibit confusion when asked an unclear question, allowing the interviewer to clarify.
Structured interviews are relevant in many fields. You can find structured interviews in human resources, marketing, political science, psychology, and more.
Academic and applied researchers commonly use them to verify insights from analyzing academic literature or responses from other interview types.
However, one of the most common structured interview applications lies outside the research realm: Human resource professionals and hiring managers commonly use these interviews to hire employees.
A hiring manager can easily compare responses and whittle down the applicant pool by posing a standard set of closed-ended interview questions to multiple applicants.
Further, standard close-ended or open-ended questions can reduce bias and add objectivity and credibility to the hiring process.
Structured interviews are common in political polling. Candidates and political parties may conduct structured interviews with relatively small voter groups to obtain feedback. They ask questions about issues, messaging, and voting intentions to craft policies and campaigns.
The tools you need to conduct a structured interview vary by format. But fundamentally, you will need:
A participant
An interviewer
A pen and pad (or other note-taking tools)
A recording device
A consent form
A list of interview questions
While some interviewees may express qualms about you recording the interview, it’s challenging to conduct quality interviews while taking detailed notes. Even if you have a note-taker in the room, note-taking may introduce bias and can’t capture body language or facial expressions.
Depending on the nature of your study, others may wish to review your sources. If they call your conclusions into question, audio recordings are additional evidence in your favor.
To record, you should ask the interviewee to sign a consent form. Check with your employer's legal counsel or institutional review board at your academic institution for guidance about obtaining consent legally in your state.
If you're conducting a face-to-face interview, a camcorder, digital camera, or even some smartphones are sufficient for recording.
For a tele-interview, you'll find that today's leading video conferencing software applications feature a convenient recording function for data collection.
If a survey is your method of choice, you'll need the survey and a distribution and collection method. Online survey software applications allow you to create surveys by inputting the questions and distributing your survey via text or email.
In some cases, survey companies even offer packages in which they will call those who do not respond via email or text and conduct the survey over the phone.
If you're planning a face-to-face interview, you'll need to take a few steps to do it efficiently.
First, prepare your questions and double-check that the structured interview format is best for your study. Make sure the questions are neutral, unbiased, and close-ended. Ask a friend or colleague to test your questions pre-interview to ensure they are clear and straightforward.
Choose the setting for your interviews. Ideally, you'll select a location that is easy to get to. If you live in a city, consider addresses accessible via public transportation.
The room where your interview takes place should be comfortable, without distraction, and quiet, so your recording device clearly captures your interviewee's audio.
If you're looking to interview people with specific characteristics, you'll need to recruit them. Some companies specialize in interview recruitment. You provide the attributes you need, and they identify a pool of candidates for a fee. Alternatively, you can advertise to participants on social media and other relevant avenues.
If you're looking for college students in a specific region, look at student newspaper ads or affiliated social media pages.
You'll also want to incentivize participation, as recruiting interview respondents without compensation is exceedingly difficult. It’s best to include a line or two about requiring written consent for participation and how you’ll use the interview audio.
When you have an interview participant, discuss the intent of your research and acquire their consent. Ensure your recording tools are working well, and begin your interview.
Don't rely on the recordings alone: Note the most significant insights from your participant, as you could easily forget them when it's time to analyze your data.
You'll want to transcribe your audio at the data analysis stage. Some recording applications use AI to generate transcripts. Remove filler words and other sounds to generate a clear transcript for the best results.
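Filler-word cleanup of an auto-generated transcript can itself be scripted. A small sketch, assuming a hypothetical filler list (real transcripts usually need a manual pass afterward for leftover punctuation):

```python
import re

# Hypothetical filler words to strip from a raw transcript
FILLERS = ["um", "uh", "er", "you know"]

def clean_transcript(text: str) -> str:
    for filler in FILLERS:
        # Remove the filler plus any trailing comma and whitespace
        text = re.sub(rf"\b{re.escape(filler)}\b,?\s*", "", text,
                      flags=re.IGNORECASE)
    # Collapse any double spaces left behind
    return re.sub(r"\s{2,}", " ", text).strip()

raw = "Um, I think, you know, the interview went, uh, pretty well."
print(clean_transcript(raw))
# → "I think, the interview went, pretty well."
```

The word boundaries (`\b`) matter: without them, "er" would be stripped out of words like "interview".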
A written transcript will help you analyze data and pull quotes from your audio to include in your final research paper.
Typically, you'll find researchers using at least one of these other common interview types:
As the name suggests, semi-structured interviews include some elements of a structured interview. You’ll include preplanned questions, but you can deviate from those questions to explore the interviewee's answers in greater depth.
Typically, a researcher will conduct a semi-structured interview with preplanned questions and an interview guide. The guide will include topics and potential questions to ask. Sometimes, the guide may also include areas or questions to avoid asking.
In an unstructured interview , the researchers approach the interview subjects without predetermined questions. Researchers often use this qualitative instrument to probe into personal experiences and testimony, typically toward the beginning of a research study.
Often, you’ll validate the insights you gather during unstructured and semi-structured interviews with structured interviews, surveys, and similar quantitative research tools.
Focus group interviews differ from the other three types of interviews as you pose the questions to a small group. Focus groups are typically either structured or semi-structured. When researchers employ structured interview questions, they are typically confident in the areas they wish to explore.
Semi-structured interviews are perfect for a researcher seeking to explore broad issues. However, you must be careful that unplanned questions are unambiguous and neutral. Otherwise, you could wind up with biased results.
A structured interview consists of standard preplanned questions for data collection. These questions may be close-ended, open-ended, or a combination.
By contrast, an unstructured interview includes unplanned questions. In these interviews, you’ll usually equip facilitators with an interview guide. This includes guidelines for asking questions and samples that can help them ask relevant questions.
Relative to other interview formats, a structured interview is usually more time-efficient. With a preplanned set of questions, your interview is less likely to go into tangents, especially if you use close-ended questions.
The more structure you provide to the interview, the more likely you are to generate responses that are easy to analyze. By contrast, an unstructured interview may involve a freewheeling conversation with off-topic and irrelevant feedback that lasts a long time.
A structured question is any question you ask in an interview that you’ve preplanned and standardized.
For example, if you conduct five interviews and the first question you ask each one is, "Do you believe the world is round, yes or no?" you have asked them a structured question. This is also a close-ended dichotomous question.
Chapter 11: Quantitative Interview Techniques & Considerations
Much of what we learned in the previous chapter on survey research applies to quantitative interviews as well. In fact, quantitative interviews are sometimes referred to as survey interviews because they resemble survey-style question-and-answer formats. They might also be called standardized interviews . The difference between surveys and standardized interviews is that questions and answer options are read to respondents in a standardized interview, rather than having respondents complete a survey on their own. As with surveys, the questions posed in a standardized interview tend to be closed-ended. There are instances in which a quantitative interviewer might pose a few open-ended questions as well. In these cases, the coding process works somewhat differently than coding in-depth interview data. We will describe this process in the following section.
In quantitative interviews, an interview schedule is used to guide the researcher as he or she poses questions and answer options to respondents. An interview schedule is usually more rigid than an interview guide. It contains the list of questions and answer options that the researcher will read to respondents. Whereas qualitative researchers emphasize respondents’ roles in helping to determine how an interview progresses, in a quantitative interview, consistency in the way that questions and answer options are presented is very important. The aim is to pose every question-and-answer option in the very same way to every respondent. This is done to minimize interviewer effect, or possible changes in the way an interviewee responds based on how or when questions and answer options are presented by the interviewer.
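The rigidity of an interview schedule can be sketched as a small data structure: the same questions and answer options, administered in the same order, with each answer validated against the fixed options. The questions below are hypothetical, not drawn from any real instrument:

```python
# A standardized schedule: every respondent gets the same questions
# and answer options, in the same order
SCHEDULE = [
    ("Do you exercise regularly?", ["yes", "no"]),
    ("How often do you see a doctor?",
     ["never", "yearly", "monthly", "weekly"]),
]

def administer(answers):
    """Pair each scheduled question with a respondent's answer,
    rejecting anything outside the fixed answer options."""
    record = {}
    for (question, options), answer in zip(SCHEDULE, answers):
        if answer not in options:
            raise ValueError(f"{answer!r} is not an option for: {question}")
        record[question] = answer
    return record

print(administer(["yes", "yearly"]))
```

Keeping the schedule in one shared structure is one way a team could reduce interviewer effect: every interviewer reads from the identical question-and-option list.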
Quantitative interviews may be recorded, but because questions tend to be closed-ended, taking notes during the interview is less disruptive than it can be during a qualitative interview. If a quantitative interview contains open-ended questions, recording the interview is advised. It may also be helpful to record quantitative interviews if a researcher wishes to assess possible interviewer effect. Noticeable differences in responses might be more attributable to interviewer effect than to any real respondent differences. Having a recording of the interview can help a researcher make such determinations.
Quantitative interviewers are usually more concerned with gathering data from a large, representative sample. Collecting data from many people via interviews can be quite laborious. In the past, telephone interviewing was quite common; however, growth in the use of mobile phones has raised concern regarding whether or not traditional landline telephone interviews and surveys are now representative of the general population (Busse & Fuchs, 2012). Indeed, there are other drawbacks to telephone interviews. Aside from the obvious problem that not everyone has a phone (mobile or landline), research shows that phone interview respondents were less cooperative, less engaged in the interview, and more likely to express dissatisfaction with the length of the interview than were face-to-face respondents (Holbrook, Green, & Krosnick, 2003, p. 79). Holbrook et al.’s research also demonstrated that telephone respondents were more suspicious of the interview process and more likely than face-to-face respondents to present themselves in a socially desirable manner.
Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
March 7, 2016
We will start with a few key operational definitions. ‘Surveying’ is the process by which the researcher collects data through a questionnaire (O’Leary, 2014). A ‘questionnaire’ is the instrument for collecting the primary data (Cohen, 2013). ‘Primary data’ by extension is data that would not otherwise exist if it were not for the research process, and is collected through both questionnaires and interviews, which we discuss here today (O’Leary, 2014). An ‘interview’ is typically a face-to-face conversation between a researcher and a participant involving a transfer of information to the interviewer (Creswell, 2012). We will investigate each data collection instrument independently, starting with the interview.
Interviews are primarily done in qualitative research and occur when researchers ask one or more participants general, open-ended questions and record their answers. Often audiotapes are utilized to allow for more consistent transcription (Creswell, 2012). The researcher often transcribes and types the data into a computer file in order to analyze it after interviewing. Interviews are particularly useful for uncovering the story behind a participant’s experiences and pursuing in-depth information around a topic. Interviews may be useful to follow up with individual respondents after questionnaires, e.g., to further investigate their responses (McNamara, 1999). In qualitative research specifically, interviews are used to pursue the meanings of central themes in the world of their subjects. The main task in interviewing is to understand the meaning of what the interviewees say (McNamara, 1999). Usually open-ended questions are asked during interviews in hopes of obtaining impartial answers, while closed-ended questions may force participants to answer in a particular way (Creswell, 2012; McNamara, 1999). An open-ended question gives participants more options for responding; for example, “How do you balance participation in athletics with your schoolwork?” (Creswell, 2012). A closed-ended question provides a preset response; for example, “Do you exercise?”, where the answers are limited to yes or no (Creswell, 2012).
The interviewer must be:
Both Creswell and McNamara highlighted very similar points about conducting interviews. McNamara’s literature is less descriptive but simpler and more concise. Another author who comes up consistently in the interviewing literature is Kvale, whose work is much more intensive and broad. These three authors are all very prominent in the interview research literature.
These are the steps that are consistent in the literature on conducting interviews in research (Creswell, 2012; McNamara, 1999):
Questionnaires have many uses, most notably to discover what the masses are thinking. These include: market research, political polling, customer service feedback, evaluations, opinion polls, and social science research (O’Leary, 2014).
Starting out.
Bell & Waters (2014) and O’Leary (2014) each offer clear checklists for creating a questionnaire from beginning to end. By comparing the two, we have created a comprehensive list. Bell starts by reminding the researcher to obtain approval prior to administering the questionnaire, then to reflect on what the research question is and whether this is the best method to obtain the intended information (Bell & Waters, 2014). O’Leary (2014) suggests that you operationalize concepts in the beginning and define the measurable variables. Prior to writing your own questions, O’Leary (2014) would have you explore existing possibilities in order to adapt previous instruments rather than ‘reinventing the wheel’. At this point, both authors have you write your questions.
Bell & Waters (2014) utilize Youngman’s (1982) question types:
Bell & Waters (2014), highlight a plethora of potential difficulties in wording your questions, including ambiguity and imprecision, assumptions, memory, knowledge, double questions, leading questions, presuming questions, hypothetical questions, offensive questions, and questions covering sensitive issues. It is imperative that you check for jargon within your language and return to your hypothesis or objectives often to decide which questions are most pertinent (Bell & Waters, 2014).
Bell & Waters (2014) and O’Leary (2014) seem to disagree on the next step; while O’Leary would focus next on the response category, Bell would have you look further into the wording of the questions. Following O’Leary (2014)’s logic, we decide now whether to use open or closed questions, considering how the category will translate to different data types. Closed response answers include: yes/no, agree/disagree, fill in the blanks, choosing from a list, ordering options, and interval response scales. Any of the three standard scaling methods, (Likert, Guttman, and Thurstone) may be used where appropriate (O’Leary, 2014).
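Coding scaled responses into numbers for analysis is mechanical once the response category is fixed. A sketch of one common convention for a 5-point Likert item, with reverse-scoring for negatively worded items (the labels and mapping are an assumption for illustration, not taken from O’Leary):

```python
# Hypothetical 5-point Likert coding
SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def score(response: str, reverse: bool = False) -> int:
    value = SCALE[response.lower()]
    # Flip negatively worded items so that high scores always
    # point in the same direction across the questionnaire
    return 6 - value if reverse else value

print(score("Agree"))                # → 4
print(score("Agree", reverse=True))  # → 2
```

Fixing the coding scheme before data collection makes the later translation to data types (and any statistical analysis) a straightforward lookup rather than a judgment call.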
Bell & Waters (2014) suggest you check your wording at this point. O’Leary (2014) goes into detail to point out problems with questions such as ambiguity, leading, confronting, offensiveness, unwarranted assumptions, double-barrelled questions, or pretentiousness. Questions to avoid according to O’Leary are those that are:
Both authors emphasize thoughtfulness about the order of questions, considering logic and ease for respondents. O’Leary (2014) goes into further detail regarding issues with organization and length; if the questionnaire is too lengthy, respondents are less likely to complete it. He also suggests researchers avoid asking threatening, awkward, insulting, or difficult questions, especially at the beginning of the questionnaire. Bell & Waters (2014) take a broader view of the aesthetics of the questionnaire: leaving spaces for legibility, limiting the overall number of pages, and considering the impression the document leaves, to highlight a few examples.
Clear and unambiguous instructions for respondents are emphasized by both authors (O’Leary, 2014; Bell & Waters, 2014). This step is followed by a ‘layout’, or rearranging of questions, in both descriptions, likely because this is the best time to review once the questions and other writing are complete. O’Leary (2014) warns researchers to use professional and aesthetically pleasing formatting, and to stay organized in order to attract respondents and to lower the probability of making your own mistakes (repeating questions, for example). O’Leary (2014) offers final instructions to include a cover letter that describes who you are, the aim of the project, assurances of confidentiality, etc. Bell & Waters (2014), however, offer further steps.
Bell & Waters (2014) go into further detail regarding response rates and ensuring you have a representative or generalizable sample, which we believe is irrelevant to this article. More pertinent steps would be to pilot-test your questionnaire with preliminary respondents (even family and friends) and follow-through to preliminary data analysis in order to ensure your methods are effective, making adjustments accordingly (Bell & Waters, 2014). O’Leary (2014) lists six steps in a typical pilot test:
Bell & Waters (2014) briefly consider distribution methods; they emphasize the need to ensure confidentiality, to include a return date, to formulate a plan for ‘bounce backs’ via email, and to record data as soon as it arrives. O’Leary (2014) lists typical methods: face-to-face, snail mail, e-mail, and online. Bell & Waters (2014) highlight the advantage to administering your questionnaire personally, as it enables the researcher to explain the purpose of the study and increases the probability of receiving completed questionnaires in return. The authors go on to emphasize the value of online methods. In particular, they mention “Survey Monkey” as the most popular and versatile survey tool available (Bell & Waters, 2014). O’Leary (2014) suggests sending out reminder letters or E-mails in order to increase response rate and the speed of response.
Bell & Waters (2014) and O’Leary (2014) disagree once again with respect to the analysis. O’Leary (2014) suggests collecting the data as soon as possible, whereas Bell & Waters (2014) suggest the researcher merely glance through the responses prior to coding and recoding, if time allows. Both methods have merit, as the researcher must consider the time they have available, as well as the amount of data they are working with, in order to make a logical decision.
O’Leary (2014) raises some concerns about using questionnaires as a research tool: they are time-consuming and expensive, and sampling is difficult. O’Leary (2014) asserts that questionnaires are ‘notoriously difficult to get right’ and they often do not go as planned.
O’Leary (2014) suggests some obvious strengths for this research method, as administering a questionnaire allows the researcher to generate data specific to their own research and offers insights that might otherwise be unavailable. In listing the additional benefits of questionnaires, O’Leary (2014) suggests that they can:
Cohen et al. (2013, p.394) offer special considerations for administering questionnaires within an educational setting:
Bell, J., & Waters, S. (2014). Doing your research project: A guide for first-time researchers (6th ed.). Maidenhead, Berkshire: Open University Press.
Cohen, L., Manion, L., & Morrison, K. (2013). Research methods in education (7th ed.). Abingdon, Oxon; New York: Routledge. doi:10.4324/9780203720967.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles: Sage.
Kvale, S. (2008). Doing interviews. Thousand Oaks; London: SAGE Publications.
McNamara, C. (1999). General Guidelines for Conducting Interviews, Authenticity Consulting, LLC, Retrieved from: http://www.managementhelp.org/evaluatn/intrview.htm
O’Leary, Z. (2014). The essential guide to doing your research project (2nd ed.). London: SAGE.
Author: ADJP Quad
Published: March 7, 2016
Global Burden of Disease (GBD) estimates have significant policy implications nationally and internationally. Disease burden metrics, particularly for depression, have played a critical role in raising governmental awareness of mental health and in calculating the economic cost of depression. Recently, the World Health Organization ranked depression as the single largest contributor to global disability. The main aim of this paper was to assess the basis upon which GBD prevalence estimates for major depressive disorder (MDD) were made. We identify the instruments used in the 2019 GBD estimates and provide a descriptive assessment of the five most frequently used instruments. The majority of country studies, 356/566 (62.9%), used general mental health screeners or structured/semi-structured interview guides, 98/566 (17.3%) of the studies used dedicated depression screeners, and 112 (19.8%) used other tools for assessing depression. Thus, most of the studies used instruments that were not designed to make a diagnosis of depression or assess depression severity. Our results are congruent with and extend previous research that has identified critical flaws in the data underpinning the GBD estimates for MDD. Despite the widespread promotion of these prevalence estimates, caution is needed before using them to inform public policy and mental health interventions. This is particularly important in lower-income countries where resources are scarce.
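The instrument breakdown reported above is internally consistent, which is easy to verify from the counts given in the text:

```python
# Instrument counts reported for the 566 country studies in the
# 2019 GBD estimates for major depressive disorder
counts = {
    "general screeners / interview guides": 356,
    "dedicated depression screeners": 98,
    "other tools": 112,
}
total = 566

# The three categories partition all 566 studies
assert sum(counts.values()) == total

for name, n in counts.items():
    print(f"{name}: {n}/{total} = {n/total:.1%}")
# → 62.9%, 17.3%, and 19.8%, matching the figures in the text
```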
Authors and Affiliations
Centre for Mental Health, Human Rights, and Social Justice, University of Essex, Colchester, UK
Lisa Cosgrove
Counseling Psychology Department, Applied Ethics Center, University of Massachusetts Boston, 100 William T Morrissey Blvd, Boston, MA, 02125, USA
Lisa Cosgrove, Farahdeba Herrawi & Gianna D’Ambrozio
Population Health Sciences Institute, Newcastle University, Baddiley-Clark Bldg, Newcastle Upon Tyne, NE2 4AX, UK
Petra Brhlikova, Rosanna Lyus & Elia Abi-Jaoude
The Hospital for Sick Children, University of Toronto, 555 University Avenue, Toronto, ON, M5G 1X8, Canada
Allyson M. Pollock
Correspondence to Lisa Cosgrove.
Cosgrove, L., Brhlikova, P., Lyus, R. et al. Global Burden Disease Estimates for Major Depressive Disorders (MDD): A review of diagnostic instruments used in studies of prevalence. Community Ment Health J (2024). https://doi.org/10.1007/s10597-024-01302-6
Received : 20 December 2023
Accepted : 21 May 2024
Published : 01 July 2024
Learn about structured, semi-structured, unstructured and focus group interviews as qualitative research methods. Find out the advantages, disadvantages and examples of each type of interview.
Introduction. Interviewing people is at the heart of qualitative research. It is not merely a way to collect data but an intrinsically rewarding activity—an interaction between two people that holds the potential for greater understanding and interpersonal development. Unlike many of our daily interactions with others that are fairly shallow ...
Interviewing is a common data-gathering instrument used by social scientists to expand their understanding of how humans perceive their social worlds and how they act within them. Nonetheless, the interpretation of interviews is seen as problematic if researchers only present selected instances of interview passages as insights about a research ...
Abstract. This paper examined the implications of using interviews as a method of data collection in the social sciences, with reference to the researcher's experience during fieldwork. The paper is ...
Vancouver, Canada. Abstract. Interviews are one of the most promising ways of collecting qualitative data through the establishment of communication between the researcher and the interviewee. ...
Develop an interview guide. Introduce yourself and explain the aim of the interview. Devise your questions so interviewees can help answer your research question. Give your questions a sequence by grouping them into themes. Make sure you can easily move back and forth between questions and topics. Make sure your questions are clear and ...
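The advice above — group questions into themes, keep the order flexible, keep questions open-ended — can be sketched as a simple data structure. This is a hypothetical illustration (the guide contents and the `interview_guide` name are invented for the example, not drawn from any source above); the open-endedness check is only a rough heuristic.

```python
# Hypothetical sketch: an interview guide as an ordered list of themed
# question groups, so related questions stay together while the
# interviewer can still move between themes. Contents are illustrative.
interview_guide = [
    {"theme": "Background", "questions": [
        "Can you tell me a little about yourself?",
    ]},
    {"theme": "Experience of care", "questions": [
        "What was your experience of the service like?",
        "What, if anything, would you change about it?",
    ]},
]

# Rough heuristic check that questions are open-ended: flag common
# yes/no openers. A real guide would be reviewed by the research team.
closed_openers = ("do ", "did ", "is ", "are ", "was ", "were ", "have ")
for group in interview_guide:
    for question in group["questions"]:
        assert not question.lower().startswith(closed_openers), question

print(f"{len(interview_guide)} themes, "
      f"{sum(len(g['questions']) for g in interview_guide)} questions")
```

Keeping the guide as data rather than prose makes it easy to reorder themes between pilot interviews without rewriting the whole document.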
What are interviews? An interviewing method is the most commonly used data collection technique in qualitative research. 1 The purpose of an interview is to explore the experiences, understandings, opinions and motivations of research participants. 2 Interviews are conducted one-on-one with the researcher and the participant. Interviews are most appropriate when seeking to understand a ...
Learning how to design and use structured interviews, questionnaires and observation instruments is an important skill for researchers. Such survey instruments can be used in many types of research, from case study, to cross-sectional survey, to experiment. A study of this sort can involve anything from a short ...
There are many different research instruments you can use in collecting data for your research: Interviews (either as a group or one-on-one). You can carry out interviews in many different ways. For example, your interview can be structured, semi-structured, or unstructured. The difference between them is how formal the set of questions is that ...
Interviewing. This is the most common format of data collection in qualitative research. According to Oakley, the qualitative interview is a type of framework in which practices and standards are not only recorded, but also achieved, challenged and reinforced. As no research interview lacks structure, most qualitative research interviews are either semi-structured, lightly ...
Most importantly, semi-structured interview questions must be based on the main aim and objectives of the study (Jamshed, 2014). This approach allows for a more in-depth exploration of the ...
The level of researcher involvement in qualitative interviewing - indeed, the embodiment of the unique researcher as the instrument for qualitative data collection - has been widely acknowledged (e.g. Cassell, 2005; Rubin and Rubin, 2005; Turato, 2005).Because the researcher is the instrument in semistructured or unstructured qualitative interviews, unique researcher attributes have the ...
Types of Research Instruments: Interviews
- Structured Interview: A formal set of questions posed to each interviewee and recorded using a standardized procedure.
- Unstructured Interview: A less formal set of questions; the interviewer modifies the sequence and wording of questions.
- Non-Directive Interview: An unguided ...
Researchers often use this qualitative instrument to probe into personal experiences and testimony, typically toward the beginning of a research study. Often, you'll validate the insights you gather during unstructured and semi-structured interviews with structured interviews, surveys, and similar quantitative research tools.
11.1 Conducting Quantitative Interviews. Much of what we learned in the previous chapter on survey research applies to quantitative interviews as well. In fact, quantitative interviews are sometimes referred to as survey interviews because they resemble survey-style question-and-answer formats. They might also be called standardized interviews.
A 'questionnaire' is the instrument for collecting the primary data (Cohen, 2013). 'Primary data' by extension is data that would not otherwise exist if it were not for the research process, and is collected through questionnaires or interviews, which we discuss here today (O'Leary, 2014). An 'interview' is typically a ...
Abstract. The paper describes the interview as an essential instrument in social field research. The main purpose of the interview is to obtain certain information about one or more ...
Using interviews as research instruments. Research projects differ in a number of aspects, but they do have some commonalities. Many facets are involved in conducting research; one essential factor is collecting data. Data can be gathered from a number of sources, including documents, the workplace, the Internet, surveys, focus groups ...
Using interviews as research instruments. This document discusses using interviews as a method for collecting research data. It describes four main types of interviews: structured interviews, semi-structured interviews, unstructured interviews, and non-directive interviews.
University of Cape Coast. Abstract. Deciding on the appropriate data collection instrument to use in capturing the needed data to address a research problem as ...
Self-care agency (SCA), the ability that enables an individual to engage in self-care to promote their health, is significantly associated with adolescents' health outcomes. Because there is no instrument to specifically measure adolescents' SCA in Thailand, the purpose of this study was to translate the Denyes Self-Care Agency Instrument (DSCAI-90) for use with Thai adolescents. This ...
After recognizing and identifying the problems, the researcher tries to figure out an investigational plan to collect the desired facts in an effective manner. In this paper, the interview method has ...
Research instruments: A questionnaire and an interview guide used to investigate the implementation of higher education objectives and the attainment of Cameroon's Vision 2035. June 2021. DOI: 10. ...