Enhancing the sample diversity of snowball samples: Recommendations from a research project on anti-dam movements in Southeast Asia

Julian Kirchherr

1 Faculty of Geosciences, Utrecht University, Utrecht, the Netherlands

2 School of Geography and the Environment, University of Oxford, Oxford, United Kingdom

Katrina Charles

Associated data.

All data underlying the study are within the paper and its Supporting Information files.

Snowball sampling is a commonly employed sampling method in qualitative research; however, the diversity of samples generated via this method has repeatedly been questioned. Scholars have posited several anecdotally based recommendations for enhancing the diversity of snowball samples. In this study, we performed the first quantitative, medium-N analysis of snowball sampling to identify pathways to sample diversity, analysing 211 reach-outs conducted via snowball sampling, resulting in 81 interviews; these interviews were administered between April and August 2015 for a research project on anti-dam movements in Southeast Asia. Based upon this analysis, we were able to refine and enhance the previous recommendations (e.g., showcasing novel evidence on the value of multiple seeds or face-to-face interviews). This paper may thus be of particular interest to scholars employing or intending to employ snowball sampling.

Introduction

Snowball sampling is a commonly employed sampling method in qualitative research, used in medical science and in various social sciences, including sociology, political science, anthropology and human geography [ 1 – 3 ]. As is typical of terms adopted by a variety of fields, however, the phrase ‘snowball sampling’ is used inconsistently across disciplines [ 4 ]. The most frequently employed definition, suggested by Patton [ 5 ], Atkinson and Flint [ 6 ], Cohen and Arieli [ 7 ] and Bhattacherjee [ 8 ], is as a sampling method in which one interviewee gives the researcher the name of at least one more potential interviewee. That interviewee, in turn, provides the name of at least one more potential interviewee, and so on, with the sample growing like a rolling snowball if more than one referral per interviewee is provided.
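As an illustration of this referral logic, the following minimal sketch (added here for clarity, not taken from any of the cited works) grows a sample from a set of seed contacts, with each interviewee supplying further referrals until referrals dry up or a target size is reached. All names, the get_referrals function and the max_size parameter are hypothetical.

```python
import random

def snowball_sample(seeds, get_referrals, max_size=50):
    """Grow a sample from seed contacts via chain referral.

    seeds: initial contacts known to the researcher.
    get_referrals: function returning the contacts an interviewee refers.
    max_size: stop once the sample reaches this size (or referrals dry up).
    """
    sample, queue = [], list(seeds)
    seen = set(seeds)
    while queue and len(sample) < max_size:
        person = queue.pop(0)          # interview the next contact in line
        sample.append(person)
        for referral in get_referrals(person):
            if referral not in seen:   # avoid re-contacting the same person
                seen.add(referral)
                queue.append(referral)
    return sample

# Illustrative use: each interviewee refers zero, one or two random contacts.
random.seed(1)
population = [f"contact_{i}" for i in range(200)]
referrals = lambda person: random.sample(population, random.randint(0, 2))
print(snowball_sample(["seed_A", "seed_B"], referrals, max_size=10))
```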

This definition can initially seem self-explanatory, which may explain why snowball sampling is rarely discussed in most peer-reviewed papers that employ it. Various scholars use snowball sampling in their empirical work, but most provide only limited information on the method (see, e.g., [ 9 – 13 ]). Similarly, qualitative research textbooks often lack substantive discussion of snowball sampling (e.g., [ 8 , 14 – 19 ]). Bailey [ 14 ], for instance, devotes only a half-page of his 595-page book on social research methods to snowball sampling, acknowledging that ‘snowball sampling procedures have been rather loosely codified’ ([ 14 ], p. 96), an observation echoed by Penrod et al. [ 3 ].

This paper focuses on snowball sampling procedures, which we define as those actions undertaken to initiate, progress and terminate the snowball sample [ 1 , 20 ]. Despite the lack of substantive writing on snowball sampling as a method, several authors [ 2 , 3 , 21 ] have provided recommendations for enhancing a sample’s diversity in snowball sampling procedures (we discuss this further in Section 4). However, as this advice is not based on a quantitative analysis of evidence, but only on anecdotal evidence, there is a risk that these recommendations are based on coincidence. The aim of this paper is to provide advice on enhancing the sample diversity of a snowball sample. This advice is grounded in a medium-N analysis of relevant evidence, thus reducing the probability of positing advice that is based on coincidence [ 22 ]. A medium-N analysis is generally based on 10–100 cases, whereas anecdotal evidence is usually based only on a handful of cases [ 23 , 24 ]. At the core of our work, we provide descriptive analyses of various commonly prescribed strategies for enhancing the sample diversity of a snowball sample. These analyses are based on reach-outs to 211 individuals via snowball sampling for a research project on anti-dam movements in Southeast Asia, resulting in 81 interviews conducted between April and August 2015. As far as we are aware, ours is the first medium-N analysis to focus on enhancing the sample diversity of a snowball sample.

The remainder of this paper is organised as follows: in Section 2, we discuss snowball sampling as a method; in Section 3, we present the research project on anti-dam movements in Southeast Asia that served as the basis for our medium-N analysis on snowball sampling procedures; in Section 4, we present and discuss insights on snowball sampling procedures based upon this analysis as well as our resulting recommendations; finally, in Section 5, we summarise our argument.

Throughout this paper, we employ social science methodology terminology. We define key terms for this paper such as ‘snowball sampling’ or ‘sampling’, since these terms are not consistently codified in the scholarly literature. Due to limited space, however, we refrain from defining terms we have deemed common in this field of study, referring only to the relevant literature.

On snowball sampling

Traditional sampling methods comprise two elements [ 25 , 26 ]. First, a full set of data sources is defined, creating a list of the members of the population to be studied, known as a sampling frame. Second, a specific sample of data is collected from this sampling frame. Snowball sampling defies both elements, since it does not rely upon a sampling frame [ 27 ] (which may indicate that a different term for snowball sampling would be more accurate). Snowball sampling is often employed when no sampling frame can be constructed.

Researchers frequently cannot construct a sampling frame if a difficult-to-reach population is to be studied. Difficult-to-reach populations are also referred to as ‘hard-to-reach populations’ [ 28 ], ‘hidden populations’ [ 29 ] or ‘concealed populations’ [ 21 ] in the scholarly literature. Although not all scholars may agree that these terms are interchangeable, we deem them interchangeable for the purposes of this paper. For further discussion of this terminology, see [ 30 , 31 ].

A difficult-to-reach population does not wish to be found or contacted (e.g., illegal drug users, illegal migrants, prostitutes or homeless people [ 6 , 31 ]). Snowball sampling was originally used by researchers to study the structure of social networks [ 32 ]. The earliest empirical account of snowball sampling is from 1955 [ 33 ], with snowball sampling first described as a method in 1958 [ 34 ]. While it is still used to study the structure of social networks [ 35 ], over the last few decades, the method’s key purpose has largely transformed ‘into […] an expedient for locating members of a [difficult-to-reach] population’ ([ 36 ], p. 141).

Researchers grounded in quantitative thinking, such as Lijphart [ 37 ] and King et al. [ 38 ], tend to view the drawing of a random sample from a sampling frame as the gold standard of data collection. Even these researchers may nevertheless consider non-probability sampling methods, such as snowball sampling, a ‘necessary and irreplaceable sampling [method]’ ([ 39 ], p. 367) when confronted with difficult-to-reach populations, particularly if the dismissal of snowball sampling would mean that no research could be conducted at all. Ultimately, ‘an important topic is worth studying even if very little [access to] information is available’ ([ 38 ], p. 6). Still, some of those grounded in quantitative thinking call snowball sampling a method ‘at the margin of research practice’ ([ 6 ], p. 1), since the lack of a sampling frame means that, unlike individuals in a random sample, individuals in a population of interest do not have the same probability of being included in the final sample. Findings from a snowball sample would therefore not be generalisable [ 40 ] (on generalisability, see [ 41 ]).

Several qualitative scholars rebut such criticism. Creswell, for instance, notes that ‘the intent [of qualitative research] is not to generalise to a population, but to develop an in-depth [and contextualised] exploration of a central phenomenon’ ([ 42 ], p. 203). Others [ 1 , 39 ] specifically oppose quantitative scholars’ negative framing of snowball sampling, arguing that this method would ‘generate a unique type of social knowledge’ ([ 1 ], p. 327). Due to the diversity of perspectives gathered, this knowledge would be particularly valuable for an in-depth and contextualised exploration of a central phenomenon. We therefore define the diversity of a sample as a measure of the range of viewpoints that have been gathered on a central phenomenon.

Researchers critical of snowball sampling respond to this defence by arguing that the method is unable to ensure sample diversity, which is a necessary condition for valid research findings. Indeed, some scholars have stated that snowball samples underrepresent and may even exclude those least keen to cooperate, since referrals may not materialise in an interview if a potential interviewee is only somewhat keen or not at all keen to be interviewed [ 3 , 43 ]. Similarly, potential interviewees with smaller networks may be underrepresented, as they are less likely to be referred for an interview [ 31 , 44 ]. Those with smaller networks may also be in a specific network whose different perspectives may be of interest but are excluded in the final sample. Meanwhile, snowball sampling is said to overrepresent those interviewees (and their respective networks) that the interviewer spoke with first; the relevant literature refers to this as ‘anchoring’ [ 20 , 39 ].

We do not aim to argue the ‘validity’ of the method, but rather to inform snowball sampling methodologies in order to promote sample diversity. From a qualitative perspective, ‘validity’ can be defined as ‘the correctness or credibility of a description, conclusion, explanation, interpretation or other sort of account’ ([ 45 ], p. 87), while quantitative researchers frequently use the terms ‘generalisability’ and ‘(external) validity’ interchangeably [ 46 , 47 ]. The term ‘validity’ is contested among qualitative researchers, and some qualitative researchers entirely reject the concept for qualitative work [ 48 , 49 ]. We do not aim to resolve this debate via this paper; instead, we focus on the (seemingly less-contested) term ‘sample diversity’. While we acknowledge that this term is not codified in qualitative textbooks such as the SAGE Encyclopedia of Qualitative Research Methods , sample diversity is considered desirable by the various qualitative scholars we reviewed. Boulton and Fitzpatrick demand, for instance, that qualitative researchers ‘ensure that the full diversity of individuals […] is included [in their sample]’ ([ 50 ], p. 84), a mandate echoed by other scholars [ 16 , 51 – 53 ].

In order to operationalise the concept of sample diversity, we used five key methodological recommendations to inform our research. In this paper, we use quantitative analyses from our experiences with snowball sampling to further reflect on these recommendations, which are briefly described below.

Prior personal contacts of the researcher are required

Patton ([ 5 ], p. 176) notes that snowball sampling ‘begins by asking well-situated people: “Who knows a lot about ____? Who should I talk to?”‘. In the absence of a sampling frame for the population of interest, however, the researcher must retain at least some prior personal or professional contacts in the population of interest which can serve as the seeds of the snowball sample [ 2 , 54 ]. Waters contends that building a diverse snowball sample ‘depend[s] almost exclusively on the researcher’s [prior personal or professional] contacts’ ([ 39 ], p. 372).

Sample seed diversity is important

Morgan [ 21 ] has claimed that the ‘best defence’ against a lack of sample diversity is to begin the sample with seeds that are as diverse as possible. Others echo this advice [ 3 , 39 , 55 ], arguing that it is ‘compulsory for the researcher to ensure that the initial set of respondents is sufficiently varied’ ([ 55 ], p. 55). The term ‘chain referral sampling’ has been used for snowball samples that are strategically built via multiple varying seeds [ 3 ].

Technology means face-to-face interviews are no longer required

Some researchers have argued that face-to-face interviews are obsolete. For instance, over 25 years ago, it was claimed there were ‘no remarkable differences’ ([ 56 ], p. 211) between information collected via telephone and information collected via face-to-face interviews. The increasing use of telecommunications in recent years is likely to have further reduced barriers to remote interviewing, and various scholars [ 57 , 58 ] continue to claim that ‘evidence is lacking that [telephone interviews] produce lower quality data’ ([ 59 ], p. 391). In particular, they have highlighted the benefits of using Skype for semi-structured interviews [ 57 ].

However, for snowball sampling, face-to-face interviews help to generate the trust that scholars claim is required in order to gain referrals [ 1 , 31 , 39 , 60 ]. Noy argues that ‘the quality of the referring process is naturally related to the quality of the interaction: […] if the researcher did not win the informant’s trust […], the chances the latter will supply the former referrals decrease’ ([ 1 ], p. 334).

Persistence is necessary to secure interviews

Although the value of persistence may be considered self-evident by some scholars, it is seen by multiple academics [ 61 – 63 ] as a central virtue of qualitative researchers. Many early-career scholars who embrace snowball sampling are likely to hear such advice as, ‘If you cannot interview your envisaged interviewees initially, don’t give up!’. A ‘helpful hint’ for qualitative researchers seeking informants is, ‘Persevere–repeat contact’ [ 64 ].

More waves of sampling are required to access more reluctant interviewees

As a remedy for snowball sampling’s previously discussed bias towards excluding those least keen to be interviewed, multiple scholars suggest pursuing a snowball sample for multiple waves (with a new sampling wave reached once an interviewee introduces the interviewer to one or more potential interviewees) [ 65 – 68 ]. Those suggesting this remedy assume that pursuing more waves increases the likelihood of being referred to an interviewee from a particularly difficult-to-reach population who is at least somewhat keen to be interviewed.

The research project on anti-dam movements in Southeast Asia

Approval for this study was granted by the Central University Research Ethics Committee (CUREC) of the University of Oxford. Our population of interest for our research project was stakeholders in Southeast Asia’s dam industry. Since ‘the most dramatic conflicts over how to pursue sustainable development’ ([ 69 ], p. 83) have occurred over the construction of large dams, we see this industry as a conflict environment with widely varying viewpoints. A conflict environment is one in which people perceive their goals and interests to be contradicted by the goals or interests of the opposing side [ 70 ]. The major conflicting parties in the dam industry tend to be local and international non-governmental organisations (NGOs) and academics (usually keen not to construct a particular dam) versus international donors, the private sector and governments (usually keen to construct a particular dam) [ 71 , 72 ]. Each sub-population operating in a conflict environment can be considered difficult to reach since fear and mistrust are often pervasive [ 7 ]. Snowball sampling is a suitable research method in conflict environments because the introductions through trusted social networks that are at the core of this method can help interviewees to overcome fear and mistrust, which, in turn, ensures access [ 7 ]. This access is needed to gather the widely varying viewpoints in the hydropower industry, in particular viewpoints with regard to what constitutes just resettlement [ 73 , 74 ]. Based on this rationale, we chose snowball sampling as the main method for our research.

In order to ensure sample diversity for our research project on anti-dam movements in Southeast Asia, we aimed to gather perspectives mostly from six main sub-populations: (1) local NGOs, (2) international NGOs, (3) international donors, (4) academia, (5) the private sector and (6) the government. We hypothesised that ‘dam developers’, a main sub-category of the interviewee category ‘private sector’, would be the most significant challenge to ensuring the diversity of our sample. Early in our process, many of the scholars with whom we discussed our research project argued that it would be impossible to interview a dam developer from a Chinese institution; meanwhile, researchers from a comparable research project that ended approximately when our project started reported being unable to interview any dam developers from European institutions. We also initially failed to collect data from dam developers: for instance, a survey we initiated that was distributed by Aqua~Media (host of a major global dam developer conference) to more than 1,500 dam developers yielded just five responses, only one of which was complete. We considered this weak response rate to be due, at least in part, to the dam industry’s negative view of academics since the publication of Ansar et al. [ 75 ], which Nombre ([ 76 ], p. 1), the president of the International Commission on Large Dams (ICOLD), called ‘[highly] misleading’.

None of our researchers had significant direct links to the dam industry upon the start of the project; however, we did retain a variety of indirect links. Our researchers had past links to a management consultancy that serves various dam industry players, (more limited) links to an international donor working in the hydropower sector and links to activists in Myanmar advocating against dam projects.

After a favourable ethics review of our study by the CUREC of the University of Oxford, we commenced semi-structured interviews in April 2015, mostly via cold calls (we include cold e-mails in the term ‘cold calls’ throughout this paper). Initially, we conducted research via telephone only. We then undertook field research in Singapore, Myanmar and Thailand from June to August 2015 and terminated our data collection in late August 2015.

In total, 81 semi-structured interviews were carried out during this period. From a qualitative perspective, this is a relatively large sample size (for instance, the average qualitative PhD dissertation is based on 31 interviews [ 77 ]); from a quantitative perspective, however, the sample size is quite small [ 78 ]. Of our 81 interviews, 48 (59%) were conducted via telephone, 26 (32%) face-to-face and 7 (9%) online, either via e-mail or an online survey. Most of our interviews (57%) were carried out in July in Myanmar. Of our 81 interviewees, only 24 (30%) were women. Researchers who employ snowball sampling frequently employ personal/professional contact seeds and cold call seeds to build their sample (e.g., [ 2 , 79 , 80 ]), with a seed defined as the starting point of a sample [ 65 ]. Of the 81 interviews analysed, 53 (65%) were rooted in a personal or professional contact ( Fig 1 ) (i.e. the seed of the interview pathway was a contact we had already retained prior to the research project). The remaining 28 (35%) interviews were rooted in cold calls.
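The shares reported above follow directly from the raw counts; as a quick arithmetic check, using only the counts quoted in this paragraph:

```python
# Counts reported in the text; 81 interviews in total.
total = 81
by_mode = {"telephone": 48, "face-to-face": 26, "online (e-mail or survey)": 7}
by_seed = {"personal or professional contact": 53, "cold call": 28}

for counts in (by_mode, by_seed):
    for category, n in counts.items():
        print(f"{category}: {n}/{total} = {n / total:.0%}")
# telephone 59%, face-to-face 32%, online 9%; contacts 65%, cold calls 35%
```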

[Fig 1: pone.0201710.g001.jpg]

Given the sensitive nature of the interview topic, all interviewees were assured anonymity. Thus, all of the interviews are coded, with the first letter indicating the mode of interview ( T for telephone, F for face-to-face, O for online survey or e-mail), the second letter indicating the category of interviewee ( A for academia, G for government, I for international donor, NI for international NGO, NL for national NGO, P for private sector) and the sequence of numbers indicating the interview number within a particular mode. Researcher A is indicated by RA , Researcher B by RB ; CON represents a conference event. Bold type indicates that an interview was completed, while X indicates that an interview was not completed.
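This coding scheme can be read mechanically. The sketch below is purely illustrative (the category labels are those listed above; the parsing function itself is not part of the study):

```python
import re

MODES = {"T": "telephone", "F": "face-to-face", "O": "online"}
CATEGORIES = {
    "A": "academia", "G": "government", "I": "international donor",
    "NI": "international NGO", "NL": "national NGO", "P": "private sector",
}

def parse_interview_code(code):
    """Split a code such as 'TNL17' into mode, category and sequence number."""
    match = re.fullmatch(r"([TFO])(NI|NL|[AGIP])(\d+)", code)
    if not match:
        raise ValueError(f"not a valid interview code: {code}")
    mode, category, number = match.groups()
    return MODES[mode], CATEGORIES[category], int(number)

print(parse_interview_code("TNL17"))  # ('telephone', 'national NGO', 17)
print(parse_interview_code("FP3"))    # ('face-to-face', 'private sector', 3)
```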

As outlined in the previous section, snowball sampling is sometimes criticised for producing samples that lack sample diversity. To address this criticism, we reviewed the (scarce) literature on enhancing sample diversity via snowball sampling procedures prior to commencing our study. Upon reflection during our research, we chose to pursue our analysis retrospectively in order to challenge some of the recommendations provided in the literature. Our analysis is structured around the five core pieces of advice found in this literature ( Table 1 ). Our results are based on a quantitative analysis of the 81 interviews we conducted. Although we endeavoured to include all interview attempts, some initial cold calls may have been overlooked in this retrospective approach. Therefore, some of our analysis, particularly in Section 4.4, may be too optimistic. Overall, we were able to reconstruct 211 reach-out attempts.

Table 1. Sample diversity is measured by representation from the six identified sub-groups.

Recommendation | Measure
Prior personal contacts of the researcher are required | Sample diversity within total interviews (and success of reach-outs) generated via cold calls compared with personal or professional contacts
Sample seed diversity is important | Sample diversity compared to initial seed
Technology means face-to-face interviews are no longer required | Comparison of referrals from telephone interviews with face-to-face interviews, overall and by sample diversity
Persistence is necessary to secure interviews | Reach-outs to contacts per completed interview
More waves of sampling are required to access more reluctant interviewees | Sample diversity by wave

Results and discussion

On prior personal and professional contacts

Our analysis provides evidence that sample diversity can be reached even if no prior personal or professional contacts to the population of interest have been retained. The seeds of the interviews are depicted in Fig 2 , with the left side of the figure depicting the 53 interviews based on a personal or professional contact and the right side depicting the 28 interviews that were based on cold calls. This figure shows two main points of interest: first, both types of seeds include interviews in each interview category; second, the interview sub-category ‘dam developer’, which we hypothesised would be the most difficult to include in the sample, is also covered by both types of seeds. We can therefore conclude that a diverse sample could have been built even if we had relied solely on cold calls.

[Fig 2: pone.0201710.g002.jpg]

It is acknowledged, however, that building a snowball sample from cold calls is particularly labour-intensive [ 39 ]: in our research, only 25% of our cold calls led to an interview, compared to 62% of the referrals. Significant differences in the value of referrals persist from one interviewee group to another ( Fig 3 ). We measure the value of referrals via a concept we call ‘network premium’. To gauge the network premium, we subtracted the cold call response rate (i.e., the number of interviews initiated via cold calls divided by the total number of cold calls) from the referral response rate (i.e. the number of interviews initiated via referrals divided by the total number of referrals). Referrals were the most valuable when contacting international donors and private sector players, with network premiums of 74% and 52%, respectively, indicating that these groups are particularly difficult-to-reach populations.
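Expressed as a formula, the network premium is simply the referral response rate minus the cold-call response rate. The snippet below illustrates the calculation; the counts passed in are hypothetical placeholders chosen to match the aggregate rates quoted above (62% and 25%), not the per-group data behind Fig 3:

```python
def network_premium(referral_interviews, referrals, cold_interviews, cold_calls):
    """Referral response rate minus cold-call response rate, as defined above."""
    referral_rate = referral_interviews / referrals
    cold_rate = cold_interviews / cold_calls
    return referral_rate - cold_rate

# Aggregate figures from the text: 62% of referrals vs 25% of cold calls led
# to an interview (the counts of 100 are illustrative, chosen to match rates).
print(f"{network_premium(62, 100, 25, 100):.0%}")  # 37% overall premium
```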

[Fig 3: pone.0201710.g003.jpg]

(1) Unable to retrace for 13 identified reach-outs if initiated via referral or cold call; four reach-outs coded as ‘Other’. (2) Unable to retrace for one interview carried out via referral coded as ‘Other’. (3) Including personal contacts and contacts via conferences. (4) Referral response rate–Cold call response rate.

The overall results from these analyses are encouraging for scholars interested in researching a population to which no personal or professional contacts are retained prior to the research project. While personal or professional contacts maintained to the research population of interest can accelerate the research endeavour, our results also showcase that (at least for our topic of interest) a diverse sample can be built from cold calls if a researcher is willing to invest some time in reach-outs.

On seed variation

Our research confirms the scholars’ advice that seed diversity is important. Fig 4 (a variation of Fig 2 ) depicts the completed interviews from a seed perspective, with RA, RB and cold calls as the three main seeds of the sample. The sample built via RA, who has a background in the private sector, is largely biased towards this sector, with 47% of all interviews seeded via RA being private sector interviews. Of the interviews seeded via RB, whose background is closest to local NGOs, 57% were with local NGOs. Meanwhile, the sample built via cold calls indicates no significant biases towards any interviewee category. Interviews based on the network of RB included one (TNL17) with a leading activist from a remote area of Myanmar who provided unique insights into the early days of an anti-dam campaign. This insight helped us to develop a narrative of the campaign that was not skewed to the later days of the campaign and the activists prominent in these later days. The sample diversity ensured via RB was thus central to the quality of our research.
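Seed-related bias of this kind can be checked with a simple cross-tabulation of completed interviews by seed and interviewee category. The records in the sketch below are invented for illustration and do not reproduce the study’s data:

```python
from collections import Counter

# Hypothetical records: (seed, interviewee category) for each completed interview.
interviews = [
    ("RA", "private sector"), ("RA", "private sector"), ("RA", "government"),
    ("RB", "local NGO"), ("RB", "local NGO"), ("RB", "academia"),
    ("cold call", "international donor"), ("cold call", "private sector"),
]

by_seed = Counter(seed for seed, _ in interviews)
by_cell = Counter(interviews)
for (seed, category), n in sorted(by_cell.items()):
    share = n / by_seed[seed]
    print(f"{seed:>9} -> {category:<20} {n} interview(s), {share:.0%} of that seed")
```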

[Fig 4: pone.0201710.g004.jpg]

It is noteworthy that the three different seeds in Fig 4 include interviews in all interviewee categories, including the sub-category ‘dam developer’ (the sole exception is the interviewee category ‘international NGO’, which contains zero interviews for RB). This indicates that, at least for our topic of interest, a fairly diverse sample can be generated even if the researcher is unable to vary her or his seed, although the overall data suggest that seed variation can significantly enhance sample diversity. Fig 4 may therefore be viewed as a case for collaboration among researchers; if researchers with different backgrounds and different personal and professional contacts to the population of interest begin to collaborate, such collaborations are bound to contribute to sample diversity.

On face-to-face interviews

Our descriptive analysis provides evidence that challenges the argument that face-to-face interviews are redundant, with our data indicating that face-to-face interviews can lead to more of the referrals sought than telephone interviews (perhaps since trust may be more readily established via face-to-face conversations than over the telephone). Fig 5 aims to quantify the value of face-to-face interviews. Overall, 30 (37%) of our interviews were initiated via prior face-to-face conversations, while prior telephone conversations and online contact each led to only eight interviews (10%). An examination shows that of the nine interviews conducted with dam developers, the interviewee sub-category deemed most difficult to access, seven (78%) were initiated via prior face-to-face interviews, while not a single telephone interview led to a referral to a dam developer. These interviews proved to be essential for our research. For instance, one Chinese dam developer challenged a claim from numerous NGOs that his company would not engage with NGOs, which, in turn, allowed us to present a more balanced portrayal of the interplay between Chinese dam developers and NGOs.
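The same tabulation logic applies to the mode of the prior conversation that produced each interview. The counts below are the ones quoted in this paragraph; grouping the remainder into a single residual bucket is an assumption made for the illustration, not a category from Fig 5:

```python
# Completed interviews by the mode of the prior conversation that produced the
# referral (counts quoted in the text; the residual bucket is our own grouping).
total = 81
referred_by = {"face-to-face": 30, "telephone": 8, "online": 8}
referred_by["no prior interview (direct contact or cold call)"] = total - sum(referred_by.values())

for source, n in referred_by.items():
    print(f"{source}: {n}/{total} = {n / total:.0%}")
```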

[Fig 5: pone.0201710.g005.jpg]

(1) Comprises interviews with those already retaining a personal or professional contact prior to the research project.

While our research did not investigate whether telephone interviews lead to lower-quality data than face-to-face interviews, our data provide tentative evidence that face-to-face interviews are not obsolete; they can still be helpful for those employing or intending to employ snowball sampling, since these interviews can lead to more of the referrals sought and thus enhanced sample diversity. We acknowledge, however, that this finding may not be true for all populations. For instance, studies on individuals with sexually transmitted diseases have found that these interviewees (particularly men) tend to report more truthfully in an audio-computer-assisted self-interview (ACASI) than in a face-to-face interview, since interviewees tend to be more comfortable reporting on sexually transmitted diseases to a computer than to a live person [ 81 , 82 ].

On persistence

Our data suggest that persistence can indeed enhance sample diversity, but we can also conclude that excessive persistence does not necessarily yield dividends. Instead of distributing a great many interview reminders during our study, we reached out to the majority of our proposed interview subjects only once. Nevertheless, the scarce data we collected regarding persistence indicates its value. We map this data in Fig 6 , with the left side depicting our success rate in relation to the number of reach-outs (either one, two or three) and the right side depicting a deep dive on success rates achieved with two reach-outs (distinguishing between reach-out attempts to unknown potential interviewees and those to whom we were referred by other interviewees). We sent one interview reminder to 28 of our proposed interviewees. This led to 10 additional interviews, a success rate of 36%, equalling 12% of the total interviews analysed for this paper. Reminders appear to be only somewhat more helpful when contacting referrals in comparison to their usefulness with cold calls–a single reminder led to an interview in 39% of our cases for the former group and 38% for the latter. One of the most valuable interviews for our research gained via a reminder was with the CEO of a Burmese dam developer. This interviewee compared Chinese and European dam developers in Myanmar, which helped us to further refine our narrative on social-safeguard policy adherence by Chinese dam developers in Myanmar.
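The reminder arithmetic can be reproduced from the counts quoted above (28 single reminders, 10 additional interviews, 81 interviews in total):

```python
reminders_sent = 28          # proposed interviewees who received one reminder
interviews_gained = 10       # additional interviews secured by that reminder
total_interviews = 81

success_rate = interviews_gained / reminders_sent
share_of_sample = interviews_gained / total_interviews
print(f"success rate after one reminder: {success_rate:.0%}")      # ~36%
print(f"share of all interviews analysed: {share_of_sample:.0%}")  # ~12%
```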

[Fig 6: pone.0201710.g006.jpg]

(1) Number of reach-outs unknown for 32 reach-outs. Eight potential interviewees responded, but refused interview.

Excessive persistence, however, does not appear to be worthwhile. We sent three reminders to seven of our envisaged interviewees, but as Fig 6 shows, this did not lead to a single additional interview. Beyond our data not suggesting that excessive persistence is helpful to researchers, it may also be inadvisable for ethical reasons. A potential interviewee who does not respond to an interview request after two reach-outs may be indicating via this non-response that she or he is not interested in participating in the research. If a single request remains unanswered, the researcher may hypothesise that, for instance, the e-mail was overlooked, a hypothesis that is particularly plausible when conducting interviews with time-pressed leaders of organisations. Indeed, all 10 interviews secured only upon the second reach-out were with interviewees in management positions.

Our data on persistence provide some evidence that those employing or intending to employ snowball sampling can enhance sample diversity if every reach-out is carefully tracked and followed by a reminder. We typically sent a reminder after one week if no response was obtained upon the first reach-out. This persistence may help to include those least keen to be interviewed for a research endeavour.

On waves

Our data show some evidence that, for our topic of study, pursuing interviews for even a few waves provided the perspectives of particularly difficult-to-reach populations and thus achieved sample diversity. More than 60% of our interviews were conducted in the zeroth or first wave ( Fig 7 ). These include seven of the nine interviews conducted with dam developers, the sub-category we deemed most challenging to interview. The remaining two interviews with dam developers were conducted in the second wave. However, not a single interview with a dam developer was carried out in the third wave and beyond, although a fifth of our total interviews were carried out in the third or later waves. Pursuing interviews for multiple waves nevertheless yielded novel insights. For instance, interview FNL12, which was conducted in the sixth wave, yielded insights on small dam construction in Myanmar–a topic of (some) interest to our research endeavour, but not covered in detail by previous interviews. Furthermore, we note that our finding regarding the limited value of multiple waves may also be specific to our population, with this finding perhaps indicating a low degree of network segmentation in the population in question [ 83 ]. Meanwhile, a high degree of network segmentation may impede the pursuit of multiple waves, since interviewees may lack the suitable contacts for a referral [ 84 ].
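Wave numbers follow mechanically from the referral chain: a seed sits in wave zero, and each referral sits one wave deeper than the interview that produced it. The sketch below assigns waves and tabulates their distribution; the referral data are invented for illustration and are not the study’s records:

```python
from collections import Counter

# Hypothetical referral chain: interview -> the interview that referred it
# (None marks a seed, i.e. wave zero).
referred_by = {
    "seed_A": None, "seed_B": None,        # wave 0
    "i1": "seed_A", "i2": "seed_A",        # wave 1
    "i3": "seed_B", "i4": "i2",            # waves 1 and 2
}

def wave(interview):
    """Seeds are wave 0; every referral adds one wave."""
    parent = referred_by[interview]
    return 0 if parent is None else wave(parent) + 1

waves = Counter(wave(i) for i in referred_by)
for w, n in sorted(waves.items()):
    print(f"wave {w}: {n} interview(s), {n / len(referred_by):.0%} of sample")
```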

[Fig 7: pone.0201710.g007.jpg]

While additional waves can lead to novel insights, our overall data on waves provide some evidence that the number of waves pursued is not a definitive indicator for sample diversity. Even very few waves can yield access to particularly difficult-to-access populations.

Our quantitative analysis of pathways to delivering sample diversity in snowball samples yielded the following revisions to the literature’s recommendations:

  • Prior personal contacts are not essential for achieving sample diversity but tend to be helpful, as generating new contacts during research can be labour-intensive.
  • Sample seed diversity is important to achieving sample diversity.
  • Face-to-face interviews build trust and can help to generate further referrals.
  • Persistence (within reason) is helpful in securing interviews.
  • Sample diversity is not necessarily enhanced if a seed is advanced over numerous waves.

We do not claim that these insights are comprehensive, but we believe that these interpretations of our data may serve as a starting point for future scholars using snowball sampling procedures. All of the analyses presented in this section are based only on descriptive statistics. This means, for instance, that we cannot control for confounds such as effort [ 85 ]. An experimental research design would yield the most robust insights on sampling procedures to enhance the sampling diversity of a snowball sample (with, for instance, one research project staffed with scholars with relevant personal or professional contacts and another staffed with scholars without relevant contacts).

Overall, this work aims to advance the literature on snowball sampling as a qualitative sampling approach. While snowball sampling procedures may qualify ‘as the least “sexy” facet of qualitative research’ ([ 1 ], p. 328), these procedures are ‘not self-evident or obvious’ ([ 20 ], p. 141), since the snowball sample does not ‘somehow magically’ ([ 20 ], p. 143) start, proceed and terminate when a scholar attempts to develop a diverse sample. Rather, continuous, deliberate effort by the researcher(s) is required. Our paper has attempted to provide some insights on this effort.

Unfortunately, we developed the idea to write this paper only during the course of our research project, and thus some of our data may be skewed. For instance, we may not have been able to trace all original reach-out attempts and our data on persistence may therefore be biased. Some of those scholars grounded in quantitative thinking may also claim that the insights outlined in Section 4 lack external validity since our sample size is relatively small from a quantitative methodological perspective. In addition, our population was very specific and thus may not be comparable to other difficult-to-reach populations, and we also did not adopt an experimental research design as described above. Hence, we encourage scholars to replicate our findings via their respective research projects that employ snowball sampling. With many scholars claiming to feel more pressed than ever to deliver research results with maximum efficiency, we hope that these initial descriptive analyses of snowball sampling procedures provide some valuable insights to those employing or intending to employ this method and aiming to improve their management of it.

Supporting information

Acknowledgments.

We wish to thank our reviewers at PLOS ONE who provided constructive thoughts on this piece of work. We also thank Ralf van Santen for his outstanding contributions to this work as a research assistant.

Funding Statement

The authors received no specific funding for this work.

Data Availability

All data underlying the study are within the paper and its Supporting Information files.

Snowball Sampling

Snowball sampling is one of the most popular methods of sampling in qualitative research, central to which are the characteristics of networking and referral. The researchers usually start with a small number of initial contacts (seeds), who fit the research criteria and are invited to become participants within the research. The agreeable participants are then asked to recommend other contacts who fit the research criteria and who potentially might also be willing participants, who then in turn recommend other potential participants, and so on. Researchers, therefore, use their social networks to establish initial links, with sampling momentum developing from these, capturing an increasing chain of participants. Sampling usually finishes once either a target sample size or saturation point has been reached. This entry begins with a description of the conveniences of snowball sampling, followed by some criticisms and limitations of the technique. The next section provides examples of how snowball sampling is used in qualitative research projects. Subsequent sections examine instances in which snowball sampling stalls or fails to produce participants, and offers two examples of cases in which researchers successfully overcame those obstacles. The entry concludes with a look at some variants of snowball sampling that have emerged given technological advances.


Snowball Sampling Method: Techniques & Examples

Snowball sampling, also known as chain-referral sampling, is a non-probability sampling method where currently enrolled research participants help recruit future subjects for a study.

Snowball sampling is often used in qualitative research when the population is hard-to-reach or hidden. It’s particularly useful when studying sensitive topics or when the members of a population are difficult to locate.

This sampling technique is called “snowball” because the sample group grows like a rolling snowball.

Non-probability sampling means that researchers, or other participants, choose the sample instead of randomly selecting it, so not all population members have an equal chance of being selected for the study.

Linear Snowball Sampling

  • Linear snowball sampling depends on a straight-line referral sequence, beginning with only one subject. This individual subject will provide one new referral, which is then recruited into the sample group.
  • This referral will provide another new referral, and this pattern continues until the ideal sample size is reached.

Exponential Non-Discriminative Snowball Sampling

  • In exponential non-discriminative snowball sampling, the first subject recruited to the sample provides multiple referrals. Each new referral will then provide the researchers with more potential research subjects.
  • This geometric chain sampling sequence continues until there are enough participants for the study.

Exponential Discriminative Snowball Sampling

  • This type of snowball sampling is very similar to exponential non-discriminative snowball sampling in that each subject provides multiple referrals.
  • However, in this case, only one subject is recruited from each referral. Researchers determine which referral to recruit based on the objectives and goals of the study.
  • First, researchers will form an initial sample by drafting any potential subjects from a population (seeds).
  • Even if only a couple of subjects are found at first, researchers will ask those subjects to recruit other individuals for the study. They recruit subjects by encouraging them to come forward on their own. Study participants will only provide specific names of recruited individuals if there is no risk of embarrassment or a violation of privacy. Otherwise, study participants do not identify any names of other potential participants.
  • Current participants will continue to recruit others until the necessary sample size has been reached.

Snowball sampling requires special approval by an Institutional Review Board (IRB), whereby the researchers must provide a valid justification for using this method.

Researchers must also take precautions to protect the privacy of potential subjects, especially if the topic is sensitive or personal, such as studies of networks of drug users or prostitutes.

In addition, each respondent has the opportunity to participate or decline. Current participants in studies using this method do not receive any compensation for providing referrals, and study participants are not required to identify any names of other potential participants.

Example Situations

Snowball sampling is used when researchers have difficulty finding participants for their studies. This typically occurs in studies on hidden populations, such as criminals, drug dealers, or sex workers, as these individuals are difficult for researchers to access.

For example, a researcher studying the experiences of undocumented immigrants in a particular city. This population might be difficult to reach through traditional sampling methods due to fear of legal repercussions, lack of formal records, and other barriers.

The researcher might start by contacting a local organization that provides services to immigrants. Through this organization, the researcher could connect with a few willing individuals to participate in the study.

These initial participants (the “seeds”) would then be asked to refer the researcher to other undocumented immigrants they know who might also be willing to participate.

The new participants would then refer the researcher to others, and so on, creating a “snowball” effect where the number of participants grows as each person refers the researcher to others in their network.

The snowball sampling method is beneficial because current participants are likely to know others who share similar characteristics relevant to the study.

Members of these hidden populations tend to be closely connected as they share interests or are involved in the same groups, and they can inform others about the benefits of the study and reassure them of confidentiality.

Research Examples

  • Researching non‐heterosexual women using social networks (Browne, 2002).
  • Investigating lifestyles of heroin users (Kaplan, Korf, & Sterk, 1987).
  • Identifying Argentinian immigrant entrepreneurs in Spain (Baltar & Brunet, 2012).
  • Studying illegal drug users over the age of 40 (Waters, 2015).
  • Assessing the prevalence of irritable bowel syndrome in South China and its impact on health-related quality of life (Xiong et al., 2004).
  • Obtaining samples of populations at risk for HIV (Kendall et al., 2008).

Advantages

Enables access to hidden populations

Snowball sampling enables researchers to conduct studies when finding participants might otherwise be challenging. Concealed individuals, such as drug users or sex workers, are difficult for researchers to access, but snowball sampling helps researchers to connect to these hidden populations.

Avoids risk

Snowball sampling requires the approval of an Institutional Review Board to ensure the study is conducted ethically. In addition, each respondent has the opportunity to participate or to decline participation.

Saves money and time

Since current subjects are used to locate other participants, researchers will invest less money and time in planning and sampling.

Limitations

Difficult to determine sampling error.

Snowball sampling is a non-probability sampling method, so researchers cannot calculate the sampling error.

Bias is possible

Since current participants select other members for the sample, bias is likely. The initial participants will strongly impact the rest of the sample. In addition, an individual who is well known and sociable is more likely to be referred than one who is more introverted.

Not always representative of the greater population

Because researchers are not selecting the participants themselves, they have little control over the sample. Researchers will thus have minimal knowledge as to whether the sample is representative of the target population.

  • A sample is the participants you select from a target population (the group you are interested in) to make generalizations about. As an entire population tends to be too large to work with, a smaller group of participants must act as a representative sample.
  • Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics (e.g. gender, ethnicity, socioeconomic level). In an attempt to select a representative sample and avoid sampling bias (the over-representation of one category of participant in the sample), psychologists utilize various sampling methods.
  • Generalisability means the extent to which their findings can be applied to the larger population of which their sample was a part.

Felix-Medina, M. H., & Thompson, S. K. (2004). Combining link-tracing sampling and cluster sampling to estimate the size of hidden populations. Journal of Official Statistics, 20(1), 19–38.

Henderson, R. H., & Sundaresan, T. (1982). Cluster sampling to assess immunization coverage: A review of experience with a simplified sampling method. Bulletin of the World Health Organization, 60(2), 253–260.

Malilay, J., Flanders, W. D., & Brogan, D. (1996). A modified cluster-sampling method for post-disaster rapid assessment of needs. Bulletin of the World Health Organization, 74(4), 399–405.

Roesch, F. A. (1993). Adaptive cluster sampling for forest inventories. Forest Science, 39(4), 655–669.

Smith, D. R., Conroy, M. J., & Brakhage, D. H. (1995). Efficiency of adaptive cluster sampling for estimating density of wintering waterfowl. Biometrics, 51(2), 777–788. https://doi.org/10.2307/2532964

Thompson, S. K. (1990). Adaptive cluster sampling. Journal of the American Statistical Association, 85(412), 1050–1059. https://doi.org/10.1080/01621459.1990.10474975

Xiong, L. S., Chen, M. H., Chen, H. X., Xu, A. G., Wang, W. A., & Hu, P. J. (2004). A population-based epidemiologic study of irritable bowel syndrome in South China: Stratified randomized study by cluster sampling. Alimentary Pharmacology & Therapeutics, 19(11), 1217–1224.



What Is Snowball Sampling? | Definition & Examples

Published on August 17, 2022 by Kassiani Nikolopoulou . Revised on June 22, 2023.

Snowball sampling is a non-probability sampling method where new units are recruited by other units to form part of the sample. Snowball sampling can be a useful way to conduct research about people with specific traits who might otherwise be difficult to identify (e.g., people with a rare disease).

Also known as chain sampling or network sampling, snowball sampling begins with one or more study participants. It then continues on the basis of referrals from those participants. This process continues until you reach the desired sample, or a saturation point.

For example, a study of couples might use a number of criteria for selection:

  • The couple must have been together for a period of at least five years.
  • The couple must live together now.
  • The couple must live within a certain geographic area.
  • The couple must have examples of changes or challenges they have experienced together (e.g., long-distance, illness or loss of a loved one).


When to use snowball sampling

Snowball sampling is a widely employed method in qualitative research, specifically when studying hard-to-reach populations.

These may include:

  • Populations that are small relative to the general population
  • Geographically dispersed populations
  • Populations possessing a social stigma or particular shared characteristic of interest

In all these cases, accessing members of the population can be difficult for non-members, as there is no sampling frame available.

Research in the fields of public health (e.g., drug users), public policy (e.g., undocumented immigrants), or niche genres (e.g., buskers) often uses snowball sampling.

This sampling method is also used to study sensitive topics, or topics that people may prefer not to discuss publicly. This is usually due to a perceived risk associated with self-disclosure. Snowball sampling allows you to access these populations while considering ethical issues, such as protecting their privacy and ensuring confidentiality.


Types of snowball sampling

Snowball sampling begins with a convenience sample of one or more initial participants. Multiple data collection points (or waves) follow. These initial participants, called “seeds,” are used to recruit the first wave’s participants.

Wave 1 participants recruit wave 2 participants, and the sample expands, wave by wave, like a snowball growing in size as it rolls down a hill.

Depending on your research objectives, there are three different types of snowball sampling methods to choose from:

Linear snowball sampling


Linear snowball sampling relies on one referral per participant. In other words, the researcher recruits only one participant, and this participant, in turn, recruits only one participant. This process goes on until you have included enough participants in the sample.

Linear snowball sampling works best when there are few restrictions (called inclusion and exclusion criteria) as to who is included in the sample.

As you finish up the interview, you ask them if they can refer someone else who also owns a tiny house. They happen to know someone, and pass the contact details to you. You interview them as well. Towards the end of the interview, you ask them to introduce you to one more person.

If more than two names are mentioned, it is a good idea to ask the interviewee how well they know those people, and then interview the person who is least known to them.

Exponential non-discriminative snowball sampling

In exponential non-discriminative snowball sampling, the first participant provides multiple referrals. In other words, the researcher recruits the first participant, and this participant in turn recruits several others. The researcher includes all referrals in the sample. This type of snowball sampling is best used when you want to reach a larger sample.

Exponential discriminative snowball sampling

In this method, participants give multiple referrals. However, the researcher screens those referrals, choosing only those who meet specific criteria to participate in the sample. The key difference between this and exponential non-discriminative snowball sampling is that not all referrals are included in the sample.

Exponential discriminative snowball sampling is most used when screening participants according to specific criteria is vital to your research goals.

As you inquire with your acquaintances, you find someone who bought a tiny house a year ago. At the end of the interview, you ask them if they know of other owners. You do not specify that the purchase has to be in the past three years.

As it happens, they do know of two more people who bought tiny houses in the same area as they did. You contact both, and find out that one bought the house four years ago and the other eight months ago. Since the one who bought the house four years ago does not meet your criteria, you only interview the other.

Advantages and disadvantages of snowball sampling

Like all research methods, snowball sampling has distinct advantages and disadvantages. It is important to be aware of these in order to determine whether it’s the best approach for your research design.

Advantages of snowball sampling

Depending on your research goals, there are advantages to using snowball sampling.

  • Snowball sampling helps you research populations that you would not be able to access otherwise. Members of stigmatized groups (e.g., people experiencing homelessness) may hesitate to participate in a research study due to fear of exposure. Snowball sampling helps in this situation, as participants refer others whom they know and trust to the researcher.
  • Since snowball sampling involves individuals recruiting other individuals, it is low-cost and easy to recruit a sample in this way.
  • Unlike probability sampling, where you draw your sample following specific rules and some form of random selection, snowball sampling is flexible. All you need is to identify someone who is willing to participate and introduce you to others.

Disadvantages of snowball sampling

Snowball sampling has disadvantages, too, and is not a good fit for every research design.

  • As the sample is not chosen through random selection, it is not representative of the population being studied. This means that you cannot make statistical inferences about the entire population and there is a high chance of research bias.
  • The researcher has little or no control over the sampling process and relies mainly on referrals from already-identified participants. Since people refer others whom they know (and share traits with), this sampling method has a high potential for sampling bias.
  • Relying on referrals may lead to difficulty reaching your sample. People may not want to cooperate with you, hesitate to reveal their identities, or mistrust researchers in general.


Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias.

Snowball sampling is a non-probability sampling method, where there is not an equal chance for every member of the population to be included in the sample.

This means that you cannot use inferential statistics and make generalizations, which is often the goal of quantitative research. As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research.

Snowball sampling is a non-probability sampling method. Unlike probability sampling (which involves some form of random selection), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.



Enhancing the sample diversity of snowball samples: Recommendations from a research project on anti-dam movements in Southeast Asia

Citation: Kirchherr J, Charles K (2018) Enhancing the sample diversity of snowball samples: Recommendations from a research project on anti-dam movements in Southeast Asia. PLoS ONE 13(8): e0201710. https://doi.org/10.1371/journal.pone.0201710



The remainder of this paper is organised as follows: in Section 2, we discuss snowball sampling as a method; in Section 3, we present the research project on anti-dam movements in Southeast Asia that served as the basis for our medium- N analysis on snowball sampling procedures; in Section 4, we present and discuss insights on snowball sampling procedures based upon this analysis as well as our resulting recommendations; finally, in Section 5, we summarise our argument.

Throughout this paper, we employ social science methodology terminology. We define key terms for this paper such as ‘snowball sampling’ or ‘sampling’, since these terms are not consistently codified in the scholarly literature. Due to limited space, however, we refrain from defining terms we have deemed common in this field of study, referring only to the relevant literature.

On snowball sampling

Traditional sampling methods are comprised of two elements [ 25 , 26 ]. First, a full set of data sources is defined, creating a list of the members of the population to be studied, known as a sampling frame. Second, a specific sample of data is collected from this sampling frame. Snowball sampling defies both elements, since it does not rely upon a sampling frame [ 27 ] (which may indicate that a different term for snowball sampling would be more accurate). Snowball sampling is often employed when no sampling frame can be constructed.

Researchers frequently cannot construct a sampling frame if a difficult-to-reach population is to be studied. Difficult-to-reach-populations are also referred to as ‘hard-to-reach-populations’ [ 28 ], ‘hidden populations’ [ 29 ] or ‘concealed populations’ [ 21 ] in the scholarly literature. Although not all scholars may agree that these terms are interchangeable, we deem them interchangeable for the purposes of this paper. For further discussion of this terminology, see [ 30 , 31 ].

A difficult-to-reach population does not wish to be found or contacted (e.g., illegal drug users, illegal migrants, prostitutes or homeless people [ 6 , 31 ]). Snowball sampling was originally used by researchers to study the structure of social networks [ 32 ]. The earliest empirical account of snowball sampling is from 1955 [ 33 ], with snowball sampling first described as a method in 1958 [ 34 ]. While it is still used to study the structure of social networks [ 35 ], over the last few decades, the method’s key purpose has largely transformed ‘into […] an expedient for locating members of a [difficult-to-reach] population’ ([ 36 ], p. 141).

Researchers grounded in quantitative thinking, such as Lijphart [ 37 ] and King et al. [ 38 ], tend to view the drawing of a random sample from a sampling frame as the gold standard of data collection. Even these researchers may nevertheless consider non-probability sampling methods, such as snowball sampling, a ‘necessary and irreplaceable sampling [method]’ ([ 39 ], p. 367) when confronted with difficult-to-reach populations, particularly if the dismissal of snowball sampling would mean that no research could be conducted at all. Ultimately, ‘an important topic is worth studying even if very little [access to] information is available’ ([ 38 ], p. 6). Still, some of those grounded in quantitative thinking call snowball sampling a method ‘at the margin of research practice’ ([ 6 ], p. 1), since the lack of a sampling frame means that, unlike individuals in a random sample, individuals in a population of interest do not have the same probability of being included in the final sample. Findings from a snowball sample would therefore not be generalisable [ 40 ] (on generalisability, see [ 41 ]).

Several qualitative scholars rebut such criticism. Creswell, for instance, notes that ‘the intent [of qualitative research] is not to generalise to a population, but to develop an in-depth [and contextualised] exploration of a central phenomenon’ ([ 42 ], p. 203). Others [ 1 , 39 ] specifically oppose quantitative scholars’ negative framing of snowball sampling, arguing that this method would ‘generate a unique type of social knowledge’ ([ 1 ], p. 327). Due to the diversity of perspectives gathered, this knowledge would be particularly valuable for an in-depth and contextualised exploration of a central phenomenon. We therefore define the diversity of a sample as a measure of the range of viewpoints that have been gathered on a central phenomenon.

Researchers critical of snowball sampling respond to this defence by arguing that the method is unable to ensure sample diversity, which is a necessary condition for valid research findings. Indeed, some scholars have stated that snowball samples underrepresent, and may even exclude, those least keen to cooperate, since referrals may not materialise in an interview if a potential interviewee is only somewhat keen or not at all keen to be interviewed [ 3 , 43 ]. Similarly, potential interviewees with smaller networks may be underrepresented, as they are less likely to be referred for an interview [ 31 , 44 ]. Those with smaller networks may also belong to a specific network whose distinct perspectives would be of interest but end up excluded from the final sample. Meanwhile, snowball sampling is said to overrepresent those interviewees (and their respective networks) whom the interviewer spoke with first; the relevant literature refers to this as ‘anchoring’ [ 20 , 39 ].

We do not aim to argue the ‘validity’ of the method, but rather to inform snowball sampling methodologies in order to promote sample diversity. From a qualitative perspective, ‘validity’ can be defined as ‘the correctness or credibility of a description, conclusion, explanation, interpretation or other sort of account’ ([ 45 ], p. 87), while quantitative researchers frequently use the terms ‘generalisability’ and ‘(external) validity’ interchangeably [ 46 , 47 ]. The term ‘validity’ is contested among qualitative researchers, and some qualitative researchers entirely reject the concept for qualitative work [ 48 , 49 ]. We do not aim to resolve this debate via this paper; instead, we focus on the (seemingly less-contested) term ‘sample diversity’. While we acknowledge that this term is not codified in qualitative textbooks such as the SAGE Encyclopedia of Qualitative Research Methods , sample diversity is considered desirable by the various qualitative scholars we reviewed. Boulton and Fitzpatrick demand, for instance, that qualitative researchers ‘ensure that the full diversity of individuals […] is included [in their sample]’ ([ 50 ], p. 84), a mandate echoed by other scholars [ 16 , 51 – 53 ].

In order to operationalise the concept of sample diversity, we used five key methodological recommendations to inform our research. In this paper, we use quantitative analyses from our experiences with snowball sampling to further reflect on these recommendations, which are briefly described below.

Prior personal contacts of the researcher are required

Patton ([ 5 ], p. 176) notes that snowball sampling ‘begins by asking well-situated people: “Who knows a lot about ____? Who should I talk to?”‘. In the absence of a sampling frame for the population of interest, however, the researcher must retain at least some prior personal or professional contacts in the population of interest which can serve as the seeds of the snowball sample [ 2 , 54 ]. Waters contends that building a diverse snowball sample ‘depend[s] almost exclusively on the researcher’s [prior personal or professional] contacts’ ([ 39 ], p. 372).

Sample seed diversity is important

Morgan [ 21 ] has claimed that the ‘best defence’ against a lack of sample diversity is to begin the sample with seeds that are as diverse as possible. Others echo this advice [ 3 , 39 , 55 ], arguing that it is ‘compulsory for the researcher to ensure that the initial set of respondents is sufficiently varied’ ([ 55 ], p. 55). The term ‘chain referral sampling’ has been used for snowball samples that are strategically built via multiple varying seeds [ 3 ].

Technology means face-to-face interviews are no longer required

Some researchers have argued that face-to-face interviews are obsolete. For instance, over 25 years ago, it was claimed there were ‘no remarkable differences’ ([ 56 ], p. 211) between information collected via telephone and information collected via face-to-face interviews. The increasing use of telecommunications in recent years is likely to have further reduced barriers to remote interviewing, and various scholars [ 57 , 58 ] continue to claim that ‘evidence is lacking that [telephone interviews] produce lower quality data’ ([ 59 ], p. 391). In particular, they have highlighted the benefits of using Skype for semi-structured interviews [ 57 ].

However, for snowball sampling, face-to-face interviews help to generate the trust that scholars claim is required in order to gain referrals [ 1 , 31 , 39 , 60 ]. Noy argues that ‘the quality of the referring process is naturally related to the quality of the interaction: […] if the researcher did not win the informant’s trust […], the chances the latter will supply the former referrals decrease’ ([ 1 ], p. 334).

Persistence is necessary to secure interviews

Although the value of persistence may be considered self-evident by some scholars, it is seen by multiple academics [ 61 – 63 ] as a central virtue of qualitative researchers. Early-career scholars who embrace snowball sampling are likely to hear such advice as, ‘If you cannot interview your envisaged interviewees initially, don’t give up!’. A ‘helpful hint’ for qualitative researchers seeking informants is, ‘Persevere–repeat contact’ [ 64 ].

More waves of sampling are required to access more reluctant interviewees

As a remedy for snowball sampling’s previously discussed bias towards excluding those least keen to be interviewed, multiple scholars suggest pursuing a snowball sample for multiple waves (with a new sampling wave reached once an interviewee introduces the interviewer to one or more potential interviewees) [ 65 – 68 ]. Those suggesting this remedy assume that pursuing more waves increases the likelihood of being referred to an interviewee from a particularly difficult-to-reach population who is at least somewhat keen to be interviewed.

Materials and methods

Approval for this study was granted by the Central University Research Ethics Committee (CUREC) of the University of Oxford. The population of interest for our research project comprised stakeholders in Southeast Asia’s dam industry. Since ‘the most dramatic conflicts over how to pursue sustainable development’ ([ 69 ], p. 83) have occurred over the construction of large dams, we see this industry as a conflict environment with widely varying viewpoints. A conflict environment is one in which people perceive their goals and interests to be contradicted by the goals or interests of the opposing side [ 70 ]. The major conflicting parties in the dam industry tend to be local and international non-governmental organisations (NGOs) and academics (usually keen not to construct a particular dam) versus international donors, the private sector and governments (usually keen to construct a particular dam) [ 71 , 72 ]. Each sub-population operating in a conflict environment can be considered difficult to reach since fear and mistrust are often pervasive [ 7 ]. Snowball sampling is a suitable research method in conflict environments because the introductions through trusted social networks that are at the core of this method can help interviewees to overcome fear and mistrust, which, in turn, ensures access [ 7 ]. This access is needed to gather the widely varying viewpoints in the hydropower industry, in particular viewpoints with regard to what constitutes just resettlement [ 73 , 74 ]. Based on this rationale, we chose snowball sampling as the main method for our research.

In order to ensure sample diversity for our research project on anti-dam movements in Southeast Asia, we aimed to gather perspectives mostly from six main sub-populations: (1) local NGOs, (2) international NGOs, (3) international donors, (4) academia, (5) the private sector and (6) the government. We hypothesized that ‘dam developers’, a main sub-category of the interviewee category ‘private sector’, would be the most significant challenge to ensuring the diversity of our sample. Early in our process, many of the scholars with whom we discussed our research project argued that it would be impossible to interview a dam developer from a Chinese institution; meanwhile, researchers from a comparable research project that ended approximately when our project started reported being unable to interview any dam developers from European institutions. We also initially failed to collect data from dam developers: for instance, a survey we initiated that was distributed by Aqua~Media (host of a major global dam developer conference) to more than 1,500 dam developers yielded just five responses, only one of which was complete. We considered this weak response rate to be due, at least in part, to the dam industry’s negative view of academicians since the publication of Ansar et al. [ 75 ], which Nombre ([ 76 ], p. 1), the president of the International Commission on Large Dams (ICOLD), called ‘[highly] misleading’.

None of our researchers had significant direct links to the dam industry upon the start of the project; however, we did retain a variety of indirect links. Our researchers had past links to a management consultancy that serves various dam industry players, (more limited) links to an international donor working in the hydropower sector and links to activists in Myanmar advocating against dam projects.

After a favourable ethics review of our study by the CUREC of the University of Oxford, we commenced semi-structured interviews in April 2015, mostly via cold calls (we include cold e-mails in the term ‘cold calls’ throughout this paper). Initially, we conducted research via telephone only. We then undertook field research in Singapore, Myanmar and Thailand from June to August 2015 and terminated our data collection in late August 2015.

In total, 81 semi-structured interviews were carried out during this period. From a qualitative perspective, this is a relatively large sample size (for instance, the average qualitative PhD dissertation is based on 31 interviews [ 77 ]); from a quantitative perspective, however, the sample size is quite small [ 78 ]. Of our 81 interviews, 48 (59%) were conducted via telephone, 26 (32%) face-to-face and 7 (9%) online, either via e-mail or an online survey. Most of our interviews (57%) were carried out in July in Myanmar. Of our 81 interviewees, only 24 (30%) were women. Researchers who employ snowball sampling frequently employ personal/professional contact seeds and cold call seeds to build their sample (e.g., [ 2 , 79 , 80 ] with a seed defined as the starting point of a sample [ 65 ]). Of the 81 interviews analysed, 53 (65%) were rooted in a personal or professional contact ( Fig 1 ) (i.e. the seed of the interview pathway was a contact we had already retained prior to the research project). The remaining 28 (35%) interviews were rooted in cold calls.

Fig 1. https://doi.org/10.1371/journal.pone.0201710.g001

Given the sensitive nature of the interview topic, all interviewees were assured anonymity. Thus, all of the interviews are coded, with the first letter indicating the mode of interview (T for telephone, F for face-to-face, O for online survey or e-mail), the following letter(s) indicating the category of interviewee (A for academia, G for government, I for international donor, NI for international NGO, NL for national NGO, P for private sector) and the sequence of numbers indicating the interview number within a particular mode. Researcher A is indicated by RA and Researcher B by RB; CON represents a conference event. Bold type indicates that an interview was completed, while X indicates that an interview was not completed.
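A coding scheme like this is easy to validate programmatically, which helps keep a medium-N interview log consistent. The sketch below is our own illustration rather than part of the original study’s tooling; TNL17 is a code that appears in this paper, while FP3 is invented.

```python
import re

MODES = {"T": "telephone", "F": "face-to-face", "O": "online"}
CATEGORIES = {"A": "academia", "G": "government", "I": "international donor",
              "NI": "international NGO", "NL": "national NGO", "P": "private sector"}

CODE_PATTERN = re.compile(r"^(T|F|O)(NI|NL|A|G|I|P)(\d+)$")

def parse_interview_code(code):
    """Split e.g. 'TNL17' into mode, interviewee category and sequence number."""
    match = CODE_PATTERN.match(code)
    if not match:
        raise ValueError(f"Not a valid interview code: {code}")
    mode, category, number = match.groups()
    return MODES[mode], CATEGORIES[category], int(number)

print(parse_interview_code("TNL17"))  # ('telephone', 'national NGO', 17)
print(parse_interview_code("FP3"))    # ('face-to-face', 'private sector', 3)
```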

As outlined in the previous section, snowball sampling is sometimes criticised for producing samples that lack sample diversity. To address this criticism, we reviewed the (scarce) literature on enhancing sample diversity via snowball sampling procedures prior to commencing our study. Upon reflection during our research, we chose to pursue our analysis retrospectively in order to challenge some of the recommendations provided in the literature. Our analysis is structured around the five core pieces of advice found in this literature ( Table 1 ). Our results are based on a quantitative analysis of the 81 interviews we conducted. Although we endeavoured to include all interview attempts, some initial cold calls may have been overlooked in this retrospective approach. Therefore, some of our analysis, particularly in Section 4.4, may be too optimistic. Overall, we were able to reconstruct 211 reach-out attempts.

Table 1. Sample diversity is measured by representation from five identified sub-groups. https://doi.org/10.1371/journal.pone.0201710.t001
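Operationalising diversity as representation from a fixed set of sub-groups lends itself to a simple check of which groups a growing sample already covers. The sketch below is our own illustration with hypothetical interview records; the target groups are taken from the interviewee categories described in this paper.

```python
# Hypothetical completed interviews, labelled by interviewee category.
SAMPLE = ["local NGO", "academia", "government", "local NGO", "international donor"]

TARGET_GROUPS = {"local NGO", "international NGO", "international donor",
                 "academia", "private sector", "government"}

def coverage(sample, target_groups):
    """Return the sub-groups already represented and those still missing."""
    represented = set(sample) & target_groups
    return represented, target_groups - represented

represented, missing = coverage(SAMPLE, TARGET_GROUPS)
print("Represented:", sorted(represented))
print("Still missing:", sorted(missing))   # e.g. international NGO, private sector
```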

Results and discussion

On prior personal and professional contacts

Our analysis provides evidence that sample diversity can be reached even if no prior personal or professional contacts to the population of interest have been retained. The seeds of the interviews are depicted in Fig 2 , with the left side of the figure depicting the 53 interviews based on a personal or professional contact and the right side depicting the 28 interviews that were based on cold calls. This figure shows two main points of interest: first, both types of seeds include interviews in each interview category; second, the interview sub-category ‘dam developer’, which we hypothesised would be the most difficult to include in the sample, is also covered by both types of seeds. We can therefore conclude that a diverse sample could have been built even if we had relied solely on cold calls.

Fig 2. https://doi.org/10.1371/journal.pone.0201710.g002

It is acknowledged, however, that building a snowball sample from cold calls is particularly labour-intensive [ 39 ]: in our research, only 25% of our cold calls led to an interview, compared to 62% of the referrals. Significant differences in the value of referrals persist from one interviewee group to another ( Fig 3 ). We measure the value of referrals via a concept we call ‘network premium’. To gauge the network premium, we subtracted the cold call response rate (i.e., the number of interviews initiated via cold calls divided by the total number of cold calls) from the referral response rate (i.e. the number of interviews initiated via referrals divided by the total number of referrals). Referrals were the most valuable when contacting international donors and private sector players, with network premiums of 74% and 52%, respectively, indicating that these groups are particularly difficult-to-reach populations.

Fig 3. (1) Unable to retrace for 13 identified reach-outs if initiated via referral or cold call; four reach-outs coded as ‘Other’. (2) Unable to retrace for one interview carried out via referral coded as ‘Other’. (3) Including personal contacts and contacts via conferences. (4) Referral response rate–Cold call response rate. https://doi.org/10.1371/journal.pone.0201710.g003
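Because the network premium is simply the difference between two response rates, the calculation can be reproduced in a few lines. The counts in the sketch below are invented and serve only to show the arithmetic; they are not the study’s underlying data.

```python
def response_rate(interviews, reach_outs):
    """Share of reach-outs that resulted in a completed interview."""
    return interviews / reach_outs

def network_premium(ref_interviews, ref_reach_outs, cold_interviews, cold_reach_outs):
    """Referral response rate minus cold call response rate."""
    return (response_rate(ref_interviews, ref_reach_outs)
            - response_rate(cold_interviews, cold_reach_outs))

# Invented example: 8 of 10 referrals vs. 2 of 20 cold calls lead to an interview.
premium = network_premium(8, 10, 2, 20)
print(f"Network premium: {premium:.0%}")   # 80% - 10% = 70%
```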

The overall results from these analyses are encouraging for scholars interested in researching a population to which no personal or professional contacts are retained prior to the research project. While personal or professional contacts within the population of interest can accelerate the research endeavour, our results also show that (at least for our topic of interest) a diverse sample can be built from cold calls if a researcher is willing to invest some time in reach-outs.

On seed variation

Our research confirms the advice in the literature that seed diversity is important. Fig 4 (a variation of Fig 2) depicts the completed interviews from a seed perspective, with RA, RB and cold calls as the three main seeds of the sample. The sample built via RA, who has a background in the private sector, is heavily skewed towards this sector: 47% of all interviews seeded via RA were private sector interviews. Meanwhile, 57% of the interviews seeded via RB, whose background is closest to local NGOs, were with local NGOs. The sample built via cold calls, by contrast, indicates no significant bias towards any interviewee category. Interviews based on the network of RB included one (TNL17) with a leading activist from a remote area of Myanmar who provided unique insights into the early days of an anti-dam campaign. This insight helped us to develop a narrative of the campaign that was not skewed towards the later days of the campaign and the activists prominent in those later days. The sample diversity ensured via RB was thus central to the quality of our research.

Fig 4. https://doi.org/10.1371/journal.pone.0201710.g004

It is noteworthy that the three different seeds in Fig 4 include interviews in all interviewee categories, including the sub-category ‘dam developer’ (the sole exception is the interviewee category ‘international NGO’, which contains zero interviews for RB). This indicates that, at least for our topic of interest, a fairly diverse sample can be generated even if the researcher is unable to vary her or his seed, although the overall data suggest that seed variation can significantly enhance sample diversity. Fig 4 may therefore be viewed as a case for collaboration among researchers: if researchers with different backgrounds and different personal and professional contacts to the population of interest begin to collaborate, such collaborations are bound to contribute to sample diversity.
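Seed-driven skews of this kind can be made visible by tabulating, for each seed, the share of interviews per interviewee category. The sketch below does so for a small invented set of records; the seed and category labels echo those used in this paper, but the numbers are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical records: (seed, interviewee category) for each completed interview.
INTERVIEWS = [
    ("RA", "private sector"), ("RA", "private sector"), ("RA", "government"),
    ("RB", "national NGO"), ("RB", "national NGO"), ("RB", "academia"),
    ("cold call", "international donor"), ("cold call", "private sector"),
]

def category_shares_by_seed(records):
    """Return, per seed, the share of interviews in each interviewee category."""
    by_seed = defaultdict(Counter)
    for seed, category in records:
        by_seed[seed][category] += 1
    return {seed: {cat: n / sum(counts.values()) for cat, n in counts.items()}
            for seed, counts in by_seed.items()}

for seed, shares in category_shares_by_seed(INTERVIEWS).items():
    print(seed, {cat: f"{share:.0%}" for cat, share in shares.items()})
```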

On face-to-face interviews

Our descriptive analysis provides evidence that counters the argument that face-to-face interviews are redundant: our data indicate that face-to-face interviews can yield more of the sought referrals than telephone interviews, perhaps because trust is more readily established in person than over the telephone. Fig 5 aims to quantify the value of face-to-face interviews. Overall, 30 (37%) of our interviews were initiated via prior face-to-face conversations, while prior telephone conversations and online contact each led to only eight interviews (10%). Of the nine interviews conducted with dam developers, the interviewee sub-category deemed most difficult to access, seven (78%) were initiated via prior face-to-face interviews, while not a single telephone interview led to a referral to a dam developer. These interviews proved to be essential for our research. For instance, one Chinese dam developer challenged a claim from numerous NGOs that his company would not engage with NGOs, which, in turn, allowed us to present a more balanced portrayal of the interplay between Chinese dam developers and NGOs.

Fig 5. (1) Comprises interviews with those already retaining a personal or professional contact prior to the research project. https://doi.org/10.1371/journal.pone.0201710.g005

While our research did not investigate whether the mode of interview affects data quality, our data provide tentative evidence that face-to-face interviews are not obsolete; they can still be helpful for those employing or intending to employ snowball sampling, since these interviews can yield more referrals and thus enhanced sample diversity. We acknowledge, however, that this finding may not hold for all populations. For instance, studies of individuals with sexually transmitted diseases have found that these interviewees (particularly men) tend to report more truthfully in an audio computer-assisted self-interview (ACASI) than in a face-to-face interview, since interviewees tend to be more comfortable reporting on sexually transmitted diseases to a computer than to a live person [ 81 , 82 ].

On persistence

Our data suggest that persistence can indeed enhance sample diversity, but we can also conclude that excessive persistence does not necessarily yield dividends. Instead of distributing a great many interview reminders during our study, we reached out to the majority of our proposed interview subjects only once. Nevertheless, the scarce data we collected regarding persistence indicate its value. We map these data in Fig 6, with the left side depicting our success rate in relation to the number of reach-outs (either one, two or three) and the right side depicting a deep dive on success rates achieved with two reach-outs (distinguishing between reach-out attempts to unknown potential interviewees and those to whom we were referred by other interviewees). We sent one interview reminder to 28 of our proposed interviewees. This led to 10 additional interviews, a success rate of 36%, equalling 12% of the total interviews analysed for this paper. Reminders appear to be only marginally more helpful when contacting referrals than when following up on cold calls: a single reminder led to an interview in 39% of our cases for the former group and 38% for the latter. One of the most valuable interviews gained via a reminder was with the CEO of a Burmese dam developer. This interviewee compared Chinese and European dam developers in Myanmar, which helped us to further refine our narrative on social-safeguard policy adherence by Chinese dam developers in Myanmar.

Fig 6. (1) Number of reach-outs unknown for 32 reach-outs. Eight potential interviewees responded, but refused interview. https://doi.org/10.1371/journal.pone.0201710.g006

Excessive persistence, however, does not appear to be worthwhile. We reached out three times to seven of our envisaged interviewees, but as Fig 6 shows, this did not lead to a single additional interview. Beyond not being helpful to researchers, excessive persistence may also be inadvisable for ethical reasons: a potential interviewee who does not respond to an interview request after two reach-outs may be indicating via this non-response that she or he is not interested in participating in the research. If a single request remains unanswered, the researcher may hypothesise that, for instance, the e-mail was overlooked, a hypothesis particularly plausible when contacting time-pressed leaders of organisations. Indeed, all 10 interviews that materialised only upon the second reach-out were with interviewees in management positions.

Our data on persistence provide some evidence that those employing or intending to employ snowball sampling can enhance sample diversity if every reach-out is carefully tracked and followed by a reminder. We typically sent a reminder after one week if no response was obtained upon the first reach-out. This persistence may help to include those least keen to be interviewed for a research endeavour.
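One lightweight way to operationalise this advice is to log every reach-out with a date and to flag contacts whose single, unanswered request is at least a week old, while capping attempts at two. The sketch below illustrates that bookkeeping with invented contacts and dates; it is not a tool that was used in the study.

```python
from datetime import date, timedelta

REMINDER_AFTER = timedelta(days=7)   # send a reminder after one week of silence
MAX_REACH_OUTS = 2                   # beyond two attempts, persistence rarely pays off

# Hypothetical reach-out log: contact -> (attempts so far, date of last attempt, replied?)
log = {
    "Dam developer A": (1, date(2015, 7, 1), False),
    "Local NGO B":     (1, date(2015, 7, 20), False),
    "Donor C":         (2, date(2015, 7, 5), False),
    "Academic D":      (1, date(2015, 7, 2), True),
}

def reminders_due(log, today):
    """Contacts with one unanswered reach-out that is at least a week old."""
    return [contact for contact, (attempts, last, replied) in log.items()
            if not replied and attempts < MAX_REACH_OUTS and today - last >= REMINDER_AFTER]

print(reminders_due(log, today=date(2015, 7, 22)))   # ['Dam developer A']
```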

On waves

Our data show some evidence that, for our topic of study, pursuing interviews for even a few waves provided the perspectives of particularly difficult-to-reach populations and thus achieved sample diversity. More than 60% of our interviews were conducted in the zeroth or first wave ( Fig 7 ). These include seven of the nine interviews conducted with dam developers, the sub-category we deemed most challenging to interview. The remaining two interviews with dam developers were conducted in the second wave. However, not a single interview with a dam developer was carried out in the third wave or beyond, although a fifth of our total interviews were carried out in the third or later waves. Pursuing interviews for multiple waves nevertheless yielded novel insights. For instance, interview FNL12, which was conducted in the sixth wave, yielded insights on small dam construction in Myanmar, a topic of (some) interest to our research endeavour but not covered in detail by previous interviews. Furthermore, we note that our finding regarding the limited value of multiple waves may be specific to our population, perhaps indicating a low degree of network segmentation in the population in question [ 83 ]. Meanwhile, a high degree of network segmentation may impede the pursuit of multiple waves, since interviewees may lack the suitable contacts for a referral [ 84 ].

Fig 7. https://doi.org/10.1371/journal.pone.0201710.g007

While additional waves can lead to novel insights, our overall data on waves provide some evidence that the number of waves pursued is not a definitive indicator for sample diversity. Even very few waves can yield access to particularly difficult-to-access populations.
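For readers who wish to reproduce this kind of wave bookkeeping, the wave of an interview can be derived from the referral chain: seeds sit in the zeroth wave and every referral adds one. The referral map below is hypothetical (the interview codes are invented) and simply illustrates the computation.

```python
# Hypothetical referral map: interviewee -> the interviewee who referred them
# (None marks a seed, i.e. a zeroth-wave interview).
REFERRED_BY = {
    "TP1": None, "TI2": "TP1", "FNL3": "TI2",
    "FA4": None, "TNL5": "FA4", "FG6": "TNL5",
}

def wave_of(interview, referred_by):
    """Count referral hops back to a seed: seeds are wave 0."""
    wave = 0
    while referred_by[interview] is not None:
        interview = referred_by[interview]
        wave += 1
    return wave

waves = {code: wave_of(code, REFERRED_BY) for code in REFERRED_BY}
print(waves)   # {'TP1': 0, 'TI2': 1, 'FNL3': 2, 'FA4': 0, 'TNL5': 1, 'FG6': 2}
```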

Conclusion

Our quantitative analysis of pathways to delivering sample diversity in snowball samples yielded the following revisions to the literature’s recommendations:

  • Prior personal contacts are not essential for achieving sample diversity but tend to be helpful, as generating new contacts during research can be labour-intensive.
  • Sample seed diversity is important to achieving sample diversity.
  • Face-to-face interviews build trust and can help to generate further referrals.
  • Persistence (within reason) is helpful in securing interviews.
  • Sample diversity is not necessarily enhanced if a seed is advanced over numerous waves.

We do not claim that these insights are comprehensive, but we believe that these interpretations of our data may serve as a starting point for future scholars using snowball sampling procedures. All of the analyses presented in this section are based only on descriptive statistics. This means, for instance, that we cannot control for confounds such as effort [ 85 ]. An experimental research design would yield the most robust insights on sampling procedures to enhance the sampling diversity of a snowball sample (with, for instance, one research project staffed with scholars with relevant personal or professional contacts and another staffed with scholars without relevant contacts).

Overall, this work aims to advance the literature on snowball sampling as a qualitative sampling approach. While snowball sampling procedures may qualify ‘as the least “sexy” facet of qualitative research’ ([ 1 ], p. 328), these procedures are ‘not self-evident or obvious’ ([ 20 ], p. 141), since the snowball sample does not ‘somehow magically’ ([ 20 ], p. 143) start, proceed and terminate when a scholar attempts to develop a diverse sample. Rather, continuous, deliberate effort by the researcher(s) is required. Our paper has attempted to provide some insights on this effort.

Unfortunately, we developed the idea to write this paper only during the course of our research project, and thus some of our data may be skewed. For instance, we may not have been able to trace all original reach-out attempts and our data on persistence may therefore be biased. Some of those scholars grounded in quantitative thinking may also claim that the insights outlined in Section 4 lack external validity since our sample size is relatively small from a quantitative methodological perspective. In addition, our population was very specific and thus may not be comparable to other difficult-to-reach populations, and we also did not adopt an experimental research design as described above. Hence, we encourage scholars to replicate our findings via their respective research projects that employ snowball sampling. With many scholars claiming to feel more pressed than ever to deliver research results with maximum efficiency, we hope that these initial descriptive analyses of snowball sampling procedures provide some valuable insights to those employing or intending to employ this method and aiming to improve their management of it.

Supporting information

https://doi.org/10.1371/journal.pone.0201710.s001

Acknowledgments

We wish to thank our reviewers at PLOS ONE who provided constructive thoughts on this piece of work. We also thank Ralf van Santen for his outstanding contributions to this work as a research assistant.

References
  • 4. Handcock MS, Gile KJ. On the Concept of Snowball Sampling. 2011 Aug;
  • 5. Patton M. Qualitative evaluation and research methods. Beverly Hills: SAGE Publications; 1990.
  • 6. Atkinson R, Flint J. Accessing Hidden and Hard-to-Reach Populations: Snowball Research Strategies. 2001.
  • 8. Bhattacherjee A. Social Science Research: Principles, Methods, and Practices. 2012.
  • 14. Bailey K. Methods of Social Research. New York, United States: The Free Press; 1994.
  • 15. Miles MB, Huberman MA. Qualitative Data Analysis: An Expanded Sourcebook. 2nd ed. SAGE Publications; 1994.
  • 16. Boeije HR. Analysis in Qualitative Research. Thousand Oaks, United States: SAGE Publications; 2009.
  • 17. Atkinson P, Delamont S. SAGE qualitative research methods. Sage; 2010.
  • 18. Daniel J. Sampling essentials: Practical guidelines for making sampling choices. SAGE Publications; 2011.
  • 19. Emmel N. Sampling and choosing cases in qualitative research: A realist approach. Sage; 2013.
  • 21. Morgan DL. Snowball sampling. In: The SAGE Encyclopedia of Qualitative Research Methods. Thousand Oaks, United States: SAGE Publications; 2008.
  • 24. Ragin CC. Fuzzy-Set Social Science. Chicago, United States: The University of Chicago Press; 2000.
  • 25. Morgan DL. Sampling. In: The SAGE Encyclopedia of Qualitative Research Methods. Thousand Oaks, United States: SAGE Publications; 2008.
  • 26. Morgan DL. Sampling frame. In: The SAGE Encyclopedia of Qualitative Research Methods. Thousand Oaks, United States: SAGE Publications; 2008.
  • 33. Whyte WF. Street Corner Society—The Social Structure of an Italian Slum. Chicago, United States: Chicago University Press; 1955.
  • 38. King G, Keohane RO, Verba S. Designing social inquiry: scientific inference in qualitative research. Princeton, United States: Princeton University Press; 1994.
  • 40. Morgan DL. Random Sampling. In: The SAGE Encyclopedia of Qualitative Research Methods. Thousand Oaks, United States: SAGE Publications; 2008.
  • 41. Donmoyer R. Generalizability. In: The SAGE Encyclopedia of Qualitative Research Methods. Thousand Oaks: SAGE Publications; 2008.
  • 42. Creswell JW. Educational Research. Planning, Conducting, and Evaluating Quantitative and Qualitative Research. Upper Sadle River, United States: Pearson Education; 2005.
  • 44. Daniel J. Choosing the Type of Nonprobability Sampling. In: Sampling Essentials: Practical Guidelines for Making Sampling Choices. Thousand Oaks, United States: SAGE Publications; 2012.
  • 45. Maxwell JA. Qualitative Research Design: An Interactive Approach. Thousand Oaks, United States: SAGE Publications; 1996.
  • 47. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston, United States: Wadsworth Cengage Learning; 2002.
  • 51. Olsen W. Triangulation in Social Research: Qualitative and Quantitative Methods Can Really Be Mixed. In: Developments in Sociology. Causeway Press; 2004.
  • 54. Berg S. Snowball sampling. In: Encyclopedia of Statistical World. Thousand Oaks, United States: SAGE Publications; 1988. p. 529–32.
  • 62. Harrison RT, Leitch CM. Fighting a rearguard action? Reflections on the philosophy and practice of qualitative research in entrepreneurship. In: Handbook of Research Methods and Applications in Entrepreneurship and Small Business. Cheltenham: Edward Elgar Publishing Ltd.; 2014.
  • 64. Feldman MS, Bell J, Berger MT. Gaining access: A practical and theoretical guide for qualitative researchers. Rowman Altamira; 2004.
  • 66. Heckathorn DD, Magnani R. Snowball and Respondent-Driven Sampling. In: Behavioral Surveillance Surveys: Guidelines for Repeated Behavioral Surveys in Populations at Risk of HIV. 2004.
  • 68. Robins G. Doing Social Network Research: Network-Based Research Design for Social Scientists. London, United Kingdom: SAGE Publications; 2015.
  • 69. Khagram S. Toward Democratic Governance for Sustainable Development: Transnational Civil Society Organizing Around Big Dams. In: Third Force : The Rise of Transnational Civil Society. Carnegie Endowment for International Peace; 2000.
  • 70. Kriesberg L. Constructive Conflict: From Escalation to Resolution. Lanham, United States: Rowman & Littlefield Publishers; 1998.
  • 71. Nüsser M. Political Ecology of Large Dams: a Critical Review. 2003.
  • 72. Nüsser M, Baghel R. The emergence of technological hydroscapes in the Anthropocene: socio- hydrology and development paradigms of large dams. In: Handbook on Geographies of Technology,. Cheltenham, United Kingdom: Edward Elgar Publishing; 2017. p. 287–301.
  • 73. Scudder T. The Future of Large Dams: Dealing with Social, Environmental, Institutional and Political Costs. London, United Kingdom: Earthscan; 2005.
  • 76. Nombre A. Yes, we need to build more large dams for water storage and energy for sustainable development! 2014.
  • 77. Mason M. Sample Size and Saturation in PhD Studies Using Qualitative Interviews. Vol. 11, Forum Qualitative Sozialforschung / Forum: Qualitative Social Research. 2010.


Sampling Knowledge: The Hermeneutics of Snowball Sampling in Qualitative Research

Chaim Noy

"During the past two decades we have witnessed a rather impressive growth of theoretical innovations and conceptual revisions of epistemological and methodological approaches within constructivist-qualitative quarters of the social sciences. Methodological discussions have commonly addressed a variety of methods for collecting and analyzing empirical material, yet the critical grounds upon which these were reformulated have rarely been extended to embrace sampling concepts and procedures. The latter have been overlooked, qualifying only as a ‘technical’ research stage. This article attends to snowball sampling via constructivist and feminist hermeneutics, suggesting that when viewed critically, this popular sampling method can generate a unique type of social knowledge—knowledge which is emergent, political and interactional. The article reflects upon researches about backpacker tourists and marginalized men, where snowball sampling was successfully employed in investigating these groups’ organic social networks and social dynamics. In both studies, interesting interrelations were found between sampling and interviewing facets, leading to a reconceptualization of the method of snowball sampling in terms of power relations, social networks and social capital."



Basics of Qualitative Data Collection

First Online: 10 June 2024

Rashina Hoda (ORCID: orcid.org/0000-0001-5147-8096)

Qualitative data is the main currency of the trade in a socio-technical grounded theory (STGT) study and in qualitative research in general. The quality of a STGT study depends first and foremost on the quality of the data collected. In this chapter, we will learn about popular sampling techniques such as convenience, snowball, random, representative, and theoretical sampling. Then we will learn some tips for data collection, including tips for effective sampling, recruitment, and engagement. Finally, we will learn about several aspects of dealing with data such as data ethics, data sharing, data quality using data quality evaluation guidelines, data usage, and data quantity including theoretical saturation. Researchers will benefit from reading this chapter in conjunction with the Techniques of Qualitative Data Collection, Qualitative Data Preparation and Filtering, and Socio-technical Grounded Theory for Qualitative Data Analysis chapters. Collectively, they cover socio-technical grounded theory’s basic stage.



Author information

Rashina Hoda, Faculty of Information Technology, Monash University, Melbourne, VIC, Australia


© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Hoda, R. (2024). Basics of Qualitative Data Collection. In: Qualitative Research with Socio-Technical Grounded Theory. Springer, Cham. https://doi.org/10.1007/978-3-031-60533-8_7

Published: 10 June 2024. Publisher: Springer, Cham. Print ISBN: 978-3-031-60532-1. Online ISBN: 978-3-031-60533-8.


CMAJ


Managing “socially admitted” patients in hospital: a qualitative study of health care providers’ perceptions


Background: Emergency departments are a last resort for some socially vulnerable patients without an acute medical illness (colloquially known as “socially admitted” patients), resulting in their occupation of hospital beds typically designated for patients requiring acute medical care. In this study, we aimed to explore the perceptions of health care providers regarding patients admitted as “social admissions.”

Methods: This qualitative study was informed by grounded theory and involved semistructured interviews at a Nova Scotia tertiary care centre. From October 2022 to July 2023, we interviewed eligible participants, including any health care clinician or administrator who worked directly with “socially admitted” patients. Virtual or in-person individual interviews were audio-recorded and transcribed, then independently and iteratively coded. We mapped themes on the 5 domains of the Quintuple Aim conceptual framework.

Results: We interviewed 20 nurses, physicians, administrators, and social workers. Most identified as female ( n = 11) and White ( n = 13), and were in their mid to late career ( n = 13). We categorized 9 themes into 5 domains: patient experience (patient description, provision of care); care team well-being (moral distress, hierarchy of care); health equity (stigma and missed opportunities, prejudices); cost of care (wait-lists and scarcity of alternatives); and population health (factors leading to vulnerability, system changes). Participants described experiences caring for “socially admitted” patients, perceptions and assumptions underlying “social” presentations, system barriers to care delivery, and suggestions of potential solutions.

Interpretation: Health care providers viewed “socially admitted” patients as needing enhanced care but identified individual, institutional, and system challenges that impeded its realization. Examining perceptions of the people who care for “socially admitted” patients offers insights to guide clinicians and policy-makers in caring for socially vulnerable patients.

See related editorial at www.cmaj.ca/lookup/doi/10.1503/cmaj.240577

Emergency departments have become a destination of last resort for some patients who are made vulnerable by social circumstances, resulting in their occupying hospital beds typically designated for people with acute medical issues. 1 “Social admission” is a colloquial, nondiagnostic label used to describe a person for whom no acute medical issues are recognized to be contributing to their seeking health care. However, many health care providers understand that patients who are admitted for social reasons face challenges such as a breakdown of care supports or an inability of the patient or family to cope with the demands of living at home. 2 These patients often have lengthy stays in emergency departments or hospital wards, and frequently encounter barriers (e.g., housing or home support) delaying safe discharge from hospital. The colloquial terms “failure to cope,” “acopia,” “orphan patient,” or “home care impossible,” among others, are sometimes used to refer to these patients. 3 – 5 Such terminology can be stigmatizing because it indicates a value judgment that patients require admission solely on “social” grounds, sometimes failing to account for underlying medical complexity. 6

The “social admission” phenomenon is an under-researched area in health care. These patients, often categorized by health care providers as not being acutely ill, experience in-hospital death rates as high as 22.2%–34.9%. 7 , 8 Explanations may include under-triaging in the emergency department owing to poor recognition of atypical clinical presentations and delays in timely assessments. 5 Patients may be misdiagnosed or develop acute illness during their hospital stay. In 2 international studies, by the end of hospitalization, an admission diagnosis of “acopia” was no longer the discharge diagnosis in 88%–92.5% of cases. 7 , 9 Diagnoses of falls, delirium, and mobility problems were common, but sepsis was initially undiagnosed in almost one-third of these patients. 7 This raises questions about health care providers’ awareness of atypical presentations and decision-making for “social” presentations, which often require a nuanced understanding of both medical and social care needs.

Health care providers face challenges providing high-quality care to this patient population across Canada 1 , 10 and internationally. 1 , 4 , 10 – 13 “Social admissions” may account for as many as 1 in 10 patients (0.57%–9.3%) presenting to the emergency department and 1 in 25 admissions to hospital, with increasing prevalence with age. 14 A survey from Wales showed that 51.8% of hospital physicians consider that they frequently care for these patients, encountering them several times per week. 15

Since “social admission” is a nondiagnostic label, its definition varies across regions and health care systems, meaning no guidelines exist to standardize approaches to meet medical or social care needs. Qualitative data evaluating how health care providers perceive and care for these patients are lacking. Therefore, we aimed to explore the perceptions of health care providers regarding patients admitted as “social admissions.”

Methods

Study design

This qualitative study was informed by constructivist grounded theory, which uses inductive analysis of data collected from participants to generate new theories. 16 , 17 We conducted semistructured interviews with clinicians and health care administrators between October 2022 and July 2023. Given that little is known about “social admissions,” grounded theory was best suited to our objective to generate an explanatory theory about this phenomenon. 17

The research team included qualitative methods experts, geriatric medicine specialists, clinician scientists, primary care and emergency department clinicians, and members with administrative leadership roles. We also included nursing students, medical students, and internal medicine residents of diverse backgrounds.

We reported this study using the Consolidated Criteria for Reporting Qualitative Research Checklist (Appendix 1, available at www.cmaj.ca/lookup/doi/10.1503/cmaj.231430/tab-related-content ). 18

Setting and participants

Studying “social admissions” can be challenging because of the variability in terminology and admission policies across different jurisdictions. 19 The Orphan Patient Policy is a standardized “social admission” pathway used at the Queen Elizabeth II Health Sciences Centre, a tertiary care centre in Halifax, Nova Scotia. Halifax is the provincial capital and the largest city in the Atlantic region of Canada. In Nova Scotia, health care is provided through a publicly funded health care system.

Since March 2012, any patient, regardless of age or living situation, can be admitted to the Queen Elizabeth II Health Sciences Centre under the Orphan Patient Policy if they have undergone a medical assessment by a physician in the emergency department, are determined to have no acute or new medical conditions, and have been seen by a social worker or discharge planning nurse to exhaust all home care options. Inability to return home includes situations of homelessness, unavailable community supports, or waiting for transitions to long-term care. These patients are admitted to the first available inpatient bed, based on a rotating roster of all hospital admission services (e.g., medicine, psychiatry, surgery, subspecialty medicine or surgery, and hospitalist). The admitting service and its allied health care team become responsible for the patient’s care and disposition, with the expectation that discharge planning is the primary issue. Although these patients are locally called “orphan patients,” we use the terminology “social admission” throughout this paper.

Eligible participants included any clinical provider or administrator who worked directly with “socially admitted” patients. To identify potential participants for our study, we held initial interviews with hospital nursing bed flow managers who are responsible for administering the Orphan Patient Policy.

To recruit participants, we used snowball sampling: we emailed each health care provider or department that had been recommended by the initial interviewees (i.e., the nursing bed flow managers), and those suggested by study participants during their interviews or by key knowledge users with whom we shared preliminary findings (see Data analysis). Preliminary analyses also informed recruitment, and we used purposive and theoretical sampling 20 , 21 to ensure that the perspectives of multiple health care professionals within the “social admission” care pathway were included, with the aim of data saturation. We approached several departments and individuals who declined to participate or did not respond to our requests for interviews. These included recreation therapy, physiotherapy, occupational therapy, some administrative positions, and several subspecialty medicine divisions.
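The study does not describe any recruitment tooling, but the snowball logic above amounts to keeping a register of who referred whom and which professional roles are still missing before judging saturation. The sketch below is a minimal illustration of such a register; the participant IDs, roles, and target-role list are invented for the example.

```python
from collections import deque

# Hypothetical referral register: each entry is (participant, role, referred_by).
interviews = [
    ("P01", "bed flow manager", None),   # initial interviewee
    ("P02", "charge nurse", "P01"),
    ("P03", "social worker", "P01"),
    ("P04", "staff physician", "P02"),
]

# Roles the team still wants represented (an invented target list).
target_roles = {"charge nurse", "social worker", "staff physician",
                "resident", "administrator", "discharge planning nurse"}

covered = {role for _, role, _ in interviews}
print("Roles still to recruit purposively:", sorted(target_roles - covered))

# Reconstruct each participant's referral chain back to the initial interviewee.
referred_by = {pid: ref for pid, _, ref in interviews}

def chain(pid):
    steps = deque([pid])
    while referred_by.get(steps[0]) is not None:
        steps.appendleft(referred_by[steps[0]])
    return " -> ".join(steps)

for pid, _, _ in interviews:
    print(chain(pid))   # e.g. P01 -> P02 -> P04
```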

Data collection

The interview guide (Appendix 2a, available at www.cmaj.ca/lookup/doi/10.1503/cmaj.231430/tab-related-content ) was based on our literature review of “social admissions” 14 and informed by our chart reviews of more than 350 “social admissions” in Nova Scotia (unpublished data, 2021). The entire research team gave input on the interview guide through several iterative processes: multiple meetings to develop the guide, a pilot test with non-author colleagues, and a meeting after all interviewers had conducted at least 1 interview to discuss whether the guide was robust enough to elicit the information we were seeking. We revised the interview guide wording for clarity and understanding, and we added 2 major questions (interview guide questions 7 and 8) and several prompting questions.

Experienced qualitative researchers (C.S. and E.G.M.) provided training. We held 2 group and 1 individual interactive training and practice sessions, which provided methodological context, and practical approaches and techniques in qualitative interviewing. One research team member (J.C.M., L.E., G.A., or M.K.) administered individual interviews. Interviews occurred virtually (via Microsoft Teams) or in person in quiet rooms on hospital wards or participants’ offices. After interviews were completed, we contacted participants by email to provide self-identified demographic data. The survey was voluntary and anonymous, and participants selected from predefined categories or supplied free text for sex, gender, ethnicity, role, and profession (Appendix 2b).

Interviews were audio-recorded and transcribed verbatim. For additional rigour and contextualization during analysis, interviewers kept detailed field notes of their reflections during the interviews.

Data analysis

Data collection and analysis occurred simultaneously. All participants were invited to review their transcripts before analysis (1 participant opted to). We used Dedoose software for data coding and organization.

Two team members independently coded interview transcripts using an inductive approach. 16 , 17 Throughout the initial coding process, the coders (J.C.M., C.S., G.A., and M.K.) met regularly to refine, merge and expand codes, come to consensus about any disagreements and interpretations, add context to certain transcripts with their field notes from the interviews, and identify additional participants suggested by the participants. Using constant comparative and selective coding processes, 16 , 17 we generated categories and subcategories, which we grouped into themes reflecting participants’ perspectives on “social admissions.”
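The constant comparative step described here is essentially bookkeeping over codes and categories. The sketch below, with invented code and category names, shows one simple way such bookkeeping could be represented in Python; it is not the authors’ Dedoose workflow.

```python
# Hypothetical open codes attached to transcript excerpts by the coders.
open_codes = {
    "T01-042": ["last to be rounded on", "left in bed"],
    "T03-017": ["no clear slot", "passed to resident"],
    "T05-101": ["waiting for long-term care"],
}

# Draft grouping of open codes into categories, as agreed at a coding meeting.
categories = {
    "provision of care": {"last to be rounded on", "left in bed"},
    "hierarchy of care": {"no clear slot", "passed to resident"},
}

# Constant comparison step: list codes not yet assigned to any category,
# so they can be discussed, merged, or used to create a new category.
assigned = set().union(*categories.values())
unassigned = {code
              for codes in open_codes.values()
              for code in codes if code not in assigned}
print("Codes to revisit at the next meeting:", sorted(unassigned))
```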

We used several strategies to ensure rigour and trustworthiness throughout the research process. As per the grounded theory approach, we incorporated reflexivity into our analytic process and acknowledged our dual roles as researchers and health care providers delivering care. Most members of the research team were affiliated with the research site and possessed an in-depth understanding of the local context and providers involved in “social admission” care. This intimate understanding enabled us to add context to the findings. However, we also challenged our preconceptions and biases by recruiting participants with diverse experiences and perspectives, and scheduling regular meetings among research team members to triangulate findings with our internal chart review, knowledge user feedback, and data analysis. 22

We put participant narratives at the forefront by presenting the data (from preliminary interviews and after completion of interviews) to engaged key knowledge users within our hospital and university network (e.g., experienced researchers, clinicians, social workers, and administrators) in a variety of settings (e.g., individual communications, small group sessions, or internal department presentations). The knowledge users provided feedback and suggested further participants. The data were also triangulated with findings from our recent literature review. 14

After data saturation was achieved, we mapped our findings on the Quintuple Aim conceptual framework at the suggestion of a knowledge user and as per consensus with the research group. 23 , 24 This framework adequately organized and contextualized our findings. It is a well-known approach to optimizing health system performance and defines 5 fundamental domains (definitions in Appendix 1) for transforming health care: enhance patient experience, better population health, optimize cost of care, improve care team well-being, and advance health equity. 23 , 24

Ethics approval

Nova Scotia Health granted institutional research ethics approval (REB no. 1027628).

Results

We conducted 20 interviews (9 in person and 11 virtual) among hospital administrators and clinicians (Table 1). Clinicians were nurses (charge, discharge planning, and inpatient), physicians (residents and staff physicians), and social workers, representing the following services: emergency department, internal medicine, medical subspecialties (cardiology, neurology, and geriatric medicine), psychiatry, hospitalist, and surgical specialties (orthopedics, general surgery, cardiovascular surgery, and vascular surgery). Administrators included nursing bed managers and directors of hospital divisions and long-term care. The mean interview length was 38 (range 16–76) minutes.

Table 1. Demographic information of hospital administrators and clinicians who were interviewed

We categorized 9 themes into the 5 domains of the Quintuple Aim framework, as shown in Figure 1: patient experience (patient description, provision of care); care team well-being (moral distress, hierarchy of care); health equity (stigma and missed opportunities, prejudices); cost of care (wait-lists and scarcity of alternatives); and population health (factors leading to vulnerability, system changes for addressing “social admissions”). Additional illustrative quotations are presented in Appendix 3, available at www.cmaj.ca/lookup/doi/10.1503/cmaj.231430/tab-related-content.

Figure 1. Domains (in the circle) and themes (outside the circle) using the Quintuple Aim framework. 23, 24
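The resulting theme-to-domain mapping is a simple hierarchical structure. As an illustration only, it can be recorded as a dictionary keyed by Quintuple Aim domain; the sketch below uses the 9 themes reported above and is not taken from the study itself.

```python
# The 5 Quintuple Aim domains and the 9 themes reported above.
quintuple_aim = {
    "patient experience":   ["patient description", "provision of care"],
    "care team well-being": ["moral distress", "hierarchy of care"],
    "health equity":        ["stigma and missed opportunities", "prejudices"],
    "cost of care":         ["wait-lists and scarcity of alternatives"],
    "population health":    ["factors leading to vulnerability",
                             "system changes for addressing 'social admissions'"],
}

# Sanity check: 9 themes across 5 domains.
assert sum(len(themes) for themes in quintuple_aim.values()) == 9

def domain_of(theme):
    """Look up the domain a given theme was mapped to."""
    for domain, themes in quintuple_aim.items():
        if theme in themes:
            return domain
    return None

print(domain_of("moral distress"))  # care team well-being
```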

Patient experience

Participants’ description of patients.

Participants provided diverse descriptions of these patients ( Table 2 ). One cited financial precarity as a key problem faced by these patients. Another highlighted recurrent health care system interactions as being important. Some mentioned these patients had a mix of medical, mental health, and social problems. Most equated “social admissions” with older patients or those who were cognitively impaired. Some deemed them the most frail, vulnerable, or complex cases. Few considered that “socially admitted” patients had no medical conditions involved (Appendix 3) or that the medical conditions could wholly be managed at a primary care level.

Table 2. Descriptions and illustrative quotations of the patient description and provision of care themes in the patient experience domain

Provision of care

Participants described “socially admitted” patients as receiving passive and hands-off care, contrasting this with active approaches for medical and surgical cases. Participants reported that patients, especially those who were older or confused, often received limited attention and workup, leaving their needs unaddressed ( Table 2 ). The approach to care was characterized by patients being left in their beds, being the last person rounded on by the care team, and not being chosen to participate in rehabilitative programs or exercises. In short, these patients’ care needs were the last in the queue of nursing and physician priorities. Beyond direct provision of care, participants identified that hospital programs (e.g., recreation therapy) benefitting these patients had been discontinued or under-resourced (Appendix 3). Almost all clinical participants considered their ward was not the place to care for these patients.

Care team well-being

Moral distress.

Health care providers described their roles as acute care or sub-specialized experts but said they felt helpless when they were unable to provide care for “socially admitted” patients, who often had complex, unrecognized, or chronic health issues. They often stated that better care should be offered yet described challenges when caring for “socially admitted” patients. These included a lack of appropriate training, struggles to arrange suitable care, and resistance when attempting to involve other services, allied health care, or social work, leading to delays in appropriate management ( Table 3 ). As articulated by 1 participant (HC605): “I think that’s a lot to ask of different providers who may not have that skill set. So, sometimes I think it does cause, you know, moral distress and challenge for people sometimes, which then gets perhaps articulated as being ‘they shouldn’t be here.’” Many reported feeling negative toward the policy and labelling of these patients, and acknowledged it was used primarily to communicate with other health care providers. One participant suggested the policy prevented blame on clinicians for “admitting this [patient]” (HC840).

Table 3. Descriptions and illustrative quotations of the moral distress and hierarchy of care themes in the care team well-being domain

Hierarchy of care

Participants highlighted a hierarchy in health care, prioritizing acute care patients over “social admissions.” One participant reflected on how hospitals rely on pathways with these patients not fitting into a clear “slot,” representing individuals not well differentiated, individuals with complexity, or individuals with issues that are not specialty specific. Consequently, “social admissions” were passed down the hierarchy, from physicians to residents, and sometimes to nursing assistants, implying they were less worthy of routine medical attention ( Table 3 ).

Health equity

Stigma and missed opportunities.

The term “social admission” led to incorrect assumptions about medical needs and cognitive abilities. Beliefs about behaviours were noted by several participants. These assumptions were propagated as early as handovers from paramedics to emergency nursing teams ( Table 4 ). Participants highlighted instances where these patients were not medically stable and emphasized that social stressors did not exempt patients from becoming medically ill during the admission. The label was reported to be an impediment to opportunities to look for underlying treatable medical issues, compounded by the need to make timely decisions because of pressures to free up beds.

Table 4. Descriptions and illustrative quotations of the stigma and missed opportunities, and prejudices themes in the health equity domain

Ageist beliefs underpinned assumptions about capacity, especially for older “socially admitted” patients. Some participants recognized that these patients could not effectively advocate for themselves, and others pointed out that older patients were often assumed to be cognitively or functionally impaired, and decisions were made without them. Participants provided examples of premature capacity determinations made without proper medical evaluation or consultation ( Table 4 ). One participant described the invisibility of these patients, especially for women and minorities, and another noted how the care of “socially admitted” patients is undermined by negative attitudes similar to those encountered by individuals with substance use disorders (Appendix 3).

Cost of care

Wait-lists and scarcity of alternatives.

Inadequate community support often resulted in emergency department visits and hospital admissions, with the perception that hospitals are the safest place. Participants noted lengthy wait-lists for community services like home care, physiotherapy, or occupational therapy, which led to deconditioning ( Table 5 ). The transition to long-term care was described as “abysmal,” leaving patients in challenging situations for extended periods. Admissions were a “last resort” after all other options were exhausted, with patients and families struggling to access necessary care. The lack of alternatives contributed to participants’ distress when caring for “socially admitted” patients (Appendix 3).

Table 5. Description and illustrative quotations of the wait-list and scarcity of alternatives theme in the cost of care domain

Population health

Factors leading to vulnerability.

Participants identified many issues that were associated with the “social admission” label, particularly for patients with cognitive impairment ( Table 6 ). These included physical barriers (e.g., inaccessible homes), homelessness, and financial challenges. Social isolation left individuals unsupported, managing alone until emergencies, such as falls, catalyzed hospital admission. The inability to advocate for oneself was also a common observation.

Table 6. Descriptions and illustrative quotations of the factors leading to vulnerability and system changes for addressing “social admission” themes in the population health domain

System changes for addressing “social admissions”

Participants identified systemic barriers that they considered disadvantaged “socially admitted” patients. Participants were concerned that the health care system is currently in crisis (e.g., with a lack of primary care and home support), and emergency departments cannot function as intended, causing the acute care system to become the community system or “the [inter]mediate pathway between community and long-term care” ( Table 6 ). Some called for specialized seniors’ care teams to address the unique needs of older adults. Participants emphasized the importance of understanding these patients’ situations holistically, with a multidisciplinary approach to assess medical history, social factors, and available resources; several examples of ideal approaches were shared. The system’s focus on individuals with higher functioning left “socially admitted” patients underserved, with emphases on services that are “organized from a provider lens, not from a patient-need lens” (HC605).

Interpretation

We sought to understand how health care providers perceive patients labelled as “socially admitted” in hospital, and we identified 9 key themes across the Quintuple Aim framework. 23 , 24 The themes in the patient experience domain highlighted inconsistent definitions and passive care approaches for these patients, who are often seen as low priority in hospital. Under the care team well-being domain, themes of moral distress and hierarchy of care showed the challenges and dilemmas faced by health care providers. Issues of stigma (e.g., “they have dementia”), prejudices (e.g., ageism), wait-lists, and scarcity of alternatives underscored systemic challenges under the health equity and cost of care domains. Finally, factors leading to vulnerability and potential system changes were described by participants as ways to better the health of this population.

Our findings highlight the potential adverse effects on care when patients are labelled as “socially admitted” (or as “orphan patients” in the study hospital), such as incorrect assumptions about medical needs and cognitive abilities, which impedes opportunities to look for treatable medical issues. Despite a “social admission” pathway ostensibly designed to ensure there are no acute or new medical issues, patients were still perceived as having “multiple comorbidities” or being “the most frail … the most complex” ( Table 2 ). This finding is in keeping with the results of a case–control study (in London, Ontario), in which medical comorbidity played a minimal role in the label of a “failure to cope” admission among adults aged 70 years or older. Instead, recent failed discharge from hospital was significantly associated with a “social admission” label, leading the authors to suggest blame was an important part of the use of this label in a system that prizes efficiency. 3 This supports the viewpoint that it is more a system’s failure to cope than the patient’s. 10

Our findings also demonstrate possible negative impacts on health care providers not addressed in previous research. Although similar patient populations (“failure to thrive” or “failure to cope”) in British Columbia 25 and Ontario, 3 and “acopia” admissions in the United Kingdom and Australia, 7 , 9 have been researched, these studies did not consider the insights of providers directly caring for these patients. We highlight some structures (e.g., propagation of the label early in care) or cultures (e.g., ageism) in our health care systems, leading to system and individual tensions caring for “socially admitted” patients, especially in the context of few readily available alternatives. We observed that participants frequently reported feeling conflicted defining, prioritizing, and managing this patient population, yet unequivocally considered these patients deserved care — albeit care delivered by someone else. This latter finding contrasts with a survey of physicians in Wales in which two-thirds (62.7%) considered patients labelled as “social admissions/acopia” were a burden on national health resources, with 44.8% of physicians admitting to feeling that these patients were a burden on their time. 15

Despite considering that “socially admitted” patients were deserving of care, our participants recounted how care was passed down to less-senior members of the health care team. This pattern of downgrading care can lead to situations in which “socially admitted” patients are looked after by team members who possess minimal experience recognizing evolving medical presentations or lack the authority to advocate strongly for clinical reassessments when needed. The implication that the care of “social admissions” should be delegated to others reflects an implicit attitude of hierarchy and detachment from the needs associated with this patient population. Not being able to provide the care that is warranted while at the same time believing that the needed care is beneath the care they provide is in keeping with cognitive dissonance literature in medicine (i.e., holding 2 or more inconsistent beliefs or behaving in a way that is inconsistent with core beliefs). 26 Cognitive dissonance can trigger negative emotions and subsequent defensive reactions resulting in fault finding in others (e.g., blaming “social admissions”), reinforced commitment to wrong actions (e.g., propagating labels), and overlooked medical errors, 26 , 27 offering some explanations for understanding how stigma and hierarchies of care can lead to missed acute medical illnesses (e.g., sepsis, malignancy, and strokes) in previous “social admission” populations. 5 , 7 , 9

Existing literature indicates that “social admission” labelling may harm patients. 14 Our findings suggest that the use of this label appears to have little benefit for the health care providers who care for this patient population. Moreover, no evidence exists to date that “social admissions” labelling or pathways help the health care system. Therefore, re-evaluating an approach to caring for “socially admitted” patients is imperative, and this may include abandoning the nondiagnostic label.

Better support for this patient population may be achieved through enhanced policies that propose feasible solutions to support these patients. To achieve this, further steps are required to define “social admissions,” and to highlight the importance and scope of the issues surrounding the patient population captured under this label. 28 However, we found inconsistencies in how “social admissions” are described, which adds to the challenge in developing effective policies for these patients, and in comparing similar presentations across Canada. 29 Developing a consistent definition for “social admissions” may also prompt clinical specialties to claim responsibility for this population, as champions are key to raising issues for prioritization in health care. 30

“Social admissions” can be considered a “wicked problem” with no single easy solution. 31 A previously proposed ecological approach can guide clinicians in managing “social” presentations. 2 , 32 Participants in our study made suggestions about community- and institutional-level solutions such as home care and primary care teams that support social integration, more multidisciplinary care teams in and out of the hospital, and “geriatrizing” acute care. These suggestions reflect many of the same calls for action made by previous scholars and advocates, 33 , 34 and are similar to solutions proposed by the National Institute on Ageing’s “Ageing in the Right Place” report. 35 Scholars in France have proposed a societal-level solution involving the procedural and financial restructuring of ultraspecialized medicine, coupled with a revival of historic values combining medicine and social work to address the needs of an increasingly frail and socially complex population. 36

Limitations

Our study was conducted in a single tertiary health centre in Nova Scotia, where “socially admitted” patients are admitted under an institution-specific Orphan Patient Policy, which likely limits the generalizability of our findings. Our participants were mainly White and female, which also limits the generalizability to other settings across the country and internationally. Furthermore, the participant sample did not include recreational therapists, volunteers, physiotherapists, or occupational therapists. In the study centre, recreation and volunteer programs had been discontinued or reduced following the COVID-19 pandemic, and there were no occupational or physiotherapists specifically assigned to this patient population. Another limitation of our study is that some interviewers had prior acquaintance with the participants they interviewed. This familiarity may introduce bias in the data collection and interpretation, although this should be balanced with constructivist grounded theory’s emphasis on researchers as co-participants in the research process.

Conclusion

Our research draws attention to health care providers’ challenges in managing care for “socially admitted” patients, and to perceptions regarding “social” presentations, perceived system barriers and resource shortages, and some potential solutions for better patient care. Overall, no consensus emerged as to what constitutes a “social admission” (who are the patients labelled as “socially admitted”?) or ownership for “social admissions” (who cares for these patients?), and participants reported inconsistencies in care delivered for such patients (how to care for “socially admitted” patients). To improve the patient experience and alleviate the moral distress of staff who care for “socially admitted” patients in hospital, the inherent structures of our health care system, such as hierarchies and stigmatization, should be reformed to better address the needs of patients with increasingly complex social problems who present to hospitals.

Competing interests: Jasmine Mah receives scholarships supporting her PhD research from the Department of Medicine at Dalhousie University, Dalhousie Medical Research Foundation, Dr. Patrick Madore Traineeship, and the Pierre Elliott Trudeau Foundation. Kenneth Rockwood has asserted copyright of the Clinical Frailty Scale through Dalhousie University’s Industry, Liaison, and Innovation Office. In addition to academic and hospital appointments, Kenneth Rockwood is cofounder of Ardea Outcomes, which (as DGI Clinical) in the last 3 years has contracts with pharmaceutical and device manufacturers (Danone, Hollister, INmune, Novartis, Takeda) on individualized outcome measurement. In 2020, he attended an advisory board meeting with Nutricia on dementia and chaired a Scientific Workshop & Technical Review Panel on frailty for the Singapore National Research Foundation. He is associate director of the Canadian Consortium on Neurodegeneration in Aging, itself funded by the Canadian Institutes for Health Research, the Alzheimer Society of Canada, and several other charities. He holds the Kathryn Allen Weldon Chair in Alzheimer Research, funded by the Dalhousie Medical Research Foundation. Kenneth Rockwood also reports personal fees from Ardea Outcomes, the Chinese Medical Association, Wake Forest University Medical School Centre, the University of Nebraska Omaha, the Australia and New Zealand Society for Geriatric Medicine, Atria Institute, Fraser Health Authority, McMaster University, and EpiPharma. In addition, Dr. Rockwood has licensed the Clinical Frailty Scale to Enanta Pharmaceuticals, Synairgen Research, Faraday Pharmaceuticals, KCR S.A., Icosavax, BioAge Labs, Biotest AG, Qu Biologics, AstraZeneca UK, Cellcolabs AB, Pfizer, W.L. Gore & Associates, pending to Cook Research Incorporated, Renibus Therapeutics, and, as part of Ardea Outcomes, has a pending patent for Electronic Goal Attainment Scaling. He also reports permission for the Pictorial Fit-Frail Scale licensed to Congenica. Use of both the Clinical Frailty Scale and Pictorial Fit-Frail Scale is free for education, research, and nonprofit health care with completion of a permission agreement stipulating users will not change, charge for, or commercialize the scales. For-profit entities pay a licensing fee, 15% of which is retained by the Dalhousie University Office of Commercialization and Industry Engagement. The remainder of the licence fees are donated to the Dalhousie Medical Research Foundation. Melissa Andrew reports grants from Sanofi, grants and support to attend meetings from GSK, grants from Pfizer, grants from Canadian Frailty Network, personal fees from Sanofi, personal fees from Pfizer, personal fees from Seqirus, grants from Merck, grants from Public Health Agency of Canada, and grants from Canadian Institutes of Health Research, outside the submitted work. Dr. Andrew is a volunteer board member for the Alzheimer Society of Nova Scotia and the National Advisory Committee on Immunization. Sheliza Khan declares leadership in the patient flow department at Queen Elizabeth II Hospital. No other competing interests were declared.

This article has been peer reviewed.

Contributors: Jasmine Mah and Christie Stilwell contributed equally as co–first authors. Jasmine Mah contributed to the conceptualization and design, procurement of data, analysis of data, drafting of the original manuscript, and review of the manuscript. Christie Stilwell and Emily Marshall contributed to the conceptualization and design, analysis of data, drafting of the original manuscript, and review of the manuscript. Madeline Kubiseski and Gaurav Arora contributed to the conceptualization and design, procurement of data, analysis of data, and review of the manuscript. Karen Nicholls, Sheliza Khan, Jonathan Veinot, Lucy Eum, Susan Freter, Katalin Koller, Maia von Maltzahn, Kenneth Rockwood, Samuel Searle, and Melissa Andrew contributed to the conceptualization and design, analysis of data, and drafting of the original manuscript or review of manuscript drafts. All authors approved the final version to be published and agreed to be accountable for its accuracy and integrity.

Data sharing: Anonymized data from our study may be available on request. Interested parties are encouraged to contact the lead author via email to access these data or to obtain a copy of the Orphan Patient Policy. The data will be shared under terms that ensure the protection of participant privacy and compliance with relevant data protection regulations.

Funding: This study is supported by Nova Scotia Health, through a grant from the Nova Scotia Health Research Fund. Nova Scotia Health is the provincial health authority.

Accepted March 5, 2024.

This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY-NC-ND 4.0) licence, which permits use, distribution and reproduction in any medium, provided that the original publication is properly cited, the use is noncommercial (i.e., research or educational use), and no modifications or adaptations are made. See: https://creativecommons.org/licenses/by-nc-nd/4.0/

Ageing in the right place: supporting older Canadians to live where they want. Toronto: National Institute on Ageing, Toronto Metropolitan University; 2022:1–148. Available: https://static1.squarespace.com/static/5c2fa7b03917eed9b5a436d8/t/638e0857c959d1546d9f6f3a/1670252637242/AIRP+Report+Final2022-.pdf (accessed 2023 Dec. 18).


