• Open access
  • Published: 06 January 2010

A tutorial on pilot studies: the what, why and how

  • Lehana Thabane 1,2,
  • Jinhui Ma 1,2,
  • Rong Chu 1,2,
  • Ji Cheng 1,2,
  • Afisi Ismaila 1,3,
  • Lorena P Rios 1,2,
  • Reid Robson 3,
  • Marroon Thabane 1,4,
  • Lora Giangregorio 5 &
  • Charles H Goldsmith 1,2

BMC Medical Research Methodology, volume 10, Article number: 1 (2010)


A Correction to this article was published on 11 March 2023


Pilot studies for phase III trials - comparative randomized trials designed to assess the efficacy and safety of a drug or intervention - are routinely performed in many clinical areas. Also commonly known as "feasibility" or "vanguard" studies, they are designed to assess the safety of a treatment or intervention; to assess recruitment potential; to assess the feasibility of international collaboration or coordination for multicentre trials; and to increase clinical experience with the study medication or intervention ahead of the phase III trial. They are the best way to assess the feasibility of a large, expensive full-scale study, and are in fact an almost essential prerequisite. Conducting a pilot prior to the main study can enhance the likelihood of success of the main study and potentially help to avoid doomed main studies. The objective of this paper is to provide a detailed examination of the key aspects of pilot studies for phase III trials, including: 1) the general reasons for conducting a pilot study; 2) the relationships between pilot studies, proof-of-concept studies, and adaptive designs; 3) the challenges of and misconceptions about pilot studies; 4) the criteria for evaluating the success of a pilot study; 5) frequently asked questions about pilot studies; 6) some ethical aspects related to pilot studies; and 7) some suggestions on how to report the results of pilot investigations using the CONSORT format.

1. Introduction

The Concise Oxford Thesaurus [ 1 ] defines a pilot project or study as an experimental, exploratory, test, preliminary, trial or try out investigation. Epidemiology and statistics dictionaries provide similar definitions of a pilot study as a small scale

" ... test of the methods and procedures to be used on a larger scale if the pilot study demonstrates that the methods and procedures can work" [ 2 ];

"...investigation designed to test the feasibility of methods and procedures for later use on a large scale or to search for possible effects and associations that may be worth following up in a subsequent larger study" [ 3 ].

Table 1 provides a summary of definitions found on the Internet. A closer look at these definitions reveals that they are similar to the ones above, in that a pilot study is synonymous with a feasibility study intended to guide the planning of a large-scale investigation. Pilot studies are sometimes referred to as "vanguard trials" (i.e. pre-studies) intended to assess the safety of treatments or interventions; to assess recruitment potential; to assess the feasibility of international collaboration or coordination for multicentre trials; to evaluate surrogate marker data in diverse patient cohorts; and to increase clinical experience with the study medication or intervention and identify the optimal dose of treatments for the phase III trials [ 4 ]. As suggested by an African proverb from the Ashanti people in Ghana, "You never test the depth of a river with both feet", the main goal of pilot studies is to assess feasibility so as to avoid the disastrous consequences of embarking on a large study that could potentially "drown" the whole research effort.

Feasibility studies are routinely performed in many clinical areas. It is fair to say that every major clinical trial had to start with some piloting or a small-scale investigation to assess the feasibility of conducting a larger-scale study: critical care [ 5 ], diabetes management intervention trials [ 6 ], cardiovascular trials [ 7 ], and primary healthcare [ 8 ], to mention a few.

Despite their noted importance, the reality is that pilot studies receive little or no attention in scientific research training. Few epidemiology or research textbooks cover the topic in the necessary detail; in fact, we are not aware of any textbook that dedicates a chapter to the issue - many just mention it in passing or provide only cursory coverage. The objective of this paper is to provide a detailed examination of the key aspects of pilot studies. In the next section, we narrow the focus of our definition of a pilot to phase III trials. Section 3 covers the general reasons for conducting a pilot study. Section 4 deals with the relationships between pilot studies, proof-of-concept studies, and adaptive designs, while Section 5 addresses the challenges of pilot studies. Evaluation of a pilot study (i.e. how to determine whether a pilot study was successful) is covered in Section 6. We deal with several frequently asked questions about pilot studies in Section 7, using a "question-and-answer" approach. Section 8 covers some ethical aspects related to pilot studies; and in Section 9, we follow the CONSORT format [ 9 ] to offer some suggestions on how to report the results of pilot investigations.

2. Narrowing the focus: Pilot studies for randomized studies

Pilot studies can be conducted in both quantitative and qualitative studies. Adopting a similar approach to Lancaster et al. [ 10 ], we focus on quantitative pilot studies - particularly those done prior to full-scale phase III trials. Phase I trials are non-randomized studies designed to investigate the pharmacokinetics of a drug (i.e. how the drug is distributed and metabolized in the body), including finding a dose that can be tolerated with minimal toxicity. Phase II trials provide preliminary evidence on the clinical efficacy of a drug or intervention; they may or may not be randomized. Phase III trials are randomized studies comparing two or more drugs or intervention strategies to assess efficacy and safety. Phase IV trials, usually done after registration or marketing of a drug, are non-randomized surveillance studies to document experiences (e.g. side-effects, interactions with other drugs, etc.) with using the drug in practice.

For the purposes of this paper, our approach to utilizing pilot studies relies on the model for complex interventions advocated by the British Medical Research Council - which explicitly recommends the use of feasibility studies prior to Phase III clinical trials, but stresses the iterative nature of the processes of development, feasibility and piloting, evaluation and implementation [ 11 ].

3. Reasons for Conducting Pilot Studies

Van Teijlingen et al . [ 12 ] and van Teijlingen and Hundley [ 13 ] provide a summary of the reasons for performing a pilot study. In general, the rationale for a pilot study can be grouped under several broad classifications - process, resources, management and scientific (see also http://www.childrens-mercy.org/stats/plan/pilot.asp for a different classification):

Process: This assesses the feasibility of the steps that need to take place as part of the main study. Examples include determining recruitment rates, retention rates, etc.

Resources: This deals with assessing time and budget problems that can occur during the main study. The idea is to collect some pilot data on such things as the length of time to mail or fill out all the survey forms.

Management: This covers potential human and data optimization problems such as personnel and data management issues at participating centres.

Scientific: This deals with the assessment of treatment safety, determination of dose levels and response, and estimation of treatment effect and its variance.

Table 2 summarizes this classification with specific examples.

4. Relationships between Pilot Studies, Proof-of-Concept Studies, and Adaptive Designs

A proof-of-concept (PoC) study is defined as a clinical trial carried out to determine if a treatment (drug) is biologically active or inactive [ 14 ]. PoC studies usually use surrogate markers as endpoints. In general, they are phase I/II studies - which, as noted above, investigate the safety profile, dose level and response to new drugs [ 15 ]. Thus, although designed to inform the planning of phase III trials for registration or licensing of new drugs, PoC studies may not necessarily fit our restricted definition of pilot studies aimed at assessing feasibility of phase III trials as outlined in Section 2.

An adaptive trial design refers to a design that allows modifications to be made to a trial's design or statistical procedures during its conduct, with the purpose of efficiently identifying the clinical benefits/risks of new drugs or increasing the probability of success of clinical development [ 16 ]. The adaptations can be prospective (e.g. stopping a trial early due to safety, futility or efficacy at interim analysis); concurrent (e.g. changes in eligibility criteria, hypotheses or study endpoints); or retrospective (e.g. changes to the statistical analysis plan prior to locking the database or revealing treatment codes to trial investigators or patients). Piloting is normally built into adaptive trial designs by determining a priori decision rules to guide the adaptations based on cumulative data. For example, data from interim analyses could be used to refine sample size calculations [ 17 , 18 ]. This approach is routinely used in internal pilot studies - which are primarily designed to inform the sample size calculation for the main study, with recalculation of the sample size as the key adaptation. Unlike other phase III pilots, an internal pilot investigation does not usually address any other feasibility aspects, because it is essentially part of the main study [ 10 , 19 , 20 ].

Nonetheless, we need to emphasize that whether or not a study is a pilot depends on its objectives; an adaptive method is simply a strategy used to reach those objectives. Both a pilot and a non-pilot study could be adaptive.

5. Challenges of and Common Misconceptions about Pilot Studies

Pilot studies can be very informative, not only to the researchers conducting them but also to others doing similar work. However, many of them never get published, often because of the way the results are presented [ 13 ]. Quite often the emphasis is wrongly placed on statistical significance, not on feasibility - which is the main focus of the pilot study. Our experience in reviewing submissions to a research ethics board also shows that most of the pilot projects are not well designed: i.e. there are no clear feasibility objectives; no clear analytic plans; and certainly no clear criteria for success of feasibility.

In many cases, pilot studies are conducted to generate data for sample size calculations. This seems especially sensible in situations where there are no data from previous studies to inform this process. However, it can be dangerous to use pilot studies to estimate treatment effects, as such estimates may be unrealistic or biased because of the limited sample sizes. Therefore, if not used cautiously, results of pilot studies can potentially mislead sample size or power calculations [ 21 ] - particularly if the pilot study was done to see if there is likely to be a treatment effect in the main study. In Section 6, we provide guidance on how to proceed with caution in this regard.

There are also several misconceptions about pilot studies. Below are some of the common reasons that researchers have put forth for calling their study a pilot.

The first common reason is that a pilot study is a small single-centre study. For example, researchers often state lack of resources for a large multi-centre study as a reason for doing a pilot. The second common reason is that a pilot investigation is a small study that is similar in size to someone else's published study. In reviewing submissions to a research ethics board, we have come across sentiments such as

So-and-so did a similar study with 6 patients and got statistical significance - ours uses 12 patients (double the size)!

We did a similar pilot before (and it was published!)

The third most common reason is that a pilot is a small study done by a student or an intern - which can be completed quickly and does not require funding. Specific arguments include

I have funding for 10 patients only;

I have limited seed (start-up) funding;

This is just a student project!

My supervisor (boss) told me to do it as a pilot .

None of the above arguments qualifies as a sound reason for calling a study a pilot. A study should only be conducted if the results will be informative; studies conducted for the reasons above may yield findings of limited utility, which would waste the researchers' and participants' efforts. The focus of a pilot study should be on assessing feasibility, unless the study was appropriately powered to assess statistical significance. Further, there are vast numbers of poorly designed and reported studies; assessing the quality of a published report can help guide decisions about whether it should be used to inform the planning or design of new studies. Finally, if a trainee or researcher is assigned a project as a pilot, it is important to discuss how the results will inform the planning of the main study, and clearly defined feasibility objectives and a rationale to justify piloting should be provided.

Sample Size for Pilot Studies

In general, sample size calculations may not be required for some pilot studies. It is important that the sample for a pilot be representative of the target study population, and that it be based on the same inclusion/exclusion criteria as the main study. As a rule of thumb, a pilot study should be large enough to provide useful information about the aspects being assessed for feasibility. Note that PoC studies require sample size estimation based on surrogate markers [ 22 ], but they are usually not powered to detect meaningful differences in clinically important endpoints. The sample used in the pilot may be included in the main study, but caution is needed to ensure that the key features of the main study are preserved in the pilot (e.g. blinding in randomized controlled trials). We recommend that any pooling of pilot and main study data be planned beforehand and described clearly in the protocol, with a clear discussion of the statistical consequences and methods. The goal is to avoid or minimize the potential bias that may arise from multiple testing or other opportunistic actions by investigators. In general, pooling, when done appropriately, can increase the efficiency of the main study [ 23 ].

As noted earlier, a carefully designed pilot study may be used to generate information for sample size calculations. Two approaches may help optimize the information from a pilot study in this context. First, consider eliciting qualitative data to supplement the quantitative information obtained in the pilot; for example, consider holding discussions with clinicians, using the approach suggested by Lenth [ 24 ], to elicit additional information on plausible effect sizes and variance estimates. Second, consider creating a sample size table for various values of the effect or variance estimates, to acknowledge the uncertainty surrounding the pilot estimates, as illustrated in the sketch below.
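To illustrate the second suggestion, the sketch below is our own (it does not appear in the paper): it tabulates the per-group sample size for a two-group comparison of means over a grid of effect sizes and standard deviations, all of which are hypothetical placeholders that would be replaced by values bracketing the pilot estimates.

```python
# A minimal sketch (not from the paper): a sample size table acknowledging
# the uncertainty of pilot-based estimates. It uses the standard
# normal-approximation formula for comparing two means:
#   n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sigma / delta)^2
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison of means."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2)

deltas = [0.3, 0.4, 0.5]   # hypothetical treatment effects
sigmas = [0.8, 1.0, 1.2]   # hypothetical SDs bracketing the pilot estimate

print("delta \\ sigma:", "  ".join(f"{s:>5}" for s in sigmas))
for d in deltas:
    print(f"{d:>13}:", "  ".join(f"{n_per_group(d, s):>5}" for s in sigmas))
```

Reading across such a table makes explicit how sensitive the main-study sample size is to modest changes in the pilot estimates.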

In some cases, one could use a confidence interval (CI) approach to estimate the sample size required to establish feasibility. For example, suppose we had a pilot trial designed primarily to determine adherence rates to a standardized risk assessment form intended to enhance venous thromboprophylaxis in hospitalized patients. Suppose it was also decided a priori that the criterion for success would be: the main trial would be 'feasible' if the risk assessment form is completed for ≥ 70% of eligible hospitalized patients.
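Continuing this example, here is a minimal sketch of the CI approach. Only the 70% adherence criterion comes from the text above; the precision target of ± 5 percentage points is our assumption. The sketch solves the normal-approximation formula n = z²p(1 − p)/d² for the sample size at which a 95% CI around the anticipated adherence rate has the desired half-width d:

```python
# Sketch of the CI approach to a feasibility sample size. Only the 70%
# adherence criterion comes from the example above; the +/- 5 percentage
# point precision target is an assumed input.
import math
from statistics import NormalDist

def n_for_proportion(p, half_width, conf=0.95):
    """Normal-approximation n to estimate a proportion p to within +/- half_width."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

print(n_for_proportion(0.70, 0.05))  # -> 323 patients under these assumed inputs
```

Tightening or loosening the precision target changes the required n substantially, which is why the target should be pre-specified alongside the success criterion.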

6. How to Interpret the Results of a Pilot Study: Criteria for Success

It is always important to state the criteria for success of a pilot study. The criteria should be based on the primary feasibility objectives. These provide the basis for interpreting the results of the pilot study and determining whether it is feasible to proceed to the main study. In general, the outcome of a pilot study can be one of the following: (i) Stop - main study not feasible; (ii) Continue, but modify protocol - feasible with modifications; (iii) Continue without modifications, but monitor closely - feasible with close monitoring; and (iv) Continue without modifications - feasible as is. A small worked sketch of checking results against such criteria follows the two examples below.

For example, the Prophylaxis of Thromboembolism in Critical Care Trial (PROTECT) was designed to assess the feasibility of a large-scale trial with the following criteria for determining success [ 25 ]:

98.5% of patients had to receive the study drug within 12 hours of randomization;

91.7% of patients had to receive every scheduled dose of the study drug in a blinded manner;

90% or more of patients had to have lower limb compression ultrasounds performed at the specified times; and

> 90% of necessary dose adjustments had to have been made appropriately in response to pre-defined laboratory criteria.

In a second example, the PeriOperative Epidural Trial (POET) Pilot Study was designed to assess the feasibility of a large, multicentre trial with the following criteria for determining success [ 26 ]:

one subject per centre per week (i.e., 200 subjects from four centres over 50 weeks) can be recruited;

at least 70% of all eligible patients can be recruited;

no more than 5% of all recruited subjects crossed over from one modality to the other; and

complete follow-up in at least 95% of all recruited subjects.
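To make the use of such pre-specified criteria concrete, here is a minimal sketch (our illustration, not from either trial) that checks invented pilot results against the POET thresholds quoted above and maps the outcome onto the stop/modify/monitor/continue decisions described at the start of this section:

```python
# Illustrative only: checking invented pilot results against the POET-style
# progression criteria quoted above. The thresholds come from the text;
# the "observed" numbers are made up for demonstration.
criteria = {
    "recruitment rate >= 1 subject/centre/week": lambda r: r["rate_per_centre_week"] >= 1.0,
    "recruited >= 70% of eligible patients":     lambda r: r["eligible_recruited"] >= 0.70,
    "crossover <= 5% of recruited subjects":     lambda r: r["crossover"] <= 0.05,
    "complete follow-up >= 95%":                 lambda r: r["followup"] >= 0.95,
}

observed = {  # hypothetical pilot results
    "rate_per_centre_week": 1.1,
    "eligible_recruited": 0.73,
    "crossover": 0.04,
    "followup": 0.93,
}

failed = [name for name, check in criteria.items() if not check(observed)]
if not failed:
    print("All criteria met - feasible as is.")
else:
    print("Criteria not met:", "; ".join(failed))
    print("Decision: modify the protocol and/or monitor closely, or stop.")
```

With these invented numbers, the follow-up criterion fails, so the pilot would point to protocol modification (e.g. improved retention procedures) rather than proceeding unchanged.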

7. Frequently asked questions about pilot studies

In this section, we offer our thoughts on some frequently asked questions about pilot studies. These could be helpful not only to clinicians and trainees, but to anyone interested in health research.

Can I publish the results of a pilot study?

- Yes, every attempt should be made to publish.

Why is it important to publish the results of pilot studies?

- To provide information about feasibility to the research community, so that resources are not unnecessarily spent on studies that may not be feasible. Further, having such information can help researchers avoid duplicating efforts in assessing feasibility.

- Finally, researchers have an ethical and scientific obligation to attempt to publish the results of every research endeavor. However, the focus should be on feasibility goals. Emphasis should not be placed on statistical significance when pilot studies are not powered to detect minimal clinically important differences. Such studies typically do not show statistically significant results - remember that underpowered studies (with no statistically significant results) are inconclusive, not negative, since "no evidence of effect" is not "evidence of no effect" [ 27 ].

Can I combine data from a pilot with data from the main study?

- Yes, provided the sampling frame and methodologies are the same. This can increase the efficiency of the main study - see Section 5.

Can I combine the results of a pilot with the results of another study or in a meta-analysis?

- Yes, provided the sampling frame and methodologies are the same.

- No, if the main study is reported and it includes the pilot study.

Can the results of a pilot study be valid on their own, without the existence of the main study?

- Yes, if the results show that it is not feasible to proceed to the main study or there is insufficient funding.

Can I apply for funding for a pilot study?

- Yes. Like any grant, it is important to justify the need for piloting.

- The pilot has to be placed in the context of the main study.

Can I randomize patients in a pilot study?

- Yes. For a phase III pilot study, one of the goals could be to assess how a randomization procedure might work in the main study or whether the idea of randomization might be acceptable to patients [ 10 ]. In general, it is always best for a pilot to maintain the same design as the main study.

How can I use the information from a pilot to estimate the sample size?

- Use with caution, as results from pilot studies can potentially mislead sample size calculations.

- Consider supplementing the information with qualitative discussions with clinicians - see section 5; and

- Create a sample size table to acknowledge the uncertainty of the pilot information - see section 5.

Can I use the results of a pilot study to treat my patients?

- Not a good idea!

- Pilot studies are primarily for assessing feasibility.

What can I do with a failed or bad pilot study?

- No study is a complete failure; it can always be used as a bad example! However, it is worth making clear that a pilot study showing that the main study is not likely to be feasible is not a failed (pilot) study. In fact, it is a success - because you avoided wasting scarce resources on a study destined for failure!

8. Ethical Aspects of Pilot Studies

Halpern et al. [ 28 ] stated that conducting underpowered trials is unethical. However, they proposed that underpowered trials are ethical in two situations: (i) small trials of interventions for rare diseases - which require documenting explicit plans for including the results with those of similar trials in a prospective meta-analysis; and (ii) early-phase trials in the development of drugs or devices - provided they are adequately powered for defined purposes other than randomized treatment comparisons. Pilot studies of phase III trials (dealing with common diseases) are not addressed in their proposal. It is therefore prudent to ask: is it ethical to conduct a study whose feasibility cannot be guaranteed with a high probability of success?

It seems unethical to consider running a phase III study without having sufficient data or information about the feasibility. In fact, most granting agencies often require data on feasibility as part of their assessment of the scientific validity for funding decisions.

There is, however, one important ethical aspect of pilot studies that has received little or no attention from researchers, research ethics boards and ethicists alike. This pertains to the obligation that researchers have to patients or participants in a trial to disclose the feasibility nature of pilot studies. This is essential given that some pilot studies may not lead to further studies. A review of the commonly cited research ethics guidelines - the Nuremberg Code [ 29 ], the Helsinki Declaration [ 30 ], the Belmont Report [ 31 ], ICH Good Clinical Practice [ 32 ], and the International Ethical Guidelines for Biomedical Research Involving Human Subjects [ 33 ] - shows that pilot studies are not addressed in any of these guidelines. Canadian researchers are also encouraged to follow the Tri-Council Policy Statement (TCPS) [ 34 ]; it too does not address how pilot studies should be approached. It seems to us that, given the special nature of feasibility or pilot studies, the disclosure of their purpose to study participants requires special wording that informs them of the definition of a pilot study, states the feasibility objectives of the study, and clearly defines the criteria for success of feasibility. To fully inform participants, we suggest using the following wording in the consent form:

" The overall purpose of this pilot study is to assess the feasibility of conducting a large study to [state primary objective of the main study]. A feasibility or pilot study is a study that... [state a general definition of a feasibility study]. The specific feasibility objectives of this study are ... [state the specific feasibility objectives of the pilot study]. We will determine that it is feasible to carry on the main study if ... [state the criteria for success of feasibility] ."

9. Recommendations for Reporting the Results of Pilot Studies

Adopted from the CONSORT Statement [ 9 ], Table 3 provides a checklist of items to consider including in a report of a pilot study.

Title and abstract

Item #1: The title or abstract should indicate that the study is a "pilot" or "feasibility" study.

As the foremost summary of the contents of any report, it is important for the title to clearly indicate that the report is of a pilot or feasibility study. This also helps other researchers during electronic searches for information about feasibility issues. Our quick search of PubMed [on July 13, 2009], using the terms "pilot" OR "feasibility" OR "proof-of-concept", revealed 24,423 (16%) hits for studies that had these terms in the title or abstract, compared with 149,365 hits for studies that had these terms anywhere in the text.

Item #2: Scientific background for the main study and explanation of rationale for assessing feasibility through piloting

The rationale for initiating a pilot should be based on the need to assess feasibility for the main study. Thus, the background of the main study should clearly describe what is known or not known about important feasibility aspects to provide context for piloting.

Item #3: Participants and setting of the study

The description of the inclusion-exclusion or eligibility criteria for participants should be the same as in the main study. The settings and locations where the data were collected should also be clearly described.

Item #4: Interventions

Give precise details of the interventions intended for each group, and how and when they were actually administered (if applicable). State clearly whether any aspects of the intervention are being assessed for feasibility.

Item #5: Objectives

State the specific scientific primary and secondary objectives and hypotheses for the main study and the specific feasibility objectives. It is important to clearly indicate the feasibility objectives as the primary focus for the pilot.

Item #6: Outcomes

Clearly define primary and secondary outcome measures for the main study. Then, clearly define the feasibility outcomes and how they were operationalized - these should include key elements such as recruitment rates, consent rates, completion rates, variance estimates, etc. In some cases, a pilot study may be conducted with the aim to determine a suitable (clinical or surrogate) endpoint for the main study. In such a case, one may not be able to define the primary outcome of the main study until the pilot is finished. However, it is important that determining the primary outcome of the main study be clearly stated as part of feasibility outcomes.

Item #7: Sample Size

Describe how the sample size was determined. If the pilot is a proof-of-concept study, state whether the sample size was calculated based on the primary/key surrogate marker(s). In general, if the pilot is for a phase III study, there may be no need for a formal sample size calculation. However, the confidence interval approach may be used to calculate and justify the sample size based on the key feasibility objective(s).

Item #8: Feasibility criteria

Clearly describe the criteria for assessing success of feasibility - these should be based on the feasibility objectives.

Item #9: Statistical Analysis

Describe the statistical methods for the analysis of primary and secondary feasibility outcomes.

Item #10: Ethical Aspects

State whether the study received research ethics approval. Describe how informed consent was handled - given the feasibility nature of the study.

Item #11: Participant Flow

Describe the flow of participants through each stage of the study (use of a flow diagram is strongly recommended - see CONSORT [ 9 ] for a template). Describe deviations from the pilot study protocol as planned, together with the reasons for the deviations. State the number of exclusions at each stage and the corresponding reasons for exclusion.

Item #12: Recruitment

Report the dates defining the periods of recruitment and follow-up.

Item #13: Baseline Data

Report the baseline demographic and clinical characteristics of the participants.

Item #14: Outcomes and Estimation

For each primary and secondary feasibility outcome, report the point estimate of effect and its precision (e.g., 95% CI), if applicable.
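For instance, here is a small sketch (with invented counts, not data from any study) of reporting an observed recruitment rate together with a 95% Wilson score interval:

```python
# Hypothetical illustration of reporting a feasibility outcome with its
# precision: a 95% Wilson score interval for an invented recruitment
# rate of 52 out of 80 eligible patients.
import math
from statistics import NormalDist

def wilson_ci(successes, n, conf=0.95):
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = successes / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(52, 80)
print(f"Recruitment rate 65% (95% CI {lo:.0%} to {hi:.0%})")
```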

Item #15: Interpretation

Interpretation of the results should focus on feasibility, taking into account the stated criteria for success of feasibility, study hypotheses, sources of potential bias or imprecision (given the feasibility nature of the study) and the dangers associated with multiplicity - repeated testing on multiple outcomes.

Item #16: Generalizability

Discuss the generalizability (external validity) of the feasibility aspects observed in the study. State clearly what modifications in the design of the main study (if any) would be necessary to make it feasible.

Item #17: Overall evidence of feasibility

Discuss the general results in the context of overall evidence of feasibility. It is important that the focus be on feasibility.

10. Conclusions

Pilot or vanguard studies provide a good opportunity to assess the feasibility of large full-scale studies. Pilot studies are the best way to assess the feasibility of a large, expensive full-scale study, and are in fact an almost essential prerequisite. Conducting a pilot prior to the main study can enhance the likelihood of success of the main study and potentially help to avoid doomed main studies. Pilot studies should be well designed, with clear feasibility objectives, clear analytic plans, and explicit criteria for determining success of feasibility. They should be used cautiously for determining treatment effects and variance estimates for power or sample size calculations. Finally, they should be scrutinized the same way as full-scale studies, and every attempt should be made to publish the results in peer-reviewed journals.

Change history

11 March 2023

A Correction to this paper has been published: https://doi.org/10.1186/s12874-023-01880-1

References

1. Waite M: Concise Oxford Thesaurus. 2002, Oxford, England: Oxford University Press, 2.

2. Last JM, editor: A Dictionary of Epidemiology. 2001, Oxford University Press, 4.

3. Everitt B: Medical Statistics from A to Z: A Guide for Clinicians and Medical Students. 2006, Cambridge: Cambridge University Press, 2.

4. Tavel JA, Fosdick L, ESPRIT Vanguard Group, ESPRIT Executive Committee: Closeout of four phase II Vanguard trials and patient rollover into a large international phase III HIV clinical endpoint trial. Control Clin Trials. 2001, 22: 42-48. 10.1016/S0197-2456(00)00114-8.

5. Arnold DM, Burns KE, Adhikari NK, Kho ME, Meade MO, Cook DJ: The design and interpretation of pilot trials in clinical research in critical care. Crit Care Med. 2009, 37 (Suppl 1): 69-74. 10.1097/CCM.0b013e3181920e33.

6. Computerization of Medical Practice for the Enhancement of Therapeutic Effectiveness. Last accessed August 8, 2009, [ http://www.compete-study.com/index.htm ]

7. Heart Outcomes Prevention Evaluation Study. Last accessed August 8, 2009, [ http://www.ccc.mcmaster.ca/hope.htm ]

8. Cardiovascular Health Awareness Program. Last accessed August 8, 2009, [ http://www.chapprogram.ca/resources.html ]

9. Moher D, Schulz KF, Altman DG, CONSORT Group (Consolidated Standards of Reporting Trials): The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. J Am Podiatr Med Assoc. 2001, 91: 437-442.

10. Lancaster GA, Dodd S, Williamson PR: Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004, 10: 307-312. 10.1111/j..2002.384.doc.x.

11. Craig N, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M: Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008, 337: a1655. 10.1136/bmj.a1655.

12. Van Teijlingen ER, Rennie AM, Hundley V, Graham W: The importance of conducting and reporting pilot studies: the example of the Scottish Births Survey. J Adv Nurs. 2001, 34: 289-295. 10.1046/j.1365-2648.2001.01757.x.

13. Van Teijlingen ER, Hundley V: The Importance of Pilot Studies. Social Research Update. 2001, 35. [ http://sru.soc.surrey.ac.uk/SRU35.html ]

14. Lawrence Gould A: Timing of futility analyses for 'proof of concept' trials. Stat Med. 2005, 24: 1815-1835. 10.1002/sim.2087.

15. Fardon T, Haggart K, Lee DK, Lipworth BJ: A proof of concept study to evaluate stepping down the dose of fluticasone in combination with salmeterol and tiotropium in severe persistent asthma. Respir Med. 2007, 101: 1218-1228. 10.1016/j.rmed.2006.11.001.

16. Chow SC, Chang M: Adaptive design methods in clinical trials - a review. Orphanet J Rare Dis. 2008, 3: 11. 10.1186/1750-1172-3-11.

17. Gould AL: Planning and revising the sample size for a trial. Stat Med. 1995, 14: 1039-1051. 10.1002/sim.4780140922.

18. Coffey CS, Muller KE: Properties of internal pilots with the univariate approach to repeated measures. Stat Med. 2003, 22: 2469-2485. 10.1002/sim.1466.

19. Zucker DM, Wittes JT, Schabenberger O, Brittain E: Internal pilot studies II: comparison of various procedures. Stat Med. 1999, 18: 3493-3509.

20. Kieser M, Friede T: Re-calculating the sample size in internal pilot designs with control of the type I error rate. Stat Med. 2000, 19: 901-911.

21. Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA: Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006, 63: 484-489. 10.1001/archpsyc.63.5.484.

22. Yin Y: Sample size calculation for a proof of concept study. J Biopharm Stat. 2002, 12: 267-276. 10.1081/BIP-120015748.

23. Wittes J, Brittain E: The role of internal pilot studies in increasing the efficiency of clinical trials. Stat Med. 1990, 9: 65-71. 10.1002/sim.4780090113.

24. Lenth R: Some Practical Guidelines for Effective Sample Size Determination. The American Statistician. 2001, 55: 187-193. 10.1198/000313001317098149.

25. Cook DJ, Rocker G, Meade M, Guyatt G, Geerts W, Anderson D, Skrobik Y, Hebert P, Albert M, Cooper J, Bates S, Caco C, Finfer S, Fowler R, Freitag A, Granton J, Jones G, Langevin S, Mehta S, Pagliarello G, Poirier G, Rabbat C, Schiff D, Griffith L, Crowther M, PROTECT Investigators, Canadian Critical Care Trials Group: Prophylaxis of Thromboembolism in Critical Care (PROTECT) Trial: a pilot study. J Crit Care. 2005, 20: 364-372. 10.1016/j.jcrc.2005.09.010.

26. Choi PT, Beattie WS, Bryson GL, Paul JE, Yang H: Effects of neuraxial blockade may be difficult to study using large randomized controlled trials: the PeriOperative Epidural Trial (POET) Pilot Study. PLoS One. 2009, 4 (2): e4644. 10.1371/journal.pone.0004644.

27. Altman DG, Bland JM: Absence of evidence is not evidence of absence. BMJ. 1995, 311: 485.

28. Halpern SD, Karlawish JH, Berlin JA: The continuing unethical conduct of underpowered clinical trials. JAMA. 2002, 288: 358-362. 10.1001/jama.288.3.358.

29. The Nuremberg Code, Research ethics guideline. 2005, Last accessed August 8, 2009, [ http://www.hhs.gov/ohrp/references/nurcode.htm ]

30. The Declaration of Helsinki, Research ethics guideline. Last accessed December 22, 2009, [ http://www.wma.net/en/30publications/10policies/b3/index.html ]

31. The Belmont Report, Research ethics guideline. Last accessed August 8, 2009, [ http://ohsr.od.nih.gov/guidelines/belmont.html ]

32. The ICH Harmonized Tripartite Guideline - Guideline for Good Clinical Practice. Last accessed August 8, 2009, [ http://www.gcppl.org.pl/ma_struktura/docs/ich_gcp.pdf ]

33. The International Ethical Guidelines for Biomedical Research Involving Human Subjects. Last accessed August 8, 2009, [ http://www.fhi.org/training/fr/Retc/pdf_files/cioms.pdf ]

34. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans, Government of Canada. Last accessed August 8, 2009, [ http://www.pre.ethics.gc.ca/english/policystatement/policystatement.cfm ]

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/10/1/prepub


Acknowledgements

Dr Lehana Thabane is clinical trials mentor for the Canadian Institutes of Health Research. We thank the reviewers for insightful comments and suggestions which led to improvements in the manuscript.

Author information

Authors and Affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada

Lehana Thabane, Jinhui Ma, Rong Chu, Ji Cheng, Afisi Ismaila, Lorena P Rios, Marroon Thabane & Charles H Goldsmith

Biostatistics Unit, St Joseph's Healthcare Hamilton, Hamilton, ON, Canada

Lehana Thabane, Jinhui Ma, Rong Chu, Ji Cheng, Lorena P Rios & Charles H Goldsmith

Department of Medical Affairs, GlaxoSmithKline Inc., Mississauga, ON, Canada

Afisi Ismaila & Reid Robson

Department of Medicine, Division of Gastroenterology, McMaster University, Hamilton, ON, Canada

Marroon Thabane

Department of Kinesiology, University of Waterloo, Waterloo, ON, Canada

Lora Giangregorio


Corresponding author

Correspondence to Lehana Thabane.

Additional information

Competing interests.

The authors declare that they have no competing interests.

Authors' contributions

LT drafted the manuscript. All authors reviewed several versions of the manuscript, read and approved the final version.

The original online version of this article was revised: the authors would like to correct the sample size in the fourth paragraph under the heading Sample Size for Pilot Studies from "75 patients" to "289 patients".

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article.

Thabane, L., Ma, J., Chu, R. et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol 10 , 1 (2010). https://doi.org/10.1186/1471-2288-10-1


Received : 09 August 2009

Accepted : 06 January 2010

Published : 06 January 2010

DOI : https://doi.org/10.1186/1471-2288-10-1


  • Pilot Study
  • Sample Size Calculation
  • Research Ethics Board
  • Adaptive Design

BMC Medical Research Methodology

ISSN: 1471-2288


Pilot Study in Research: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She is studying for a Master's Degree in Counseling for Mental Health and Wellness, beginning in September 2023. Julia's research has been published in peer-reviewed journals.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A pilot study, also known as a feasibility study, is a small-scale preliminary study conducted before the main research to check the feasibility or improve the research design.

Pilot studies can be very important before conducting a full-scale research project, helping design the research methods and protocol.

How Does it Work?

Pilot studies are a fundamental stage of the research process. They can help identify design issues and evaluate a study’s feasibility, practicality, resources, time, and cost before the main research is conducted.

It involves selecting a few people and trying out the study on them. It is possible to save time and, in some cases, money by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, as well as problems with the task devised.

Sometimes the task is too hard, and the researcher may observe a floor effect, where none of the participants can score well or complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling.”

This enables researchers to predict an appropriate sample size, budget accordingly, and improve the study design before performing a full-scale project.

Pilot studies also provide researchers with preliminary data to gain insight into the potential results of their proposed experiment.

However, pilot studies should not be used to test hypotheses since the appropriate power and sample size are not calculated. Rather, pilot studies should be used to assess the feasibility of participant recruitment or study design.

By conducting a pilot study, researchers will be better prepared to face the challenges that might arise in the larger study. They will be more confident with the instruments they will use for data collection.

Multiple pilot studies may be needed in some studies, and qualitative and/or quantitative methods may be used.

To avoid bias, pilot studies are usually carried out on individuals who are as similar as possible to the target population but not on those who will be a part of the final sample.

Feedback from participants in the pilot study can be used to improve the experience for participants in the main study. This might include reducing the burden on participants, improving instructions, or identifying potential ethical issues.

Experiment Pilot Study

In a pilot study with an experimental design, you would want to ensure that your measures of the variables of interest are reliable and valid.

You would also want to check that you can effectively manipulate your independent variables and that you can control for potential confounding variables.

A pilot study allows the research team to gain experience and training, which can be particularly beneficial if new experimental techniques or procedures are used.

Questionnaire Pilot Study

It is important to conduct a questionnaire pilot study for the following reasons:
  • Check that respondents understand the terminology used in the questionnaire.
  • Check that emotive questions are not used, as they make people defensive and could invalidate their answers.
  • Check that leading questions have not been used as they could bias the respondent’s answer.
  • Ensure that the questionnaire can be completed in a reasonable amount of time. If it’s too long, respondents may lose interest or not have enough time to complete it, which could affect the response rate and the data quality.

By identifying and addressing issues in the pilot study, researchers can reduce errors and risks in the main study. This increases the reliability and validity of the main study’s results.

Advantages

  • Assessing the practicality and feasibility of the main study
  • Testing the efficacy of research instruments
  • Identifying and addressing any weaknesses or logistical problems
  • Collecting preliminary data
  • Estimating the time and costs required for the project
  • Determining what resources are needed for the study
  • Identifying the necessity to modify procedures that do not elicit useful data
  • Adding credibility and dependability to the study
  • Pretesting the interview format
  • Enabling researchers to develop consistent practices and familiarize themselves with the procedures in the protocol
  • Addressing safety issues and management problems

Limitations

  • Require extra costs, time, and resources.
  • Do not guarantee the success of the main study.
  • Contamination (i.e., if data from the pilot study or pilot participants are included in the main study results).
  • Funding bodies may be reluctant to fund a further study if the pilot study results are published.
  • Do not have the power to assess treatment effects due to small sample size.

Examples

  • Viscocanalostomy: A Pilot Study (Carassa, Bettin, Fiori, & Brancato, 1998)
  • WHO International Pilot Study of Schizophrenia (Sartorius, Shapiro, Kimura, & Barrett, 1972)
  • Stephen LaBerge of Stanford University ran a series of experiments in the 1980s investigating lucid dreaming. In 1985, he performed a pilot study demonstrating that time perception during lucid dreaming is the same as during wakefulness: he had participants enter a state of lucid dreaming and count out ten seconds, signaling the start and end with pre-determined eye movements measured with an electrooculogram (EOG).
  • Negative Word-of-Mouth by Dissatisfied Consumers: A Pilot Study (Richins, 1983)
  • A pilot study and randomized controlled trial of the mindful self‐compassion program (Neff & Germer, 2013)
  • Pilot study of secondary prevention of posttraumatic stress disorder with propranolol (Pitman et al., 2002)
  • In unstructured observations, the researcher records all relevant behavior without a system. There may be too much to record, and the behaviors recorded may not necessarily be the most important, so the approach is usually used as a pilot study to see what type of behaviors would be recorded.
  • Perspectives of the use of smartphones in travel behavior studies: Findings from a literature review and a pilot study (Gadziński, 2018)

Further Information

  • Lancaster, G. A., Dodd, S., & Williamson, P. R. (2004). Design and analysis of pilot studies: recommendations for good practice. Journal of evaluation in clinical practice, 10 (2), 307-312.
  • Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., … & Goldsmith, C. H. (2010). A tutorial on pilot studies: the what, why and how. BMC Medical Research Methodology, 10 (1), 1-10.
  • Moore, C. G., Carter, R. E., Nietert, P. J., & Stewart, P. W. (2011). Recommendations for planning pilot studies in clinical and translational research. Clinical and translational science, 4 (5), 332-337.

Carassa, R. G., Bettin, P., Fiori, M., & Brancato, R. (1998). Viscocanalostomy: a pilot study. European journal of ophthalmology, 8 (2), 57-61.

Gadziński, J. (2018). Perspectives of the use of smartphones in travel behaviour studies: Findings from a literature review and a pilot study. Transportation Research Part C: Emerging Technologies, 88 , 74-86.

In J. (2017). Introduction of a pilot study. Korean Journal of Anesthesiology, 70 (6), 601–605. https://doi.org/10.4097/kjae.2017.70.6.601

LaBerge, S., LaMarca, K., & Baird, B. (2018). Pre-sleep treatment with galantamine stimulates lucid dreaming: A double-blind, placebo-controlled, crossover study. PLoS One, 13 (8), e0201246.

Leon, A. C., Davis, L. L., & Kraemer, H. C. (2011). The role and interpretation of pilot studies in clinical research. Journal of psychiatric research, 45 (5), 626–629. https://doi.org/10.1016/j.jpsychires.2010.10.008

Malmqvist, J., Hellberg, K., Möllås, G., Rose, R., & Shevlin, M. (2019). Conducting the Pilot Study: A Neglected Part of the Research Process? Methodological Findings Supporting the Importance of Piloting in Qualitative Research Studies. International Journal of Qualitative Methods. https://doi.org/10.1177/1609406919878341

Neff, K. D., & Germer, C. K. (2013). A pilot study and randomized controlled trial of the mindful self‐compassion program. Journal of Clinical Psychology, 69 (1), 28-44.

Pitman, R. K., Sanders, K. M., Zusman, R. M., Healy, A. R., Cheema, F., Lasko, N. B., … & Orr, S. P. (2002). Pilot study of secondary prevention of posttraumatic stress disorder with propranolol. Biological psychiatry, 51 (2), 189-192.

Richins, M. L. (1983). Negative word-of-mouth by dissatisfied consumers: A pilot study. Journal of Marketing, 47 (1), 68-78.

Sartorius, N., Shapiro, R., Kimura, M., & Barrett, K. (1972). WHO International Pilot Study of Schizophrenia1. Psychological medicine, 2 (4), 422-425.

Teijlingen, E. R., & Hundley, V. (2001). The importance of pilot studies. Social Research Update, (35).


  • Open access
  • Published: 31 October 2020

Guidance for conducting feasibility and pilot studies for implementation trials

  • Nicole Pearson ORCID: orcid.org/0000-0003-2677-2327 1,2,
  • Patti-Jean Naylor 3,
  • Maureen C. Ashe 5,
  • Maria Fernandez 4,
  • Sze Lin Yoong 1,2 &
  • Luke Wolfenden 1,2

Pilot and Feasibility Studies, volume 6, Article number: 167 (2020)


Implementation trials aim to test the effects of implementation strategies on the adoption, integration or uptake of an evidence-based intervention within organisations or settings. Feasibility and pilot studies can assist with building and testing effective implementation strategies by helping to address uncertainties around design and methods, assessing potential implementation strategy effects and identifying potential causal mechanisms. This paper aims to provide broad guidance for the conduct of feasibility and pilot studies for implementation trials.

We convened a group with a mutual interest in the use of feasibility and pilot trials in implementation science including implementation and behavioural science experts and public health researchers. We conducted a literature review to identify existing recommendations for feasibility and pilot studies, as well as publications describing formative processes for implementation trials. In the absence of previous explicit guidance for the conduct of feasibility or pilot implementation trials specifically, we used the effectiveness-implementation hybrid trial design typology proposed by Curran and colleagues as a framework for conceptualising the application of feasibility and pilot testing of implementation interventions. We discuss and offer guidance regarding the aims, methods, design, measures, progression criteria and reporting for implementation feasibility and pilot studies.

Conclusions

This paper provides a resource for those undertaking preliminary work to enrich and inform larger scale implementation trials.


The failure to translate effective interventions for improving population and patient outcomes into policy and routine health service practice denies the community the benefits of investment in such research [ 1 ]. Improving the implementation of effective interventions has therefore been identified as a priority of health systems and research agencies internationally [ 2 , 3 , 4 , 5 , 6 ]. The increased emphasis on research translation has resulted in the rapid emergence of implementation science as a scientific discipline, with the goal of integrating effective medical and public health interventions into health care systems, policies and practice [ 1 ]. Implementation research aims to do this via the generation of new knowledge, including the evaluation of the effectiveness of implementation strategies [ 7 ]. The term “implementation strategies” is used to describe the methods or techniques (e.g. training, performance feedback, communities of practice) used to enhance the adoption, implementation and/or sustainability of evidence-based interventions (Fig. 1 ) [ 8 , 9 ].

Feasibility studies: an umbrella term used to describe any type of study relating to the preparation for a main study.

Pilot studies: a subset of feasibility studies that specifically look at a design feature proposed for the main trial, whether in part or in full, conducted on a smaller scale [ 10 ].

Figure 1: Conceptual role of implementation strategies in improving intervention implementation and patient and public health outcomes.

While there has been a rapid increase in the number of implementation trials over the past decade, the quality of trials has been criticised, and the effects of the strategies tested in such trials on implementation, patient or public health outcomes have been modest [ 11 , 12 , 13 ]. To improve the likelihood of impact, factors that may impede intervention implementation should be considered during intervention development and across each phase of the research translation process [ 2 ]. Feasibility and pilot studies play an important role in improving the conduct and quality of a definitive randomised controlled trial (RCT) for both intervention and implementation trials [ 10 ]. For clinical or public health interventions, pilot and feasibility studies may serve to identify potential refinements to the intervention, address uncertainties around the feasibility of intervention trial methods, or test preliminary effects of the intervention [ 10 ]. In implementation research, feasibility and pilot studies perform the same functions as those for intervention trials, however, with a focus on developing or refining implementation strategies, refining research methods for an implementation intervention trial, or undertaking preliminary testing of implementation strategies [ 14 , 15 ]. Despite this, reviews of implementation studies appear to suggest that few full implementation randomised controlled trials have undertaken feasibility and pilot work in advance of a larger trial [ 16 ].

A range of publications provides guidance for the conduct of feasibility and pilot studies for conventional clinical or public health efficacy trials including Guidance for Exploratory Studies of complex public health interventions [ 17 ] and the Consolidated Standards of Reporting Trials (CONSORT 2010) for Pilot and Feasibility trials [ 18 ]. However, given the differences between implementation trials and conventional clinical or public health efficacy trials, the field of implementation science has identified the need for nuanced guidance [ 14 , 15 , 16 , 19 , 20 ]. Specifically, unlike traditional feasibility and pilot studies that may include the preliminary testing of interventions on individual clinical or public health outcomes, implementation feasibility and pilot studies that explore strategies to improve intervention implementation often require assessing changes across multiple levels including individuals (e.g. service providers or clinicians) and organisational systems [ 21 ]. Due to the complexity of influencing behaviour change, the role of feasibility and pilot studies of implementation may also extend to identifying potential causal mechanisms of change and facilitate an iterative process of refining intervention strategies and optimising their impact [ 16 , 17 ]. In addition, where conventional clinical or public health efficacy trials are typically conducted under controlled conditions and directed mostly by researchers, implementation trials are more pragmatic [ 15 ]. As is the case for well conducted effectiveness trials, implementation trials often require partnerships with end-users and at times, the prioritisation of end-user needs over methods (e.g. random assignment) that seek to maximise internal validity [ 15 , 22 ]. These factors pose additional challenges for implementation researchers and underscore the need for guidance on conducting feasibility and pilot implementation studies.

Given the importance of feasibility and pilot studies in improving implementation strategies and the quality of full-scale trials of those implementation strategies, our aim is to provide practice guidance for those undertaking formative feasibility or pilot studies in the field of implementation science. Specifically, we seek to provide guidance pertaining to the three possible purposes of undertaking pilot and feasibility studies, namely (i) to inform implementation strategy development, (ii) to assess potential implementation strategy effects and (iii) to assess the feasibility of study methods.

A series of three facilitated group discussions was conducted with a group comprising six members from Canada, the U.S. and Australia (the authors of this manuscript) who shared an interest in the use of feasibility and pilot trials in implementation science. Members included international experts in implementation and behavioural science, public health and trial methods, and had considerable experience in conducting feasibility, pilot and/or implementation trials. The group was responsible for developing the guidance document, including identifying and synthesising the pertinent literature, and for approving the final guidance.

To inform guidance development, a literature review was undertaken in electronic bibliographic databases and Google to identify and compile existing recommendations and guidelines for feasibility and pilot studies broadly. Through this process, we identified 30 such guidelines and recommendations relevant to our aim [ 2 , 10 , 14 , 15 , 17 , 18 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 ]. In addition, seminal methods and implementation science texts recommended by the group were examined. These included the CONSORT 2010 statement: extension to randomised pilot and feasibility trials [ 18 ], the Medical Research Council’s framework for the development and evaluation of randomised controlled trials for complex interventions to improve health [ 2 ], the National Institute of Health Research (NIHR) definitions [ 39 ] and the Quality Enhancement Research Initiative (QUERI) Implementation Guide [ 4 ]. A summary of the feasibility and pilot study guidelines and recommendations, and of the seminal methods and implementation science texts, was compiled by two authors. This document served as the primary discussion document in meetings of the group. Additional targeted searches of the literature were undertaken where the identified literature did not provide sufficient guidance. The manuscript was developed iteratively over nine months via electronic circulation and comment by the group. Any differences in views between reviewers were discussed and resolved via consensus during scheduled international video-conference calls. All members of the group supported and approved the content of the final document.

The broad guidance provided is intended to be used as a supplementary resource to existing seminal feasibility and pilot study resources. We used the definitions of feasibility and pilot studies proposed by Eldridge and colleagues [ 10 ]. These definitions propose that any study relating to the preparation for a main study may be classified as a “feasibility study”, and that the term “pilot” study denotes a subset of feasibility studies that specifically examine a design feature proposed for the main trial, whether in part or in full, on a smaller scale [ 10 ]. In addition, when referring to pilot studies, unless explicitly stated otherwise, we primarily focus on pilot trials using a randomised design. We focus on randomised trials because they are the most common trial design in implementation research and because randomised designs may provide the most robust estimates of the potential effect of implementation strategies [ 46 ]. Those undertaking pilot studies that employ non-randomised designs will need to interpret the guidance provided in this context. We acknowledge, however, that using randomised designs can prove particularly challenging in the field of implementation science, where research is often undertaken in real-world contexts with pragmatic constraints.

We used the effectiveness-implementation hybrid trial design typology proposed by Curran and colleagues as the framework for conceptualising the application of feasibility testing to implementation interventions [ 47 ]. The typology makes an explicit distinction between the purposes and methods of implementation trials and those of conventional clinical (or public health efficacy) trials. Specifically, the first two of the three hybrid designs may be relevant for implementation feasibility or pilot studies. Hybrid Type 1 trials are designed to test the effectiveness of an intervention on clinical or public health outcomes (primary aim) while conducting a feasibility or pilot study for future implementation by observing and gathering information on implementation in a real-world setting (secondary aim) [ 47 ]. Hybrid Type 2 trials involve the simultaneous testing of both the clinical intervention and a formed implementation intervention/strategy as co-primary aims. For this design, “testing” is inclusive of pilot studies with an outcome measure and a related hypothesis [ 47 ]. Hybrid Type 3 trials are definitive implementation trials designed to test the effectiveness of an implementation strategy whilst also collecting secondary outcome data on clinical or public health outcomes in a population of interest [ 47 ]. As the implementation aim of a Hybrid Type 3 trial is definitively powered, this design was not considered relevant to the conduct of feasibility and pilot studies in the field and will not be discussed further.

Embedding feasibility and pilot studies within Type 1 and Type 2 effectiveness-implementation hybrid trials has been recommended as an efficient way to increase the availability of information and evidence to accelerate the field of implementation science and the development and testing of implementation strategies [ 4 ]. However, implementation feasibility and pilot studies are also undertaken as stand-alone exploratory studies that do not include measures of effectiveness on patient or public health outcomes. As such, in addition to discussing feasibility and pilot trials embedded in hybrid trial designs, we also refer to stand-alone implementation feasibility and pilot studies.

An overview of guidance (aims, design, measures, sample size and power, progression criteria and reporting) for feasibility and pilot implementation studies can be found in Table 1 .

Purpose (aims)

The primary objective of a Hybrid Type 1 trial is to assess the effectiveness of a clinical or public health intervention (rather than an implementation strategy) on patient or population health outcomes [ 47 ]. Implementation strategies employed in these trials are often designed to maximise the likelihood of an intervention effect [ 51 ] and may not be intended to represent the strategy that would, or could feasibly, be used to support implementation in more “real world” contexts. Specific aims of implementation feasibility or pilot studies undertaken as part of Hybrid Type 1 trials are therefore formative and descriptive, because the implementation strategy has not been fully formed and will not be formally tested. Thus, the purpose of a Hybrid Type 1 feasibility study is generally to inform the development or refinement of the implementation strategy rather than to test potential effects or mechanisms [ 22 , 47 ]. An example of a Hybrid Type 1 trial by Cabassa and colleagues is provided in Additional file 1 [ 52 ].

Hybrid Type 2 trial designs have a dual purpose: (i) to test the effectiveness of the intervention on clinical or public health outcomes (e.g. a measure of disease or health behaviour) and (ii) to test or measure the impact of the implementation strategy on implementation outcomes (e.g. adoption of a health policy in a community setting) [ 53 ]. However, testing the implementation strategy on implementation outcomes may be a secondary aim in these trials and positioned as a pilot [ 22 ]. In Hybrid Type 2 trial designs, the implementation strategy is more developed than in Hybrid Type 1 trials, resembling the strategy intended for future testing in a definitive implementation randomised controlled trial. The dual testing of the evidence-based intervention and the implementation strategy in Hybrid Type 2 designs allows for direct assessment of the potential effects of an implementation strategy and exploration of components of the strategy to further refine logic models. Additionally, such trials allow for assessments of the feasibility, utility, acceptability or quality of research methods for use in a planned definitive trial. An example of a Hybrid Type 2 trial design by Barnes and colleagues [ 54 ] is included in Additional file 2 .

Non-hybrid pilot implementation studies are undertaken in the absence of a broader effectiveness trial. Such studies typically occur when the effectiveness of a clinical or public health intervention is well established, but robust strategies to promote its broader uptake and integration into clinical or public health services remain untested [ 15 ]. In these situations, implementation pilot studies may test or explore specific trial methods for a future definitive randomised implementation trial. Similarly, a pilot implementation study may be undertaken to provide a more rigorous formative evaluation of hypothesised implementation strategy mechanisms [ 55 ] or of the potential impact of implementation strategies [ 56 ], using approaches similar to those employed in Hybrid Type 2 trials. Examples of potential aims for feasibility and pilot studies are outlined in Table 2 .

For implementation feasibility or pilot studies, as is the case for these types of studies in general, the selection of research design should be guided by the specific research question that the study is seeking to address [ 57 ]. Although almost any study design may be used, researchers should review the merits and potential threats to internal and external validity to help guide the selection of research design for feasibility/pilot testing [ 15 ].

As Hybrid Type 1 trials are primarily concerned with testing the effectiveness of an intervention (rather than an implementation strategy), the research design will typically employ power calculations and randomisation procedures at the health-outcome level to measure effects on behaviour, symptoms, functional and/or other clinical or public health outcomes. Hybrid Type 1 feasibility studies may employ a variety of designs, usually nested within the experimental group (those receiving the intervention and any form of implementation support strategy) of the broader efficacy trial [ 47 ]. Consistent with the aims of Hybrid Type 1 feasibility and pilot studies, the research designs employed are likely to be non-comparative. Cross-sectional surveys, interviews, document review, qualitative research or mixed-methods approaches may be used to assess implementation contextual factors, such as barriers and enablers to implementation and/or the acceptability, perceived feasibility or utility of implementation strategies or research methods [ 47 ].

Pilot implementation studies conducted as part of Hybrid Type 2 designs can make use of the comparative design of the broader effectiveness trial to examine the potential effects of the implementation strategy [ 47 ] and to more robustly assess implementation mechanisms, determinants and the influence of broader contextual factors [ 53 ]. In this trial type, mixed-methods and qualitative approaches may complement the findings of between-group (implementation strategy arm versus comparison) quantitative comparisons, enable triangulation and provide more comprehensive evidence to inform implementation strategy development and assessment. Stand-alone implementation feasibility and pilot studies are free from the constraints and opportunities of research embedded in broader effectiveness trials. As such, the research can be designed in a way that best addresses the explicit implementation objectives of the study. Specifically, non-hybrid pilot studies can maximise the applicability of study findings for future definitive trials by employing methods that directly test trial procedures such as recruitment or retention strategies [ 17 ], enable estimates of implementation strategy effects [ 56 ] or capture data to explicitly test logic models or strategy mechanisms.

The selection of outcome measures should be linked directly to the objectives of the feasibility or pilot study. Where appropriate, measures should be objective or have suitable psychometric properties, such as evidence of reliability and validity [ 58 , 59 ]. Public health evaluation frameworks often guide the choice of outcome measure in feasibility and pilot implementation work; these include RE-AIM [ 60 ], PRECEDE-PROCEED [ 61 ], Proctor and colleagues' framework on outcomes for implementation research [ 62 ] and, more recently, the “Implementation Mapping” framework [ 63 ]. Recent work by McKay and colleagues suggests a minimum data set of implementation outcomes that includes measures of adoption, reach, dose, fidelity and sustainability [ 46 ]. We discuss selected measures below and provide a summary in Table 3 [ 46 ]. Such measures could be assessed using quantitative, qualitative or mixed methods [ 46 ].

Measures to assess potential implementation strategy effects

In addition to assessing the effects of an intervention on individual clinical or public health outcomes, Hybrid Type 2 trials (and some non-hybrid pilot studies) are interested in measuring the potential effects of an implementation strategy on desired organisational or clinician practice change, such as adherence to a guideline, process, clinical standard or delivery of a program [ 62 ]. A range of potential outcomes that could be used to assess implementation strategy effects has been identified, including measures of adoption, reach, fidelity and sustainability [ 46 ]. These outcomes are described in Table 3 , including definitions and examples of how they may be applied to the implementation component of the innovation being piloted. Standardised tools to assess these outcomes are often unavailable due to the unique nature of the interventions being implemented and the variable (and changing) implementation context in which the research is undertaken [ 64 ]. Researchers may collect outcome data for these measures through environmental observations, self-completed checklists, administrative records, audio recordings of client sessions or other methods suited to their study and context [ 62 ]. The limitations of such methods, however, need to be considered.
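As a minimal illustration of how such outcome data might be summarised, the sketch below computes simple adoption, reach and fidelity proportions from hypothetical counts of the kind that administrative records or delivery checklists could yield. All counts and variable names are invented for illustration; they are not values or measures prescribed by the frameworks cited above.

    # Python sketch (illustrative only): summarising implementation outcomes
    # from hypothetical administrative records and delivery checklists.
    sites_approached = 30      # services invited to take up the program
    sites_adopting = 21        # services that began delivering it
    eligible_clients = 400     # clients across the adopting sites
    clients_reached = 252      # clients who received at least one session
    components_planned = 8     # core components specified in the delivery protocol
    components_delivered = 6   # mean number of components actually delivered

    adoption = sites_adopting / sites_approached          # uptake by organisations
    reach = clients_reached / eligible_clients            # penetration among clients
    fidelity = components_delivered / components_planned  # adherence to the protocol

    for name, value in [("adoption", adoption), ("reach", reach), ("fidelity", fidelity)]:
        print(f"{name}: {value:.0%}")

In practice, such quantitative summaries would usually be triangulated with qualitative data, given the measurement limitations noted above.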

Measures to inform the design or development of the implementation strategy

Measures informing the design or development of the implementation strategy are potentially part of all types of feasibility and pilot implementation studies. An understanding of the determinants of implementation is critical to implementation strategy development. A range of theoretical determinant frameworks have been published which describe factors that may influence intervention implementation [ 65 ], and systematic reviews have been undertaken describing the psychometric properties of many of these measures [ 64 , 66 ]. McKay and colleagues have also identified a priority set of determinants for implementation trials that could be considered for use in implementation feasibility and pilot studies, including measures of context, acceptability, adaptability, feasibility, compatibility, cost, culture, dose, complexity and self-efficacy [ 46 ]. These determinants are described in Table 3 , including definitions and how such measures may be applied to an implementation feasibility or pilot study. Researchers should consider, however, the application of such measures to assess both the intervention that is being implemented (as in a conventional intervention feasibility and pilot study) and the strategy that is being employed to facilitate its implementation, given the importance of the interaction between these factors and implementation success [ 46 ]. Examples of the potential application of measures to both the intervention and its implementation strategies have been outlined elsewhere [ 46 ]. Although a range of quantitative tools could be used to measure such determinants [ 58 , 66 ], qualitative or mixed methods are generally recommended given the capacity of qualitative measures to provide depth to the interpretation of such evaluations [ 40 ].

Measures of potential implementation determinants may be included to build or enhance logic models (Hybrid Type 1 and 2 feasibility and pilot studies) and to explore implementation strategy mechanisms (Hybrid Type 2 pilot studies and non-hybrid pilot studies) [ 67 ]. If exploring strategy mechanisms, a hypothesised logic model underpinning the implementation strategy should be articulated, including the strategy-mechanism linkages required to guide the measurement of key determinants [ 55 , 63 ]. An important determinant that can complicate logic model specification and measurement is the process of adaptation: modifications to the intervention or its delivery (implementation) made through the input of service providers or implementers [ 68 ]. Logic models should specify the components of implementation strategies thought to be “core” to their effects and those thought to be “non-core”, where adaptation may occur without adversely affecting effects. Stirman and colleagues propose a method for assessing adaptations that could be considered for use in pilot and feasibility studies of implementation trials [ 69 ]. Figure 2 provides an example of some of the implementation logic model components that may be developed or refined as part of feasibility or pilot studies of implementation [ 15 , 63 ].

Figure 2: Example of components of an implementation logic model
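To make the idea of strategy-mechanism linkages concrete, the sketch below encodes a fragment of a hypothetical implementation logic model as a simple data structure, flagging which components are regarded as “core”. The strategies, mechanisms and measures named here are invented examples, not components drawn from the cited frameworks.

    # Python sketch (illustrative only): a hypothetical logic model fragment
    # linking each implementation strategy to its hypothesised mechanism,
    # the determinant measured, and the outcome used to detect its effect.
    logic_model = {
        "educational outreach visits": {
            "mechanism": "increased provider self-efficacy",
            "determinant_measured": "self-efficacy survey score",
            "implementation_outcome": "fidelity of program delivery",
            "core": True,   # adaptation here is expected to weaken effects
        },
        "reminder emails": {
            "mechanism": "increased cue salience",
            "determinant_measured": "self-reported prompt recall",
            "implementation_outcome": "dose delivered per week",
            "core": False,  # timing and wording may be adapted to context
        },
    }

    for strategy, links in logic_model.items():
        tag = "core" if links["core"] else "adaptable"
        print(f"{strategy} ({tag}): {links['mechanism']} -> {links['implementation_outcome']}")

Writing the linkages down in this explicit form makes it easier to check that every hypothesised mechanism has a corresponding determinant measure in the pilot, and to document adaptations against the core/non-core distinction.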

Measures to assess the feasibility of study methods

Measures of the feasibility of implementation feasibility and pilot study methods are similar to those of conventional studies of clinical or public health interventions. For example, standard measures of study participation and thresholds for study attrition rates (e.g. >20%) [ 73 ] can be employed in implementation studies [ 67 ]. Previous studies have also surveyed study data collectors to assess the success of blinding strategies [ 74 ]. Researchers may also consider assessing participation in or adherence to implementation data collection procedures, the comprehension of survey items, data management strategies or other measures of the feasibility of study methods [ 15 ].

Pilot study sample size and power

In effectiveness trials, power calculations and sample size decisions are primarily based on the detection of a clinically meaningful difference in measures of the effects of the intervention on patient or public health outcomes such as behaviour, disease, symptomatology or functional outcomes [ 24 ]. In this context, the available study sample for implementation measures included in Hybrid Type 1 or 2 feasibility and pilot studies may be constrained by the sample and power calculations of the broader effectiveness trial in which they are embedded [ 47 ]. Nonetheless, a justification of the anticipated sample size is recommended for all implementation feasibility or pilot studies (hybrid or stand-alone) [ 18 ], to ensure that implementation measures and outcomes are estimated with sufficient precision to be useful. For Hybrid Type 2 and relevant stand-alone implementation pilot studies, sample size calculations for implementation outcomes should seek to achieve estimates of precision deemed sufficient to inform progression to a fully powered trial [ 18 ].
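To make the precision rationale concrete, the sketch below tabulates how tightly a feasibility proportion, such as an anticipated 70% retention rate, would be estimated at several candidate pilot sample sizes, using the normal-approximation confidence-interval half-width z*sqrt(p(1-p)/n). The 70% planning value and the candidate sizes are hypothetical inputs, not recommendations from the guidance.

    # Python sketch (illustrative only): precision of a feasibility proportion
    # at candidate pilot sample sizes, via the normal approximation.
    import math

    def ci_half_width(p, n, z=1.96):
        """Half-width of an approximate 95% confidence interval for a proportion.

        p: anticipated proportion (e.g. expected retention or adoption rate)
        n: candidate pilot sample size (participants, or sites for site-level outcomes)
        """
        return z * math.sqrt(p * (1 - p) / n)

    anticipated_retention = 0.70  # hypothetical planning value
    for n in (20, 40, 80, 160):
        hw = ci_half_width(anticipated_retention, n)
        print(f"n={n:3d}: estimated to within ±{hw:.2f} "
              f"(95% CI {anticipated_retention - hw:.2f} to {anticipated_retention + hw:.2f})")

Under these assumptions, a pilot of 20 participants estimates retention only to within about ±20 percentage points, whereas 160 participants narrow this to about ±7; whether either is sufficient to inform progression is a judgement made against the study's progression criteria.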

Progression criteria

Stating progression criteria when reporting feasibility and pilot studies is recommended as part of the CONSORT 2010 extension to randomised pilot and feasibility trials [ 18 ]. Generally, it is recommended that progression criteria be set a priori and be specific to the feasibility measures, components and/or outcomes assessed in the study [ 18 ]. While little guidance is available, suggested progression criteria include the assessment of uncertainties around feasibility, meeting recruitment targets, cost-effectiveness and refining the causal hypotheses to be tested in future trials [ 17 ]. When developing progression criteria, the use of guidelines is suggested rather than strict thresholds [ 18 ], to allow for appropriate interpretation and the exploration of potential solutions, for example through a traffic light system with varying levels of acceptability [ 17 , 24 ]. For example, Thabane and colleagues recommend that, in general, the outcome of a pilot study can be one of the following: (i) stop, main study not feasible (red); (ii) continue, but modify protocol, feasible with modifications (yellow); (iii) continue without modifications, but monitor closely, feasible with close monitoring and (iv) continue without modifications (green) [ 44 ] (p5).
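As a minimal sketch of how such a traffic light system might be operationalised, the function below maps two common feasibility measures to the four outcomes described by Thabane and colleagues. The thresholds are hypothetical placeholders, not recommended values; as emphasised above, real criteria should be pre-specified and agreed with stakeholders.

    # Python sketch (illustrative only): a traffic-light progression rule.
    def progression_decision(recruitment_rate, attrition_rate):
        """Return a progression outcome from pilot feasibility results.

        recruitment_rate: proportion of the recruitment target achieved
        attrition_rate: proportion of enrolled participants lost to follow-up
        Thresholds below are illustrative placeholders, not recommended values.
        """
        if recruitment_rate < 0.50 or attrition_rate > 0.40:
            return "red: stop, main study not feasible"
        if recruitment_rate < 0.75 or attrition_rate > 0.20:
            return "yellow: continue, but modify the protocol"
        if recruitment_rate < 0.90:
            return "continue without modifications, but monitor closely"
        return "green: continue without modifications"

    # Hypothetical pilot results: 68% of the recruitment target met, 15% attrition.
    print(progression_decision(recruitment_rate=0.68, attrition_rate=0.15))
    # -> yellow: continue, but modify the protocol

Treating the rule as a guideline rather than a hard gate, as recommended above, means a yellow result prompts exploration of protocol modifications rather than an automatic decision.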

As the goal of the implementation component of a Hybrid Type 1 trial is usually formative, it may not be necessary to set additional progression criteria for the implementation outcomes and measures examined. As Hybrid Type 2 trials test an intervention and can pilot an implementation strategy, these and non-hybrid pilot studies may set progression criteria based on evidence of potential effects, but may also consider the feasibility of trial methods; service provider, organisational or patient (or community) acceptability; fit with organisational systems and cost-effectiveness [ 17 ]. In many instances, the progression of implementation pilot studies will require the input and agreement of stakeholders [ 27 ]. As such, the establishment of progression criteria and the interpretation of pilot and feasibility study findings against such criteria require stakeholder input [ 27 ].

Reporting suggestions

As formal reporting guidelines do not exist for hybrid trial designs, we recommend that feasibility and pilot studies conducted as part of hybrid designs draw upon best-practice recommendations from relevant reporting standards, such as the CONSORT extension for randomised pilot and feasibility trials, the Standards for Reporting Implementation Studies (STaRI) guidelines and the Template for Intervention Description and Replication (TIDieR) guide, as well as any other design-relevant reporting standards [ 48 , 50 , 75 ]. These, and further reporting guidelines specific to the particular research design chosen, can be accessed through the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) network, a repository of reporting guidance [ 76 ]. In addition, researchers should specify the type of implementation feasibility or pilot study being undertaken using accepted definitions. If applicable, the choice of hybrid trial design should be specified and justified. In line with existing recommendations for the reporting of implementation trials generally, reporting the referent of outcomes (e.g. specifying whether a measure relates to the intervention or to the implementation strategy) [ 62 ] is also particularly pertinent when reporting hybrid trial designs.

Concerns are often raised regarding the quality of implementation trials and their capacity to contribute to the collective evidence base [ 3 ]. Although there have been many recent developments in the standardisation of guidance for implementation trials, information on the conduct of feasibility and pilot studies for implementation interventions remains limited, potentially contributing to a lack of exploratory work in this area and to a limited evidence base to inform effective implementation intervention design and conduct [ 15 ]. To address this, we synthesised the existing literature and provide commentary and guidance for the conduct of implementation feasibility and pilot studies. To our knowledge, this work is the first to do so and is an important first step toward the development of standardised guidelines for implementation-related feasibility and pilot studies.

Availability of data and materials

Not applicable.

Abbreviations

RCT: Randomised controlled trial

CONSORT: Consolidated Standards of Reporting Trials

EQUATOR: Enhancing the QUAlity and Transparency Of health Research

STaRI: Standards for Reporting Implementation Studies

STROBE: Strengthening the Reporting of Observational Studies in Epidemiology

TIDieR: Template for Intervention Description and Replication

NIHR: National Institute of Health Research

QUERI: Quality Enhancement Research Initiative

Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3:32.


Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Eccles MP, Armstrong D, Baker R, Cleary K, Davies H, Davies S, et al. An implementation research agenda. Implement Sci. 2009;4:18.

Department of Veterans Health Administration. Implementation Guide. Health Services Research & Development, Quality Enhancement Research Initiative. Updated 2013.

Peters DH, Nhan TT, Adam T. Implementation research: a practical guide; 2013.


Neta G, Sanchez MA, Chambers DA, Phillips SM, Leyva B, Cynkin L, et al. Implementation science in cancer prevention and control: a decade of grant funding by the National Cancer Institute and future directions. Implement Sci. 2015;10:4.

Foy R, Sales A, Wensing M, Aarons GA, Flottorp S, Kent B, et al. Implementation science: a reappraisal of our journal mission and scope. Implement Sci. 2015;10:51.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond "implementation strategies": classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12(1):125.

Eldridge SM, Lancaster GA, Campbell MJ, Thabane L, Hopewell S, Coleman CL, et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: development of a conceptual framework. PLoS One. 2016;11(3):e0150205.


Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci. 2015;10:21.

Lewis CC, Stanick C, Lyon A, Darnell D, Locke J, Puspitasari A, et al. Proceedings of the fourth biennial conference of the Society for Implementation Research Collaboration (SIRC) 2017: implementation mechanisms: what makes implementation work and why? Part 1. Implement Sci. 2018;13(Suppl 2):30.

Levati S, Campbell P, Frost R, Dougall N, Wells M, Donaldson C, et al. Optimisation of complex health interventions prior to a randomised controlled trial: a scoping review of strategies used. Pilot Feasibility Stud. 2016;2:17.

Bowen DJ, Kreuter M, Spring B, Cofta-Woerpel L, Linnan L, Weiner D, et al. How we design feasibility studies. Am J Prev Med. 2009;36(5):452–7.

Eccles M, Grimshaw J, Walker A, Johnston M, Pitts N. Changing the behavior of healthcare professionals: the use of theory in promoting the uptake of research findings. J Clin Epidemiol. 2005;58(2):107–12.

Hallingberg B, Turley R, Segrott J, Wight D, Craig P, Moore L, et al. Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance. Pilot Feasibility Stud. 2018;4:104.

Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud. 2016;2:64.

Proctor EK, Powell BJ, Baumann AA, Hamilton AM, Santens RL. Writing implementation research grant proposals: ten key ingredients. Implement Sci. 2012;7:96.

Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21(Suppl 2):S1–8.

Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Admin Pol Ment Health. 2011;38(1):4–23.

Johnson AL, Ecker AH, Fletcher TL, Hundt N, Kauth MR, Martin LA, et al. Increasing the impact of randomized controlled trials: an example of a hybrid effectiveness-implementation design in psychotherapy research. Transl Behav Med. 2018.

Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010;10(1):67.

Avery KN, Williamson PR, Gamble C, O’Connell Francischetto E, Metcalfe C, Davidson P, et al. Informing efficient randomised controlled trials: exploration of challenges in developing progression criteria for internal pilot studies. BMJ Open. 2017;7(2):e013537.

Bell ML, Whitehead AL, Julious SA. Guidance for using pilot studies to inform the design of intervention trials with continuous outcomes. Clin Epidemiol. 2018;10:153–7.

Billingham SAM, Whitehead AL, Julious SA. An audit of sample sizes for pilot and feasibility trials being undertaken in the United Kingdom registered in the United Kingdom clinical research Network database. BMC Med Res Methodol. 2013;13(1):104.

Bugge C, Williams B, Hagen S, Logan J, Glazener C, Pringle S, et al. A process for decision-making after pilot and feasibility trials (ADePT): development following a feasibility study of a complex intervention for pelvic organ prolapse. Trials. 2013;14:353.

Charlesworth G, Burnell K, Hoe J, Orrell M, Russell I. Acceptance checklist for clinical effectiveness pilot trials: a systematic approach. BMC Med Res Methodol. 2013;13(1):78.

Eldridge SM, Costelloe CE, Kahan BC, Lancaster GA, Kerry SM. How big should the pilot study for my cluster randomised trial be? Stat Methods Med Res. 2016;25(3):1039–56.

Fletcher A, Jamal F, Moore G, Evans RE, Murphy S, Bonell C. Realist complex intervention science: applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions. Evaluation (Lond). 2016;22(3):286–303.

Hampson LV, Williamson PR, Wilby MJ, Jaki T. A framework for prospectively defining progression rules for internal pilot studies monitoring recruitment. Stat Methods Med Res. 2018;27(12):3612–27.

Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA. Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006;63(5):484–9.

Smith LJ, Harrison MB. Framework for planning and conducting pilot studies. Ostomy Wound Manage. 2009;55(12):34–48.

Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004;10(2):307–12.

Leon AC, Davis LL, Kraemer HC. The role and interpretation of pilot studies in clinical research. J Psychiatr Res. 2011;45(5):626–9.

Medical Research Council. A framework for development and evaluation of RCTs for complex interventions to improve health. London: Medical Research Council; 2000.

Möhler R, Bartoszek G, Meyer G. Quality of reporting of complex healthcare interventions and applicability of the CReDECI list - a survey of publications indexed in PubMed. BMC Med Res Methodol. 2013;13(1):125.

Möhler R, Köpke S, Meyer G. Criteria for reporting the development and evaluation of complex interventions in healthcare: revised guideline (CReDECI 2). Trials. 2015;16(1):204.

National Institute for Health Research. Definitions of feasibility vs pilot studies [Available from: https://www.nihr.ac.uk/documents/guidance-on-applying-for-feasibility-studies/20474 ].

O'Cathain A, Hoddinott P, Lewin S, Thomas KJ, Young B, Adamson J, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud. 2015;1:32.

Shanyinde M, Pickering RM, Weatherall M. Questions asked and answered in pilot and feasibility randomized controlled trials. BMC Med Res Methodol. 2011;11(1):117.

Teare MD, Dimairo M, Shephard N, Hayman A, Whitehead A, Walters SJ. Sample size requirements to estimate key design parameters from external pilot randomised controlled trials: a simulation study. Trials. 2014;15(1):264.

Thabane L, Lancaster G. Improving the efficiency of trials using innovative pilot designs: the next phase in the conduct and reporting of pilot and feasibility studies. Pilot Feasibility Stud. 2017;4(1):14.

Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10:1.

Westlund E, Stuart EA. The nonuse, misuse, and proper use of pilot studies in experimental evaluation research. Am J Eval. 2016;38(2):246–61.

McKay H, Naylor PJ, Lau E, Gray SM, Wolfenden L, Milat A, et al. Implementation and scale-up of physical activity and behavioural nutrition interventions: an evaluation roadmap. Int J Behav Nutr Phys Act. 2019;16(1):102.

Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50(3):217–26.

Equator Network. Standards for reporting implementation studies (StaRI) statement 2017 [Available from: http://www.equator-network.org/reporting-guidelines/stari-statement/ ].

Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. PLoS Med. 2007;4(10):e297.

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

Schliep ME, Alonzo CN, Morris MA. Beyond RCTs: innovations in research design and methods to advance implementation science. Evid Based Commun Assess Inter. 2017;11(3-4):82–98.

Cabassa LJ, Stefancic A, O'Hara K, El-Bassel N, Lewis-Fernández R, Luchsinger JA, et al. Peer-led healthy lifestyle program in supportive housing: study protocol for a randomized controlled trial. Trials. 2015;16:388.

Landes SJ, McBain SA, Curran GM. Reprint of: An introduction to effectiveness-implementation hybrid designs. Psychiatry Res. 2020;283:112630.

Barnes C, Grady A, Nathan N, Wolfenden L, Pond N, McFayden T, Ward DS, Vaughn AE, Yoong SL. A pilot randomised controlled trial of a web-based implementation intervention to increase child intake of fruit and vegetables within childcare centres. Pilot and Feasibility Studies. 2020. https://doi.org/10.1186/s40814-020-00707-w .

Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136.

Department of Veterans Health Affairs. Implementation Guide. Health Services Research & Development, Quality Enhancement Research Initiative. 2013.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.

Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108.

Lewis CC, Mettert KD, Dorsey CN, Martinez RG, Weiner BJ, Nolen E, et al. An updated protocol for a systematic review of implementation-related measures. Syst Rev. 2018;7(1):66.

Glasgow RE, Klesges LM, Dzewaltowski DA, Estabrooks PA, Vogt TM. Evaluating the impact of health promotion programs: using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Educ Res. 2006;21(5):688–94.

Green L, Kreuter M. Health promotion planning: an educational and ecological approach. Mountain View: Mayfield Publishing; 1999.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.

Fernandez ME, Ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, et al. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7:158.

Lewis CC, Weiner BJ, Stanick C, Fischer SM. Advancing implementation science through measure development and evaluation: a study protocol. Implement Sci. 2015;10:102.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Clinton-McHarg T, Yoong SL, Tzelepis F, Regan T, Fielding A, Skelton E, et al. Psychometric properties of implementation measures for public health and community settings and mapping of constructs against the consolidated framework for implementation research: a systematic review. Implement Sci. 2016;11(1):148.

Moore CG, Carter RE, Nietert PJ, Stewart PW. Recommendations for planning pilot studies in clinical and translational research. Clin Transl Sci. 2011;4(5):332–7.

Pérez D, Van der Stuyft P, Zabala MC, Castro M, Lefèvre P. A modified theoretical framework to assess implementation fidelity of adaptive public health interventions. Implement Sci. 2016;11(1):91.

Stirman SW, Miller CJ, Toder K, Calloway A. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci. 2013;8:65.

Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci. 2007;2:40.

Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3-4):327–50.

Saunders RP, Evans MH, Joshi P. Developing a process-evaluation plan for assessing health promotion program implementation: a how-to guide. Health Promot Pract. 2005;6(2):134–47.

Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Wyse RJ, Wolfenden L, Campbell E, Brennan L, Campbell KJ, Fletcher A, et al. A cluster randomised trial of a telephone-based intervention for parents to increase fruit and vegetable consumption in their 3- to 5-year-old children: study protocol. BMC Public Health. 2010;10:216.

Consort Transparent Reporting of Trials. Pilot and Feasibility Trials 2016 [Available from: http://www.consort-statement.org/extensions/overview/pilotandfeasibility ].

Equator Network. Enhancing the QUAlity and Transparency Of health Research. [Available from: https://www.equator-network.org/ ].


Acknowledgements

Associate Professor Luke Wolfenden receives salary support from a NHMRC Career Development Fellowship (grant ID: APP1128348) and Heart Foundation Future Leader Fellowship (grant ID: 101175). Dr Sze Lin Yoong is a postdoctoral research fellow funded by the National Heart Foundation. A/Prof Maureen C. Ashe is supported by the Canada Research Chairs program.

Author information

Authors and Affiliations

School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia

Nicole Pearson, Sze Lin Yoong & Luke Wolfenden

Hunter New England Population Health, Locked Bag 10, Wallsend, NSW 2287, Australia

School of Exercise Science, Physical and Health Education, Faculty of Education, University of Victoria, PO Box 3015 STN CSC, Victoria, BC, V8W 3P1, Canada

Patti-Jean Naylor

Center for Health Promotion and Prevention Research, University of Texas Health Science Center at Houston School of Public Health, Houston, TX, 77204, USA

Maria Fernandez

Department of Family Practice, University of British Columbia (UBC) and Centre for Hip Health and Mobility, University Boulevard, Vancouver, BC, V6T 1Z3, Canada

Maureen C. Ashe


Contributions

NP and LW led the development of the manuscript. NP, LW, MCA, PN, MF and SY contributed to the drafting and final approval of the manuscript.

Corresponding author

Correspondence to Nicole Pearson.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors have no financial or non-financial interests to declare.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Example of a Hybrid Type 1 trial. Summary of publication by Cabassa et al.

Additional file 2.

Example of a Hybrid Type 2 trial. Summary of publication by Barnes et al.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Pearson, N., Naylor, PJ., Ashe, M.C. et al. Guidance for conducting feasibility and pilot studies for implementation trials. Pilot Feasibility Stud 6 , 167 (2020). https://doi.org/10.1186/s40814-020-00634-w


Received : 08 January 2020

Accepted : 18 June 2020

Published : 31 October 2020

DOI : https://doi.org/10.1186/s40814-020-00634-w


Keywords

  • Feasibility
  • Hybrid trial designs
  • Implementation science



Doing A Pilot Study: Why Is It Essential?

Affiliations

  • 1 FRACGP, Klinik Keluarga, Kuala Lumpur;
  • 2 MMed, Department of General Practice, Monash University, Australia.
  • PMID: 27570591
  • PMCID: PMC4453116

A pilot study is one of the essential stages in a research project. This paper aims to describe the importance of and steps involved in executing a pilot study by using an example of a descriptive study in primary care. The process of testing the feasibility of the project proposal, recruitment of subjects, research tool and data analysis was reported. We conclude that a pilot study is necessary and useful in providing the groundwork in a research project.

Keywords: pilot study; primary care.


Figure: Flow chart of the pilot study



Add a method, remove a method, edit datasets, a pilot study from the first course-based undergraduate research experience for online degree-seeking astronomy students.

24 May 2024 · Justin Hom, Jennifer Patience, Karen Knierman, Molly N. Simon, Ara Austin

Research-based active learning approaches are critical for the teaching and learning of undergraduate STEM majors. Course-based undergraduate research experiences (CUREs) are becoming more commonplace in traditional, in-person academic environments, but have only just started to be utilized in online education. Online education has been shown to create accessible pathways to knowledge for individuals from nontraditional student backgrounds, and increasing the diversity of STEM fields has been identified as a priority for future generations of scientists and engineers. We developed and instructed a rigorous, six-week curriculum on the topic of observational astronomy, dedicated to educating second-year online astronomy students in practices and techniques for astronomical research. Throughout the course, the students learned about telescopes, the atmosphere, filter systems, adaptive optics systems, astronomical catalogs, and image viewing and processing tools. We developed a survey, informed by previous research-validated assessments, aimed at evaluating course feedback, course impact, student self-efficacy, student science identity and community values, and student sense of belonging. The survey was administered at the conclusion of the course to all eleven students, yielding eight total responses. Although preliminary, the results of our analysis indicate that student confidence in utilizing the tools and skills taught in the course was significant. Students also felt a great sense of belonging to the astronomy community and increased confidence in conducting astronomical research in the future.



Learning videos to overcome learning loss for junior high school students: A pilot study of mathematics education

  • Lestari, Mulia
  • Johar, Rahmah
  • Mailizar, Mailizar

Indonesia is recovering from the COVID-19 pandemic, which affected the world for two years. The long school closures and the shift to remote learning caused by COVID-19 have had a marked effect on education, especially for students, who experienced learning loss because learning was not optimal. This pilot study aims to analyse the feasibility and usability of learning videos in overcoming learning loss. The research is at the early stage of development research, and a quantitative descriptive approach was used to investigate the usefulness of instructional videos in overcoming learning loss. The usability test involved 17 grade VIII students at a junior high school in Banda Aceh, with a questionnaire as the instrument. Based on the student assessment results, 14 students rated the videos as useful with very good criteria, 1 student with good criteria, and the rest with sufficient criteria. Based on these trials, it can be concluded that the learning videos for overcoming learning loss have very good usability quality.

The Role and Interpretation of Pilot Studies in Clinical Research

Andrew C. Leon

1 Weill Cornell Medical College, Department of Psychiatry, New York, NY

Lori L. Davis

2 University of Alabama School of Medicine, Birmingham, AL; VA Medical Center, Tuscaloosa, AL

Helena C. Kraemer

3 Stanford University, Department of Psychiatry and Behavioral Sciences, Stanford, CA

Pilot studies represent a fundamental phase of the research process. The purpose of conducting a pilot study is to examine the feasibility of an approach that is intended to be used in a larger scale study. The roles and limitations of pilot studies are described here using a clinical trial as an example. A pilot study can be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, new methods, and implementation of the novel intervention.

A pilot study is not a hypothesis testing study. Safety, efficacy and effectiveness are not evaluated in a pilot. Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples. Feasibility results do not necessarily generalize beyond the inclusion and exclusion criteria of the pilot design.

A pilot study is a requisite initial step in exploring a novel intervention or an innovative application of an intervention. Pilot results can inform feasibility and identify modifications needed in the design of a larger, ensuing hypothesis testing study. Investigators should be forthright in stating these objectives of a pilot study. Grant reviewers and other stakeholders should expect no more.

INTRODUCTION

Over the past several years, research funding agencies have requested applications for pilot studies that are typically limited to a shorter duration (one to three years) and a reduced budget using, for example, the NIH R34 funding mechanism (NIMH, 2010). Pilot studies play a key role in the development or refinement of new interventions, assessments, and other study procedures. Commonly, results from pilot studies are used to support more expensive and lengthier pivotal efficacy or effectiveness studies. Importantly, investigators, grant reviewers, and other stakeholders need to be aware of the essential elements and appropriate role of pilot studies, and of the strengths and limitations that bear on their interpretation.

A pilot study is, “A small-scale test of the methods and procedures to be used on a larger scale …” (Porta, 2008). The fundamental purpose of conducting a pilot study is to examine the feasibility of an approach that is intended to ultimately be used in a larger scale study. This applies to all types of research studies. Here we use the randomized controlled clinical trial (RCT) for illustration. Prior to initiating a full scale RCT, an investigator may choose to conduct a pilot study in order to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, new methods, and/or implementation of the novel intervention. A pilot study, however, is not used for hypothesis testing. Instead, it serves an earlier-phase developmental function that will enhance the probability of success of the larger subsequent RCTs that are anticipated.

For purposes of contrast, a hypothesis testing clinical trial is designed to compare randomized treatment groups in order to draw an inference about efficacy/effectiveness and safety in the patient population, based on sample results. The primary goal in designing such a study is to minimize the bias in the estimate of the treatment effect (Leon et al., 2006; Leon & Davis, 2009). That is, the trial is designed to ask the question, “Is the treatment efficacious, and if so, what is the magnitude of the effect?” Features of RCTs that help achieve this goal are randomized group assignment, double-blinded assessments, and control or comparison groups.

This manuscript focuses on pilot studies: those used to shape some, but not all, aspects of the design and implementation of hypothesis testing clinical trials. It is the feasibility results, not the efficacy or safety results, that inform subsequent trials. The objective of this manuscript is to elaborate on each of these points: efficacy, safety, and feasibility. We discuss both the design of a pilot study and the interpretation and application of pilot study results. What is discussed here applies to pilot studies, feasibility studies, and proof of concept studies, terms that have been used somewhat interchangeably in the literature and henceforth are referred to here as “pilot studies”.

WHAT A PILOT STUDY CAN DO: ASSESS FEASIBILITY

Pilot study results can guide the design and implementation of larger scale efficacy studies. There are several aspects of RCT feasibility that are informed by conducting a pilot study. A pilot study can be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, and implementation of the novel intervention, and each of these can be quantified (Table 1). Study components that are deemed infeasible or unsatisfactory should be modified in the subsequent trial or removed altogether.

Table 1. Aspects of Feasibility that Can be Examined with a Pilot Study

Study Component | Feasibility Quantification
Screening | Number screened per month
Recruitment | Number enrolled per month
Randomization | Proportion of screen-eligible patients who enroll
Retention | Treatment-specific retention rates
Treatment adherence | Rates of adherence to protocol for each intervention
Treatment fidelity | Fidelity rates per unit monitored
Assessment process | Proportion of planned ratings that are completed; duration of assessment visit
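
As a concrete illustration, the minimal sketch below (with hypothetical counts, not data from any study) computes several of the feasibility quantifications listed in Table 1:

```python
# A minimal sketch with hypothetical counts (not data from any study)
# showing how the feasibility metrics in Table 1 might be quantified.
months = 6
screened, eligible, enrolled = 120, 60, 45   # totals over the pilot period
randomized = {"active": 23, "placebo": 22}   # allocated per arm
retained = {"active": 20, "placebo": 18}     # completers per arm

print(f"Screened per month: {screened / months:.1f}")
print(f"Enrolled per month: {enrolled / months:.1f}")
print(f"Proportion of screen-eligible who enroll: {enrolled / eligible:.0%}")
for arm in randomized:
    print(f"Retention ({arm}): {retained[arm] / randomized[arm]:.0%}")
```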

Rationale for a Control or Comparison Group in a Pilot

The inclusion of a control or comparator group in a full scale trial accounts for the passage of time, the increased attention received in the study, the expectation of a therapeutic intervention, and the psychological consequences of the legitimized sick role (Klerman, 1986). Nevertheless, an investigator might wonder what purpose a control group serves in a pilot if no inferential comparisons are to be conducted. Although not essential for many aspects of the study, inclusion of a control group allows for a more realistic examination of recruitment, randomization, implementation of interventions, blinded assessment procedures, and retention in blinded interventions. Each aspect of feasibility could be quite different from an uncontrolled study when intervention assignment is randomized and blinded, particularly if placebo is a distinct possibility. In an open label pilot that has no control group, participants are recruited to a known, albeit experimental, intervention with no risk of receiving placebo. Assessments are conducted in an unblinded fashion. The implementation of only one intervention can be evaluated. Retention information is based on those receiving unblinded treatment. With these issues in mind, a pilot study can better address its goals if a control group is part of the design. A control group would also be particularly illuminating for psychotherapy or psychosocial interventions, whereby the control group’s aspects and procedures are also tested for feasibility, consistency, and acceptability.

Good Clinical Practices

A pilot study provides an opportunity to develop consistent practices to enhance data integrity and the protection of human subjects. These good clinical practices include the refinement of source documentation, informed consent procedures, data collection tools, regulatory reporting procedures, and monitoring/oversight procedures, especially when multiple sites and investigators are engaged in the study. A pilot study can be critical in research staff training and can provide experiences that strengthen and confirm the competencies and skills required for the investigation to be conducted with accuracy and precision.

SAMPLE SIZE DETERMINATION IN DESIGNING A PILOT STUDY

A pilot study is not a hypothesis testing study. Therefore, no inferential statistical tests should be proposed in a pilot study protocol. With no inferential statistical tests, a pilot study will not provide p-values. Power analyses are used to determine the sample size that is needed to provide adequate statistical power (typically 80% or 90%) to detect a clinically meaningful difference with the specified inferential statistical test. However, power analyses should not be presented in an application for a pilot study that does not propose inferential tests. A pilot sample size is instead based on the pragmatics of recruitment and the necessities for examining feasibility.

Pilot Data for a Pilot Study

Pilot studies are exploratory ventures. They generate pilot data; their design need not be guided by prior pilot data. It is quite reasonable and expected that a pilot study is proposed with no pilot or other preliminary data supporting the proposal, and that its proposed sample size is based on pragmatics such as patient flow and budgetary constraints. This does not preclude the need for a theoretical rationale for the intervention or the methodology being proposed for a pilot study.

Are Pilot Data Included in the Larger Trial?

Pilot study data generally should not be combined with data from the subsequent larger scale study. This is because it is quite likely that the methods will be modified after the pilot, even if minimally. Such changes in protocol risk adding additional, perhaps unknown, sources of variation. However, if a well-specified adaptive design were explicated prior to the start of a pilot study, and the risk of elevated type I error appropriately controlled, it is conceivable that data from before and after protocol changes could, in fact, be pooled. This is a rare exception.

EXCEEDING THE LIMITATIONS OF A PILOT STUDY: WHAT PILOT STUDIES CANNOT DO

Although a pilot study will undoubtedly incorporate relevant outcome measures and can serve a vital role in treatment development, it is not, and should not be, considered a preliminary test of the intervention hypothesis. There are two fundamental reasons that hypothesis testing is not used in a pilot study: the limited state of knowledge about the methods or intervention in the patient population to be studied, and the smaller proposed sample size.

Tolerability and Preliminary Safety

Because of the small sample size, pilot studies inform the safety of testing an intervention only in an extreme, unfortunate case, where a death occurs or repeated serious adverse events surface. However, pilot studies provide an opportunity to implement and examine the feasibility of the adverse event reporting system. Nevertheless, if some safety concerns are detected in a pilot study, group-specific rates (with 95% confidence intervals) should be reported for adverse events, treatment emergent adverse events, and serious adverse events. When event rates are reported and no adverse event is observed for a particular category, the rule of three should be applied to estimate the upper bound of the 95% CI, where the upper bound is approximately 3/n (Jovanovic & Levy, 1997; Jovanovic et al., 1997). For example, consider a study with N = 15 receiving medication in which no suicidal ideation was reported for that group. Although the observed rate of suicidal ideation is 0%, the upper bound of the 95% confidence interval is 3/15, or 20%. This imprecision, seen in the wide confidence interval, underscores the limited value of safety data from a pilot study.
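
To make the arithmetic concrete, here is a minimal sketch of the rule of three (the function name is ours, for illustration):

```python
# A minimal sketch of the rule of three: when 0 events are observed in a
# pilot arm of n participants, the upper bound of the 95% CI for the event
# rate is approximately 3/n.
def rule_of_three_upper_bound(n: int) -> float:
    return 3.0 / n

# The example above: no suicidal ideation observed among 15 participants.
print(f"{rule_of_three_upper_bound(15):.0%}")  # 20%, despite an observed rate of 0%
```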

Pilot Study Effect Sizes and Sample Size Determination

There has been a venerable tradition of using pilot studies to estimate between group effect sizes that, in turn, are used to inform the design of subsequent larger scale hypothesis testing studies. Despite its widespread use, it has been argued that this tradition is ill-founded (Friedman, Furberg and DeMets, 1998; Kraemer et al., 2006). Pilot study results should not be used for sample size determination due to the inherent imprecision in between treatment group effect size estimates from studies with small samples. Furthermore, pilot results that are presented to grant review committees tend to be selective, overly optimistic and, at times, misrepresentational.
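
To illustrate that imprecision, the following minimal simulation sketch (our illustration, not from the paper) draws many hypothetical two-arm pilots with 18 participants per group from populations whose true standardized effect is d = 0.5, and shows how widely the estimated effect size scatters:

```python
import random
import statistics

# Simulate many small two-arm pilots and look at the spread of the
# estimated Cohen's d when the true effect is d = 0.5.
random.seed(1)

def simulated_d(n_per_group: int, true_d: float) -> float:
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    active = [random.gauss(true_d, 1.0) for _ in range(n_per_group)]
    pooled_sd = ((statistics.variance(control) + statistics.variance(active)) / 2) ** 0.5
    return (statistics.mean(active) - statistics.mean(control)) / pooled_sd

estimates = sorted(simulated_d(18, 0.5) for _ in range(10_000))
print(f"middle 95% of estimated d: ({estimates[250]:.2f}, {estimates[9750]:.2f})")
# roughly (-0.2, 1.2): a single small pilot can land almost anywhere in this range
```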

The adverse consequences of using a pilot study effect size for sample size estimation correspond with the two errors of inferential testing: false positive results (Type I error) and false negative results (Type II error). If a pilot study effect size is unduly large (i.e., a false positive result), subsequent trials will be designed with an inadequate number of participants to provide the statistical power needed to detect clinically meaningful effects, and that would lead to negative trials. If a pilot study effect size is unduly small (i.e., a false negative result), subsequent development of the intervention could very well be terminated, even if the intervention eventually would have proven to be effective. Unfortunately, a false negative result could preclude the opportunity to further examine its latent efficacy.

An essential challenge of therapeutic development is that the true population effect size is unknown at the time a pilot study is designed. It is this gap in knowledge that motivates much research. An enthusiastic investigator may well believe that a series of cases provides evidence of efficacy, but such data are observational and uncontrolled; realistically, they are seldom replicated in RCTs, as seen years ago with reserpine (Kinross-Wright, 1955; Campden-Main & Wegielski, 1955; Goller, 1960). A case series estimate tends to be steeped in optimism, particularly if an estimate of such magnitude is seldom, if ever, seen in full scale trials for psychiatric disorders. Nevertheless, it is not unusual for research grant applications and pilot study publications to convey such optimism, particularly when based on pilot data.

It is possible, but highly unlikely, that the between group effect size (d) from a pilot study sample will provide a reasonable estimate of the population effect size (Δ), but that cannot be known based on the pilot data. (It is the population effect size, not the sample effect size, that an RCT is designed to detect with sufficient power.) This estimation problem has to do with the precision of d and its relation to sample size. Estimates become more precise with larger sample sizes. Therefore, estimates of effect sizes should not be a specific aim of a pilot proposal. This applies to effect sizes for any type of outcome, be it a severity rating, a response status, or survival status. The reasoning for this is as follows.

Hypothetical Example

Precision is embodied in the confidence interval (CI) around d. By definition, there is a 95% probability that Δ falls within the range of the 95% CI. Consider some examples, initially a hypothetical one. For simplicity, assume that two groups (e.g., active and placebo) of equal size ($n_i = n_j$, where the total sample size is $N = 2n_i$) will be compared on a normally distributed outcome measure for which the groups have a common variance. The between group effect size, Cohen’s d, is estimated as $d = (\bar{X}_1 - \bar{X}_2)/s$. With equal sample sizes, the 95% CI for d is approximately $d \pm 4/\sqrt{N}$. (Note that the 4 in the numerator of the final term is derived from $2 \cdot t_{N-2,\,\alpha/2}$.) For example, if the sample effect size is d = 0.50 (i.e., the two groups differ by one-half standard deviation unit) and there are 18 participants per group, the 95% CI is $0.50 \pm 0.67$, or $-0.17 \le \Delta \le 1.17$ (i.e., $0.50 \pm 4/\sqrt{36}$). This interval denotes that the true effect of active relative to placebo (Δ) is somewhere between slightly detrimental (−0.17) and tremendously beneficial (1.17). The corresponding estimates of sample size/group range from as many as 576 to as few as 12. Hence, with imprecision comes a vast disparity in sample size estimates and, if sample size determination for a subsequent study is based on an imprecise estimate, there is an enormous risk of an underpowered or overpowered design. In other words, the efficacy data from a pilot study of this size are uninformative. Many pilot studies have far fewer than 18 participants/group and therefore even greater imprecision. We learn little if anything about the efficacy of an intervention with data from a small sample; yet, as discussed earlier, a great deal can be learned from a pilot study.
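
The following minimal sketch (our illustration) reproduces these numbers, using the approximation $d \pm 4/\sqrt{N}$ for the CI and the crude rule n/group ≈ 16/d² (Lehr, 1992, cited below) for the implied sample sizes:

```python
import math

# Approximate 95% CI for Cohen's d from a two-arm pilot with 18 per group,
# and the per-group sample sizes implied at each CI bound by Lehr's rule.
d, n_per_group = 0.50, 18
N = 2 * n_per_group

half_width = 4 / math.sqrt(N)                # approximate 95% half-width for d
lower, upper = d - half_width, d + half_width
print(f"95% CI for d: ({lower:.2f}, {upper:.2f})")     # (-0.17, 1.17)

for bound in (lower, upper):
    print(f"|d| = {abs(bound):.2f} -> n/group ~ {16 / bound ** 2:.0f}")
# |d| = 0.17 -> 576 per group; |d| = 1.17 -> 12 per group
```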

An Alternative to Using Pilot Data for Sample Size Determination

An alternative approach is to base sample size estimates for a full scale RCT on what is deemed a clinically meaningful effect. For example, the investigator must use clinical experience to describe a clinically meaningful difference on the primary outcome; in the case of MDD trials, the HAMD. How many HAMD units represent a meaningful between treatment group difference? Assume that the pre-post difference on the HAMD total has an sd = 6.0. Then d = 0.20 represents 1.2 units of HAMD change, d = 0.40 represents 2.4 units of HAMD change, and d = 0.50 represents 3.0 units of HAMD change. The respective sample sizes needed per group for 80% power with a two-tailed t-test (alpha = .05) are: 393, 100, and 64 (n/group ≈ 16/d²; Lehr, 1992). The clinical interpretation of a HAMD change of 1.2 to 3.0 would drive the choice among possible sample sizes in planning a study. Ideally, a hypothesis testing study should be designed to detect the smallest difference that is generally agreed to be clinically meaningful.
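
A minimal sketch of this calculation, assuming the sd = 6.0 used above and the statsmodels power routines (any standard power calculator gives essentially the same numbers):

```python
import math
from statsmodels.stats.power import TTestIndPower

# Convert clinically framed HAMD differences into Cohen's d and solve for
# the per-group sample size at 80% power (two-tailed alpha = .05).
sd = 6.0
solver = TTestIndPower()
for hamd_diff in (1.2, 2.4, 3.0):
    d = hamd_diff / sd
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.80,
                           alternative='two-sided')
    print(f"{hamd_diff} HAMD units (d = {d:.2f}): {math.ceil(n)} per group")
# ~394, 100, and 64 per group, in line with the 393/100/64 quoted in the text
```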

The primary role of a pilot study is to examine the feasibility of a research endeavor. For instance, feasibility of recruitment, randomization, intervention implementation, blinded assessment procedures, and retention can all be examined. Investigators should be forthright in stating these objectives of a pilot study and bravely accept the limitations of a pilot study. Grant reviewers should expect no more.

The choice of the intervention for a pilot study should be based on theory, mechanism of action, a case series, or animal studies that justify a rationale for therapeutic effect. Well-conceived and implemented pilot studies will reduce the risk of several problems that are commonly faced in clinical trials. These include the inability to recruit the proposed sample size and a corresponding reduction in statistical power, excessive attrition due to intolerable procedures or interventions, and the need to modify a protocol midway through a trial. As a consequence, pilot studies can reduce the proportion of failed trials and allow research funds to be spent on projects for which feasibility has been demonstrated and quantified.

Despite the convention, pilot studies do not provide useful information regarding the population effect size because the estimates are quite crude owing to the small sample sizes. Basing a decision to proceed with or terminate evaluation of a particular intervention on pilot data is perilous because there is a very good chance that the decision will be derived from false positive or false negative results. In lieu of using pilot results, sample size determination should be based on the sample required for sufficient statistical power to detect a clinically meaningful treatment effect. The definition of clinically meaningful is not entirely empirically based, but instead requires input from clinicians who treat the patient population of interest and perhaps from patients with the disorder.

By the very nature of pilot studies, there are critical limitations to their role and interpretation. For example, a pilot study is not a hypothesis testing study, and therefore safety and efficacy are not evaluated. Further, a pilot study can only examine feasibility for the type of patient included in the study. The feasibility results do not necessarily generalize beyond the inclusion and exclusion criteria of the pilot.

In summary, pilot studies are a necessary first step in exploring novel interventions and novel applications of interventions, whether in a new patient population or with a novel delivery system (e.g., transdermal patch). Pilot results inform feasibility, which, in turn, is instructive in that it points to modifications needed in the planning and design of a larger efficacy trial.


  • Campden-Main BC, Wegielski Z. The control of deviant behavior in chronically disturbed psychotic patients by the oral administration of reserpine. Ann N Y Acad Sci. 1955;61(1):117–122.
  • Friedman LM, Furberg CD, DeMets DL. Fundamentals of Clinical Trials. 3rd ed. New York: Springer; 1998.
  • Goller ES. A controlled trial of reserpine in chronic schizophrenia. J Ment Sci. 1960;106:1408–1412.
  • Jovanovic BD, Levy PS. A look at the rule of three. The American Statistician. 1997;57:137–139.
  • Jovanovic BD, Zalenski RJ. Safety evaluation and confidence intervals when the number of observed events is small or zero. Annals of Emergency Medicine. 1997;30:301–306.
  • Kinross-Wright V. Chlorpromazine and reserpine in the treatment of psychoses. Ann N Y Acad Sci. 1955;61(1):174–182.
  • Klerman GL. Scientific and ethical considerations in the use of placebo controls in clinical trials in psychopharmacology. Psychopharmacology Bulletin. 1986;22:25–29.
  • Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA. Caution regarding the use of pilot studies to guide power calculations for study proposals. Archives of General Psychiatry. 2006;63:484–489.
  • Lehr R. Sixteen s-squared over d-squared: A relation for crude sample size estimates. Statistics in Medicine. 1992;11:1099–1102.
  • Leon AC, Davis LL. Enhancing clinical trial design of interventions for posttraumatic stress disorder. Journal of Traumatic Stress. 2009;22:603–611.
  • Leon AC, Mallinckrodt CH, Chuang-Stein C, Archibald DG, Archer GE, Chartier K. Attrition in randomized controlled clinical trials: Methodological issues in psychopharmacology. Biological Psychiatry. 2006;59:1001–1005.
  • National Institute of Mental Health. Pilot Intervention and Services Research Grants (R34). http://grants.nih.gov/grants/guide/pa-files/PAR-09-173.html. Accessed October 4, 2010.
  • Porta M. A Dictionary of Epidemiology. 5th ed. Oxford: Oxford University Press; 2008.

The state of AI in early 2024: Gen AI adoption spikes and starts to generate value

If 2023 was the year the world discovered generative AI (gen AI), 2024 is the year organizations truly began using, and deriving business value from, this new technology. In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year, with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.

About the authors

This article is a collaborative effort by Alex Singla, Alexander Sukharevsky, Lareina Yee, and Michael Chui, with Bryce Hall, representing views from QuantumBlack, AI by McKinsey, and McKinsey Digital.

Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.

AI adoption surges

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; however, this year more than two-thirds of respondents in nearly every region say their organizations are using AI. Organizations based in Central and South America are the exception, with 58 percent of respondents there reporting AI adoption. Looking by industry, the biggest increase in adoption can be found in professional services, which includes organizations focused on human resources, legal services, management consulting, market research, R&D, tax preparation, and training.

Also, responses suggest that companies are now using AI in more parts of the business. Half of respondents say their organizations have adopted AI in two or more business functions, up from less than a third of respondents in 2023 (Exhibit 2).

Gen AI adoption is most common in the functions where it can create the most value

Most respondents now report that their organizations, and they as individuals, are using gen AI. Sixty-five percent of respondents say their organizations are regularly using gen AI in at least one business function, up from one-third last year. The average organization using gen AI is doing so in two functions, most often in marketing and sales and in product and service development (two functions in which previous research determined that gen AI adoption could generate the most value; see “The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023), as well as in IT (Exhibit 3). The biggest increase from 2023 is found in marketing and sales, where reported adoption has more than doubled. Yet across functions, only two use cases, both within marketing and sales, are reported by 15 percent or more of respondents.

Gen AI is also weaving its way into respondents’ personal lives. Compared with 2023, respondents are much more likely to be using gen AI at work and even more likely to be using gen AI both at work and in their personal lives (Exhibit 4). The survey finds upticks in gen AI use across all regions, with the largest increases in Asia–Pacific and Greater China. Respondents at the highest seniority levels, meanwhile, show larger jumps in the use of gen AI tools for work and outside of work compared with their midlevel-management peers. Looking at specific industries, respondents working in energy and materials and in professional services report the largest increase in gen AI use.

Investments in gen AI and analytical AI are beginning to create value

The latest survey also shows how different industries are budgeting for gen AI. Responses suggest that, in many industries, organizations are about equally as likely to be investing more than 5 percent of their digital budgets in gen AI as they are in nongenerative, analytical-AI solutions (Exhibit 5). Yet in most industries, larger shares of respondents report that their organizations spend more than 20 percent on analytical AI than on gen AI. Looking ahead, most respondents—67 percent—expect their organizations to invest more in AI over the next three years.

Where are those investments paying off? For the first time, our latest survey explored the value created by gen AI use by business function. The function in which the largest share of respondents report seeing cost decreases is human resources. Respondents most commonly report meaningful revenue increases (of more than 5 percent) in supply chain and inventory management (Exhibit 6). For analytical AI, respondents most often report seeing cost benefits in service operations, in line with what we found last year, as well as meaningful revenue increases from AI use in marketing and sales.

Inaccuracy: The most recognized and experienced risk of gen AI use

As businesses begin to see the benefits of gen AI, they’re also recognizing the diverse risks associated with the technology. These can range from data management risks such as data privacy, bias, or intellectual property (IP) infringement to model management risks, which tend to focus on inaccurate output or lack of explainability. A third big risk category is security and incorrect use.

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk (Exhibit 7).

Conversely, respondents are less likely than they were last year to say their organizations consider workforce and labor displacement to be relevant risks and are not increasing efforts to mitigate them.

In fact, inaccuracy, which can affect use cases across the gen AI value chain, ranging from customer journeys and summarization to coding and creative content, is the only risk that respondents are significantly more likely than last year to say their organizations are actively working to mitigate.

Some organizations have already experienced negative consequences from the use of gen AI, with 44 percent of respondents saying their organizations have experienced at least one consequence (Exhibit 8). Respondents most often report inaccuracy as a risk that has affected their organizations, followed by cybersecurity and explainability.

Our previous research has found that there are several elements of governance that can help in scaling gen AI use responsibly, yet few respondents report having these risk-related practices in place (see “Implementing generative AI with speed and safety,” McKinsey Quarterly, March 13, 2024). For example, just 18 percent say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.

Bringing gen AI capabilities to bear

The latest survey also sought to understand how, and how quickly, organizations are deploying these new gen AI tools. We have found three archetypes for implementing gen AI solutions: takers use off-the-shelf, publicly available solutions; shapers customize those tools with proprietary data and systems; and makers develop their own foundation models from scratch (see “Technology’s generational moment with generative AI: A CIO and CTO guide,” McKinsey, July 11, 2023). Across most industries, the survey results suggest that organizations are finding off-the-shelf offerings applicable to their business needs, though many are pursuing opportunities to customize models or even develop their own (Exhibit 9). About half of reported gen AI uses within respondents’ business functions are utilizing off-the-shelf, publicly available models or tools, with little or no customization. Respondents in energy and materials, technology, and media and telecommunications are more likely to report significant customization or tuning of publicly available models or developing their own proprietary models to address specific business needs.

Respondents most often report that their organizations required one to four months from the start of a project to put gen AI into production, though the time it takes varies by business function (Exhibit 10). It also depends upon the approach for acquiring those capabilities. Not surprisingly, reported uses of highly customized or proprietary models are 1.5 times more likely than off-the-shelf, publicly available models to take five months or more to implement.

Gen AI high performers are excelling despite facing challenges

Gen AI is a new technology, and organizations are still early in the journey of pursuing its opportunities and scaling it across functions. So it’s little surprise that only a small subset of respondents (46 out of 876) report that a meaningful share of their organizations’ EBIT can be attributed to their deployment of gen AI. Still, these gen AI leaders are worth examining closely. These, after all, are the early movers, who already attribute more than 10 percent of their organizations’ EBIT to their use of gen AI. Forty-two percent of these high performers say more than 20 percent of their EBIT is attributable to their use of nongenerative, analytical AI, and they span industries and regions—though most are at organizations with less than $1 billion in annual revenue. The AI-related practices at these organizations can offer guidance to those looking to create value from gen AI adoption at their own organizations.

To start, gen AI high performers are using gen AI in more business functions—an average of three functions, while others average two. They, like other organizations, are most likely to use gen AI in marketing and sales and product or service development, but they’re much more likely than others to use gen AI solutions in risk, legal, and compliance; in strategy and corporate finance; and in supply chain and inventory management. They’re more than three times as likely as others to be using gen AI in activities ranging from processing of accounting documents and risk assessment to R&D testing and pricing and promotions. While, overall, about half of reported gen AI applications within business functions are utilizing publicly available models or tools, gen AI high performers are less likely to use those off-the-shelf options than to either implement significantly customized versions of those tools or to develop their own proprietary foundation models.

What else are these high performers doing differently? For one thing, they are paying more attention to gen-AI-related risks. Perhaps because they are further along on their journeys, they are more likely than others to say their organizations have experienced every negative consequence from gen AI we asked about, from cybersecurity and personal privacy to explainability and IP infringement. Given that, they are more likely than others to report that their organizations consider those risks, as well as regulatory compliance, environmental impacts, and political stability, to be relevant to their gen AI use, and they say they take steps to mitigate more risks than others do.

Gen AI high performers are also much more likely to say their organizations follow a set of risk-related best practices (Exhibit 11). For example, they are nearly twice as likely as others to involve the legal function and embed risk reviews early on in the development of gen AI solutions, that is, to “shift left.” They’re also much more likely than others to employ a wide range of other best practices, from strategy-related practices to those related to scaling.

In addition to experiencing the risks of gen AI adoption, high performers have encountered other challenges that can serve as warnings to others (Exhibit 12). Seventy percent say they have experienced difficulties with data, including defining processes for data governance, developing the ability to quickly integrate data into AI models, and an insufficient amount of training data, highlighting the essential role that data play in capturing value. High performers are also more likely than others to report experiencing challenges with their operating models, such as implementing agile ways of working and effective sprint performance management.

About the research

The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and 878 said their organizations were regularly using gen AI in at least one function. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.

Alex Singla and Alexander Sukharevsky are global coleaders of QuantumBlack, AI by McKinsey, and senior partners in McKinsey’s Chicago and London offices, respectively; Lareina Yee is a senior partner in the Bay Area office, where Michael Chui, a McKinsey Global Institute partner, is a partner; and Bryce Hall is an associate partner in the Washington, DC, office.

They wish to thank Kaitlin Noe, Larry Kanter, Mallika Jhamb, and Shinjini Srivastava for their contributions to this work.

This article was edited by Heather Hanselman, a senior editor in McKinsey’s Atlanta office.

