What is research funding, how does it influence research, and how is it recorded? Key dimensions of variation

  • Open access
  • Published: 16 September 2023
  • Volume 128, pages 6085–6106 (2023)


  • Mike Thelwall, ORCID: orcid.org/0000-0001-6065-205X (1,2)
  • Subreena Simrick, ORCID: orcid.org/0000-0002-0170-6940 (3)
  • Ian Viney, ORCID: orcid.org/0000-0002-9943-4989 (4)
  • Peter Van den Besselaar, ORCID: orcid.org/0000-0002-8304-8565 (5,6)


Evaluating the effects of some or all academic research funding is difficult because of the many different and overlapping sources, types, and scopes. It is therefore important to identify the key aspects of research funding so that funders and others assessing its value do not overlook them. This article outlines 18 dimensions through which funding varies substantially, as well as three facets of funding records. For each dimension, a list of common or possible variations is suggested. The main dimensions include the type of funder of time and equipment, any funding sharing, the proportion of costs funded, the nature of the funding, any collaborative contributions, and the amount and duration of the grant. In addition, funding can influence what is researched, how, and by whom. The funding can also be recorded in different places and with different levels of connection to outputs. The many variations, and the lack of a clear divide between “unfunded” and funded research because internal funding can be implicit or unrecorded, greatly complicate assessing the value of funding quantitatively at scale. The dimensions listed here should nevertheless help funding evaluators to consider as many differences as possible and list the remainder as limitations. They also serve as suggested information to collect for those compiling funding datasets.


Introduction

Academic research grants account for billions of pounds in many countries and so the funders may naturally want to assess their value for money in the sense of financing desirable outcomes at a reasonable cost (Raftery et al., 2016 ). Since many of the benefits of research are long term and difficult to identify or quantify financially, it is common to benchmark against previous results or other funders to judge progress and efficiency. This is a complex task because academic funding has many small and large variations and is influenced by, and may influence, many aspects of the work and environment of the funded academics (e.g., Reale et al., 2017 ). The goal of this article is to support future analyses of the effectiveness or influence of grant funding by providing a typology of the important dimensions to be considered in evaluations (or otherwise acknowledged as limitations). The focus is on grant funding rather than block funding.

The ideal way to assess the value of a funding scheme would be a counterfactual analysis that showed its contribution by identifying what would have happened without the funding. Unfortunately, counterfactual analyses are usually impossible because of the large number of alternative funding sources. Similarly, comparisons between successful and unsuccessful bidders face major confounding factors, including groups that fail to win one grant winning another (Neufeld, 2016), and complex research projects attracting funding of different kinds from multiple sources (Langfeldt et al., 2015; Rigby, 2011). Even analyses with effective control groups, such as a study of funded vs. unfunded postdocs (Schneider & van Leeuwen, 2014), cannot separate the effect of the funding from the success of the grant selection process: were better projects funded, or did the funding or reviewer feedback improve the projects? Although qualitative analyses of individual projects help to explain what happened to the money and what it achieved, large scale analyses are sometimes needed to inform management decision making. For example: would a funder get more value for money from larger or smaller, longer or shorter, more specific or more general grants? For such analyses, many simplifying assumptions need to be made. The same is true for checks of the peer review process of research funders. For example, a funder might compute the average citation impact of publications produced by their grants and compare it to a reference set. This reference set might be the outputs from rejected applications or the outputs from a comparable funder. The selection of the reference set is crucial for any attempt to identify the added value of any funding, however defined. For example, comparing the work of grant winners with that of high-quality unsuccessful applicants (e.g., those that just failed to be funded) would be useful to detect the added value of the money rather than the success of the procedure to select winners, assuming that there is little difference in potential between winners and narrow losers (Van den Besselaar & Leydesdorff, 2009). Because of the need to make comparisons between groups of outputs based on the nature of their funding, it is important to know the major variations in academic research funding types.
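To make the reference-set comparison above concrete, the following minimal sketch (not from the article; all data values, variable names, and the bootstrap procedure are illustrative assumptions) compares the mean field-normalised citation score of outputs from grant winners with that of outputs from narrowly unsuccessful applicants.

```python
# Illustrative sketch: comparing the mean field-normalised citation score of
# outputs from funded projects against a reference set, such as outputs from
# narrowly rejected applicants. All figures below are hypothetical.
import random
import statistics

funded_scores = [1.8, 0.9, 2.4, 1.1, 0.7, 3.0, 1.5]     # funded outputs
reference_scores = [1.2, 0.8, 1.9, 0.6, 1.0, 1.4, 0.9]  # narrowly unsuccessful applicants

def mean_difference(a, b):
    return statistics.mean(a) - statistics.mean(b)

observed = mean_difference(funded_scores, reference_scores)

# Simple bootstrap interval for the difference in means.
random.seed(0)
boot = []
for _ in range(10_000):
    a = random.choices(funded_scores, k=len(funded_scores))
    b = random.choices(reference_scores, k=len(reference_scores))
    boot.append(mean_difference(a, b))
boot.sort()
low, high = boot[249], boot[9749]  # approximate 95% interval

print(f"Observed difference in mean normalised citation score: {observed:.2f}")
print(f"Bootstrap 95% interval: [{low:.2f}, {high:.2f}]")
```

In practice the choice of reference set, the field normalisation, and the uncertainty estimate would all need to be tailored to the funder and fields being analysed.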

The dimensions of funding analysed in previous evaluations can point to how the above issues have been tackled. Unfortunately, most evaluations of the effectiveness, influence, or products of research funding (however defined) have probably been private reports for or by research funders, but some are in the public domain. Two non-funder studies have analysed whether funding improves research in specific contexts: peer review scores for Scoliosis conference submissions (Roach et al., 2008), and the methods of randomised controlled trials in urogynecology (Kim et al., 2018). Another compared research funded by China with that funded by the EU (Wang et al., 2020). An interesting view on the effect of funding on research output suggests that a grant does not necessarily result in increased research output compared to participation in a grant competition (Ayoubi et al., 2019; Jonkers et al., 2017). Finally, a science-wide study of funding for journal articles from the UK suggested that funding was associated with higher quality research in at least some and possibly all fields (the last figure in: Thelwall et al., 2023).

From a different perspective, at least two studies have investigated whether academic funding has commercial value. The UK Medical Research Council (MRC) has analysed whether medical spinouts fared better if they were from teams that received MRC funding rather than from unsuccessful applicants, suggesting that funding helped spin-outs to realise commercial value from their health innovations (Annex A2.7 of: MRC, 2019 ). Also in the UK, firms participating in UK research council funded projects tended to grow faster afterwards compared to comparator firms (ERC, 2017 ).

Discussing the main variations in academic research funding types to inform analyses of the value of research funding is the purpose of the current article. Few prior studies seem to have introduced any systematic attempt to characterise the key dimensions of research funding, although some have listed several different types (e.g., four in: Garrett-Jones, 2000; three in: Paulson et al., 2011; nine in: Versleijen et al., 2007). The focus of the current paper is on grant-funded research conducted at least partly by people employed by an academic institution rather than by people researching as part of their job in a business, government, or other non-academic organisation. The latter are presumably usually funded by their employer, although they may sometimes conduct collaborative projects with academics or win academic research funding. The focus is also on research outputs, such as journal articles, books, patents, performances, or inventions, rather than research impacts or knowledge generation. Nevertheless, many of the options apply to the more general case. The list of dimensions relevant to evaluating the value of research funding has been constructed from a literature review of academic research about funding and insights from discussions with funders and analyses of funding records. The influence of funding on individual research projects is analysed, rather than systematic effects of funding, such as at the national level (e.g., for this, see: Sandström & Van den Besselaar, 2018; Van den Besselaar & Sandström, 2015). The next sections discuss dimensions of difference in the funding awarded, the influence of the funding on the research, and the way in which the funding is recorded.

Funding sources

There are many types of funders of academic research (Hu, 2009). An effort to distinguish between types of funding scheme, based on a detailed analysis of the Dutch government budget and the annual reports of the main research funders in the Netherlands, identified the following nine types of funding instrument (Versleijen et al., 2007); the remainder of this section gives a finer-grained breakdown of types.

Contract research (project—targeted—small scale)

Open competition (project—free—small scale)

Thematic competition (project—targeted—small scale)

Competition between consortia (project—targeted—large scale)

Mission oriented basic funding (basic—targeted—large scale)

Funding of infrastructure and equipment (basic—targeted—diverse)

Basic funding for universities and public research institutes (basic—free—large scale)

International funding of programs and institutes (basic, both, mainly large scale)

EU funding (which can be subdivided into the previous eight types)

The current paper is primarily concerned with all of these except the basic funding category, which includes the block grants that many universities receive for general research support. Block grants were originally uncompetitive but now may also be fully competitive, as in the UK where they depend on Research Excellence Framework scores, or partly competitive, as in the Netherlands, where they partly depend on performance-based parameters such as PhD completions (see also: Jonkers & Zacharewicz, 2016).

Many studies of the influence of research funding have focused on individual funders (Thelwall et al., 2016), and funding agencies’ (frequently unpublished) internal analyses presumably often compare between their own funding schemes, compare overall against a world benchmark, or check whether a funding scheme’s performance has changed over time (BHF, 2022). Public evaluations sometimes analyse individual funding schemes, particularly for large funders (e.g., Defazio et al., 2009). The source of funding for a project could be the employing academic institution, academic research funders, or other organisations that sometimes fund research. There are slightly different sets of possibilities for equipment and time funding.

Who funded the research project (type of funder)?

A researcher may be funded by their employer, a specialist research funding organisation (e.g., government-sponsored or non-profit) or an organisation that needs the research. Commercial funding seems likely to have different requirements and goals from academic funding (Kang & Motohashi, 2020 ), such as a closer focus on product or service development, different accounting rules, and confidentiality agreements. The source of funding is an important factor in funding analysis because funders have different selection criteria and methods to allocate and monitor funding. This is a non-exhaustive list.

Self-funded or completely unfunded (individual). Although the focus of this paper is on grant funding, this (and the item below) may be useful to record because it may partly underpin projects with other sources and may form parts of comparator sets (e.g., for the research of unfunded highly qualified applicants) in other contexts.

University employer. This includes funding reallocated from national competitive (e.g., performance-based research funding: Hicks, 2012) or non-competitive block research grants, as well as teaching income, investments, and other sources that are allocated for research in general rather than for equipment, time, or specific projects.

Other university (e.g., as a visiting researcher on a collaborative project).

National academic research funder (e.g., the UK’s Economic and Social Research Council: ESRC).

International academic research funder (e.g., European Union grants).

Government (contract, generally based on a tender and not from a pot of academic research funding)

Commercial (contract or research funding), sometimes called industry funding.

NGO (contract or research funding, e.g., Cancer Research charity). Philanthropic organisations not responsible to donors may have different motivations to charities, so it may be useful to separate the two sometimes.

Who funded the time needed for the research?

Research typically needs both people and equipment, and these two are sometimes supported separately. The funding for a researcher, if any, might be generic and implicit (it is part of their job to do research) or explicit in terms of a specified project that needs to be completed. Clinicians can have protected research time too: days that are reserved for research activities as part of their employment, including during advanced training (e.g., Elkbuli et al., 2020; Voss et al., 2021). For academics, research time is sometimes “borrowed” from teaching time (Bernardin, 1996; Olive, 2017). Time for a project may well be funded differently between team members, such as the lead researcher being institutionally supported but using a grant to hire a team of academic and support staff. Inter-institutional research may also have a separate source for each team. The following list covers a range of different common arrangements.

Independent researcher, own time (e.g., not employed by but emeritus or affiliated with a university).

University researcher, own time (e.g., holidays, evenings, weekends).

University, percentage of the working time of academic staff devoted to research. In some countries this is largely related to the balance of block funding versus project funding (Sandström & Van den Besselaar, 2018).

University, time borrowed from other activities (e.g., teaching, clinical duties, law practice).

Funder, generic research time funding (e.g., Gates chair of neuropsychology, long term career development funding for a general research programme).

University/Funder, specific time allocated for research programme (e.g., five years to develop cybersecurity research expertise).

University/Funder, employed for specific project (e.g., PhD student, postdoc supervised by member of staff).

University/Funder, specific time allocated for specific study (e.g., sabbatical to write a book).

Who funded the equipment or other non-human resources used in the research?

The resources needed for a research project might be funded as part of the project by the main funder, already available to the researcher (e.g., National Health Service equipment that an NHS researcher could expect to access), or separately funded and made available during the project (e.g., Richards, 2019). Here, “equipment” includes data or samples that are access-controlled as well as other resources unrelated to pay, such as travel. These types can be broken down as follows.

Researcher’s own equipment (e.g., a musician’s violin for performance-based research or composition; an archaeologist’s Land Rover to transport equipment to a dig).

University equipment, borrowed/repurposed (e.g., PC for teaching, unused library laptop).

University equipment, dual purpose (e.g., PC for teaching and research, violin for music teaching and research).

University/funder equipment for generic research (e.g., research group’s shared microbiology lab).

University/funder equipment for a research programme (e.g., GPU cluster to investigate deep learning).

University/funder equipment for specific project (e.g., PCs for researchers recruited for project; travel time).

University/funder equipment for single study (e.g., travel for interviews).

Of course, a funder may only support the loan or purchase of equipment on the understanding that the team will find other funding for research projects using it (e.g., “Funding was provided by the Water Research Commission [WRC]. The Covidence software was purchased by the Water Research fund”: Deglon et al., 2023 ). Getting large equipment working for subsequent research (e.g., a space telescope, a particle accelerator, a digitisation project) might also be the primary goal of a project.

How many funders contributed?

Although many projects are funded by a single source, some have multiple funders sharing the costs by agreement or by chance (Davies, 2016 ), and the following seem to be the logical possibilities for cost sharing.

Partially funded from one source, partly unfunded.

Partially funded from multiple sources, partly unfunded.

Fully funded from multiple sources.

Fully funded from a single source.

As an example of unplanned cost sharing, a researcher might have their post funded by one source and then subsequently bid for funding for equipment and support workers to run a large project. This project would then be part funded by the two sources, but not in a coordinated way. It seems likely that a project with a single adequate source of funding might be more efficient than a project with multiple sources that need to be coordinated. Conversely, a project with multiple funders may have passed through many different quality control steps or shown relevance to a range of different audiences. Those funded by multiple sources may also be less dependent on individual funders and therefore more able to autonomously follow their own research agenda, potentially leading to more innovative research.

How competitive was the funding allocation process?

Whilst government and charitable funding is often awarded on a competitive basis, the degree of competition (e.g., success rate) clearly varies between countries and funding calls and changes over time. In contrast, commercial funding may be gained without transparent competition (Kang & Motohashi, 2020 ), perhaps as part of ongoing work in an established collaboration or even due to a chance encounter. In between these, block research grants and prizes may be awarded for past achievements, so they are competitive, but the recipients are relatively free to spend on any type of research and do not need to write proposals (Franssen et al., 2018 ). Similarly, research centre grants may be won competitively but give the freedom to conduct a wide variety of studies over a long period. This gives the following three basic dimensions.

The success rate from the funding call (i.e., the percentage of initial applicants that were funded) OR

The success rate based on funding awarded for past performance (e.g., prize or competitive block grant, although this may be difficult to estimate) OR

The contract or other funding was allocated non-competitively (e.g., non-competitive block funding).

How was the funding decision made?

Who decides which researchers receive funding, and through which processes, is also relevant (Van den Besselaar & Horlings, 2011). This is perhaps one of the most important considerations for funders.

The procedure for grant awarding: who decided and how?

There is a lot of research into the relative merits of different selection criteria for grants, such as a recent project to assess whether randomisation could be helpful (Fang & Casadevall, 2016 ; researchonresearch.org/experimental-funder). Peer review, triage, and deliberative committees are common, but not universal, components (Meadmore et al., 2020 ) and sources of variation include whether non-academic stakeholders are included within peer review teams (Luo et al., 2021 ), whether one or two stage submissions are required (Gross & Bergstrom, 2019 ) and whether sandpits are used (Meadmore et al., 2020 ). Although each procedure may be unique in personnel and fine details, broad information about it would be particularly helpful in comparisons between funders or schemes.

What were the characteristics of the research team?

The characteristics of successful proposals or applicants are relevant to analyses of competitive calls (Grimpe, 2012), although there are too many to list individually. Some deserve attention here.

What are the characteristics of the research team behind the project or output (e.g., gender, age, career status, institution)?

What is the track record of the research team (e.g., citations, publications, awards, previous grants, service work)?

Gender bias is an important consideration and whether it plays a role is highly disputed in the literature. Recent findings suggest that there is gender bias in reviews, but not success rates (Bol et al., 2022 ; Van den Besselaar & Mom, 2021 ). Some funding schemes have team requirements (e.g., established vs. early career researcher grants) and many evaluate applicants’ track records. Applicants’ previous achievements may be critical to success for some calls, such as those for established researchers or funding for leadership, play a minor role in others, or be completely ignored (e.g., for double blind grant reviewing). In any case, research team characteristics may be important for evaluating the influence of the funding or the fairness of the selection procedure.

What were the funder’s goals?

Funding streams or sources often have goals that influence what type of research can be funded. Moreover, researchers can be expected to modify their aspirations to align with the funding stream. The funder may have different types of goal, from supporting aspects of the research process to supporting relevant projects or completing a specific task (e.g., Woodward & Clifton, 1994 ), to generating societal benefits (Fernández-del-Castillo et al., 2015 ).

A common distinction is between basic and applied research, and the category “strategic research” has also been used to capture basic research aiming at long term societal benefits (Sandström, 2009 ). The Frascati Manual uses Basic Research, Applied Research and Experimental Development instead (OECD, 2015 ), but this is more relevant for analyses that incorporate industrial research and development.

Research funding does not necessarily have the goal to fund research because some streams support network formation in the expectation that the network will access other resources to support studies (Aagaard et al., 2021). European Union COST (European Cooperation in Science and Technology) Actions are an example (cost.eu). Others may have indirect goals, such as capacity building, creating a strong national research base that helps industry or attracts international business research investment (Cooksey, 2006), or promoting a topic (e.g., educational research: El-Sawi et al., 2009). As a corollary to the last point, some topics may be of little interest to most funders, for example because they would mainly benefit marginalised communities (Woodson & Williams, 2020).

Since the early 2000s, many countries have also issued so-called career grants, which have become prestigious. At the European level, career grants started in 2009 with the European Research Council (ERC) grants. These grants have a career effect (Bloch et al., 2014; Danell & Hjerm, 2013; Schroder et al., 2021; Van den Besselaar & Sandström, 2015) but this dimension, and the longer-term effects of funding other than on specific outputs, is not considered here. A funding scheme may also have several of the following goals.

Basic research (e.g., the Malaysia Toray Science Foundation supports basic research by young scientists to boost national capacity: www.mtsf.org ).

Strategic research (e.g., the UK Natural Environment Research Council’s strategic research funding targets areas of important environmental concern, aiming at long term solutions: www.ukri.org/councils/nerc/).

Applied research (e.g., the Dutch NWO [Dutch Research Council] applied research fund to develop innovations supporting food security: www.nwo.nl/en/researchprogrammes/food-business-research ).

Technology transfer (i.e., applying research knowledge or skills to a non-research problem) or translational research.

Researcher development and training (including career grants).

Capacity building (e.g., to support research in resource-poor settings).

Collaboration formation (e.g., industry-academia, international, inter-university).

Research within a particular field.

Research with a particular application area (e.g., any research helping Alzheimer’s patients, including a ring-fenced proportion of funding within a broader call).

Tangible academic outputs (e.g., articles, books).

Tangible non-academic outputs (e.g., policy changes, medicine accreditation, patents, inventions).

Extent of the funding

The extent of funding of a project can vary substantially from a small percentage, such as for a single site visit, to 100%. A project might even make a surplus if it is allowed to keep any money left over, its equipment survives the project, or it generates successful intellectual property. The financial value of funding is clearly an important consideration because a cheaper project delivering similar outcomes to a more expensive one would have performed better. Nevertheless, grant size is often ignored in academic studies of the value of funding (e.g., Thelwall et al., 2023 ) because it is difficult to identify the amount and to divide it amongst grant outputs. This section covers four dimensions of the extent of a grant.

What proportion of the research was funded?

A research project might be fully funded, funded for the extras needed above what is already available, or deliberately partly funded (Comins, 2015). This last approach is sometimes called “cost sharing”. A grant awarded on the Full Economic Cost (FEC) model would pay for the time and resources used by the researchers as well as the administrative support and accommodation provided by their institution. The following seem to be the main possibilities.

Partly funded.

Fully funded, but on a partial FEC or sub-FEC cost-sharing model.

FEC plus surplus.

The Frascati Manual on collecting research and development statistics distinguishes between funding internal to a unit of analysis and external funding (OECD, 2015), but here the distinction is between explicit and implicit funding, with the latter being classed as “unfunded”.
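To illustrate the cost-sharing possibilities above, here is a minimal worked sketch of FEC-style arithmetic. The cost headings, figures, and the 80% funder contribution are assumptions for illustration only (an 80% FEC payment rate is mentioned later for UKRI grants); real FEC rules vary by funder and institution.

```python
# Illustrative sketch of Full Economic Cost (FEC) arithmetic; all figures invented.
directly_incurred = 120_000   # staff hired for the project, travel, consumables
directly_allocated = 40_000   # share of the investigator's existing salary
estates_costs = 25_000        # accommodation, laboratory space
indirect_costs = 55_000       # institutional administrative support

fec = directly_incurred + directly_allocated + estates_costs + indirect_costs
funder_share = 0.8            # a sub-FEC cost-sharing model (assumed rate)
award = fec * funder_share

print(f"Full Economic Cost: £{fec:,.0f}")
print(f"Funder pays (80% FEC): £{award:,.0f}")
print(f"Institution contributes: £{fec - award:,.0f}")
```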

How was the funding delivered?

Whilst a research grant would normally be financial, a project might be supported in kind by the loan or gift of equipment or time. For instance, agricultural research might be supported with access to relevant land or livestock (Tricarico et al., 2022 ). Here are three common approaches for delivering funding.

In kind—lending time or loaning/giving equipment or other resources.

Fixed amount of money.

A maximum amount of money, with actual spending justified by receipts.

How much funding did the project receive?

Project funding can be tiny, such as a few pounds for a trip or travel expenses, or enormous, such as for a particle accelerator. Grants of a few thousand pounds can also be common in some fields and for some funders (e.g., Gallo et al., 2014 ; Lyndon, 2018 ). In competitive processes, the funder normally indicates the grant size range that it is prepared to fund. The amount of funding for research has increased over time (Bloch & Sørensen, 2015 ).

The money awarded and/or claimed by the project.

How long was the funding for?

Funded projects can be short term, such as for a one-day event, or very long term, such as a 50-year nuclear fusion reactor programme. There seems to be a trend for longer term and larger amounts of funding, such as for centres of excellence that can manage multiple different lines of research (Hellström, 2018 ; OECD, 2014 ).

The intended or actual (e.g., due to costed or non-costed extensions) duration of the project.

Influence of the funding on the research project

A variety of aspects of the funding system were discussed in the previous sections, and this section and the next switch to the effects of funding on what research is conducted and how. Whilst some grant schemes explicitly try to direct research (e.g., funding calls to build national artificial intelligence research capacity), even open calls may have indirect influences on team formation, goals, and broader research directions. This section discusses three different ways in which funding can influence a research project.

Influence on what the applicant did

Whilst funding presumably has a decisive influence on whether a study occurs most of the time because of the expense of the equipment or effort (e.g., to secure ethical approval for medical studies: Jonker et al., 2011 ), there may be exceptions. For example, an analysis of unfunded medical research found that it was often hospital-based (Álvarez-Bornstein et al., 2019 ), suggesting that it was supported by employers. Presumably the researcher applying for funding would usually have done something else research-related if they did not win the award, such as conducting different studies or applying for other funding. The following seem to be the main dimensions of variation here.

No influence (the study would have gone ahead without the funding).

Improved existing study (e.g., more time to finish, more/better equipment, more collaborators, constructive ideas from the peer review process). An extreme example of the latter is the Medical Research Council’s Developmental Pathway Funding Scheme (DPFS), which has expert input and decision making throughout a project.

Made the study possible, replacing other research-related activities (e.g., a different type of investigation, supporting another project, PhD mentoring).

Made the study possible, replacing non-research activities (e.g., teaching, clinical practice).

Researchers may conduct unfunded studies if financing is not essential and they would like to choose their own goals (Edwards, 2022; Kayrooz et al., 2007), or if their research time can be subsidised by teaching revenue (Olive, 2017). Some types of research are also inherently cheaper than others, such as secondary data analysis (Vaduganathan et al., 2018) and reviews in medical fields, so may not need funding. At the other extreme, large funding sources may redirect the long-term goals of an entire research group (Jeon, 2019). In between these two, funding may improve the quality of a study that would have gone ahead anyway, such as by improving its methods, including the sample size or the range of analyses used (Froud et al., 2015). Alternatively, it may have changed a study without necessarily improving it, such as by incorporating funder-relevant goals, methods, or target groups. Scholars with topics that do not match the major funding sources may struggle to do research at all (Laudel, 2005).

Influence on research goals or methods

In addition to supporting the research, the influence of the source of funding can be minor or major from the perspective of the funded researcher. It seems likely that most funding requires some changes to what a self-funded researcher might otherwise do, if only to give reassurance that the proposed research will deliver tangible outputs (Serrano Velarde, 2018), or to fit specific funder requirements (Luukkonen & Thomas, 2016). Funding influence can perhaps be split into the following broad types, although they are necessarily imprecise, with considerable overlaps.

No influence (the applicant did not modify their research goals for the funder, or ‘relabelled’ their research goals to match the funding scheme).

Partial influence (the applicant modified their research goals for the funder).

Strong influence (the applicant developed new research goals for the funder, such as a recent call for non-AI researchers to retrain to adopt AI).

Full determination (the funder specified the project, such as a pharmaceutical industry contract to test a new vaccine).

Focusing on more substantial changes only, the funding has no influence if the academic did not need to consider funder-related factors when proposing their study, or could select a funder that fully aligned with their goals. On the other hand, the influence is substantial if the researcher changed their goals to fit the funder requirements (Currie-Alder, 2015; Tellmann, 2022). In between, a project’s goals may be tailored to a funder or funding requirements (Woodward & Clifton, 1994). An indirect way in which health-related funders often influence research is by requiring Patient and Public Involvement (PPI) at all levels of a project, including strategy development (e.g., Brett et al., 2014). Funding initiatives may aim to change researchers’ goals, such as to encourage the growth of a promising new field (Gläser et al., 2016). The wider funding environment may also effectively block some research types or topics if they are not in scope for most grants (Laudel & Gläser, 2014).

It seems likely that funding sources have the greatest influence on researchers’ goals in resource intensive areas, presumably including most science and health research, and especially those that routinely issue topic-focused calls (e.g., Laudel, 2006 ; Woelert et al., 2021 ). The perceived likelihood of receiving future funding may also influence research methods, such as by encouraging researchers to hoard resources (e.g., perform fewer laboratory experiments for a funded paper) when future access may be at risk (Laudel, 2023 ).

Influence on research team composition

The funder call may list eligibility requirements of various types. For example, the UK national funders specify that applicants must be predominantly UK academics. One common type of specification seems to be team size and composition since many funders (e.g., EU) specify or encourage collaborative projects. Funding may also encourage commercial participants or end user partnerships, which may affect team composition (e.g., Gaughan & Bozeman, 2002 ). Four different approaches may be delineated as follows.

No influence (the funder allows any team size).

Partial influence (the applicant chooses a team size to enhance their perceived success rate).

Funder parameters (the funder specifies parameters, such as a requirement for collaboration or partners from at least three EU countries, disciplinary composition or interdisciplinarity mandate).

Full determination (the funder specifies the team size, such as individual applicants only for career-related grants).

The influence of funders on research team composition is unlikely to be strict even if they fully determine grant applicant team sizes, because the funded researchers may choose to collaborate with others who use their own grants or are unfunded.

Influence of the funding on the research outputs

The above categories cover how research funding helps or influences research studies. This section focuses on what may change in the outputs of researchers or projects due to the receipt of funding. This is important to consider because research outputs are the most visible and countable outcomes of research projects, but they are not always necessary (e.g., funding for training or equipment) and different types can be encouraged. Four relevant dimensions of influence are discussed below.

Influence of funding on the applicant’s productivity

Funding can normally be expected to support the production of new outputs by an academic or team (Bloch et al., 2014; Danell & Hjerm, 2013), but this may be field dependent. In a study of the factors affecting productivity, DFG grants had a positive effect on the productivity of German political scientists (Habicht et al., 2021). However, in some cases funding may produce fewer tangible outputs because of the need to collaborate with end users or conduct activities of value to them (Hottenrott & Thorwarth, 2011), or if the funding is for long-term high-risk investigations. In areas where funding is inessential or where core/block funding provides some baseline capability, academics who choose not to apply for it can devote all their research time to research rather than grant writing, which may increase their productivity (Thyer, 2011). Although simplistic, the situation may therefore be characterised by the following three cases.

Reduction in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

No change in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

Increase in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

Funding can also have the long-term indirect effect of improving productivity through career benefits for those funded, such as making them more likely to attract collaborators and future funding (Defazio et al., 2009; Heyard & Hottenrott, 2021; Hussinger & Carvalho, 2022; Saygitov, 2018; Shimada et al., 2017). Writing grant applications may also provide an intensive learning process, which may help careers (Ayoubi et al., 2019; Jonkers et al., 2017).

Influence of funding on the applicant’s research output types

Funding may change what a researcher or research team produces. For example, a commercial component of grants may reduce the number of journal articles produced (Hottenrott & Lawson, 2017 ). Funder policies may have other influences on what a researcher does, such as conditions to disseminate the results in a certain way. This may include open access, providing accessible research data, or writing briefings for policy makers or the public. Whilst this may be considered good practice, some may be an additional overhead for the researcher. This may be summarised as follows, although the distinctions are qualitative.

No change in the nature of the outputs produced.

Partial change in the nature of the outputs produced.

Complete change in the nature of the outputs produced (e.g., patents instead of articles).

Influence of funding on the impact or quality of the research

Although cause-and-effect may be difficult to prove (e.g., Aagaard & Schneider, 2017 ), funding seems likely to change the citation, scholarly, societal, or other impacts of what a researcher or research team produces. For example, a reduction in citation impact may occur if the research becomes more application-focused and an increase may occur if the funding improves the quality of the research.

Most studies have focused on citation impact, finding that funded research, or research funded by a particular funder, tends to be more cited than other research (Álvarez-Bornstein et al., 2019 ; Gush et al., 2018 ; Heyard & Hottenrott, 2021 ; Rigby, 2011 ; Roshani et al., 2021 ; Thelwall et al., 2016 ; Yan et al., 2018 ), albeit with a few exceptions (Alkhawtani et al., 2020 ; Jowkar et al., 2011 ; Muscio et al., 2017 ). Moreover, unfunded work, or work that does not explicitly declare funding sources, in some fields can occasionally be highly cited (Sinha et al., 2016 ; Zhao, 2010 ). Logically, however, there are three broad types of influence on the overall impacts of the outputs produced, in addition to changes in the nature of the impacts.

Reduction in the citation/scholarly/societal/other impact of the outputs produced.

No change in the citation/scholarly/societal/other impact of the outputs produced.

Increase in the citation/scholarly/societal/other impact of the outputs produced.

The quality of the research produced is also important and could be assessed by a similar list to the one above. Research quality is normally thought to encompass three aspects: methodological rigour, innovativeness, and societal/scientific impact (Langfeldt et al., 2020). Considering quality overall therefore entails attempting to also assess the rigour and innovativeness of research. These seem likely to correlate positively with research impact and are difficult to assess on a large scale. Whilst rigour might be equated with passing journal peer review in some cases, innovation has no simple proxy indicator and is a particular concern for funding decisions (Franssen et al., 2018; Whitley et al., 2018).

The number and types of outcomes supported by a grant

When evaluating funding, it is important to consider the nature and number of the outputs and other outcomes produced specifically from it. Research projects often deliver multiple products, such as journal articles, scholarly talks, public-facing talks, and informational websites. There may also be more applied outputs, such as health policy changes, spin-out companies, and new drugs (Ismail et al., 2012 ). Since studies evaluating research funding often analyse only the citation impact of the journal articles produced (because of the ease of benchmarking), it is important to at least acknowledge that other outputs are also produced by researchers, even if it is difficult to take them into account in quantitative analyses.

The number and type of outcomes or outputs associated with a grant.

Of course, the non-citation impacts of research, such as policy changes or drug development, are notoriously difficult to track down even for individual projects (Boulding et al., 2020 ; Raftery et al., 2016 ), although there have been systematic attempts to identify policy citations (Szomszor & Adie, 2022 ). Thus, most types of impacts could not be analysed on a large scale and individual qualitative analyses are the only option for detailed impact analyses (Guthrie et al., 2015 ). In parallel with this, studies that compare articles funded by different sources should really consider the number of outputs per grant, since a grant producing more outputs would tend to be more successful. This approach does not seem to be used when average citation impact is compared, which is a limitation.
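As a minimal sketch of the per-grant accounting suggested above (hypothetical data; the grant identifiers and scores are invented), the following groups output-level citation scores by grant so that both the number of outputs and their average impact can be reported per grant rather than per article alone.

```python
# Illustrative sketch: normalising citation impact by the number of outputs
# attributed to each grant. All records below are invented examples.
from collections import defaultdict

# (grant_id, normalised citation score of one output)
outputs = [
    ("G1", 1.2), ("G1", 0.8), ("G1", 2.1),
    ("G2", 3.5),
    ("G3", 0.9), ("G3", 1.1),
]

per_grant = defaultdict(list)
for grant_id, score in outputs:
    per_grant[grant_id].append(score)

for grant_id, scores in per_grant.items():
    total = sum(scores)
    print(f"{grant_id}: {len(scores)} outputs, "
          f"mean score {total / len(scores):.2f}, total score {total:.2f}")
```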

A pragmatic issue for studies of grants: funding records

Finally, from a pragmatic data collection perspective, the funding for a research output can be recorded in different places, not all of which are public. A logical place to look for this information is within the output, although it may be recorded within databases maintained by the funder or employer. Related to this, it is not always clear how much of an output can be attributed to an acknowledged funding source. Whilst the location of a funding record presumably has no influence on the effectiveness of the funding, and so is not relevant to the goals of this article, it is included here as an important practical consideration that all studies of grant funding must cope with. Three relevant dimensions of this ostensibly simple issue are discussed below.

Where the funding is recorded inside the output

Funding can be acknowledged explicitly in journal articles (Aagaard et al., 2021 ) and other research outputs, whether to thank the funder or to record possible conflicts of interest. This information may be omitted because the authors forget or do not want to acknowledge some or all funders. Here is a list of common locations.

A Funding section.

An Acknowledgements section.

A Notes section.

A Declaration of Interests section.

The first footnote.

The last footnote.

The last paragraph of the conclusions.

Elsewhere in the output.

Not recorded in the output.

The compulsory funding declaration sections of an increasing minority of journals are the ideal place for funder information. These force corresponding authors to declare funding, although they may not be able to track down all sources for large, multiply-funded teams. This section is also probably the main place where a clear statement that a study was unfunded could be found. A Declaration of Interests section may also announce an absence of funding, although this cannot be inferred from the more usual statement that the authors have no competing interests. Funding statements in other places are unsystematic in the sense that it seems easy for an author to forget them. Nevertheless, field norms may dictate a specific location for funding information (e.g., always a first page footnote), and this seems likely to reduce the chance that this step is overlooked.

Where the funding is recorded outside the output

Large funders are likely to keep track of the outputs from their funded research, and research institutions may also keep systematic records (Clements et al., 2017). These may be completed by researchers or administrators and may be mandatory or optional. Funders usually also record descriptive qualitative information about funded projects that is not essential for typical large-scale analyses of funded research but is important to keep track of individual projects. It may also be used for large-scale descriptive analyses of grant portfolio changes over time. For example, the UKRI Gateway to Research information includes project title, abstract (lay and technical), value (amount awarded by UKRI—so usually 80% FEC), funded period (start and end), project status (whether still active), category (broad research grant type—e.g., Fellowship), grant reference, Principal Investigator (PI) (and all co-Investigators), research classifications (e.g., Health Research Classification System [HRCS] for MRC grants), research organisations involved (whether as proposed collaborators or funding recipients/partners), and, as the project progresses, any outputs reported via Researchfish.
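As an illustration of the kind of structured record such systems hold, here is a minimal sketch loosely mirroring the Gateway to Research fields listed above. The class name, field names, and example values are illustrative assumptions, not the actual Gateway to Research schema.

```python
# Illustrative sketch of a funding record for a compiled dataset; field names and
# the example values are assumptions, not a real schema or a real grant.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GrantRecord:
    grant_reference: str
    title: str
    technical_abstract: str
    lay_abstract: str
    value_awarded: float              # e.g., usually 80% of FEC for UKRI grants
    start: date
    end: date
    active: bool
    category: str                     # broad grant type, e.g., "Fellowship"
    principal_investigator: str
    co_investigators: list[str] = field(default_factory=list)
    research_classifications: list[str] = field(default_factory=list)  # e.g., HRCS codes
    research_organisations: list[str] = field(default_factory=list)
    reported_outputs: list[str] = field(default_factory=list)          # e.g., DOIs reported via Researchfish

example = GrantRecord(
    grant_reference="MR/X000000/1",   # invented reference for illustration
    title="Hypothetical example project",
    technical_abstract="...",
    lay_abstract="...",
    value_awarded=480_000.0,
    start=date(2022, 1, 1),
    end=date(2024, 12, 31),
    active=False,
    category="Research Grant",
    principal_investigator="A. Researcher",
)
```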

Academic employers may also track the outputs and funding of their staff in a current research information system or within locally designed databases or spreadsheets. Dimensions for Funders (Dimensions, 2022), for example, compiles funding information from a wide range of sources. Other public datasets include the UKRI Gateway to Research (extensive linkage to outputs), the Europe PMC grant lookup tool (good linkage to publications), the UKCDR COVID funding tracker (some linkage to publications via Europe PMC), the occasional UK Health Research Analysis (.net), and the European Commission CORDIS dataset. There are also some initiatives to comprehensively catalogue who funds what in particular domains, such as for UK non-commercial health research (UKCRC, 2020). Of course, there are ad-hoc funding statements too, such as in narrative claims of research impact in university websites or as part of evaluations (Grant & Hinrichs, 2015), but these may be difficult to harvest systematically. The following list includes a range of common locations.

In a university/employer public/private funding record.

In the academic’s public/private CV.

In the funder’s public/private record.

In a shared public/private research funding system used by the funder (e.g., Researchfish).

In publicity for the grant award (if output mentioned specifically enough).

In publicity for the output (e.g., a theatre programme for a performance output).

Elsewhere outside the output.

Not recorded outside the output.

From the perspective of third parties obtaining information about funding for outputs, if the employer and/or funder databases are private or public but difficult to search then online publicity about the outputs or funding may give an alternative record.

What is the connection between outputs and their declared funders?

Some outputs have a clear identifiable funder or set of funders. For example, a grant may be awarded to write a book and the book would therefore clearly be the primary output of the project. Similarly, a grant to conduct a specified randomised controlled trial seems likely to produce an article reporting the results; this, after passing review, would presumably be the primary research output even though an unpublished statistical summary of the results might suffice in some cases, especially when time is a factor. More loosely, a grant may specify a programme of research and promise several unspecified or vaguely specified outputs. In this case there may be outputs related to the project but not essential to it that might be classed as being part of it. It is also possible that outputs with little connection to a project are recorded as part of it for strategic reasons, such as to satisfy a project quota or gain a higher end-of-project grade. For example, Researchfish (Reddick et al., 2022 ) allows grant holders to select which publications on their CVs associate with each grant. There are also genuine mistakes in declaring funding (e.g., Elmunim et al., 2022 ). The situation may be summarised with the following logical categories.

Direct, clear connection (e.g., the study is a named primary output of a project).

Indirect, clear connection (e.g., the study is a writeup of a named project outcome).

Indirect, likely connection (e.g., the study is an output of someone working on the project and the output is on the project topic).

Tenuous connection (e.g., the study was completed before the project started, by personnel not associated with the project, or by project personnel on an unrelated topic).

No connection at all (such as due to a recording error; presumably rare).
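One way a dataset compiler could operationalise the categories above is to encode the connection strength alongside each output–grant pair, optionally with a weight for sensitivity analyses. The category encoding and the weights below are illustrative assumptions, not part of the article.

```python
# Illustrative sketch: encoding the strength of the connection between an output
# and a declared grant, so analyses can weight or filter outputs by attribution
# confidence. Category names follow the list above; weights are invented.
from enum import Enum

class OutputGrantLink(Enum):
    DIRECT_CLEAR = "direct, clear connection"        # named primary output of the project
    INDIRECT_CLEAR = "indirect, clear connection"    # writeup of a named project outcome
    INDIRECT_LIKELY = "indirect, likely connection"  # project staff, on the project topic
    TENUOUS = "tenuous connection"                   # wrong period, personnel, or topic
    NONE = "no connection"                           # e.g., a recording error

# Example attribution weights for sensitivity analyses (purely illustrative).
ATTRIBUTION_WEIGHT = {
    OutputGrantLink.DIRECT_CLEAR: 1.0,
    OutputGrantLink.INDIRECT_CLEAR: 0.8,
    OutputGrantLink.INDIRECT_LIKELY: 0.5,
    OutputGrantLink.TENUOUS: 0.1,
    OutputGrantLink.NONE: 0.0,
}

print(ATTRIBUTION_WEIGHT[OutputGrantLink.INDIRECT_LIKELY])  # 0.5
```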

Conclusions

This paper has described dimensions along which research funding differs between projects, with a focus on grant funding. This includes dimensions that are important to consider when analysing the value of research funding quantitatively. This list is incomplete, and not all aspects will be relevant to all future analyses of funding. Most qualitative and rarer dimensions of difference associated with funding are omitted, including the exact nature of any societal impact, support for researcher development, and support for wider social, ethical or scientific issues (e.g., promoting open science).

Organisations that compile funding datasets or otherwise record funding information may also consult the lists above when considering the records that are desirable to collect. Of course, the providers of large datasets, such as the Dimensions for Funders system, may often not be able to find this information for inclusion (not provided by funders) or not be able to adequately process it (e.g., simply too many variations in funding types, and no straightforward way to present this data to users).

When comparing funding sources or evaluating the impact of funding, it is important to consider as many dimensions as practically possible to ensure that comparisons are as fair as achievable, whilst acknowledging the remaining sources of variation as limitations. Even at the level of funding schemes, all have unique features, but since comparisons must be made for management purposes, it is important to consider these differences or at least to be aware of them.

Aagaard, K., Mongeon, P., Ramos-Vielba, I., & Thomas, D. A. (2021). Getting to the bottom of research funding: Acknowledging the complexity of funding dynamics. PLoS ONE, 16 (5), e0251488.

Aagaard, K., & Schneider, J. W. (2017). Some considerations about causes and effects in studies of performance-based research funding systems. Journal of Informetrics, 11 (3), 923–926.

Alkhawtani, R. H., Kwee, T. C., & Kwee, R. M. (2020). Funding of radiology research: Frequency and association with citation rate. American Journal of Roentgenology, 215 , 1286–1289.

Álvarez-Bornstein, B., Díaz-Faes, A. A., & Bordons, M. (2019). What characterises funded biomedical research? Evidence from a basic and a clinical domain. Scientometrics, 119 (2), 805–825.

Ayoubi, C., Pezzoni, M., & Visentin, F. (2019). The important thing is not to win, it is to take part: What if scientists benefit from participating in research grant competitions? Research Policy, 48 (1), 84–97.

Bernardin, H. J. (1996). Academic research under siege: Toward better operational definitions of scholarship to increase effectiveness, efficiencies and productivity. Human Resource Management Review, 6 (3), 207–229.

BHF. (2022). Research evaluation report—British Heart Foundation. Retrieved from https://www.bhf.org.uk/for-professionals/information-for-researchers/managing-your-grant/research-evaluation

Bloch, C., Graversen, E., & Pedersen, H. (2014). Competitive grants and their impact on career performance. Minerva, 52 , 77–96.

Bloch, C., & Sørensen, M. P. (2015). The size of research funding: Trends and implications. Science and Public Policy, 42 (1), 30–43.

Bol, T., de Vaan, T., & van de Rijt, A. (2022). Gender-equal funding rates conceal unequal evaluations. Research Policy, 51 (2022), 104399.

Boulding, H., Kamenetzky, A., Ghiga, I., Ioppolo, B., Herrera, F., Parks, S., & Hinrichs-Krapels, S. (2020). Mechanisms and pathways to impact in public health research: A preliminary analysis of research funded by the National Institute for health research (NIHR). BMC Medical Research Methodology, 20 (1), 1–20.

Brett, J. O., Staniszewska, S., Mockford, C., Herron-Marx, S., Hughes, J., Tysall, C., & Suleman, R. (2014). Mapping the impact of patient and public involvement on health and social care research: A systematic review. Health Expectations, 17 (5), 637–650.

Clements, A., Reddick, G., Viney, I., McCutcheon, V., Toon, J., Macandrew, H., & Wastl, J. (2017). Let’s Talk-Interoperability between university CRIS/IR and Researchfish: A case study from the UK. Procedia Computer Science, 106 , 220–231.

Comins, J. A. (2015). Data-mining the technological importance of government-funded patents in the private sector. Scientometrics, 104 (2), 425–435.

Cooksey, D. (2006). A review of UK health research funding. Retrieved from https://www.jla.nihr.ac.uk/news-and-publications/downloads/Annual-Report-2007-08/Annexe-8-2007-2008-CookseyReview.pdf

Currie-Alder, B. (2015). Research for the developing world: Public funding from Australia, Canada, and the UK . Oxford University Press.

Danell, R., & Hjerm, R. (2013). The importance of early academic career opportunities and gender differences in promotion rates. Research Evaluation, 22 , 2010–2214.

Davies, J. (2016). Collaborative funding for NCDs—A model of research funding. The Lancet Diabetes & Endocrinology, 4 (9), 725–727.

Defazio, D., Lockett, A., & Wright, M. (2009). Funding incentives, collaborative dynamics and scientific productivity: Evidence from the EU framework program. Research Policy, 38 (2), 293–305.

Deglon, M., Dalvie, M. A., & Abrams, A. (2023). The impact of extreme weather events on mental health in Africa: A scoping review of the evidence. Science of the Total Environment, 881 , 163420.

Dimensions. (2022). Dimensions for funders. Retrieved from https://www.dimensions.ai/who/government-and-funders/dimensions-for-funders/

Edwards, R. (2022). Why do academics do unfunded research? Resistance, compliance and identity in the UK neo-liberal university. Studies in Higher Education, 47 (4), 904–914.

Elkbuli, A., Zajd, S., Narvel, R. I., Dowd, B., Hai, S., Mckenney, M., & Boneva, D. (2020). Factors affecting research productivity of trauma surgeons. The American Surgeon, 86 (3), 273–279.

Elmunim, N. A., Abdullah, M., & Bahari, S. A. (2022). Correction: Elnumin et al. Evaluating the Performance of IRI-2016 Using GPS-TEC measurements over the equatorial region: Atmosphere 2021, 12, 1243. Atmosphere, 13 (5), 762.

El-Sawi, N. I., Sharp, G. F., & Gruppen, L. D. (2009). A small grants program improves medical education research productivity. Academic Medicine, 84 (10), S105–S108.

ERC. (2017). Assessing the business performance effects of receiving publicly-funded science, research and innovation grants. Retrieved from https://www.enterpriseresearch.ac.uk/publications/accessing-business-performance-effects-receiving-publicly-funded-science-research-innovation-grants-research-paper-no-61/

Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified lottery. Mbio, 7 (2), 10–1128.

Fernández-del-Castillo, E., Scardaci, D., & García, Á. L. (2015). The EGI federated cloud e-infrastructure. Procedia Computer Science, 68 , 196–205.

Franssen, T., Scholten, W., Hessels, L. K., & de Rijcke, S. (2018). The drawbacks of project funding for epistemic innovation: Comparing institutional affordances and constraints of different types of research funding. Minerva, 56 (1), 11–33.

Froud, R., Bjørkli, T., Bright, P., Rajendran, D., Buchbinder, R., Underwood, M., & Eldridge, S. (2015). The effect of journal impact factor, reporting conflicts, and reporting funding sources, on standardized effect sizes in back pain trials: A systematic review and meta-regression. BMC Musculoskeletal Disorders, 16 (1), 1–18.

Gallo, S. A., Carpenter, A. S., Irwin, D., McPartland, C. D., Travis, J., Reynders, S., & Glisson, S. R. (2014). The validation of peer review through research impact measures and the implications for funding strategies. PLoS ONE, 9 (9), e106474.

Garrett-Jones, S. (2000). International trends in evaluating university research outcomes: What lessons for Australia? Research Evaluation, 9 (2), 115–124.

Gaughan, M., & Bozeman, B. (2002). Using curriculum vitae to compare some impacts of NSF research grants with research center funding. Research Evaluation, 11 (1), 17–26.

Gläser, J., Laudel, G., & Lettkemann, E. (2016). Hidden in plain sight: The impact of generic governance on the emergence of research fields. The local configuration of new research fields: On regional and national diversity, 25–43.

Grant, J., & Hinrichs, S. (2015). The nature, scale and beneficiaries of research impact: An initial analysis of the Research Excellence Framework (REF) 2014 impact case studies. Retrieved from https://kclpure.kcl.ac.uk/portal/files/35271762/Analysis_of_REF_impact.pdf

Grimpe, C. (2012). Extramural research grants and scientists’ funding strategies: Beggars cannot be choosers? Research Policy, 41 (8), 1448–1460.

Gross, K., & Bergstrom, C. T. (2019). Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology, 17 (1), e3000065.

Gush, J., Jaffe, A., Larsen, V., & Laws, A. (2018). The effect of public funding on research output: The New Zealand Marsden Fund. New Zealand Economic Papers, 52 (2), 227–248.

Guthrie, S., Bienkowska-Gibbs, T., Manville, C., Pollitt, A., Kirtley, A., & Wooding, S. (2015). The impact of the national institute for health research health technology assessment programme, 2003–13: A multimethod evaluation. Health Technology Assessment, 19 (67), 1–291.

Habicht, I. M., Lutter, M., & Schröder, M. (2021). How human capital, universities of excellence, third party funding, mobility and gender explain productivity in German political science. Scientometrics, 126 , 9649–9675.

Hellström, T. (2018). Centres of excellence and capacity building: From strategy to impact. Science and Public Policy, 45 (4), 543–552.

Heyard, R., & Hottenrott, H. (2021). The value of research funding for knowledge creation and dissemination: A study of SNSF research grants. Humanities and Social Sciences Communications, 8 (1), 1–16.

Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41 (2), 251–261.

Hottenrott, H., & Lawson, C. (2017). Fishing for complementarities: Research grants and research productivity. International Journal of Industrial Organization, 51 (1), 1–38.

Hottenrott, H., & Thorwarth, S. (2011). Industry funding of university research and scientific productivity. Kyklos, 64 (4), 534–555.

Hu, M. C. (2009). Developing entrepreneurial universities in Taiwan: The effects of research funding sources. Science, Technology and Society, 14 (1), 35–57.

Hussinger, K., & Carvalho, J. N. (2022). The long-term effect of research grants on the scientific output of university professors. Industry and Innovation, 29 (4), 463–487.

Ismail, S., Tiessen, J., & Wooding, S. (2012). Strengthening research portfolio evaluation at the medical research council: Developing a survey for the collection of information about research outputs. Rand Health Quarterly , 1 (4). Retrieved from https://www.rand.org/pubs/technical_reports/TR743.html

Jeon, J. (2019). Invisibilizing politics: Accepting and legitimating ignorance in environmental sciences. Social Studies of Science, 49 (6), 839–862.

Jonker, L., Cox, D., & Marshall, G. (2011). Considerations, clues and challenges: Gaining ethical and trust research approval when using the NHS as a research setting. Radiography, 17 (3), 260–264.

Jonkers, K., & Zacharewicz, T. (2016). Research performance based funding systems: A comparative assessment. European Commission. Retrieved from https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/research-performance-based-funding-systems-comparative-assessment

Jonkers, K., Fako P., Isella, L., Zacharewicz, T., Sandstrom, U., & Van den Besselaar, P. (2017). A comparative analysis of the publication behaviour of MSCA fellows. Proceedings STI conference . Retrieved from https://www.researchgate.net/profile/Ulf-Sandstroem-2/publication/319547178_A_comparative_analysis_of_the_publication_behaviour_of_MSCA_fellows/links/59b2ae00458515a5b48d133f/A-comparative-analysis-of-the-publication-behaviour-of-MSCA-fellows.pdf

Jowkar, A., Didegah, F., & Gazni, A. (2011). The effect of funding on academic research impact: A case study of Iranian publications. Aslib Proceedings, 63 (6), 593–602.

Kang, B., & Motohashi, K. (2020). Academic contribution to industrial innovation by funding type. Scientometrics, 124 (1), 169–193.

Kayrooz, C., Åkerlind, G. S., & Tight, M. (Eds.). (2007). Autonomy in social science research, volume 4: The View from United Kingdom and Australian Universities . Emerald Group Publishing Limited.

Kim, K. S., Chung, J. H., Jo, J. K., Kim, J. H., Kim, S., Cho, J. M., & Lee, S. W. (2018). Quality of randomized controlled trials published in the international urogynecology journal 2007–2016. International Urogynecology Journal, 29 (7), 1011–1017.

Langfeldt, L., Bloch, C. W., & Sivertsen, G. (2015). Options and limitations in measuring the impact of research grants—Evidence from Denmark and Norway. Research Evaluation, 24 (3), 256–270.

Langfeldt, L., Nedeva, M., Sörlin, S., & Thomas, D. A. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58 (1), 115–137.

Laudel, G. (2005). Is external research funding a valid indicator for research performance? Research Evaluation, 14 (1), 27–34.

Laudel, G. (2006). The art of getting funded: How scientists adapt to their funding conditions. Science and Public Policy, 33 (7), 489–504.

Laudel, G. (2023). Researchers’ responses to their funding situation. In: B. Lepori & B. Jongbloed (Eds.), Handbook of public funding of research (pp. 261–278).

Laudel, G., & Gläser, J. (2014). Beyond breakthrough research: Epistemic properties of research and their consequences for research funding. Research Policy, 43 (7), 1204–1216.

Luo, J., Ma, L., & Shankar, K. (2021). Does the inclusion of non-academic reviewers make any difference for grant impact panels? Science and Public Policy, 48 (6), 763–775.

Lutter, M., Habicht, I. M., & Schröder, M. (2022). Gender differences in the determinants of becoming a professor in Germany: An event history analysis of academic psychologists from 1980 to 2019. Research Policy, 51 , 104506.

Luukkonen, T., & Thomas, D. A. (2016). The ‘negotiated space’ of university researchers’ pursuit of a research agenda. Minerva, 54 (1), 99–127.

Lyndon, A. R. (2018). Influence of the FSBI small research grants scheme: An analysis and appraisal. Journal of Fish Biology, 92 (3), 846–850.

Meadmore, K., Fackrell, K., Recio-Saucedo, A., Bull, A., Fraser, S. D., & Blatch-Jones, A. (2020). Decision-making approaches used by UK and international health funding organisations for allocating research funds: A survey of current practice. PLoS ONE, 15 (11), e0239757.

MRC. (2019). MRC 10 year translational research evaluation report 2008 to 2018. Retrieved from https://www.ukri.org/publications/mrc-translational-research-evaluation-report/

Muscio, A., Ramaciotti, L., & Rizzo, U. (2017). The complex relationship between academic engagement and research output: Evidence from Italy. Science and Public Policy, 44 (2), 235–245.

Neufeld, J. (2016). Determining effects of individual research grants on publication output and impact: The case of the Emmy Noether Programme (German Research Foundation). Research Evaluation, 25 (1), 50–61.

OECD. (2014). Promoting research excellence: new approaches to funding. OECD. Retrieved from https://www.oecd-ilibrary.org/science-and-technology/promoting-research-excellence_9789264207462-en

OECD. (2015). Frascati manual 2015. Retrieved from https://www.oecd.org/innovation/frascati-manual-2015-9789264239012-en.htm

Olive, V. (2017). How much is too much? Cross-subsidies from teaching to research in British Universities . Higher Education Policy Institute.

Paulson, K., Saeed, M., Mills, J., Cuvelier, G. D., Kumar, R., Raymond, C., & Seftel, M. D. (2011). Publication bias is present in blood and marrow transplantation: An analysis of abstracts at an international meeting. Blood, the Journal of the American Society of Hematology, 118 (25), 6698–6701.

Raftery, J., Hanley, S., Greenhalgh, T., Glover, M., & Blatch-Jones, A. (2016). Models and applications for measuring the impact of health research: Update of a systematic review for the health technology assessment programme. Health Technology Assessment, 20 (76), 1–254. https://doi.org/10.3310/hta20760

Reale, E., Lepori, B., & Scherngell, T. (2017). Analysis of national public research funding-pref. JRC-European Commission. Retrieved from https://core.ac.uk/download/pdf/93512415.pdf

Reddick, G., Malkov, D., Sherbon, B., & Grant, J. (2022). Understanding the funding characteristics of research impact: A proof-of-concept study linking REF 2014 impact case studies with Researchfish grant agreements. F1000Research, 10 , 1291.

Richards, H. (2019). Equipment grants: It’s all in the details. Journal of Biomolecular Techniques: JBT, 30 (Suppl), S49.

Rigby, J. (2011). Systematic grant and funding body acknowledgement data for publications: New dimensions and new controversies for research policy and evaluation. Research Evaluation, 20 (5), 365–375.

Roach, J. W., Skaggs, D. L., Sponseller, P. D., & MacLeod, L. M. (2008). Is research presented at the scoliosis research society annual meeting influenced by industry funding? Spine, 33 (20), 2208–2212.

Roshani, S., Bagherylooieh, M. R., Mosleh, M., & Coccia, M. (2021). What is the relationship between research funding and citation-based performance? A comparative analysis between critical disciplines. Scientometrics, 126 (9), 7859–7874.

Sandström, U. (2009). Research quality and diversity of funding: A model for relating research money to output of research. Scientometrics, 79 (2), 341–349.

Sandström, U., & Van den Besselaar, P. (2018). Funding, evaluation, and the performance of national research systems. Journal of Informetrics, 12 , 365–384.

Saygitov, R. T. (2018). The impact of grant funding on the publication activity of awarded applicants: A systematic review of comparative studies and meta-analytical estimates. Biorxiv , 354662.

Schneider, J. W., & van Leeuwen, T. N. (2014). Analysing robustness and uncertainty levels of bibliometric performance statistics supporting science policy: A case study evaluating Danish postdoctoral funding. Research Evaluation, 23 (4), 285–297.

Schroder, M., Lutter, M., & Habicht, I. M. (2021). Publishing, signalling, social capital, and gender: Determinants of becoming a tenured professor in German political science. PLoS ONE, 16 (1), e0243514.

Serrano Velarde, K. (2018). The way we ask for money… The emergence and institutionalization of grant writing practices in academia. Minerva, 56 (1), 85–107.

Shimada, Y. A., Tsukada, N., & Suzuki, J. (2017). Promoting diversity in science in Japan through mission-oriented research grants. Scientometrics, 110 (3), 1415–1435.

Sinha, Y., Iqbal, F. M., Spence, J. N., & Richard, B. (2016). A bibliometric analysis of the 100 most-cited articles in rhinoplasty. Plastic and Reconstructive Surgery Global Open, 4 (7), e820. https://doi.org/10.1097/GOX.0000000000000834

Szomszor, M., & Adie, E. (2022). Overton: A bibliometric database of policy document citations. arXiv preprint arXiv:2201.07643 .

Tellmann, S. M. (2022). The societal territory of academic disciplines: How disciplines matter to society. Minerva, 60 (2), 159–179.

Thelwall, M., Kousha, K., Abdoli, M., Stuart, E., Makita, M., Font-Julián, C. I., Wilson, P., & Levitt, J. (2023). Is research funding always beneficial? A cross-disciplinary analysis of UK research 2014–20. Quantitative Science Studies, 4 (2), 501–534. https://doi.org/10.1162/qss_a_00254

Thelwall, M., Kousha, K., Dinsmore, A., & Dolby, K. (2016). Alternative metric indicators for funding scheme evaluations. Aslib Journal of Information Management, 68 (1), 2–18. https://doi.org/10.1108/AJIM-09-2015-0146

Thyer, B. A. (2011). Harmful effects of federal research grants. Social Work Research, 35 (1), 3–7.

Tricarico, J. M., de Haas, Y., Hristov, A. N., Kebreab, E., Kurt, T., Mitloehner, F., & Pitta, D. (2022). Symposium review: Development of a funding program to support research on enteric methane mitigation from ruminants. Journal of Dairy Science, 105 , 8535–8542.

UKCRC. (2020). UK health research analysis 2018. Retrieved from https://hrcsonline.net/reports/analysis-reports/uk-health-research-analysis-2018/

Vaduganathan, M., Nagarur, A., Qamar, A., Patel, R. B., Navar, A. M., Peterson, E. D., & Butler, J. (2018). Availability and use of shared data from cardiometabolic clinical trials. Circulation, 137 (9), 938–947.

Van den Besselaar, P., & Horlings, E. (2011). Focus en massa in het wetenschappelijk onderzoek. de Nederlandse onderzoeksportfolio in internationaal perspectief. (In Dutch : Focus and mass in research: The Dutch research portfolio from an international perspective ). Den Haag, Rathenau Instituut.

Van den Besselaar, P. & Mom, C. (2021). Gender bias in grant allocation, a mixed picture . Preprint.

Van den Besselaar, P., & Leydesdorff, L. (2009). Past performance, peer review, and project selection: A case study in the social and behavioral sciences. Research Evaluation, 18 (4), 273–288.

Van den Besselaar, P., & Sandström, U. (2015). Early career grants, performance and careers; a study of predictive validity in grant decisions. Journal of Informetrics, 9 , 826–838.

Versleijen, A., van der Meulen, B., van Steen, J., Kloprogge, P., Braam, R., Mamphuis, R., & van den Besselaar, P. (2007). Dertig jaar onderzoeksfinanciering—trends, beleid en implicaties. (In Dutch: Thirty years research funding in the Netherlands—1975–2005). Den Haag: Rathenau Instituut.

Voss, A., Andreß, B., Pauzenberger, L., Herbst, E., Pogorzelski, J., & John, D. (2021). Research productivity during orthopedic surgery residency correlates with pre-planned and protected research time: A survey of German-speaking countries. Knee Surgery, Sports Traumatology, Arthroscopy, 29 , 292–299.

Wang, L., Wang, X., Piro, F. N., & Philipsen, N. J. (2020). The effect of competitive public funding on scientific output: A comparison between China and the EU. Research Evaluation, 29 (4), 418–429.

Whitley, R., Gläser, J., & Laudel, G. (2018). The impact of changing funding and authority relationships on scientific innovations. Minerva, 56 , 109–134.

Woelert, P., Lewis, J. M., & Le, A. T. (2021). Formally alive yet practically complex: An exploration of academics’ perceptions of their autonomy as researchers. Higher Education Policy, 34 , 1049–1068.

Woodson, T. S., & Williams, L. D. (2020). Stronger together: Inclusive innovation and undone science frameworks in the Global South. Third World Quarterly, 41 (11), 1957–1972.

Woodward, D. K., & Clifton, G. D. (1994). Development of a successful research grant application. American Journal of Health-System Pharmacy, 51 (6), 813–822.

Yan, E., Wu, C., & Song, M. (2018). The funding factor: A cross-disciplinary examination of the association between research funding and citation impact. Scientometrics, 115 (1), 369–384.

Zhao, D. (2010). Characteristics and impact of grant-funded research: A case study of the library and information science field. Scientometrics, 84 (2), 293–306.

Funding

No funding was received for conducting this study.

Author information

Authors and Affiliations

Statistical Cybermetrics and Research Evaluation Group, University of Wolverhampton, Wolverhampton, UK

Mike Thelwall

Information School, University of Sheffield, Sheffield, UK

MRC Secondee, Evaluation and Analysis Team, Medical Research Council, London, UK

Subreena Simrick

Evaluation and Analysis Team, Medical Research Council, London, UK

Ian Viney

Department of Organization Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

Peter Van den Besselaar

German Centre for Higher Education Research and Science Studies (DZHW), Berlin, Germany

Corresponding author

Correspondence to Mike Thelwall .

Ethics declarations

Competing interests

The first and fourth authors are members of the Distinguished Reviewers Board of Scientometrics. The second and third authors work for research funders.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Thelwall, M., Simrick, S., Viney, I. et al. What is research funding, how does it influence research, and how is it recorded? Key dimensions of variation. Scientometrics 128 , 6085–6106 (2023). https://doi.org/10.1007/s11192-023-04836-w

Received : 12 February 2023

Accepted : 05 September 2023

Published : 16 September 2023

Issue Date : November 2023

DOI : https://doi.org/10.1007/s11192-023-04836-w


Keywords

  • Research funding
  • Academic research funding
  • Research funding typology
  • Funding effects

How to Write Limitations of the Study (with examples)

This blog emphasizes the importance of recognizing and effectively writing about limitations in research. It discusses the types of limitations, their significance, and provides guidelines for writing about them, highlighting their role in advancing scholarly research.

Updated on August 24, 2023

No matter how well thought out, every research endeavor encounters challenges. There is simply no way to predict all possible variances throughout the process.

These uncharted boundaries and abrupt constraints are known as limitations in research . Identifying and acknowledging limitations is crucial for conducting rigorous studies. Limitations provide context and shed light on gaps in the prevailing inquiry and literature.

This article explores the importance of recognizing limitations and discusses how to write them effectively. By interpreting limitations in research and considering prevalent examples, we aim to reframe the perception from shameful mistakes to respectable revelations.

What are limitations in research?

In the clearest terms, research limitations are the practical or theoretical shortcomings of a study that are often outside of the researcher’s control . While these weaknesses limit the generalizability of a study’s conclusions, they also present a foundation for future research.

Sometimes limitations arise from tangible circumstances like time and funding constraints, or equipment and participant availability. Other times the rationale is more obscure and buried within the research design. Common types of limitations and their ramifications include:

  • Theoretical: limits the scope, depth, or applicability of a study.
  • Methodological: limits the quality, quantity, or diversity of the data.
  • Empirical: limits the representativeness, validity, or reliability of the data.
  • Analytical: limits the accuracy, completeness, or significance of the findings.
  • Ethical: limits the access, consent, or confidentiality of the data.

Regardless of how, when, or why they arise, limitations are a natural part of the research process and should never be ignored . Like all other aspects, they are vital in their own purpose.

Why is identifying limitations important?

Whether to seek acceptance or avoid struggle, humans often instinctively hide flaws and mistakes. Merging this thought process into research by attempting to hide limitations, however, is a bad idea. It has the potential to negate the validity of outcomes and damage the reputation of scholars.

By identifying and addressing limitations throughout a project, researchers strengthen their arguments and curtail the chance of peer censure based on overlooked mistakes. Pointing out these flaws shows an understanding of variable limits and a scrupulous research process.

Showing awareness of and taking responsibility for a project’s boundaries and challenges validates the integrity and transparency of a researcher. It further demonstrates that the researchers understand the applicable literature and have thoroughly evaluated their chosen research methods.

Presenting limitations also benefits the readers by providing context for research findings. It guides them to interpret the project’s conclusions only within the scope of very specific conditions. By allowing for an appropriate generalization of the findings that is accurately confined by research boundaries and is not too broad, limitations boost a study’s credibility .

Limitations are true assets to the research process. They highlight opportunities for future research. When researchers identify the limitations of their particular approach to a study question, they enable precise transferability and improve chances for reproducibility. 

Simply stating a project’s limitations is not adequate for spurring further research, though. To spark the interest of other researchers, these acknowledgements must come with thorough explanations regarding how the limitations affected the current study and how they can potentially be overcome with amended methods.

How to write limitations

Typically, the information about a study’s limitations is situated either at the beginning of the discussion section, to provide context for readers, or at the conclusion of the discussion section, to acknowledge the need for further research. The exact placement, however, varies depending upon the target journal or publisher’s guidelines.

Don’t hide your limitations

It is also important not to bury a limitation in the body of the paper unless it has a unique connection to a topic in that section. Even then, it should be reiterated with the other limitations or at the conclusion of the discussion section. Wherever it is included in the manuscript, ensure that the limitations section is prominently positioned and clearly introduced.

While maintaining transparency by disclosing limitations means taking a comprehensive approach, it is not necessary to discuss everything that could conceivably have gone wrong during the study. If an issue falls outside the aims stated in the introduction, it need not be treated as a limitation of the research. Take the term ‘limitations’ literally and ask, “Did this significantly change or constrain the possible outcomes?” Then classify the occurrence either as a limitation to include in the current manuscript or as an idea to note for future projects.

Writing limitations

Once the limitations are concretely identified and it is decided where they will be included in the paper, researchers are ready for the writing task. Including only what is pertinent, keeping explanations detailed but concise, and employing the following guidelines is key for crafting valuable limitations:

1) Identify and describe the limitations : Clearly introduce the limitation by classifying its form and specifying its origin. For example:

  • An unintentional bias encountered during data collection
  • An intentional use of unplanned post-hoc data analysis

2) Explain the implications : Describe how the limitation potentially influences the study’s findings and how the validity and generalizability are subsequently impacted. Provide examples and evidence to support claims of the limitations’ effects without making excuses or exaggerating their impact. Overall, be transparent and objective in presenting the limitations, without undermining the significance of the research. 

3) Provide alternative approaches for future studies : Offer specific suggestions for potential improvements or avenues for further investigation. Demonstrate a proactive approach by encouraging future research that addresses the identified gaps and, therefore, expands the knowledge base.

Whether presenting limitations as an individual section within the manuscript or as a subtopic in the discussion area, authors should use clear headings and straightforward language to facilitate readability. There is no need to complicate limitations with jargon, computations, or complex datasets.

Examples of common limitations

Limitations are generally grouped into two categories , methodology and research process .

Methodology limitations

Methodology may include limitations due to:

  • Sample size
  • Lack of available or reliable data
  • Lack of prior research studies on the topic
  • Measure used to collect the data
  • Self-reported data

Example of a methodology limitation:

The researcher is addressing how the large sample size requires a reassessment of the measures used to collect and analyze the data.

Research process limitations

Limitations during the research process may arise from:

  • Access to information
  • Longitudinal effects
  • Cultural and other biases
  • Language fluency
  • Time constraints

Example of a research process limitation:

The author is pointing out that the model’s estimates are based on potentially biased observational studies.

Final thoughts

Successfully proving theories and touting great achievements are only two very narrow goals of scholarly research. The true passion and greatest efforts of researchers come more in the form of confronting assumptions and exploring the obscure.

In many ways, recognizing and sharing the limitations of a research study both allows for and encourages this type of discovery that continuously pushes research forward. By using limitations to provide a transparent account of the project's boundaries and to contextualize the findings, researchers pave the way for even more robust and impactful research in the future.

Charla Viera, MS


Research funding limitations (from class: Global Studies)

Research funding limitations refer to the constraints and challenges faced by researchers in obtaining adequate financial resources to conduct their studies. These limitations can significantly impact the quality, scope, and outcomes of research initiatives, especially in addressing critical global health issues and challenges where funding is essential for innovation and implementation.

5 Must Know Facts For Your Next Test

  • Funding for research often comes from governmental bodies, private foundations, and non-profit organizations, but competition for these resources can be intense.
  • Limited funding can lead to cuts in research projects, resulting in fewer studies being conducted on pressing global health issues such as infectious diseases or mental health.
  • Research funding limitations can create disparities in the quality of research conducted across different regions, with wealthier countries often receiving more support than low- and middle-income countries.
  • Many researchers rely on short-term grants, which can hinder long-term studies that are crucial for understanding complex health issues over time.
  • The rise of innovative funding mechanisms like crowdfunding is becoming more common as traditional sources of funding become increasingly competitive and limited.

Review Questions

  • Research funding limitations can severely hinder scientists' efforts to tackle global health challenges by restricting the number of studies that can be conducted and limiting the resources available for those that do proceed. When funding is insufficient, researchers may have to prioritize certain areas over others, potentially leaving significant health issues underexplored. This can slow down the development of new treatments or public health interventions that are desperately needed in many parts of the world.
  • Disparities in research funding lead to uneven health outcomes globally, as wealthier countries often attract more investment for health research compared to lower-income nations. This unequal distribution means that critical local health issues in poorer regions may go unaddressed due to a lack of funding. Furthermore, when high-income countries prioritize their own health challenges over global issues, it can exacerbate existing inequalities in healthcare access and outcomes worldwide.
  • To address research funding limitations effectively, a multi-faceted approach is necessary. Solutions could include increasing investment from both public and private sectors in underfunded areas, fostering international collaborations that pool resources and expertise, and leveraging alternative funding methods such as crowdfunding. Additionally, advocating for policy changes that prioritize public health research could help ensure that crucial areas receive sustained financial support, ultimately leading to improved health outcomes globally.

Related terms

Grant : A sum of money given by an organization, often a government or foundation, to support a specific research project.

Crowdfunding : A method of raising funds for research or projects by collecting small amounts of money from a large number of people, typically via the internet.

Public Health Investment : Financial resources allocated to initiatives and programs aimed at improving population health outcomes and addressing health disparities.

  • Open access
  • Published: 18 August 2021

Fundamental challenges in assessing the impact of research infrastructure

  • Sana Zakaria 1 ,
  • Jonathan Grant 2 &
  • Jane Luff 1  

Health Research Policy and Systems, volume 19, Article number: 119 (2021)

4422 Accesses

9 Citations

16 Altmetric

Metrics details

Clinical research infrastructure is one of the unsung heroes of the scientific response to the current COVID-19 pandemic. The extensive, long-term funding into research support structures, skilled people, and technology allowed the United Kingdom research response to move off the starting blocks at pace by utilizing pre-existing platforms. The increasing focus from funders on evaluating the outcomes and impact of research infrastructure investment requires both a reframing and progression of the current models in order to address the contribution of the underlying support infrastructure. The majority of current evaluation/outcome models focus on a “pipeline” approach using a methodology which follows the traditional research funding route with the addition of quantitative metrics. These models fail to embrace the complexity caused by the interplay of previous investment, the coalescing of project outputs from different funders, the underlying infrastructure investment, and the parallel development across different parts of the system. Research infrastructure is the underpinning foundation of a project-driven research system and requires long-term, sustained funding and capital investment to maintain scientific and technological expertise. Therefore, the short-term focus on quantitative metrics that are easy to collect and interpret and that can be assessed in a roughly 5-year funding cycle needs to be addressed. The significant level of investment in research infrastructure necessitates investment to develop bespoke methodologies that develop fit-for-purpose, longer-term/continual approach(es) to evaluation. Real-world research should reflect real-world evaluation and allow for the accrual of a narrative of value indicators that build a picture of the contribution of infrastructure to research outcomes. The linear approach is not fit for purpose, the research endeavour is a complex, twisted road, and the evaluation approach needs to embrace this complexity through the development of realist approaches and the rapidly evolving data ecosystem. This paper sets out methodological challenges and considers the need to develop bespoke methodological approaches to allow a richer assessment of impact, contribution, attribution, and evaluation of research infrastructure. This paper is the beginning of a conversation that invites the community to “take up the mantle” and tackle the complexity of real-world research translation and evaluation.

Introduction

The scientific response to the COVID-19 pandemic has been unstinting. Within a matter of days, researchers around the world—in public and private settings—mobilized to sequence SARS-CoV-2 (the virus that causes COVID-19) [ 1 ], began the development of vaccines [ 2 ], tested the use of various steroids to improve outcomes [ 3 ], and developed citizen science networks for population surveillance [ 4 ]. At the time of writing, some 15 months after the virus emerged in China [ 5 ], a variety of vaccines using different technologies are being used to protect populations from the acute, and often fatal, respiratory disease COVID-19. This is an extraordinarily fast scientific development, given that historically it takes about 17 years for research to translate from bench to bedside. The reason for this, as discussed by Hanney et al. (2020), is that given the serious public health emergency, the classic “pipeline” or programmatic model of linear innovation was abandoned, in favour of a faster but most likely more expensive approach of parallel working where various activities were undertaken at the same time, including the manufacture of vaccines before their safety and efficacy were proven [ 6 ]. These multiple strands of activities were supported by underlying research infrastructures (RI)—or “platforms”, as we refer to them in this paper. As explored in this paper, one of the main reasons that vaccine development was so quick was because, critically, a number of underlying RIs/platforms existed before the emergence of SARS-CoV-2.

Given the economic hardships that will ensue in the post-pandemic environment, governments are likely to be under pressure to review and scrutinize research allocation. Roope and colleagues shed light on the importance of RI investment and caution against short-termism in resource allocation whilst making a case for developing a framework that allows the value of this investment to be surfaced in the public eye [ 7 ]. The complex contributions played by different parts of the research system may be better described by progressing the current research evaluation model from one characterized by a “pipeline” to one better described as a “platform” (as illustrated in Fig.  1 ).

Fig. 1 Schematic diagram illustrating the difference between a pipeline model of evaluation and the platform models of research production

It should be noted that the purpose of Fig.  1 is neither to present a conceptual nor an empirical depiction of the difference between the two approaches, but to schematically illustrate some of the differing characteristics. The top panel in Fig.  1 illustrates a classic logic model, whereby the research impact would be evaluated through a series of “if … then” statements. For example, if a funder supported a project (inputs), then a hypothesis could be investigated. If this hypothesis was proven (process), then it would be written up in a research paper (output). If that output was, for example, cited in a clinical guideline (outcome), then it could lead to longer and healthier lives (impact).
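
To make the contrast concrete, the “if … then” chain of a pipeline logic model can be sketched as a short sequential check in which a stage is only credited if every preceding stage held. This is an illustrative sketch only: the stage names, the `Stage` structure, and the example project are hypothetical constructions for this paper's argument, not part of any published evaluation framework or tool.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    """One link in a pipeline ('if ... then') logic model."""
    name: str
    achieved: Callable[[], bool]  # did this stage succeed?

def evaluate_pipeline(stages: List[Stage]) -> List[str]:
    """Walk the chain and stop at the first stage that was not achieved.

    This mirrors the linear logic model: impact is only credited when every
    preceding link (input -> process -> output -> outcome) holds.
    """
    credited = []
    for stage in stages:
        if not stage.achieved():
            break  # the 'if ... then' chain is broken here
        credited.append(stage.name)
    return credited

# Hypothetical project: funded, investigated, and published, but the outcome
# (uptake in a clinical guideline) never materialised.
pipeline = [
    Stage("input: project funded", lambda: True),
    Stage("process: hypothesis investigated", lambda: True),
    Stage("output: paper published", lambda: True),
    Stage("outcome: cited in a clinical guideline", lambda: False),
    Stage("impact: longer and healthier lives", lambda: True),
]
print(evaluate_pipeline(pipeline))
# ['input: project funded', 'process: hypothesis investigated', 'output: paper published']
```

A platform model resists this reduction: several platforms feed, and draw on, many overlapping chains at once, so no single break point can be identified.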

This theory underpins the majority of research evaluations [ 8 ], but as illustrated in the bottom half of Fig.  1 , is not suited to holistically account for investments into underlying platforms and their complex interactions—whether they are bio-resource, skilled people, research equipment, or collaboration with complementary infrastructure (networks). Multiple platforms may work in parallel, working across the research ecosystem and utilizing the collective outputs of multiple research projects as indicated by the bidirectional arrows in Fig.  1 . Although one may argue that investment into infrastructure could be captured as “inputs” and so on, this model is reductionist in its ability to account for the complex interactions between multiple infrastructures and taking into account the existing knowledge that has been produced/available for use. Simply stating investment into RI as inputs limits a holistic view of the added value of RI. For instance, platforms A, B, and C, as denoted in Fig.  1 , all worked in parallel and in collaboration to deliver the Oxford University/AstraZeneca vaccine—AZD1222; the vaccine relies on the delivery of genetic material via a viral vector which acts as a carrier to stimulate an immune response. In the case of AZD1222, the vector carries code for the SARS-CoV-2 spike protein [ 9 ]. The Pfizer and Moderna vaccines also focus on the spike protein, but use alternative technology based on messenger RNA (mRNA) to trigger an immune response. However, both approaches are based on decades of research, and critically having in place existing “platforms” meant that candidate vectors and mRNA (both bio-resources) could quickly be adapted for the specific genetic profile of SARS-CoV-2 [ 10 , 11 ]. When it comes to evaluating the impact of these advances, it would be inappropriate to focus solely on the programmatic pipeline—that is, the development of the vaccine since January 2020, or the investment into the current cycle of RI—without including the pre-existing platforms or the parallel contributions from different RIs (Biomedical Research Centres [BRCs], Clinical Research Facilities [CRFs], Clinical Research Network [CRN]) that enabled such rapid scientific progress.

A notable example of a platform model is one exemplified by National Institute for Health Research (NIHR)-funded BRCs which link closely with clinical trial delivery partners (NIHR CRFs). In England, these clinical RI centres are contracted with National Health Service (NHS) Trusts primarily to fund the underlying support mechanisms that are required to deliver clinical-, health-, and care-related research. The NIHR infrastructure funding provides long-term support towards the cost of delivering early-phase experimental medicine research across England. This includes support towards the salaries of skilled research professionals, collaborations, and funding for services and facilities [ 12 ]. First awarded in 2007, these platforms have provided targeted and strategic investment to support world-leading research in health and care and have been crucial in pioneering first-in-human studies and progressing novel treatments for patient benefit.

Again, when assessing the swiftness and magnitude of the response to the COVID-19 pandemic, the role played by NIHR BRCs and CRFs, as part of the multiple enabling platforms, has been crucial. For example, the Randomised Evaluation of COVID-19 Therapy (RECOVERY) Trial, which has been identifying treatments for people hospitalized with COVID-19, proceeded at a rapid speed with patients enrolled 9 days after the study protocol was drafted. This was a national endeavour, coordinated largely through the NIHR CRN, involving 176 acute hospital trusts including all of the NIHR BRCs. Most notably, the researchers supported by NIHR Oxford BRC found that dexamethasone, an inexpensive and widely available drug, cut the risk of death by a third for patients on ventilators, and by a fifth for those on oxygen [ 3 ].

NIHR BRCs played an equally monumental role in developing a vaccine for COVID-19. When the virus emerged at the end of 2019, the BRC Vaccines research theme team at Oxford was already working on human coronavirus vaccines and was in a unique position to rapidly respond to the pandemic. The vaccine candidate progressed rapidly to phase III clinical trials across 19 trial sites in the United Kingdom, South Africa, and Brazil within the space of weeks [ 2 ].

These examples highlight the role of established platforms in being able to leverage expertise, facilities, multidisciplinary teams with dedicated personnel, and pre-existing strategic partnerships with industry (in this case AstraZeneca) to deliver at pace, on a global scale. It is highly unlikely that this would have been the case had the infrastructure not been in place. This counterfactual argument would be hard to establish using traditional approaches to assessing research impact and evaluating research outputs and outcomes. It is our impression that research impact assessment often excludes underlying infrastructures and platform contributions, and that this is confirmed to a degree by our selective scan of the literature, but it would be important to empirically try to test that assumption in due course.

The purpose of this paper is to explore the challenges of assessing the impact of RI—or platforms as we refer to them in this paper—and to define a methodological agenda to improve such evaluations in the future. To do this, we provide a brief and selective review of the literature (the review methodology is provided in Additional files 1 and 2) on the limited approaches for assessing the impact of RI, and from that review and our experience in the field, most notably an internally led review of NIHR-funded BRCs and CRFs, identify key challenges that need to be addressed. We conclude with some reflections on what this means for the field of research impact assessment.

How are RIs traditionally addressed?

Research impact assessment is “a growing field of practice that is interested in science and innovation, research ecosystems and the effective management and administration of research funding” [ 13 ]. The practice of evaluating the impact of RIs has been gathering momentum and evolving over the last decade. With increasing demand from stakeholders (e.g. funders, government treasuries, and the public/taxpayers) to understand the value of RI, there has been an increasing focus on quantifying and qualifying the impact of investing in these platforms. For the purpose of this paper, we are borrowing the European Strategy Forum on Research Infrastructure (ESFRI) definition of RI:

facilities, resources and related services that are used by the scientific community to conduct top-level research in their respective fields and covers major scientific equipment; knowledge-based resources such as collections, archives or structures for scientific information; enabling Information and Communications Technology-based infrastructures such as Grid, computing, software and communication, or any other entity of a unique nature essential to achieve excellence in research. Such infrastructures may be ‘single-sited’ or ‘distributed’. [ 14 ]

Although this is a broad definition of RI, it encapsulates most of the relevant aspects of a clinical RI funded through NIHR such as BRCs, CRFs, and the CRN. The NIHR makes a significant investment in clinical infrastructure each year. The 2018/19 annual report indicates that £622 m (more than 50% of the annual budget) was used to support clinical RI.

Much of the select literature analysed made insightful observations about large-scale, technology-driven global infrastructure, such as those encompassed by the ESFRI programme, its distinct phases, and the varied evaluation needs associated with each phase [ 15 , 16 , 17 ]. In addition, there has been much discussion of how the context and type of RI affects impact assessments—for instance, whether RI is virtual or a single site, or for basic science or applied research [ 18 ].

There have been accounts of use of multiple evaluation models and assessment frameworks, ranging from Dilts’ “three-plus-one” evaluation model to WHO’s eight evaluative principles, Dozier's use of social network analysis, Davies and Dart’s most significant change theory, the Payback Framework, and Donovan and Hanney’s social impact assessment methods [ 19 , 20 , 21 , 22 , 23 ]; however, most of these models of assessment are built to suit a programmatic pipeline model of progression of research. We are taking a simplistic view of linear models of assessment for effect; work from Hanney and colleagues used the Payback Framework to assess the value of RI by reviewing networks and the absorptive capacity of the research system; however, this is still the least-studied aspect of the framework. Moreover, in practice, the application of frameworks is led by pragmatism [ 8 ] which can mean these important nuances can often be overlooked when using programmatic frameworks for assessing RI. In fact, there is much literature discussing the limitations of utilizing logic model-based frameworks in accounting for complexity and interactions, and there is a recognition that the traditional logic model needs to evolve into something more dynamic [ 24 ].

The use of cost–benefit analysis (CBA) and cost-effectiveness analysis (CEA) methodologies to quantify benefits of infrastructure in particular has been the most favoured approach and remains so to this day, especially when articulating benefits to the Government and the Treasury [ 25 , 26 , 27 ], despite the challenges around monetizing the value of health and the quality of life. The clinical RI in the United States, the Clinical and Translational Science Awards (CTSA) Consortium, has conducted a series of evaluations to articulate the benefits of RI and has managed its portfolio by defining consistent terminology of inputs, outputs, and outcomes to collect data that can be harmonized and compared across the national portfolio of 62 centres [ 25 ]. In recent years, a large number of evaluative techniques have focussed on bibliometric network analysis and a structured use of case studies/qualitative analyses, exemplified in the Research Excellence Framework (REF) exercise and the ACCELERATE framework [ 26 , 27 ]. A recently emerging modular approach provided by the RI paths project also offers an interesting lens, allowing the user flexibility in tailoring the evaluative approach to the aspects that are most important to focus on [ 28 ]. In doing so, it addresses some of the challenges being raised in that it starts breaking out of the mould of a traditional pipeline model of evaluation.
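
As a reminder of what a CBA-style appraisal of infrastructure typically reduces to, the sketch below computes a discounted benefit-cost ratio for a hypothetical five-year award. All figures, the time horizon, and the discount rate are invented for illustration, and the example sidesteps the real difficulty, noted above, of monetizing health and quality of life.

```python
def present_value(cash_flows, discount_rate):
    """Discount a list of annual cash flows (year 0 first) to present value."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical figures (in £m) for a five-year infrastructure award.
# Benefits are assumed to accrue late, as is typical for infrastructure.
costs = [20, 20, 20, 20, 20]     # annual infrastructure funding
benefits = [0, 5, 15, 30, 45]    # monetized benefits attributed to the platform
rate = 0.035                     # illustrative discount rate

bcr = present_value(benefits, rate) / present_value(costs, rate)
print(f"Five-year benefit-cost ratio: {bcr:.2f}")

# Truncating the appraisal window to three years hides most of the benefit.
bcr_3yr = present_value(benefits[:3], rate) / present_value(costs[:3], rate)
print(f"Three-year benefit-cost ratio: {bcr_3yr:.2f}")
```

Because the benefits in this toy example accrue late, the ratio looks far less favourable when the appraisal window is cut short, which anticipates the timescale problem discussed under Challenge 2 below.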

However, as mentioned earlier, most of the literature we reviewed was geared towards assessing the impact of RI in the context of a “pipeline” model which is borrowed and adapted from assessing research grants and programmes rather than RI per se. The evaluation models and metrics blur the pipeline and platform models and do not draw a clear enough distinction. The nuances and complexity of a “platform” model, where utilization of expertise and facilities, delivery of team science, and fostering of innovation translate into benefits for the population, are not typically addressed through any of the tools and methods mentioned in the literature. Although methods like CBA or the modular approach provided by the RI paths project are needed, they need to be complemented by metrics and techniques that can surface the value of RI in the context of a platform model to articulate benefits such as the development of a COVID-19 vaccine within 12 months, facilitated by support from NIHR BRCs, CRFs, CRN, and commercial partners, among others.

Additionally, one of the biggest challenges identified in the literature is around assessing impact of RI with respect to time lags and the challenge of contribution/attribution. None of the methods can account for this; rather, the literature calls for charting realistic milestones over the course of the life cycle of a RI, instead of tracking every infrastructure-supported project over a course of a typical 17 years—a rough projection of the time taken to translate advances from bench to bedside.

Challenge 1: Traditional criteria for assessing the impact of RI are not fit for purpose

Despite the multitude of frameworks and methodologies devised to support funders and recipients of funds to evidence the value of RI, it remains a subjective and challenging task, as no single methodology or concept serves all stakeholders’ needs nor does it reconcile the platform and pipeline model dichotomy. The challenge is further compounded when resource allocation, in a fiercely competitive environment, is primarily based on an allocation system that values project-based funding approaches. We are going to frame this challenge through the lens of contribution/attribution, time lags, and marginality or nuanced differences.

From the viewpoint of the contribution/attribution challenge, some postulate that public investment in RIs is justified given the multifaceted role they play in advancing our knowledge, innovating, driving inward investment/economic growth, and building capacity [ 29 ], whilst others have taken a view that there is no standardized evidence to attest to these claims [ 30 ]. Despite the undeniably crucial role played by RI in tackling the COVID-19 pandemic, most criteria of assessing infrastructure are not suited to disentangling the contribution of RI in outcomes and impact achieved. Most outputs and outcomes claimed by infrastructure are also claimed by project grants with little or no assessment of the unique elements that have been supported by RI. Contribution analysis methodology (and quantification of contribution) is well established and can be deployed here; however, it is based on the premise of a linear pipeline model and thus is perceived as such and only utilized in that manner [ 31 ]. There is a need to establish contribution analysis suited to RI so that the unique aspects and benefits of infrastructure can be articulated.

Although many theories and indicators uniquely suited to assessing RI are emerging [ 28 , 32 , 33 ], academically accepted impact assessment metrics remain the primary yardstick with which the success of RI is measured. These “prestige” metrics such as citation counts are often culturally accepted by funding organizations with a substantive focus on “volume” and metrics that are easy to collect and count. Focusing on the number of patients recruited or numbers of trainees trained means that quite often the value of undertaking such activities is neither questioned nor surfaced. RIs can be compared against each other on the basis of these criteria, which do not address the fundamental differences in the strategic purpose each plays in the translational research landscape. A large BRC may produce more papers than a smaller BRC, for instance; however, this tells us little about the significance of these contributions in their respective fields. BRCs are established to drive research into the clinic; CRFs are delivery vehicles for the progression of early-phase or first-in-class trials.

More focus is needed on assessing the value of innovation, team science, and encouraging the research community to review RI through the lens of complex and system-level change, linking up as clusters to propagate regional and national health research agendas. Evaluations commissioned for RI should look beyond economic returns/regional multipliers (as important as they are) and the traditional “pipeline metrics” to attest to the value of RI as a platform. The REF and the Knowledge Exchange Framework (KEF), which look at benefits beyond academia, the emergence of the responsible metrics movement, and reducing waste in research agendas, all provide a meaningful lens through which RI impact assessments criteria can be focussed and improved. The use of qualitative analyses can support the reframing of RI in the context of a platform, recognizing the added value it provides in fostering innovation and high-risk research.

Lastly, given the time lags of translating research into patient benefit, quite often there is a disconnect between the criteria of assessment and the time frame within which it is warranted. There needs to be a reframing of the kind of outputs and outcomes that should be assessed in relation to RIs and their life cycle.

Challenge 2: Despite the long-term nature of infrastructural investments, research impact assessments are often undertaken in unrealistic timescales

One of the most talked-about aspects of impact assessments, especially in biomedical research, is time lags. Time lags are widely debated in terms of agreeing upon models of assessment and what constitutes the starting point for a particular intervention/innovation [ 34 , 35 ].

Time lags are of particular interest when assessing RI due to the premise that investing in platforms like NIHR-funded BRCs will expedite the translation of biomedical research (i.e. translate lab-based science into human application) and bridge the T1 gap (whereby T stands for translation, and 1 denotes the first phase of translational research) [ 36 ]. Understanding what affects time lags is complex and multifaceted, which is why a systematic approach is rarely applied across a health system to understand changes in translation timelines. Multiple studies have found that factors like political pressure, research community engagement, and funder clout all contribute to shortening time lags in research translation [ 37 ]. COVID-19 vaccine development, for instance, provides a classic example of an expedited translation event due to political pressure and increased access to rapid funding, given the acute nature of the problem being addressed.

Let us use COVID-19 vaccine development as an example to highlight the complexity of time lags and to reflect on appropriate timescales for impact assessments of RI. Although the delivery of the vaccine itself has been rapid compared to vaccine development in other areas, the technology of utilizing viral vectors and mRNA spike proteins had already been established some years ago. The two key challenges emerging here are determining what constitutes the start point of this particular intervention and at what time points appraisals of outcomes and impact can take place. When vector technology was developed, the assessment of its impact could not have truly taken place, as the technology continues to be utilized and the magnitude of its impact has been increasing over time.

Hence, one of the biggest challenges of impact assessments in RI is evidencing expected outcomes and impact within one funding cycle of investment (typically 5 years in the NIHR). There is a need to determine what the expected outcomes and impact should be for the duration of an award cycle and what impact can be expected over a longer time period of continued investment.

It is therefore important to ensure that, in the short term, the evidence collected for the purposes of understanding impact is made up of value indicators as discussed in Challenge 1, rather than solely of metrics that accrue quickly and are easy to capture. It may even warrant development of hypothetical scenarios and “projected impact”, which is currently not the preferred form of evidence for United Kingdom funders and government bodies.

In addition, creating a shared expectation of long-term outcomes and impact, and defining timelines for them, can enable systematic evaluations to take place every 10–15 years against those expectations, with the caveat that long-term impacts may continue to accrue outside the assessment period. This, however, requires acceptance of the need for a longer-term view to allow benefit to accrue, and a distinction between what is meaningful to measure and what is merely obtainable. It also requires planning for impact assessments over a longer horizon than an annual cycle.

Challenge 3: There is limited appetite and opportunity for innovating new and appropriate criteria and methods for assessing the impact of RI

One of the interesting reflections from reviewing the selected literature on research evaluation is how strongly it is embedded in the theoretical framework of logic modelling. To a degree, this is understandable, as the research funding process is itself a series of linear steps that naturally follow the logic model of inputs, process, outputs, outcomes, and impact. However, the innovation literature is clear that the research process itself is not linear [38], and, as discussed above, this is especially the case for the contribution of research platforms.

Broadly speaking, there are three dominant evaluation paradigms: logic models, systems-based approaches, and realist evaluation [39]. Systems-based approaches allow for a complex and dynamic set of interactions to assess how "a set of things work together as a whole" [40]. Given the inherent messiness of such systems, these approaches combine multiple methods and data sources to build up a view of the "whole" and the contribution that different components, including research platforms, make to that whole. Realist evaluation adopts a "context–mechanism–outcome" (CMO) framework and focuses on understanding what works, in which contexts, how, and for whom (i.e. in "real" life), rather than simply whether it works [41]. In essence, such evaluations examine how the different mechanisms of an intervention produce change and, critically, which contextual factors influence those mechanisms in determining outcomes and variations in those outcomes. As such, the realist or systems approach may be more appropriate than the traditional logic model approach for assessing the impact of RI and capturing the nuances and complex interactions of the platform model, as illustrated in Fig. 1. One advantage of the realist and systems approaches over logic models is their greater focus on relationships and power which, in the context of COVID-19, may prove to be an important enabler. For example, pre-existing relationships between the scientific community, science and medical advisors, and the political and decision-making elite seem to have been critical in the rapid start-up of the RECOVERY trials.
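To make the CMO idea more concrete, the sketch below is a minimal illustration of ours (not drawn from the realist evaluation literature; the class, field names, and example values are purely illustrative) of how an evaluation team might record context–mechanism–outcome configurations for a research platform, using the RECOVERY example just mentioned.

```python
from dataclasses import dataclass


@dataclass
class CMOConfiguration:
    """One realist 'context-mechanism-outcome' configuration."""
    context: str    # circumstances in which the platform operates
    mechanism: str  # how the platform is thought to generate change
    outcome: str    # the observed or expected result


# Illustrative configuration based on the RECOVERY trial example above.
recovery_cmo = CMOConfiguration(
    context=("Pre-existing relationships between researchers, scientific and "
             "medical advisors, and decision-makers, plus established NIHR "
             "clinical research infrastructure"),
    mechanism=("Rapid mobilisation of funding, approvals, and recruitment "
               "through the existing platform"),
    outcome=("RECOVERY trial started rapidly and identified effective "
             "COVID-19 treatments such as dexamethasone"),
)

if __name__ == "__main__":
    print(recovery_cmo)
```

An evaluation would typically accumulate many such configurations and compare them across contexts, which is exactly the kind of nuance a single linear pipeline tends to flatten.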

It is interesting that in our brief review of the selected literature (which, we stress, was not systematic), we did not identify any realist approaches to evaluating RI. In addition to rethinking the theoretical underpinning of evaluating research platforms, another innovation is the emerging data ecosystem that can support such evaluations. An interesting mix of suppliers, such as Dimensions (grant data), Researchfish (outcome tracking), and Overton (citations in policy documents), complements the more traditional bibliometric suppliers (e.g. Clarivate and Scopus), providing data that can increasingly be aligned through the DOI (Digital Object Identifier), ORCID (Open Researcher and Contributor Identifier), and GRID (Global Research Identifier Database) systems. A next step would be to consider how platforms themselves can be classified and identified, allowing them to become an explicit part of this evaluation data-ecosystem.
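As a rough illustration of what such alignment could look like in practice, here is a small sketch of ours (the file contents, column names, and identifiers are invented for illustration; it does not use any supplier's actual API) that joins a grants extract, an outputs extract, and a policy-citation extract on shared ORCID and DOI identifiers.

```python
import pandas as pd

# Hypothetical extracts; in practice these might be exports from suppliers
# such as Dimensions (grants), Researchfish (outcomes), and Overton (policy).
grants = pd.DataFrame({
    "grant_id": ["G1", "G2"],
    "funder": ["NIHR", "NIHR"],
    "orcid": ["0000-0001-0000-0001", "0000-0002-0000-0002"],
})
outputs = pd.DataFrame({
    "doi": ["10.1000/xyz1", "10.1000/xyz2"],
    "orcid": ["0000-0001-0000-0001", "0000-0002-0000-0002"],
    "title": ["Trial protocol", "Vaccine efficacy analysis"],
})
policy_citations = pd.DataFrame({
    "doi": ["10.1000/xyz2"],
    "policy_document": ["National vaccination guidance"],
})

# Link grants to outputs via ORCID, then outputs to policy citations via DOI.
linked = (
    grants.merge(outputs, on="orcid", how="left")
          .merge(policy_citations, on="doi", how="left")
)
print(linked[["grant_id", "doi", "title", "policy_document"]])
```

A platform identifier, if one existed alongside DOI, ORCID, and GRID, could be added as one more join key, which is what making platforms an explicit part of the evaluation data-ecosystem would require.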

In suggesting the adoption of alternative paradigms to the logic model, we should stress that we are not suggesting the logic model is "bad", or that the other paradigms are "good" or "better". Rather, we are arguing that paradigms should be chosen selectively according to the nature of the research being assessed, and suggesting that, in the context of RI, the realist or systems approach may have advantages over the logic model that deserve to be tested.

Overall, though, as research evaluators and as funders, we should perhaps spend more time and effort thinking about how we assess the impact of research platforms, and in doing so move beyond our traditional comfort zones, try out new theoretical paradigms, and innovate in the way we capture, link, and present data.

The assessment and evaluation of research is not a new field, with landmark studies dating back to the 1970s and beyond [42]. It is thus perhaps a poor reflection on the field that it remains dominated by a single methodological paradigm: the use of logic models (in various guises) to assess research impacts. This may not be too surprising given the cultural history of research funding, which prioritizes the project-led approach. Linear models provide the easiest way to address both the contribution challenge and the time lag challenge: simply put, it is relatively easy to link inputs to process, process to outputs, outputs to outcomes, and outcomes to impact, and to measure the time lag between each of those stages. In practical terms, when assessing the impact of projects or programmes, this pipeline approach often works very well.

Conversely, however, linear or logic models (and their derivatives) are less applicable to RIs, because RIs provide the platform from which projects and programmes are delivered. Moreover, research platforms are rarely accorded the same visibility and kudos as research projects, despite the significant investments they represent. Given this, it is appropriate to seek or develop other evaluation paradigms to assess the impact of such platforms. As noted above, during our scan of the literature it was notable how few studies used either systems approaches or realist evaluation to assess RI impact. This may in part be an artefact of historical data infrastructures and data availability: it is easier, say, to systematically count papers than, for example, viral vectors. Nevertheless, over the past 5 to 10 years there has been something of a data science revolution, meaning that in the future, as a community of people interested in assessing research, we should challenge ourselves to adopt and test different approaches using new and more innovative data sources.

The somewhat overlooked value of RI and the case for public investment have always been topics of political debate; however, the COVID-19 pandemic has provided the most compelling evidence yet in support of RI. Roope and colleagues [7] articulate this, pointing to the resilience of the healthcare system and its underpinning RI (i.e. NIHR-funded BRCs, CRNs, etc.), and warn against the dangers of short-term allocative efficiency at the price of insufficient capacity to meet future demands, especially if research budgets are trimmed to fit the current economic turmoil in the United Kingdom. Investments in RI are likely here to stay, and the case for taking robust and innovative approaches to quantifying and qualifying their impact has never been stronger.

We should stress that, in writing this paper, we do not have the answers and do not know whether these alternative approaches will work, but we felt obliged to raise these issues for debate. In attempting to review and evaluate the impact of NIHR BRCs, especially in the context of COVID-19, we had a crisis of confidence in conceptualizing the BRCs within a wider biomedical and health research system and then assessing them comprehensively to derive their true value. We were left with an intellectual itch: current approaches to evaluating RI are not fit for purpose, and this is something that, as a community of researchers and funders, we should try to address.

Availability of data and materials

The data sets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BRC: Biomedical Research Centre
CBA: Cost–benefit analysis
CEA: Cost-effectiveness analysis
CMO: Context–mechanism–outcome
CRF: Clinical Research Facilities
CRN: Clinical Research Network
CTSA: Clinical and Translational Science Award
DOI: Digital Object Identifier
ESFRI: European Strategy Forum on Research Infrastructure
GRID: Global Research Identifier Database
KEF: Knowledge Exchange Framework
NIHR: National Institute for Health Research
ORCID: Open Researcher and Contributor Identifier
RECOVERY: Randomised Evaluation of COVID-19 Therapy
REF: Research Excellence Framework
RI: Research infrastructure

References

Wang H, Li X, Li T, et al. The genetic sequence, origin, and diagnosis of SARS-CoV-2. Eur J Clin Microbiol Infect Dis. 2020;39(9):1629–35.


Voysey M, Clemens SAC, Madhi SA, et al. Safety and efficacy of the ChAdOx1 nCoV-19 vaccine (AZD1222) against SARS-CoV-2: an interim analysis of four randomised controlled trials in Brazil, South Africa, and the UK. Lancet. 2021;397(10269):99–111.

The RECOVERY Collaborative Group. Dexamethasone in hospitalized patients with covid-19—preliminary report. N Engl J Med. 2021;384:693–704.


Varsavsky T, Graham MS, Canas LS, et al. Detecting COVID-19 infection hotspots in England using large-scale self-reported data from a mobile application: a prospective, observational study. Lancet Public Health. 2021;6(1):E21-29.

Freedman L. Strategy for a pandemic: the UK and COVID-19. Survival. 2020;62(3):25–76.

Hanney SR, Wooding S, Sussex J, et al. From COVID-19 research to vaccine application: why might it take 17 months not 17 years and what are the wider lessons? Health Res Policy Sys. 2020;18:61.

Roope LSJ, Candio P, Kiparoglou V, McShane H, Duch R, Clarke PM. Lessons from the pandemic on the value of research infrastructure. Health Res Policy Sys. 2021. https://doi.org/10.1186/s12961-021-00704-2 .

Raftery J, Hanney S, Greenhalgh T, Glover M, Blatch-Jones A. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment Programme. Health Technol Assess. 2016;20:76.

Rauch S, Jasny E, Schmidt KE, Petsch B. New vaccine technologies to combat outbreak situations. Front Immunol. 2018;19(9):1963.

Ball P. The lightning-fast quest for COVID vaccines—and what it means for other diseases. The speedy approach used to tackle SARS-CoV-2 could change the future of vaccine science. Nature. 2021;589:16–8.

van Riel D, de Wit E. Next-generation vaccine platforms for COVID-19. Nat Mater. 2020;19:810–2.

Snape K, Trembath RC, Lord GM. Translational medicine and the NIHR Biomedical Research Centre concept. QJM. 2008;101(11):901–6.

Adam P, et al. ISRIA statement: ten-point guidelines for an effective process of research impact assessments. Health Res Policy Sys. 2018. https://doi.org/10.1186/s12961-018-0281-5 .

European Research Infrastructures [Internet]. European Commission - European Commission. 2021. https://ec.europa.eu/info/research-and-innovation/strategy/european-research-infrastructures_en . Accessed 23 Feb 2021.

Van Elzakker I, Van Drooge L. The political context of Research Infrastructures: consequences for impact and evaluation. fteval J Res Tech Policy Eval. 2019;47:135–9.


Reid A, Griniece E, Angelis J. Evaluating and Monitoring the Socio-Economic Impact of Investment in Research Infrastructures. 2015. https://www.researchgate.net/publication/275037404_Evaluating_and_Monitoring_the_Socio-Economic_Impact_of_Investment_in_Research_Infrastructures . Accessed 23 Feb 2021.

ESFRI WG on EVALUATION of RIs. 2011. https://ec.europa.eu/research/infrastructures/pdf/esfri_evaluation_report_2011.pdf . Accessed 23 Feb 2021.

Giffoni F, Vignetti S, Kroll H, Zenker A, Schubert T, Becker ED, et al. Working note on Research Infrastructure Typology. 2018. https://www.researchgate.net/publication/327645276_Working_note_on_Research_Infrastructure_Typology_Deliverable_31 . Accessed 23 Feb 2021.

Pincus HA, Abedin Z, Blank AE, Mazmanian PE. Evaluation and the NIH clinical and translational science awards: a “top ten” list. Eval Health Prof. 2013;36:411–31.

Dilts DM. A “three-plus-one” evaluation model for clinical research management. Eval Health Prof. 2013;36:464–77.

Pancotti C, Pellegrin J, Vignetti S. Appraisal of Research Infrastructures: approaches, methods and practical implications. 2014. Departmental Working Papers, Department of Economics, Management and Quantitative Methods at Università degli Studi di Milano. https://ideas.repec.org/p/mil/wpdepa/2014-13.html . Accessed 23 Feb 2021.

Hogle JA, Moberg DP. Success case studies contribute to evaluation of complex research infrastructure. Eval Health Prof . 2013;37:98–113.

Donovan C, Hanney S. The payback framework explained. Res Eval. 2011;20(3):181–3.

Grazier KL, Trochim WM, Dilts DM, Kirk R. Estimating return on investment in translational research. Eval Health Prof. 2013;36:478–91.

Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: a review. Res Eval. 2013;23:21–32.

Drooge L van, Elzakker I van. Societal impact of Research Infrastructures final protocol. 2019. https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5c709f1e0&appId=PPGMS . Accessed 23 Feb 2021.

RI-Paths—Charting Impact Pathways of Investment in Research Infrastructure. 2020. https://ri-paths-tool.eu/en . Accessed 23 Feb 2021.

Ribeiro M. Towards a sustainable European research infrastructures ecosystem. The economics of big science. Springer Link; 2020. https://doi.org/10.1007/978-3-030-52391-6_2

Berger F, Angelis J, Brown N, Simmonds P, Zuijdam F. International comparative study: appraisal and evaluation practices of science capital spending on research infrastructures. 2017; https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/734603/Technopolis_final_report_.pdf . Accessed 23 Feb 2021.

Ton G, Mayne J, Delahais T, Morell J, Befani B, Apgar M, et al. Contribution analysis and estimating the size of effects: can we reconcile the possible with the impossible? 2017. https://www.ids.ac.uk/publications/contribution-analysis-and-estimating-the-size-of-effects-can-we-reconcile-the-possible-with-the-impossible/ . Accessed 23 Feb 2021.

Knowledge exchange framework—research England. https://re.ukri.org/knowledge-exchange/knowledge-exchange-framework/ . Accessed 23 Feb 2021.

Dymond-Green N. The rise of altmetrics: Shaping new ways of evaluating research. 2020. http://blog.ukdataservice.ac.uk/rise-of-altmetrics/ . Accessed 23 Feb 2021.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104:510–20.

Hanney SR, Castle-Clarke S, Grant J, Guthrie S, Henshall C, Mestre-Ferrandiz J, et al. How long does biomedical research take? Studying the time taken between biomedical and health research and its translation into products, policy, and practice. Health Res Policy Sys. 2015. https://doi.org/10.1186/1478-4505-13-1 .

Cooksey D. A review of UK health research funding. 2006. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/228984/0118404881.pdf . Accessed 23 Feb 2021.

Uygur B, Duberman J, Ferguson SM. A guide to time lag and time lag shortening strategies in oncology-based drug development. JCB. 2017. https://doi.org/10.5912/jcb792 .

Gibbons M. Mode 1, mode 2, and innovation. In: Carayannis EG, editor. Encyclopedia of creativity, invention, innovation and entrepreneurship. New York: Springer; 2013. p. 1285–92.


Befani B, Stedman-Bryce G. Process Tracing and Bayesian Updating for impact evaluation. Evaluation. 2016;23:42–60.

Caffrey L, Munro E. A systems approach to policy evaluation. Evaluation. 2017;23:463–78.

Pawson R, Tilley N. Realistic evaluation bloodlines. Am J Eval. 2001;22:317–24.

Comroe JH Jr, Dripps RD. Scientific basis for the support of biomedical science. Science. 1976;192(4235):105–11.

Mills T, Lawton R, Sheard L. Advancing complexity science in healthcare research: the logic of logic models. BMC Med Res Methodol. 2019. https://doi.org/10.1186/s12874-019-0701-4 .



Acknowledgements

We thank Stephen Hanney for his valuable input and steer on the draft manuscript.

Funding

Not applicable.

Author information

Authors and Affiliations

Central Commissioning Facility, National Institute of Health Research, 15 Church Street, TW1 3NL, Twickenham, United Kingdom

Sana Zakaria & Jane Luff

Policy Institute, King’s College London, SE1 8WA, London, United Kingdom

Jonathan Grant


Contributions

SZ conducted the literature review, contributed to the introduction, and wrote the Challenge 1 and 2 sections. JL wrote the abstract and contributed to the introduction and the challenge sections. JG wrote the introduction and conclusion as well as Challenge 3. JG and SZ conceptualized and created the framework for the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sana Zakaria.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Method for review of literature.

Additional file 2.

Schematic diagram of the difference between a pipeline model of evaluation and the platform models of research production.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Zakaria, S., Grant, J. & Luff, J. Fundamental challenges in assessing the impact of research infrastructure. Health Res Policy Sys 19, 119 (2021). https://doi.org/10.1186/s12961-021-00769-z


Received: 15 April 2021

Accepted: 03 August 2021

Published: 18 August 2021

DOI: https://doi.org/10.1186/s12961-021-00769-z


Keywords: Impact assessment, Realist evaluation, Impact frameworks



Research Limitations 101 📖

A Plain-Language Explainer (With Practical Examples)

By: Derek Jansen (MBA) | Expert Reviewer: Dr. Eunice Rautenbach | May 2024

Research limitations are one of those things that students tend to avoid digging into, and understandably so. No one likes to critique their own study and point out weaknesses. Nevertheless, being able to understand the limitations of your study – and, just as importantly, the implications thereof – is a critically important skill.

In this post, we’ll unpack some of the most common research limitations you’re likely to encounter, so that you can approach your project with confidence.

Overview: Research Limitations 101

  • What are research limitations?
  • Access-based limitations
  • Temporal & financial limitations
  • Sample & sampling limitations
  • Design limitations
  • Researcher limitations
  • Key takeaways

What (exactly) are “research limitations”?

At the simplest level, research limitations (also referred to as “the limitations of the study”) are the constraints and challenges that will invariably influence your ability to conduct your study and draw reliable conclusions.

Research limitations are inevitable. Absolutely no study is perfect and limitations are an inherent part of any research design. These limitations can stem from a variety of sources, including access to data, methodological choices, and the more mundane constraints of budget and time. So, there’s no use trying to escape them – what matters is that you can recognise them.

Acknowledging and understanding these limitations is crucial, not just for the integrity of your research, but also for your development as a scholar. That probably sounds a bit rich, but realistically, having a strong understanding of the limitations of any given study helps you handle the inevitable obstacles professionally and transparently, which in turn builds trust with your audience and academic peers.

Simply put, recognising and discussing the limitations of your study demonstrates that you know what you’re doing , and that you’ve considered the results of your project within the context of these limitations. In other words, discussing the limitations is a sign of credibility and strength – not weakness. Contrary to the common misconception, highlighting your limitations (or rather, your study’s limitations) will earn you (rather than cost you) marks.

So, with that foundation laid, let’s have a look at some of the most common research limitations you’re likely to encounter – and how to go about managing them as effectively as possible.


Limitation #1: Access To Information

One of the first hurdles you might encounter is limited access to necessary information. For example, you may have trouble getting access to specific literature or niche data sets. This situation can manifest due to several reasons, including paywalls, copyright and licensing issues or language barriers.

To minimise situations like these, it’s useful to try to leverage your university’s resource pool to the greatest extent possible. In practical terms, this means engaging with your university’s librarian and/or potentially utilising interlibrary loans to get access to restricted resources. If this sounds foreign to you, have a chat with your librarian 🙃

In emerging fields or highly specific study areas, you might find that there’s very little existing research (i.e., literature) on your topic. This scenario, while challenging, also offers a unique opportunity to contribute significantly to your field, as it indicates that there’s a significant research gap.

All of that said, be sure to conduct an exhaustive search using a variety of keywords and Boolean operators before assuming that there’s a lack of literature. Also, remember to snowball your literature base. In other words, scan the reference lists of the handful of papers that are directly relevant and then scan those references for more sources. You can also consider using tools like Litmaps and Connected Papers.

Limitation #2: Time & Money

Almost every researcher will face time and budget constraints at some point. Naturally, these limitations can affect the depth and breadth of your research – but they don’t need to be a death sentence.

Effective planning is crucial to managing both the temporal and financial aspects of your study. In practical terms, utilising tools like Gantt charts can help you visualise and plan your research timeline realistically, thereby reducing the risk of any nasty surprises. Always take a conservative stance when it comes to timelines, especially if you’re new to academic research. As a rule of thumb, things will generally take twice as long as you expect – so, prepare for the worst-case scenario.

If budget is a concern, you might want to consider exploring small research grants or adjusting the scope of your study so that it fits within a realistic budget. Trimming back might sound unattractive, but keep in mind that a smaller, well-planned study can often be more impactful than a larger, poorly planned project.

If you find yourself in a position where you’ve already run out of cash, don’t panic. There’s usually a pivot opportunity hidden somewhere within your project. Engage with your research advisor or faculty to explore potential solutions – don’t make any major changes without first consulting your institution.


Limitation #3: Sample Size & Composition

As we’ve discussed before, the size and representativeness of your sample are crucial, especially in quantitative research, where the robustness of your conclusions often depends on these factors. All too often, though, students run into issues achieving a sufficient sample size and composition.

To ensure adequacy in terms of your sample size, it’s important to plan for potential dropouts by oversampling from the outset. In other words, if you aim for a final sample size of 100 participants, aim to recruit 120-140 to account for unexpected challenges. If you still find yourself short on participants, consider whether you could complement your dataset with secondary data or data from an adjacent sample – for example, participants from another city or country. That said, be sure to engage with your research advisor before making any changes to your approach.
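As a quick sanity check on those numbers, here’s a simple back-of-the-envelope calculation (the 20% and 30% dropout rates are purely illustrative assumptions – your field and sampling method will dictate realistic figures):

```python
import math

def recruitment_target(final_n: int, dropout_rate: float) -> int:
    """How many participants to recruit so that final_n remain after expected dropout."""
    return math.ceil(final_n / (1 - dropout_rate))

print(recruitment_target(100, 0.20))  # 125 recruits needed if 20% drop out
print(recruitment_target(100, 0.30))  # 143 recruits needed if 30% drop out
```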

A related issue that you may run into is sample composition. In other words, you may have trouble securing a random sample that’s representative of your population of interest. In cases like this, you might again want to look at ways to complement your dataset with other sources, but if that’s not possible, it’s not the end of the world. As with all limitations, you’ll just need to recognise this limitation in your final write-up and be sure to interpret your results accordingly. In other words, don’t claim generalisability of your results if your sample isn’t random.

Limitation #4: Methodological Limitations

As we alluded to earlier, every methodological choice comes with its own set of limitations. For example, you can’t claim causality if you’re using a descriptive or correlational research design. Similarly, as we saw in the previous example, you can’t claim generalisability if you’re using a non-random sampling approach.

Making good methodological choices is all about understanding (and accepting) the inherent trade-offs. In the vast majority of cases, you won’t be able to adopt the “perfect” methodology – and that’s okay. What’s important is that you select a methodology that aligns with your research aims and research questions, as well as the practical constraints at play (e.g., time, money, equipment access, etc.). Just as importantly, you must recognise and articulate the limitations of your chosen methods, and justify why they were the most suitable, given your specific context.

Limitation #5: Researcher (In)experience 

A discussion about research limitations would not be complete without mentioning the researcher (that’s you!). Whether we like to admit it or not, researcher inexperience and personal biases can subtly (and sometimes not so subtly) influence the interpretation and presentation of data within a study. This is especially true when it comes to dissertations and theses, as these are most commonly undertaken by first-time (or relatively fresh) researchers.

When it comes to dealing with this specific limitation, it’s important to remember the adage “we don’t know what we don’t know”. In other words, recognise and embrace your (relative) ignorance and subjectivity – and interpret your study’s results within that context. Simply put, don’t be overly confident in drawing conclusions from your study – especially when they contradict existing literature.

Cultivating a culture of reflexivity within your research practices can help reduce subjectivity and keep you a bit more “rooted” in the data. In practical terms, this simply means making an effort to become aware of how your perspectives and experiences may have shaped the research process and outcomes.

As with any new endeavour in life, it’s useful to garner as many outsider perspectives as possible. Of course, your university-assigned research advisor will play a large role in this respect, but it’s also a good idea to seek out feedback and critique from other academics. To this end, you might consider approaching other faculty at your institution, joining an online group, or even working with a private coach .

Your inexperience and personal biases can subtly (but significantly) influence how you interpret your data and draw your conclusions.

Key Takeaways

Understanding and effectively navigating research limitations is key to conducting credible and reliable academic work. By acknowledging and addressing these limitations upfront, you not only enhance the integrity of your research, but also demonstrate your academic maturity and professionalism.

Whether you’re working on a dissertation, thesis or any other type of formal academic research, remember the five most common research limitations and interpret your data while keeping them in mind.

  • Access to Information (literature and data)
  • Time and money
  • Sample size and composition
  • Research design and methodology
  • Researcher (in)experience and bias

If you need a hand identifying and mitigating the limitations within your study, check out our 1:1 private coaching service .


Stating the Obvious: Writing Assumptions, Limitations, and Delimitations

During the process of writing your thesis or dissertation, you might suddenly realize that your research has inherent flaws. Don’t worry! Virtually all projects contain restrictions on the research. However, being able to recognize and accurately describe these problems is the difference between a true researcher and a grade-school kid with a science-fair project. Concerns with truthful responding, access to participants, and survey instruments are just a few examples of restrictions on your research. In the following sections, the differences among delimitations, limitations, and assumptions of a dissertation will be clarified.

Delimitations

Delimitations are the definitions you set as the boundaries of your own thesis or dissertation, so delimitations are in your control. Delimitations are set so that your goals do not become impossibly large to complete. Examples of delimitations include objectives, research questions, variables, theoretical objectives that you have adopted, and populations chosen as targets to study. When you are stating your delimitations, clearly inform readers why you chose this course of study. The answer might simply be that you were curious about the topic and/or wanted to improve standards of a professional field by revealing certain findings. In any case, you should clearly list the other options available and the reasons why you did not choose these options immediately after you list your delimitations. You might have avoided these options for reasons of practicality, interest, or relevance to the study at hand.

For example, you might have only studied Hispanic mothers because they have the highest rate of obese babies. Delimitations are often strongly related to your theory and research questions. If you were researching whether there are different parenting styles between unmarried Asian, Caucasian, African American, and Hispanic women, then a delimitation of your study would be the inclusion of only participants with those demographics and the exclusion of participants from other demographics such as men, married women, and all other ethnicities of single women (inclusion and exclusion criteria). A further delimitation might be that you only included closed-ended Likert scale responses in the survey, rather than including additional open-ended responses, which might make some people more willing to take and complete your survey.

Remember that delimitations are not good or bad. They are simply a detailed description of the scope of interest for your study as it relates to the research design. Don’t forget to describe the philosophical framework you used throughout your study, which also delimits your study.

Limitations

Limitations of a dissertation are potential weaknesses in your study that are mostly out of your control, given limited funding, choice of research design, statistical model constraints, or other factors. In addition, a limitation is a restriction on your study that cannot be reasonably dismissed and can affect your design and results. Do not worry about limitations because limitations affect virtually all research projects, as well as most things in life. Even when you are going to your favorite restaurant, you are limited by the menu choices. If you went to a restaurant that had a menu that you were craving, you might not receive the service, price, or location that makes you enjoy your favorite restaurant.

If you studied participants’ responses to a survey, you might be limited in your abilities to gain the exact type or geographic scope of participants you wanted. The people whom you managed to get to take your survey may not truly be a random sample, which is also a limitation. If you used a common test for data findings, your results are limited by the reliability of the test. If your study was limited to a certain amount of time, your results are affected by the operations of society during that time period (e.g., economy, social trends). It is important for you to remember that limitations of a dissertation are often not something that can be solved by the researcher. Also, remember that whatever limits you also limits other researchers, whether they are the largest medical research companies or consumer habits corporations.

Certain kinds of limitations are often associated with the analytical approach you take in your research, too. For example, some qualitative methods like heuristics or phenomenology do not lend themselves well to replicability. Also, most of the commonly used quantitative statistical models can only determine correlation, but not causation.

Assumptions

Assumptions are things that are accepted as true, or at least plausible, by researchers and peers who will read your dissertation or thesis. In other words, any scholar reading your paper will assume that certain aspects of your study are true given your population, statistical test, research design, or other delimitations. For example, if you tell your friend that your favorite restaurant is an Italian place, your friend will assume that you don’t go there for the sushi. It’s assumed that you go there to eat Italian food. Because most assumptions are not discussed in-text, assumptions that are discussed in-text are discussed in the context of the limitations of your study, which is typically in the discussion section. This is important, because both assumptions and limitations affect the inferences you can draw from your study.

One of the more common assumptions made in survey research is the assumption of honesty and truthful responses. However, for certain sensitive questions this assumption may be more difficult to accept, in which case it would be described as a limitation of the study. For example, asking people to report their criminal behavior in a survey may not be as reliable as asking people to report their eating habits. It is important to remember that your limitations and assumptions should not contradict one another. For instance, if you state that generalizability is a limitation of your study given that your sample was limited to one city in the United States, then you should not claim generalizability to the United States population as an assumption of your study.

Statistical models in quantitative research designs are accompanied by assumptions as well, some more strict than others. These assumptions generally refer to the characteristics of the data, such as distributions, correlational trends, and variable type, just to name a few. Violating these assumptions can lead to drastically invalid results, though this often depends on sample size and other considerations.
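To illustrate what checking statistical assumptions can look like in practice, here is a short sketch (using simulated data; the specific tests and the t-test scenario are illustrative choices rather than universal requirements) that checks the normality and equal-variance assumptions behind an independent-samples t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=10, size=80)  # simulated scores, group A
group_b = rng.normal(loc=55, scale=10, size=80)  # simulated scores, group B

# Shapiro-Wilk test: null hypothesis = the sample comes from a normal distribution.
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

# Levene's test: null hypothesis = the groups have equal variances.
_, p_equal_var = stats.levene(group_a, group_b)

print(f"Normality p-values: A={p_norm_a:.3f}, B={p_norm_b:.3f}")
print(f"Equal-variance p-value: {p_equal_var:.3f}")
# A small p-value suggests the assumption may be violated; in that case you would
# either report it as a limitation or switch to a test that does not rely on it.
```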



Organizing Your Social Sciences Research Paper

Limitations of the Study

The limitations of the study are those characteristics of design or methodology that impacted or influenced the interpretation of the findings from your research. Study limitations are the constraints placed on your ability to generalize from the results, to further describe applications to practice, and/or to make use of the findings. They may result from the ways in which you initially chose to design the study, from the methods used to establish internal and external validity, or from unanticipated challenges that emerged during the study.

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Theofanidis, Dimitrios and Antigoni Fountouki. "Limitations and Delimitations in the Research Process." Perioperative Nursing 7 (September-December 2018): 155-163.

Importance of...

Always acknowledge a study's limitations. It is far better that you identify and acknowledge your study’s limitations than to have them pointed out by your professor and have your grade lowered because you appeared to have ignored them or didn't realize they existed.

Keep in mind that acknowledgment of a study's limitations is an opportunity to make suggestions for further research. If you do connect your study's limitations to suggestions for further research, be sure to explain the ways in which these unanswered questions may become more focused because of your study.

Acknowledgment of a study's limitations also provides you with opportunities to demonstrate that you have thought critically about the research problem, understood the relevant literature published about it, and correctly assessed the methods chosen for studying the problem. A key objective of the research process is not only discovering new knowledge but also to confront assumptions and explore what we don't know.

Claiming limitations is a subjective process because you must evaluate the impact of those limitations. Don't just list key weaknesses and the magnitude of a study's limitations. To do so diminishes the validity of your research because it leaves the reader wondering whether, or in what ways, limitation(s) in your study may have impacted the results and conclusions. Limitations require a critical, overall appraisal and interpretation of their impact. You should answer the question: do these problems with errors, methods, validity, etc. eventually matter and, if so, to what extent?

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com.

Descriptions of Possible Limitations

All studies have limitations. However, it is important that you restrict your discussion to limitations related to the research problem under investigation. For example, if a meta-analysis of existing literature is not a stated purpose of your research, it should not be discussed as a limitation. Do not apologize for not addressing issues that you did not promise to investigate in the introduction of your paper.

Here are examples of limitations related to methodology and the research process you may need to describe and discuss how they possibly impacted your results. Note that descriptions of limitations should be stated in the past tense because they were discovered after you completed your research.

Possible Methodological Limitations

  • Sample size -- the number of the units of analysis you use in your study is dictated by the type of research problem you are investigating. Note that, if your sample size is too small, it will be difficult to find significant relationships from the data, as statistical tests normally require a larger sample size to ensure a representative distribution of the population and to be considered representative of groups of people to whom results will be generalized or transferred (a simple power calculation, sketched after this list, can help you anticipate this). Note that sample size is generally less relevant in qualitative research if explained in the context of the research problem.
  • Lack of available and/or reliable data -- a lack of data or of reliable data will likely require you to limit the scope of your analysis, the size of your sample, or it can be a significant obstacle in finding a trend and a meaningful relationship. You need to not only describe these limitations but provide cogent reasons why you believe data is missing or is unreliable. However, don’t just throw up your hands in frustration; use this as an opportunity to describe a need for future research based on designing a different method for gathering data.
  • Lack of prior research studies on the topic -- citing prior research studies forms the basis of your literature review and helps lay a foundation for understanding the research problem you are investigating. Depending on the currency or scope of your research topic, there may be little, if any, prior research on your topic. Before assuming this to be true, though, consult with a librarian! In cases when a librarian has confirmed that there is little or no prior research, you may be required to develop an entirely new research typology [for example, using an exploratory rather than an explanatory research design ]. Note again that discovering a limitation can serve as an important opportunity to identify new gaps in the literature and to describe the need for further research.
  • Measure used to collect the data -- sometimes it is the case that, after completing your interpretation of the findings, you discover that the way in which you gathered data inhibited your ability to conduct a thorough analysis of the results. For example, you regret not including a specific question in a survey that, in retrospect, could have helped address a particular issue that emerged later in the study. Acknowledge the deficiency by stating a need for future researchers to revise the specific method for gathering data.
  • Self-reported data -- whether you are relying on pre-existing data or you are conducting a qualitative research study and gathering the data yourself, self-reported data is limited by the fact that it rarely can be independently verified. In other words, you have to take the accuracy of what people say, whether in interviews, focus groups, or on questionnaires, at face value. However, self-reported data can contain several potential sources of bias that you should be alert to and note as limitations. These biases become apparent if they are incongruent with data from other sources. These are: (1) selective memory [remembering or not remembering experiences or events that occurred at some point in the past]; (2) telescoping [recalling events that occurred at one time as if they occurred at another time]; (3) attribution [the act of attributing positive events and outcomes to one's own agency, but attributing negative events and outcomes to external forces]; and, (4) exaggeration [the act of representing outcomes or embellishing events as more significant than is actually suggested from other data].
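To give a concrete sense of the sample size point in the first item above, here is a brief a priori power calculation (a sketch using the statsmodels power module; the effect size, alpha, and power values are illustrative choices, not recommendations for any particular study).

```python
from statsmodels.stats.power import TTestIndPower

# Participants needed per group for an independent-samples t-test,
# assuming a medium effect size (d = 0.5), alpha = 0.05, and 80% power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```

Running such a calculation before data collection makes it easier to explain, after the fact, whether a small sample was a design choice or a limitation.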

Possible Limitations of the Researcher

  • Access -- if your study depends on having access to people, organizations, data, or documents and, for whatever reason, access is denied or limited in some way, the reasons for this needs to be described. Also, include an explanation why being denied or limited access did not prevent you from following through on your study.
  • Longitudinal effects -- unlike your professor, who can literally devote years [even a lifetime] to studying a single topic, the time available to investigate a research problem and to measure change or stability over time is constrained by the due date of your assignment. Be sure to choose a research problem that does not require an excessive amount of time to complete the literature review, apply the methodology, and gather and interpret the results. If you're unsure whether you can complete your research within the confines of the assignment's due date, talk to your professor.
  • Cultural and other types of bias -- we all have biases, whether we are conscious of them or not. Bias is when a person, place, event, or thing is viewed or shown in a consistently inaccurate way. Bias is usually negative, though one can have a positive bias as well, especially if that bias reflects your reliance on research that only supports your hypothesis. When proof-reading your paper, be especially critical in reviewing how you have stated a problem, selected the data to be studied, what may have been omitted, the manner in which you have ordered events, people, or places, how you have chosen to represent a person, place, or thing, to name a phenomenon, or to use possible words with a positive or negative connotation. NOTE: If you detect bias in prior research, it must be acknowledged and you should explain what measures were taken to avoid perpetuating that bias. For example, if a previous study only used boys to examine how music education supports effective math skills, describe how your research expands the study to include girls.
  • Fluency in a language -- if your research focuses, for example, on measuring the perceived value of after-school tutoring among Mexican-American ESL [English as a Second Language] students and you are not fluent in Spanish, you are limited in being able to read and interpret Spanish language research studies on the topic or to speak with these students in their primary language. This deficiency should be acknowledged.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Senunyeme, Emmanuel K. Business Research Methods. Powerpoint Presentation. Regent University of Science and Technology; ter Riet, Gerben et al. “All That Glitters Isn't Gold: A Survey on Acknowledgment of Limitations in Biomedical Studies.” PLOS One 8 (November 2013): 1-6.

Structure and Writing Style

Information about the limitations of your study are generally placed either at the beginning of the discussion section of your paper so the reader knows and understands the limitations before reading the rest of your analysis of the findings, or, the limitations are outlined at the conclusion of the discussion section as an acknowledgement of the need for further study. Statements about a study's limitations should not be buried in the body [middle] of the discussion section unless a limitation is specific to something covered in that part of the paper. If this is the case, though, the limitation should be reiterated at the conclusion of the section.

If you determine that your study is seriously flawed due to important limitations, such as an inability to acquire critical data, consider reframing it as an exploratory study intended to lay the groundwork for a more complete research study in the future. Be sure, though, to specifically explain the ways that these flaws can be successfully overcome in a new study.

But, do not use this as an excuse for not developing a thorough research paper! Review the tab in this guide for developing a research topic. If serious limitations exist, it generally indicates a likelihood that your research problem is too narrowly defined or that the issue or event under study is too recent and, thus, very little research has been written about it. If serious limitations do emerge, consult with your professor about possible ways to overcome them or how to revise your study.

When discussing the limitations of your research, be sure to:

  • Describe each limitation in detailed but concise terms;
  • Explain why each limitation exists;
  • Provide the reasons why each limitation could not be overcome using the method(s) chosen to acquire or gather the data [cite to other studies that had similar problems when possible];
  • Assess the impact of each limitation in relation to the overall findings and conclusions of your study; and,
  • If appropriate, describe how these limitations could point to the need for further research.

Remember that the method you chose may be the source of a significant limitation that has emerged during your interpretation of the results [for example, you didn't interview a group of people that you later wish you had]. If this is the case, don't panic. Acknowledge it, and explain how applying a different or more robust methodology might address the research problem more effectively in a future study. An underlying goal of scholarly research is not only to show what works, but to demonstrate what doesn't work or what needs further clarification.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Ioannidis, John P.A. "Limitations are not Properly Acknowledged in the Scientific Literature." Journal of Clinical Epidemiology 60 (2007): 324-329; Pasek, Josh. Writing the Empirical Social Science Research Paper: A Guide for the Perplexed. January 24, 2012. Academia.edu; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com; What Is an Academic Paper? Institute for Writing Rhetoric. Dartmouth College; Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.

Writing Tip

Don't Inflate the Importance of Your Findings!

After all the hard work and long hours devoted to writing your research paper, it is easy to get carried away with attributing unwarranted importance to what you’ve done. We all want our academic work to be viewed as excellent and worthy of a good grade, but it is important that you understand and openly acknowledge the limitations of your study. Inflating the importance of your study's findings could be perceived by your readers as an attempt to hide its flaws or to encourage a biased interpretation of the results. A small measure of humility goes a long way!

Another Writing Tip

Negative Results are Not a Limitation!

Negative evidence refers to findings that unexpectedly challenge rather than support your hypothesis. If you didn't get the results you anticipated, it may mean your hypothesis was incorrect and needs to be reformulated. Or, perhaps you have stumbled onto something unexpected that warrants further study. Moreover, the absence of an effect may be very telling in many situations, particularly in experimental research designs. In any case, your results may very well be of importance to others even though they did not support your hypothesis. Do not fall into the trap of thinking that results contrary to what you expected are a limitation of your study. If you carried out the research well, they are simply your results and only require additional interpretation.

Lewis, George H. and Jonathan F. Lewis. “The Dog in the Night-Time: Negative Evidence in Social Research.” The British Journal of Sociology 31 (December 1980): 544-558.

Yet Another Writing Tip

Sample Size Limitations in Qualitative Research

Sample sizes are typically smaller in qualitative research because, as the study goes on, acquiring more data does not necessarily lead to more information. This is because one occurrence of a piece of data, or a code, is all that is necessary to ensure that it becomes part of the analysis framework. However, it remains true that sample sizes that are too small cannot adequately support claims of having achieved valid conclusions and sample sizes that are too large do not permit the deep, naturalistic, and inductive analysis that defines qualitative inquiry. Determining adequate sample size in qualitative research is ultimately a matter of judgment and experience in evaluating the quality of the information collected against the uses to which it will be applied and the particular research method and purposeful sampling strategy employed. If the sample size is found to be a limitation, it may reflect your judgment about the methodological technique chosen [e.g., single life history study versus focus group interviews] rather than the number of respondents used.

Boddy, Clive Roland. "Sample Size for Qualitative Research." Qualitative Market Research: An International Journal 19 (2016): 426-432; Huberman, A. Michael and Matthew B. Miles. "Data Management and Analysis Methods." In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 428-444; Blaikie, Norman. "Confounding Issues Related to Determining Sample Size in Qualitative Research." International Journal of Social Research Methodology 21 (2018): 635-641; Oppong, Steward Harrison. "The Problem of Sampling in qualitative Research." Asian Journal of Management Sciences and Education 2 (2013): 202-210.



21 Research Limitations Examples

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.

Learn about our Editorial Process


Research limitations refer to the potential weaknesses inherent in a study. All studies have limitations of some sort, meaning that declaring limitations isn’t necessarily a bad thing, so long as your declaration of limitations is well thought out and explained.

Rarely is a study perfect. Researchers have to make trade-offs when developing their studies, which are often based upon practical considerations such as time and monetary constraints, weighing the breadth of participants against the depth of insight, and choosing one methodology or another.

In research, studies can have limitations such as limited scope, researcher subjectivity, and lack of available research tools.

Acknowledging the limitations of your study should be seen as a strength. It demonstrates transparency, humility, and respect for the scientific method, and it can bolster the integrity of the study. It can also inform future research directions.

Typically, scholars will explore the limitations of their study in either their methodology section, their conclusion section, or both.

Research Limitations Examples

Qualitative and quantitative research offer different perspectives and methods in exploring phenomena, each with its own strengths and limitations. So, I’ve split the limitations examples sections into qualitative and quantitative below.

Qualitative Research Limitations

Qualitative research seeks to understand phenomena in-depth and in context. It focuses on the ‘why’ and ‘how’ questions.

It’s often used to explore new or complex issues, and it provides rich, detailed insights into participants’ experiences, behaviors, and attitudes. However, these strengths also create certain limitations, as explained below.

1. Subjectivity

Qualitative research often requires the researcher to interpret subjective data. One researcher may examine a text and identify different themes or concepts as more dominant than others.

Close qualitative readings of texts are necessarily subjective – and while this may be a limitation, qualitative researchers argue this is the best way to deeply understand everything in context.

Suggested Solution and Response: To minimize subjectivity bias, you could consider cross-checking your own readings of themes and data against other scholars’ readings and interpretations. This may involve giving the raw data to a supervisor or colleague and asking them to code the data separately, then coming together to compare and contrast results.

2. Researcher Bias

The concept of researcher bias is related to, but slightly different from, subjectivity.

Researcher bias refers to the perspectives and opinions you bring with you when doing your research.

For example, a researcher who is explicitly of a certain philosophical or political persuasion may bring that persuasion to bear when interpreting data.

In many scholarly traditions, we will attempt to minimize researcher bias through the utilization of clear procedures that are set out in advance or through the use of statistical analysis tools.

However, in other traditions, such as in postmodern feminist research , declaration of bias is expected, and acknowledgment of bias is seen as a positive because, in those traditions, it is believed that bias cannot be eliminated from research, so instead, it is a matter of integrity to present it upfront.

Suggested Solution and Response: Acknowledge the potential for researcher bias and, depending on your theoretical framework, accept this, or identify procedures you have taken to seek a closer approximation to objectivity in your coding and analysis.

3. Generalizability

If you’re struggling to find a limitation to discuss in your own qualitative research study, then this one is for you: all qualitative research, of all persuasions and perspectives, cannot be generalized.

This is a core feature that sets qualitative data and quantitative data apart.

The point of qualitative data is to select case studies and similarly small corpora and dig deep through in-depth analysis and thick description of data.

Often, this will also mean that you have a non-randomized sample.

While this is a positive – you’re going to get some really deep, contextualized, interesting insights – it also means that the findings may not be generalizable to a larger population, because the small group of people in your study may not be representative of that wider population.

Suggested Solution and Response: Suggest future studies that take a quantitative approach to the question.

4. The Hawthorne Effect

The Hawthorne effect refers to the phenomenon where research participants change their ‘observed behavior’ when they’re aware that they are being observed.

This effect was first identified by Elton Mayo, who conducted studies of the effects of various factors on workers’ productivity. He noticed that no matter what he did – turning up the lights, turning down the lights, etc. – worker output increased compared with before the study took place.

Mayo realized that the mere act of observing the workers made them work harder – his observation was what was changing behavior.

So, if you’re looking for a potential limitation to name for your observational research study, highlight the possible impact of the Hawthorne effect (and how you could reduce your footprint or visibility in order to decrease its likelihood).

Suggested Solution and Response: Highlight ways you have attempted to reduce your footprint while in the field, and guarantee anonymity to your research participants.

5. Replicability

Quantitative research has a great benefit in that the studies are replicable – a researcher can get a similar sample size, duplicate the variables, and re-test a study. But you can’t do that in qualitative research.

Qualitative research relies heavily on context – a specific case study or specific variables that make a certain instance worthy of analysis. As a result, it’s often difficult to re-enter the same setting with the same variables and repeat the study.

Furthermore, the individual researcher’s interpretation is more influential in qualitative research, meaning even if a new researcher enters an environment and makes observations, their observations may be different because subjectivity comes into play much more. This doesn’t make the research bad necessarily (great insights can be made in qualitative research), but it certainly does demonstrate a weakness of qualitative research.

6. Limited Scope

“Limited scope” is perhaps one of the most common limitations listed by researchers – and while this is often a catch-all way of saying, “well, I’m not studying that in this study”, it’s also a valid point.

No study can explore everything related to a topic. At some point, we have to make decisions about what’s included in the study and what is excluded from the study.

So, you could say that a limitation of your study is that it doesn’t look at an extra variable or concept that’s certainly worthy of study but will have to be explored in your next project because this project has a clearly and narrowly defined goal.

Suggested Solution and Response: Be clear about what’s in and out of the study when writing your research question.

7. Time Constraints

This is also a catch-all claim you can make about your research project: that you would have included more people in the study, looked at more variables, and so on. But you’ve got to submit this thing by the end of next semester! You’ve got time constraints.

And time constraints are a recognized reality in all research.

But this means you’ll need to explain how time has limited your decisions. As with “limited scope”, this may mean that you had to study a smaller group of subjects, limit the amount of time you spent in the field, and so forth.

Suggested Solution and Response: Suggest future studies that will build on your current work, possibly as a PhD project.

8. Resource Intensiveness

Qualitative research can be expensive due to the cost of transcription, the involvement of trained researchers, and potential travel for interviews or observations.

So, resource intensiveness is similar to the time constraints concept. If you don’t have the funds, you have to make decisions about which tools to use, which statistical software to employ, and how many research assistants you can dedicate to the study.

Suggested Solution and Response: Suggest future studies that will gain more funding on the back of this ‘exploratory study’.

9. Coding Difficulties

Data analysis in qualitative research often involves coding, which can be subjective and complex, especially when dealing with ambiguous or contradicting data.

After naming this as a limitation in your research, it’s important to explain how you’ve attempted to address this. Some ways to ‘limit the limitation’ include:

  • Triangulation: Have 2 other researchers code the data as well and cross-check your results with theirs to identify outliers that may need to be re-examined, debated with the other researchers, or removed altogether.
  • Procedure: Use a clear coding procedure to demonstrate reliability in your coding process. I personally use the thematic network analysis method outlined in this academic article by Attride-Stirling (2001).

Suggested Solution and Response: Triangulate your coding findings with colleagues, and follow a thematic network analysis procedure.

10. Risk of Non-Responsiveness

There is always a risk in research that research participants will be unwilling or uncomfortable sharing their genuine thoughts and feelings in the study.

This is particularly true when you’re conducting research on sensitive topics, politicized topics, or topics where the participant is expressing vulnerability.

This is similar to the Hawthorne effect (aka participant bias), where participants change their behaviors in your presence; but it goes a step further, where participants actively hide their true thoughts and feelings from you.

Suggested Solution and Response: One way to manage this is to try to include a wider group of people with the expectation that there will be non-responsiveness from some participants.

11. Risk of Attrition

Attrition refers to the process of losing research participants throughout the study.

This occurs most commonly in longitudinal studies, where a researcher must return to conduct their analysis over spaced periods of time, often over a period of years.

Things happen to people over time – they move overseas, their life experiences change, they get sick, change their minds, and even die. The more time that passes, the greater the risk of attrition.

Suggested Solution and Response: One way to manage this is to try to include a wider group of people with the expectation that there will be attrition over time.

12. Difficulty in Maintaining Confidentiality and Anonymity

Given the detailed nature of qualitative data, ensuring participant anonymity can be challenging.

If you have a sensitive topic in a specific case study, even anonymizing research participants sometimes isn’t enough. People might be able to infer who you’re talking about.

Sometimes, this will mean you have to exclude some interesting data that you collected from your final report. Confidentiality and anonymity come before your findings in research ethics – and this is a necessary limiting factor.

Suggested Solution and Response: Highlight the efforts you have taken to anonymize data, and accept that confidentiality and accountability place extremely important constraints on academic research.

13. Difficulty in Finding Research Participants

A study that looks at a very specific phenomenon or even a specific set of cases within a phenomenon means that the pool of potential research participants can be very low.

Compound this with the fact that many people you approach may choose not to participate, and you could end up with a very small pool of subjects to explore. This may limit your ability to draw firm conclusions, even in a quantitative sense.

You may need to therefore limit your research question and objectives to something more realistic.

Suggested Solution and Response: Highlight that this is going to limit the study’s generalizability significantly.

14. Ethical Limitations

Ethical limitations refer to the things you cannot do based on ethical concerns identified either by yourself or your institution’s ethics review board.

This might include threats to the physical or psychological well-being of your research subjects, the potential of releasing data that could harm a person’s reputation, and so on.

Furthermore, even if your study follows all expected standards of ethics, you still, as an ethical researcher, need to allow a research participant to pull out at any point in time, after which you cannot use their data, which demonstrates an overlap between ethical constraints and participant attrition.

Suggested Solution and Response: Highlight that these ethical limitations are inevitable but important to sustain the integrity of the research.


Quantitative Research Limitations

Quantitative research focuses on quantifiable data and statistical, mathematical, or computational techniques. It’s often used to test hypotheses, assess relationships and causality, and generalize findings across larger populations.

Quantitative research is widely respected for its ability to provide reliable, measurable, and generalizable data (if done well!). Its structured methodology has strengths over qualitative research, such as the fact it allows for replication of the study, which underpins the validity of the research.

However, this approach is not without its limitations, explained below.

1. Over-Simplification

Quantitative research is powerful because it allows you to measure and analyze data in a systematic and standardized way. However, one of its limitations is that it can sometimes simplify complex phenomena or situations.

In other words, it might miss the subtleties or nuances of the research subject.

For example, if you’re studying why people choose a particular diet, a quantitative study might identify factors like age, income, or health status. But it might miss other aspects, such as cultural influences or personal beliefs, that can also significantly impact dietary choices.

When writing about this limitation, you can say that your quantitative approach, while providing precise measurements and comparisons, may not capture the full complexity of your subjects of study.

Suggested Solution and Response: Suggest a follow-up case study using the same research participants in order to gain additional context and depth.

2. Lack of Context

Another potential issue with quantitative research is that it often focuses on numbers and statistics at the expense of context or qualitative information.

Let’s say you’re studying the effect of classroom size on student performance. You might find that students in smaller classes generally perform better. However, this doesn’t take into account other variables, like teaching style, student motivation, or family support.

When describing this limitation, you might say, “Although our research provides important insights into the relationship between class size and student performance, it does not incorporate the impact of other potentially influential variables. Future research could benefit from a mixed-methods approach that combines quantitative analysis with qualitative insights.”

3. Applicability to Real-World Settings

Oftentimes, experimental research takes place in controlled environments to limit the influence of outside factors.

This control is great for isolation and understanding the specific phenomenon but can limit the applicability or “external validity” of the research to real-world settings.

For example, if you conduct a lab experiment to see how sleep deprivation impacts cognitive performance, the sterile, controlled lab environment might not reflect real-world conditions where people are dealing with multiple stressors.

Therefore, when explaining the limitations of your quantitative study in your methodology section, you could state:

“While our findings provide valuable information about [topic], the controlled conditions of the experiment may not accurately represent real-world scenarios where extraneous variables will exist. As such, the direct applicability of our results to broader contexts may be limited.”

Suggested Solution and Response: Suggest future studies that will engage in real-world observational research, such as ethnographic research.

4. Limited Flexibility

Once a quantitative study is underway, it can be challenging to make changes to it. This is because, unlike in grounded theory research, you put your study design in place in advance, and you can’t make changes part-way through.

Your study design, data collection methods, and analysis techniques need to be decided upon before you start collecting data.

For example, if you are conducting a survey on the impact of social media on teenage mental health, and halfway through, you realize that you should have included a question about their screen time, it’s generally too late to add it.

When discussing this limitation, you could write something like, “The structured nature of our quantitative approach allows for consistent data collection and analysis but also limits our flexibility to adapt and modify the research process in response to emerging insights and ideas.”

Suggested Solution and Response: Suggest future studies that will use mixed-methods or qualitative research methods to gain additional depth of insight.

5. Risk of Survey Error

Surveys are a common tool in quantitative research, but they carry risks of error.

There can be measurement errors (if a question is misunderstood), coverage errors (if some groups aren’t adequately represented), non-response errors (if certain people don’t respond), and sampling errors (if your sample isn’t representative of the population).

For instance, if you’re surveying college students about their study habits, but only daytime students respond because you conduct the survey during the day, your results will be skewed.

In discussing this limitation, you might say, “Despite our best efforts to develop a comprehensive survey, there remains a risk of survey error, including measurement, coverage, non-response, and sampling errors. These could potentially impact the reliability and generalizability of our findings.”

Suggested Solution and Response: Suggest future studies that will use other survey tools to compare and contrast results.

6. Limited Ability to Probe Answers

With quantitative research, you typically can’t ask follow-up questions or delve deeper into participants’ responses like you could in a qualitative interview.

For instance, imagine you are surveying 500 students about study habits in a questionnaire. A respondent might indicate that they study for two hours each night. You might want to follow up by asking them to elaborate on what those study sessions involve or how effective they feel their habits are.

However, quantitative research generally disallows this in the way a qualitative semi-structured interview could.

When discussing this limitation, you might write, “Given the structured nature of our survey, our ability to probe deeper into individual responses is limited. This means we may not fully understand the context or reasoning behind the responses, potentially limiting the depth of our findings.”

Suggested Solution and Response: Suggest future studies that engage in mixed-method or qualitative methodologies to address the issue from another angle.

7. Reliance on Instruments for Data Collection

In quantitative research, the collection of data heavily relies on instruments like questionnaires, surveys, or machines.

The limitation here is that the data you get is only as good as the instrument you’re using. If the instrument isn’t designed or calibrated well, your data can be flawed.

For instance, if you’re using a questionnaire to study customer satisfaction and the questions are vague, confusing, or biased, the responses may not accurately reflect the customers’ true feelings.

When discussing this limitation, you could say, “Our study depends on the use of questionnaires for data collection. Although we have put significant effort into designing and testing the instrument, it’s possible that inaccuracies or misunderstandings could potentially affect the validity of the data collected.”

Suggested Solution and Response: Suggest future studies that will use different instruments but examine the same variables to triangulate results.

8. Time and Resource Constraints (Specific to Quantitative Research)

Quantitative research can be time-consuming and resource-intensive, especially when dealing with large samples.

It often involves systematic sampling, rigorous design, and sometimes complex statistical analysis.

If resources and time are limited, it can restrict the scale of your research, the techniques you can employ, or the extent of your data analysis.

For example, you may want to conduct a nationwide survey on public opinion about a certain policy. However, due to limited resources, you might only be able to survey people in one city.

When writing about this limitation, you could say, “Given the scope of our research and the resources available, we are limited to conducting our survey within one city, which may not fully represent the nationwide public opinion. Hence, the generalizability of the results may be limited.”

Suggested Solution and Response: Suggest future studies that will have more funding or longer timeframes.

How to Discuss Your Research Limitations

1. In Your Research Proposal and Methodology Section

In the research proposal, which will become the methodology section of your dissertation, I would recommend taking the four following steps, in order:

  • Be Explicit about your Scope – If you limit the scope of your study in your research question, aims, and objectives, then you can set yourself up well later in the methodology to say that certain questions are “outside the scope of the study.” For example, you may identify the fact that the study doesn’t address a certain variable, but you can follow up by stating that the research question is specifically focused on the variable that you are examining, so this limitation would need to be looked at in future studies.
  • Acknowledge the Limitation – Acknowledging the limitations of your study demonstrates reflexivity and humility and can make your research more reliable and valid. It also pre-empts questions the people grading your paper may have, so instead of down-grading you for your limitations, they will congratulate you on explaining the limitations and how you have addressed them!
  • Explain your Decisions – You may have chosen your approach (despite its limitations) for a very specific reason. This might be because your approach remains, on balance, the best one to answer your research question. Or, it might be because of time and monetary constraints that are outside of your control.
  • Highlight the Strengths of your Approach – Conclude your limitations section by strongly demonstrating that, despite limitations, you’ve worked hard to minimize the effects of the limitations and that you have chosen your specific approach and methodology because it’s also got some terrific strengths. Name the strengths.

Overall, you’ll want to acknowledge your own limitations but also explain that the limitations don’t detract from the value of your study as it stands.

2. In the Conclusion Section or Chapter

In the conclusion of your study, it is generally expected that you return to a discussion of the study’s limitations. Here, I recommend the following steps:

  • Acknowledge issues faced – After completing your study, you will be increasingly aware of issues you may have faced that, if you re-did the study, you may have addressed earlier in order to avoid those issues. Acknowledge these issues as limitations, and frame them as recommendations for subsequent studies.
  • Suggest further research – Scholarly research aims to fill gaps in the current literature and knowledge. Having established your expertise through your study, suggest lines of inquiry for future researchers. You could state that your study had certain limitations, and “future studies” can address those limitations.
  • Suggest a mixed methods approach – Qualitative and quantitative research each have pros and cons. So, note those ‘cons’ of your approach, then say the next study should approach the topic using the opposite methodology, or using a mixed-methods approach that combines the breadth of quantitative analysis with the nuanced insights of an embedded qualitative case study.

Overall, be clear about both your limitations and how those limitations can inform future studies.

In sum, each type of research method has its own strengths and limitations. Qualitative research excels in exploring depth, context, and complexity, while quantitative research excels in examining breadth, generalizability, and quantifiable measures. Despite their individual limitations, each method contributes unique and valuable insights, and researchers often use them together to provide a more comprehensive understanding of the phenomenon being studied.

Attride-Stirling, J. (2001). Thematic networks: An analytic tool for qualitative research. Qualitative Research, 1(3), 385-405.

Atkinson, P., Delamont, S., Cernat, A., Sakshaug, J., & Williams, R. A. (2021). SAGE Research Methods Foundations. London: Sage Publications.

Clark, T., Foster, L., Bryman, A., & Sloan, L. (2021). Bryman’s Social Research Methods. Oxford: Oxford University Press.

Köhler, T., Smith, A., & Bhakoo, V. (2022). Templates in qualitative research methods: Origins, limitations, and new directions. Organizational Research Methods, 25(2), 183-210.

Lenger, A. (2019). The rejection of qualitative research methods in economics. Journal of Economic Issues, 53(4), 946-965.

Taherdoost, H. (2022). What are different research approaches? Comprehensive review of qualitative, quantitative, and mixed method research, their applications, types, and limitations. Journal of Management Science & Engineering Research, 5(1), 53-63.

Walliman, N. (2021). Research Methods: The Basics. New York: Routledge.


Perspectives on Medical Education, 8(4), August 2019

Limited by our limitations

Paula T. Ross

Medical School, University of Michigan, Ann Arbor, MI USA

Nikki L. Bibler Zaidi

Study limitations represent weaknesses within a research design that may influence outcomes and conclusions of the research. Researchers have an obligation to the academic community to present complete and honest limitations of a presented study. Too often, authors use generic descriptions to describe study limitations. Including redundant or irrelevant limitations is an ineffective use of the already limited word count. A meaningful presentation of study limitations should describe the potential limitation, explain the implication of the limitation, provide possible alternative approaches, and describe steps taken to mitigate the limitation. This includes placing research findings within their proper context to ensure readers do not overemphasize or minimize findings. A more complete presentation will enrich the readers’ understanding of the study’s limitations and support future investigation.

Introduction

Regardless of the format scholarship assumes, from qualitative research to clinical trials, all studies have limitations. Limitations represent weaknesses within the study that may influence outcomes and conclusions of the research. The goal of presenting limitations is to provide meaningful information to the reader; however, too often, limitations in medical education articles are overlooked or reduced to simplistic and minimally relevant themes (e.g., single institution study, use of self-reported data, or small sample size) [ 1 ]. This issue is prominent in other fields of inquiry in medicine as well. For example, despite the clinical implications, medical studies often fail to discuss how limitations could have affected the study findings and interpretations [ 2 ]. Further, observational research often fails to remind readers of the fundamental limitation inherent in the study design, which is the inability to attribute causation [ 3 ]. By reporting generic limitations or omitting them altogether, researchers miss opportunities to fully communicate the relevance of their work, illustrate how their work advances a larger field under study, and suggest potential areas for further investigation.

Goals of presenting limitations

Medical education scholarship should provide empirical evidence that deepens our knowledge and understanding of education [ 4 , 5 ], informs educational practice and process, [ 6 , 7 ] and serves as a forum for educating other researchers [ 8 ]. Providing study limitations is indeed an important part of this scholarly process. Without them, research consumers are hard-pressed to fully grasp the potential exclusion areas or other biases that may affect the results and conclusions provided [ 9 ]. Study limitations should leave the reader thinking about opportunities to engage in prospective improvements [ 9 – 11 ] by presenting gaps in the current research and extant literature, thereby cultivating other researchers’ curiosity and interest in expanding the line of scholarly inquiry [ 9 ].

Presenting study limitations is also an ethical element of scientific inquiry [ 12 ]. It ensures transparency of both the research and the researchers [ 10 , 13 , 14 ], as well as provides transferability [ 15 ] and reproducibility of methods. Presenting limitations also supports proper interpretation and validity of the findings [ 16 ]. A study’s limitations should place research findings within their proper context to ensure readers are fully able to discern the credibility of a study’s conclusion, and can generalize findings appropriately [ 16 ].

Why some authors may fail to present limitations

As Price and Murnan [ 8 ] note, there may be overriding reasons why researchers do not sufficiently report the limitations of their study. For example, authors may not fully understand the importance and implications of their study’s limitations or assume that not discussing them may increase the likelihood of publication. Word limits imposed by journals may also prevent authors from providing thorough descriptions of their study’s limitations [ 17 ]. Still another possible reason for excluding limitations is a diffusion of responsibility in which some authors may incorrectly assume that the journal editor is responsible for identifying limitations. Regardless of reason or intent, researchers have an obligation to the academic community to present complete and honest study limitations.

A guide to presenting limitations

The presentation of limitations should describe the potential limitations, explain the implication of the limitations, provide possible alternative approaches, and describe steps taken to mitigate the limitations. Too often, authors only list the potential limitations, without including these other important elements.

Describe the limitations

When describing limitations, authors should identify the limitation type to clearly introduce the limitation and specify its origin. This helps to ensure readers are able to interpret and generalize findings appropriately. Here we outline various limitation types that can occur at different stages of the research process.

Study design

Some study limitations originate from conscious choices made by the researcher (also known as delimitations) to narrow the scope of the study [ 1 , 8 , 18 ]. For example, the researcher may have designed the study for a particular age group, sex, race, ethnicity, geographically defined region, or some other attribute that would limit to whom the findings can be generalized. Such delimitations involve conscious exclusionary and inclusionary decisions made during the development of the study plan, which may represent a systematic bias intentionally introduced into the study design or instrument by the researcher [ 8 ]. The clear description and delineation of delimitations and limitations will assist editors and reviewers in understanding any methodological issues.

Data collection

Study limitations can also be introduced during data collection. An unintentional consequence of human subjects research is the potential of the researcher to influence how participants respond to their questions. Even when appropriate methods for sampling have been employed, some studies remain limited by the use of data collected only from participants who decided to enrol in the study (self-selection bias) [ 11 , 19 ]. In some cases, participants may provide biased input by responding to questions they believe are favourable to the researcher rather than their authentic response (social desirability bias) [ 20 – 22 ]. Participants may influence the data collected by changing their behaviour when they are knowingly being observed (Hawthorne effect) [ 23 ]. Researchers—in their role as an observer—may also bias the data they collect by allowing a first impression of the participant to be influenced by a single characteristic or impression of another characteristic either unfavourably (horns effect) or favourably (halo effect) [ 24 ].

Data analysis

Study limitations may arise as a consequence of the type of statistical analysis performed. Some studies may not follow the basic tenets of inferential statistical analyses when they use convenience sampling (i.e. non-probability sampling) rather than employing probability sampling from a target population [ 19 ]. Another limitation that can arise during statistical analyses occurs when studies employ unplanned post-hoc data analyses that were not specified before the initial analysis [ 25 ]. Unplanned post-hoc analysis may lead to statistical relationships that suggest associations but are no more than coincidental findings [ 23 ]. Therefore, when unplanned post-hoc analyses are conducted, this should be clearly stated to allow the reader to make proper interpretation and conclusions—especially when only a subset of the original sample is investigated [ 23 ].

Study results

The limitations of any research study will be rooted in the validity of its results—specifically threats to internal or external validity [ 8 ]. Internal validity refers to reliability or accuracy of the study results [ 26 ], while external validity pertains to the generalizability of results from the study’s sample to the larger, target population [ 8 ].

Examples of threats to internal validity include: effects of events external to the study (history), changes in participants due to time instead of the studied effect (maturation), systematic reduction in participants related to a feature of the study (attrition), changes in participant responses due to repeatedly measuring participants (testing effect), modifications to the instrument (instrumentation) and selecting participants based on extreme scores that will regress towards the mean in repeat tests (regression to the mean) [ 27 ].

Threats to external validity include factors that might inhibit generalizability of results from the study’s sample to the larger, target population [ 8 , 27 ]. External validity is challenged when results from a study cannot be generalized to its larger population or to similar populations in terms of the context, setting, participants and time [ 18 ]. Therefore, limitations should be made transparent in the results to inform research consumers of any known or potentially hidden biases that may have affected the study and prevent generalization beyond the study parameters.

Explain the implication(s) of each limitation

Authors should include the potential impact of the limitations (e.g., likelihood, magnitude) [ 13 ] as well as address specific validity implications of the results and subsequent conclusions [ 16 , 28 ]. For example, self-reported data may lead to inaccuracies (e.g. due to social desirability bias) which threatens internal validity [ 19 ]. Even a researcher’s inappropriate attribution to a characteristic or outcome (e.g., stereotyping) can overemphasize (either positively or negatively) unrelated characteristics or outcomes (halo or horns effect) and impact the internal validity [ 24 ]. Participants’ awareness that they are part of a research study can also influence outcomes (Hawthorne effect) and limit external validity of findings [ 23 ]. External validity may also be threatened should the respondents’ propensity for participation be correlated with the substantive topic of study, as data will be biased and not represent the population of interest (self-selection bias) [ 29 ]. Having this explanation helps readers interpret the results and generalize the applicability of the results for their own setting.

Provide potential alternative approaches and explanations

Often, researchers use other studies’ limitations as the first step in formulating new research questions and shaping the next phase of research. Therefore, it is important for readers to understand why potential alternative approaches (e.g. approaches taken by others exploring similar topics) were not taken. In addition to alternative approaches, authors can also present alternative explanations for their own study’s findings [ 13 ]. This information is valuable coming from the researcher because of the direct, relevant experience and insight gained as they conducted the study. The presentation of alternative approaches represents a major contribution to the scholarly community.

Describe steps taken to minimize each limitation

No research design is perfect and free from explicit and implicit biases; however, various methods can be employed to minimize the impact of study limitations. Some suggested steps to mitigate or minimize the limitations mentioned above include using neutral questions, randomized response techniques, forced-choice items, or self-administered questionnaires to reduce respondents’ discomfort when answering sensitive questions (social desirability bias) [ 21 ]; using unobtrusive data collection measures (e.g., use of secondary data) that do not require the researcher to be present (Hawthorne effect) [ 11 , 30 ]; using standardized rubrics and objective assessment forms with clearly defined scoring instructions to minimize researcher bias, or making rater adjustments to assessment scores to account for rater tendencies (halo or horns effect) [ 24 ]; or using existing data or control groups (self-selection bias) [ 11 , 30 ]. When appropriate, researchers should provide sufficient evidence that demonstrates the steps taken to mitigate limitations as part of their study design [ 13 ].

In conclusion, authors may be limiting the impact of their research by neglecting or providing abbreviated and generic limitations. We present several examples of limitations to consider; however, this should not be considered an exhaustive list nor should these examples be added to the growing list of generic and overused limitations. Instead, careful thought should go into presenting limitations after research has concluded and the major findings have been described. Limitations help focus the reader on key findings, therefore it is important to only address the most salient limitations of the study [ 17 , 28 ] related to the specific research problem, not general limitations of most studies [ 1 ]. It is important not to minimize the limitations of study design or results. Rather, results, including their limitations, must help readers draw connections between current research and the extant literature.

The quality and rigor of our research is largely defined by our limitations [ 31 ]. In fact, one of the top reasons reviewers report recommending acceptance of medical education research manuscripts involves limitations—specifically how the study’s interpretation accounts for its limitations [ 32 ]. Therefore, it is not only best for authors to acknowledge their study’s limitations rather than to have them identified by an editor or reviewer, but proper framing and presentation of limitations can actually increase the likelihood of acceptance. Perhaps, these issues could be ameliorated if academic and research organizations adopted policies and/or expectations to guide authors in proper description of limitations.

  • Future Perfect

Science has a short-term memory problem

Scientists are trapped in an endless loop of grant applications. How can we set them free?

by Celia Ford


Back in 2016, Vox asked 270 scientists to name the biggest problems facing science . Many of them agreed that the constant search for funding, brought on by the increasingly competitive grant system , serves as one of the biggest barriers to scientific progress.

Even though we have more scientists throwing more time and resources at projects, we seem to be blocked on big questions — like how to help people live healthier for longer — and that has major real-world impacts.


Grants are funds given to researchers by the government or private organizations, ranging from tens to hundreds of thousands of dollars earmarked for a specific project. Most grant applications are very competitive. Only about 20 percent of applications for research project grants at the National Institutes of Health (NIH), which funds the vast majority of biomedical research in the US, are successful.

If you do get a grant, it usually expires after a few years — far less time than it normally takes to make groundbreaking discoveries. And most grants, even the most prestigious ones, don’t provide enough money to keep a lab running on their own.

Between the endless cycle of grant applications and the constant turnover of early-career researchers in labs, pushing science forward is slow at best and Sisyphean at worst.

In other words, science has a short-term memory problem — but there are steps funding agencies can take to make it better.

Grants are too small, too short, and too restrictive

Principal investigators — often tenure-track university professors — doing academic research in the US are responsible not only for running their own lab, but also for funding it. That includes the costs of running experiments, keeping the lights on, hiring other scientists, and often covering their own salary, too. In this way, investigators are more like entrepreneurs than employees, running their labs like small-business owners.

In the US, basic science research, studying how the world works for the sake of expanding knowledge, is mostly funded by the federal government . The NIH funds the vast majority of biomedical research, and the National Science Foundation (NSF) funds other sciences, like astrophysics, geology, and genetics. The Advanced Research Projects Agency for Health (ARPA-H) also funds some biomedical research, and the Defense Advanced Research Projects Agency (DARPA) funds technology development for the military, some of which finds uses in the civilian world, like the internet .

The grant application system worked well a few decades ago, when over half of submitted grants were funded . But today, we have more scientists — especially young ones — and less money, once inflation is taken into account. Getting a grant is harder than ever, scientists I spoke with said. What ends up happening is that principal investigators are forced to spend more of their time writing grant applications — which often take dozens of hours each — than actually doing the science they were trained for. Because funding is so competitive, applicants increasingly have to twist their research proposals to align with whoever will give them money. A lab interested in studying how cells communicate with each other, for example, may spin it as a study of cancer, heart disease, or depression to convince the NIH that its project is worth funding.

Federal agencies generally fund specific projects, and require scientists to provide regular progress updates. Some of the best science happens when experiments lead researchers in unexpected directions, but grantees generally need to stick with the specific aims listed in their application or risk having their funding taken away — even if the first few days of an experiment suggest things won’t go as planned.

This system leaves principal investigators constantly scrambling to plug holes in their patchwork of funding. In her first year as a tenure-track professor, Jennifer Garrison, now a reproductive longevity researcher at the Buck Institute, applied for 45 grants to get her lab off the ground. “I’m so highly trained and specialized,” she told me. “The fact that I spend the majority of my time on administrative paperwork is ridiculous.”

Relying on a transient, underpaid workforce makes science worse

For the most part, the principal investigators applying for grants aren’t doing science — their graduate students and postdoctoral fellows are. While professors are teaching, doing administrative paperwork, and managing students, their early-career trainees are the ones who conduct the experiments and analyze data.

Since they do the bulk of the intellectual and physical labor, these younger scientists are usually the lead authors of their lab’s publications. In smaller research groups, a grad student may be the only one who fully understands their project.

In some ways, this system works for universities. With most annual stipends falling short of $40,000, “Young researchers are highly trained but relatively inexpensive sources of labor for faculty,” then-graduate researcher Laura Weingartner told Vox in 2016.

Grad students and postdocs are cheap, but they’re also transient. It takes an average of six years to earn a PhD , with only about three to five of those years devoted to research in a specific lab. This time constraint forces trainees to choose projects that can be wrapped up by the time they graduate, but science, especially groundbreaking science, rarely fits into a three- to five-year window. CRISPR, for instance, was first characterized in the ’90s — 20 years before it was first used for gene editing.

Trainees generally try to publish their findings by the time they leave, or pass ownership along to someone they have trained to take the wheel. The pressure to squeeze exciting, publishable data from a single PhD thesis project forces many inexperienced scientists into roles they can’t realistically fulfill. Many people (admittedly, myself included , as a burnt-out UC Berkeley neuroscience graduate student) wind up leaving a trail of unfinished experiments behind when they leave academia — and have no formal obligation to complete them.

When the bulk of your workforce is underpaid , burning out , and constantly turning over, it creates a continuity problem. When one person leaves, they often take a bunch of institutional knowledge with them. Ideally, research groups would have at least one or two senior scientists — with as much training as a tenured professor — working in the lab to run experiments, mentor newer scientists, and serve as a stable source of expertise as other researchers come and go.

One major barrier here: Paying a highly trained scientist enough to compete with six-figure industry jobs costs far more than a single federal grant can provide. One $250,000/year NIH R01 — the primary grant awarded to scientists for research projects — barely funds one person’s salary and benefits. While the NIH has specialized funding that students, postdocs, junior faculty, and other trainees can apply for to pay their own wages, funding opportunities for senior scientists are limited. “It’s just not feasible to pay for a senior scientist role unless you have an insane amount of other support,” Garrison told me.

How can we help scientists do cooler, more ambitious research?

Funding scientists themselves, rather than the experiments they say they’ll do, helps — and we already have some evidence to prove it.

The Howard Hughes Medical Institute (HHMI) has a funding model worth replicating. It is driven by a “people, not projects” philosophy, granting scientists many years’ worth of money without tying them down to specific projects. Grantees continue working at their home institution, but they — along with their postdocs — become employees of HHMI, which pays their salary and benefits.

HHMI reportedly provides enough funding to operate a small- to medium-sized lab without requiring any extra grants. The idea is that if investigators are simply given enough money to do their jobs, they can redirect all their wasted grant application time toward actually doing science. It’s no coincidence that over 30 HHMI-funded scientists have won Nobel Prizes in the past 50 years.

The Arc Institute , a new, independent nonprofit collaboration partnered with research giants Stanford, UC Berkeley, and UC San Francisco, also provides investigators and their labs with renewable eight-year “no-strings-attached” grants. Arc aims to give scientists the freedom and resources to do the slow, unsexy work of developing better research tools — something crucial to science but unappealing to scientific journals (and scientists who need to publish stuff to earn more funding).

Operating Arc is expensive, and the funding model currently relies on donations from philanthropists and tech billionaires. Arc supports eight labs so far, and hopes to expand to no more than 350 scientists someday — far short of the 50,000-some biomedical researchers applying for grants every year.

For now, institutional experiments like Arc are just that: experiments. They’re betting that scientists who feel invigorated, creative, and unburdened will be better equipped to take the risks required to make big discoveries.

Building brand-new institutions isn’t the only way to break the cycle of short-term, short-sighted projects in biomedical research. Anything that makes it financially easier for investigators to keep their labs running will help. Universities could pay the salaries of their employees directly, rather than making investigators find money for their trainees themselves. Federal funding agencies could also make grants bigger to match the level of inflation — but Congress is unlikely to approve that kind of spending.

Science might also benefit from having fewer, better-paid scientists in long-term positions, rather than relying on the labor of underpaid, under-equipped trainees. “I think it would be better to have fewer scientists doing real, deep work than what we have now,” Garrison said.

It’s not that scientists aren’t capable of creative, exciting, ambitious work — they’ve just been forced to bend to a grant system that favors short, risk-averse projects. And if the grant system changes, odds are science will too.

Clarification, September 12, 2:15 pm ET: This story, published September 11, has been changed to make it clearer that Arc Institute is independent from its university partners.


  • 23 September 2024

Budget cuts hit world’s largest cancer-research funder: what it means for scientists

  • Heidi Ledford


Money from the Cancer Moonshot programme supported by President Joe Biden is drying up, leading to a funding cut for the National Cancer Institute. Credit: Vanessa Leroy/Bloomberg/Getty

For the first time in nearly a decade, the US National Cancer Institute (NCI) is grappling with a budget cut — with scant hope for a boost in the coming year.

The NCI’s budget of US$7.2 billion in fiscal year 2024 still secures its position as the world’s largest funder of cancer research. But it leaves the agency $96 million short of the previous year’s figure, largely owing to the end of programmes such as the first Cancer Moonshot Initiative, which were funded separately from the agency’s core budget.

And next year is unlikely to bring a reprieve. Thanks to a two-year agreement in the US Congress to limit the nation’s debt , it’s doubtful that the NCI will see a boost in 2025, says Jon Retzlaff, chief policy officer at the American Association for Cancer Research (AACR) in Philadelphia, Pennsylvania. “We’re likely going to be in this position again next year,” he says. “It’s very tough to go in and convince lawmakers that this is one area they can increase when, if they do that, they’ll have to cut many other programmes dramatically.”


That did not stop the NCI from setting an ambitious target of $11.5 billion for its 2026 budget proposal released on 4 September. But this was the same budget that the agency had proposed in its 2025 request. That’s unusual: institute directors often increase their proposed budget each year. (The NCI is part of the US National Institutes of Health, but submits a separate budget request, sometimes called a bypass budget, each year.)

“It felt like giving an even bigger number really wasn’t wise in this current era,” NCI director Kimryn Rathmell told a panel of NCI advisers during a meeting on 3 September. “We’re trying to signal that we get it. We understand that there is a financial reality, and we understand that there are real economic constraints facing our country.”

Behind the decline

The NCI’s shrinking budget does not mean that the agency has lost bipartisan support from US lawmakers, says Retzlaff. And the decline was not due to cuts to the agency’s base budget, but rather because special, separately funded programmes administered by the NCI have come to an end and were not replaced. These include the initial Cancer Moonshot programme funded by the 21st Century Cures Act, which was signed into law in 2016.

THE END OF A GROWTH SPURT. Graphic shows the budget of the US National Cancer Institute declined in 2024 after years of growth.

Source: National Cancer Institute

But over the past two decades, the steady growth in the NCI’s coffers has not translated into increased purchasing power, said Weston Ricks, director of the NCI’s Office of Budget and Finance, at the 3 September meeting. In absolute dollars, the NCI’s base budget rose from $4.6 billion in 2003 to $7.2 billion in 2024. But that increase translates to a 15% loss in purchasing power after factoring in the rate of biomedical inflation, a measure that typically outpaces overall consumer inflation, Ricks said.
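The 15% figure can be roughly reconstructed from the two budget numbers alone. A back-of-the-envelope sketch (assuming the statement means that the 2024 budget, expressed in 2003 dollars, buys 15% less than the 2003 budget did; the actual biomedical inflation series is not quoted in the article):

```python
# Illustrative back-calculation only; the real calculation would use the
# published Biomedical Research and Development Price Index (BRDPI) series.
nominal_2003 = 4.6   # NCI base budget, $ billion, 2003
nominal_2024 = 7.2   # NCI base budget, $ billion, 2024
loss = 0.15          # stated loss in purchasing power relative to 2003

# If nominal_2024 / deflator = (1 - loss) * nominal_2003, the implied
# cumulative biomedical inflation over 2003-2024 is:
deflator = nominal_2024 / ((1 - loss) * nominal_2003)
annual = deflator ** (1 / 21) - 1

print(f"implied cumulative inflation 2003-2024: {deflator - 1:.0%}")  # ~84%
print(f"implied average annual inflation: {annual:.1%}")              # ~3.0% per year
```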

Meanwhile, the number of grant applicants to the NCI has increased by 40% in the past decade, compared with 17% at other institutes and centres in the National Institutes of Health.

With all of this in mind, the institute must make some difficult decisions, Rathmell said. The agency will prioritize studies that could lead to new therapies over research intended to improve access to existing therapies. It will also favour research projects initiated by grant applicants, rather than grants that target specific subjects chosen by NCI officials.

And Rathmell pledged to work to hold steady the proportion of grants awarded to young investigators — an approach that the AACR supports, says Retzlaff. “That pipeline is the future of cancer research,” he says. “It needs to be a priority.”

doi: https://doi.org/10.1038/d41586-024-02978-2


Options and limitations in measuring the impact of research grants—evidence from Denmark and Norway


Liv Langfeldt, Carter Walter Bloch, Gunnar Sivertsen, Options and limitations in measuring the impact of research grants—evidence from Denmark and Norway, Research Evaluation, Volume 24, Issue 3, July 2015, Pages 256–270, https://doi.org/10.1093/reseval/rvv012


Competitive grant schemes are set up with the intention of improving research performance. It may, however, be difficult to find evidence of the intervention impact of research grants for ex post evaluations of grant schemes. Based on data on applicants to Danish and Norwegian open mode grant schemes—research projects as well as postdoc fellowships—this article applies difference-in-difference analysis to study the extent to which research grants are likely to affect the publication and citation rates of the principal investigators (PIs).

The results show higher increases in the number of publications for grant recipients than for rejected applicants, while increases in mean normalized citation rates were not significantly higher for the successful applicants. In other words, the grants seem to have increased productivity, e.g. by helping PIs to add staff to their research teams, but not to have influenced the importance of the research as measured by average citations. However, along with increases in the number of publications also came a greater increase in the number of highly cited papers for grant recipients than for rejected applicants. In sum, the analyses indicate that the measurement of grant impact is sensitive to how research performance is defined and tested using bibliometric indicators. Furthermore, the applicants’ complex landscape of multiple projects and grants makes it difficult to isolate the output of a single grant. Hence, using bibliometrics to measure the impact of smaller grant schemes and smaller grants may often yield inconclusive results.

1. Introduction

R&D funding schemes aiming at enhancing conditions for research and improving research output are major instruments in public research policy. A key reason for allocating part of national research funding as project grants, rather than solely relying on block funding to the research institutions, and a raison d’être for research councils and other bodies funding fundamental/academic research, is to target the most promising research projects and support the best researchers. The underlying idea is that competitive grant schemes can increase research performance and optimize funding impact. In recent years, impact evaluations of such funding schemes have become more common, as funding authorities want evidence that the grants have in fact yielded the intended effects on research performance and scientific quality. 1 Evaluating impact is, however, generally a complex and demanding endeavour. Even when we put aside the complexities of investigating the (broader) impacts of research (Donovan 2011), and concentrate on how research grants impact first-order outputs such as publications, adequate tests of the intervention impact of grant schemes are hard to design and perform. In addition to good data on the relevant funding schemes and grant awardees, we need a matching control group, adequate indicators and time windows for measuring impact/change, and a way to attribute the research output to specific grants (Vedung 1997; Nedeva et al. 2012).

Many studies have examined the correlation between past performance and the selection/peer review of grant proposals, and found that applicants tend to have better track records (citation scores) than non-applicants, and that awarded applicants on average have better track records than the rejected (but not always better than the best performing rejected applicants; van Leeuwen and Moed 2012; Bornmann et al. 2010; Neufeld and von Ins 2011). However, in this article we focus on the impact of the grants on the subsequent performance of the researchers. Impacts of R&D funding schemes have proved hard to measure based on bibliometric data, as has been demonstrated in numerous studies. Some impact on productivity (number of publications) has been found, but little impact on citation rates (Sandström 2009; Rigby 2011; Benavente 2012). Research projects are complex activities and often lack one-to-one relations between principal investigators (PIs), grants, research activities/projects and publications. Moreover, citations are not an unproblematic proxy of research performance. Difficulties in identifying an impact of funding schemes can thus be due to complex funding structures—e.g. rejected grant applications may be funded from other sources, projects may profit from multiple grants and PIs may be involved in multiple, parallel (and interrelated) projects, thus limiting the possibilities of attributing particular outcomes to a specific grant or rejection, or of comparing the outputs of funded and rejected projects. Negative results may also be due to measurement problems, such as limitations in available data, the highly skewed distribution of citations, and varying time lags between grant application and publication.

Some notable findings in previous studies include a study matching grants with the publication records of Swedish researchers, which found that productivity measures (number of papers) were related to the total sum of grants (all sources for a 6-year period), but that citation scores were essentially unaffected by research grants (Sandström 2009). Likewise, a study comparing outcomes for successful and unsuccessful applicants for Chilean research funding found a significant impact on the number of publications, but no impact on citations (Benavente 2012). Another study—comparing funded and rejected applicants for large 6-year grants to young investigators in Sweden—found no impact on the number of publications, but positive effects for the funded PIs were found in terms of a higher proportion of international co-authorship, as well as securing further funding for their research groups (Melin and Danell 2006). Moreover, there are some indications that grants increase the probability of passing publication and citation thresholds, but do not significantly affect the total number of citations: in a study comparing outcomes for successful and unsuccessful applicants for NIH postdoc grants (comparing those with peer review scores just above and just below the cut-off), Jacob and Lefgren (2011) found that grant receipt increased the number of publications (by about 20% in the 5-year period following the grant award) and significantly increased the probability of crossing a citation threshold (200 citations in the 10-year period after the grant), but had no significant effect on the total number of citations. They argued that this is consistent with grants improving outcomes at lower levels, but having little effect on high achievers (Jacob and Lefgren 2011: 870).

Another approach to studying the impacts of research grants is to match input and output factors by analysing publications attributed to funding sources. Whereas such studies are more promising in terms of studying the impacts of specific grants and comparing the impacts of different funding schemes, data limitations are more challenging and results are ambiguous. Notably, a study of funding body acknowledgements in published papers in two disciplines (physics and biology) found that a large proportion of publications contained multiple funding acknowledgements (Rigby 2011). Hence, evaluations based on methods assuming that papers owe their existence to a single funding source may be misconceived. Moreover, when studying citation scores by the number of funding body acknowledgements, no clear relationship between the number of funders and research impact was found (Rigby 2011). Still, there are indications that publications with an accredited funding source have higher impact than publications which do not accredit grants. A study by Zhao (2010), comparing the citation scores of publications (within library and information sciences) with accredited funding sources and publications without accredited funding, showed higher citation scores for papers accrediting funding. Other studies found that highly cited papers within malaria research acknowledged multiple funding sources more often than papers with lower citation scores within this field (MacLean et al. 1998), and that multiple funding sources of papers within biomedical research were correlated with publication in high-impact journals (Lewison and Dawson 1998). Another study, based on lists of publications accrediting grants from the Human Frontier Science Program (HFSP), found significant citation impact compared to the world average, and also higher citation impact than other publications by the co-authors (HFSP 2009). Note that these studies did not address impact/change in researchers’ performance, but merely the difference between publications resulting/not resulting from (specific) funding schemes.

With these mixed findings in previous studies as a point of departure, the focus in this article is on data from recent evaluations of open mode funding schemes in two Scandinavian countries, Norway and Denmark.

FRIPRO is a key funding instrument of the Research Council of Norway and the only Norwegian scheme allocating project grants for independent basic research in all fields of research. It is based on open calls and national competition, has a high standing and popularity in the Norwegian research community, and also has a low success rate (11 per cent of applications were funded in 2010). Fostering research of high scientific quality, promoting research recruitment and developing basic theory and methods are among the aims of the scheme. University researchers are the main target group, receiving 86 per cent of the FRIPRO funding. In the studied period (grant decisions in 2005–7), 413 research projects and PhD and postdoc fellowships were funded; the average total project size was 0.5–0.7 million Euros 2 and the normal project period was 3 or 4 years.

The programmes of the Danish Council for Independent Research to fund research projects and postdoc fellowships are very similar to the Norwegian FRIPRO programme, both in terms of aims and funding criteria. Research project grants are the most important instrument for the Danish Council, accounting for around 60% of Council funding. In the period 2001–8, around 2,600 project grants were awarded, to a total sum of approximately 480 million Euros. In the same period, around 880 postdoc fellowships were awarded. The Danish research project grants differ somewhat from the Norwegian in terms of size and success rate. For example, the average size of project grants in 2007 and 2008 was around 0.25 million Euros and the success rate around 23%. The success rate for project grants has, however, fallen significantly in more recent years and is now around the same level as in Norway.

Evaluations of the schemes for project grants and postdoc fellowships from the Danish Council for Independent Research and of the similar Norwegian FRIPRO scheme failed to demonstrate any substantial impact of grants on PIs’ publication or citation rates, but still found some notable differences between successful and rejected applicants based on their survey replies (Faber et al. 2010; Bloch et al. 2011; Langfeldt et al. 2012). In both countries the survey data indicated important impacts of the funding schemes, and bibliometric analysis showed that successful applicants had significantly higher publication and citation rates than rejected applicants; however, the funding or rejection of applications did not seem to have a measurable impact on the citation or publication rates of the PIs—the before–after changes in publication and citation rates did not differ significantly between the funded and rejected applicants.

In this article we study how and to what extent research grants are likely to affect publication and citation rates, based on further analysis of the data from these Danish and Norwegian evaluations. Using the data sets provided for the evaluations, we test a variety of impact indicators and examine possible explanations of the discrepancy between the survey data and the bibliometric analyses. Notably, when using more refined methods, more positive impacts on PIs’ publication and citation rates appear than in the evaluations of these schemes. In this way we identify and discuss numerous factors which complicate the identification and measurement of the impact of funding schemes.

The analyses are conducted using field-normalized indicators of citation impact and through comparison of performance before and after grant application. Our data are thus better suited to this type of analysis than data that are only measured after application or are not field normalized (such as the number of citations). These data allow us to calculate difference-in-difference measures, i.e. whether publication activity or citation impact has increased more over the grant period for successful applicants than for rejected applicants. This is important given that ex ante performance is typically greater for grant recipients. Hence, a finding that grant recipients also perform better ex post does not necessarily indicate an effect of grant funding.

While there are some differences between Denmark and Norway, the general result of the analysis is that increases in productivity are higher for grant recipients, while increases in citation impact are not significantly greater. It should also be noted that these increases in productivity include a greater production of highly cited papers.

The next section describes the Norwegian and Danish evaluations, the bibliometric data used in the analysis, and the methods applied. Section 3 presents the results of the analysis, while Sections 4 and 5 include a discussion of the results and their implications, followed by concluding remarks.

2. Data and methods

2.1 The data set and methods for the study of the Norwegian FRIPRO grants

In the Norwegian part of the study, the publication and citation rates of PIs who were granted and/or rejected in the above-mentioned FRIPRO scheme are analysed before and after their evaluations in 2005–7. Both grant reception and rejection for the same PI are possible in the period, since it is the PIs of the FRIPRO applications and their scientific articles, not their projects, that constitute the main unit of study in the bibliometric analysis. The PIs are thereby divided into two partly overlapping groups, where grant recipients have been awarded at least one grant during the period and rejected applicants have not received any FRIPRO grant during the period. Their articles are divided into two 5-year periods in order to detect possible changes after funding or rejection. The first period includes the application year, while the last period commences the year after the start-up year.

The publication and citation rates are studied with data from the National Citation Report for Norway, a bibliographic database from Thomson Reuters representing all journal articles that have been indexed in Web of Science since 1981 with at least one address indicating an institutional affiliation in Norway. In the 10-year period selected for analysis here, 2001–10, the database consists of 72,263 articles. Of these, 22,030 articles (30.4%) have been matched (using author names) with PIs included in this study and selected for further analysis.

The Web of Science represents the scientific production in the natural sciences and medicine more comprehensively than in other areas. Compared to complete data for scientific publications recorded at the institutional level in Norway (the Cristin database), the following shares of peer-reviewed original research publications are covered by the Web of Science ( Sivertsen and Larsen 2012 ): Natural sciences: 80%; Health sciences: 76%; Engineering sciences: 61%; Social sciences: 20%; Humanities: 11%. Not surprisingly, a number of PIs in the last mentioned fields could not be identified with publications in the database. In addition, for those that could be identified, we found a limited number of publications even at the level of disciplines in several instances. In order to derive all the indicators for this study, PIs need to have at least one publication in Web of Science in the relevant period. This has resulted in the removal of a number of PIs in the humanities and in several subfields of the social sciences. Furthermore, we have excluded a small number of very extreme values (PIs with a mean normalized citation score (MNCS) that is more than 10 times the world average).
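As an illustration of this window construction, the sketch below (hypothetical column names and toy values, not the evaluations’ actual code) assigns each article to the ‘before’ or ‘after’ window and applies the MNCS > 10 exclusion:

```python
import pandas as pd

# Toy publication records; in practice these come from matching National
# Citation Report / Web of Science articles to PIs, as described above.
pubs = pd.DataFrame({
    "pi_id":        [1, 1, 1, 2, 2],
    "pub_year":     [2004, 2006, 2009, 2003, 2008],
    "app_year":     [2006, 2006, 2006, 2005, 2005],
    "startup_year": [2007, 2007, 2007, 2006, 2006],
    "mncs":         [1.2, 0.8, 2.5, 0.4, 1.1],
})

def assign_window(row):
    # "Before": the 5 years up to and including the application year.
    if row.app_year - 4 <= row.pub_year <= row.app_year:
        return "before"
    # "After": the 5 years starting the year after the project start-up year.
    if row.startup_year + 1 <= row.pub_year <= row.startup_year + 5:
        return "after"
    return None

pubs["window"] = pubs.apply(assign_window, axis=1)
pubs = pubs.dropna(subset=["window"])

# Drop extreme outliers: PIs whose mean normalized citation score is more
# than 10 times the world average, as described in the text.
pi_mncs = pubs.groupby("pi_id")["mncs"].mean()
pubs = pubs[pubs["pi_id"].isin(pi_mncs[pi_mncs <= 10].index)]
print(pubs)
```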

Totals of 306 granted and 1,036 rejected PIs are included in the study. Some of these PIs may be co-authors of each other’s articles, which may diminish the differences between the two groups. However, we found that only 7.6% of the articles represent an overlap between those that can be attributed to the granted PIs and those that can be attributed to the rejected PIs. Table 1 shows the included PIs split by gender and research area, and the total size of the applications.

Table 1. Descriptive statistics for the Norwegian samples of PIs for research project grants and postdocs

                                       PIs—research projects          Postdocs
                                       Granted       Rejected         Granted       Rejected
Number                                 242           812              64            224
Sex (% women)                          24.0          28.9             40.6          42.0
Application size (million Euros)a      126.5         469.5            18.0          64.5
Natural sciences (%)                   49.6          40.8             43.8          33.0
Medical sciences (%)                   31.4          32.0             26.6          29.9
Technical sciences (%)                 2.9           3.8              3.1           3.6
Social sciences (%)                    10.7          15.5             12.5          21.4
Humanities (%)                         5.4           7.9              14.1          12.1

Sources: Bibliometric data: National Citation Report for Norway (NCR)/Thomson Reuters. Application data: The Research Council of Norway.

a Sum of applied amounts. Exchange rate 8.542 (from Norwegian kroner, 6 March 2015).

2.2 The data set and methods for the study of Danish project grants and postdoc grants

The Danish bibliometric data set stems from two separate, but similar, evaluations of funding instruments of the Danish Council for Independent Research. The first concerns an evaluation of the funding of postdocs and of specialized programmes to support younger researchers and female researchers over the period 2001–8 (Faber et al. 2010). The largest group examined in this study was postdocs: in all, 2,402 researchers applied for a postdoc grant over 2001–8, and of these 802 were awarded a grant. The second evaluation examines the funding of research projects over the same period, 2001–8. In all, 4,077 researchers submitted an application for a research project grant over the period, with 1,602 receiving a grant.

As with the Norwegian evaluation, both Danish evaluations are broad-reaching, drawing on a number of approaches including a questionnaire survey, interviews, analysis of register-based career and income data, and bibliometric analysis. For both types of grants, and in particular for research projects, many applicants received more than one grant from the Danish Council during the period. For example, the 1,602 grant recipients were awarded in total 2,604 research project grants over the period. For multiple grant recipients, the evaluation focuses on the first grant received in the period. Rejected applicants are included in the analyses for both evaluations, and are defined as not having received any grants from the Danish Council over the period.

Bibliometric analyses were conducted in a similar fashion in the two Danish evaluations. In each case a small subset of grant awardees and rejected applicants was selected, owing to resource constraints. Grant recipients and rejected applicants were selected according to a matching procedure, where the two groups were matched according to field, application year, gender, age, academic position, and years since PhD degree. Applicants were chosen from a 3-year period (in terms of the year of application) in order to allow for the construction of 4-year windows both before the application and afterwards (starting 2 years after application), for which data on journal publications and citations were collected. The period 2001–3 was chosen for postdocs and 2002–4 for research projects. For postdocs, the bibliometric data cover 206 applicants (104 grant awardees and 102 rejected applicants) evenly divided across three main fields: natural sciences, medical sciences, and engineering and technical sciences. The bibliometric sample for research projects attempted to cover all five main fields (including social sciences and humanities), but coverage in the Scopus database (in particular concerning citations) was not adequate in all cases for the social sciences and humanities. Out of an initial sample of 208 observations across all five main fields, the resulting sample consists of 174 PIs: 88 grant awardees and 86 rejected applicants. Table 2 shows basic statistics for the matched samples used in the analysis below. The data source for postdocs is the Thomson Reuters Web of Science database, while the source for research projects is the Scopus database. Publication and citation data were collected for each researcher using a broad search method followed by manual validation for errors. Field-based indicators were constructed based on the full database using pre-defined subject categories. 3

Table 2. Descriptive statistics for the Danish matched samples of PIs for research project grants and postdocs

                                       PIs—research projects          Postdocs
                                       Granted       Rejected         Granted       Rejected
Number                                 88            86               104           102
Sex (% women)                          20.4          24.4             30.8          32.4
Application size (1,000 euros)         287.1         306.7            a             a
Natural sciences (%)                   23.9          24.4             36.5          35.3
Medical sciences (%)                   23.9          24.4             50.0          51.0
Technical sciences (%)                 23.9          24.4             13.5          13.7
Social sciences (%)                    18.2          19.8             –             –
Humanities (%)                         10.2          7.0              –             –

Sources: Bibliometric data: Scopus for PIs of research projects and Thomson Reuters Web of Science for postdocs. Application data: Danish Agency for Science, Technology and Innovation.

a Data not available. All postdoc grants awarded for 2-year period.

2.3 Options and limitations in comparing results: Similarities and differences between the Norwegian and Danish data and analysis

There are a number of challenges in attempting to estimate the effects of grants on research performance. Two of the most important are constructing a counterfactual case for comparison and isolating the outcomes of grant projects from other work by the researchers in question. There are a number of similarities in the Danish and Norwegian approaches, but also some important differences. Concerning the issue of a counterfactual, given that we are unable to observe what would have happened to PIs had they not received the grants, we need to find a control group for comparison that is as similar as possible to the grantees. Rejected applicants are typically the best possible choice to form a control group, and have also been used in a number of the other studies mentioned above (e.g. Melin and Danell 2006; Jacob and Lefgren 2011; Benavente 2012; Van Leeuwen and Moed 2012). Both the Danish and Norwegian studies have used rejected applicants as a control group, but have followed different approaches. The Norwegian study utilizes a broad approach that includes all grantees and rejected applicants for the years 2005–7. The key advantages of this approach are that it covers the full population of grantees, as opposed to a subsample, and that the large sample improves the precision of estimates.

In contrast, the Danish study relies on a small sample of grantees and rejected applicants chosen through a matching procedure. The aim of the matching procedure is to isolate the effects of the grant by ensuring that the selected grant awardees and rejected applicants are comparable in all respects other than the receipt of the grant, so that any differences in publication activity are not due to differences in academic position, research experience, and so on. The Danish samples for both postdocs and PIs of research projects consist of pairs of grantees and rejected applicants with the best match on all criteria. Given that the bibliometric data were collected after sample selection, it was, however, not possible to include prior publications and citations as matching criteria.
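A matching procedure of this general kind might be sketched as follows: a simplified, greedy 1:1 pairing on hypothetical data, matching exactly on field and gender and on nearest age and years since PhD (the actual evaluation also matched on application year and academic position):

```python
import pandas as pd

# Hypothetical applicant table; column names and values are illustrative.
applicants = pd.DataFrame({
    "pi_id":   [1, 2, 3, 4, 5, 6],
    "granted": [1, 0, 0, 1, 0, 0],
    "field":   ["nat", "nat", "nat", "med", "med", "med"],
    "gender":  ["f", "f", "m", "m", "m", "m"],
    "age":     [41, 43, 39, 52, 50, 37],
    "phd_age": [9, 11, 7, 20, 19, 6],   # years since PhD
})

def match_controls(df):
    """Pair each grantee with the closest rejected applicant on age and
    years since PhD, within the same field and gender (greedy 1:1 matching)."""
    pairs = []
    rejected = df[df.granted == 0].copy()
    for _, g in df[df.granted == 1].iterrows():
        pool = rejected[(rejected.field == g.field) & (rejected.gender == g.gender)]
        if pool.empty:
            continue
        dist = (pool.age - g.age).abs() + (pool.phd_age - g.phd_age).abs()
        best = dist.idxmin()
        pairs.append((g.pi_id, rejected.loc[best, "pi_id"]))
        rejected = rejected.drop(best)   # each control is used at most once
    return pairs

print(match_controls(applicants))   # e.g. [(1, 2), (4, 5)]
```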

Hence, for both the Danish and Norwegian samples, there is a need to control for differences in initial conditions. In both cases, we attempt to address this through the use of Differences-in-Differences (DiD), focusing on changes over the analysis period as opposed to absolute levels. DiD enables the estimation of treatment effects while eliminating individual time-invariant effects and time effects; however, the method controls neither for unobserved temporary individual-specific components nor for differential impacts of macro-effects across the groups of comparison. The Danish study thus relies on a combination of matching and DiD approaches, allowing for unobserved determinants of participation as long as they lie in separable individual and/or time-specific components of the error term (Blundell & Costa Dias 2002). The combination of the two methods is argued to provide more reliable results (e.g. Heckman et al. 1998; Dehejia & Wahba 1999, 2002; Smith and Todd 2005).
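The DiD logic itself is compact: the estimate is the change for grantees minus the change for rejected applicants, so that stable differences between the groups and common time trends cancel out. A minimal numeric illustration with made-up values:

```python
# Made-up group means of, e.g., publications per year.
before_granted, after_granted   = 2.0, 3.0
before_rejected, after_rejected = 1.5, 2.0

change_granted  = after_granted - before_granted     # 1.0
change_rejected = after_rejected - before_rejected   # 0.5

# Difference-in-differences: the extra change for grantees beyond the change
# seen in the control group of rejected applicants.
did = change_granted - change_rejected                # 0.5
print(did)
```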

The second issue described above concerned the isolation of outcomes of grants. The comparison of periods before and after the grant period reflects an attempt to isolate the output of grants. However, this is a very difficult task and is at best only partially possible. We do not have information about the publications that resulted from the funded projects. PIs may hold multiple parallel grants and be involved in multiple parallel research projects, and in addition one publication may be the combined outcome of multiple grants. As we use the rejected applicants—and their publications and citations—as a quasi-control group, we have not tried to isolate outcomes of the specific grants, but instead examine the total publications and citations of the PIs in the relevant period.

2.4 The bibliometric performance indicators

Both the Danish and Norwegian bibliometric data are divided into two periods, one before the funding decision (including the application year) and one after (from the year after the start-up year). The differences between the two periods are analysed on the basis of the following indicators:

PIs’ average number of publications per year

The MNCS of PIs publications (normalized by field of research and year of publication, 1 = world average)

Number of publications with MNCS: (a) above the world average, and (b) two times the world average or more.

Share of publications with MNCS: (a) above the world average, and (b) two times the world average or more.

MNCS measures the average impact of publications, while the number of publications measures overall productivity and the two other indicators measure the production of top articles. In order to better discern productivity effects from ‘impact’ effects, the last two measures are used to examine whether the share of highly cited articles has increased for the two groups. Since the distribution of citations is highly skewed across publications, both (a) and (b) will be above the median.
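As a sketch of how these indicators and their before–after changes might be computed from article-level MNCS values (a hypothetical table, in the style of the earlier sketch; not the code used in the evaluations):

```python
import pandas as pd

# Toy data: article-level field-normalized citation scores, already assigned
# to the "before"/"after" windows.
pubs = pd.DataFrame({
    "pi_id":  [1, 1, 1, 2, 2, 2],
    "window": ["before", "before", "after", "before", "after", "after"],
    "mncs":   [1.2, 0.8, 2.5, 0.4, 1.1, 0.9],
})
n_years = 5  # length of each publication window

pubs["above_world"] = pubs["mncs"] > 1   # cited above the world average
pubs["twice_world"] = pubs["mncs"] >= 2  # cited at least twice the world average

per_pi = pubs.groupby(["pi_id", "window"]).agg(
    n_pubs=("mncs", "size"),
    mncs=("mncs", "mean"),
    n_above_world=("above_world", "sum"),
    n_twice_world=("twice_world", "sum"),
    share_above_world=("above_world", "mean"),
    share_twice_world=("twice_world", "mean"),
)
per_pi["pubs_per_year"] = per_pi["n_pubs"] / n_years

# Before-after change per PI for each indicator: the per-PI basis for the
# difference-in-difference comparison between granted and rejected applicants.
wide = per_pi.unstack("window")
change = wide.xs("after", axis=1, level="window") - wide.xs("before", axis=1, level="window")
print(change)
```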

Other relevant performance indicators, such as international co-authorship and authorship position (first/last author), are not included in the study, mainly for reasons of simplicity and because the data encompass grants in a large variety of research fields. The Norwegian evaluation included analyses of international co-authorship and found that overall it increased somewhat more for the rejected than for the successful PIs, although there were substantial differences between research areas (Langfeldt et al. 2012: 32). Hence, the likelihood that more refined analyses would find any positive impact of grants on this measure in our data set is small. Moreover, we did not find any differences in the authorship position of successful versus unsuccessful applicants in the disciplines where this position is a sign of the role of the researcher in the group.

3. Overview results

This section examines publication and citation activity among applicants in Norway and Denmark and tests whether there are differences in outcomes for funded and rejected applicants. We are interested in whether there are differences in both productivity and impact for the two groups. We examine the four measures defined above in a period prior to application, in a period after the grant (or a corresponding period for rejected applicants), and as the difference between before and after. Given the peer review process involved in making funding decisions, there is great potential for a selection effect: on average, and independent of the grant itself, grant recipients will have higher performance both before and after grant receipt. Hence, in examining whether funding has had an effect, we should consider the ‘difference in difference’ (DiD), i.e. whether changes in research performance from before to after are larger for grant recipients.

Figure 1 shows box plots of the MNCS for PIs of research project applications in Norway and Denmark: citation scores before the grant period, after it, and the difference in difference. Corresponding box plots for postdoc applicants can be found in the appendix. 4 We can see that, as expected, citation scores are highly skewed both for grant recipients and rejected applicants. However, the before–after differences appear to follow more closely a normal distribution around a mean that is slightly above zero.

Figure 1. Citation scores for research project applications, Norway and Denmark: box plot/quartile distribution of the field-standardized citation rate (per paper, 1 = world average) before and after application, for successful and unsuccessful applicants, and the increase/reduction in citation rate from the period before to the period after the application. (Outliers: 4 PIs with MNCS > 10 are excluded; years with 0 publications are not included.)

Both for median values and middle quartiles, grantees clearly have higher scores both before and after application. However, differences are less apparent when considering the DiD scores. Hence, it is not clear that citation scores for grantees have improved more than for rejected applicants.

Table 3 shows the results of statistical tests of differences between grantees and rejected applicants for the DiD measures, while separate results for the before and after measures are included in the appendix. As noted above and illustrated in Figure 1, citation scores before or after grant reception are highly skewed, while the DiD measures do not show any strong indication of being skewed and appear to approximate a normal distribution more closely. Taking this into account, for the DiD measures we have conducted both parametric tests (t-tests) of differences in mean values and non-parametric tests (Mann–Whitney tests) that examine whether the distributions are significantly different from each other. In contrast, for comparisons of before or after values, which are shown in Tables A1–A4, only the results of Mann–Whitney tests are reported. Table 3 shows average values of the DiD measures for granted and rejected applicants in Norway and Denmark and P-values for the two tests. As the results shown in the appendix confirm, average citation impacts (MNCS) are significantly higher among grantees both before and after. This is also the case for the number of articles with impact above or more than double the world average. Results are also qualitatively the same for Norway and Denmark.

Table 3. Research project applications: comparison of before–after differences for grant recipients and rejected applicants

                  Granted           Rejected          T-test (P-value)   Mann–Whitney (P-value)
Number of publications per year (DiD)
    Norway        1.028 (0.158)     0.759 (0.086)     0.137              0.002***
    Denmark       2.080 (1.141)     0.814 (1.096)     0.213              0.908
Mean normalized citation score (MNCS, DiD)
    Norway        0.058 (0.068)     0.032 (0.043)     0.763              0.317
    Denmark       0.125 (0.100)     −0.020 (0.128)    0.185              0.194
Number of articles with citations above world average (DiD)
    Norway        0.496 (0.116)     0.299 (0.042)     0.050**            0.015**
    Denmark       1.159 (0.644)     0.070 (0.481)     0.089*             0.759
Number of articles with citations more than twice world average (DiD)
    Norway        0.291 (0.070)     0.155 (0.024)     0.000***           0.012**
    Denmark       0.591 (0.375)     −0.105 (0.306)    0.077*             0.357
Per cent of publications above world average (DiD)
    Norway        4.517 (2.178)     0.055 (1.224)     0.079*             0.150
    Denmark       5.583 (4.097)     3.680 (3.276)     0.361              0.706
Per cent of publications more than twice above world average (DiD)
    Norway        2.146 (1.459)     0.424 (0.815)     0.309              0.517
    Denmark       1.372 (2.465)     −5.139 (2.288)    0.424              0.440

Number of observations: Norway: granted = 242; rejected = 812; Denmark: granted = 88; rejected = 86.

Standard error of mean in brackets. ***P < 0.01, **P < 0.05, *P < 0.1.
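The comparisons in Tables 3 and 4 pair a parametric and a non-parametric test on each DiD measure. A minimal sketch with simulated values (the article does not state which t-test variant was used; Welch’s version is shown here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated before-after changes in publications per year for granted and
# rejected PIs (group sizes as in the Norwegian research project sample).
did_granted  = rng.normal(loc=1.0,  scale=1.5, size=242)
did_rejected = rng.normal(loc=0.75, scale=1.5, size=812)

# Parametric test of the difference in mean changes (Welch's t-test).
t_stat, t_p = stats.ttest_ind(did_granted, did_rejected, equal_var=False)
# Non-parametric test of whether the two distributions of changes differ.
u_stat, u_p = stats.mannwhitneyu(did_granted, did_rejected, alternative="two-sided")

print(f"t-test p = {t_p:.3f}, Mann-Whitney p = {u_p:.3f}")
```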

Hence, performance is on average better among grantees prior to grant application, and it is also better after the grant period. However, as Table 3 shows, average impact has not improved more for grantees than for rejected applicants, in either Norway or Denmark. Both parametric and non-parametric tests fail to find a statistically significant difference in the DiD measures of average impact for grantees compared to rejected applicants (MNCS and per cent above world average). We do, however, find some varied evidence of effects based on the other indicators. The before–after difference in the number of publications is significantly higher for grantees in Norway based on the Mann–Whitney test (at 0.002), while the P-value for the t-test is not significant (at 0.137). Differences in Denmark are not statistically significant, despite the fact that the differences in means are much larger than for Norway. This could potentially suggest that the Danish results are influenced by the small sample size.

Differences for the number of highly cited articles are strongly significant for Norway, with the average numbers of articles cited above, and at more than double, the world average significantly higher for grantees. Results are again weaker for Denmark, though along the same lines, with differences weakly significant based on the t-test statistics. Results are much weaker for the shares of these highly cited papers, though for Norway the percentage with citations over the world average is weakly significant based on the t-test.

What is our overall interpretation of these results for research project grants? First, the MNCS is the most appropriate indicator to measure the effects not related to number of publications, and here we find no evidence of a significant difference between grantees and rejected applicants. On the other hand, mean values are larger for grantees in all cases. This at least leaves open the possibility that we might have found more significant results with larger samples. While it does not appear that there is a difference in average impact between grantees and rejected applicants, there are some indications of an effect in terms of productivity, where this increase in output also includes highly cited articles.

While results are fairly similar across countries concerning PIs for research project grant applicants, there are a number of differences between Norway and Denmark regarding postdoc grants. For Norway, while average citation scores are higher among grantees both before and after, the difference is not significant in either case (see Tables A1 and A2 ). Furthermore, average citation scores actually fall for both grantees and rejected applicants in Norway. In contrast, the number of top publications is higher for both groups after the grant period, and DiD measures are also larger for grantees, though not significant. Results for postdoc grants are shown in Table 4 .

Table 4. Postdoc applicants: comparison of before–after differences for grant recipients and rejected applicants

                  Granted           Rejected          T-test (P-value)   Mann–Whitney (P-value)
Number of publications per year (DiD)
    Norway        0.428 (0.173)     0.287 (0.097)     0.488              0.268
    Denmark       3.173 (0.675)     1.961 (0.670)     0.102              0.246
Mean normalized citation score (MNCS, DiD)
    Norway        −0.209 (0.178)    −0.038 (0.100)    0.417              0.866
    Denmark       0.342 (0.129)     0.202 (0.137)     0.228              0.240
Number of articles with citations above world average (DiD)
    Norway        0.179 (0.086)     0.074 (0.043)     0.251              0.111
    Denmark       1.471 (0.269)     0.549 (0.229)     0.005***           0.025**
Number of articles with citations more than twice world average (DiD)
    Norway        0.076 (0.064)     0.047 (0.027)     0.637              0.273
    Denmark       0.779 (0.195)     0.294 (0.156)     0.027**            0.057*
Per cent of publications above world average (DiD)
    Norway        −0.030 (5.626)    −1.689 (3.257)    0.807              0.723
    Denmark       12.017 (2.755)    2.707 (3.423)     0.017**            0.026**
Per cent of publications more than twice above world average (DiD)
    Norway        −3.896 (4.763)    1.360 (2.043)     0.253              0.861
    Denmark       6.174 (2.365)     2.159 (2.119)     0.108              0.111

Number of observations: Norway: granted = 64; rejected = 224; Denmark: granted = 104; rejected = 102.

Standard error of mean in brackets. ***P < 0.01, **P < 0.05, *P < 0.1.

Results for the Danish postdoc fellowships resemble more closely the results for research projects. As with research projects, differences for average citation scores (MNCS) are positive but insignificant, while differences for top publications are significant. Results for Danish postdocs in fact show a stronger effect than for research projects, in particular the Danish projects. It is not fully clear why there is such a substantial difference in results between the Danish and Norwegian postdoc fellowships, though a partial factor may be the smaller number of observations for Norwegian grantees. Moreover, as will be discussed in Section 4, the scope of outputs from individual fellowships can be expected to differ from that of regular (collaborative) research projects, particularly when we measure the impact on the publication and citation rates of the PI.

As an additional check of the robustness of our statistical results above, we employed simple regressions, examining whether the receipt of grants impacts changes in performance after controlling for basic individual characteristics, such as age, gender and main field of science. The analysis was conducted for both PIs of research project applications and postdoc applicants, using three of the performance measures (in differences) as dependent variables: MNCS, publications with citations over the world average, and publications with more than double the world average. The regression results can be found in the appendix.
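A regression of the kind described, with the change in MNCS as the dependent variable and grant receipt plus basic controls as regressors, could be sketched roughly as follows (simulated data; variable names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "granted": rng.integers(0, 2, n),   # 1 = received a grant
    "age":     rng.integers(30, 65, n),
    "female":  rng.integers(0, 2, n),
    "field":   rng.choice(["natural", "medical", "technical", "social", "humanities"], n),
})
# Simulated before-after change in MNCS with a small built-in grant effect.
df["did_mncs"] = 0.05 * df["granted"] + rng.normal(0, 0.5, n)

# The coefficient on `granted` is the estimated grant effect after controlling
# for age, gender and main field of science.
model = smf.ols("did_mncs ~ granted + age + female + C(field)", data=df).fit()
print(model.summary().tables[1])
```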

For Norway, the regression results correspond well with the statistical tests above. A significant effect of receiving a grant is found on the number of publications with citations above the world average, but not on the average citation rate (MNCS). In contrast, for Denmark no significant impact is found either on the number of publications cited above the world average or on MNCS. This can be compared with the statistical test results above, where differences were weakly significant for both measures of top publications. Hence, when we control for other basic factors, the grant effect for Danish research projects disappears.

4. Discussion

As described in Section 1, the summarized results of previous studies of the impact of research grants are inconclusive; the studies apply a variety of approaches and impact measures/citation scores, concern different kinds of funding schemes and contexts, and yield divergent results. Are there still some overall conclusions that can be drawn? In this section we discuss the above results for Denmark and Norway in light of previous studies. In doing so, we address factors complicating the identification and measurement of the impacts of grant schemes, and in particular the difference between productivity measures and standardized measures related to impact, such as MNCS.

The results for Norway above, along with the findings from the evaluation of the FRIPRO scheme, indicated that both the FRIPRO applicants and the awardees were a highly selected group: both funded and rejected applicants were more highly cited than the world average, the funded somewhat more so than the rejected. Hence, past scientific performance seems important both for applying for FRIPRO grants and for receiving them. On the other hand, the bibliometric results show only marginal changes in the field-normalized relative citation rates for the periods before and after funding decisions, for both funded and rejected applicants. 5 The data indicate that PIs’ multiple projects and funding sources were part of the explanation for this negative result. A PI’s research typically covers several projects and funding sources within smaller or larger networks of national and international scientific collaboration, and the evaluation was unable to trace the impact of a grant on a PI’s publication or citation rate. Many applicants found alternative funding sources for their rejected FRIPRO projects, and a majority of rejected projects appeared to have been implemented. For instance, several universities had economic incentives for FRIPRO applicants and supported highly rated, but not funded, projects (Langfeldt et al. 2012). On the other hand, obtaining FRIPRO funding was also argued to open doors for additional/further funding, and a substantial proportion of those obtaining FRIPRO grants reported that the FRIPRO funding enabled them to compete successfully for funding from other external sources. The applicant survey in the evaluation also indicated substantial added value of FRIPRO funding concerning international cooperation and research results: those obtaining FRIPRO funds much more often reported that their long-term international cooperation had been enhanced as a result of the project. Moreover, in comparing successful applicants with applicants who implemented the project with other resources, the former were more likely to report unexpected results of importance to the research field, or that the project had explored new research areas of significant importance for their future research. Still, these results from the survey could not be substantiated by the bibliometric analyses comparing the citation rates of the funded and rejected applicants (Langfeldt et al. 2012; cf. Section 2.4 above concerning international cooperation).

Survey and interview results in the Danish evaluations found similar, broader effects of receiving a grant. These included positive impacts on opportunities for career advancement, collaboration with top researchers within one’s field, research management competences, obtaining subsequent grants from both national and international sources, and enhanced status and recognition. The latter, in particular, was considered of great importance for grantees’ subsequent research. A high share also stated that grants had a positive impact on their research and had allowed them to conduct research that would not otherwise have been possible. However, as for Norway, these survey results have been difficult to substantiate with the bibliometric results.

Overall, the present analyses indicate that research grants may have a statistically significant impact on bibliometric productivity scores, both on the number of publications per year and on the number of highly cited papers; though in many cases the evidence of an impact is weak, and notably these scores increase both for successful and for rejected project grant applicants, the successful slightly, but significantly, more than the rejected. Whereas the differences between successful and rejected project applicants are significant for the productivity scores, the increase in MNCS is not significantly higher for the successful than for the rejected applicants; the average citation score of publications does not increase more for the successful than for the rejected applicants. A number of factors may account for these results.

Firstly, the successful applicants are already very productive and highly cited, and what specific kind of grant they receive may have limited impact on their further success in terms of MNCS. On the other hand, the grant helps the PIs to add staff to their research teams and hence increases their capacity to publish more papers (with the PI/grant holder as author/co-author). The slight impact found on the number of papers above (and at twice) the world average citation rate, combined with the lack of impact on MNCS, could be understood as a difference between impact on productivity and impact on scientific importance. Grants primarily impact research productivity, whereas rejections imply postponed projects and/or smaller research groups and a lower number of parallel projects. Moreover, higher productivity implies a higher likelihood of (some) highly cited papers and hence also (slightly) impacts the number of papers above (and at twice) the world average citation rate. MNCS, on the other hand, does not seem to be impacted by grants, nor by productivity.

Concerning the postdoc fellowships, the Norwegian and Danish results differ, and no obvious reasons for these differences are found. Still, some general differences between the impacts of project grants and (individual) postdoc fellowships should be mentioned. Firstly, the size and duration of projects and the size of project groups are likely to impact productivity. Research grants facilitate increased research capacity and activity for the PIs, but whereas a postdoc fellowship mainly covers the fellow’s own salary and is limited to 2 years, a research project grant is larger, lasts 3–4 years, and implies the ability to employ research staff (e.g. multiple PhDs or postdocs). Secondly, the general conditions for obtaining high citation rates may differ between project grants and postdoc fellowships. Doing research with an international/broader orientation may yield high citation rates, and individual postdocs may be less likely (than PIs with larger grants/groups) to do research attracting wide scientific attention or to be affiliated with large-scale projects with a greater potential for high citation rates. Hence, the funding or rejection of individual postdoc fellowships may have little impact on publication or citation rates. In fact, the rejection of an individual postdoc fellowship may result in these young scholars being affiliated with larger research projects, instead of pursuing their own projects, and doing more collaborative research and (co-)authoring more publications than they would in their individual projects. Moreover, it might be more difficult to measure change in the scientific productivity of postdocs due to the early stage of their research careers. 6

Finally, general measurement problems may limit the possibility of identifying and delineating the publications resulting from a project/grant, and lead to weak or varying results in impact studies. There are varying time lags in publication, and publications may be sponsored by several projects/funding sources. When multiple funding sources fund the same research, or PIs conduct multiple projects in parallel, the impact of a particular funding scheme is hard to trace, especially when the time lag from the project period to publication may vary. Some analyses of this could be provided by developing databases containing adequate data for linking publications to specific funding schemes, but in order to compare the success of successful versus rejected applicants, we would need to include the total publication portfolios of both groups in the comparison. Hence, what is compared in the present analysis is not the results of the specific grant applications, but whether or not the rejection or funding of the application impacts the publication or citation profile of the applicant.

The cumulative, two-way effect between publications and grants deriving from the analyses should also be noted: PIs with high scores on publications and citations more easily succeed in grant competitions, and to some extent grants contribute to their publications/productivity (due to increased capacity to perform research). Even with such cumulative advantages, there is no evidence that the grants impact the mean normalized citation rate of the applicants.

A variety of impacts of grant schemes on research may be unrelated to the overall citation rate of the PIs. Results and added values emphasized in the survey replies of the Norwegian FRIPRO applicants and Danish research project and postdoc grant applicants included competence building/PhD candidates, new insight and research results, new/extended research networks, and the exploration of new research topics. These survey results can partly be substantiated by bibliometric impact in terms of increased productivity, but not in terms of any increase in mean normalized citation rate (the increase in MNCS is not significantly higher for supported than rejected PIs). Hence, it could be concluded that the grant schemes increased the capacity to perform research and to publish more papers, and generally improved research conditions, but did not impact ‘scientific importance’ as measured by the general citation rate (MNCS) of the PIs—either because MNCS is an inadequate indicator of the scientific importance of a PI’s research or because grants do not enable PIs to perform ‘more important’ research. It should be added that the survey data still points to important career impacts of grants, especially for the younger researchers ( Langfeldt et al. 2012 : 51). These researchers are often dependent on external funding, and their ability to attract such funding may be important for obtaining a permanent position. Similar results were found in the Danish evaluation, both from interviews and the survey of grantees. In addition, Bloch et al. (2014) found that these grants from the Danish Council for Independent Research had a significant, positive impact on career progression for PIs.

The findings raise two general questions concerning the aims and evaluation of open mode funding schemes that aim to fund scientifically important projects (scientific value/relevance) and to facilitate ground-breaking or frontier research. Firstly, insofar as grant schemes support researcher-initiated projects and do not demand any changes in the research content, approaches or collaboration patterns of the PIs, is it reasonable to expect research grants to affect the 'scientific importance' of the PIs' research? Secondly, how may research councils design proper impact evaluations of open mode funding schemes in order to assess whether they have managed to select and fund the most successful projects?

The answers to these two questions are interlinked. It hardly seems reasonable to expect open mode research grants to increase the 'scientific importance' of the research of already high-performing PIs. Still, according to our analyses, providing more resources to those who already perform at a higher level (i.e. have a higher MNCS/more highly cited papers) tends to yield a higher number of highly cited papers than if the grants were allocated to PIs with less impressive track records, as the score is increased from an already high level. Hence, grant allocation based on track records could be justified even without any general 'treatment effect' on the 'scientific importance' of the research; those with the higher level of past performance tend to perform better in future research as well. It should be emphasized that the impact of the particular grant seems to operate principally through increased productivity, not through increased scientific impact.

Concerning implications for the design of impact evaluations, it seems useful to include a variety of indicators, in order to distinguish between productivity and impact measures, and to use both bibliometric data and surveys of applicants. By combining the reported added value for successful versus rejected applicants (survey data) with DiD analyses of publication and citation impacts, we can obtain a more comprehensive and nuanced picture of the applicants' situation, funding sources, publication activity and citation profile, and of the ways in which grants may affect research activities and scientific success. It should be noted that the measurable impact of grants on bibliometric scores is likely to be weak, and large samples are needed to obtain significant DiD results. Moreover, given the complex, multiple funding structures of grantees and the lack of significant impacts found for the studied postdoc fellowships, it may be more promising to focus impact evaluations on large and long-term grants than on smaller grants, which are often combined with other funding. Bibliometric impacts of short-term grants (1–2 years) that account for a minor part of a PI's portfolio and/or do not significantly increase research capacity are hard to establish based on available data. 7
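To make the DiD logic concrete, the following minimal sketch (hypothetical data and column names, not the evaluation's actual data or pipeline) estimates a difference-in-differences effect of grant receipt on an applicant-level outcome measured before and after application, with standard errors clustered by applicant.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical applicant-level panel: one row per applicant and period, with
# 'granted' (0/1), 'post' (0 = before, 1 = after application) and an outcome
# such as publication counts or MNCS. All names and values are illustrative.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "applicant": np.repeat(np.arange(n), 2),
    "granted": np.repeat(rng.integers(0, 2, n), 2),
    "post": np.tile([0, 1], n),
})
# Simulated outcome with a small positive effect of funding after application.
df["pubs"] = (
    2 + 0.5 * df["granted"] + 0.3 * df["post"]
    + 0.4 * df["granted"] * df["post"] + rng.normal(0, 1, 2 * n)
)

# Difference-in-differences via OLS: the interaction term is the DiD estimate.
model = smf.ols("pubs ~ granted * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["applicant"]}
)
print(model.params["granted:post"], model.pvalues["granted:post"])
```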

1 Moreover, some funding agencies develop systems for linking funding and outcome data to facilitate more robust evaluations of programme impacts (Haak et al. 2012).

2 Average project size increased from 2005 to 2007. The total amount per project varied from €0.01 million to €2.5 million.

3 The bibliometric data for the two Danish evaluations are described in detail in Mortensen and Thomsen (2010) and Mortensen et al. (2011).

4 Due to space constraints, box plots are not shown for the other indicators.

5 Using descriptive statistics in the evaluation, no significant differences between funded and rejected applicants were found, either in the increase in the number of publications per PI or in their citation rates (Langfeldt et al. 2012). In the present reanalyses of the data using more refined methods, impact is found on productivity (number of publications per PI) and on measures dependent on productivity (number of publications per PI cited above world average), but not on MNCS.

6 In our case, the smaller samples for analysis/limited numbers of awarded postdoc fellowships provide an additional limitation for measuring impact.

7 One alternative for demonstrating the bibliometric impact of 'minor' grants would be to develop databases with complete and precise information on the specific funding schemes that sponsored the publications (the names of sponsor organizations do not suffice), and to compare publication frequencies and citation scores across funding schemes relative to their total allocations, rather than comparing successful and rejected applicants (cf. MacLean et al. 1998).

Benavente, J. M. et al. (2012) 'The impact of national research funds: A regression discontinuity approach to the Chilean FONDECYT', Research Policy, 41/8: 1461–75.

Bloch, C., Graversen, E. K. and Pedersen, H. S. (2014) 'Competitive research grants and their impact on career performance', Minerva, 52: 77–96.

Bloch, C. et al. (2011) An Evaluation of Research Project Grants of the Danish Council for Independent Research, Main Report and Subreports. Copenhagen: Danish Agency for Science, Technology and Innovation (in Danish).

Blundell, R. and Costa Dias, M. (2002) 'Alternative approaches to empirical evaluations in microeconomics', Portuguese Economic Journal, 1/2: 91–115.

Bornmann, L., Leydesdorff, L. and Van den Besselaar, P. (2010) 'A meta-evaluation of scientific research proposals: Different ways of comparing rejected to awarded applications', Journal of Informetrics, 4/3: 211–20.

Dehejia, R. H. and Wahba, S. (1999) 'Causal effects in non-experimental studies: Re-evaluating the evaluation of training programs', Journal of the American Statistical Association, 94/448: 1053–62.

Dehejia, R. H. and Wahba, S. (2002) 'Propensity score-matching methods for nonexperimental causal studies', The Review of Economics and Statistics, 84/1: 151–61.

Donovan, C. (2011) 'State of the art in assessing research impact: Introduction to a special issue', Research Evaluation, 20/3: 175–9.

Faber, S. T. et al. (2010) The Danish Research Council's Support of Female Researchers and Researchers at an Early Stage of their Career. Copenhagen: Danish Agency for Science, Technology and Innovation (in Danish).

Haak, L. L. et al. (2012) 'The electronic Scientific Portfolio Assistant: Integrating scientific knowledge databases to support program impact assessment', Science and Public Policy, 39/4: 464–75.

Heckman, J. J. et al. (1998) 'Characterizing selection bias using experimental data', Econometrica, 66/5: 1017–98.

HFSP (2009) 'Report on the citation database for the Human Frontier Science Program', Evidence/HFSP, October 2009. <http://www.hfsp.org/sites/www.hfsp.org/files/webfm/Executive/HFSP%20Bibliometrics%202010.pdf> accessed 11 Nov 2014.

Jacob, B. A. and Lefgren, L. (2011) 'The impact of NIH postdoctoral training grants on scientific productivity', Research Policy, 40/6: 864–74.

Langfeldt, L. et al. (2012) Evaluation of the Norwegian Scheme for Independent Research Projects (FRIPRO). Oslo: NIFU Report 8/2012. <http://www.nifu.no/files/2012/11/NIFUrapport2012-8.pdf> accessed 11 Nov 2014.

Lewison, G. and Dawson, G. (1998) 'The effect of funding on the outputs of biomedical research', Scientometrics, 41/1–2: 17–27.

MacLean, M. et al. (1998) 'Evaluating the research activity and impact of funding agencies', Research Evaluation, 7/1: 7–16.

Melin, G. and Danell, R. (2006) 'The top eight percent: Development of approved and rejected applicants for a prestigious grant in Sweden', Science and Public Policy, 33/10: 702–12.

Mortensen, P. S. and Thomsen, G. S. (2010) Delrapport 4: Den bibliometriske undersøgelse blandt ansøgere til postdocstipendier og talentprojekter inden for natur-, sundheds- og teknisk videnskab, 2001–2003. Aarhus: Dansk Center for Forskningsanalyse.

Mortensen, P. S., Thomsen, G. S. and Kruuse, J. (2011) Evaluering af virkemidlet "forskningsprojekter", Delrapport 4: Den bibliometriske undersøgelse. Aarhus: Dansk Center for Forskningsanalyse. <http://fivu.dk/publikationer/2011/filer-2011/5-bibliometri_delrapport.pdf> accessed 11 Nov 2014.

Nedeva, M. et al. (2012) Understanding and Assessing the Impact and Outcomes of the ERC and its Funding Schemes (EURECIA): Final Synthesis Report. European Commission. <http://erc.europa.eu/sites/default/files/document/file/eurecia_final_synthesis_report.pdf>

Neufeld, J. and von Ins, M. (2011) 'Informed peer review and uninformed bibliometrics?', Research Evaluation, 20/1: 31–46.

Rigby, J. (2011) 'Systematic grant and funding body acknowledgement data for publications: New dimensions and new controversies for research policy and evaluation', Research Evaluation, 20/5: 365–75.

Sandström, U. (2009) 'Research quality and diversity of funding: A model for relating research money to output of research', Scientometrics, 79/2: 341–9.

Sivertsen, G. and Larsen, B. (2012) 'Comprehensive bibliographic coverage of the social sciences and humanities in a citation index: An empirical analysis of the potential', Scientometrics, 91/2: 567–75.

Smith, J. A. and Todd, P. E. (2005) 'Does matching overcome LaLonde's critique of nonexperimental estimators?', Journal of Econometrics, 125/1–2: 305–53.

van Leeuwen, T. N. and Moed, H. F. (2012) 'Funding decisions, peer review, and scientific excellence in physical sciences, chemistry, and geosciences', Research Evaluation, 21/3: 189–98.

Vedung, E. (1997) Public Policy and Program Evaluation. New Brunswick, NJ: Transaction Publishers.

Zhao, D. Z. (2010) 'Characteristics and impact of grant-funded research: A case study of the library and information science field', Scientometrics, 84/2: 293–306.

Figure: Postdoc fellowship applications. Box plot/quartile distribution of the field-standardized citation rate (per paper, 1 = world average) before and after application, for successful and unsuccessful applicants, and the increase/reduction in citation rate from the period before to the period after the application (outliers: 4 PIs with MNCS > 10 excluded; years with 0 publications not included).

Research project applications. Comparison of differences before application for grant recipients and rejected applicants

|  | Granted | Rejected | Mann–Whitney |
| --- | --- | --- | --- |
| MNCS |  |  |  |
| Norway | 1.210 (0.068) | 0.989 (0.035) | 0.000 |
| Denmark | 0.887 (0.097) | 0.807 (0.106) | 0.496 |
| Publications over world avg. |  |  |  |
| Norway | 1.403 (0.157) | 0.706 (0.035) | 0.000 |
| Denmark | 4.298 (0.698) | 3.000 (0.558) | 0.128 |
| Publications over twice world avg. |  |  |  |
| Norway | 0.610 (0.081) | 0.290 (0.018) | 0.000 |
| Denmark | 2.154 (0.444) | 1.433 (0.303) | 0.116 |

Number of observations: Norway: granted = 242; rejected = 812; Denmark: granted = 88; rejected = 86. ***P < 0.01, **P < 0.05, *P < 0.1.
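The group comparisons in these tables are Mann–Whitney rank tests of granted versus rejected applicants. A minimal sketch of such a test, using simulated scores rather than the study data, could look like this:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical MNCS values for granted and rejected applicants (not the study data).
granted = rng.lognormal(mean=0.1, sigma=0.6, size=242)
rejected = rng.lognormal(mean=0.0, sigma=0.6, size=812)

# Two-sided Mann-Whitney U test of whether the two distributions differ.
stat, p_value = mannwhitneyu(granted, rejected, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```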

Research project applications. Comparison of differences after application for grant recipients and rejected applicants

|  | Granted | Rejected | Mann–Whitney |
| --- | --- | --- | --- |
| MNCS |  |  |  |
| Norway | 1.268 (0.060) | 1.021 (0.041) | 0.000 |
| Denmark | 0.994 (0.099) | 0.790 (0.084) | 0.205 |
| Publications over world avg. |  |  |  |
| Norway | 1.898 (0.157) | 1.005 (0.057) | 0.000 |
| Denmark | 5.279 (0.987) | 3.058 (0.456) | 0.456 |
| Publications over twice world avg. |  |  |  |
| Norway | 0.900 (0.087) | 0.445 (0.031) | 0.000 |
| Denmark | 2.654 (0.573) | 1.346 (0.235) | 0.193 |

Number of observations: Norway: granted = 242; rejected = 812; Denmark: granted = 88; rejected = 86. ***P < 0.01, **P < 0.05, *P < 0.1.

Postdoc applications. Comparison of differences before application for grant recipients and rejected applicants

|  | Granted | Rejected | Mann–Whitney |
| --- | --- | --- | --- |
| MNCS |  |  |  |
| Norway | 1.206 (0.189) | 0.914 (0.084) | 0.101 |
| Denmark | 1.012 (0.083) | 0.805 (0.076) | 0.047 |
| Publications over world avg. |  |  |  |
| Norway | 0.597 (0.135) | 0.389 (0.058) | 0.150 |
| Denmark | 1.200 (0.153) | 1.087 (0.181) | 0.182 |
| Publications over twice world avg. |  |  |  |
| Norway | 0.284 (0.059) | 0.144 (0.028) | 0.006 |
| Denmark | 0.571 (0.107) | 0.394 (0.097) | 0.205 |

Number of observations: Norway: granted = 64; rejected = 224; Denmark: granted = 104; rejected = 102. ***P < 0.01, **P < 0.05, *P < 0.1.

Postdoc applications. Comparison of differences after application for grant recipients and rejected applicants

|  | Granted | Rejected | Mann–Whitney |
| --- | --- | --- | --- |
| MNCS |  |  |  |
| Norway | 0.997 (0.132) | 0.876 (0.080) | 0.194 |
| Denmark | 1.351 (0.120) | 1.003 (0.114) | 0.007 |
| Publications over world avg. |  |  |  |
| Norway | 0.778 (0.171) | 0.463 (0.068) | 0.034 |
| Denmark | 2.657 (0.279) | 1.625 (0.225) | 0.001 |
| Publications over twice world avg. |  |  |  |
| Norway | 0.359 (0.083) | 0.192 (0.028) | 0.008 |
| Denmark | 1.343 (0.201) | 0.683 (0.124) | 0.003 |

Regression analysis of the impact of grant reception on mean citation scores and number of top publications for Norway and Denmark. Research projects

| Dependent variable: | MNCS (Norway) | MNCS (Denmark) | Above world avg. (Norway) | Above world avg. (Denmark) | Twice world avg. (Norway) | Twice world avg. (Denmark) |
| --- | --- | --- | --- | --- | --- | --- |
| Constant | 0.149 (0.263) | 0.168 (0.411) | 0.690 (0.340) | 1.979 (2.019) | 0.173 (0.199) | 1.406 (1.223) |
| Grant | 0.052 (0.083) | 0.123 (0.137) | 0.185* (0.108) | 0.913 (0.674) | 0.133** (0.063) | 0.582 (0.408) |
| Gender | 0.010 (0.080) | 0.006 (0.178) | −0.048 (0.103) | 0.672 (0.874) | 0.004 (0.060) | 0.305 (0.530) |
| Age | −0.002 (0.004) | −0.004 (0.009) | −0.010** (0.005) | −0.055 (0.042) | −0.003 (0.003) | −0.031 (0.026) |
| Humanities | 0.077 (0.225) | −0.002 (0.221) | −0.166 (0.292) | −0.253 (1.082) | −0.014 (0.171) | −0.450 (0.656) |
| Natural Sciences | −0.072 (0.191) | −0.027 (0.221) | 0.078 (0.248) | −0.398 (1.087) | 0.102 (0.145) | −0.567 (0.659) |
| Medical Sciences | −0.125 (0.194) | −0.030 (0.219) | 0.250 (0.252) | 1.722 (1.072) | 0.212 (0.147) | 0.252 (0.650) |
| Social Sciences | 0.034 (0.205) | 0.066 (0.217) | 0.030 (0.266) | −0.240 (1.062) | 0.119 (0.156) | −0.544 (0.644) |
| N | 1054 | 174 | 1054 | 174 | 1054 | 174 |
| R-squared | 0.004 | 0.007 | 0.016 | 0.043 | 0.013 | 0.031 |

*** (**,*) significant at 1% (5%, 10%); Standard errors are denoted in parentheses. Regression method: Ordinary Least Squares.

Variables: Grant: Rejected = 0, Funded = 1; Gender: Female PI = 0, Male PI = 1; Age: PI's age, numeric scale; Humanities: application within the Humanities = 1, other fields = 0; Natural Sciences: application within the Natural Sciences = 1, other fields = 0; Medical Sciences: application within the Medical Sciences = 1, other fields = 0; Social Sciences: application within the Social Sciences = 1, other fields = 0.
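The specification behind these tables is an OLS regression of a bibliometric outcome on grant receipt plus gender, age and field dummies. The sketch below uses simulated, illustrative data (not the original dataset) to show the structure of such a model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical applicant-level data: change in MNCS from before to after
# application, grant receipt, and the covariates used in the tables above.
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "grant": rng.integers(0, 2, n),    # Rejected = 0, Funded = 1
    "gender": rng.integers(0, 2, n),   # Female PI = 0, Male PI = 1
    "age": rng.integers(28, 65, n),
    "field": rng.choice(["Humanities", "Natural", "Medical", "Social", "Other"], n),
})
df["d_mncs"] = 0.05 * df["grant"] + rng.normal(0, 1, n)

# OLS with field dummies mirrors the structure of the reported specification;
# the coefficient on 'grant' corresponds to the 'Grant' rows in the tables.
model = smf.ols("d_mncs ~ grant + gender + age + C(field)", data=df).fit()
print(model.summary())
```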

Regression analysis of the impact of grant reception on mean citation scores and number of top publications for Norway and Denmark. Postdoc fellowships

| Dependent variable: | MNCS (Norway) | MNCS (Denmark) | Above world avg. (Norway) | Above world avg. (Denmark) | Twice world avg. (Norway) | Twice world avg. (Denmark) |
| --- | --- | --- | --- | --- | --- | --- |
| Constant | −0.906 (0.690) | −1.196 (0.828) | −0.079 (0.307) | 1.183 (1.569) | −0.086 (0.204) | 0.951 (1.105) |
| Grant | −0.214 (0.218) | 0.253 (0.197) | 0.095 (0.096) | 0.814** (0.373) | 0.012 (0.063) | 0.406 (0.262) |
| Gender | 0.143 (0.191) | −0.088 (0.295) | 0.110 (0.084) | 0.311 (0.390) | 0.087 (0.056) | 0.319 (0.274) |
| Age | 0.015 (0.013) | 0.042* (0.023) | 0.004 (0.006) | −0.045 (0.043) | 0.001 (0.004) | −0.030 (0.030) |
| Humanities | 0.651 (0.562) |  | −0.109 (0.250) |  | 0.086 (0.166) |  |
| Natural Sciences | 0.396 (0.522) | 0.149 (0.0305) | 0.065 (0.233) | −0.030 (0.578) | 0.155 (0.154) | 0.254 (0.407) |
| Medical Sciences | 0.044 (0.530) | −0.087 (0.295) | −0.207 (0.236) | 0.102 (0.558) | −0.034 (0.156) | 0.132 (0.393) |
| Social Sciences | 0.184 (0.543) |  | −0.032 (0.242) |  | 0.033 (0.161) |  |
| R-squared | 0.029 | 0.021 | 0.041 | 0.042 | 0.043 | 0.033 |
| N | 288 | 206 | 288 | 206 | 288 | 206 |

Variables: Grant: Rejected = 0, Funded = 1; Gender: Female PI = 0, Male PI = 1; Age: PI's age, numeric scale; Humanities: application within the Humanities = 1, other fields = 0; Natural Sciences: application within the Natural Sciences = 1, other fields = 0; Medical Sciences: application within the Medical Sciences = 1, other fields = 0; Social Sciences: application within the Social Sciences = 1, other fields = 0.



How to Present the Limitations of a Study in Research?

The limitations of a study tell the reader how, and under which conditions, the results should be interpreted. Scientific research involves investigating topics both known and unknown, and this inherently carries an element of risk: human error, barriers to data gathering, limited resources, and researcher bias can all affect a study. Researchers are therefore encouraged to discuss the limitations of their research, both to strengthen the research process and to help readers understand the study's framework and value.

Limitations of research are the constraints on the ability to generalize from the results and to describe applications to practice. They affect the utility of the findings and stem from how you chose to design the study, the methods used to establish internal and external validity, or unanticipated challenges that emerged during the study. Knowing these limitations and their impact helps explain how they affect the conclusions that can be drawn from your research. 1

Table of Contents

What are the limitations of a study

Researchers are often cautious about acknowledging the limitations of their research for fear of undermining the validity of the findings. Yet no research can be faultless or cover all possible conditions. Limitations usually arise from constraints on methodology or research design and influence how the ultimate findings are interpreted. 2 They restrict the generalizability and usability of findings that emerge from the study design and/or the methods employed to ensure internal and external validity, and they can affect the whole study or research paper. Even so, many researchers prefer not to discuss the different types of limitations in research for fear of reducing the perceived value of their paper among reviewers or readers.


Importance of limitations of a study

Writing the limitations section of a research paper is often assumed to require a lot of effort. However, identifying the limitations of the study can help structure the research better, so do not underestimate their importance. 3

  • Opportunity to make suggestions for further research. Suggestions for future research and avenues for further exploration can be developed based on the limitations of the study.
  • Opportunity to demonstrate critical thinking. A key objective of the research process is to discover new knowledge while questioning existing assumptions and exploring what is new in the particular field. Describing the limitations of the research shows that you have critically thought about the research problem, reviewed relevant literature, and correctly assessed the methods chosen for studying the problem.
  • Opportunity to demonstrate a subjective learning process. Writing the limitations of the research helps you critically evaluate their impact, assess the strength of the research, and consider alternative explanations or interpretations. This kind of evaluation contributes to a more nuanced and comprehensive understanding of the issue under study.

Why should I include limitations of research in my paper

All studies have limitations to some extent. Including limitations of the study in your paper demonstrates the researchers’ comprehensive and holistic understanding of the research process and topic. The major advantages are the following:

  • Understand the study conditions and challenges encountered. Stating limitations establishes a complete and coherent depiction of the research. The boundaries of the study can be drawn, realistic expectations for the findings can be set, and limitations can clarify what the study is not intended to address.
  • Improve the quality and validity of the research findings. Mentioning limitations of the research creates opportunities for the original author and other researchers to undertake future studies that improve the research outcomes.
  • Transparency and accountability. Including limitations of the research helps maintain research integrity and promotes further progress in similar studies.
  • Identify potential bias sources. Acknowledging limitations can help researchers identify potential sources of bias in their research design, data collection, or analysis, which in turn improves the validity and reliability of the findings.

Where do I need to add the limitations of the study in my paper

The limitations of your research can be stated at the beginning of the discussion section, which allows the reader to understand them before reading the rest of your findings, or at the end of the discussion section as an acknowledgment of the need for further research.

Types of limitations in research

There are different types of limitations in research that researchers may encounter. These are listed below:

  • Research design limitations: Restrictions on your research or on the available procedures may affect the research outputs. If the research goals and objectives are too broad, explain how they should be narrowed down to enhance the focus of your study. If there was selection bias in your sample, explain how this may affect the generalizability of your findings. This helps readers understand how the limitations affect the overall validity of your research.
  • Impact limitations: Your study might be limited by a strong regional, national, or species-based impact, or by a population- or experiment-specific impact. These inherent limitations affect the extendibility and generalizability of the findings.
  • Data or statistical limitations: Data or statistical limitations are extremely common in experimental (such as medicine, physics, and chemistry) or field-based (such as ecology and qualitative clinical research) studies. Sometimes it is extremely difficult to acquire sufficient data or to gain access to the data. These limitations might also be the result of your study's design and might lead to an incomplete conclusion to your research.

Limitations of study examples

Not all possible limitations of the study can be included in the discussion section of a research paper or dissertation; which ones matter will vary greatly depending on the type and nature of the study. They include limitations related to the methodology and the research process, as well as those related to the researcher, and you need to describe and discuss how each possibly affected your results.

Common methodological limitations of the study

Limitations of research due to methodological problems are addressed by identifying the potential problem and suggesting how it could have been addressed. Some potential methodological limitations of the study are as follows. 1

  • Sample size: The sample size 4 is dictated by the type of research problem investigated. If the sample size is too small, it will be difficult to find significant relationships in the data, as statistical tests require a sufficiently large sample to represent the population and to allow the findings to be generalized (a small power calculation illustrating this is sketched after this list).
  • Lack of available/reliable data: A lack of available/reliable data will limit the scope of your analysis and the size of your sample or present obstacles in finding a trend or meaningful relationship. So, when writing about the limitations of the study, give convincing reasons why you feel data is absent or untrustworthy and highlight the necessity for a future study focused on developing a new data-gathering strategy.
  • Lack of prior research studies: Citing prior research studies is required to help understand the research problem being investigated. If there is little or no prior research, an exploratory rather than an explanatory research design will be required. Also, discovering the limitations of the study presents an opportunity to identify gaps in the literature and describe the need for additional study.
  • Measure used to collect the data: Sometimes, the data gathered will be insufficient to conduct a thorough analysis of the results. For example, you may identify in retrospect that a specific question could have helped address a particular issue that emerged during data analysis. You can acknowledge this limitation by stating the need to revise the specific method for gathering data in the future.
  • Self-reported data: Self-reported data cannot be independently verified and can contain several potential bias sources, such as selective memory, attribution, and exaggeration. These biases become apparent if they are incongruent with data from other sources.
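As a rough illustration of the sample size point in the list above, a standard power calculation shows how many observations per group are needed to detect a given effect size; the effect sizes and thresholds here are arbitrary examples, not values from any particular study.

```python
from statsmodels.stats.power import TTestIndPower

# Approximate sample size per group needed to detect a 'medium' effect
# (Cohen's d = 0.5) with 80% power at the 5% significance level.
analysis = TTestIndPower()
print(round(analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)))  # ~64 per group

# A smaller effect (d = 0.2) needs far more observations per group.
print(round(analysis.solve_power(effect_size=0.2, power=0.8, alpha=0.05)))  # ~393 per group
```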

General limitations of researchers

Limitations related to the researcher can also influence the study outcomes. These should be addressed, and related remedies should be proposed.

  • Limited access to data: If your study requires access to people, organizations, data, or documents to which access is denied or limited, the reasons need to be described. You should also explain why this limitation did not prevent you from following through on your study.
  • Time constraints: Researchers might also face challenges in meeting research deadlines due to a lack of timely participant availability or funds, among other factors. The impact of time constraints must be acknowledged by mentioning the need for a future study addressing this research problem.
  • Conflicts due to biased views and personal issues: Differences in culture or personal views can contribute to researcher bias, leading researchers to focus only on the results and data that support their main arguments. To avoid this, pay attention to the problem statement and data gathering.

Steps for structuring the limitations section

Limitations are an inherent part of any research study. Issues may vary, ranging from sampling and literature review to methodology and bias. However, there is a structure for identifying these elements, discussing them, and offering insight or alternatives on how the limitations of the study can be mitigated. This strengthens the research process and helps readers gain a comprehensive understanding of a study's conditions.

  • Identify the research constraints: Identify the limitations with the greatest impact on the quality of the research findings and on your ability to answer your research questions and/or hypotheses effectively. These include sample size, selection bias, measurement error, or other issues affecting the validity and reliability of your research.
  • Describe their impact on your research: Reflect on the nature of the identified limitations and on the choices made during the research to explain how they affected the outcomes. Offer explanations where needed, without being defensive or exaggerating. Provide context so the limitations can be understood more broadly, and point out limitations arising from real-world considerations critically rather than justifying them simply because other author groups have done the same.
  • Mention the opportunity for future investigations: Suggest ways to overcome the limitations of the present study through future research. This can help readers understand how the research fits into the broader context and offer a roadmap for future studies.

Frequently Asked Questions

  • Should I mention all the limitations of my study in the research report?

Restrict limitations to what is pertinent to the research question under investigation. The specific limitations you include will depend on the nature of the study, the research question investigated, and the data collected.

  • Can the limitations of a study affect its credibility?

Stating the limitations of the research is considered favorable by editors and peer reviewers. Connecting your study's limitations with possible future research can help draw attention to unanswered questions in the area. In addition, admitting limitations openly and showing that they do not affect the main findings of the study increases its credibility. However, if you determine that your study is seriously flawed, explain ways to successfully overcome such flaws in a future study. For example, if your study fails to acquire critical data, consider reframing the research question as an exploratory study to lay the groundwork for more complete research in the future.

  • How can I mitigate the limitations of my study?

Strategies to minimize limitations of the research should focus on convincing reviewers and readers that the limitations do not affect the conclusions of the study by showing that the methods are appropriate and that the logic is sound. Here are some steps to follow to achieve this:

  • Use data that are valid.
  • Use appropriate methods and sound logic to draw inferences.
  • Use adequate statistical methods for drawing inferences from the data.
  • Show that studies with similar limitations have been published before.

Admit limitations openly and, at the same time, show how they do not affect the main conclusions of the study.

  • Can the limitations of a study impact its publication chances?

Limitations in your research can arise owing to restrictions in methodology or research design. Although this could affect your chances of publishing your research paper, it is critical to explain your study's limitations to your intended audience. Doing so explains how the study's constraints may affect the results and the conclusions drawn from your investigation, and it shows that you have examined the flaws of your study and have a thorough understanding of the subject.

  • How can limitations in research be used for future studies?

The limitations of a study give you an opportunity to offer suggestions for further research. Your study’s limitations, including problems experienced during the study and the additional study perspectives developed, are a great opportunity to take on a new challenge and help advance knowledge in a particular field.

References:

  • Brutus, S., Aguinis, H., & Wassmer, U. (2013). Self-reported limitations and future directions in scholarly reports: Analysis and recommendations.  Journal of Management ,  39 (1), 48-75.
  • Ioannidis, J. P. (2007). Limitations are not properly acknowledged in the scientific literature.  Journal of Clinical Epidemiology ,  60 (4), 324-329.
  • Price, J. H., & Murnan, J. (2004). Research limitations and the necessity of reporting them.  American Journal of Health Education ,  35 (2), 66.
  • Boddy, C. R. (2016). Sample size for qualitative research.  Qualitative Market Research: An International Journal ,  19 (4), 426-432.

