The Ohio State University
Research Questions & Hypotheses

Generally, in quantitative studies, reviewers expect hypotheses rather than research questions. However, research questions and hypotheses serve different purposes and can be beneficial when used together.

Research Questions

Research questions clarify the aim of the research (Farrugia et al., 2010).

  • Research often begins with an interest in a topic, but a deep understanding of the subject is crucial to formulate an appropriate research question.
  • Descriptive: “What factors most influence the academic achievement of senior high school students?”
  • Comparative: “What is the performance difference between teaching methods A and B?”
  • Relationship-based: “What is the relationship between self-efficacy and academic achievement?”
  • Increasing knowledge about a subject can be achieved through systematic literature reviews, in-depth interviews with patients (and proxies), focus groups, and consultations with field experts.
  • Some funding bodies, like the Canadian Institutes of Health Research, recommend conducting a systematic review or a pilot study before seeking grants for full trials.
  • The presence of multiple research questions in a study can complicate the design, statistical analysis, and feasibility.
  • It’s advisable to focus on a single primary research question for the study.
  • The primary question, clearly stated at the end of a grant proposal’s introduction, usually specifies the study population, intervention, and other relevant factors.
  • The FINER criteria underscore aspects that can enhance the chances of a successful research project, including specifying the population of interest, aligning with scientific and public interest, clinical relevance, and contribution to the field, while complying with ethical and national research standards.
  • The PICOT approach (Population, Intervention, Comparator, Outcome, Time) is crucial in developing the study’s framework and protocol, influencing inclusion and exclusion criteria and identifying patient groups for inclusion.
  • Defining the specific population, intervention, comparator, and outcome helps in selecting the right outcome measurement tool.
  • The more precise the population definition and stricter the inclusion and exclusion criteria, the more significant the impact on the interpretation, applicability, and generalizability of the research findings.
  • A restricted study population enhances internal validity but may limit the study’s external validity and generalizability to clinical practice.
  • A broadly defined study population may better reflect clinical practice but could increase bias and reduce internal validity.
  • An inadequately formulated research question can negatively impact study design, potentially leading to ineffective outcomes and affecting publication prospects.

Checklist: Good research questions for social science projects (Panke, 2018)


Research Hypotheses

Research hypotheses present the researcher’s predictions as specific, testable statements.

  • These statements define the research problem or issue and indicate the direction of the researcher’s predictions.
  • Formulating the research question and hypothesis from existing data (e.g., a database) can lead to multiple statistical comparisons and potentially spurious findings due to chance.
  • The research or clinical hypothesis, derived from the research question, shapes the study’s key elements: sampling strategy, intervention, comparison, and outcome variables.
  • Hypotheses can express a single outcome or multiple outcomes.
  • After statistical testing, the null hypothesis is either rejected or not rejected based on whether the study’s findings are statistically significant.
  • Hypothesis testing helps determine if observed findings are due to true differences and not chance.
  • Hypotheses can be 1-sided (specific direction of difference) or 2-sided (presence of a difference without specifying direction).
  • 2-sided hypotheses are generally preferred unless there’s a strong justification for a 1-sided hypothesis.
  • A solid research hypothesis, informed by a good research question, influences the research design and paves the way for defining clear research objectives.
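The 1-sided versus 2-sided distinction maps directly onto how a test statistic is evaluated. The sketch below uses a permutation test on invented scores for the document's "teaching method A vs. method B" comparison — all numbers are hypothetical, and a permutation test is just one of several ways to compute these p-values:

```python
import random

random.seed(42)

# Hypothetical outcome scores under teaching methods A and B (invented data)
group_a = [78, 82, 75, 90, 85, 88, 79, 84]
group_b = [72, 70, 74, 68, 77, 71, 73, 69]

observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Permutation test: reshuffle the group labels many times and count how
# often a difference at least as extreme as the observed one arises by chance.
pooled = group_a + group_b
n_a = len(group_a)
count_two_sided = 0  # |diff| >= |observed|  -> 2-sided hypothesis
count_one_sided = 0  # diff >= observed      -> 1-sided hypothesis ("A > B")
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
    if abs(diff) >= abs(observed):
        count_two_sided += 1
    if diff >= observed:
        count_one_sided += 1

p_two = count_two_sided / trials
p_one = count_one_sided / trials
print(f"observed difference: {observed:.2f}")
print(f"2-sided p ~ {p_two:.4f}, 1-sided p ~ {p_one:.4f}")
```

Note that the 1-sided p-value can never exceed the 2-sided one, which is why a 1-sided hypothesis reaches significance more easily and therefore demands the strong prior justification mentioned above.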

Types of Research Hypothesis

  • In a Y-centered research design, the focus is on the dependent variable (DV) which is specified in the research question. Theories are then used to identify independent variables (IV) and explain their causal relationship with the DV.
  • Example: “An increase in teacher-led instructional time (IV) is likely to improve student reading comprehension scores (DV), because extensive guided practice under expert supervision enhances learning retention and skill mastery.”
  • Hypothesis Explanation: The dependent variable (student reading comprehension scores) is the focus, and the hypothesis explores how changes in the independent variable (teacher-led instructional time) affect it.
  • In X-centered research designs, the independent variable is specified in the research question. Theories are used to determine potential dependent variables and the causal mechanisms at play.
  • Example: “Implementing technology-based learning tools (IV) is likely to enhance student engagement in the classroom (DV), because interactive and multimedia content increases student interest and participation.”
  • Hypothesis Explanation: The independent variable (technology-based learning tools) is the focus, with the hypothesis exploring its impact on a potential dependent variable (student engagement).
  • Probabilistic hypotheses suggest that changes in the independent variable are likely to lead to changes in the dependent variable in a predictable manner, but not with absolute certainty.
  • Example: “The more teachers engage in professional development programs (IV), the more their teaching effectiveness (DV) is likely to improve, because continuous training updates pedagogical skills and knowledge.”
  • Hypothesis Explanation: This hypothesis implies a probable relationship between the extent of professional development (IV) and teaching effectiveness (DV).
  • Deterministic hypotheses state that a specific change in the independent variable will lead to a specific change in the dependent variable, implying a more direct and certain relationship.
  • Example: “If the school curriculum changes from traditional lecture-based methods to project-based learning (IV), then student collaboration skills (DV) are expected to improve because project-based learning inherently requires teamwork and peer interaction.”
  • Hypothesis Explanation: This hypothesis presumes a direct and definite outcome (improvement in collaboration skills) resulting from a specific change in the teaching method.
  • Example: “Students who identify as visual learners will score higher on tests that are presented in a visually rich format compared to tests presented in a text-only format.”
  • Explanation: This hypothesis aims to describe the potential difference in test scores between visual learners taking visually rich tests and text-only tests, without implying a direct cause-and-effect relationship.
  • Example: “Teaching method A will improve student performance more than method B.”
  • Explanation: This hypothesis compares the effectiveness of two different teaching methods, suggesting that one will lead to better student performance than the other. It implies a direct comparison but does not necessarily establish a causal mechanism.
  • Example: “Students with higher self-efficacy will show higher levels of academic achievement.”
  • Explanation: This hypothesis predicts a relationship between self-efficacy and academic achievement. Unlike a causal hypothesis, it does not necessarily suggest that one variable causes changes in the other, but rather that they are related in some way.

Tips for developing research questions and hypotheses for research studies

  • Perform a systematic literature review (if one has not been done) to increase knowledge and familiarity with the topic and to assist with research development.
  • Learn about current trends and technological advances on the topic.
  • Seek input from experts, mentors, colleagues, and collaborators to refine the research question and guide the research study.
  • Use the FINER criteria in the development of the research question.
  • Ensure that the research question follows the PICOT format.
  • Develop a research hypothesis from the research question.
  • Ensure that the research question and objectives are answerable, feasible, and clinically relevant.

If your research hypotheses are derived from your research questions, particularly when multiple hypotheses address a single question, it’s recommended to use both research questions and hypotheses. However, if this isn’t the case, using hypotheses over research questions is advised. It’s important to note these are general guidelines, not strict rules. If you opt not to use hypotheses, consult with your supervisor for the best approach.

Farrugia, P., Petrisor, B. A., Farrokhyar, F., & Bhandari, M. (2010). Practical tips for surgical research: Research questions, hypotheses and objectives. Canadian Journal of Surgery, 53(4), 278–281.

Hulley, S. B., Cummings, S. R., Browner, W. S., Grady, D., & Newman, T. B. (2007). Designing clinical research (3rd ed.). Philadelphia: Lippincott Williams & Wilkins.

Panke, D. (2018). Research design & method selection: Making good choices in the social sciences. Sage.



How to Write a Strong Hypothesis | Guide & Examples

Published on 6 May 2022 by Shona McCombes.

A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection.

Table of contents

  • What is a hypothesis?
  • Developing a hypothesis (with example)
  • Hypothesis examples
  • Frequently asked questions about writing hypotheses

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations, and statistical analysis of data).

Variables in hypotheses

Hypotheses propose a relationship between two or more variables . An independent variable is something the researcher changes or controls. A dependent variable is something the researcher observes and measures.

For example: “Daily exposure to the sun leads to increased levels of happiness.” In this example, the independent variable is exposure to the sun – the assumed cause. The dependent variable is the level of happiness – the assumed effect.


Step 1: Ask a question

Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.

Step 2: Do some preliminary research

Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.

At this stage, you might construct a conceptual framework to identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalise more complex constructs.

Step 3: Formulate your hypothesis

Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.

Step 4: Refine your hypothesis

You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain:

  • The relevant variables
  • The specific group being studied
  • The predicted outcome of the experiment or analysis

Step 5: Phrase your hypothesis in three ways

To identify the variables, you can write a simple prediction in if … then form. The first part of the sentence states the independent variable and the second part states the dependent variable.

In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables.

If you are comparing two groups, the hypothesis can state what difference you expect to find between them.

Step 6: Write a null hypothesis

If your research involves statistical hypothesis testing, you will also have to write a null hypothesis. The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H₀, while the alternative hypothesis is H₁ or Hₐ.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
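That "how likely by chance" calculation can be made concrete with a worked example. Suppose (hypothetically) a coin is flipped 100 times and comes up heads 60 times; the null hypothesis is that the coin is fair. The exact binomial p-value can be computed directly:

```python
from math import comb

# Hypothetical experiment: 100 flips, 60 heads observed.
# Null hypothesis H0: the coin is fair (p = 0.5).
n, k, p0 = 100, 60, 0.5

def binom_pmf(n: int, i: int, p: float) -> float:
    """Exact probability of i successes in n trials under Binomial(n, p)."""
    return comb(n, i) * p**i * (1 - p) ** (n - i)

# 2-sided p-value: probability, under H0, of a head count at least as far
# from the expected value (n * p0 = 50) as the observed count of 60.
expected = n * p0
p_value = sum(binom_pmf(n, i, p0) for i in range(n + 1)
              if abs(i - expected) >= abs(k - expected))
print(f"p-value = {p_value:.4f}")  # ~0.057: H0 is not rejected at alpha = 0.05
```

Even a seemingly lopsided result (60 of 100 heads) falls just short of the conventional 0.05 threshold, which illustrates why the formal procedure is needed rather than eyeballing the data.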


A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

Cite this Scribbr article


McCombes, S. (2022, May 06). How to Write a Strong Hypothesis | Guide & Examples. Scribbr. Retrieved 27 May 2024, from https://www.scribbr.co.uk/research-methods/hypothesis-writing/


Research Method


What is a Hypothesis – Types, Examples and Writing Guide


What is a Hypothesis

Definition:

A hypothesis is an educated guess or proposed explanation for a phenomenon, based on initial observations or data. It is a tentative statement that can be tested and potentially supported or refuted through further investigation and experimentation.

Hypotheses are often used in scientific research to guide the design of experiments and the collection and analysis of data. A hypothesis is an essential element of the scientific method, as it allows researchers to make predictions about the outcome of their experiments and to test those predictions to determine their accuracy.

Types of Hypothesis

The main types of hypothesis are as follows:

Research Hypothesis

A research hypothesis is a statement that predicts a relationship between variables. It is usually formulated as a specific statement that can be tested through research, and it is often used in scientific research to guide the design of experiments.

Null Hypothesis

The null hypothesis is a statement that assumes there is no significant difference or relationship between variables. It is often used as a starting point for testing the research hypothesis, and if the results of the study reject the null hypothesis, it suggests that there is a significant difference or relationship between variables.

Alternative Hypothesis

An alternative hypothesis is a statement that assumes there is a significant difference or relationship between variables. It is often used as an alternative to the null hypothesis and is tested against the null hypothesis to determine which statement is more accurate.

Directional Hypothesis

A directional hypothesis is a statement that predicts the direction of the relationship between variables. For example, a researcher might predict that increasing the amount of exercise will result in a decrease in body weight.

Non-directional Hypothesis

A non-directional hypothesis is a statement that predicts the relationship between variables but does not specify the direction. For example, a researcher might predict that there is a relationship between the amount of exercise and body weight, but they do not specify whether increasing or decreasing exercise will affect body weight.

Statistical Hypothesis

A statistical hypothesis is a statement that assumes a particular statistical model or distribution for the data. It is often used in statistical analysis to test the significance of a particular result.

Composite Hypothesis

A composite hypothesis is a statement that assumes more than one condition or outcome. It can be divided into several sub-hypotheses, each of which represents a different possible outcome.

Empirical Hypothesis

An empirical hypothesis is a statement that is based on observed phenomena or data. It is often used in scientific research to develop theories or models that explain the observed phenomena.

Simple Hypothesis

A simple hypothesis is a statement that assumes only one outcome or condition. It is often used in scientific research to test a single variable or factor.

Complex Hypothesis

A complex hypothesis is a statement that assumes multiple outcomes or conditions. It is often used in scientific research to test the effects of multiple variables or factors on a particular outcome.

Applications of Hypothesis

Hypotheses are used in various fields to guide research and make predictions about the outcomes of experiments or observations. Here are some examples of how hypotheses are applied in different fields:

  • Science: In scientific research, hypotheses are used to test the validity of theories and models that explain natural phenomena. For example, a hypothesis might be formulated to test the effects of a particular variable on a natural system, such as the effects of climate change on an ecosystem.
  • Medicine: In medical research, hypotheses are used to test the effectiveness of treatments and therapies for specific conditions. For example, a hypothesis might be formulated to test the effects of a new drug on a particular disease.
  • Psychology: In psychology, hypotheses are used to test theories and models of human behavior and cognition. For example, a hypothesis might be formulated to test the effects of a particular stimulus on the brain or behavior.
  • Sociology: In sociology, hypotheses are used to test theories and models of social phenomena, such as the effects of social structures or institutions on human behavior. For example, a hypothesis might be formulated to test the effects of income inequality on crime rates.
  • Business: In business research, hypotheses are used to test the validity of theories and models that explain business phenomena, such as consumer behavior or market trends. For example, a hypothesis might be formulated to test the effects of a new marketing campaign on consumer buying behavior.
  • Engineering: In engineering, hypotheses are used to test the effectiveness of new technologies or designs. For example, a hypothesis might be formulated to test the efficiency of a new solar panel design.

How to write a Hypothesis

Here are the steps to follow when writing a hypothesis:

Identify the Research Question

The first step is to identify the research question that you want to answer through your study. This question should be clear, specific, and focused. It should be something that can be investigated empirically and that has some relevance or significance in the field.

Conduct a Literature Review

Before writing your hypothesis, it’s essential to conduct a thorough literature review to understand what is already known about the topic. This will help you to identify the research gap and formulate a hypothesis that builds on existing knowledge.

Determine the Variables

The next step is to identify the variables involved in the research question. A variable is any characteristic or factor that can vary or change. There are two types of variables: independent and dependent. The independent variable is the one that is manipulated or changed by the researcher, while the dependent variable is the one that is measured or observed as a result of the independent variable.

Formulate the Hypothesis

Based on the research question and the variables involved, you can now formulate your hypothesis. A hypothesis should be a clear and concise statement that predicts the relationship between the variables. It should be testable through empirical research and based on existing theory or evidence.

Write the Null Hypothesis

The null hypothesis is the opposite of the alternative hypothesis, which is the hypothesis that you are testing. The null hypothesis states that there is no significant difference or relationship between the variables. It is important to write the null hypothesis because it allows you to compare your results with what would be expected by chance.

Refine the Hypothesis

After formulating the hypothesis, it’s important to refine it and make it more precise. This may involve clarifying the variables, specifying the direction of the relationship, or making the hypothesis more testable.

Examples of Hypothesis

Here are a few examples of hypotheses in different fields:

  • Psychology: “Increased exposure to violent video games leads to increased aggressive behavior in adolescents.”
  • Biology: “Higher levels of carbon dioxide in the atmosphere will lead to increased plant growth.”
  • Sociology: “Individuals who grow up in households with higher socioeconomic status will have higher levels of education and income as adults.”
  • Education: “Implementing a new teaching method will result in higher student achievement scores.”
  • Marketing: “Customers who receive a personalized email will be more likely to make a purchase than those who receive a generic email.”
  • Physics: “An increase in temperature will cause an increase in the volume of a gas, assuming all other variables remain constant.”
  • Medicine: “Consuming a diet high in saturated fats will increase the risk of developing heart disease.”
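The physics example above follows directly from the ideal-gas law, V = nRT/P: at constant pressure and amount of gas, volume grows linearly with absolute temperature. A quick numeric check (the mole count and pressure are illustrative values):

```python
def gas_volume(n_mol: float, temp_k: float, pressure_pa: float) -> float:
    """Ideal-gas volume in cubic meters: V = nRT / P."""
    R = 8.314  # universal gas constant, J/(mol*K)
    return n_mol * R * temp_k / pressure_pa

# One mole at atmospheric pressure, before and after heating by 100 K
v_cold = gas_volume(1.0, 273.15, 101_325)  # ~0.0224 m^3 (the familiar 22.4 L)
v_hot = gas_volume(1.0, 373.15, 101_325)
print(f"{v_cold:.4f} m^3 -> {v_hot:.4f} m^3")  # volume increases with temperature
```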

Purpose of Hypothesis

The purpose of a hypothesis is to provide a testable explanation for an observed phenomenon or a prediction of a future outcome based on existing knowledge or theories. A hypothesis is an essential part of the scientific method and helps to guide the research process by providing a clear focus for investigation. It enables scientists to design experiments or studies to gather evidence and data that can support or refute the proposed explanation or prediction.

The formulation of a hypothesis is based on existing knowledge, observations, and theories, and it should be specific, testable, and falsifiable. A specific hypothesis helps to define the research question, which is important in the research process as it guides the selection of an appropriate research design and methodology. Testability of the hypothesis means that it can be proven or disproven through empirical data collection and analysis. Falsifiability means that the hypothesis should be formulated in such a way that it can be proven wrong if it is incorrect.

In addition to guiding the research process, the testing of hypotheses can lead to new discoveries and advancements in scientific knowledge. When a hypothesis is supported by the data, it can be used to develop new theories or models to explain the observed phenomenon. When a hypothesis is not supported by the data, it can help to refine existing theories or prompt the development of new hypotheses to explain the phenomenon.

When to use Hypothesis

Here are some common situations in which hypotheses are used:

  • In scientific research, hypotheses are used to guide the design of experiments and to help researchers make predictions about the outcomes of those experiments.
  • In social science research, hypotheses are used to test theories about human behavior, social relationships, and other phenomena.
  • In business, hypotheses can be used to guide decisions about marketing, product development, and other areas. For example, a hypothesis might be that a new product will sell well in a particular market, and this hypothesis can be tested through market research.

Characteristics of Hypothesis

Here are some common characteristics of a hypothesis:

  • Testable: A hypothesis must be able to be tested through observation or experimentation. This means that it must be possible to collect data that will either support or refute the hypothesis.
  • Falsifiable: A hypothesis must be able to be proven false if it is not supported by the data. If a hypothesis cannot be falsified, then it is not a scientific hypothesis.
  • Clear and concise: A hypothesis should be stated in a clear and concise manner so that it can be easily understood and tested.
  • Based on existing knowledge: A hypothesis should be based on existing knowledge and research in the field. It should not be based on personal beliefs or opinions.
  • Specific: A hypothesis should be specific in terms of the variables being tested and the predicted outcome. This will help to ensure that the research is focused and well-designed.
  • Tentative: A hypothesis is a tentative statement or assumption that requires further testing and evidence to be confirmed or refuted. It is not a final conclusion or assertion.
  • Relevant: A hypothesis should be relevant to the research question or problem being studied. It should address a gap in knowledge or provide a new perspective on the issue.

Advantages of Hypothesis

Hypotheses have several advantages in scientific research and experimentation:

  • Guides research: A hypothesis provides a clear and specific direction for research. It helps to focus the research question, select appropriate methods and variables, and interpret the results.
  • Predictive powe r: A hypothesis makes predictions about the outcome of research, which can be tested through experimentation. This allows researchers to evaluate the validity of the hypothesis and make new discoveries.
  • Facilitates communication: A hypothesis provides a common language and framework for scientists to communicate with one another about their research. This helps to facilitate the exchange of ideas and promotes collaboration.
  • Efficient use of resources: A hypothesis helps researchers to use their time, resources, and funding efficiently by directing them towards specific research questions and methods that are most likely to yield results.
  • Provides a basis for further research: A hypothesis that is supported by data provides a basis for further research and exploration. It can lead to new hypotheses, theories, and discoveries.
  • Increases objectivity: A hypothesis can help to increase objectivity in research by providing a clear and specific framework for testing and interpreting results. This can reduce bias and increase the reliability of research findings.

Limitations of Hypothesis

Some Limitations of the Hypothesis are as follows:

  • Limited to observable phenomena: Hypotheses are limited to observable phenomena and cannot account for unobservable or intangible factors. This means that some research questions may not be amenable to hypothesis testing.
  • May be inaccurate or incomplete: Hypotheses are based on existing knowledge and research, which may be incomplete or inaccurate. This can lead to flawed hypotheses and erroneous conclusions.
  • May be biased: Hypotheses may be biased by the researcher’s own beliefs, values, or assumptions. This can lead to selective interpretation of data and a lack of objectivity in research.
  • Cannot prove causation: Hypothesis testing can show an association between variables, but it cannot by itself establish causation; that requires further experimentation and analysis.
  • Limited to specific contexts: Hypotheses are limited to specific contexts and may not be generalizable to other situations or populations. This means that results may not be applicable in other contexts or may require further testing.
  • May be affected by chance: Hypotheses may be affected by chance or random variation, which can obscure or distort the true relationship between variables.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Social Sci LibreTexts

2.4: Developing a Hypothesis



Learning Objectives

  • Distinguish between a theory and a hypothesis.
  • Discover how theories are used to generate hypotheses and how the results of studies can be used to further inform theories.
  • Understand the characteristics of a good hypothesis.

Theories and Hypotheses

Before describing how to develop a hypothesis it is important to distinguish between a theory and a hypothesis. A theory is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes, functions, or organizing principles that have not been observed directly. Consider, for example, Zajonc’s theory of social facilitation and social inhibition. He proposed that being watched by others while performing a task creates a general state of physiological arousal, which increases the likelihood of the dominant (most likely) response. So for highly practiced tasks, being watched increases the tendency to make correct responses, but for relatively unpracticed tasks, being watched increases the tendency to make incorrect responses. Notice that this theory—which has come to be called drive theory—provides an explanation of both social facilitation and social inhibition that goes beyond the phenomena themselves by including concepts such as “arousal” and “dominant response,” along with processes such as the effect of arousal on the dominant response.

Outside of science, referring to an idea as a theory often implies that it is untested—perhaps no more than a wild guess. In science, however, the term theory has no such implication. A theory is simply an explanation or interpretation of a set of phenomena. It can be untested, but it can also be extensively tested, well supported, and accepted as an accurate description of the world by the scientific community. The theory of evolution by natural selection, for example, is a theory because it is an explanation of the diversity of life on earth—not because it is untested or unsupported by scientific research. On the contrary, the evidence for this theory is overwhelmingly positive and nearly all scientists accept its basic assumptions as accurate. Similarly, the “germ theory” of disease is a theory because it is an explanation of the origin of various diseases, not because there is any doubt that many diseases are caused by microorganisms that infect the body.

A hypothesis, on the other hand, is a specific prediction about a new phenomenon that should be observed if a particular theory is accurate. It is an explanation that relies on just a few key concepts. Hypotheses are often specific predictions about what will happen in a particular study, developed by considering existing evidence and using reasoning to infer what will happen in the specific context of interest. Hypotheses are often, but not always, derived from theories. That is, a hypothesis is often a prediction based on a theory, but some hypotheses are atheoretical; only after a set of observations has been made is a theory developed. This is because theories are broad in nature and explain larger bodies of data. So if our research question is truly original, we may need to collect some data and make some observations before we can develop a broader theory.

Theories and hypotheses always have this if-then relationship. “If drive theory is correct, then cockroaches should run through a straight runway faster, and through a branching runway more slowly, when other cockroaches are present.” Although hypotheses are usually expressed as statements, they can always be rephrased as questions: “Do cockroaches run through a straight runway faster when other cockroaches are present?” Thus, deriving hypotheses from theories is an excellent way of generating interesting research questions.

But how do researchers derive hypotheses from theories? One way is to generate a research question using the techniques discussed in this chapter and then ask whether any theory implies an answer to that question. For example, you might wonder whether expressive writing about positive experiences improves health as much as expressive writing about traumatic experiences. Although this question is an interesting one on its own, you might then ask whether the habituation theory—the idea that expressive writing causes people to habituate to negative thoughts and feelings—implies an answer. In this case, it seems clear that if the habituation theory is correct, then expressive writing about positive experiences should not be effective because it would not cause people to habituate to negative thoughts and feelings. A second way to derive hypotheses from theories is to focus on some component of the theory that has not yet been directly observed. For example, a researcher could focus on the process of habituation—perhaps hypothesizing that people should show fewer signs of emotional distress with each new writing session.

Among the very best hypotheses are those that distinguish between competing theories. For example, Norbert Schwarz and his colleagues considered two theories of how people make judgments about themselves, such as how assertive they are (Schwarz et al., 1991) [1] . Both theories held that such judgments are based on relevant examples that people bring to mind. However, one theory was that people base their judgments on the number of examples they bring to mind and the other was that people base their judgments on how easily they bring those examples to mind. To test these theories, the researchers asked people to recall either six times when they were assertive (which is easy for most people) or 12 times (which is difficult for most people). Then they asked them to judge their own assertiveness. Note that the number-of-examples theory implies that people who recalled 12 examples should judge themselves to be more assertive because they recalled more examples, but the ease-of-examples theory implies that participants who recalled six examples should judge themselves as more assertive because recalling the examples was easier. Thus the two theories made opposite predictions so that only one of the predictions could be confirmed. The surprising result was that participants who recalled fewer examples judged themselves to be more assertive—providing particularly convincing evidence in favor of the ease-of-retrieval theory over the number-of-examples theory.

Theory Testing

The primary way that scientific researchers use theories is sometimes called the hypothetico-deductive method (although this term is much more likely to be used by philosophers of science than by scientists themselves). A researcher begins with a set of phenomena and either constructs a theory to explain or interpret them or chooses an existing theory to work with. He or she then makes a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researcher then conducts an empirical study to test the hypothesis. Finally, he or she reevaluates the theory in light of the new results and revises it if necessary. This process is usually conceptualized as a cycle because the researcher can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As Figure 2.2 shows, this approach meshes nicely with the model of scientific research in psychology presented earlier in the textbook—creating a more detailed model of “theoretically motivated” or “theory-driven” research.

[Figure 2.2: The hypothetico-deductive method as a cycle of theory, hypothesis, empirical study, and reevaluation]

As an example, let us consider Zajonc’s research on social facilitation and inhibition. He started with a somewhat contradictory pattern of results from the research literature. He then constructed his drive theory, according to which being watched by others while performing a task causes physiological arousal, which increases an organism’s tendency to make the dominant response. This theory predicts social facilitation for well-learned tasks and social inhibition for poorly learned tasks. He now had a theory that organized previous results in a meaningful way—but he still needed to test it. He hypothesized that if his theory was correct, he should observe that the presence of others improves performance in a simple laboratory task but inhibits performance in a difficult version of the very same laboratory task. To test this hypothesis, one of the studies he conducted used cockroaches as subjects (Zajonc, Heingartner, & Herman, 1969) [2]. The cockroaches ran either down a straight runway (an easy task for a cockroach) or through a cross-shaped maze (a difficult task for a cockroach) to escape into a dark chamber when a light was shined on them. They did this either while alone or in the presence of other cockroaches in clear plastic “audience boxes.” Zajonc found that cockroaches in the straight runway reached their goal more quickly in the presence of other cockroaches, but cockroaches in the cross-shaped maze reached their goal more slowly when they were in the presence of other cockroaches. Thus he confirmed his hypothesis and provided support for his drive theory. (In many later studies, Zajonc also showed that drive theory applies to humans; Zajonc & Sales, 1966 [3].)

Incorporating Theory into Your Research

When you write your research report or plan your presentation, be aware that there are two basic ways that researchers usually include theory. The first is to raise a research question, answer that question by conducting a new study, and then offer one or more theories (usually more) to explain or interpret the results. This format works well for applied research questions and for research questions that existing theories do not address. The second way is to describe one or more existing theories, derive a hypothesis from one of those theories, test the hypothesis in a new study, and finally reevaluate the theory. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

Using theories in your research not only gives you guidance in coming up with experiment ideas and possible projects, but also lends legitimacy to your work. Psychologists have been interested in a variety of human behaviors and have developed many theories along the way. Using established theories will help you break new ground as a researcher, not limit you from developing your own ideas.

Characteristics of a Good Hypothesis

There are three general characteristics of a good hypothesis. First, a good hypothesis must be testable and falsifiable. We must be able to test the hypothesis using the methods of science, and, per Popper’s falsifiability criterion, it must be possible to gather evidence that will disconfirm the hypothesis if it is indeed false.

Second, a good hypothesis must be logical. As described above, hypotheses are more than just random guesses; they should be informed by previous theories or observations and by logical reasoning. Typically, we begin with a broad and general theory and use deductive reasoning to generate a more specific hypothesis to test based on that theory. Occasionally, however, when there is no theory to inform our hypothesis, we use inductive reasoning, which involves using specific observations or research findings to form a more general hypothesis.

Finally, the hypothesis should be positive. That is, the hypothesis should make a positive statement about the existence of a relationship or effect, rather than a statement that a relationship or effect does not exist. As scientists, we don’t set out to show that relationships do not exist or that effects do not occur, so our hypotheses should not be worded in a way that suggests an effect or relationship does not exist. Instead, the logic is to assume that the effect does not exist and then seek evidence to prove this assumption wrong, showing that the effect really does exist. That may seem backward, but it is the nature of the scientific method; the underlying reason is beyond the scope of this chapter, but it has to do with statistical theory.
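This “assume no effect, then see how strongly the data contradict that assumption” logic can be sketched with a quick calculation. The scores below are hypothetical, and the function is a minimal implementation of Welch’s t statistic: it measures how far the observed difference between two group means departs from what the null assumption of no difference would predict.

```python
import statistics
from math import sqrt

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples:
    (difference in means) / (standard error of that difference)."""
    m_a, m_b = statistics.mean(sample_a), statistics.mean(sample_b)
    v_a, v_b = statistics.variance(sample_a), statistics.variance(sample_b)
    return (m_a - m_b) / sqrt(v_a / len(sample_a) + v_b / len(sample_b))

# Hypothetical exam scores: rested vs. sleep-deprived participants.
rested = [74, 80, 69, 77, 82, 71, 75, 79]
deprived = [62, 58, 71, 65, 60, 55, 68, 63]

# A large positive t is evidence against the assumption that
# sleep deprivation makes no difference.
print(round(welch_t(rested, deprived), 2))  # → 5.39
```

In practice a researcher would compare this statistic to a t distribution to obtain a p-value; the point here is only the direction of the reasoning, which starts from the no-effect assumption rather than the positive hypothesis itself.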

Key Takeaways

  • A theory is broad in nature and explains larger bodies of data. A hypothesis is more specific and makes a prediction about the outcome of a particular study.
  • Working with theories is not “icing on the cake.” It is a basic ingredient of psychological research.
  • Like other scientists, psychologists use the hypothetico-deductive method. They construct theories to explain or interpret phenomena (or work with existing theories), derive hypotheses from their theories, test the hypotheses, and then reevaluate the theories in light of the new results.
  • Practice: Find a recent empirical research report in a professional journal. Read the introduction and highlight in different colors descriptions of theories and hypotheses.
  • Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61, 195–202.
  • Zajonc, R. B., Heingartner, A., & Herman, E. M. (1969). Social enhancement and impairment of performance in the cockroach. Journal of Personality and Social Psychology, 13, 83–92.
  • Zajonc, R. B., & Sales, S. M. (1966). Social facilitation of dominant and subordinate responses. Journal of Experimental Social Psychology, 2, 160–168.

The Research Hypothesis: Role and Construction

  • First Online: 01 January 2012


  • Phyllis G. Supino, EdD

A hypothesis is a logical construct, interposed between a problem and its solution, which represents a proposed answer to a research question. It gives direction to the investigator’s thinking about the problem and, therefore, facilitates a solution. There are three primary modes of inference by which hypotheses are developed: deduction (reasoning from general propositions to specific instances), induction (reasoning from specific instances to a general proposition), and abduction (formulation/acceptance on probation of a hypothesis to explain a surprising observation).

A research hypothesis should reflect an inference about variables; be stated as a grammatically complete, declarative sentence; be expressed simply and unambiguously; provide an adequate answer to the research problem; and be testable. Hypotheses can be classified as conceptual versus operational, single versus bi- or multivariable, causal or not causal, mechanistic versus nonmechanistic, and null or alternative. Hypotheses most commonly entail statements about “variables” which, in turn, can be classified according to their level of measurement (scaling characteristics) or according to their role in the hypothesis (independent, dependent, moderator, control, or intervening).

A hypothesis is rendered operational when its broadly (conceptually) stated variables are replaced by operational definitions of those variables. Hypotheses stated in this manner are called operational hypotheses, specific hypotheses, or predictions and facilitate testing.

Wrong hypotheses, rightly worked from, have produced more results than unguided observation

—Augustus De Morgan, 1872 [1]


De Morgan A, De Morgan S. A budget of paradoxes. London: Longmans Green; 1872.

Leedy Paul D. Practical research. Planning and design. 2nd ed. New York: Macmillan; 1960.

Bernard C. Introduction to the study of experimental medicine. New York: Dover; 1957.

Erren TC. The quest for questions—on the logical force of science. Med Hypotheses. 2004;62:635–40.

Peirce CS. Collected papers of Charles Sanders Peirce, vol. 7. In: Hartshorne C, Weiss P, editors. Boston: The Belknap Press of Harvard University Press; 1966.

Aristotle. The complete works of Aristotle: the revised Oxford Translation. In: Barnes J, editor. vol. 2. Princeton/New Jersey: Princeton University Press; 1984.

Polit D, Beck CT. Conceptualizing a study to generate evidence for nursing. In: Polit D, Beck CT, editors. Nursing research: generating and assessing evidence for nursing practice. 8th ed. Philadelphia: Wolters Kluwer/Lippincott Williams and Wilkins; 2008. Chapter 4.

Jenicek M, Hitchcock DL. Evidence-based practice. Logic and critical thinking in medicine. Chicago: AMA Press; 2005.

Bacon F. The novum organon or a true guide to the interpretation of nature. A new translation by the Rev G.W. Kitchin. Oxford: The University Press; 1855.

Popper KR. Objective knowledge: an evolutionary approach (revised edition). New York: Oxford University Press; 1979.

Morgan AJ, Parker S. Translational mini-review series on vaccines: the Edward Jenner Museum and the history of vaccination. Clin Exp Immunol. 2007;147:389–94.

Pead PJ. Benjamin Jesty: new light in the dawn of vaccination. Lancet. 2003;362:2104–9.

Lee JA. The scientific endeavor: a primer on scientific principles and practice. San Francisco: Addison-Wesley Longman; 2000.

Allchin D. Lawson’s shoehorn, or should the philosophy of science be rated, ‘X’? Science and Education. 2003;12:315–29.

Lawson AE. What is the role of induction and deduction in reasoning and scientific inquiry? J Res Sci Teach. 2005;42:716–40.

Peirce CS. Collected papers of Charles Sanders Peirce, vol. 2. In: Hartshorne C, Weiss P, editors. Boston: The Belknap Press of Harvard University Press; 1965.

Bonfantini MA, Proni G. To guess or not to guess? In: Eco U, Sebeok T, editors. The sign of three: Dupin, Holmes, Peirce. Bloomington: Indiana University Press; 1983. Chapter 5.

Peirce CS. Collected papers of Charles Sanders Peirce, vol. 5. In: Hartshorne C, Weiss P, editors. Boston: The Belknap Press of Harvard University Press; 1965.

Flach PA, Kakas AC. Abductive and inductive reasoning: background issues. In: Flach PA, Kakas AC, ­editors. Abduction and induction. Essays on their relation and integration. The Netherlands: Klewer; 2000. Chapter 1.

Murray JF. Voltaire, Walpole and Pasteur: variations on the theme of discovery. Am J Respir Crit Care Med. 2005;172:423–6.

Danemark B, Ekstrom M, Jakobsen L, Karlsson JC. Methodological implications, generalization, scientific inference, models (Part II) In: explaining society. Critical realism in the social sciences. New York: Routledge; 2002.

Pasteur L. Inaugural lecture as professor and dean of the faculty of sciences. In: Peterson H, editor. A treasury of the world’s greatest speeches. Douai, France: University of Lille 7 Dec 1954.

Swineburne R. Simplicity as evidence for truth. Milwaukee: Marquette University Press; 1997.

Sakar S, editor. Logical empiricism at its peak: Schlick, Carnap and Neurath. New York: Garland; 1996.

Popper K. The logic of scientific discovery. New York: Basic Books; 1959. 1934, trans. 1959.

Caws P. The philosophy of science. Princeton: D. Van Nostrand Company; 1965.

Popper K. Conjectures and refutations. The growth of scientific knowledge. 4th ed. London: Routledge and Keegan Paul; 1972.

Feyerabend PK. Against method, outline of an anarchistic theory of knowledge. London, UK: Verso; 1978.

Smith PG. Popper: conjectures and refutations (Chapter IV). In: Theory and reality: an introduction to the philosophy of science. Chicago: University of Chicago Press; 2003.

Blystone RV, Blodgett K. WWW: the scientific method. CBE Life Sci Educ. 2006;5:7–11.

Kleinbaum DG, Kupper LL, Morgenstern H. Epidemiological research. Principles and quantitative methods. New York: Van Nostrand Reinhold; 1982.

Fortune AE, Reid WJ. Research in social work. 3rd ed. New York: Columbia University Press; 1999.

Kerlinger FN. Foundations of behavioral research. 1st ed. New York: Hold, Reinhart and Winston; 1970.

Hoskins CN, Mariano C. Research in nursing and health. Understanding and using quantitative and qualitative methods. New York: Springer; 2004.

Tuckman BW. Conducting educational research. New York: Harcourt, Brace, Jovanovich; 1972.

Wang C, Chiari PC, Weihrauch D, Krolikowski JG, Warltier DC, Kersten JR, Pratt Jr PF, Pagel PS. Gender-specificity of delayed preconditioning by isoflurane in rabbits: potential role of endothelial nitric oxide synthase. Anesth Analg. 2006;103:274–80.

Beyer ME, Slesak G, Nerz S, Kazmaier S, Hoffmeister HM. Effects of endothelin-1 and IRL 1620 on myocardial contractility and myocardial energy metabolism. J Cardiovasc Pharmacol. 1995;26(Suppl 3):S150–2.

Stone J, Sharpe M. Amnesia for childhood in patients with unexplained neurological symptoms. J Neurol Neurosurg Psychiatry. 2002;72:416–7.

Naughton BJ, Moran M, Ghaly Y, Michalakes C. Computer tomography scanning and delirium in elder patients. Acad Emerg Med. 1997;4:1107–10.

Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet. 1991;337:867–72.

Stern JM, Simes RJ. Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ. 1997;315:640–5.

Stevens SS. On the theory of scales and measurement. Science. 1946;103:677–80.

Knapp TR. Treating ordinal scales as interval scales: an attempt to resolve the controversy. Nurs Res. 1990;39:121–3.

The Cochrane Collaboration. Open Learning Material. www.cochrane-net.org/openlearning/html/mod14-3.htm. Accessed 12 Oct 2009.

MacCorquodale K, Meehl PE. On a distinction between hypothetical constructs and intervening ­variables. Psychol Rev. 1948;55:95–107.

Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: ­conceptual, strategic and statistical considerations. J Pers Soc Psychol. 1986;51:1173–82.

Williamson GM, Schultz R. Activity restriction mediates the association between pain and depressed affect: a study of younger and older adult cancer patients. Psychol Aging. 1995;10:369–78.

Song M, Lee EO. Development of a functional capacity model for the elderly. Res Nurs Health. 1998;21:189–98.

MacKinnon DP. Introduction to statistical mediation analysis. New York: Routledge; 2008.

Copyright information

© 2012 Springer Science+Business Media, LLC

About this chapter

Supino, P.G. (2012). The Research Hypothesis: Role and Construction. In: Supino, P., Borer, J. (eds) Principles of Research Methodology. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-3360-6_3

Published: 18 April 2012. Print ISBN: 978-1-4614-3359-0. Online ISBN: 978-1-4614-3360-6.

How to Write a Great Hypothesis

Hypothesis Definition, Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."



A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.

Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

At a Glance

A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method , whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question, which is then explored through background research. Only at that point do researchers begin to develop a testable hypothesis.

Unless you are conducting an exploratory study, your hypothesis should always explain what you expect to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high-stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low-stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of a folk adage that a psychologist might try to investigate. The researcher might pose the specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the journal articles you read. Many authors will suggest questions that still need to be explored.

How to Formulate a Good Hypothesis

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method, falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse the idea of falsifiability with the idea that it means that something is false, which is not the case. What falsifiability means is that  if  something was false, then it is possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.

The Importance of Operational Definitions

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.

For example, a researcher might operationally define the variable "test anxiety" as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs, as measured by time.

These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.
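One informal way to see what an operational definition adds is to write its pieces down explicitly. This sketch is illustrative only: the `OperationalDefinition` structure and the scoring range are hypothetical, while the two examples mirror the ones described above.

```python
from dataclasses import dataclass

@dataclass
class OperationalDefinition:
    construct: str  # the abstract concept being studied
    measure: str    # the concrete procedure used to observe it
    unit: str       # the scale on which it is recorded

# "Test anxiety" operationalized as a self-report measure taken during an exam.
test_anxiety = OperationalDefinition(
    construct="test anxiety",
    measure="self-report anxiety questionnaire completed during an exam",
    unit="total score (range is hypothetical: 0-40)",
)

# "Study habits" operationalized as time actually spent studying.
study_habits = OperationalDefinition(
    construct="study habits",
    measure="time spent studying, recorded in a weekly log",
    unit="hours per week",
)
```

Spelling out the measure and unit this way is exactly what lets another researcher apply the same definition and attempt to replicate the result.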

Replicability

One of the basic principles of any type of scientific research is that the results must be replicable.

Replication means repeating an experiment in the same way to produce the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.

Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis : This type of hypothesis suggests there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis : This type suggests a relationship among three or more variables, such as two independent variables and one dependent variable.
  • Null hypothesis : This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis : This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis : This hypothesis uses statistical analysis to evaluate a representative population sample and then generalizes the findings to the larger group.
  • Logical hypothesis : This hypothesis assumes a relationship between variables without collecting data or evidence.

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the dependent variable if you change the independent variable.

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."
  • "Children who receive a new reading intervention will have higher reading scores than students who do not receive the intervention."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "There is no difference in anxiety levels between people who take St. John's wort supplements and those who do not."
  • "There is no difference in scores on a memory recall task between children and adults."
  • "There is no difference in aggression levels between children who play first-person shooter games and those who do not."

Examples of an alternative hypothesis:

  • "People who take St. John's wort supplements will have less anxiety than those who do not."
  • "Adults will perform better on a memory task than children."
  • "Children who play first-person shooter games will show higher levels of aggression than children who do not." 

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research, such as case studies, naturalistic observations, and surveys, is often used when conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a  correlational study  can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.

Experimental Research Methods

Experimental methods  are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.


By Kendra Cherry, MSEd, a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Developing a Hypothesis

Rajiv S. Jhangiani; I-Chant A. Chiang; Carrie Cuttler; and Dana C. Leighton

Learning Objectives

  • Distinguish between a theory and a hypothesis.
  • Discover how theories are used to generate hypotheses and how the results of studies can be used to further inform theories.
  • Understand the characteristics of a good hypothesis.

Theories and Hypotheses

Before describing how to develop a hypothesis, it is important to distinguish between a theory and a hypothesis. A  theory  is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes, functions, or organizing principles that have not been observed directly. Consider, for example, Zajonc’s theory of social facilitation and social inhibition (1965) [1] . He proposed that being watched by others while performing a task creates a general state of physiological arousal, which increases the likelihood of the dominant (most likely) response. So for highly practiced tasks, being watched increases the tendency to make correct responses, but for relatively unpracticed tasks, being watched increases the tendency to make incorrect responses. Notice that this theory—which has come to be called drive theory—provides an explanation of both social facilitation and social inhibition that goes beyond the phenomena themselves by including concepts such as “arousal” and “dominant response,” along with processes such as the effect of arousal on the dominant response.

Outside of science, referring to an idea as a theory often implies that it is untested—perhaps no more than a wild guess. In science, however, the term theory has no such implication. A theory is simply an explanation or interpretation of a set of phenomena. It can be untested, but it can also be extensively tested, well supported, and accepted as an accurate description of the world by the scientific community. The theory of evolution by natural selection, for example, is a theory because it is an explanation of the diversity of life on earth—not because it is untested or unsupported by scientific research. On the contrary, the evidence for this theory is overwhelmingly positive and nearly all scientists accept its basic assumptions as accurate. Similarly, the “germ theory” of disease is a theory because it is an explanation of the origin of various diseases, not because there is any doubt that many diseases are caused by microorganisms that infect the body.

A hypothesis, on the other hand, is a specific prediction about a new phenomenon that should be observed if a particular theory is accurate. It is an explanation that relies on just a few key concepts. Hypotheses are often specific predictions about what will happen in a particular study. They are developed by considering existing evidence and using reasoning to infer what will happen in the specific context of interest. Hypotheses are often, but not always, derived from theories. A hypothesis is often a prediction based on a theory, but some hypotheses are atheoretical: only after a set of observations has been made is a theory developed. This is because theories are broad in nature and explain larger bodies of data. So if our research question is truly original, we may need to collect some data and make some observations before we can develop a broader theory.

Theories and hypotheses always have this if-then relationship. “If drive theory is correct, then cockroaches should run through a straight runway faster, and a branching runway more slowly, when other cockroaches are present.” Although hypotheses are usually expressed as statements, they can always be rephrased as questions. “Do cockroaches run through a straight runway faster when other cockroaches are present?” Thus deriving hypotheses from theories is an excellent way of generating interesting research questions.

But how do researchers derive hypotheses from theories? One way is to generate a research question using the techniques discussed in this chapter  and then ask whether any theory implies an answer to that question. For example, you might wonder whether expressive writing about positive experiences improves health as much as expressive writing about traumatic experiences. Although this  question  is an interesting one  on its own, you might then ask whether the habituation theory—the idea that expressive writing causes people to habituate to negative thoughts and feelings—implies an answer. In this case, it seems clear that if the habituation theory is correct, then expressive writing about positive experiences should not be effective because it would not cause people to habituate to negative thoughts and feelings. A second way to derive hypotheses from theories is to focus on some component of the theory that has not yet been directly observed. For example, a researcher could focus on the process of habituation—perhaps hypothesizing that people should show fewer signs of emotional distress with each new writing session.

Among the very best hypotheses are those that distinguish between competing theories. For example, Norbert Schwarz and his colleagues considered two theories of how people make judgments about themselves, such as how assertive they are (Schwarz et al., 1991) [2] . Both theories held that such judgments are based on relevant examples that people bring to mind. However, one theory was that people base their judgments on the  number  of examples they bring to mind and the other was that people base their judgments on how  easily  they bring those examples to mind. To test these theories, the researchers asked people to recall either six times when they were assertive (which is easy for most people) or 12 times (which is difficult for most people). Then they asked them to judge their own assertiveness. Note that the number-of-examples theory implies that people who recalled 12 examples should judge themselves to be more assertive because they recalled more examples, but the ease-of-examples theory implies that participants who recalled six examples should judge themselves as more assertive because recalling the examples was easier. Thus the two theories made opposite predictions so that only one of the predictions could be confirmed. The surprising result was that participants who recalled fewer examples judged themselves to be more assertive—providing particularly convincing evidence in favor of the ease-of-retrieval theory over the number-of-examples theory.

Theory Testing

The primary way that scientific researchers use theories is sometimes called the hypothetico-deductive method  (although this term is much more likely to be used by philosophers of science than by scientists themselves). Researchers begin with a set of phenomena and either construct a theory to explain or interpret them or choose an existing theory to work with. They then make a prediction about some new phenomenon that should be observed if the theory is correct. Again, this prediction is called a hypothesis. The researchers then conduct an empirical study to test the hypothesis. Finally, they reevaluate the theory in light of the new results and revise it if necessary. This process is usually conceptualized as a cycle because the researchers can then derive a new hypothesis from the revised theory, conduct a new empirical study to test the hypothesis, and so on. As  Figure 2.3  shows, this approach meshes nicely with the model of scientific research in psychology presented earlier in the textbook—creating a more detailed model of “theoretically motivated” or “theory-driven” research.

[Figure 2.3: The hypothetico-deductive cycle of theory and hypothesis testing]

As an example, let us consider Zajonc’s research on social facilitation and inhibition. He started with a somewhat contradictory pattern of results from the research literature. He then constructed his drive theory, according to which being watched by others while performing a task causes physiological arousal, which increases an organism’s tendency to make the dominant response. This theory predicts social facilitation for well-learned tasks and social inhibition for poorly learned tasks. He now had a theory that organized previous results in a meaningful way—but he still needed to test it. He hypothesized that if his theory was correct, he should observe that the presence of others improves performance in a simple laboratory task but inhibits performance in a difficult version of the very same laboratory task. To test this hypothesis, one of the studies he conducted used cockroaches as subjects (Zajonc, Heingartner, & Herman, 1969) [3] . The cockroaches ran either down a straight runway (an easy task for a cockroach) or through a cross-shaped maze (a difficult task for a cockroach) to escape into a dark chamber when a light was shined on them. They did this either while alone or in the presence of other cockroaches in clear plastic “audience boxes.” Zajonc found that cockroaches in the straight runway reached their goal more quickly in the presence of other cockroaches, but cockroaches in the cross-shaped maze reached their goal more slowly when they were in the presence of other cockroaches. Thus he confirmed his hypothesis and provided support for his drive theory. (Zajonc also demonstrated these effects in humans [Zajonc & Sales, 1966] [4] , and many other studies followed.)

Incorporating Theory into Your Research

When you write your research report or plan your presentation, be aware that there are two basic ways that researchers usually include theory. The first is to raise a research question, answer that question by conducting a new study, and then offer one or more theories (usually more) to explain or interpret the results. This format works well for applied research questions and for research questions that existing theories do not address. The second way is to describe one or more existing theories, derive a hypothesis from one of those theories, test the hypothesis in a new study, and finally reevaluate the theory. This format works well when there is an existing theory that addresses the research question—especially if the resulting hypothesis is surprising or conflicts with a hypothesis derived from a different theory.

Using theories in your research not only gives you guidance in coming up with experiment ideas and possible projects, but also lends legitimacy to your work. Psychologists have been interested in a variety of human behaviors and have developed many theories along the way. Using established theories will help you break new ground as a researcher, not limit you from developing your own ideas.

Characteristics of a Good Hypothesis

There are three general characteristics of a good hypothesis. First, a good hypothesis must be testable and falsifiable. We must be able to test the hypothesis using the methods of science and, if you’ll recall Popper’s falsifiability criterion, it must be possible to gather evidence that will disconfirm the hypothesis if it is indeed false. Second, a good hypothesis must be logical. As described above, hypotheses are more than just a random guess. Hypotheses should be informed by previous theories or observations and logical reasoning. Typically, we begin with a broad and general theory and use deductive reasoning to generate a more specific hypothesis to test based on that theory. Occasionally, however, when there is no theory to inform our hypothesis, we use inductive reasoning, which involves using specific observations or research findings to form a more general hypothesis. Finally, the hypothesis should be positive. That is, the hypothesis should make a positive statement about the existence of a relationship or effect, rather than a statement that a relationship or effect does not exist. As scientists, we don’t set out to show that relationships do not exist or that effects do not occur, so our hypotheses should not be worded in a way that suggests an effect or relationship does not exist. The nature of science is to assume that something does not exist and then seek evidence to prove this wrong, to show that it really does exist. That may seem backward to you, but that is the nature of the scientific method. The underlying reason for this is beyond the scope of this chapter, but it has to do with statistical theory.

  • Zajonc, R. B. (1965). Social facilitation.  Science, 149 , 269–274 ↵
  • Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic.  Journal of Personality and Social Psychology, 61 , 195–202. ↵
  • Zajonc, R. B., Heingartner, A., & Herman, E. M. (1969). Social enhancement and impairment of performance in the cockroach.  Journal of Personality and Social Psychology, 13 , 83–92. ↵
  • Zajonc, R.B. & Sales, S.M. (1966). Social facilitation of dominant and subordinate responses. Journal of Experimental Social Psychology, 2 , 160-168. ↵

A coherent explanation or interpretation of one or more phenomena.

A specific prediction about a new phenomenon that should be observed if a particular theory is accurate.

A cyclical process of theory development, starting with an observed phenomenon, then developing or using a theory to make a specific prediction of what should happen if that theory is correct, testing that prediction, refining the theory in light of the findings, and using that refined theory to develop new hypotheses, and so on.

The ability to test the hypothesis using the methods of science and the possibility to gather evidence that will disconfirm the hypothesis if it is indeed false.

Developing a Hypothesis Copyright © by Rajiv S. Jhangiani; I-Chant A. Chiang; Carrie Cuttler; and Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Statistics LibreTexts

4.4: Hypothesis Testing


  • David Diez, Christopher Barr, & Mine Çetinkaya-Rundel
  • OpenIntro Statistics


Is the typical US runner getting faster or slower over time? We consider this question in the context of the Cherry Blossom Run, comparing runners in 2006 and 2012. Technological advances in shoes, training, and diet might suggest runners would be faster in 2012. An opposing viewpoint might say that with the average body mass index on the rise, people tend to run slower. In fact, all of these components might be influencing run time.

In addition to considering run times in this section, we consider a topic near and dear to most students: sleep. A recent study found that college students average about 7 hours of sleep per night.15 However, researchers at a rural college are interested in showing that their students sleep longer than seven hours on average. We investigate this topic in Section 4.3.4.

Hypothesis Testing Framework

The average time for all runners who finished the Cherry Blossom Run in 2006 was 93.29 minutes (93 minutes and about 17 seconds). We want to determine if the run10Samp data set provides strong evidence that the participants in 2012 were faster or slower than those runners in 2006, versus the other possibility that there has been no change. 16 We simplify these three options into two competing hypotheses:

  • H 0 : The average 10 mile run time was the same for 2006 and 2012.
  • H A : The average 10 mile run time for 2012 was different than that of 2006.

We call H 0 the null hypothesis and H A the alternative hypothesis.

Null and alternative hypotheses

  • The null hypothesis (H 0 ) often represents either a skeptical perspective or a claim to be tested.
  • The alternative hypothesis (H A ) represents an alternative claim under consideration and is often represented by a range of possible parameter values.

15 theloquitur.com/?p=1161

16 While we could answer this question by examining the entire population data (run10), we only consider the sample data (run10Samp), which is more realistic since we rarely have access to population data.

The null hypothesis often represents a skeptical position or a perspective of no difference. The alternative hypothesis often represents a new perspective, such as the possibility that there has been a change.

Hypothesis testing framework

The skeptic will not reject the null hypothesis (H 0 ), unless the evidence in favor of the alternative hypothesis (H A ) is so strong that she rejects H 0 in favor of H A .

The hypothesis testing framework is a very general tool, and we often use it without a second thought. If a person makes a somewhat unbelievable claim, we are initially skeptical. However, if there is sufficient evidence that supports the claim, we set aside our skepticism and reject the null hypothesis in favor of the alternative. The hallmarks of hypothesis testing are also found in the US court system.

Exercise \(\PageIndex{1}\)

A US court considers two possible claims about a defendant: she is either innocent or guilty. If we set these claims up in a hypothesis framework, which would be the null hypothesis and which the alternative? 17

Jurors examine the evidence to see whether it convincingly shows a defendant is guilty. Even if the jurors leave unconvinced of guilt beyond a reasonable doubt, this does not mean they believe the defendant is innocent. This is also the case with hypothesis testing: even if we fail to reject the null hypothesis, we typically do not accept the null hypothesis as true. Failing to find strong evidence for the alternative hypothesis is not equivalent to accepting the null hypothesis.

18 H 0 : The average cost is $650 per month, \(\mu\) = $650.

In the example with the Cherry Blossom Run, the null hypothesis represents no difference in the average time from 2006 to 2012. The alternative hypothesis represents something new or more interesting: there was a difference, either an increase or a decrease. These hypotheses can be described in mathematical notation using \(\mu_{12}\) as the average run time for 2012:

  • H 0 : \(\mu_{12} = 93.29\)
  • H A : \(\mu_{12} \ne 93.29\)

where 93.29 minutes (93 minutes and about 17 seconds) is the average 10 mile time for all runners in the 2006 Cherry Blossom Run. Using this mathematical notation, the hypotheses can now be evaluated using statistical tools. We call 93.29 the null value since it represents the value of the parameter if the null hypothesis is true. We will use the run10Samp data set to evaluate the hypothesis test.

Testing Hypotheses using Confidence Intervals

We can start the evaluation of the hypothesis setup by comparing 2006 and 2012 run times using a point estimate from the 2012 sample: \(\bar {x}_{12} = 95.61\) minutes. This estimate suggests the average time is actually longer than the 2006 time, 93.29 minutes. However, to evaluate whether this provides strong evidence that there has been a change, we must consider the uncertainty associated with \(\bar {x}_{12}\).

17 The jury considers whether the evidence is so convincing (strong) that there is no reasonable doubt regarding the person's guilt; in such a case, the jury rejects innocence (the null hypothesis) and concludes the defendant is guilty (alternative hypothesis).

We learned in Section 4.1 that there is fluctuation from one sample to another, and it is very unlikely that the sample mean will be exactly equal to our parameter; we should not expect \(\bar {x}_{12}\) to exactly equal \(\mu_{12}\). Given that \(\bar {x}_{12} = 95.61\), it might still be possible that the population average in 2012 has remained unchanged from 2006. The difference between \(\bar {x}_{12}\) and 93.29 could be due to sampling variation, i.e. the variability associated with the point estimate when we take a random sample.

In Section 4.2, confidence intervals were introduced as a way to find a range of plausible values for the population mean. Based on run10Samp, a 95% confidence interval for the 2012 population mean, \(\mu_{12}\), was calculated as

\[(92.45, 98.77)\]

Because the 2006 mean, 93.29, falls in the range of plausible values, we cannot say the null hypothesis is implausible. That is, we failed to reject the null hypothesis, H 0 .
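This interval-based decision rule can be sketched in a few lines of Python (the helper function and its name are ours; the interval endpoints and null value come from the text):

```python
def fail_to_reject(null_value, ci_lower, ci_upper):
    """Fail to reject H0 when the null value lies inside the confidence
    interval, i.e. when it remains a plausible value for the parameter."""
    return ci_lower <= null_value <= ci_upper

# 95% CI for the 2012 mean run time (from run10Samp) and the 2006 mean
ci_lower, ci_upper = 92.45, 98.77
null_value = 93.29

print(fail_to_reject(null_value, ci_lower, ci_upper))  # → True
```

Because 93.29 falls inside (92.45, 98.77), the check returns True and we fail to reject H 0 , matching the conclusion above.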

Double negatives can sometimes be used in statistics

In many statistical explanations, we use double negatives. For instance, we might say that the null hypothesis is not implausible or we failed to reject the null hypothesis. Double negatives are used to communicate that while we are not rejecting a position, we are also not saying it is correct.

Example \(\PageIndex{1}\)

Next consider whether there is strong evidence that the average age of runners has changed from 2006 to 2012 in the Cherry Blossom Run. In 2006, the average age was 36.13 years, and in the 2012 run10Samp data set, the average was 35.05 years with a standard deviation of 8.97 years for 100 runners.

First, set up the hypotheses:

  • H 0 : The average age of runners has not changed from 2006 to 2012, \(\mu_{age} = 36.13.\)
  • H A : The average age of runners has changed from 2006 to 2012, \(\mu_{age} \ne 36.13.\)

We have previously verified conditions for this data set. The normal model may be applied to \(\bar {y}\) and the estimate of SE should be very accurate. Using the sample mean and standard error, we can construct a 95% confidence interval for \(\mu _{age}\) to determine if there is sufficient evidence to reject H 0 :

\[\bar{y} \pm 1.96 \times \dfrac {s}{\sqrt {100}} \rightarrow 35.05 \pm 1.96 \times 0.90 \rightarrow (33.29, 36.81)\]

This confidence interval contains the null value, 36.13. Because 36.13 is not implausible, we cannot reject the null hypothesis. We have not found strong evidence that the average age is different than 36.13 years.
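The arithmetic in this example can be reproduced directly. A minimal Python sketch using the sample statistics quoted above:

```python
import math

# Runner ages in the 2012 run10Samp sample (statistics from the text)
n, ybar, s = 100, 35.05, 8.97
null_value = 36.13  # 2006 average age, the null value

se = s / math.sqrt(n)                 # ≈ 0.90
lower = ybar - 1.96 * se
upper = ybar + 1.96 * se
print(f"({lower:.2f}, {upper:.2f})")  # → (33.29, 36.81)

# The null value lies inside the interval, so we fail to reject H0
print(lower <= null_value <= upper)   # → True
```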

Exercise \(\PageIndex{2}\)

Colleges frequently provide estimates of student expenses such as housing. A consultant hired by a community college claimed that the average student housing expense was $650 per month. What are the null and alternative hypotheses to test whether this claim is accurate? 18

[Figure 4.11: Sample distribution of student housing expense. These data are moderately skewed, judging from the outliers on the right.]

H A : The average cost is different than $650 per month, \(\mu \ne\) $650.

19 Applying the normal model requires that certain conditions are met. Because the data are a simple random sample and the sample (presumably) represents no more than 10% of all students at the college, the observations are independent. The sample size is also sufficiently large (n = 75) and the data exhibit only moderate skew. Thus, the normal model may be applied to the sample mean.

Exercise \(\PageIndex{3}\)

The community college decides to collect data to evaluate the $650 per month claim. They take a random sample of 75 students at their school and obtain the data represented in Figure 4.11. Can we apply the normal model to the sample mean?

If the court makes a Type 1 Error, this means the defendant is innocent (H 0 true) but wrongly convicted. A Type 2 Error means the court failed to reject H 0 (i.e. failed to convict the person) when she was in fact guilty (H A true).

Example \(\PageIndex{2}\)

The sample mean for student housing is $611.63 and the sample standard deviation is $132.85. Construct a 95% confidence interval for the population mean and evaluate the hypotheses of Exercise 4.22.

The standard error associated with the mean may be estimated using the sample standard deviation divided by the square root of the sample size. Recall that n = 75 students were sampled.

\[ SE = \dfrac {s}{\sqrt {n}} = \dfrac {132.85}{\sqrt {75}} = 15.34\]

You showed in Exercise 4.23 that the normal model may be applied to the sample mean. This ensures a 95% confidence interval may be accurately constructed:

\[\bar {x} \pm z^{\star} \times SE \rightarrow 611.63 \pm 1.96 \times 15.34 \rightarrow (581.56, 641.70)\]

Because the null value $650 is not in the confidence interval, a true mean of $650 is implausible and we reject the null hypothesis. The data provide statistically significant evidence that the actual average housing expense is less than $650 per month.
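The standard error and confidence interval above can be verified with a short Python sketch (illustrative only; the summary statistics are those given in the example):

```python
import math

# Student housing expense: sample statistics from the text.
x_bar, s, n = 611.63, 132.85, 75

se = s / math.sqrt(n)                  # standard error of the sample mean
z_star = 1.96                          # critical value for 95% confidence
lower = x_bar - z_star * se
upper = x_bar + z_star * se
null_value = 650
reject_h0 = not (lower <= null_value <= upper)   # $650 lies outside the interval
```

Here `reject_h0` is `True`: the interval (581.56, 641.70) excludes $650, so the null hypothesis is rejected, as in the text.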

Decision Errors

Hypothesis tests are not flawless. Just think of the court system: innocent people are sometimes wrongly convicted and the guilty sometimes walk free. Similarly, we can make a wrong decision in statistical hypothesis tests. However, the difference is that we have the tools necessary to quantify how often we make such errors.

There are two competing hypotheses: the null and the alternative. In a hypothesis test, we make a statement about which one might be true, but we might choose incorrectly. There are four possible scenarios in a hypothesis test, which are summarized in Table 4.12.

A Type 1 Error is rejecting the null hypothesis when H0 is actually true. A Type 2 Error is failing to reject the null hypothesis when the alternative is actually true.

Exercise 4.25

In a US court, the defendant is either innocent (H 0 ) or guilty (H A ). What does a Type 1 Error represent in this context? What does a Type 2 Error represent? Table 4.12 may be useful.

To lower the Type 1 Error rate, we might raise our standard for conviction from "beyond a reasonable doubt" to "beyond a conceivable doubt" so fewer people would be wrongly convicted. However, this would also make it more difficult to convict the people who are actually guilty, so we would make more Type 2 Errors.

Exercise 4.26

How could we reduce the Type 1 Error rate in US courts? What influence would this have on the Type 2 Error rate?

To lower the Type 2 Error rate, we want to convict more guilty people. We could lower the standards for conviction from "beyond a reasonable doubt" to "beyond a little doubt". Lowering the bar for guilt will also result in more wrongful convictions, raising the Type 1 Error rate.

Exercise 4.27

How could we reduce the Type 2 Error rate in US courts? What influence would this have on the Type 1 Error rate?

A skeptic would have no reason to believe that sleep patterns at this school are different than the sleep patterns at another school.

Exercises 4.25-4.27 provide an important lesson:

If we reduce how often we make one type of error, we generally make more of the other type.

Hypothesis testing is built around rejecting or failing to reject the null hypothesis. That is, we do not reject H 0 unless we have strong evidence. But what precisely does strong evidence mean? As a general rule of thumb, for those cases where the null hypothesis is actually true, we do not want to incorrectly reject H 0 more than 5% of the time. This corresponds to a significance level of 0.05. We often write the significance level using \(\alpha\) (the Greek letter alpha): \(\alpha = 0.05.\) We discuss the appropriateness of different significance levels in Section 4.3.6.

If we use a 95% confidence interval to test a hypothesis where the null hypothesis is true, we will make an error whenever the point estimate is at least 1.96 standard errors away from the population parameter. This happens about 5% of the time (2.5% in each tail). Similarly, using a 99% confidence interval to evaluate a hypothesis is equivalent to a significance level of \(\alpha = 0.01\).
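The critical values behind this correspondence can be recovered from the standard normal distribution. A short sketch using Python's standard library (an illustration, not part of the original text):

```python
from statistics import NormalDist

# A C% confidence interval leaves (1 - C)/2 of the probability in each tail,
# so its critical value z* is the (1 - alpha/2) quantile of the standard normal.
z95 = NormalDist().inv_cdf(1 - 0.05 / 2)   # 95% interval <-> alpha = 0.05
z99 = NormalDist().inv_cdf(1 - 0.01 / 2)   # 99% interval <-> alpha = 0.01
```

Rounding these quantiles gives the familiar cutoffs 1.96 and 2.58.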

A confidence interval is, in one sense, simplistic in the world of hypothesis tests. Consider the following two scenarios:

  • The null value (the parameter value under the null hypothesis) is in the 95% confidence interval but just barely, so we would not reject H 0 . However, we might like to somehow say, quantitatively, that it was a close decision.
  • The null value is very far outside of the interval, so we reject H 0 . However, we want to communicate that, not only did we reject the null hypothesis, but it wasn't even close. Such a case is depicted in Figure 4.13.

In Section 4.3.4, we introduce a tool called the p-value that will be helpful in these cases. The p-value method also extends to hypothesis tests where confidence intervals cannot be easily constructed or applied.


Formal Testing using p-Values

The p-value is a way of quantifying the strength of the evidence against the null hypothesis and in favor of the alternative. Formally the p-value is a conditional probability.

definition: p-value

The p-value is the probability of observing data at least as favorable to the alternative hypothesis as our current data set, if the null hypothesis is true. We typically use a summary statistic of the data, in this chapter the sample mean, to help compute the p-value and evaluate the hypotheses.

A poll by the National Sleep Foundation found that college students average about 7 hours of sleep per night. Researchers at a rural school are interested in showing that students at their school sleep longer than seven hours on average, and they would like to demonstrate this using a sample of students. What would be an appropriate skeptical position for this research?

This is entirely based on the interests of the researchers. Had they been only interested in the opposite case - showing that their students were actually averaging fewer than seven hours of sleep but not interested in showing more than 7 hours - then our setup would have set the alternative as \(\mu < 7\).


We can set up the null hypothesis for this test as a skeptical perspective: the students at this school average 7 hours of sleep per night. The alternative hypothesis takes a new form reflecting the interests of the research: the students average more than 7 hours of sleep. We can write these hypotheses as

  • H 0 : \(\mu\) = 7.
  • H A : \(\mu\) > 7.

Using \(\mu\) > 7 as the alternative is an example of a one-sided hypothesis test. In this investigation, there is no apparent interest in learning whether the mean is less than 7 hours. (The standard error can be estimated from the sample standard deviation and the sample size: \(SE_{\bar {x}} = \dfrac {s_x}{\sqrt {n}} = \dfrac {1.75}{\sqrt {110}} = 0.17\)). Earlier we encountered a two-sided hypothesis where we looked for any clear difference, greater than or less than the null value.

Always use a two-sided test unless it was made clear prior to data collection that the test should be one-sided. Switching a two-sided test to a one-sided test after observing the data is dangerous because it can inflate the Type 1 Error rate.

TIP: One-sided and two-sided tests

If the researchers are only interested in showing an increase or a decrease, but not both, use a one-sided test. If the researchers would be interested in any difference from the null value - an increase or decrease - then the test should be two-sided.

TIP: Always write the null hypothesis as an equality

We will find it most useful if we always list the null hypothesis as an equality (e.g. \(\mu\) = 7) while the alternative always uses an inequality (e.g. \(\mu \ne 7\), \(\mu > 7\), or \(\mu < 7\)).

The researchers at the rural school conducted a simple random sample of n = 110 students on campus. They found that these students averaged 7.42 hours of sleep and the standard deviation of the amount of sleep for the students was 1.75 hours. A histogram of the sample is shown in Figure 4.14.

Before we can use a normal model for the sample mean or compute the standard error of the sample mean, we must verify conditions. (1) Because this is a simple random sample from less than 10% of the student body, the observations are independent. (2) The sample size in the sleep study is sufficiently large since it is greater than 30. (3) The data show moderate skew in Figure 4.14 and the presence of a couple of outliers. This skew and the outliers (which are not too extreme) are acceptable for a sample size of n = 110. With these conditions verified, the normal model can be safely applied to \(\bar {x}\) and the estimated standard error will be very accurate.

What is the standard deviation associated with \(\bar {x}\)? That is, estimate the standard error of \(\bar {x}\). 25

The hypothesis test will be evaluated using a significance level of \(\alpha = 0.05\). We want to consider the data under the scenario that the null hypothesis is true. In this case, the sample mean is from a distribution that is nearly normal and has mean 7 and standard deviation of about 0.17. Such a distribution is shown in Figure 4.15.


The shaded tail in Figure 4.15 represents the chance of observing such a large mean, conditional on the null hypothesis being true. That is, the shaded tail represents the p-value. We shade all means larger than our sample mean, \(\bar {x} = 7.42\), because they are more favorable to the alternative hypothesis than the observed mean.

We compute the p-value by finding the tail area of this normal distribution, which we learned to do in Section 3.1. First compute the Z score of the sample mean, \(\bar {x} = 7.42\):

\[Z = \dfrac {\bar {x} - \text {null value}}{SE_{\bar {x}}} = \dfrac {7.42 - 7}{0.17} = 2.47\]

Using the normal probability table, the lower unshaded area is found to be 0.993. Thus the shaded area is 1 - 0.993 = 0.007. If the null hypothesis is true, the probability of observing such a large sample mean for a sample of 110 students is only 0.007. That is, if the null hypothesis is true, we would not often see such a large mean.

We evaluate the hypotheses by comparing the p-value to the significance level. Because the p-value is less than the significance level \((p-value = 0.007 < 0.05 = \alpha)\), we reject the null hypothesis. What we observed is so unusual with respect to the null hypothesis that it casts serious doubt on H 0 and provides strong evidence favoring H A .
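The one-sided p-value computation above can be sketched in Python (illustrative only; `statistics.NormalDist` plays the role of the normal probability table, and the rounded standard error of 0.17 from the text is reused):

```python
from statistics import NormalDist

# Sleep study: values from the text.
x_bar, mu_0, se = 7.42, 7.0, 0.17

z = (x_bar - mu_0) / se                # Z score of the sample mean
p_value = 1 - NormalDist().cdf(z)      # upper-tail area, since H_A: mu > 7
reject_h0 = p_value < 0.05             # compare to alpha = 0.05
```

The Z score rounds to 2.47 and the p-value to 0.007, so `reject_h0` is `True`, matching the conclusion in the text.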

p-value as a tool in hypothesis testing

The p-value quantifies how strongly the data favor H A over H 0 . A small p-value (usually < 0.05) corresponds to sufficient evidence to reject H 0 in favor of H A .

TIP: First draw a picture to find the p-value

It is useful to draw a picture of the distribution of \(\bar {x}\) as though H 0 was true (i.e. \(\mu\) equals the null value), and shade the region (or regions) of sample means that are at least as favorable to the alternative hypothesis. These shaded regions represent the p-value.

The ideas below review the process of evaluating hypothesis tests with p-values:

  • The null hypothesis represents a skeptic's position or a position of no difference. We reject this position only if the evidence strongly favors H A .
  • A small p-value means that if the null hypothesis is true, there is a low probability of seeing a point estimate at least as extreme as the one we saw. We interpret this as strong evidence in favor of the alternative.
  • We reject the null hypothesis if the p-value is smaller than the significance level, \(\alpha\), which is usually 0.05. Otherwise, we fail to reject H 0 .
  • We should always state the conclusion of the hypothesis test in plain language so non-statisticians can also understand the results.

The p-value is constructed in such a way that we can directly compare it to the significance level ( \(\alpha\)) to determine whether or not to reject H 0 . This method ensures that the Type 1 Error rate does not exceed the significance level standard.


If the null hypothesis is true, how often should the p-value be less than 0.05?

About 5% of the time. If the null hypothesis is true, then the data only has a 5% chance of being in the 5% of data most favorable to H A .


Exercise 4.31

Suppose we had used a significance level of 0.01 in the sleep study. Would the evidence have been strong enough to reject the null hypothesis? (The p-value was 0.007.) What if the significance level was \(\alpha = 0.001\)? 27

27 We reject the null hypothesis whenever p-value < \(\alpha\). Thus, we would still reject the null hypothesis if \(\alpha = 0.01\) but not if the significance level had been \(\alpha = 0.001\).

Exercise 4.32

Ebay might be interested in showing that buyers on its site tend to pay less than they would for the corresponding new item on Amazon. We'll research this topic for one particular product: a video game called Mario Kart for the Nintendo Wii. During early October 2009, Amazon sold this game for $46.99. Set up an appropriate (one-sided!) hypothesis test to check the claim that Ebay buyers pay less during auctions at this same time. 28

28 The skeptic would say the average is the same on Ebay, and we are interested in showing the average price is lower.

Exercise 4.33

During early October, 2009, 52 Ebay auctions were recorded for Mario Kart.29 The total prices for the auctions are presented using a histogram in Figure 4.17, and we may like to apply the normal model to the sample mean. Check the three conditions required for applying the normal model: (1) independence, (2) at least 30 observations, and (3) the data are not strongly skewed. 30

30 (1) The independence condition is unclear. We will make the assumption that the observations are independent, which we should report with any final results. (2) The sample size is sufficiently large: \(n = 52 \ge 30\). (3) The data distribution is not strongly skewed; it is approximately symmetric.

H 0 : The average auction price on Ebay is equal to (or more than) the price on Amazon. We write only the equality in the statistical notation: \(\mu_{ebay} = 46.99\).

H A : The average price on Ebay is less than the price on Amazon, \(\mu _{ebay} < 46.99\).

29 These data were collected by OpenIntro staff.

Example 4.34

The average sale price of the 52 Ebay auctions for Wii Mario Kart was $44.17 with a standard deviation of $4.15. Does this provide sufficient evidence to reject the null hypothesis in Exercise 4.32? Use a significance level of \(\alpha = 0.01\).

The hypotheses were set up and the conditions were checked in Exercises 4.32 and 4.33. The next step is to find the standard error of the sample mean and produce a sketch to help find the p-value.


Because the alternative hypothesis says we are looking for a smaller mean, we shade the lower tail. We find this shaded area by using the Z score and normal probability table: \(Z = \dfrac {44.17 - 46.99}{0.5755} = -4.90\), which has area less than 0.0002. The area is so small we cannot really see it on the picture. This lower tail area corresponds to the p-value.

Because the p-value is so small - specifically, smaller than \(\alpha = 0.01\) - this provides sufficiently strong evidence to reject the null hypothesis in favor of the alternative. The data provide statistically significant evidence that the average price on Ebay is lower than Amazon's asking price.
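As with the earlier examples, the lower-tail p-value for the Ebay auctions can be checked with a brief Python sketch (not part of the original text; sample statistics as given above):

```python
import math
from statistics import NormalDist

# Ebay Mario Kart auctions: sample mean vs. Amazon's price.
x_bar, null_value = 44.17, 46.99
s, n = 4.15, 52

se = s / math.sqrt(n)                  # standard error of the mean
z = (x_bar - null_value) / se          # Z score; negative, so shade the lower tail
p_value = NormalDist().cdf(z)          # lower-tail area for H_A: mu < 46.99
reject_h0 = p_value < 0.01             # alpha = 0.01 in this example
```

The Z score rounds to -4.90 and the tail area is far below 0.0002, so the null hypothesis is rejected.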

Two-sided hypothesis testing with p-values

We now consider how to compute a p-value for a two-sided test. In one-sided tests, we shade the single tail in the direction of the alternative hypothesis. For example, when the alternative had the form \(\mu\) > 7, then the p-value was represented by the upper tail (Figure 4.16). When the alternative was \(\mu\) < 46.99, the p-value was the lower tail (Exercise 4.32). In a two-sided test, we shade two tails since evidence in either direction is favorable to H A .

Exercise 4.35 Earlier we talked about a research group investigating whether the students at their school slept longer than 7 hours each night. Let's consider a second group of researchers who want to evaluate whether the students at their college differ from the norm of 7 hours. Write the null and alternative hypotheses for this investigation. 31

Example 4.36 The second college randomly samples 72 students and finds a mean of \(\bar {x} = 6.83\) hours and a standard deviation of s = 1.8 hours. Does this provide strong evidence against H 0 in Exercise 4.35? Use a significance level of \(\alpha = 0.05\).

First, we must verify assumptions. (1) A simple random sample of less than 10% of the student body means the observations are independent. (2) The sample size is 72, which is greater than 30. (3) Based on the earlier distribution and what we already know about college student sleep habits, the distribution is probably not strongly skewed.

Next we can compute the standard error \((SE_{\bar {x}} = \dfrac {s}{\sqrt {n}} = 0.21)\) of the estimate and create a picture to represent the p-value, shown in Figure 4.18. Both tails are shaded.

31 Because the researchers are interested in any difference, they should use a two-sided setup: H 0 : \(\mu\) = 7, H A : \(\mu \ne 7.\)


An estimate of 7.17 or more provides at least as strong of evidence against the null hypothesis and in favor of the alternative as the observed estimate, \(\bar {x} = 6.83\).

We can calculate the tail areas by first finding the lower tail corresponding to \(\bar {x}\):

\[Z = \dfrac {6.83 - 7.00}{0.21} = -0.81 \xrightarrow {table} \text {left tail} = 0.2090\]

Because the normal model is symmetric, the right tail will have the same area as the left tail. The p-value is found as the sum of the two shaded tails:

\[ \text {p-value} = \text {left tail} + \text {right tail} = 2 \times \text {(left tail)} = 0.4180\]

This p-value is relatively large (larger than \(\alpha = 0.05\)), so we should not reject H 0 . That is, if H 0 is true, it would not be very unusual to see a sample mean this far from 7 hours simply due to sampling variation. Thus, we do not have sufficient evidence to conclude that the mean is different than 7 hours.
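The two-sided calculation doubles the smaller tail area; a minimal Python sketch (illustrative only, reusing the rounded SE of 0.21 from the text):

```python
from statistics import NormalDist

# Second sleep study: values from the text.
x_bar, mu_0, se = 6.83, 7.0, 0.21

z = (x_bar - mu_0) / se                  # Z score of the sample mean
left_tail = NormalDist().cdf(-abs(z))    # area of the smaller tail
p_value = 2 * left_tail                  # two-sided: double the single tail
reject_h0 = p_value < 0.05
```

Because the normal model is symmetric, doubling the left tail gives the total shaded area; `reject_h0` is `False`, matching the conclusion above.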

Example 4.37 It is never okay to change two-sided tests to one-sided tests after observing the data. In this example we explore the consequences of ignoring this advice. Using \(\alpha = 0.05\), we show that freely switching from two-sided tests to one-sided tests will cause us to make twice as many Type 1 Errors as intended.

Suppose the sample mean was larger than the null value, \(\mu_0\) (e.g. \(\mu_0\) would represent 7 if H 0 : \(\mu\) = 7). Then if we can flip to a one-sided test, we would use H A : \(\mu > \mu_0\). Now if we obtain any observation with a Z score greater than 1.65, we would reject H 0 . If the null hypothesis is true, we incorrectly reject the null hypothesis about 5% of the time when the sample mean is above the null value, as shown in Figure 4.19.

Suppose the sample mean was smaller than the null value. Then if we change to a one-sided test, we would use H A : \(\mu < \mu_0\). If \(\bar {x}\) had a Z score smaller than -1.65, we would reject H 0 . If the null hypothesis is true, then we would observe such a case about 5% of the time.

By examining these two scenarios, we can determine that we will make a Type 1 Error 5% + 5% = 10% of the time if we are allowed to swap to the "best" one-sided test for the data. This is twice the error rate we prescribed with our significance level: \(\alpha = 0.05\) (!).
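This doubling can also be seen by simulation. The sketch below (a hypothetical illustration, not part of the original text) draws Z scores under the null hypothesis and always "flips" to the one-sided test matching the direction of the observed mean, rejecting whenever |Z| > 1.65 as in the text:

```python
import random

# Under H0 the Z score of the sample mean is standard normal. An analyst who
# always switches to the favorable one-sided test rejects whenever |Z| > 1.65,
# so the realized Type 1 Error rate is both 5% tails combined, about 10%.
random.seed(1)                      # fixed seed so the sketch is reproducible
trials = 100_000
rejections = sum(abs(random.gauss(0, 1)) > 1.65 for _ in range(trials))
rate = rejections / trials
```

The simulated rate lands near 0.10, roughly twice the nominal \(\alpha = 0.05\).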


Caution: One-sided hypotheses are allowed only before seeing data

After observing data, it is tempting to turn a two-sided test into a one-sided test. Avoid this temptation. Hypotheses must be set up before observing the data. If they are not, the test must be two-sided.

Choosing a Significance Level

Choosing a significance level for a test is important in many contexts, and the traditional level is 0.05. However, it is often helpful to adjust the significance level based on the application. We may select a level that is smaller or larger than 0.05 depending on the consequences of any conclusions reached from the test.

  • If making a Type 1 Error is dangerous or especially costly, we should choose a small significance level (e.g. 0.01). Under this scenario we want to be very cautious about rejecting the null hypothesis, so we demand very strong evidence favoring H A before we would reject H 0 .
  • If a Type 2 Error is relatively more dangerous or much more costly than a Type 1 Error, then we should choose a higher significance level (e.g. 0.10). Here we want to be cautious about failing to reject H 0 when the null is actually false. We will discuss this particular case in greater detail in Section 4.6.

Significance levels should reflect consequences of errors

The significance level selected for a test should reflect the consequences associated with Type 1 and Type 2 Errors.

Example 4.38

A car manufacturer is considering a higher quality but more expensive supplier for window parts in its vehicles. They sample a number of parts from their current supplier and also parts from the new supplier. They decide that if the high quality parts will last more than 12% longer, it makes financial sense to switch to this more expensive supplier. Is there good reason to modify the significance level in such a hypothesis test?

The null hypothesis is that the more expensive parts last no more than 12% longer while the alternative is that they do last more than 12% longer. This decision is just one of the many regular factors that have a marginal impact on the car and company. A significance level of 0.05 seems reasonable since neither a Type 1 nor a Type 2 Error should be dangerous or (relatively) much more expensive.

Example 4.39

The same car manufacturer is considering a slightly more expensive supplier for parts related to safety, not windows. If the durability of these safety components is shown to be better than the current supplier, they will switch manufacturers. Is there good reason to modify the significance level in such an evaluation?

The null hypothesis would be that the suppliers' parts are equally reliable. Because safety is involved, the car company should be eager to switch to the slightly more expensive manufacturer (reject H 0 ) even if the evidence of increased safety is only moderately strong. A slightly larger significance level, such as \(\alpha = 0.10\), might be appropriate.

Exercise 4.40

A part inside of a machine is very expensive to replace. However, the machine usually functions properly even if this part is broken, so the part is replaced only if we are extremely certain it is broken based on a series of measurements. Identify appropriate hypotheses for this test (in plain language) and suggest an appropriate significance level. 32


The Hierarchy-of-Hypotheses Approach: A Synthesis Method for Enhancing Theory Development in Ecology and Evolution

Department of Biodiversity Research and Systematic Botany, University of Potsdam, Potsdam, Germany

Department of Restoration Ecology, Technical University of Munich, Freising, Germany

Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany

Carlos A Aguilar-Trigueros

Institute of Biology, Freie Universität, Berlin, Berlin, Germany

Isabelle Bartram

Institute of Sociology, University of Freiburg, Freiburg, Germany

Raul Rennó Braga

Universidade Federal do Paraná, Laboratório de Ecologia e Conservação, Curitiba, Brazil

Gregory P Dietl

Paleontological Research Institution and the Department of Earth and Atmospheric Sciences at Cornell University, Ithaca, New York

Martin Enders

Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB), Berlin, Germany

David J Gibson

School of Biological Sciences, Southern Illinois University Carbondale, Carbondale, Illinois

Lorena Gómez-Aparicio

Instituto de Recursos Naturales y Agrobiología de Sevilla, CSIC, LINCGlobal, Sevilla, Spain

Pierre Gras

Department of Ecological Dynamics, Leibniz Institute for Zoo and Wildlife Research (IZW), Berlin, Germany

Department of Conservation Biology, Helmholtz Centre for Environmental Research—UFZ, Leipzig, Germany

Sophie Lokatis

Christopher J Lortie

Department of Biology, York University, Toronto, Canada, and the National Center for Ecological Analysis and Synthesis, University of California Santa Barbara, Santa Barbara, California

Anne-Christine Mupepele

Chair of Nature Conservation and Landscape Ecology, University of Freiburg, Freiburg, and the Senckenberg Biodiversity and Climate Research Centre, Frankfurt am Main, both in Germany

Stefan Schindler

Environment Agency Austria and Division of Conservation Biology, Vegetation and Landscape Ecology, University of Vienna, Vienna, Austria, and Community Ecology and Conservation, Czech University of Life Sciences Prague, Prague, Czech Republic

Jostein Starrfelt

Centre for Ecological and Evolutionary Synthesis, University of Oslo, and Norwegian Scientific Committee for Food and Environment, Norwegian Institute of Public Health, Oslo, Norway

Alexis D Synodinos

Department of Plant Ecology and Nature Conservation, University of Potsdam, Potsdam, Germany

Centre for Biodiversity Theory and Modelling, Theoretical, and Experimental Ecology Station, CNRS, Moulis, France

Jonathan M Jeschke


In the current era of Big Data, existing synthesis tools such as formal meta-analyses are critical means to handle the deluge of information. However, there is a need for complementary tools that help to (a) organize evidence, (b) organize theory, and (c) closely connect evidence to theory. We present the hierarchy-of-hypotheses (HoH) approach to address these issues. In an HoH, hypotheses are conceptually and visually structured in a hierarchically nested way where the lower branches can be directly connected to empirical results. Used for organizing evidence, this tool allows researchers to conceptually connect empirical results derived through diverse approaches and to reveal under which circumstances hypotheses are applicable. Used for organizing theory, it allows researchers to uncover mechanistic components of hypotheses and previously neglected conceptual connections. In the present article, we offer guidance on how to build an HoH, provide examples from population and evolutionary biology and propose terminological clarifications.

In many disciplines, the volume of evidence published in scientific journals is steadily increasing. In principle, this increase should make it possible to describe and explain complex systems in much greater detail than ever before. However, an increase in available information does not necessarily correspond to an increase in knowledge and understanding (Jeschke et al. 2019 ). Publishing results in scientific journals and depositing data in public archives does not guarantee their practical application, reuse, or the advancement of theory. We suggest that this situation can be improved by the development, establishment, and regular application of methods that have the explicit aim of linking evidence and theory.

An important step toward more efficiently exploiting results from case studies is synthesis (for this and other key terms, see box 1 ). There is a wealth of methods available for statistically combining the results of multiple studies (Pullin et al. 2016 , Dicks et al. 2017 ). These methods enable the synthesis of research results stemming from different studies that address a common question (Koricheva et al. 2013 ). In the environmental sciences, evidence synthesis has increased both in frequency and importance (Lortie 2014 ), seeking to make empirical evidence readily available and more suitable as a basis for decision-making (e.g., evidence-based decision making; Sutherland 2006 , Diefenderfer et al. 2016 , Pullin et al. 2016 , Cook et al. 2017 , Dicks et al. 2017 ). Moreover, methodological guidelines have been developed, and web portals implemented to collect and synthesize the results of primary studies. Prime examples are the platforms www.conservationevidence.com and www.environmentalevidence.org , alongside the European Union–funded projects EKLIPSE ( www.eklipse-mechanism.eu ) and BiodiversityKnowledge (Nesshöver et al. 2016 ). These initiatives have promoted significant advances in the organization and assessment of evidence and the implementation of synthesis, thus allowing for a comprehensive representation of applied knowledge in environmental sciences.

Box 1. Glossary.

Evidence. Available body of data and information indicating whether a belief or proposition is true or valid (Howick 2011 , Mupepele et al. 2016 ). These data and information can, for example, stem from an empirical observation, model output, or simulation.

Hypothesis. An assumption that (a) is based on a formalized or nonformalized theoretical model of the real world and (b) can deliver one or more testable predictions (after Giere et al. 2005 ).

Mechanistic hypothesis . Narrowed version of an overarching hypothesis, resulting from specialization or decomposition of the unspecified hypothesis with respect to assumed underlying causes.

Operational hypothesis. Narrowed version of an overarching hypothesis, accounting for a specific study design. Operational hypotheses explicate which method (e.g., which study system or research approach) is used to study the overarching hypothesis.

Overarching hypothesis. Unspecified assumption derived from a general idea, concept or major principle (i.e., from a general ­theoretical model).

Prediction. Statement about how data (i.e., measured outcome of an experiment or observation) should look if the underlying hypothesis is true.

Synthesis. Process of identifying, compiling and combining relevant knowledge from multiple sources.

Theory. A high-level—that is, general—system of conceptual constructs or devices to explain and understand ecological, evolutionary or other phenomena and systems (adapted from Pickett et al. 2007 ). Theory can consist of a worked out, integrated body of mechanistic rules or even natural laws, but it may also consist of a loose collection of conceptual frameworks, ideas and hypotheses.

Fostering evidence-based decision-making is crucial to solving specific applied problems. However, findings resulting from these applied approaches for evidence synthesis are usually not reconnected to a broader body of theory. Therefore, they do not consistently contribute to a structured or targeted advancement of theory—for example, by assessing the usefulness of ideas. It is a missed opportunity to not feed this synthesized evidence back into theory. A similar lack of connection to theory has been observed for studies addressing basic research questions (e.g., Jeltsch et al. 2013 , Scheiner 2013 ). Evidence feeding back into theory, subsequently leading to further theory development, would become a more appealing, simpler and, therefore, more common process if there were well described and widely accepted methods. A positive example in this respect is structural equation modeling, especially if combined with metamodels (Grace et al. 2010 ). With this technique, theoretical knowledge directly feeds into mathematical models, and empirical data are then used to select the model best matching the observations.

In the present article, we provide a detailed description of a relatively new synthesis method—the hierarchy-of-hypotheses (HoH) approach (Jeschke et al. 2012 , Heger et al. 2013 )—that is complementary to existing knowledge synthesis tools. This approach offers the opportunity to organize evidence and ideas, and to create and display links between single study results and theory. We suggest that the representation of broad ideas as nested hierarchies of hypotheses can be powerful and can be used to more efficiently connect single studies to a body of theory. Empirical studies usually formulate very specific hypotheses, derive predictions from these about expected data, and test these predictions in experiments or observations. With an HoH, it can be made explicit which broader ideas these specific hypotheses are linked to. The specific hypotheses can be characterized and visualized as subhypotheses of a broader idea or theory. Therefore, it becomes clear that the single study, although necessarily limited in its scope, is testing an important aspect of a broader idea or theory. Similarly, an HoH can be used to organize a body of literature that is too heterogeneous for statistical meta-analysis. It can be linked with a systematic review of existing studies, so that the studies and their findings are organized and hierarchically structured, thus visualizing which aspects of an overarching question or hypothesis each study is addressing. Alternatively, the HoH approach can be used to refine a broad idea on theoretical grounds and to identify different possibilities of how an idea, concept, or hypothesis can become more specific, less ambiguous, and better structured. Taken together, the approach can help to strengthen the theoretical foundations of a research field.

In this context, it is important to clarify what is meant by hypothesis . In the present article, we apply the terminology offered by the philosopher of science Ronald Giere and colleagues (Giere et al. 2005 , see also Griesemer 2018 ). Accordingly, a hypothesis provides the connection of the (formalized or nonformalized) theoretical model that a researcher has, describing how a specific part of the world works in theory, to the real world by asserting that the model fits that part of the world in some specified aspect. A hypothesis needs to be testable, thus allowing the investigation of whether the theoretical model actually fits the real world. This is done by deriving one or more predictions from the hypothesis that state how data (gathered in an observation or experiment) should look if the hypothesis is true.

The HoH approach has already been introduced as a tool for synthesis in invasion ecology (Jeschke et al. 2012, Heger et al. 2013, Heger and Jeschke 2014, Jeschke and Heger 2018a). So far, however, explicit and consistent guidance on how to build a hierarchy of hypotheses has not been formally articulated. The primary objective of this publication therefore is to offer a concrete, consistent, and refined description for those who want to use this tool or adapt it to their discipline. Furthermore, we want to stimulate methodological discussions about its further development and improvement. In the following, we outline the main ideas behind the HoH approach and the history of its development, present a primer for creating HoHs, provide examples for applications within and outside of invasion ecology, and discuss its strengths and limitations.

The hierarchy-of-hypotheses approach

The basic tenet behind the HoH approach is that complexity can often be handled by hierarchically structuring the topic under study (Heger and Jeschke 2018c). The approach has been developed to clarify the link between big ideas and the experiments or surveys designed to test them. Usually, experiments and surveys actually test predictions derived from smaller, more specific ideas that represent one aspect or one manifestation of the big idea. Different studies addressing the same major hypothesis consequently often each address a different version of it. This diversity makes it hard to reconcile their results. The HoH approach addresses this challenge by dividing the major hypothesis into more specific formulations, or subhypotheses. These can be further divided until the level of refinement allows for direct empirical testing. The result is a tree that visually depicts different ways in which a major hypothesis can be formulated. Empirical studies can then be explicitly linked to the branch of the tree they intend to address, thus making a conceptual and visual connection to the major hypothesis. Hierarchical nestedness therefore allows one to structure and display relationships between different versions of an idea, and to conceptually collate empirical tests that address the same overall question with divergent approaches. A hierarchical arrangement of hypotheses has also been suggested by Pickett and colleagues (2007) in the context of the method of pairwise alternative hypothesis testing (or strong inference, Platt 1964). However, we are not aware of studies that picked up on or further developed this idea.
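The nested structure described above can be pictured as a simple tree in which each node is a hypothesis, children are its more specific subhypotheses, and empirical tests attach to the branch they address. The following Python sketch is purely illustrative: the class, its fields, and the example statements are our own invention, not part of any published HoH software.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One node in a hierarchy of hypotheses (illustrative sketch)."""
    statement: str
    children: list["Hypothesis"] = field(default_factory=list)  # subhypotheses
    studies: list[str] = field(default_factory=list)  # empirical tests linked here

    def add(self, statement: str) -> "Hypothesis":
        """Refine this hypothesis by attaching a more specific subhypothesis."""
        child = Hypothesis(statement)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        """Print the tree with indentation reflecting the hierarchy."""
        print("  " * depth + "- " + self.statement)
        for child in self.children:
            child.show(depth + 1)

# Major hypothesis at the root, refined until directly testable
# formulations are reached (example statements are hypothetical).
root = Hypothesis("Absence of enemies is a cause of invasion success")
damage = root.add("Invaders suffer less actual damage than natives")
damage.studies.append("Field survey X")  # a hypothetical empirical test
root.add("Invaders carry fewer enemy species than natives")
root.show()
```

Refining the tree further simply means calling `add` on a subhypothesis, so the depth of the hierarchy is not fixed in advance.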

The HoH approach in its first version (Jeschke et al. 2012 , Heger et al. 2013 , Heger and Jeschke 2014 ) was not a formalized method with a clear set of rules on how to proceed. It emerged and evolved during a literature synthesis project through dealing with the problem of how to merge results of a set of highly diverse studies without losing significant information on what precisely these studies were addressing. In that first iteration of the HoH method, the branches of the hierarchy were selected by the respective author team, on the basis of expert knowledge and assessment of published data. Therefore, pragmatic questions guided the creation of the HoH (e.g., which kind of branching helps group studies in a way that enhances interpretation? ). Through further work on the approach, helpful discussions with colleagues, and critical comments (Farji-Brener and Amador-Vargas 2018 , Griesemer 2018 , Scheiner and Fox 2018 ), suggestions for its refinement were formulated (Heger and Jeschke 2018b , 2018c ). The present article amounts to a further step in the methodological development and refinement of the HoH approach, including terminological clarifications and practical suggestions.

A primer for building a hierarchy of hypotheses

With the methodological guidance provided in the following, we take the initial steps toward formalizing the application of the HoH approach. However, we advocate that its usage should not be confined by rules that are too strict. Although we appreciate the advantages of strict methodological guidelines, such as those provided by The Collaboration for Environmental Evidence ( 2018 ) for synthesizing evidence in systematic reviews, we believe that when it comes to conceptual work and theory development, room is needed for creativity and methodological flexibility.

Applying the HoH approach involves four steps (figure  1 ). We distinguish two basic aims for creating an HoH: organizing evidence and organizing theory. These basic aims reflect the distinction between empirical and theoretical modeling approaches in Griesemer ( 2013 ). Creating and displaying links between evidence and theory can be part of the process in either case. In the first case (i.e., if the aim is to organize evidence), the process starts with a diverse set of empirical results and the question of how these can be grouped to enhance their joint interpretation or further analysis. In the second case (i.e., if the aim is to organize theory), the process of creating the hierarchy starts with decomposing an overarching hypothesis. An HoH allows one to make the meaning of this overarching hypothesis more explicit by formulating its components as separate subhypotheses from which testable, specific predictions can be derived.

Figure 1. Workflow for the creation of a hierarchy of hypotheses. For a detailed explanation, see the main text.

The starting point for an HoH-based analysis in both cases, for organizing evidence as well as for organizing theory, is the identification of a focal hypothesis. This starting point is followed by the compilation of information (step 1 in figure  1 ). Which information needs to be compiled depends on whether the aim is structuring and synthesizing empirical evidence provided by a set of studies (e.g., Jeschke and Heger 2018a and example 1 below) or whether the research interest is more in the theoretical structure and subdivision of the overarching hypothesis (see examples 2 and 3 below). The necessary information needs to be gathered by means of a literature review guided by expert knowledge. Especially if the aim is to organize evidence, we recommend applying a standardized procedure (e.g., PRISMA, Moher et al. 2015 , or ROSES, Haddaway et al. 2018 ) and recording the performed steps.

The next step is to create the hierarchy (step 2 in figure  1 ). If the aim is to organize evidence, step 1 will have led to the compilation of a set of studies empirically addressing the overarching hypothesis or a sufficiently homogeneous overarching theoretical framework. In step 2, these studies will need to be grouped. Depending on the aim of the study, it can be helpful to group the empirical tests of the overarching hypothesis according to study system (e.g., habitat, taxonomic group) or research approach (e.g., measured response variable). For example, in tests of the biotic resistance hypothesis in invasion ecology, which posits that an ecosystem with high biodiversity is more resistant against nonnative species than an ecosystem with lower biodiversity, Jeschke and colleagues ( 2018a ) grouped empirical tests according to how the tests measured biodiversity and resistance against nonnative species. Some tests measured biodiversity as species richness, others as evenness or functional richness. The groups resulting from such considerations can be interpreted as representing operational hypotheses, because they specify the general hypothesis by accounting for diverse research approaches—that is, options for measuring the hypothesized effect (see also Griesemer 2018 , Heger and Jeschke 2018c , as well as figure  2 a and example 1 below). In such cases, we recommend displaying all resulting subhypotheses, if feasible.
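To illustrate this grouping step, the sketch below sorts a set of invented study records by how each test measured biodiversity; each resulting group corresponds to one operational subhypothesis of the biotic resistance hypothesis. The records, field names, and measures are hypothetical stand-ins for a real literature compilation.

```python
from collections import defaultdict

# Invented records of empirical tests, each noting how the test
# operationalized "biodiversity" (illustrative only).
tests = [
    {"id": "study_01", "biodiversity_measure": "species richness"},
    {"id": "study_02", "biodiversity_measure": "evenness"},
    {"id": "study_03", "biodiversity_measure": "species richness"},
    {"id": "study_04", "biodiversity_measure": "functional richness"},
]

# Group tests by measurement approach; each group can be read as one
# operational subhypothesis, e.g. "higher species richness increases
# resistance against nonnative species".
groups = defaultdict(list)
for t in tests:
    groups[t["biodiversity_measure"]].append(t["id"])

for measure, ids in groups.items():
    print(f"Operational hypothesis (biodiversity as {measure}): {ids}")
```

In a real application the same grouping could be repeated for the second criterion (how resistance was measured), yielding a two-level branching.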

Figure 2. Three different types of branching in a hierarchy of hypotheses. The branching example shown in (a) is inspired by example 1 in the main text, (b) by example 2 (see also figure 3b), and (c) by example 3 (see also figure 4).

If the aim is to organize theory, the overarching hypothesis is split into independent components on the basis of conceptual considerations (figure  2 b and  2 c). This splitting of the overarching hypothesis can be done by creating branches according to which factors could have caused the respective process or pattern (see example 2 below, figure  2 b).

Broad, overarching hypotheses often consist of several complementary partial arguments that are necessary elements. Consider the question of why species are often well adapted to their biotic environment. A common hypothesis suggests that enduring interaction with enemies drives evolutionary changes, thus leading to adaptations of prey to their enemies (see example 3). This hypothesis presupposes that species face increasing risks from enemies but also that species’ traits evolve in response to the changed risk (figure 2c and example 3 below). Decomposing overarching hypotheses into their partial arguments by formulating separate mechanistic hypotheses can enhance conceptual clarity and elucidate that studies combined under one heading are sometimes in fact addressing very different things.

For any type of branching, it is critical to identify components or groups (i.e., branches) that are mutually exclusive and nonoverlapping, so that single cases or observations can be unambiguously assigned to one box (i.e., subhypothesis). If this is not feasible, it may be necessary to use conceptual maps, networks, or Venn diagrams rather than hierarchical structures (figure 1, step 2; also see supplemental table S1). Therefore, care should be taken not to impose a hierarchical structure in cases where it is not helpful.

For many applications, the process of building an HoH can stop at this step, and a publication of the results can be considered (step 4). The resulting HoH can, for example, show the connection of a planned study to a body of theory, explicate and visualize the complexity of ideas implicitly included in a major hypothesis, or develop a research program around an overarching idea.

If the aim is to identify research gaps, or to assess the generality or range of applicability of a major hypothesis, however, a further step must be taken (figure  1 , step 3a): The HoH needs to be linked to empirical data. In previous studies (e.g., Jeschke and Heger 2018a ), this step was done by assigning empirical studies to the subhypotheses they addressed and assessing the level of supporting evidence for the predictions derived from each hypothesis or subhypothesis. This assignment of studies to subhypotheses can be done either by using expert judgment or by applying machine learning algorithms (for further details, see Heger and Jeschke 2014 , Jeschke and Heger 2018a , Ryo et al. 2019 ). Depending on the research question, the available resources and the structure of the data, the level of evidence can be assessed for each subhypothesis as well as for the higher-level hypotheses and can then be compared across subhypotheses. Such a comparison can provide information on the generality of an overarching hypothesis (i.e., its unifying power and breadth of applicability) or on the range of conditions under which a mechanism applies (see supplemental table S2 for examples). Before an HoH organizing theory is connected to empirical evidence, it will be necessary in most cases to include operational hypotheses at the lower levels, specifying, for example, different possible experimental approaches.
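As a toy illustration of step 3a, the sketch below assigns test outcomes to terminal subhypotheses and recursively sums support up the hierarchy. The hierarchy, the outcomes, and the simple count-based evidence score are all assumptions made for illustration; published HoH studies used more nuanced assessments of the level of evidence.

```python
# Parent-to-children map of an invented miniature hierarchy
# (terminal subhypotheses have no children).
hierarchy = {
    "enemy release": ["less damage", "fewer enemies"],
    "less damage": [],
    "fewer enemies": [],
}

# Outcome of each empirical test assigned to a terminal subhypothesis:
# True = prediction supported, False = not supported (invented data).
outcomes = {
    "less damage": [True, True, False],
    "fewer enemies": [True, False, False, False],
}

def evidence(hypothesis: str) -> tuple[int, int]:
    """Return (supporting, total) tests for a hypothesis,
    summed over the hypothesis itself and all of its subhypotheses."""
    supported = sum(outcomes.get(hypothesis, []))
    total = len(outcomes.get(hypothesis, []))
    for child in hierarchy[hypothesis]:
        s, t = evidence(child)
        supported, total = supported + s, total + t
    return supported, total

for h in hierarchy:
    s, t = evidence(h)
    print(f"{h}: {s}/{t} tests supportive")
```

Because support propagates upward, the level of evidence can be compared both across subhypotheses and for the overarching hypothesis as a whole.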

The hierarchical approach can additionally be used to connect the HoH developed in step 2 to a related body of theory. For example, Heger and colleagues ( 2013 ) suggested that the existing HoH on the enemy release hypothesis (see example 1 below) was conceptually connected to another well-known hypothesis—the novel weapons hypothesis. As a common overarching hypothesis addressing the question why species can successfully establish and spread outside of their native range, they suggested the “lack of eco-evolutionary experience hypothesis”; the enemy release and the novel weapons hypotheses are considered subhypotheses of this overarching hypothesis. This optional step can therefore help to create missing links within a discipline or even across disciplines.

Performing this step requires the study of the related body of theory, looking for conceptual overlaps and overarching topics. It may turn out that hypotheses, concepts, and ideas exist that are conceptually linked to the focal overarching hypothesis but that these links are nonhierarchical. In these cases, it can be useful to build hypothesis networks and apply clustering techniques to identify underlying structures (see, e.g., Enders et al. 2020 ). This step can also be applied in cases in which the HoH has been built to organize evidence.

Once the HoH is finalized, it can be published in order to enter the public domain and facilitate the advancement of the methodology and theory development. For the future, we envision a platform for the publication of HoHs to make the structured representations of research topics available not only via the common path of journal publications. The webpage www.hi-knowledge.org (Jeschke et al. 2018b ) is a first step in this direction and is planned to allow for the upload of results in the future.

Application of the HoH approach: Three examples

We will now exemplify the process of creating an HoH. The first example starts with a diverse set of empirical tests addressing one overarching hypothesis (i.e., with the aim to organize evidence), whereas the second and third examples start with conceptual considerations on how different aspects are linked to one overarching hypothesis (in the present article, the aim is to organize theory).

Example 1: the enemy release hypothesis as a hierarchy

The first published study showing a detailed version of an HoH focused on the enemy release hypothesis (Heger and Jeschke 2014). This is a prominent hypothesis in invasion biology (Enders et al. 2018). With respect to the research question of why certain species become invasive—that is, why they establish and spread in a new range—it posits, “The absence of enemies is a cause of invasion success” (e.g., Keane and Crawley 2002). With a systematic literature review, Heger and Jeschke (2014) identified studies addressing this hypothesis. This review revealed that the hypothesis has been tested in many different ways. After screening the empirical tests with a specific focus on which research approach had been used, the authors decided to use three branching criteria: the indicator for enemy release (actual damage, infestation with enemies, or performance of the invader); the type of comparison (aliens versus natives, aliens in the native versus the invaded range, or invasive versus noninvasive aliens); and the type of enemies (specialists or generalists). On the basis of these criteria, Heger and Jeschke created a hierarchically organized representation of the hypothesis's multiple aspects. The order in which the three criteria were applied to create the hierarchy was in this case based on practical considerations. Empirical studies providing evidence were then assigned to the respective branch of the hierarchy, revealing which specific subhypotheses were better supported and which were less so (Heger and Jeschke 2014).

In later publications, Heger and Jeschke suggested some optional refinements of the original approach (Heger and Jeschke 2018b , 2018c ). One of the suggestions was to distinguish between mechanistic hypotheses (originally termed working hypotheses) and operational hypotheses as different forms of subhypotheses when building the hierarchy. Mechanistic hypotheses serve the purpose of refining the broad, overarching idea in a conceptual sense (figure  2 b and  2 c), whereas operational hypotheses refine the hypotheses by accounting for the diversity of study approaches (figure  2 a).

The enemy release hypothesis example indicates that it can be useful to apply different types of branching criteria within one study. Heger and Jeschke ( 2014 ) looked for helpful ways of grouping diverse empirical tests. Some of the branches they decided to create were based on differences in the research methods, such as the distinction between comparisons of aliens versus natives, and comparisons of aliens in their native versus the invaded range (figure  2 a). Other branches explicate complementary partial arguments contained in the major hypothesis: Studies in which the researchers asked whether aliens are confronted with fewer enemies were separated from those in which they asked whether aliens that are released show enhanced performance.

In this example, the HoH approach was used to organize evidence and therefore to expose the variety of manifestations of the enemy release hypothesis and to display the level of evidence for each branch of the HoH (see Heger and Jeschke 2018b and supplemental table S2 for an interpretation of the results).

Example 2: illustrating the potential drivers of the snowshoe hare–Canadian lynx population cycles

Understanding and predicting the spatiotemporal dynamics of populations is one of ecology's central goals (Sutherland et al. 2013), and population ecology has a long tradition of trying to understand the causes of observed patterns in population dynamics. However, research efforts do not always produce clear conclusions and often lead to competing explanatory hypotheses. A good example, which has been popularized through textbooks, is the 8–11-year synchronized population cycles of the snowshoe hare (Lepus americanus) and the Canadian lynx (Lynx canadensis; figure 3a). From eighteenth- to nineteenth-century fur trapping records across the North American boreal and northern temperate forests, it has been known that predator (lynx) and prey (hare) exhibit broadly synchronous population cycles. Research since the late 1930s (MacLulich 1937, Elton and Nicholson 1942) has tried to answer the question of how these patterns are produced. A linear food chain of producer (vegetation)—primary consumer or prey (snowshoe hares)—secondary consumer or predator (Canadian lynx) proved too simplistic as an explanation (Stenseth et al. 1997). Instead, multiple drivers could have been responsible, resulting in the development of multiple competing explanations (Oli et al. 2020).

Figure 3. (a) The population cycle of snowshoe hare and Canadian lynx and (b) a hierarchy of hypotheses illustrating its potential drivers. The hypotheses (blue boxes) branch from the overarching hypothesis into more and more precise mechanistic hypotheses and are confronted with empirical tests (arrows leading to grey boxes) at lower levels of the hierarchy. The broken lines indicate where the hierarchy may be extended. Sources: The figure is based on the summary of snowshoe hare–Canadian lynx research (Krebs et al. 2001, Krebs et al. 2018 and references therein). Panel (a) is reprinted with permission from OpenStax Biology, Chapter 45.6 Community Ecology, Rice University Publishers, Creative Commons Attribution License (by 4.0).

In the present article, we created an HoH to organize the current suggestions on what drives the snowshoe hare–lynx cycle (figure  3 b). The aim of this exercise is to visualize conceptual connections rooted in current population ecological theory and, therefore, to enhance understanding of the complexity of involved processes.

A major hypothesis in population ecology is that populations are regulated by the interaction between biotic and abiotic factors. This regulation can either happen through processes coupled with the density of the focal organisms (density-dependent processes) or through density-independent processes, such as variability in environmental conditions or disturbances. This conceptual distinction can be used to branch out multiple mechanistic hypotheses that specify particular hypothetical mechanisms inducing the observed cycles. For example, potential drivers of the hare–lynx cycles include density-dependent mechanisms linked to bottom-up resource limitation and top-down predation, and density-independent mechanisms related to 10-year sun spot cycles. Figure  3 b also summarizes the kind of experiments that have been performed and how they relate to the corresponding mechanistic hypotheses. For example, food supplementation and fertilization experiments were used to test the resource limitation hypothesis and predator exclusion experiments to test the hypothesis that hare cycles are induced by predator abundance. Figure  3 b therefore highlights why it can be useful to apply very different types of experiments to test one broad overarching hypothesis.

The experiments that have been performed suggest that the predator–prey cycles result from an interaction between predation and food supplies, combined with other modifying factors including social stress, disease, and parasitism (Krebs et al. 2001, 2018). Other experiments can be envisioned to test additional hypotheses, such as snow-removal experiments to test whether an increase in winter snow, induced by changed sun spot activity, causes food shortages and high hare mortality (Krebs et al. 2018).

In this example, alternate hypotheses are visually contrasted, and the different experiments that have been done are linked to the nested structure of possible drivers. This allows one to intuitively grasp the conceptual contribution of evidence stemming from each experiment to the overall explanation of the pattern. In a next step, quantitative results from these experiments could be summarized and displayed as well—for example, applying formal meta-analyses to summarize and display evidence stemming from each type of experiment. This example highlights how hierarchically structuring hypotheses can help to visually organize ideas about which drivers potentially cause a pattern in a complex system (for a comparison, see figure 11 in Krebs et al. 2018 ).

Example 3: the escalation hypothesis of evolution

The escalation hypothesis is a prominent hypothesis in evolutionary biology. In response to the question why species often seem to be well adapted to their biotic environment, it states that enemies are predominant agents of natural selection, and that enduring interactions with enemies brings about long-term evolutionary trends in the morphology, behavior, and distribution of organisms. Escalation, however, is an intrinsically costly process that can proceed only as long as resources are both available and accessible. Since the publication of Vermeij's book Evolution and Escalation in 1987, which is usually considered the start of the respective modern research program, escalation has represented anything but a fixed theory in its structure or content. The growth of escalation studies has led to the development of an increasing number of specific subhypotheses derived from Vermeij's original formulation and therefore to an expansion of the theoretical domain of the escalation hypothesis. Escalation has been supported by some tests but questioned by others.

As in example 2, an HoH can contribute to conceptual clarity by structuring the diversity of escalation ideas that have been proposed (figure 4; Dietl 2015). To create the HoH for the escalation hypothesis, instead of assembling empirical studies that have tested it, Dietl (2015) went through the conceptual exercise of arranging existing escalation ideas on the basis of expert knowledge.

Figure 4. A hierarchy of hypotheses for the escalation hypothesis in evolutionary biology. The broken lines indicate where the hierarchy may be extended.

In its most generalized formulation—that is, “enemies direct evolution”—the escalation hypothesis can be situated at the top of a branch (figure 4) along with other hypotheses positing the importance of interaction-related adaptation, such as Van Valen's (1973) Red Queen hypothesis and hypotheses derived from Thompson's (2005) geographic mosaic theory of coevolution. Vermeij's original (1987) formulation of the hypothesis of escalation is actually composed of two separate testable propositions: “Biological hazards due to competitors and predators have become more severe over the course of time in physically comparable habitats” (p. 49 in Vermeij 1987) and “traits that enhance the competitive and antipredatory capacities of individual organisms have increased in incidence and in degree of expression over the course of time within physically similar habitats” (p. 49 in Vermeij 1987). As is the case with other composite hypotheses, these ideas must be singled out before the overarching idea can be unambiguously tested. This requirement creates a natural branching point in the escalation HoH: the risk and response subhypotheses (figure 4).

Other lower-level hypotheses and aspects of the risk and response subhypotheses are possible. The risk side of the HoH can be further branched into subhypotheses suggesting either that the enemies evolved enhanced traits through time (e.g., allowing for greater effectiveness in prey capture) or that interaction intensity has increased through time (e.g., because of greater abundance or power of predators; figure  4 ). The response side of the HoH also can be further branched into several subhypotheses (all addressed by Vermeij 1987 ). In particular, species’ responses could take the form of a trend toward more rapid exploitation of resources through time, an increased emphasis on traits that enable individuals to combat or interfere with competitors, a trend toward reduced detectability of prey through time, a trend of increased mobility (that is, active escape defense) through time, or an increase in the development of armor (or passive defense) through time. Arranging these different options of how escalation can manifest in boxes connected to a hierarchical structure helps to gain an overview. The depiction of subhypotheses in separate boxes does not indicate that the authors believe there is no interaction possible among these factors. For example, the evolution of enhanced traits may lead to an increase in interaction intensity. The presented HoH should be viewed only as one way to organize theory. It puts emphasis on the upward connections of subhypotheses to more general hypotheses. If the focus is more on interactions among different factors, other graphical and conceptual approaches may be more helpful (e.g., causal networks; for an example, see Gurevitch et al. 2011 ).

The HoH shown in figure  4 can be used as a conceptual backbone for further work in this field. Also, it can be related to existing evidence. This HoH will allow identification of data gaps and an understanding of which branches of the tree receive support by empirical work and therefore should be considered important components of escalation theory.

Strengths and limits of the HoH approach

The HoH approach can help to organize theory, to organize evidence, and to conceptualize and visualize connections of evidence to theory. Previously published examples of HoHs (e.g., Jeschke and Heger 2018a ) and example 1 given above demonstrate its usefulness for organizing evidence, for pointing out important differences among subhypotheses and for conceptually and graphically connecting empirical results to a broader theoretical idea. Such an HoH can make the rationale underlying a specific study explicit and can elucidate the conceptual connection of the study to a concrete theoretical background.

Applying the HoH approach can also help disclose knowledge gaps and biases (Braga et al. 2018 ) and can help reveal which research approaches have been used to assess an overarching idea (for examples, see Jeschke and Heger 2018a ; other methods can be used to reach these aims too—e.g., systematic maps; Pullin et al. 2016 , Collaboration for Environmental Evidence 2018 ). On the basis of such information, future research can be focused on especially promising areas or methods.

Besides such descriptive applications, the HoH approach can be combined with evidence assessment techniques (step 3a in figure  1 ). It can help to analyze the level of evidence for subhypotheses and therefore deliver the basis for discussing their usefulness and range of applicability (table S2; Jeschke and Heger 2018). Recent studies demonstrate that this kind of application can be useful for research outside of ecology as well—for example, in biomedical research (Bartram and Jeschke 2019 ) or even in a distant field like company management research (Wu et al. 2019 ).

We did not detail in the present article how the confrontation of hypotheses with evidence in an HoH can be done, but previous work has shown that this step can deliver the basis for enhancing theory. For example, the HoH-based literature analyses presented in Jeschke and Heger (2018a) showed that several major hypotheses in invasion biology are only weakly supported by evidence. The authors consequently suggested reformulating them (Jeschke and Heger 2018b) and explicitly assessing their range of applicability (Heger and Jeschke 2018a). Because an HoH visually connects data and theory, the approach motivates one to feed empirical results back into theory and, therefore, to use them for improving theory. It is our vision that, in the future, theory development in ecology and evolution could profit greatly from a regular application of the HoH approach. Steps to improve theory can include highlighting strongly supported subhypotheses, pointing out hypotheses with low unifying power and breadth of applicability, shedding light on previously unnoticed connections, and revealing gaps in research.

The examples on the hare–lynx cycles and the escalation hypothesis showed that the HoH approach can also guide theory-driven reasoning in the ecological and evolutionary domains, respectively. That is, the HoH approach allows the reconsideration and reorganization of conceptual ideas without directly referring to data. Major hypotheses or research questions are usually composed of several elements, and above, we suggest how these elements can be exposed and visualized (figure 2b and 2c). In this way, applying the HoH approach can help to enhance conceptual clarity by displaying the different meanings and components of broad concepts. Conceptual clarity is not only useful for avoiding miscommunication or misinterpretation of empirical results; we also expect that it will facilitate theory development by enhancing accurate thinking and argumentation.

In addition, the nested, hierarchical structure invites looking for connections upward: Figure 4 shows the escalation hypothesis as one variant of an even broader hypothesis, positing that “Species interactions direct evolution.” This in turn can enhance the future search for patterns and mechanisms across unconnected study fields. An example can be found in Schulz and colleagues (2019). In that article, the authors used the HoH approach to organize twelve hypotheses, each addressing the roles that antagonists play during species invasions. By grouping the hypotheses in a hierarchically nested way, Schulz and colleagues showed their conceptual relatedness, which had not been demonstrated before.

In the future, the HoH approach could also be used for creating interdisciplinary links. Many research questions are being addressed in several research areas in parallel, using different approaches and tackling different aspects of the overall question. An HoH could reveal such connections. Heger and colleagues (2019) suggested a future application of the HoH approach for organizing and structuring research on the effects of global change on organisms, communities, and ecosystems. Under the broad header of “ecological novelty,” more specific research questions addressed in various disciplines (e.g., climate change research, biodiversity research, urban ecology, restoration ecology, evolutionary ecology, microbial ecology) could be organized and thereby conceptually connected.

Importantly, the HoH approach can easily be combined with existing synthesis tools. For example, as outlined above and in figure 1, a systematic literature review can be used to identify and structure the primary studies from which an HoH is built. Statistical approaches, such as machine learning, can be used to optimize branching with respect to levels of evidence (Ryo et al. 2019), and empirical data structured in an HoH can be analyzed with formal meta-analysis, for example separately for each subhypothesis (Jeschke and Pyšek 2018). In future applications, an HoH could also be used to visualize the results of a research-weaving process, in which systematic mapping is combined with bibliometric approaches (Nakagawa et al. 2019). Furthermore, HoHs can be linked to a larger network. An example is the website https://hi-knowledge.org/invasion-biology/ (Jeschke et al. 2018b), where the conceptual connections of 12 major hypotheses of invasion ecology are displayed as a hierarchical network. We believe that combining the HoH with other knowledge synthesis tools, such as Venn diagrams, ontologies, controlled vocabularies, and systematic maps, can be useful as well and should be explored in the future.
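To illustrate the idea of analyzing evidence separately for each subhypothesis, the following sketch (with invented branch names and study outcomes, not the published data) tallies the fraction of supporting studies per branch of an HoH and flags weakly supported branches:

```python
# Hypothetical per-subhypothesis evidence tally, sketching the idea of
# running a separate synthesis for each branch of an HoH.
# Branch names and study outcomes are invented for illustration.

evidence = {
    "enemy release: individual level": ["support", "no support", "support"],
    "enemy release: population level": ["no support", "no support", "support"],
}

def support_fraction(outcomes):
    """Fraction of studies in a branch whose outcome is 'support'."""
    return sum(o == "support" for o in outcomes) / len(outcomes)

for branch, outcomes in evidence.items():
    frac = support_fraction(outcomes)
    label = "well supported" if frac >= 0.5 else "weakly supported"
    # e.g. "enemy release: individual level: 67% (well supported)"
    print(f"{branch}: {frac:.0%} ({label})")
```

A formal meta-analysis would of course weight studies and estimate effect sizes rather than count votes; the point here is only that each branch gets its own synthesis.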

We emphasize, however, that the HoH method is by no means a panacea for managing complexity. Not all topics of scientific interest can be organized hierarchically, and imposing a hierarchy may even lead to wrong conclusions, thus actually hindering theory development. For example, focusing a conceptual synthesis on one major overarching hypothesis may conceal that other factors, not addressed by this single hypothesis, have a major effect on the underlying processes as well. Evidence assessed with respect to this one hypothesis can, in such cases, only be used to derive partial explanations, whereas a more complete understanding of the underlying processes requires considering interactions with other factors. Furthermore, displaying interacting aspects of a system as discrete entities within a hierarchy can obscure the true dynamics of the system.

In our three examples, the enemy release hypothesis, the hare–lynx cycles, and the escalation hypothesis (figures 3 and 4), the connections between the different levels of the hierarchies do not necessarily depict causal relationships. Nor is the fact that multicausality is ubiquitous in ecological systems covered. It has been argued that approaches directly focusing on explicating causal relationships and multicausality could be more helpful for advancing theory (Scheiner and Fox 2018). The HoH approach is currently primarily a tool for providing conceptual structure. We regard revealing causal networks and multicausality as additional, important aims for further developing the HoH approach. Combining existing approaches for revealing causal relationships (e.g., Eco Evidence, Norris et al. 2012, or CADDIS, www.epa.gov/caddis) with the HoH approach seems to be a promising path forward. A future aim could also be to develop a version of the HoH approach with enhanced formalization, allowing different kinds of relationships among subhypotheses to be disclosed (e.g., by applying semantic web methods). Such a formalized version of the HoH approach could be used for scrutinizing the logical structure of hypotheses (e.g., the compatibility or incompatibility of subhypotheses) and identifying inevitable interdependencies (e.g., the likelihood of co-occurrence of evidence along two branches).
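As a sketch of what such enhanced formalization might look like, the snippet below encodes typed relationships among subhypotheses so that refinements and incompatibilities can be queried automatically. All hypothesis labels and relations are hypothetical:

```python
# Sketch of a more formalized HoH in which relationships among
# subhypotheses carry explicit types, so that the logical structure
# (e.g., incompatibilities between branches) can be checked
# automatically. Labels and relations are hypothetical illustrations.

relations = [
    ("H1a", "refines", "H1"),        # H1a is a subhypothesis of H1
    ("H1b", "refines", "H1"),        # H1b is a sibling subhypothesis
    ("H1a", "incompatible", "H1b"),  # the two siblings cannot both hold
]

def subhypotheses(rels, parent):
    """Return the direct refinements of a hypothesis, sorted by label."""
    return sorted(a for a, r, b in rels if r == "refines" and b == parent)

def incompatible_pairs(rels):
    """Return all pairs of hypotheses flagged as logically incompatible."""
    return {(a, b) for a, r, b in rels if r == "incompatible"}

print(subhypotheses(relations, "H1"))  # → ['H1a', 'H1b']
print(incompatible_pairs(relations))   # → {('H1a', 'H1b')}
```

A semantic-web realization would express the same triples in RDF/OWL; the plain-tuple form above is only meant to show that typed relations make the logical structure machine-checkable.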

The guidelines on how to build an HoH presented above and in figures 1 and 2 will help to increase the reproducibility of the process. Full reproducibility is unlikely to be reached for most applications, because researchers need to make individual choices. For example, step 1 involves creative reasoning and may therefore lead to differing results if repeated by different researchers. The process of creating an HoH can therefore yield a whole set of outcomes; usually, there will not be one single HoH that is the “correct” answer to the research question. Certain steps of the process can be automated using artificial intelligence, for example with decision-tree algorithms, to enhance reproducibility (Ryo et al. 2019). But even if such techniques are applied, the choice of which information is fed into the algorithms is made by a researcher. We suggest that this ambiguity should not be considered a flaw of the method but, instead, an important and necessary concession to creativity, offering the chance to closely match the outcome of the process to the concrete requirements of the research project. It should also be noted that other approaches to knowledge synthesis do not necessarily yield reproducible results either, not even formal meta-analysis (de Vrieze 2018).

Conclusions

The current emphasis on statistical approaches for synthesizing evidence, with the purpose of facilitating decision making in environmental management and nature conservation, is undoubtedly important and necessary. However, knowledge and understanding of ecological systems would profit greatly if results from empirical studies were, in addition and on a regular basis, used to improve theory. With this contribution, we present one possibility for creating close links between evidence and theory, and we hope to stimulate future studies that feed results from case studies back into theory. Our goal is to motivate more conceptual work aimed at refining major hypotheses on how complex systems work. Above, we provided examples of how to develop a nuanced representation of major hypotheses, focusing on their mechanistic components.

Ecological systems are highly complex, and the theories describing them therefore typically need to incorporate complexity. Nested, hierarchical structures, in our view, represent one possible path forward, because they allow zooming in and out and, therefore, moving between different levels of complexity. We propose that alternative tools, such as causal networks, should be further developed for application in ecology and evolution as well. Combining complementary conceptual tools would, we believe, be the most promising route to an efficient enhancement of knowledge and understanding in ecology.

Supplementary Material

biaa130_supplemental_file

Acknowledgments

The ideas presented in this article were developed during the workshop “The hierarchy-of-hypotheses approach: Exploring its potential for structuring and analyzing theory, research, and evidence across disciplines,” 19–21 July 2017, and refined during the workshop “Research synthesis based on the hierarchy-of-hypotheses approach,” 10–12 October 2018, both in Hanover, Germany. We thank William Bausman, Adam Clark, Francesco DePrentis, Carsten Dormann, Alexandra Erfmeier, Gordon Fox, Jeremy Fox, James Griesemer, Volker Grimm, Thierry Hanser, Frank Havemann, Yuval Itescu, Marie Kaiser, Julia Koricheva, Peter Kraker, Ingolf Kühn, Andrew Latimer, Chunlong Liu, Bertram Ludäscher, Klaus Mainzer, Elijah Millgram, Bob O'Hara, Masahiro Ryo, Raphael Scholl, Gerhard Schurz, Philip Stephens, Koen van Benthem, and Meike Wittman for participating in our lively discussions, and Alkistis Elliot-Graves and Birgitta König-Ries for help with refining terminology. Furthermore, we thank Sam Scheiner and five anonymous reviewers for comments that helped to improve the manuscript. The workshops were funded by the Volkswagen Foundation (Az 92,807 and 94,246). TH, CAA, ME, PG, ADS, and JMJ received funding from the German Federal Ministry of Education and Research within the Collaborative Project “Bridging in Biodiversity Science” (grant no. 01LC1501A). ME additionally received funding from the Foundation of German Business, JMJ from the Deutsche Forschungsgemeinschaft (grants no. JE 288/9–1 and JE 288/9–2), and IB from the German Federal Ministry of Education and Research (grant no. FKZ 01GP1710). CJL was supported by a grant from the Natural Sciences and Engineering Research Council of Canada and in-kind synthesis support from the US National Center for Ecological Analysis and Synthesis. LGA was supported by the Spanish Ministry of Science, Innovation, and Universities through project no. 
CGL2014–56,739-R, and RRB received funding from the Brazilian National Council for Scientific and Technological Development (process no. 152,289/2018–6).

Author Biographical

Tina Heger ( [email protected] ) is affiliated with the Department of Biodiversity Research and Systematic Botany and Alexis D. Synodinos is affiliated with the Department of Plant Ecology and Nature Conservation at the University of Potsdam, in Potsdam, Germany. Tina Heger and Kurt Jax are affiliated with the Department of Restoration Ecology at the Technical University of Munich, in Freising, Germany. Tina Heger, Carlos A. Aguilar-Trigueros, Martin Enders, Pierre Gras, Jonathan M. Jeschke, Sophie Lokatis, and Alexis Synodinos are affiliated with the Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), in Berlin, Germany. Carlos Aguilar, Isabelle Bartram, Martin Enders, Jonathan M. Jeschke, and Sophie Lokatis are affiliated with the Institute of Biology at Freie Universität Berlin, in Berlin, Germany. Martin Enders, Jonathan M. Jeschke, and Sophie Lokatis are also affiliated with the Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB), in Berlin, Germany. Pierre Gras is also affiliated with the Department of Ecological Dynamics at the Leibniz Institute for Zoo and Wildlife Research (IZW), also in Berlin, Germany. Isabelle Bartram is affiliated with the Institute of Sociology at the University of Freiburg, in Freiburg, Germany. Kurt Jax is also affiliated with the Department of Conservation Biology at the Helmholtz Centre for Environmental Research—UFZ, in Leipzig, Germany. Raul R. Braga is affiliated with the Universidade Federal do Paraná, Laboratório de Ecologia e Conservação, in Curitiba, Brazil. Gregory P. Dietl has two affiliations: the Paleontological Research Institution and the Department of Earth and Atmospheric Sciences at Cornell University, in Ithaca, New York. David J. Gibson is affiliated with the School of Biological Sciences at Southern Illinois University Carbondale, in Carbondale, Illinois.
Lorena Gómez-Aparicio's affiliation is the Instituto de Recursos Naturales y Agrobiología de Sevilla, CSIC, LINCGlobal, in Sevilla, Spain. Christopher J. Lortie is affiliated with the Department of Biology at York University, in York, Canada, as well as with the National Center for Ecological Analysis and Synthesis, at the University of California Santa Barbara, in Santa Barbara, California. Anne-Christine Mupepele has two affiliations as well: the Chair of Nature Conservation and Landscape Ecology at the University of Freiburg, in Freiburg, and the Senckenberg Biodiversity and Climate Research Centre, in Frankfurt am Main, both in Germany. Stefan Schindler works at the Environment Agency Austria and the University of Vienna's Division of Conservation Biology, Vegetation, and Landscape Ecology, in Vienna, Austria, and his third affiliation is with Community Ecology and Conservation, at the Czech University of Life Sciences Prague, in Prague, Czech Republic. Finally, Jostein Starrfelt is affiliated with the University of Oslo's Centre for Ecological and Evolutionary Synthesis and with the Norwegian Scientific Committee for Food and Environment, Norwegian Institute of Public Health, both in Oslo, Norway. Alexis D. Synodinos is affiliated with the Centre for Biodiversity Theory and Modelling, Theoretical and Experimental Ecology Station, CNRS, in Moulis, France.

Contributor Information

Tina Heger, Department of Biodiversity Research and Systematic Botany, University of Potsdam, Potsdam, Germany. Department of Restoration Ecology, Technical University of Munich, Freising, Germany. Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany.

Carlos A Aguilar-Trigueros, Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany. Institute of Biology, Freie Universität Berlin, Berlin, Germany.

Isabelle Bartram, Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany. Institute of Biology, Freie Universität Berlin, Berlin, Germany. Institute of Sociology, University of Freiburg, Freiburg, Germany.

Raul Rennó Braga, Universidade Federal do Paraná, Laboratório de Ecologia e Conservação, Curitiba, Brazil.

Gregory P Dietl, Paleontological Research Institution and the Department of Earth and Atmospheric Sciences at Cornell University, Ithaca, New York.

Martin Enders, Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany. Institute of Biology, Freie Universität Berlin, Berlin, Germany. Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB), Berlin, Germany.

David J Gibson, School of Biological Sciences, Southern Illinois University Carbondale, Carbondale, Illinois.

Lorena Gómez-Aparicio, Instituto de Recursos Naturales y Agrobiología de Sevilla, CSIC, LINCGlobal, Sevilla, Spain.

Pierre Gras, Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany. Department of Ecological Dynamics, Leibniz Institute for Zoo and Wildlife Research (IZW), Berlin, Germany.

Kurt Jax, Department of Restoration Ecology, Technical University of Munich, Freising, Germany. Department of Conservation Biology, Helmholtz Centre for Environmental Research—UFZ, Leipzig, Germany.

Sophie Lokatis, Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany. Institute of Biology, Freie Universität Berlin, Berlin, Germany. Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB), Berlin, Germany.

Christopher J Lortie, Department of Biology, York University, York, Canada. National Center for Ecological Analysis and Synthesis, University of California Santa Barbara, Santa Barbara, California.

Anne-Christine Mupepele, Chair of Nature Conservation and Landscape Ecology, University of Freiburg, Freiburg, Germany. Senckenberg Biodiversity and Climate Research Centre, Frankfurt am Main, Germany.

Stefan Schindler, Environment Agency Austria, Vienna, Austria. Division of Conservation Biology, Vegetation, and Landscape Ecology, University of Vienna, Vienna, Austria. Community Ecology and Conservation, Czech University of Life Sciences Prague, Prague, Czech Republic.

Jostein Starrfelt, Centre for Ecological and Evolutionary Synthesis, University of Oslo, Oslo, Norway. Norwegian Scientific Committee for Food and Environment, Norwegian Institute of Public Health, Oslo, Norway.

Alexis D Synodinos, Department of Plant Ecology and Nature Conservation, University of Potsdam, Potsdam, Germany. Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany. Centre for Biodiversity Theory and Modelling, Theoretical and Experimental Ecology Station, CNRS, Moulis, France.

Jonathan M Jeschke, Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), Berlin, Germany. Institute of Biology, Freie Universität Berlin, Berlin, Germany. Leibniz Institute of Freshwater Ecology and Inland Fisheries (IGB), Berlin, Germany.

  • Bartram I, Jeschke JM. 2019. Do cancer stem cells exist? A pilot study combining a systematic review with the hierarchy-of-hypotheses approach . PLOS ONE 14 : e0225898. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Braga RR, Gómez-Aparicio L, Heger T, Vitule JRS, Jeschke JM. 2018. Structuring evidence for invasional meltdown: Broad support but with biases and gaps . Biological Invasions 20 : 923–936. [ Google Scholar ]
  • Collaboration for Environmental Evidence . 2018. Guidelines and Standards for Evidence Synthesis in Environmental Management, version 5.0 . Collaboration for Environmental Evidence . www.environmentalevidence.org/information-for-authors . [ Google Scholar ]
  • Cook CN, Nichols SJ, Webb JA, Fuller RA, Richards RM. 2017. Simplifying the selection of evidence synthesis methods to inform environmental decisions: A guide for decision makers and scientists . Biological Conservation 213 : 135–145. [ Google Scholar ]
  • de Vrieze J. 2018. The metawars . Science 361 : 1184–1188. [ PubMed ] [ Google Scholar ]
  • Dicks LV, et al. 2017. Knowledge Synthesis for Environmental Decisions: An Evaluation of Existing Methods, and Guidance for Their Selection, Use, and Development. EKLIPSE Project. [ Google Scholar ]
  • Diefenderfer HL, Johnson GE, Thom RM, Bunenau KE, Weitkamp LA, Woodley CM, Borde AB, Kropp RK. 2016. Evidence-based evaluation of the cumulative effects of ecosystem restoration . Ecosphere 7 : e01242. [ Google Scholar ]
  • Dietl GP. 2015. Evaluating the strength of escalation as a research program . Geological Society of America Abstracts with Programs 47 : 427. [ Google Scholar ]
  • Elton C, Nicholson M. 1942. The ten-year cycle in numbers of lynx in Canada . Journal of Animal Ecology 11 : 215–244. [ Google Scholar ]
  • Enders M, Hütt M-T, Jeschke JM. 2018. Drawing a map of invasion biology based on a network of hypotheses . Ecosphere 9 : e02146. [ Google Scholar ]
  • Enders M, et al. 2020. A conceptual map of invasion biology: Integrating hypotheses into a consensus network. Global Ecology and Biogeography 29: 978–999. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Farji-Brener AG, Amador-Vargas S. 2018. Hierarchy of hypotheses or hierarchy of predictions? Clarifying key concepts in ecological research . Pages 19–22 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Giere RN, Bickle J, Mauldin R. 2005. Understanding Scientific Reasoning , 5th ed. Wadsworth Cengage Learning. [ Google Scholar ]
  • Grace J, Anderson T, Olff H, Scheiner S. 2010. On the specification of structural equation models for ecological systems . Ecological Monographs 80 : 67–87. [ Google Scholar ]
  • Griesemer JR. 2013. Formalization and the meaning of “theory” in the inexact biological sciences . Biological Theory 7 : 298–310. [ Google Scholar ]
  • Griesemer J. 2018. Mapping theoretical and evidential landscapes in ecological science: Levin's virtue trade-off and the hierarchy-of-hypotheses approach . Pages 23–29 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Gurevitch J, Fox GA, Wardle GM, Inderjit Taub D. 2011. Emergent insights from the synthesis of conceptual frameworks for biological invasions . Ecology Letters 14 : 407–418. [ PubMed ] [ Google Scholar ]
  • Haddaway NR, Macura B, Whaley P, Pullin AS. 2018. ROSES: Reporting standards for systematic evidence syntheses: Pro forma, flow-diagram and descriptive summary of the plan and conduct of environmental systematic reviews and systematic maps . Environmental Evidence 7 : 7. [ Google Scholar ]
  • Heger T, Jeschke JM. 2014. The enemy release hypothesis as a hierarchy of hypotheses . Oikos 123 : 741–750. [ Google Scholar ]
  • Heger T, Jeschke JM. 2018a. Conclusions and outlook . Pages 167–172 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Heger T, Jeschke JM. 2018b. Enemy release hypothesis . Pages 92–102 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Heger T, Jeschke JM. 2018c. The hierarchy-of-hypotheses approach updated: A toolbox for structuring and analysing theory, research, and evidence . Pages 38–48 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Heger T, et al. 2013. Conceptual frameworks and methods for advancing invasion ecology . Ambio 42 : 527–540. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Heger T, et al. 2019. Towards an integrative, eco-evolutionary understanding of ecological novelty: Studying and communicating interlinked effects of global change . BioScience 69 : 888–899. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Howick J. 2011. The Philosophy of Evidence-based Medicine . Wiley-Blackwell. [ Google Scholar ]
  • Jeltsch F, et al. 2013. How can we bring together empiricists and modelers in functional biodiversity research? Basic and Applied Ecology 14: 93–101. [ Google Scholar ]
  • Jeschke JM, Heger T, eds. 2018a. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Jeschke JM, Heger T, eds. 2018b. Synthesis. Pages 157–166 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence. CAB International. [ Google Scholar ]
  • Jeschke JM, Pyšek P. 2018. Tens rule . Pages 124–132 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Jeschke JM, Gómez Aparicio L, Haider S, Heger T, Lortie CJ, Pyšek P, Strayer DL. 2012. Support for major hypotheses in invasion biology is uneven and declining . NeoBiota 14 : 1–20. [ Google Scholar ]
  • Jeschke JM, Debille S, Lortie CJ. 2018a. Biotic resistance and island susceptibility hypotheses . Pages 60–70 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Jeschke JM, Enders M, Bagni M, Jeschke P, Zimmermann M, Heger T. 2018b. Hi Knowledge. Hi-Knowledge.org. www.hi-knowledge.org/invasion-biology [ Google Scholar ]
  • Jeschke JM, Lokatis S, Bartram I, Tockner K. 2019. Knowledge in the dark: Scientific challenges and ways forward . FACETS 4 : 1–19. [ Google Scholar ]
  • Keane RM, Crawley MJ. 2002. Exotic plant invasions and the enemy release hypothesis . Trends in Ecology and Evolution 17 : 164–170. [ Google Scholar ]
  • Koricheva J, Gurevitch J, Mengersen K, eds. 2013. Handbook of Meta-analysis in Ecology and Evolution . Princeton University Press. [ Google Scholar ]
  • Krebs CJ, Boonstra R, Boutin S. 2018. Using experimentation to understand the 10-year snowshoe hare cycle in the boreal forest of North America . Journal of Animal Ecology 87 : 87–100. [ PubMed ] [ Google Scholar ]
  • Krebs CJ, Boonstra R, Boutin S, Sinclair ARE. 2001. What drives the 10-year cycle of snowshoe hares? BioScience 51: 25–35. [ Google Scholar ]
  • Lortie CJ. 2014. Formalized synthesis opportunities for ecology: Systematic reviews and meta-analyses . Oikos 123 : 897–902. [ Google Scholar ]
  • MacLulich DA. 1937. Fluctuation in numbers of the varying hare (Lepus americanus). University of Toronto Studies, Biological Series 43: 1–136. [ Google Scholar ]
  • Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA, PRISMA-P Group . 2015. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement . Systematic Reviews 4 : 1. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Mupepele A-C, Walsh JC, Sutherland WJ, Dormann CF. 2016. An evidence assessment tool for ecosystem services and conservation studies . Ecological Applications 26 : 1295–1301. [ PubMed ] [ Google Scholar ]
  • Nakagawa S, Samarasinghe G, Haddaway NR, Westgate MJ, O'Dea RE, Noble DWA, Lagisz M. 2019. Research weaving: Visualizing the future of research synthesis . Trends in Ecology and Evolution 34 : 224–238. [ PubMed ] [ Google Scholar ]
  • Nesshöver C, et al. 2016. The Network of Knowledge approach: Improving the science and society dialogue on biodiversity and ecosystem services in Europe. Biodiversity and Conservation 25: 1215–1233. [ Google Scholar ]
  • Norris RH, Webb JA, Nichols SJ, Stewardson MJ, Harrison ET. 2012. Analyzing cause and effect in environmental assessments: Using weighted evidence from the literature . Freshwater Science 31 : 5–21. [ Google Scholar ]
  • Oli MK, Krebs CJ, Kenney AJ, Boonstra R, Boutin S, Hines JE. 2020. Demography of snowshoe hare population cycles . Ecology 101 : e02969. doi:10.1002/ecy.2969. [ PubMed ] [ Google Scholar ]
  • Pickett STA, Kolasa J, Jones CG. 2007. Ecological Understanding: The Nature of Theory and the Theory of Nature , 2nd ed. Academic Press. [ Google Scholar ]
  • Platt JR. 1964. Strong inference . Science 146 : 347–353. [ PubMed ] [ Google Scholar ]
  • Pullin A, et al. 2016. Selecting appropriate methods of knowledge synthesis to inform biodiversity policy. Biodiversity and Conservation 25: 1285–1300. [ Google Scholar ]
  • Ryo M, Jeschke JM, Rillig MC, Heger T. 2019. Machine learning with the hierarchy-of-hypotheses (HoH) approach discovers novel pattern in studies on biological invasions. Research Synthesis Methods 11: 66–73. doi:10.1002/jrsm.1363. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Scheiner SM. 2013. The ecological literature, an idea-free distribution . Ecology Letters 16 : 1421–1423. [ PubMed ] [ Google Scholar ]
  • Scheiner SM, Fox GA. 2018. A hierarchy of hypotheses or a network of models . Pages 30–37 in Jeschke JM, Heger T, eds. Invasion Biology: Hypotheses and Evidence . CAB International. [ Google Scholar ]
  • Schulz AN, Lucardi RD, Marsico TD. 2019. Successful invasions and failed biocontrol: The role of antagonistic species interactions . BioScience 69 : 711–724. [ Google Scholar ]
  • Silvertown JW, Charlesworth D. 2001. Introduction to Plant Population Biology . Blackwell Scientific. [ Google Scholar ]
  • Stenseth NC, Falck W, Bjørnstad ON, Krebs CJ. 1997. Population regulation in snowshoe hare and Canadian lynx: Asymmetric food web configurations between hare and lynx . Proceedings of the National Academy of Sciences of the United States of America 94 : 5147–5152. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sutherland WJ. 2006. Predicting the ecological consequences of environmental change: A review of the methods . Journal of Applied Ecology 43 : 599–616. [ Google Scholar ]
  • Sutherland WJ, et al. 2013. . Identification of 100 fundamental ecological questions . Journal of Ecology 101 : 58–67. [ Google Scholar ]
  • Thompson JN. 2005. The Geographic Mosaic of Coevolution . University of Chicago Press. [ Google Scholar ]
  • Van Valen L. 1973. A new evolutionary law . Evolutionary Theory 1 : 1–30. [ Google Scholar ]
  • Vermeij GJ. 1987. Evolution and Escalation: An Ecological History of Life . Princeton University Press. [ Google Scholar ]

Cinémadoc

Audiovisual paratext and annotation: the case of the “parcours découverte” (discovery pathway) on Tenk.ca – May 30, 2024

On Thursday, May 30, 2024, at 11:30 a.m., as part of the international conference Des corpus audiovisuels en Humanités. Méthodes, expériences, résultats (May 30–31, 2024), organized by the team of the Huma-Num consortium CANEVAS (Michael Bourgatte and Laurent Tessier, dir.) at the Maison des Sciences de l'Homme – Paris Nord, I have the pleasure of giving a joint presentation with Marie Lavorel (Concordia University) on the following topic: Paratexte audiovisuel et annotation : le cas du “parcours découverte” de Tenk.ca.

Abstract of the presentation:

This two-voice presentation offers feedback from roughly a year of participant observation within a research-creation project. Funded by the Social Sciences and Humanities Research Council of Canada (2023–2024), the project centers on the design of the “parcours découverte” (discovery pathway) of Tenk.ca, a streaming platform for creative documentary. This pathway, aimed at the platform's new subscribers, gives access not only to four films and to textual paratexts (metadata, programmers' notes of intent) but also to four audiovisual capsules of about five minutes each, produced specifically for the project. Concretely, we will focus on two capsules whose production we followed particularly closely. Marie Lavorel will discuss the one linked to Le Bouton de nacre (P. Guzman, 2015), and Rémy Besson the one associated with the short film No Crying at the Dinner Table (C. Nguyen, 2019). We will highlight their differences and commonalities. The two capsules take distinct forms to give voice to a Tenk.ca programmer (N. Décarie-Daigneault and C. Valade), but also to other interlocutors: for No Crying…, the director presents her own documentary, whereas Guzman's film is discussed by Beatriz Herrera, a Canadian and Chilean multimedia artist. Beyond this case study, we will explain how it fits into a broader trend, as similar content is currently being developed on other platforms (for example, Mubi and The Criterion Channel). We will also underline the main original features of this research-creation project: the principle that paratexts can contribute to film education, and that their media format can be similar to that of the films themselves. Finally, we will show that these various paratexts stand in a relation of audiovisual annotation to the documentary films placed at the center of the “parcours découverte.”

Many other presentations promise to be fascinating; the full version of the conference program is available for consultation.

I should note that I am also a member of the scientific committee of this conference and of the CANEVAS consortium.

Cite this post: Rémy Besson (2024, May 27). Paratexte audiovisuel et annotation : le cas du “parcours découverte” de Tenk.ca – 30 mai 2024. Cinémadoc. Retrieved May 28, 2024, from https://cinemadoc.hypotheses.org/5483




  7. When are hypotheses useful in ecology and evolution?

    Similarly, using "statistical hypotheses" and "null hypothesis testing" is not the same as developing mechanistic research hypotheses (Romesburg, 1981; Sells et al., 2018). " Not enough is known about my system to formulate hypotheses ". This is perhaps the most common defense against needing hypotheses (Golub, 2010). The argument ...

  8. What is a Hypothesis

    Examples of Hypothesis. Here are a few examples of hypotheses in different fields: Psychology: "Increased exposure to violent video games leads to increased aggressive behavior in adolescents.". Biology: "Higher levels of carbon dioxide in the atmosphere will lead to increased plant growth.".

  9. 2.4: Developing a Hypothesis

    Theories and Hypotheses. Before describing how to develop a hypothesis it is important to distinguish between a theory and a hypothesis. A theory is a coherent explanation or interpretation of one or more phenomena. Although theories can take a variety of forms, one thing they have in common is that they go beyond the phenomena they explain by including variables, structures, processes ...

  10. Scientific hypothesis

    The Royal Society - On the scope of scientific hypotheses (Apr. 24, 2024) scientific hypothesis, an idea that proposes a tentative explanation about a phenomenon or a narrow set of phenomena observed in the natural world. The two primary features of a scientific hypothesis are falsifiability and testability, which are reflected in an "If ...

  11. Formulating Hypotheses for Different Study Designs

    Formulating Hypotheses for Different Study Designs. Generating a testable working hypothesis is the first step towards conducting original research. Such research may prove or disprove the proposed hypothesis. Case reports, case series, online surveys and other observational studies, clinical trials, and narrative reviews help to generate ...

  12. Hypothesis Testing

    It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories. There are 5 main steps in hypothesis testing: State your research hypothesis as a null hypothesis and alternate hypothesis (H o) and (H a or H 1). Collect data in a way designed to test the hypothesis. Perform an appropriate ...

  13. The Research Hypothesis: Role and Construction

    Abstract. A hypothesis is a logical construct, interposed between a problem and its solution, which represents a proposed answer to a research question. It gives direction to the investigator's thinking about the problem and, therefore, facilitates a solution. There are three primary modes of inference by which hypotheses are developed ...

  14. PDF DEVELOPING HYPOTHESIS AND RESEARCH QUESTIONS

    "A hypothesis can be defined as a tentative explanation of the research problem, a possible outcome of the research, or an educated guess about the research outcome." (Sarantakos, 1993: 1991) "Hypotheses are always in declarative sentence form, an they relate, either generally or specifically , variables to variables."

  15. Hypothesis: Definition, Examples, and Types

    A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process. Consider a study designed to examine the relationship between sleep deprivation and test ...

  16. Developing a Hypothesis

    Hypotheses are often but not always derived from theories. So a hypothesis is often a prediction based on a theory but some hypotheses are a-theoretical and only after a set of observations have been made, is a theory developed. ... (Schwarz et al., 1991) [2]. Both theories held that such judgments are based on relevant examples that people ...

  17. 4.4: Hypothesis Testing

    Testing Hypotheses using Confidence Intervals. We can start the evaluation of the hypothesis setup by comparing 2006 and 2012 run times using a point estimate from the 2012 sample: ˉx12 = 95.61 minutes. This estimate suggests the average time is actually longer than the 2006 time, 93.29 minutes.

  18. Définition de hypothèse

    Étymologie de « hypothèse » Du latin hypothesis (« argument »), emprunté au grec ancien ὑπόθεσις, hupóthesis (« action de mettre dessous »). Le mot grec est une combinaison de ὑπὸ, hupo (« sous »), et θέσις, thésis (« thèse »). Usage du mot « hypothèse » Évolution historique de l'usage du mot « hypothèse » depuis 1800

  19. PDF LES HYPOTHÈSES

    LES HYPOTHÈSES - EXERCICES - page 5 /5 CC BY-NC-SA 4.0 - French Grammar Games for Grammar Geeks Exercice 3: Conjuguez les verbes au temps et au mode qui convient 1. Si tu mangeais moins, tu grossirais (grossir) moins vite 2. Si elle n'était pas venue, nous ne serions pas allé(e)s (ne pas aller) au cinéma 3. Si je peux (pouvoir), je partirai à 10 heures

  20. Research questions, hypotheses and objectives

    The development of the research question, including a supportive hypothesis and objectives, is a necessary key step in producing clinically relevant results to be used in evidence-based practice. A well-defined and specific research question is more likely to help guide us in making decisions about study design and population and subsequently ...

  21. Medical Hypotheses

    Medical Hypotheses is a forum for ideas in medicine and related biomedical sciences. It will publish interesting and important theoretical papers that foster the diversity and debate upon which the scientific process thrives. The Aims and Scope of Medical Hypotheses are no different now from what was proposed by the founder of the journal, the ...

  22. Hypotheses non fingo

    Hypotheses non fingo (Latin for "I frame no hypotheses", or "I contrive no hypotheses") is a phrase used by Isaac Newton in an essay, "General Scholium", which was appended to the second (1713) edition of the Principia. Original remark. A 1999 translation of the Principia presents Newton's remark as follows:

  23. The Hierarchy-of-Hypotheses Approach: A Synthesis Method for Enhancing

    Hypothesis. An assumption that (a) is based on a formalized or nonformalized theoretical model of the real world and (b) can deliver one or more testable predictions (after Giere et al. 2005). Mechanistic hypothesis. Narrowed version of an overarching hypothesis, resulting from specialization or decomposition of the unspecified hypothesis with ...

  24. Paratexte audiovisuel et annotation : le cas du "parcours découverte

    Jeudi 30 mai 2024 à 11h30, dans le cadre du colloque international Des corpus audiovisuels en Humanités. Méthodes, expériences, résultats (30-31 mai 2024) organisé par l'équipe du Consortium Huma-num CANEVAS (Michael Bourgatte et Laurent Tessier, dir.) à la Maison des Sciences de l'Homme - Paris Nord, j'ai le plaisir de présenter un communication commune avec … Continuer la ...