U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Beilstein J Org Chem

Logo of beilstein

Conjecture and hypothesis: The importance of reality checks

David deamer.

1 Department of Biomolecular Engineering, University of California, Santa Cruz CA 95060, USA

In origins of life research, it is important to understand the difference between conjecture and hypothesis. This commentary explores the difference and recommends alternative hypotheses as a way to advance our understanding of how life can begin on the Earth and other habitable planets. As an example of how this approach can be used, two conditions have been proposed for sites conducive to the origin of life: hydrothermal vents in salty seawater, and fresh water hydrothermal fields associated with volcanic landmasses. These are considered as alternative hypotheses and the accumulating weight of evidence for each site is described and analyzed.

Introduction

The word conjecture is defined as an opinion based on incomplete information. The word can be taken to be slightly pejorative, but given that conjecture also involves imagination and creative effort, I will argue here that in scientific research there is a natural progression from conjecture to hypothesis to consensus. Conjecture is an idea, hypothesis is a conjecture that can be tested by experiment or observation, and consensus emerges when other interested colleagues agree that evidence supports a hypothesis that has explanatory value. This approach is clearly relevant to origins of life research which is still at a stage where multiple conjectures abound yet vast gaps in knowledge and understanding remain, mostly due to lack of significant funding for research in this area. The result is that only a few dozen laboratories are supported in the global scientific community, in contrast to thousands of scientists investigating health related research or chemistry and physics having applications in industry. Another reason is that the origin of life is best understood in interdisciplinary terms involving knowledge of astronomy, planetary science, biophysics, chemistry and biochemistry, molecular biology and evolution. Relatively few scientists have a taste for research that demands such broad knowledge to make significant advances. The historical development of origins research has been well described by Iris Fry [ 1 ] and Antonio Lazcano [ 2 ].

Most scientists agree that hypothesis testing is an essential feature of research, and a typical proposal to a funding agency usually has a clearly stated hypothesis. However, there is a very human tendency for investigators to prefer positive results that support their idea. Karl Popper [ 3 ] had some good advice in this regard: Don't try to prove an idea is right. Instead, try to falsify it. Those rare ideas that cannot be falsified then emerge from the majority of ideas that fail the testing process. Günther Wächtershäuser [ 4 ] recently commented on how Popper's advice can be applied in origins of life research.

Hypothesis testing is an essential feature of good research, but its value can be increased by one additional step which was first clearly stated in 1964 by John Platt [ 5 ]. The title of Platt's article was Strong Inference , which he defines in the following way:

“Strong inference consists of applying the following steps to every problem in science, formally and explicitly and regularly.

  • Devising alternative hypotheses.
  • Devising crucial experiments ... with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses.
  • Carrying out the experiment so as to get a clean result.”

Research approaches that incorporate alternative hypotheses avoid the tendency to prefer positive results, because both positive and negative results have value in inferring which of the two alternatives is better supported by accumulating evidence. The aim of this commentary is to describe how alternative hypotheses can be applied to understanding the origin of life, with the focus on a simple question: Did life begin in salty water in a marine environment, or did life begin in fresh water in a terrestrial setting? Although the question seems simple, there are significant ramifications of possible answers for life detection missions to other planetary objects in the solar system.

We can begin with two conjectures and then attempt to turn them into alternative hypotheses. The first conjecture follows from the discovery of hydrothermal vents and observations related to their properties:

  • All life requires liquid water
  • Most of the water on Earth is in the ocean.
  • Hydrothermal vents emerging from the ocean floor are sources of chemical energy.
  • Populations of chemotrophic microbial life thrive in hydrothermal vents.

Conjecture: life originated in hydrothermal vents and later adapted to fresh water on volcanic and continental land masses. In the absence of alternatives this idea has been accepted as a reasonable suggestion.

Is there an alternative? Here is another list of facts:

  • A small fraction of the Earth's water is distilled from seawater and precipitates as fresh water on volcanic land masses.
  • The water accumulates in hydrothermal fields that undergo cycles of evaporation and refilling.
  • During evaporation, dilute solutes in the water become concentrated films on mineral surfaces.
  • If the solutes can undergo chemical or physical interactions, they will do so in the concentrated films.
  • The products will accumulate in the pools when water returns either in the form of precipitation or as fluctuations in water levels related to hot springs or geyser activity.

Conjecture: life originated in fresh water hydrothermal fields associated with volcanic land masses, then adapted to the salty seawater of the early ocean.

The current paradigm: Life began in the ocean in salty seawater

Now we can provide a few more details about two geophysical conditions that have been proposed as alternative sites conducive for the origin of life. Hydrothermal vents were discovered in 1977 [ 6 ] and were soon proposed to be a likely site for life to begin [ 7 – 10 ]. Hydrothermal vents referred to as black smokers are produced when seawater comes into contact with rocks heated by magma underlying mid-ocean ridges. The hot water dissolves mineral components of the rock and then emerges through the ocean floor where the mineral solutes come out of solution to form characteristic chimneys that emit a black smoke of precipitated metal sulfide particles.

A second type of hydrothermal vent was discovered in 2001 [ 11 ] that does not depend on volcanism. Instead they form when seawater reacts with mineral components of peridotite in the sea floor, a process called serpentinization. The reaction produces hydrogen and a strongly alkaline (pH 9–11) hot medium saturated with carbonate. When the warm fluid contacts cooler seawater, calcium carbonate and other minerals precipitate to form white chimney structures.

The hydrogen gas dissolved in the alkaline vent fluid is a potential source of reducing power. Certain microorganisms already use hydrogen for this purpose, so the hydrothermal vent hypothesis proposes that on the prebiotic Earth hydrogen could potentially reduce carbon dioxide to organic compounds that are then incorporated into a primitive metabolism [ 12 ]. Lane and Martin [ 13 ] noted that the alkaline vent minerals have a porous structure that could serve as cellular compartments with mineral membranes as boundaries. The assumption that the membranes may separate a strongly alkaline medium from mildly acidic Hadean sea water across suggested that a primitive version of chemiosmotic energy transduction might be possible to supply chemical energy for primitive forms of life. Weiss et al. [ 14 ] used genomic analysis of vent microorganisms to test the possibility that the last universal common ancestor (LUCA) may have originated in hydrothermal vents.

The iron-sulfur chemistry proposed for hydrothermal vents was tested by Huber and Wächtershäuser [ 15 – 16 ] who simulated vent conditions with boiling mixtures of iron and nickel sulfides to which various reactants were added. They reported that acetic acid, amino acids and peptide bonds could be synthesized under these conditions, and claimed that “The results support the theory of a chemoautotrophic origin of life with a CO-driven, (Fe,Ni)S-dependent primordial metabolism.”

More recently Herschy et al. [ 17 ] simulated hydrothermal vent conditions by injecting a solution of potassium phosphate, sodium silicate and sodium sulfide (pH 11) into a second solution of ferrous chloride, sodium bicarbonate and nickel chloride (pH 5). The aim was to determine whether carbon dioxide (present as 10 mM sodium bicarbonate) can be reduced under these conditions, and they were able to detect ≈50 μM formic acid. In a similar laboratory simulation of an alkaline hydrothermal vent, Burcar et al. [ 18 ] used mass spectrometry to detect a small yield of dimers produced from adenosine monophosphate circulating in the medium.

An alternative hypothesis: Life began in terrestrial fresh water

Although most of the Earth's water today is salty seawater, a small fraction (~1%) is present in the form of fresh water distilled by evaporation from the ocean and falling on continental land masses as precipitation. The Hadean Earth did not have continents but was likely to have volcanoes similar to those from the same era still visible on Mars. The volcanism associated with such islands suggests an alternative hydrothermal site we will refer to as hydrothermal fields. Iceland is an analogous site on today's Earth, with several active volcanoes and associated hydrothermal areas supplied by precipitation and dominated by hot springs and geyser activity. In contrast to the single rock-water interface of hydrothermal vents, hydrothermal fields have a more complex array of three interfaces in which minerals, water and atmosphere undergo continuous fluctuations of wetting and drying.

The fluctuating hydrothermal field hypothesis has been used as a model for polymerization reactions in which monomers like amino acids and mononucleotides form peptide and ester bonds of biologically relevant polymers. The idea that evaporation and heat can drive polymerization is obvious and was first proposed years ago [ 19 ]. Lahav and White [ 20 ] adopted the approach and demonstrated that peptide bonds could be produced using clay as a catalyst. The approach was largely abandoned with the advent of the RNA World scenario that suggested a way for life to begin in solution, rather than by evaporation to dryness. However, polymerization in an aqueous medium requires chemical activation of the monomers, and so far there is no obvious mechanism by which activation can occur. Recent studies have returned to evaporation as a way to drive polymerization reactions [ 21 – 22 ].

There are several advantages to using evaporation in this regard. First, simply concentrating potential reactants adds significant free energy to a system that can be used to drive condensation reactions [ 23 ]. Furthermore, if amphiphilic compounds are present they can organize and concentrate reactants within a two dimensional plane with the result that polymerization is enhanced [ 24 – 25 ].

The hydrothermal field hypothesis has been tested in laboratory simulations. For instance, peptide bonds have been produced [ 26 – 27 ] and cycles of drying and rehydration have been shown to drive polymerization of mononucleotides [ 22 , 28 – 29 ]. Because the resulting polymers can be encapsulated in lipid vesicles, it has been proposed that the resulting protocells are candidates for combinatorial selection and the first steps of evolution [ 30 ].

From the above discussion, alternative conjectures have been published and are available for critical analysis and commentary. How can we turn the two conjectures into John Platt's alternative hypotheses? The answer is simple. We follow Platt's advice to devise critical experiments that will add weight of evidence to either or both of the alternative conjectures which then become testable hypotheses. Here is a proposed list of conditions that seem to be essential prerequisites if cellular life is to originate in one of the two alternative conditions:

  • There must be a source of organic compounds relevant to biological processes, such as amino acids, nucleobases, simple sugars and phosphate.
  • The organic solutes are likely to be present as very dilute solutions, so there should be a process by which they can be sufficiently concentrated to undergo chemical reactions relevant to cellular life.
  • Energy sources must be present in the environment to drive a primitive metabolism and polymerization.
  • Products of reactions should accumulate within the site rather than dispersing into the bulk phase environment.
  • Biologically relevant polymers are synthesized with chain lengths sufficient to act as catalysts or incorporate genetic information.
  • If amphiphilic compounds are present in the mixture, the conditions will allow them to assemble into membranous compartments.
  • A plausible physical mechanism can produce encapsulated polymers in the form of protocells and subject them to combinatorial selection.

These conditions can also be considered to be predictions, because each condition in the above list can be tested by observation, by theoretical analysis or in laboratory simulations. If any one of the predictions fails experimentally or is shown to be impossible, for instance by being inconsistent with thermodynamic principles, that alternative can be considered to be falsified. As evidence accumulates, we will be able to judge the relative plausibility and explanatory power of the competing ideas. Continued testing of the alternative hypotheses is essential, because neither has yet reached the level of consensus. In both cases, laboratory simulations will ideally be extended to a second important step, which is to visit the alternative sites and demonstrate that what happens in the laboratory can also occur in the actual conditions of hydrothermal vents or fields.

This article is part of the Thematic Series "From prebiotic chemistry to molecular evolution".

Reset password New user? Sign up

Existing user? Log in

Conjectures

Already have an account? Log in here.

A conjecture is a mathematical statement that has not yet been rigorously proved. Conjectures arise when one notices a pattern that holds true for many cases . However, just because a pattern holds true for many cases does not mean that the pattern will hold true for all cases. Conjectures must be proved for the mathematical observation to be fully accepted. When a conjecture is rigorously proved, it becomes a theorem.

A conjecture is an important step in problem solving; it is not just a tool for professional mathematicians. In everyday problem solving, it is very rare that a problem's solution is immediately apparent. Instead, the problem solving process involves analyzing the problem structure, examining cases, developing a conjecture about the solution, and then confirming that conjecture through proof.

Developing Conjectures

Proving conjectures, open conjectures, recently proved conjectures, disproved conjectures.

Conjectures can be made by anyone, as long as one notices a consistent pattern. Consider the following example involving Pascal's triangle :

The \(0^\text{th}\) through \(4^\text{th}\) rows of Pascal's triangle are shown below. \[1\\ 1\quad 1\\ 1\quad 2 \quad 1\\ 1\quad 3 \quad 3 \quad 1\\ 1\quad 4 \quad 6 \quad 4 \quad 1\\ \cdots\] Conjecture an expression for the sum of the elements in the \(n^\text{th}\) row of Pascal's triangle. The most sensible approach to begin the process of conjecturing is to see what happens for simple cases. Start by summing the first couple of rows: \[\begin{array}{lrcl} 0^\text{th}\text{ row:} & 1 & = & 1 \\ 1^\text{st}\text{ row:} & 1+1 & = & 2 \\ 2^\text{nd}\text{ row:} & 1+2+1 & = & 4 \\ 3^\text{rd}\text{ row:} & 1+3+3+1 & = & 8 \\ 4^\text{th}\text{ row:} & 1+4+6+4+1 & = & 16. \end{array}\] Now, observe the pattern in these results. It is clear that these are powers of \(2\). Try the next row to see if the pattern holds (recall how to construct the rows of Pascal's triangle ): \[\begin{array}{lrcl} 5^\text{th}\text{ row:} & 1+5+10+10+5+1 & = & 32. \\ \end{array}\] The pattern seems to hold. One can try as many rows as one would like, but the information gathered so far is enough to make a conjecture. Conjecture : The sum of the elements in the \(n^\text{th}\) row of Pascal's triangle is \(2^n\). \(_\square\)

\[\begin{array} & & & & 1& & \\ & & 2 & 3 & 4 & & \\ & 5 & 6 & 7 & 8 & 9 & \\ 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ &&& \vdots &&& \end{array}\]

Given that the pattern continues, find the second term in the \(13^\text{th}\) row.

Some conjectures can be more elusive to develop. If the pattern isn't obvious, carefully observe how the problem is structured.

Observe the following pattern: Let \(x_n\) be the number of segments that connect an \(n\times n\) square lattice. Conjecture an expression for \(x_n\). Observing the cases given by counting how many segments are in each figure, we have \[\begin{align} x_0 &= 0 \\ x_1 &= 4 \\ x_2 &= 12. \end{align}\] From these three cases, no obvious pattern emerges. Observe what the next case looks like: Counting the segments here gives \(x_3=24\). One might notice that each difference between consecutive terms in the sequence is a multiple of 4: \[\begin{align} x_1-x_0 &= 4 \\ x_2-x_1 &= 8 \\ x_3-x_2 &= 12. \end{align}\] This observation could lead one to write a recurrence relation \[x_n=x_{n-1}+4n.\] However, this would become a very tedious calculation if one was required to find the \(100^\text{th}\) term in the sequence. It would be more desirable to develop an expression for \(x_n\) purely in terms of \(n\). One could attempt to observe more cases in the sequence to see if any numerical pattern emerges. Often, a better way to tackle these kinds of problems is to think more creatively about how the problem is structured. Observe the same case for \(n=3\), except now the horizontal and vertical segments are color-coded. Notice that there are \(4\) horizontal lengths (in red), and each of them consists of \(3\) segments. The same is true for vertical lengths (in blue). Written as an expression, the total number of segments is \[x_3=2(3)(4)=24.\] Now consider the other cases, and see if the same structure applies: \[\begin{array}{ccccc} x_0 & = & 2(0)(1) & = & 0 \\ x_1 & = & 2(1)(2) & = & 4 \\ x_2 & = & 2(2)(3) & = & 12. \end{array}\] The pattern appears to hold. This gives enough information to write a conjecture. Conjecture : The number of segments connecting an \(n\times n\) lattice is defined by the sequence \(x_n=2n(n+1)\). \(_\square\)

Consecutive​ towers are built, as shown in the figure above.

The \(1^\text{st}\) tower has one floor made of two cards. The \(2^\text{nd}\) tower has two floors made of seven cards. The \(3^\text{rd}\) tower has three floors made of fifteen cards, and so on.

How many cards will the \(1000^\text{th}\) tower have?

The \(5\times 5\) array of dots represents trees in an orchard. If you were standing at the central spot marked C, you would not be able to see 8 of the 24 trees (shown as X). If you were standing at the center of a \(9\times 9\) array of trees , how many of the 80 trees would be hidden?

Keep in mind that observing a conjecture to be true for many cases doesn't make it true for all cases. In the history of mathematics, there have been many conjectures that were shown to be true for many cases, but were eventually disproved by a counterexample. For the sake of problem solving, it's important to prove each of these conjectures to ensure that they are correct.

One must always be wary of falling into the trap of observing a pattern and believing it must hold true for all cases. Consider the following values of the partition function :

\[\begin{align} p(2) &= 2 \\ p(3) &= 3 \\ p(4) &= 5 \\ p(5) &= 7 \\ p(6) &= 11. \end{align}\]

There is a very tempting pattern within these values, and it might cause one to make the following conjecture:

(Incorrect) Conjecture : The number of partitions of an integer \(n\) is \(p_{n-1}\), where \(p_k\) is the \(k^\text{th}\) prime number.

Observing the very next value of \(p(n)\) puts this conjecture to rest: \(p(7)=15\). As soon as a single case is shown to disobey the pattern, the conjecture is disproved. This is called a counterexample . Once a counterexample is found, it's not necessary to check any more values of the partition function. A conjecture must hold true for all cases, not just some.

\(A\) and \(B\) are two positive real numbers such that \(A\times B=100\). What is the maximum value of \(A+B\)?

Disproving a conjecture by counterexample can ensure that one isn't wasting time chasing a pattern that doesn't exist. However easy it is to disprove conjectures, a method to prove conjectures is still required.

The most common method for proving conjectures is direct proof . This method will be used to prove the lattice problem above.

Prove that the number of segments connecting an \(n\times n\) lattice is \(2n(n+1)\). Recall from the previous example how the segments in the lattice were counted. Most of work for the proof is already completed. Writing the proof is merely a process of formalizing how the formula was obtained. Proof : In each \(n\times n\) lattice, there are \(n+1\) horizontal lengths, each consisting of \(n\) segments. This is likewise true for vertical lengths. Thus, the total number of segments connecting an \(n\times n\) lattice is \(2n(n+1)\). \(_\square\)

If \(A\) is a positive integer, how many values of \(n\) satisfy \(1! + 2! + \cdots + n! = A^2\)?

Another possible method of proof is induction . Induction is most useful when the different cases in a problem are related to each other. As the elements of Pascal's triangle are very closely related to each other, this method is very useful for proofs involving Pascal's triangle.

Prove that the sum of the elements in the \(n^\text{th}\) row of Pascal's triangle is \(2^n\). Let \(s(n)\) be the sum of the elements in the \(n^\text{th}\) row of Pascal's triangle. Base Case : The \(0^\text{th}\) row contains only \(1\). Therefore, \(s(n)=1=2^0\). Inductive Step : Assume that \(s(n)=2^n\) for some integer \(n\). Show that if \(s(n)=2^n\), then \(s(n+1)=2^{n+1}\). This step requires some thinking about how the rows are related to each other. It might not be immediately apparent how this can be done, so begin with a single case. Examine how the \(3^\text{rd}\) and \(4^\text{th}\) rows are related to each other: \[ \begin{array}{rc} 3^\text{rd}\text{ row: } & 1 \quad 3 \quad 3 \quad 1 \\ 4^\text{th}\text{ row: } & 1 \quad 4 \quad 6 \quad 4 \quad 1. \end{array} \] The elements in the \(4^\text{th}\) row are composed of sums of elements in the \(3^\text{rd}\) row: \[\begin{array}{rc} 3^\text{rd}\text{ row: } & \color{red}{1} \qquad \quad \color{blue}{3} \qquad \quad \color{green}{3} \qquad \quad \color{purple}{1} \\ 4^\text{th}\text{ row: } & \color{red}{1} \qquad \color{red}{1}+\color{blue}{3} \quad \color{blue}{3}+\color{green}{3} \quad \color{green}{3}+\color{purple}{1} \qquad \color{purple}{1}. \end{array} \] Each element in the \(3^\text{rd}\) row appears exactly twice in the sums which compose the \(4^\text{th}\) row. Thus, the sum of elements in the \(4^\text{th}\) row is exactly twice as much as sum of elements in the \(3^\text{rd}\) row. Elements in Pascal's triangle are always composed of sums of elements from the preceding row. Thus, \(s(n+1)=2s(n)\) for any positive integer \(n\). If \(s(n)=2^n\), then \(s(n+1)=2\times 2^n=2^{n+1}\). The inductive step is complete. Thus, the sum of elements in the \(n^\text{th}\) row of Pascal's triangle is \(2^n\). \(_\square\)

Still another method for proving conjectures is to establish a bijection . Sometimes, other mathematicians have done the bulk of work required to solve a problem. What remains is to make the connection between other mathematicians' work and this problem, to apply formulas and theorems correctly.

Ann stands on the Southwest corner of the figure below. The lines represent streets. If Ann only travels North or East along the streets, how many paths will take her to the school in the Northeast corner? Generalize this problem for an \(m\times n\) grid. It may not seem immediately clear how to approach this problem. A good start would be to examine a couple of cases to see if a pattern emerges. One possible path for Ann would be to travel all the way North and then all the way East. \[\text{Path: NNNEEEE}\] Another possible path would be to travel all the way East and then all the way North. \[\text{Path: EEEENNN}\] It's also possible to alternate between traveling North and East. \[\text{Path: EENENEN}\] One could continue exhaustively listing out all the possible paths. As the paths are listed out, attempt to look for patterns or common threads. Notice how each path consists of exactly \(7\) moves, \(3\) of which are North moves and \(4\) of which are East moves. What makes a path distinct is in what order those moves occur. This information can be used to establish a bijection . Consider the order of Ann's moves to be defined as an order of \(7\) moves, \(3\) of which are North moves, with the rest being East moves. Ann's path can be defined as a combination of \(7\) moves, \(3\) of which are North. Thus, the paths that Ann could possibly take have a bijective relationship with the combinations of \(3\) distinct objects out of \(7\). The number of combinations can be calculated with the binomial coefficient \[\binom{7}{3}=35.\] Thus, there are \(35\) possible paths that Ann could take. More generally, the number of paths leading from one corner of an \(m\times n\) grid to the opposite corner is \(\binom{m+n}{n}\). This problem is explored further in the rectangular grid walk page . \(_\square\)

If \(A, B, C, D\) and \(E\) are all integers satisfying \(20 > A > B > C > D > E > 0\), how many different ways can the five variables be chosen?

There are many open conjectures in mathematics. An open conjecture is one that has been proposed, but no formal proof has yet been developed. The conjectures below are some of the most famous open conjectures.

Goldbach's Conjecture: (proposed 1742 by Christian Goldbach) Every even integer greater than \(2\) can be expressed as the sum of two (not necessarily distinct) prime numbers.

One can observe Goldbach's Conjecture for small cases:

\[\begin{align} 4 &= 2+2 \\ 6 &= 3+3 \\ 8 &= 3+5 \\ 10 &= 5+5 \\ 12 &= 5+7 \\ 14 &= 7+7 \\ 16 &= 3+13 \\ 18 &= 7+11 \\ &\cdots \end{align}\]

This process of checking all even numbers can be continued for a very long time. With the aid of computers, mathematicians have found that all even numbers up to \(4\times 10^{18}\) can be expressed as the sum of two prime numbers. Even though Goldbach's Conjecture holds for numbers so large, no mathematician has been able to prove that this pattern extends to infinity. If an even number that cannot be expressed as the sum of two primes were to be found, it would be very surprising.

How many distinct pairs of prime numbers sum to 2016?

Note : This problem is best done with the aid of a computer. The search is a bit tedious to do by hand.

Twin Prime Conjecture: (proposed 1849 by by Alphonse de Polignac) There are infinitely many pairs of twin primes .

It has been known for a very long time that there are infinitely many prime numbers . Twin primes , primes that differ by \(2\), are somewhat exceptional because primes are typically spaced far apart . The first couple of twin prime pairs are \((3,5)\), \((5,7)\), and \((11,13)\). Larger and larger pairs of twin primes continue to be discovered; as of September 2016, the largest known twin prime pair is \(2996863034895\times 2^{1290000}\pm 1\).

Recently, mathematicians Yitang Zhang and Terence Tao have produced work that suggests an upper bound for which there are infinitely many primes that differ by at most that amount. As of this writing, this upper bound is 246.

Riemann Hypothesis: (proposed 1859 by Bernhard Riemann) The Riemann zeta function has its zeros only at the negative even integers and the complex numbers with real part \(\frac{1}{2}\).

The Riemann hypothesis is one of the most important open problems in mathematics. If it were to be proved, it would lead to several important developments in number theory and algebra. The most notable of these potential developments would be a better understanding of the distribution of primes .

\(abc\) Conjecture: (proposed 1985 by Joseph Osterlé and David Masser) Let \(a\), \(b\), and \(c\) be positive pairwise co-prime integers such that \(a+b=c\). Let \(d\) be the product of the distinct prime factors of \(abc\). The \(abc\) conjecture states that \(d\) is usually not much smaller than \(c\).

The actual statement of the \(abc\) conjecture is much more precise and well-defined than the language, " usually not much smaller ," used here. However, this will suffice to demonstrate an example. The language implies that \(d\) is typically larger than \(c\), and only in extreme rare cases is \(d\) much smaller than \(c\).

Let \(a=49\), \(b=75\), and \(c=a+b\). Let \(d\) be the product of distinct prime factors of \(abc\). Show that \(d>c\). We have \[\begin{align} a&=7^2\\ b&=3\times 5^2\\\\ c&=a+b=49+75\\ &=124\\ &=2^2\times 31. \end{align}\] Note that \(\gcd(a,b)=1\), \(\gcd(a,c)=1\), and \(\gcd(b,c)=1\). This establishes that \(a\), \(b\), and \(c\) are pairwise co-prime , which is an important requirement of the \(abc\) conjecture. The distinct prime factors of \(abc\) are \(2\), \(3\), \(5\), \(7\), and \(31\). The product of these factors is then \[d=2\times 3\times 5\times 7\times 31=6510.\] Thus, \(d>c\). \(_\square\)

If one were to test many triplets \((a,b,c)\) that meet the requirements of the \(abc\) conjecture, one would find very few in which \(d<c\). The smallest possible triplet for which this is the case is \((1,8,9)\). The University of Leiden has led a search of triplets in which \(d<c\).

The \(abc\) conjecture is especially notable on this list because a proof is pending. In 2012, Shinichi Mochizuki published a series of new findings, including a proof of the \(abc\) conjecture. These new findings are, as of this writing, being reviewed by the mathematical community to ensure their accuracy. If this proof were to be accepted, then it would lead to an explosion of new theorems in number theory .

Although the above conjectures are still open, some conjectures have been open for a very long time, only to be recently proved. Below are a couple of the most famous examples.

Fermat's Last Theorem: (proposed 1637 by Pierre de Fermat, proved 1994 by Andrew Wiles) \[a^n+b^n=c^n\] There are no integer solutions \((a,\ b,\ c)\) for the above equation for any integer \(n>2\). \(_\square\)

Fermat's last theorem , originally written in the margins of Pierre de Fermat's copy of Arithmetica in 1637, frustrated mathematicians for centuries. During this time, many formal proofs were attempted, but none were successful. It wasn't until 1994 that Andrew Wiles released a formal proof that was accepted by the mathematical community.

Four Color Theorem: (proposed ~1850, proved 1976 by Kenneth Appel and Wolfgang Haken) Given any separation of a plane into contiguous regions, only four colors are needed to color the regions such that no pair of adjacent regions are the same color. \(_\square\) Image Credit : Wikipedia

The four color theorem is of particular interest because of how it was proved. It was the first major mathematical theorem to be proved with the help of computers. Appel and Haken's approach involved mapping out a set of possible counterexamples, and using these possible counterexamples to show that no counterexample could exist. If no counterexample could exist, then the theorem must be true. Their proof would have required an extremely extensive analysis by hand, but computers allowed this analysis to be done with much less effort.

Poincaré Conjecture: (proposed 1904 by Henri Poincaré, proved 2002 by Grigori Perelman) Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere.

The Poincaré conjecture has been so recently proved that it is still popularly known as a conjecture rather than as the "Poincaré theorem." The wiki page linked here contains much more information and explanations about the theorem.

In some rare cases, a conjecture with strong evidence has been proposed, only to be disproved some time later. There are also some mathematical observations which strongly suggest a pattern, but this pattern does not hold for all cases. Below are a couple of examples.

Prime-Generating Function: A prime-generating function produces prime number outputs for a specified set of inputs. As of now, there is no known prime-generating function that can be efficiently computed.

Even though no known prime-generating functions exist, there are many examples of functions that seem to come close.

Euler's Prime-Generating Polynomial: We have \[f(n)=n^2+n+41.\] For non-negative integer values of \(n\) less than \(41\), \(f(n)\) is a prime number: \[\begin{align} f(0) &= 41 \\ f(1) &= 43 \\ f(2) &= 47 \\ f(3) &= 53 \\ \vdots \\ f(40) &= 1681. \end{align}\] Note that \(f(41)\) is certain to be a composite number: \[\begin{align} f(41) &= 41^2+41+41 \\ f(41) &= 41(41+1+1) \\ f(41) &= 41(43). \end{align}\]

Of course, Euler never seriously thought that he had found a prime-generating function. However, an inattentive observer, seeing the first 40 results, might believe that the function would continue to produce primes indefinitely.

Euler produced an astounding amount of important mathematical results in his lifetime. It is somewhat surprising that one of his conjectures turned out to be false.

Euler's Sum of Powers Conjecture: (proposed 1769 by Leonhard Euler, disproved 1966 by L.J. Lander and T.R. Parkin) Given \(n>1\) and \(a_1, a_2, \ldots, a_n, b\) are non-zero integers, if \[\sum\limits_{i=1}^n{a_i^k}=b^k,\] then \(n\ge k\). Lander and Parkin found a counterexample with \(n=4\) and \(k=5\), which disproved this conjecture: \[27^5 + 84^5 + 110^5 + 133^5 = 144^5.\]

Problem Loading...

Note Loading...

Set Loading...

Mathematical Mysteries

Revealing the mysteries of mathematics

Axiom, Corollary, Lemma, Postulate, Conjectures and Theorems

what is a hypothesis and conjecture

“Lions and tigers, and bears, oh my!” ~ Dorothy in Wizard of Oz

Or should we say axioms, corollaries, lemmas, postulates, conjectures and theorems, oh my!

There are certain elementary statements, which are self evident and which are accepted without any questions. These are called  axioms.

Axiom 1: Things which are equal to the same thing are equal to one another.

For example:

Draw a line segment AB of length 10cm. Draw a second line CD having length equal to that of AB, using a compass. Measure the length of CD. We see that, CD = 10cm.

We can write it as, CD = AB and AB = 10cm implies CD = 10cm.

Arif, View. 2016. “Axioms, Postulates And Theorems – Class VIII”.  Breath Math . https://breathmath.com/2016/02/18/axioms-postulates-and-theorems-class-viii/ .

A statement that is taken to be true, so that further reasoning can be done.

It is not something we want to prove.

Example: one of Euclid’s axioms (over 2300 years ago!) is: “If A and B are two numbers that are the same, and C and D are also the same, A+C is the same as B+D”

“Definition Of Axiom”. 2021.  mathsisfun.Com . https://www.mathsisfun.com/definitions/axiom.html .

In mathematics an axiom is something which is the starting point for the logical deduction of other theorems. They cannot be proven with a logic derivation unless they are redundant. That means every field in mathematics can be boiled down to a set of axioms. One of the axioms of arithmetic is that a + b = b + a. You can’t prove that, but it is the basis of arithmetic and something we use rather often.

“Theorems, Lemmas And Other Definitions | Mathblog”. 2011.  mathblog.dk . https://www.mathblog.dk/theorems-lemmas/ .

In math it is known that you can’t prove everything. So, in order to lay a ground work for proving things, there is a list of things we “take for granted as true”. These things are either very basic definitions such as “point” “line”, or facts assumed to be true without proof that are very very simple. Then with these an accepted rules, one can prove other statements are true. The assumed facts are called “axioms” or sometimes “postulates”. The most famous are five postulates/axioms that Euclid’s geometry takes for granted. There are the following:

  • A straight  line segment  can be drawn joining any two points.
  • Any straight  line segment  can be extended indefinitely in a straight  line .
  • Given any straight  line segment , a  circle  can be drawn having the segment as  radius  and one endpoint as center.
  • All  right angles  are  congruent .
  • If two lines are drawn which  intersect  a third in such a way that the sum of the inner angles on one side is less than two  right angles , then the two lines inevitably must  intersect  each other on that side if extended far enough. This postulate is equivalent to what is known as the  parallel postulate .

The fifth postulate is perhaps the most “famous” as it is complex and people wanted to prove it from the first four, but couldn’t, and then it was discovered that there were systems in which the first four were true but the fifth wasn’t. These are called “non-Euclidean” geometries. Of course, here we take for granted what a point, line segment, line, circle, angle, and radius are at least as well.

Farris, Steven. “I don’t understand the concept of an axiom in mathematics. What is an axiom? How would you introduce or explain this concept to a 10-year-old?”. 2023.  Quora . https://qr.ae/pyVTM1 .

An  axiom  is just any concept or statement that we take as being true, without any need for a formal proof. It is usually something very fundamental to a given field, very well-established and/or self-evident. A non-mathematical example might be a simple statement of an observed truth, such as “the Sun rises in the East.” In math, such things as “a line can be extended to infinity” or “a point has no size” might be good examples. An axiom differs from a  postulate  in that an axiom is typically more general and common, while a postulate may apply only to a specific field. For instance, the difference between Euclidean and non-Euclidean geometries are just changes to one or more of the postulates on which they’re based. Another way to look at this is that a postulate is something we assume to be true only within that specific field.

Myers, Bob. “I don’t understand the concept of an axiom in mathematics. What is an axiom? How would you introduce or explain this concept to a 10-year-old?”. 2023.  Quora . https://qr.ae/pyVTwW .

It’s not so much that they don’t  require  proof, it’s that they can’t be proven. Axioms are  starting assumptions .

Everything that is proven is based on axioms, theorems, or definitions. You can’t prove an axiom without already having something to base your proof on, because deductive reasoning always needs a starting place. You have to start with good assumptions, and hope they’re true, or at least useful in the type of math you wish to create. (Don’t forget that math is just a human construct!)

That doesn’t mean that axioms come out of thin air. Some axioms are developed because if they don’t exist, the math doesn’t model the way we want it to. If you put 3 apples in your grocery cart, then put 4 more in, you have 7. But it works the same if you put 3 in, then 4. Now you have the commutative property of addition. You can’t  prove  addition works this way, but you need to set it up so that it does.

Often axioms are demonstrable. Try to draw two non-congruent triangles with sides of length 3, 4, and 5 units. You can’t. But you haven’t  proved  it using deductive reasoning. You’ve made a conjecture using inductive reasoning.

McClung, Carter. “Why don’t axioms require proofs?”. 2023.  Quora . https://qr.ae/pyVTO4 .

The axioms or postulates are the assumptions that are obvious universal truths, they are not proved. Euclid has introduced the geometry fundamentals like geometric shapes and figures in his book elements and has stated 5 main axioms or postulates. Here, we are going to discuss the definition of euclidean geometry, its elements, axioms and five important postulates. [4]

A theorem that  follows on  from another theorem.

Example: there is a  Theorem  that says: two angles that together form a straight line are “supplementary” (they add to 180°).

A  Corollary  to this is the “Vertical Angle Theorem” that says: where two lines intersect, the angles opposite each other are equal (a=c and b=d in the diagram).

Proof that a=c: Angles a and b are on a straight line, so: ⇒ angles a + b = 180° and so a = 180° − b Angles c and b are also on a straight line, so: ⇒ angles c + b = 180° and so c = 180° − b So angle a = angle c

“Corollary Definition (Illustrated Mathematics Dictionary)”. 2021.  mathsisfun.com . https://www.mathsisfun.com/definitions/corollary.html .

A corollary of a theorem or a definition is a statement that can be deduced directly from that theorem or statement. It still needs to be proved, though.

A simple example: Theorem: The sum of the angles of a triangle is pi radians.

Corollary: No angle in a right angled triangle can be obtuse.

Or: Definition: A prime number is one that can be divided without remainder only by 1 and itself.

Corollary: No even number > 2 can be prime.

A corollary is a theorem that can be proved from another theorem. For example: If two angles of a triangle are equal, then the sides opposite them are equal . A corollary would be: If a triangle is equilateral, it is also equiangular.

“What Are The Examples Of Corollary In Math? – Quora”. 2021.  quora.com . https://www.quora.com/What-are-the-examples-of-corollary-in-math .

Lemmas and corollaries are theorems themselves. It’s really not necessary to have different names for them. A corollary is a theorem that “easily” follows from the preceding theorem. For example, after proving the theorem that the sum of the angles in a triangle is 180°, an easy theorem to prove is that the sum of the angles in a quadrilateral is 360°. The proof is just to cut the quadrilateral into two triangles. So that theorem could be called a corollary. [2]

There is not formal difference between a theorem and a lemma.  A lemma is a proven proposition just like a theorem. Usually a lemma is used as a stepping stone for proving something larger. That means the convention is to call the main statement for a theorem and then split the problem into several smaller problems which are stated as lemmas.  Wolfram  suggest that a lemma  is a short theorem used to prove something larger.

Breaking part of the main proof out into lemmas is a good way to create a structure in a proof and sometimes their importance will prove more valuable than the main theorem.

Like a Theorem, but not as important. It is a minor result that has been proved to be true (using facts that were already known). [3]

Lemmas and corollaries are theorems themselves. It’s really not necessary to have different names for them. A lemma is a theorem that’s mentioned primarily because it’s used in one or more following theorems, but it’s not so interesting in itself. Sometimes lemmas are just minor observations, but sometimes they’ve got detailed proofs. [2]

Postulates  in geometry are very similar to axioms, self-evident truths, and beliefs in logic, political philosophy and personal decision-making.

Geometry postulates, or axioms, are accepted statements or facts. Thus, there is no need to prove them.

Postulate 1.1, Through two points, there is exactly 1 line. Line t is the only line passing through E and F.

what is a hypothesis and conjecture

In geometry, “ Axiom ” and “ Postulate ” are essentially interchangeable. In antiquity, they referred to propositions that were “obviously true” and only had to be stated, and not proven. In modern mathematics there is no longer an assumption that axioms are “obviously true”. Axioms are merely ‘background’ assumptions we make. The best analogy I know is that axioms are the “rules of the game”. In Euclid’s Geometry, the main axioms/postulates are:

  • Given any two distinct points, there is a line that contains them.
  • Any line segment can be extended to an infinite line.
  • Given a point and a radius, there is a circle with center in that point and that radius.
  • All right angles are equal to one another.
  • If a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side on which are the angles less than the two right angles. (The  parallel postulate ).

A  theorem  is a logical consequence of the axioms. In Geometry, the “propositions” are all theorems: they are derived using the axioms and the valid rules. A “Corollary” is a theorem that is usually considered an “easy consequence” of another theorem. What is or is not a corollary is entirely subjective. Sometimes what an author thinks is a ‘corollary’ is deemed more important than the corresponding theorem. (The same goes for “ Lemma “s, which are theorems that are considered auxiliary to proving some other, more important in the view of the author, theorem).

A “ hypothesis ” is an assumption made. For example, “If xx is an even integer, then x2x2 is an even integer” I am not asserting that x2x2 is even or odd; I am asserting that if  something  happens (namely, if xx happens to be an even integer) then  something else  will also happen. Here, “xx is an even integer” is the hypothesis being made to prove it.

Gordon Gustafson, and Arturo Magidin. 2010. “Difference Between Axioms, Theorems, Postulates, Corollaries, And Hypotheses”.  Mathematics Stack Exchange . https://math.stackexchange.com/questions/7717/difference-between-axioms-theorems-postulates-corollaries-and-hypotheses .

In geometry, a postulate is a statement that is assumed to be true based on basic geometric principles. An example of a postulate is the statement “exactly one line may be drawn through any two points.” A long time ago, postulates were the ideas that were thought to be so obviously true they did not require a proof. [1]

An axiom is a statement, usually considered to be self-evident, that assumed to be true without proof. It is used as a starting point in mathematical proof for deducing other truths.

Classically, axioms were considered different from postulates. An axiom would refer to a self-evident assumption common to many areas of inquiry, while a postulate referred to a hypothesis specific to a certain line of inquiry, that was accepted without proof. As an example, in Euclid’s Elements, you can compare “common notions” (axioms) with postulates.

In much of modern mathematics, however, there is generally no difference between what were classically referred to as “axioms” and “postulates”. Modern mathematics distinguishes between logical axioms and non-logical axioms, with the latter sometimes being referred to as postulates.

Postulates are assumptions which are specific to geometry but axioms are assumptions are used thru’ out mathematics and not specific to geometry.

“What is the difference between an axiom and postulates”. 2023.  BYJUs . https://byjus.com/question-answer/what-is-the-difference-between-an-axiom-and-postulates/ .

Hint: First you need to define both the terms, axiom and postulates. Examples of both can be stated. The main difference is between their application in specific fields in mathematics.

An axiom is a statement or proposition which is regarded as being established, accepted, or self-evidently true on which an abstractly defined structure is based. More precisely an axiom is a statement that is self-evident without any proof which is a starting point for further reasoning and arguments.

Postulate verbally means a fact, or truth of (something) as a basis for reasoning, discussion, or belief. Postulates are the basic structure from which lemmas and theorems are derived.

Nowadays ‘axiom’ and ‘postulate’ are usually interchangeable terms. One key difference between them is that postulates are true assumptions that are specific to geometry. Axioms are true assumptions used throughout mathematics and not specifically linked to geometry.

“What is the difference between an axiom and a postulate?”. 2023. Vedantu . https://www.vedantu.com/question-answer/difference-between-an-axiom-and-a-post-class-10-maths-cbse-5efeafa98c08f1791a1cc34a .

A  conjecture  is a mathematical statement that has not yet been rigorously proved. Conjectures arise when one notices a pattern that holds true for many  cases . However, just because a pattern holds true for many cases does not mean that the pattern will hold true for all cases. Conjectures must be proved for the mathematical observation to be fully accepted. When a conjecture is rigorously proved, it becomes a theorem.

“Conjectures | Brilliant Math & Science Wiki”. 2022.  brilliant.org . https://brilliant.org/wiki/conjectures/ .

“The Subtle Art Of The Mathematical Conjecture | Quanta Magazine”. 2019.  Quanta Magazine . https://www.quantamagazine.org/the-subtle-art-of-the-mathematical-conjecture-20190507/ .

A result that has been  proved to be true  (using operations and facts that were already known).

Example: The “Pythagoras Theorem” proved that a 2  + b 2  = c 2  for a right angled triangle.

A Theorem is a major result, a minor result is called a Lemma.

“Theorem Definition (Illustrated Mathematics Dictionary)”. 2021.  mathsisfun.Com . https://www.mathsisfun.com/definitions/theorem.html .

“Theorems, Corollaries, Lemmas”. 2021.  mathsisfun.com . https://www.mathsisfun.com/algebra/theorems-lemmas.html .

A statement that is proven true using postulates, definitions, and previously proven theorems.

A theorem is a mathematical statement that can and must be proven to be true. You may have been first exposed to the term when learning about the Pythagorean Theorem . Learning different theorems and proving they are true is an important part of Geometry. [1]

[1] “4.1 Theorems and Proofs”. 2022. CK-12 Foundation . https://flexbooks.ck12.org/cbook/ck-12-interactive-geometry-for-ccss/section/4.1/primary/lesson/theorems-and-proofs-geo-ccss/ .

[2] Joyce, David . “Can a theorem be proved by a corollary?”. 2023.  Quora . https://qr.ae/pybAMq .

Yes, a theorem can be proved by a corollary just so long as the corollary is proved first. You might have a sequence of theorems in logical order like this: Theorem 1, Corollary 2, Lemma 3, Theorem 4, Theorem 5. Each one is proved from those that precede it, but Theorem 5 could depend only on Corollary 2 and Lemma 3. Sometimes theorems are presented in a different order than the logical order, and sometimes even in reverse logical order, but whatever order they’re presented, it is necessary that there is no circular logic.

[3] “Definition Of Lemma”. 2021. mathsisfun.com . https://www.mathsisfun.com/definitions/lemma.html .

[4] “Euclidean Geometry (Definition, Facts, Axioms and Postulates)”. 2021. BYJUS . BYJU’S. September 20. https://byjus.com/maths/euclidean-geometry/ .

Additional Reading

“Basic Math Definitions”. 2021.  mathsisfun.com . https://www.mathsisfun.com/basic-math-definitions.html .

Browning, Wes . “Can a theorem be proved by another theorem?”. 2023.  Quora . https://qr.ae/pybAUz .

Sure. Sometimes the second theorem is called a “corollary.” Sometimes the first theorem is called a “lemma” and the second is called a theorem implied by the lemma. Or they’re both called theorems. The choice of names is up to the author of the exposition and is meant to clarify the logical flow. You may occasionally also see the term “ porism ” used. After a theorem has been proved, a porism is another theorem that can be proved by essentially the same proof as the first, usually by obvious modifications. I had a professor in math grad school who loved to trot porisms out after proving a theorem in his classes.

“Byrne’s Euclid”. 2021.  C82.Net . https://www.c82.net/euclid/ .

THE FIRST SIX BOOKS OF THE ELEMENTS OF EUCLID WITH COLOURED DIAGRAMS AND SYMBOLS A reproduction of Oliver Byrne’s celebrated work from 1847 plus interactive diagrams, cross references, and posters designed by Nicholas Rougeux

“Definitions. Postulates. Axioms: First Principles Of Plane Geometry “. 2021.  themathpage.com . https://themathpage.com/aBookI/first.htm#post .

“Geometry Postulates”. 2021.  basic-mathematics.com . https://www.basic-mathematics.com/geometry-postulates.html .

Mystery, Mike the. 2024. “Is George Orwell Right About 2+2=4 in Maths?”  Medium . Medium. March 12. https://medium.com/@Mike_Meng/is-george-orwell-right-about-2-2-4-in-maths-3bb0f6d5dd88 .

Freedom is the freedom to say that two plus two makes four. ——George Orwell, Nineteen Eighty-Four. When I first read George Orwell’s great “1984”, the above sentence left an indelible impact on me. It is worth mentioning that my first reaction to this quote was why Orwell used 2+2=4 instead of 1+1=2. And that’s exactly the first time I realized I was pedantic enough to get a maths degree in future. Ok, so why 2+2=4 is true? Before directly into the topic, i need to introduce some basic rules that we use to calculate numbers every single day. The rule is actually called Peano axioms, which is a logic system about natural numbers proposed by the 19th-century mathematician Giuseppe Peano . And we can establish an arithmetic system by these sets of axioms, which is also known as the Peano arithmetic system.

“Zermelo-Fraenkel Set Theory (ZFC)”. 2023.  Mathematical Mysteries . https://mathematicalmysteries.org/zermelo-fraenkel-set-theory-zfc/ .

Zermelo–Fraenkel set theory  (abbreviated  ZF ) is a system of  axioms  used to describe  set theory . When the  axiom of choice  is added to ZF, the system is called  ZFC . It is the system of axioms used in set theory by most mathematicians today.

The featured image on this page is from the Redubble website.

Website Powered by WordPress.com .

' src=

  • Already have a WordPress.com account? Log in now.
  • Subscribe Subscribed
  • Copy shortlink
  • Report this content
  • View post in Reader
  • Manage subscriptions
  • Collapse this bar

SEP home page

  • Table of Contents
  • Random Entry
  • Chronological
  • Editorial Information
  • About the SEP
  • Editorial Board
  • How to Cite the SEP
  • Special Characters
  • Advanced Tools
  • Support the SEP
  • PDFs for SEP Friends
  • Make a Donation
  • SEPIA for Libraries
  • Entry Contents

Bibliography

Academic tools.

  • Friends PDF Preview
  • Author and Citation Info
  • Back to Top

The Continuum Hypothesis

The continuum hypothesis (CH) is one of the most central open problems in set theory, one that is important for both mathematical and philosophical reasons.

The problem actually arose with the birth of set theory; indeed, in many respects it stimulated the birth of set theory. In 1874 Cantor had shown that there is a one-to-one correspondence between the natural numbers and the algebraic numbers. More surprisingly, he showed that there is no one-to-one correspondence between the natural numbers and the real numbers. Taking the existence of a one-to-one correspondence as a criterion for when two sets have the same size (something he certainly did by 1878), this result shows that there is more than one level of infinity and thus gave birth to the higher infinite in mathematics. Cantor immediately tried to determine whether there were any infinite sets of real numbers that were of intermediate size, that is, whether there was an infinite set of real numbers that could not be put into one-to-one correspondence with the natural numbers and could not be put into one-to-one correspondence with the real numbers. The continuum hypothesis (under one formulation) is simply the statement that there is no such set of real numbers. It was through his attempt to prove this hypothesis that led Cantor do develop set theory into a sophisticated branch of mathematics. [ 1 ]

Despite his efforts Cantor could not resolve CH. The problem persisted and was considered so important by Hilbert that he placed it first on his famous list of open problems to be faced by the 20 th century. Hilbert also struggled to resolve CH, again without success. Ultimately, this lack of progress was explained by the combined results of Gödel and Cohen, which together showed that CH cannot be resolved on the basis of the axioms that mathematicians were employing; in modern terms, CH is independent of Zermelo-Fraenkel set theory extended with the Axiom of Choice (ZFC).

This independence result was quickly followed by many others. The independence techniques were so powerful that set theorists soon found themselves preoccupied with the meta-theoretic enterprise of proving that certain fundamental statements could not be proved or refuted within ZFC. The question then arose as to whether there were ways to settle the independent statements. The community of mathematicians and philosophers of mathematics was largely divided on this question. The pluralists (like Cohen) maintained that the independence results effectively settled the question by showing that it had no answer . On this view, one could adopt a system in which, say CH was an axiom and one could adopt a system in which ¬CH was an axiom and that was the end of the matter—there was no question as to which of two incompatible extensions was the “correct” one. The non-pluralists (like Gödel) held that the independence results merely indicated the paucity of our means for circumscribing mathematical truth. On this view, what was needed were new axioms, axioms that are both justified and sufficient for the task. Gödel actually went further in proposing candidates for new axioms—large cardinal axioms—and he conjectured that they would settle CH.

Gödel's program for large cardinal axioms proved to be remarkably successful. Over the course of the next 30 years it was shown that large cardinal axioms settle many of the questions that were shown to be independent during the era of independence. However, CH was left untouched. The situation turned out to be rather ironic since in the end it was shown (in a sense that can be made precise) that although the standard large cardinal axioms effectively settle all question of complexity strictly below that of CH, they cannot (by results of Levy and Solovay and others) settle CH itself. Thus, in choosing CH as a test case for his program, Gödel put his finger precisely on the point where it fails. It is for this reason that CH continues to play a central role in the search for new axioms.

In this entry we shall give an overview of the major approaches to settling CH and we shall discuss some of the major foundational frameworks which maintain that CH does not have an answer. The subject is a large one and we have had to sacrifice full comprehensiveness in two dimensions. First, we have not been able to discuss the major philosophical issues that are lying in the background. For this the reader is directed to the entry “ Large Cardinals and Determinacy ”, which contains a general discussion of the independence results, the nature of axioms, the nature of justification, and the successes of large cardinal axioms in the realm “below CH”. Second, we have not been able to discuss every approach to CH that is in the literature. Instead we have restricted ourselves to those approaches that appear most promising from a philosophical point of view and where the mathematics has been developed to a sufficiently advanced state. In the approaches we shall discuss—forcing axioms, inner model theory, quasi-large cardinals—the mathematics has been pressed to a very advanced stage over the course of 40 years. And this has made our task somewhat difficult. We have tried to keep the discussion as accessible as possible and we have placed the more technical items in the endnotes. But the reader should bear in mind that we are presenting a bird's eye view and that for a higher resolution at any point the reader should dip into the suggested readings that appear at the end of each section. [ 2 ]

There are really two kinds of approaches to new axioms—the local approach and the global approach. On the local approach one seeks axioms that answer questions concerning a specifiable fragment of the universe, such as V ω+1 or V ω+2 , where CH lies. On the global approach one seeks axioms that attempt to illuminate the entire structure of the universe of sets. The global approach is clearly much more challenging. In this entry we shall start with the local approach and toward the end we shall briefly touch upon the global approach.

Here is an overview of the entry: Section 1 surveys the independence results in cardinal arithmetic, covering both the case of regular cardinals (where CH lies) and singular cardinals. Section 2 considers approaches to CH where one successively verifies a hierarchy of approximations to CH, each of which is an “effective” version of CH. This approach led to the remarkable discovery of Woodin that it is possible (in the presence of large cardinals) to have an effective failure of CH, thereby showing, that the effective failure of CH is as intractable (with respect to large cardinal axioms) as CH itself. Section 3 continues with the developments that stemmed from this discovery. The centerpiece of the discussion is the discovery of a “canonical” model in which CH fails. This formed the basis of a network of results that was collectively presented by Woodin as a case for the failure of CH. To present this case in the most streamlined form we introduce the strong logic Ω-logic. Section 4 takes up the competing foundational view that there is no solution to CH. This view is sharpened in terms of the generic multiverse conception of truth and that view is then scrutinized. Section 5 continues the assessment of the case for ¬CH by investigating a parallel case for CH. In the remaining two sections we turn to the global approach to new axioms and here we shall be much briefer. Section 6 discusses the approach through inner model theory. Section 7 discusses the approach through quasi-large cardinal axioms.

1.1 Regular Cardinals

1.2 singular cardinals, 2.1 three versions, 2.2 the foreman-magidor program, 3.1 ℙ max, 3.2 ω-logic, 3.3 the case, 4.1 broad multiverse views, 4.2 the generic multiverse, 4.3 the ω conjecture and the generic multiverse, 4.4 is there a way out, 5.1 the case for ¬ch, 5.2 the parallel case for ch, 5.3 assessment.

  • 6 The Ultimate Inner Model
  • 7 The Structure Theory of L ( V λ+1 )

Other Internet Resources

Related entries, 1. independence in cardinal arithmetic.

In this section we shall discuss the independence results in cardinal arithmetic. First, we shall treat of the case of regular cardinals, where CH lies and where very little is determined in the context of ZFC. Second, for the sake of comprehensiveness, we shall discuss the case of singular cardinals, where much more can be established in the context of ZFC.

The addition and multiplication of infinite cardinal numbers is trivial: For infinite cardinals κ and λ,

κ + λ = κ ⋅ λ = max{κ,λ}.

The situation becomes interesting when one turns to exponentiation and the attempt to compute κ λ for infinite cardinals.

During the dawn of set theory Cantor showed that for every cardinal κ,

2 κ > κ.

There is no mystery about the size of 2 n for finite n . The first natural question then is where 2 ℵ 0 is located in the aleph-hierarchy: Is it ℵ 1 , ℵ 2 , …, ℵ 17 or something much larger?

The cardinal 2 ℵ 0 is important since it is the size of the continuum (the set of real numbers). Cantor's famous continuum hypothesis (CH) is the statement that 2 ℵ 0 = ℵ 1 . This is a special case of the generalized continuum hypothesis (GCH) which asserts that for all α, 2 ℵ α = ℵ α+1 . One virtue of GCH is that it gives a complete solution to the problem of computing κ λ for infinite cardinals: Assuming GCH, if κ ≤ λ then κ λ = λ + ; if cf(κ) ≤ λ ≤ κ then κ λ = κ + ; and if λ < cf(κ) then κ λ = κ.

Very little progress was made on CH and GCH. In fact, in the early era of set theory the only other piece of progress beyond Cantor's result that 2 κ > κ (and the trivial result that if κ ≤ λ then 2 κ ≤ 2 λ ) was König's result that cf(2 κ ) > κ. The explanation for the lack of progress was provided by the independence results in set theory:

To prove this Gödel invented the method of inner models —he showed that CH and GCH held in the minimal inner model L of ZFC. Cohen then complemented this result:

He did this by inventing the method of outer models and showing that CH failed in a generic extension V B of V . The combined results of Gödel and Cohen thus demonstrate that assuming the consistency of ZFC, it is in principle impossible to settle either CH or GCH in ZFC.

In the Fall of 1963 Easton completed the picture by showing that for infinite regular cardinals κ the only constraints on the function κ ↦ 2 κ that are provable in ZFC are the trivial constraint and the results of Cantor and König:

  • if κ ≤ λ then F (κ) ≤ F (λ) ,
  • F (κ) > κ , and
  • cf( F (κ)) > κ .

Thus, set theorists had pushed the cardinal arithmetic of regular cardinals as far as it could be pushed within the confines of ZFC.

The case of cardinal arithmetic on singular cardinals is much more subtle. For the sake of completeness we pause to briefly discuss this before proceeding with the continuum hypothesis.

It was generally believed that, as in the case for regular cardinals, the behaviour of the function κ ↦ 2 κ would be relatively unconstrained within the setting of ZFC. But then Silver proved the following remarkable result: [ 3 ]

It turns out that (by a deep result of Magidor, published in 1977) GCH can first fail at ℵ ω (assuming the consistency of a supercompact cardinal). Silver's theorem shows that it cannot first fail at ℵ ω 1 and this is provable in ZFC.

This raises the question of whether one can “control” the size of 2 ℵ δ with a weaker assumption than that ℵ δ is a singular cardinal of uncountable cofinality such that GCH holds below ℵ δ . The natural hypothesis to consider is that ℵ δ is a singular cardinal of uncountable cofinality which is a strong limit cardinal , that is, that for all α < ℵ δ , 2 α < ℵ δ . In 1975 Galvin and Hajnal proved (among other things) that under this weaker assumption there is indeed a bound:

2 ℵ δ < ℵ (|δ| cf(δ) ) + .

It is possible that there is a jump—in fact, Woodin showed (again assuming large cardinals) that it is possible that for all κ, 2 κ = κ ++ . What the above theorem shows is that in ZFC there is a provable bound on how big the jump can be.

The next question is whether a similar situation prevails with singular cardinals of countable cofinality. In 1978 Shelah showed that this is indeed the case. To fix ideas let us concentrate on ℵ ω .

2 ℵ ω < ℵ (2 ℵ 0 ) + .

One drawback of this result is that the bound is sensitive to the actual size of 2 ℵ 0 , which can be anything below ℵ ω . Remarkably Shelah was later able to remedy this with the development of his pcf (possible cofinalities) theory. One very quotable result from this theory is the following:

2 ℵ ω < ℵ ω 4 .

In summary, although the continuum function at regular cardinals is relatively unconstrained in ZFC, the continuum function at singular cardinals is (provably in ZFC) constrained in significant ways by the behaviour of the continuum function on the smaller cardinals.

Further Reading : For more cardinal arithmetic see Jech (2003). For more on the case of singular cardinals and pcf theory see Abraham & Magidor (2010) and Holz, Steffens & Weitz (1999).

2. Definable Versions of the Continuum Hypothesis and its Negation

Let us return to the continuum function on regular cardinals and concentrate on the simplest case, the size of 2 ℵ 0 . One of Cantor's original approaches to CH was by investigating “simple” sets of real numbers (see Hallett (1984), pp. 3–5 and §2.3(b)). One of the first results in this direction is the Cantor-Bendixson theorem that every infinite closed set is either countable or contains a perfect subset, in which case it has the same cardinality as the set of reals. In other words, CH holds (in this formulation) when one restricts one's attention to closed sets of reals. In general, questions about “definable” sets of reals are more tractable than questions about arbitrary sets of reals and this suggests looking at definable versions of the continuum hypothesis.

There are three different formulations of the continuum hypothesis—the interpolant version, the well-ordering version, and the surjection version. These versions are all equivalent to one another in ZFC but we shall be imposing a definability constraint and in this case there can be interesting differences (our discussion follows Martin (1976)). There is really a hierarchy of notions of definability—ranging up through the Borel hierarchy, the projective hierarchy, the hierarchy in L (ℝ), and, more generally, the hierarchy of universally Baire sets—and so each of these three general versions is really a hierarchy of versions, each corresponding to a given level of the hierarchy of definability (for a discussion of the hierarchy of definability see §2.2.1 and §4.6 of the entry “ Large Cardinals and Determinacy ”).

2.1.1 Interpolant Version

The first formulation of CH is that there is no interpolant , that is, there is no infinite set A of real numbers such that the cardinality of A is strictly between that of the natural numbers and the real numbers. To obtain definable versions one simply asserts that there is no “definable” interpolant and this leads to a hierarchy of definable interpolant versions, depending on which notion of definability one employs. More precisely, for a given pointclass Γ in the hierarchy of definable sets of reals, the corresponding definable interpolant version of CH asserts that there is no interpolant in Γ.

The Cantor-Bendixson theorem shows that there is no interpolant in Γ in the case where Γ is the pointclass of closed sets, thus verifying this version of CH. This was improved by Suslin who showed that this version of CH holds for Γ where Γ is the class of Σ̰ 1 1 sets. One cannot go much further within ZFC—to prove stronger versions one must bring in stronger assumptions. It turns out that axioms of definable determinacy and large cardinal axioms achieve this. For example, results of Kechris and Martin show that if Δ̰ 1 n -determinacy holds then this version of CH holds for the pointclass of Σ̰ 1 n+1 sets. Going further, if one assumes AD L (ℝ) then this version of CH holds for all sets of real numbers appearing in L (ℝ). Since these hypotheses follow from large cardinal axioms one also has that stronger and stronger large cardinal assumptions secure stronger and stronger versions of this version of the effective continuum hypothesis. Indeed large cardinal axioms imply that this version of CH holds for all sets of reals in the definability hierarchy we are considering; more precisely, if there is a proper class of Woodin cardinals then this version of CH holds for all universally Baire sets of reals.

2.1.2 Well-ordering Version

The second formulation of CH asserts that every well-ordering of the reals has order type less than ℵ 2 . For a given pointclass Γ in the hierarchy, the corresponding definable well-ordering version of CH asserts that every well-ordering (coded by a set) in Γ has order type less than ℵ 2 .

Again, axioms of definable determinacy and large cardinal axioms imply this version of CH for richer notions of definability. For example, if AD L (ℝ) holds then this version of CH holds for all sets of real numbers in L (ℝ). And if there is a proper class of Woodin cardinals then this version of CH holds for all universally Baire sets of reals.

2.1.3 Surjection Version

The third version formulation of CH asserts that there is no surjection ρ : ℝ → ℵ 2 , or, equivalently, that there is no prewellordering of ℝ of length ℵ 2 . For a given pointclass Γ in the hierarchy of definability, the corresponding surjection version of CH asserts that there is no surjection ρ : ℝ → ℵ 2 such that (the code for) ρ is in Γ.

Here the situation is more interesting. Axioms of definable determinacy and large cardinal axioms have bearing on this version since they place bounds on how long definable prewellorderings can be. Let δ̰ 1 n be the supremum of the lengths of the Σ̰ 1 n -prewellorderings of reals and let Θ L (ℝ) be the supremum of the lengths of prewellorderings of reals where the prewellordering is definable in the sense of being in L (ℝ). It is a classical result that δ̰ 1 1 = ℵ 1 . Martin showed that δ̰ 1 2 ≤ ℵ 2 and that if there is a measurable cardinal then δ̰ 1 3 ≤ ℵ 3 . Kunen and Martin also showed under PD, δ̰ 1 4 ≤ ℵ 4 and Jackson showed that under PD, for each n < ω, δ̰ 1 n < ℵ ω . Thus, assuming that there are infinitely many Woodin cardinals, these bounds hold. Moreover, the bounds continue to hold regardless of the size of 2 ℵ 0 . Of course, the question is whether these bounds can be improved to show that the prewellorderings are shorter than ℵ 2 . In 1986 Foreman and Magidor initiated a program to establish this. In the most general form they aimed to show that large cardinal axioms implied that this version of CH held for all universally Baire sets of reals.

2.1.4 Potential Bearing on CH

Notice that in the context of ZFC, these three hierarchies of versions of CH are all successive approximations of CH and in the limit case, where Γ is the pointclass of all sets of reals, they are equivalent to CH. The question is whether these approximations can provide any insight into CH itself.

There is an asymmetry that was pointed out by Martin, namely, that a definable counterexample to CH is a real counterexample, while no matter how far one proceeds in verifying definable versions of CH at no stage will one have touched CH itself. In other words, the definability approach could refute CH but it could not prove it.

Still, one might argue that although the definability approach could not prove CH it might provide some evidence for it. In the case of the first two versions we now know that CH holds for all definable sets. Does this provide evidence of CH? Martin pointed out (before the full results were known) that this is highly doubtful since in each case one is dealing with sets that are atypical. For example, in the first version, at each stage one secures the definable version of CH by showing that all sets in the definability class have the perfect set property; yet such sets are atypical in that assuming AC it is easy to show that there are sets without this property. In the second version, at each stage one actually shows not only that each well-ordering of reals in the definability class has ordertype less than ℵ 2 , but also that it has ordertype less than ℵ 1 . So neither of these versions really illuminates CH.

The third version actually has an advantage in this regard since not all of the sets it deals with are atypical. For example, while all Σ̰ 1 1 -sets have length less than ℵ 1 , there are Π̰ 1 1 -sets of length ℵ 1 . Of course, it could turn out that even if the Foreman-Magidor program were to succeed the sets could turn out to be atypical in another sense, in which case it would shed little light on CH. More interesting, however, is the possibility that in contrast to the first two versions, it would actually provide an actual counterexample to CH. This, of course, would require the failure of the Foreman-Magidor program.

The goal of the Foreman-Magidor program was to show that large cardinal axioms also implied that the third version of CH held for all sets in L (ℝ) and, more generally, all universally Baire sets. In other words, the goal was to show that large cardinal axioms implied that Θ L (ℝ) ≤ ℵ 2 and, more generally, that Θ L (A,ℝ) ≤ ℵ 2 for each universally Baire set A .

The motivation came from the celebrated results of Foreman, Magidor and Shelah on Martin's Maximum (MM), which showed that assuming large cardinal axioms one can always force to obtain a precipitous ideal on ℵ 2 without collapsing ℵ 2 (see Foreman, Magidor & Shelah (1988)). The program involved a two-part strategy:

  • Strengthen this result to show that assuming large cardinal axioms one can always force to obtain a saturated ideal on ℵ 2 without collapsing ℵ 2 .
  • Show that the existence of such a saturated ideal implies that Θ L (ℝ) ≤ ℵ 2 and, more generally that Θ L (A,ℝ) ≤ ℵ 2 for every universally Baire set A .

This would show that show that Θ L (ℝ) ≤ ℵ 2 and, more generally that Θ L (A,ℝ) ≤ ℵ 2 for every universally Baire set A . [ 4 ]

In December 1991, the following result dashed the hopes of this program.

The point is that the hypothesis of this theorem can always be forced assuming large cardinals. Thus, it is possible to have Θ L (ℝ) > ℵ 2 (in fact, δ̰ 1 3 > ℵ 2 ).

Where did the program go wrong? Foreman and Magidor had an approximation to (B) and in the end it turned out that (B) is true.

So the trouble is with (A).

This illustrates an interesting contrast between our three versions of the effective continuum hypothesis, namely, that they can come apart. For while large cardinals rule out definable counterexamples of the first two kinds, they cannot rule out definable counterexamples of the third kind. But again we must stress that they cannot prove that there are such counterexamples.

But there is an important point: Assuming large cardinal axioms (AD L (ℝ) suffices), although one can produce outer models in which δ̰ 1 3 > ℵ 2 it is not currently known how to produce outer models in which δ̰ 1 3 > ℵ 3 or even Θ L (ℝ) > ℵ 3 . Thus it is an open possibility that from ZFC +AD L (ℝ) one can prove Θ L (ℝ) ≤ ℵ 3 . Were this to be the case, it would follow that although large cardinals cannot rule out the definable failure of CH they can rule out the definable failure of 2 ℵ 0 = ℵ 2 . This could provide some insight into the size of the continuum, underscoring the centrality of ℵ 2 .

Further Reading : For more on the three effective versions of CH see Martin (1976); for more on the Foreman-Magidor program see Foreman & Magidor (1995) and the introduction to Woodin (1999).

3. The Case for ¬CH

The above results led Woodin to the identification of a “canonical” model in which CH fails and this formed the basis of his an argument that CH is false. In Section 3.1 we will describe the model and in the remainder of the section we will present the case for the failure of CH. In Section 3.2 we will introduce Ω-logic and the other notions needed to make the case. In Section 3.3 we will present the case.

The goal is to find a model in which CH is false and which is canonical in the sense that its theory cannot be altered by set forcing in the presence of large cardinals. The background motivation is this: First, we know that in the presence of large cardinal axioms the theory of second-order arithmetic and even the entire theory of L (ℝ) is invariant under set forcing. The importance of this is that it demonstrates that our main independence techniques cannot be used to establish the independence of questions about second-order arithmetic (or about L (ℝ)) in the presence of large cardinals. Second, experience has shown that the large cardinal axioms in question seem to answer all of the major known open problems about second-order arithmetic and L (ℝ) and the set forcing invariance theorems give precise content to the claim that these axioms are “effectively complete”. [ 5 ]

It follows that if ℙ is any homogeneous partial order in L (ℝ) then the generic extension L (ℝ) ℙ inherits the generic absoluteness of L (ℝ). Woodin discovered that there is a very special partial order ℙ max that has this feature. Moreover, the model L (ℝ) ℙ max satisfies ZFC + ¬CH. The key feature of this model is that it is “maximal” (or “saturated”) with respect to sentences that are of a certain complexity and which can be shown to be consistent via set forcing over the model; in other words, if these sentences can hold (by set forcing over the model) then they do hold in the model. To state this more precisely we are going to have to introduce a few rather technical notions.

There are two ways of stratifying the universe of sets. The first is in terms of ⟨ V α | α ∈ On ⟩, the second is in terms of ⟨ H (κ) | κ ∈ Card⟩, where H (κ) is the set of all sets which have cardinality less than κ and whose members have cardinality less than κ, and whose members of members have cardinality less than κ, and so on. For example, H (ω) = V ω and the theories of the structures H (ω 1 ) and V ω+1 are mutually interpretable. This latter structure is the structure of second-order arithmetic and, as mentioned above, large cardinal axioms give us an “effectively complete” understanding of this structure. We should like to be in the same position with regard to larger and larger fragments of the universe and the question is whether we should proceed in terms of the first or the second stratification.

The second stratification is potentially more fine-grained. Assuming CH one has that the theories of H (ω 2 ) and V ω+2 are mutually interpretable and assuming larger and larger fragments of GCH this correspondence continues upward. But if CH is false then the structure H (ω 2 ) is less rich than the structure V ω 2 . In this event the latter structure captures full third-order arithmetic, while the former captures only a small fragment of third-order arithmetic but is nevertheless rich enough to express CH. Given this, in attempting to understand the universe of sets by working up through it level by level, it is sensible to use the potentially more fine-grained stratification.

Our next step is therefore to understand H (ω 2 ). It actually turns out that we will be able to understand slightly more and this is somewhat technical. We will be concerned with the structure ⟨ H (ω 2 ), ∈, I NS , A G ⟩ ⊧ φ, where I NS is the non-stationary ideal on ω 1 and A G is the interpretation of (the canonical representation of) a set of reals A in L (ℝ). The details will not be important and the reader is asked to just think of H (ω 2 ) along with some “extra stuff” and not worry about the details concerning the extra stuff. [ 6 ]

We are now in a position to state the main result:

⟨ H (ω 2 ), ∈, I NS , A G ⟩ ⊧ φ
L (ℝ) ℙ max ⊧ “⟨ H (ω 2 ), ∈, I NS , A⟩ ⊧ φ”.

There are two key points: First, the theory of L (ℝ) ℙ max is “effectively complete” in the sense that it is invariant under set forcing. Second, the model L (ℝ) ℙ max is “maximal” (or “saturated”) in the sense that it satisfies all Π 2 -sentences (about the relevant structure) that can possibly hold (in the sense that they can be shown to be consistent by set forcing over the model).

One would like to get a handle on the theory of this structure by axiomatizing it. The relevant axiom is the following:

Finally, this axiom settles CH:

We will now recast the above results in terms of a strong logic. We shall make full use of large cardinal axioms and in this setting we are interested in logics that are “well-behaved” in the sense that the question of what implies what is not radically independent. For example, it is well known that CH is expressible in full second-order logic. It follows that in the presence of large cardinals one can always use set forcing to flip the truth-value of a purported logical validity of full second-order logic. However, there are strong logics—like ω-logic and β-logic—that do not have this feature—they are well-behaved in the sense that in the presence of large cardinal axioms the question of what implies what cannot be altered by set forcing. We shall introduce a very strong logic that has this feature—Ω-logic. In fact, the logic we shall introduce can be characterized as the strongest logic with this feature (see Koellner (2010) for further discussion of strong logics and for a precise statement of this result).

3.2.1 Ω-logic

T ⊧ Ω φ
if V B α ⊧ T then V B α ⊧ φ.

We say that a statement φ is Ω- satisfiable if there exists an ordinal α and a complete Boolean algebra B such that V B α ⊧ φ, and we say that φ is Ω- valid if ∅ ⊧ Ω φ. So, the above theorem says that (under our background assumptions), the statement “φ is Ω-satisfiable” is generically invariant and in terms of Ω-validity this is simply the following:

T ⊧ Ω φ iff V B ⊧ “T ⊧ Ω φ.”

Thus this logic is robust in that the question of what implies what is invariant under set forcing.

3.2.2 The Ω Conjecture

Corresponding to the semantic relation ⊧ Ω there is a quasi-syntactic proof relation ⊢ Ω . The “proofs” are certain robust sets of reals (universally Baire sets of reals) and the test structures are models that are “closed” under these proofs. The precise notions of “closure” and “proof” are somewhat technical and so we will pass over them in silence. [ 7 ]

Like the semantic relation, this quasi-syntactic proof relation is robust under large cardinal assumptions:

T ⊢ Ω φ iff V B ⊧ ‘T ⊢ Ω φ’.

Thus, we have a semantic consequence relation and a quasi-syntactic proof relation, both of which are robust under the assumption of large cardinal axioms. It is natural to ask whether the soundness and completeness theorems hold for these relations. The soundness theorem is known to hold:

It is open whether the corresponding completeness theorem holds. The Ω Conjecture is simply the assertion that it does:

∅ ⊧ Ω φ iff ∅ ⊢ Ω φ.

We will need a strong form of this conjecture which we shall call the Strong Ω Conjecture. It is somewhat technical and so we will pass over it in silence. [ 8 ]

3.2.3 Ω-Complete Theories

Recall that one key virtue of large cardinal axioms is that they “effectively settle” the theory of second-order arithmetic (and, in fact, the theory of L (ℝ) and more) in the sense that in the presence of large cardinals one cannot use the method of set forcing to establish independence with respect to statements about L (ℝ). This notion of invariance under set forcing played a key role in Section 3.1 . We can now rephrase this notion in terms of Ω-logic.

The invariance of the theory of L (ℝ) under set forcing can now be rephrased as follows:

Unfortunately, it follows from a series of results originating with work of Levy and Solovay that traditional large cardinal axioms do not yield Ω-complete theories at the level of Σ 2 1 since one can always use a “small” (and hence large cardinal preserving) forcing to alter the truth-value of CH.

Nevertheless, if one supplements large cardinal axioms then Ω-complete theories are forthcoming. This is the centerpiece of the case against CH.

  • ZFC + A is Ω -satisfiable and
  • ZFC + A is Ω -complete for the structure H (ω 2 ) .
ZFC + A ⊧ Ω ‘ H (ω 2 ) ⊧ ¬CH’.

Let us rephrase this as follows: For each A satisfying (1), let

T A = {φ | ZFC + A ⊧ Ω ‘ H (ω 2 ) ⊧ ¬φ’}.

The theorem says that if there is a proper class of Woodin cardinals and the Ω Conjecture holds, then there are (non-trivial) Ω-complete theories T A of H (ω 2 ) and all such theories contain ¬CH.

It is natural to ask whether there is greater agreement among the Ω-complete theories T A . Ideally, there would be just one. A recent result (building on Theorem 5.5) shows that if there is one such theory then there are many such theories.

 i.  ZFC + A is Ω -satisfiable and ii.  ZFC + A is Ω -complete for the structure H (ω 2 ) .
 i′.  ZFC + B is Ω -satisfiable and ii′.  ZFC + B is Ω -complete for the structure H (ω 2 )

How then shall one select from among these theories? Woodin's work in this area goes a good deal beyond Theorem 5.1. In addition to isolating an axiom that satisfies (1) of Theorem 5.1 (assuming Ω-satisfiability), he isolates a very special such axiom, namely, the axiom (∗) (“star”) mentioned earlier.

This axiom can be phrased in terms of (the provability notion of) Ω-logic:

  • (∗) .
⟨ H (ω 2 ), ∈, I NS , A | A ∈ 𝒫 (ℝ) ∩ L (ℝ)⟩
ZFC + “⟨ H (ω 2 ), ∈, I NS , A | A ∈ 𝒫 (ℝ) ∩ L (ℝ)⟩ ⊧ φ”
⟨ H (ω 2 ), ∈, I NS , A | A ∈ 𝒫 (ℝ) ∩ L (ℝ)⟩ ⊧ φ.

It follows that of the various theories T A involved in Theorem 5.1, there is one that stands out: The theory T (∗) given by (∗). This theory maximizes the Π 2 -theory of the structure ⟨ H (ω 2 ), ∈, I NS , A | A ∈ 𝒫 (ℝ) ∩ L (ℝ)⟩.

The continuum hypothesis fails in this theory. Moreover, in the maximal theory T (∗) given by (∗) the size of the continuum is ℵ 2 . [ 9 ]

To summarize: Assuming the Strong Ω Conjecture, there is a “good” theory of H (ω 2 ) and all such theories imply that CH fails. Moreover, (again, assuming the Strong Ω Conjecture) there is a maximal such theory and in that theory 2 ℵ 0 = ℵ 2 .

Further Reading : For the mathematics concerning ℙ max see Woodin (1999). For an introduction to Ω-logic see Bagaria, Castells & Larson (2006). For more on incompatible Ω-complete theories see Koellner & Woodin (2009). For more on the case against CH see Woodin (2001a,b, 2005a,b).

4. The Multiverse

The above case for the failure of CH is the strongest known local case for axioms that settle CH. In this section and the next we will switch sides and consider the pluralist arguments to the effect that CH does not have an answer (in this section) and to the effect that there is an equally good case for CH (in the next section). In the final two section we will investigate optimistic global scenarios that provide hope of settling the issue.

The pluralist maintains that the independence results effectively settle the undecided questions by showing that they have no answer. One way of providing a foundational framework for such a view is in terms of the multiverse. On this view there is not a single universe of set theory but rather a multiverse of legitimate candidates, some of which may be preferable to others for certain purposes but none of which can be said to be the “true” universe. The multiverse conception of truth is the view that a statement of set theory can only be said to be true simpliciter if it is true in all universes of the multiverse. For the purposes of this discussion we shall say that a statement is indeterminate according to the multiverse conception if it is neither true nor false according to the multiverse conception. How radical such a view is depends on the breadth of the conception of the multiverse.

The pluralist is generally a non-pluralist about certain domains of mathematics. For example, a strict finitist might be a non-pluralist about PA but a pluralist about set theory and one might be a non-pluralist about ZFC and a pluralist about large cardinal axioms and statements like CH.

There is a form of radical pluralism which advocates pluralism concerning all domains of mathematics. On this view any consistent theory is a legitimate candidate and the corresponding models of such theories are legitimate candidates for the domain of mathematics. Let us call this the broadest multiverse view. There is a difficulty in articulating this view, which may be brought out as follows: To begin with, one must pick a background theory in which to discuss the various models and this leads to a difficult. For example, according to the broad multiverse conception, since PA cannot prove Con(PA) (by the second incompleteness theorem, assuming that PA is consistent) there are models of PA + ¬Con(PA) and these models are legitimate candidates, that is, they are universes within the broad multiverse. Now to arrive at this conclusion one must (in the background theory) be in a position to prove Con(PA) (since this assumption is required to apply the second incompleteness theorem in this particular case). Thus, from the perspective of the background theory used to argue that the above models are legitimate candidates, the models in question satisfy a false Σ 0 1 -sentence, namely, ¬Con(PA). In short, there is a lack of harmony between what is held at the meta-level and what is held at the object-level.

The only way out of this difficulty would seem to be to regard each viewpoint—each articulation of the multiverse conception—as provisional and, when pressed, embrace pluralism concerning the background theory. In other words, one would have to adopt a multiverse conception of the multiverse, a multiverse conception of the multiverse conception of the multiverse, and so on, off to infinity. It follows that such a position can never be fully articulated—each time one attempts to articulate the broad multiverse conception one must employ a background theory but since one is a pluralist about that background theory this pass at using the broad multiverse to articulate the conception does not do the conception full justice. The position is thus difficult to articulate. One can certainly take the pluralist stance and try to gesture toward or exhibit the view that one intends by provisionally settling on a particular background theory but then advocate pluralism regarding that when pressed. The view is thus something of a “moving target”. We shall pass over this view in silence and concentrate on views that can be articulated within a foundational framework.

We will accordingly look at views which embrace non-pluralism with regard to a given stretch of mathematics and for reasons of space and because this is an entry on set theory we will pass over the long debates concerning strict finitism, finitism, predicativism, and start with views that embrace non-pluralism regarding ZFC.

Let the broad multiverse (based on ZFC) be the collection of all models of ZFC. The broad multiverse conception of truth (based on ZFC) is then simply the view that a statement of set theory is true simpliciter if it is provable in ZFC. On this view the statement Con(ZFC) and other undecided Π 0 1 -statements are classified as indeterminate. This view thus faces a difficulty parallel to the one mentioned above concerning radical pluralism.

This motivates the shift to views that narrow the class of universes in the multiverse by employing a strong logic. For example, one can restrict to universes that are ω-models, β-models (i.e., wellfounded), etc. On the view where one takes ω-models, the statement Con(ZFC) is classified as true (though this is sensitive to the background theory) but the statement PM (all projective sets are Lebesgue measurable) is classified as indeterminate.

For those who are convinced by the arguments (surveyed in the entry “ Large Cardinals and Determinacy ”) for large cardinal axioms and axioms of definable determinacy, even these multiverse conceptions are too weak. We will follow this route. For the rest of this entry we will embrace non-pluralism concerning large cardinal axioms and axioms of definable determinacy and focus on the question of CH.

The motivation behind the generic multiverse is to grant the case for large cardinal axioms and definable determinacy but deny that statements such as CH have a determinate truth value. To be specific about the background theory let us take ZFC + “There is a proper class of Woodin cardinals” and recall that this large cardinal assumption secures axioms of definable determinacy such as PD and AD L (ℝ) .

Let the generic multiverse 𝕍 be the result of closing V under generic extensions and generic refinements. One way to formalize this is by taking an external vantage point and start with a countable transitive model M . The generic multiverse based on M is then the smallest set 𝕍 M such that M ∈ 𝕍 M and, for each pair of countable transitive models ( N , N [ G ]) such that N ⊧ ZFC and G ⊆ ℙ is N -generic for some partial order in ℙ ∈ N , if either N or N [ G ] is in 𝕍 M then both N and N [ G ] are in 𝕍 M .

Let the generic multiverse conception of truth be the view that a statement is true simpliciter iff it is true in all universes of the generic multiverse. We will call such a statement a generic multiverse truth . A statement is said to be indeterminate according to the generic multiverse conception iff it is neither true nor false according to the generic multiverse conception. For example, granting our large cardinal assumptions, such a view deems PM (and PD and AD L (ℝ) ) true but deems CH indeterminate.

Is the generic multiverse conception of truth tenable? The answer to this question is closely related to the subject of Ω-logic. The basic connection between generic multiverse truth and Ω-logic is embodied in the following theorem:

  • φ is a generic multiverse truth.
  • φ is Ω -valid.

Now, recall that by Theorem 3.5, under our background assumptions, Ω-validity is generically invariant. It follows that given our background theory, the notion of generic multiverse truth is robust with respect to Π 2 -statements. In particular, for Π 2 -statements, the statement “φ is indeterminate” is itself determinate according to the generic multiverse conception. In this sense the conception of truth is not “self-undermining” and one is not sent in a downward spiral where one has to countenance multiverses of multiverses. So it passes the first test. Whether it passes a more challenging test depends on the Ω Conjecture.

The Ω Conjecture has profound consequences for the generic multiverse conception of truth. Let

𝒱 Ω = {φ | ∅ ⊧ Ω φ}

and, for any specifiable cardinal κ, let

𝒱 Ω ( H (κ + )) = {φ | ZFC ⊧ Ω “ H (κ + ) ⊧ φ”},

where recall that H (κ + ) is the collection of sets of hereditary cardinality less than κ + . Thus, assuming ZFC and that there is a proper class of Woodin cardinals, the set 𝒱 Ω is Turing equivalent to the set of Π 2 generic multiverse truths and the set 𝒱 Ω ( H (κ + )) is precisely the set of generic multiverse truths of H (κ + ).

To describe the bearing of the Ω Conjecture on the generic-multiverse conception of truth, we introduce two Transcendence Principles which serve as constraints on any tenable conception of truth in set theory—a truth constraint and a definability constraint .

This constraint is in the spirit of those principles of set theory—most notably, reflection principles—which aim to capture the pretheoretic idea that the universe of sets is so rich that it cannot “be described from below”; more precisely, it asserts that any tenable conception of truth must respect the idea that the universe of sets is so rich that truth (or even just Π 2 -truth) cannot be described in some specifiable fragment. (Notice that by Tarski's theorem on the undefinability of truth, the truth constraint is trivially satisfied by the standard conception of truth in set theory which takes the multiverse to contain a single element, namely, V .)

There is also a related constraint concerning the definability of truth. For a specifiable cardinal κ, set Y ⊆ ω is definable in H (κ + ) across the multiverse if Y is definable in the structure H (κ + ) of each universe of the multiverse (possibly by formulas which depend on the parent universe).

Notice again that by Tarski's theorem on the undefinability of truth, the definability constraint is trivially satisfied by the degenerate multiverse conception that takes the multiverse to contain the single element V . (Notice also that if one modifies the definability constraint by adding the requirement that the definition be uniform across the multiverse, then the constraint would automatically be met.)

The bearing of the Ω Conjecture on the tenability of the generic-multiverse conception of truth is contained in the following two theorems:

In other words, if there is a proper class of Woodin cardinals and if the Ω Conjecture holds then the generic multiverse conception of truth violates both the Truth Constraint (at δ 0 ) and the Definability Constraint (at δ 0 ).

There are actually sharper versions of the above results that involve H ( c + ) in place of H (δ + 0 ).

In other words, if there is a proper class of Woodin cardinals and if the Ω Conjecture holds then the generic-multiverse conception of truth violates the Truth Constraint at the level of third-order arithmetic, and if, in addition, the AD + Conjecture holds, then the generic-multiverse conception of truth violates the Definability Constraint at the level of third-order arithmetic.

There appear to be four ways that the advocate of the generic multiverse might resist the above criticism.

First, one could maintain that the Ω Conjecture is just as problematic as CH and hence like CH it is to be regarded as indeterminate according to the generic-multiverse conception of truth. The difficulty with this approach is the following:

V ⊧ Ω-conjecture iff V 𝔹 ⊧ Ω-conjecture.

Thus, in contrast to CH, the Ω Conjecture cannot be shown to be independent of ZFC + “There is a proper class of Woodin cardinals” via set forcing. In terms of the generic multiverse conception of truth, we can put the point this way: While the generic-multiverse conception of truth deems CH to be indeterminate, it does not deem the Ω Conjecture to be indeterminate. So the above response is not available to the advocate of the generic-multiverse conception of truth. The advocate of that conception already deems the Ω Conjecture to be determinate.

Second, one could grant that the Ω Conjecture is determinate but maintain that it is false. There are ways in which one might do this but that does not undercut the above argument. The reason is the following: To begin with there is a closely related Σ 2 -statement that one can substitute for the Ω Conjecture in the above arguments. This is the statement that the Ω Conjecture is (non-trivially) Ω-satisfiable, that is, the statement: There exists an ordinal α and a universe V′ of the multiverse such that

V′ α ⊧ ZFC + “There is a proper class of Woodin cardinals”
V′ α ⊧ “The Ω Conjecture”.

This Σ 2 -statement is invariant under set forcing and hence is one adherents to the generic multiverse view of truth must deem determinate. Moreover, the key arguments above go through with this Σ 2 -statement instead of the Ω Conjecture. The person taking this second line of response would thus also have to maintain that this statement is false. But there is substantial evidence that this statement is true . The reason is that there is no known example of a Σ 2 -statement that is invariant under set forcing relative to large cardinal axioms and which cannot be settled by large cardinal axioms. (Such a statement would be a candidate for an absolutely undecidable statement.) So it is reasonable to expect that this statement is resolved by large cardinal axioms. However, recent advances in inner model theory—in particular, those in Woodin (2010)—provide evidence that no large cardinal axiom can refute this statement. Putting everything together: It is very likely that this statement is in fact true ; so this line of response is not promising.

Third, one could reject either the Truth Constraint or the Definability Constraint. The trouble is that if one rejects the Truth Constraint then on this view (assuming the Ω Conjecture) Π 2 truth in set theory is reducible in the sense of Turing reducibility to truth in H (δ 0 ) (or, assuming the Strong Ω Conjecture, H ( c + )). And if one rejects the Definability Constraint then on this view (assuming the Ω Conjecture) Π 2 truth in set theory is reducible in the sense of definability to truth in H (δ 0 ) (or, assuming the Strong Ω Conjecture, H ( c + )). On either view, the reduction is in tension with the acceptance of non-pluralism regarding the background theory ZFC + “There is a proper class of Woodin cardinals”.

Fourth, one could embrace the criticism, reject the generic multiverse conception of truth, and admit that there are some statements about H (δ + 0 ) (or H ( c + ), granting, in addition, the AD + Conjecture) that are true simpliciter but not true in the sense of the generic-multiverse, and yet nevertheless continue to maintain that CH is indeterminate. The difficulty is that any such sentence φ is qualitatively just like CH in that it can be forced to hold and forced to fail. The challenge for the advocate of this approach is to modify the generic-multiverse conception of truth in such a way that it counts φ as determinate and yet counts CH as indeterminate.

In summary: There is evidence that the only way out is the fourth way out and this places the burden back on the pluralist—the pluralist must come up with a modified version of the generic multiverse.

Further Reading : For more on the connection between Ω-logic and the generic multiverse and the above criticism of the generic multiverse see Woodin (2011a). For the bearing of recent results in inner model theory on the status of the Ω Conjecture see Woodin (2010).

5. The Local Case Revisited

Let us now turn to a second way in which one might resist the local case for the failure of CH. This involves a parallel case for CH. In Section 5.1 we will review the main features of the case for ¬CH in order to compare it with the parallel case for CH. In Section 5.2 we will present the parallel case for CH. In Section 5.3 we will assess the comparison.

Recall that there are two basic steps in the case presented in Section 3.3 . The first step involves Ω-completeness (and this gives ¬CH) and the second step involves maximality (and this gives the stronger 2 ℵ 0 = ℵ 2 ). For ease of comparison we shall repeat these features here:

The first step is based on the following result:

ZFC + A ⊧ Ω “ H (ω 2 ) ⊧ ¬CH”.
T A = {φ | ZFC + A ⊧ Ω “ H (ω 2 ) ⊧ ¬φ”}.

The theorem says that if there is a proper class of Woodin cardinals and the Strong Ω Conjecture holds, then there are (non-trivial) Ω-complete theories T A of H (ω 2 ) and all such theories contain ¬CH. In other words, under these assumptions, there is a “good” theory and all “good” theories imply ¬CH.

The second step begins with the question of whether there is greater agreement among the Ω-complete theories T A . Ideally, there would be just one. However, this is not the case.

Then there is an axiom B such that

This raises the issue as to how one is to select from among these theories? It turns out that there is a maximal theory among the T A and this is given by the axiom (∗).

is Ω -consistent, then

So, of the various theories T A involved in Theorem 5.1, there is one that stands out: The theory T (∗) given by (∗). This theory maximizes the Π 2 -theory of the structure ⟨ H (ω 2 ), ∈, I NS , A | A ∈ 𝒫 (ℝ) ∩ L (ℝ)⟩. The fundamental result is that in this maximal theory

2 ℵ 0 = ℵ 2 .

The parallel case for CH also has two steps, the first involving Ω-completeness and the second involving maximality.

The first result in the first step is the following:

Moreover, up to Ω-equivalence, CH is the unique Σ 2 1 -statement that is Ω-complete for Σ 2 1 ; that is, letting T A be the Ω-complete theory given by ZFC + A where A is Σ 2 1 , all such T A are Ω-equivalent to T CH and hence (trivially) all such T A contain CH. In other words, there is a “good” theory and all “good” theories imply CH.

To complete the first step we have to determine whether this result is robust. For it could be the case that when one considers the next level, Σ 2 2 (or further levels, like third-order arithmetic) CH is no longer part of the picture, that is, perhaps large cardinals imply that there is an axiom A such that ZFC + A is Ω-complete for Σ 2 2 (or, going further, all of third order arithmetic) and yet not all such A have an associated T A which contains CH. We must rule this out if we are to secure the first step.

The most optimistic scenario along these lines is this: The scenario is that there is a large cardinal axiom L and axioms A → such that ZFC + L + A → is Ω-complete for all of third-order arithmetic and all such theories are Ω-equivalent and imply CH. Going further, perhaps for each specifiable fragment V λ of the universe of sets there is a large cardinal axiom L and axioms A → such that ZFC + L + A → is Ω-complete for the entire theory of V λ and, moreover, that such theories are Ω-equivalent and imply CH. Were this to be the case it would mean that for each such λ there is a unique Ω-complete picture of V λ and we would have a unique Ω-complete understanding of arbitrarily large fragments of the universe of sets. This would make for a strong case for new axioms completing the axioms of ZFC and large cardinal axioms.

Unfortunately, this optimistic scenario fails: Assuming the existence of one such theory one can construct another which differs on CH:

ZFC + L + A → is Ω-complete for Th( V λ ).
ZFC + L + B → is Ω-complete for Th( V λ )

This still leaves us with the question of existence and the answer to this question is sensitive to the Ω Conjecture and the AD + Conjecture:

In fact, under a stronger assumption, the scenario must fail at a much earlier level.

It is open whether there can be such a theory at the level of Σ 2 2 . It is conjectured that ZFC + ◇ is Ω-complete (assuming large cardinal axioms) for Σ 2 2 .

Let us assume that it is answered positively and return to the question of uniqueness. For each such axiom A , let T A be the Σ 2 2 theory computed by ZFC + A in Ω-logic. The question of uniqueness simply asks whether T A is unique.

 i. ZFC + A is Ω -satisfiable and ii. ZFC + A is Ω -complete for Σ 2 2 .
 i′. ZFC + B is Ω -satisfiable and ii′. ZFC + B is Ω -complete for Σ 2 2

This is the parallel of Theorem 5.2.

To complete the parallel one would need that CH is among all of the T A . This is not known. But it is a reasonable conjecture.

  • ZFC + A is Ω-satisfiable and
  • ZFC + A is Ω-complete for the Σ 2 2 .
ZFC + A ⊧ Ω CH.

Should this conjecture hold it would provide a true analogue of Theorem 5.1. This would complete the parallel with the first step.

There is also a parallel with the second step. Recall that for the second step in the previous subsection we had that although the various T A did not agree, they all contained ¬CH and, moreover, from among them there is one that stands out, namely the theory given by (∗), since this theory maximizes the Π 2 -theory of the structure ⟨ H (ω 2 ), ∈, I NS , A | A ∈ P (ℝ) ∩ L (ℝ)⟩. In the present context of CH we again (assuming the conjecture) have that although the T A do not agree, they all contain CH. It turns out that once again, from among them there is one that stands out, namely, the maximum one. For it is known (by a result of Woodin in 1985) that if there is a proper class of measurable Woodin cardinals then there is a forcing extension satisfying all Σ 2 2 sentences φ such that ZFC + CH + φ is Ω-satisfiable (see Ketchersid, Larson, & Zapletal (2010)). It follows that if the question of existence is answered positively with an A that is Σ 2 2 then T A must be this maximum Σ 2 2 theory and, consequently, all T A agree when A is Σ 2 2 . So, assuming that there is a T A where A is Σ 2 2 , then, although not all T A agree (when A is arbitrary) there is one that stands out, namely, the one that is maximum for Σ 2 2 sentences.

Thus, if the above conjecture holds, then the case of CH parallels that of ¬CH, only now Σ 2 2 takes the place of the theory of H (ω 2 ).

Assuming that the conjecture holds the case of CH parallels that of ¬CH, only now Σ 2 2 takes the place of the theory of H (ω 2 ): Under the background assumptions we have:

  • there are A such that ZFC + A is Ω-complete for H (ω 2 )
  • for every such A the associated T A contains ¬CH, and
  • there is a T A which is maximal, namely, T (∗) and this theory contains 2 ℵ 0 = ℵ 2 .
  • there are Σ 2 2 -axioms A such that ZFC + A is Ω-complete for Σ 2 2
  • for every such A the associated T A contains CH, and
  • there is a T A which is maximal.

The two situations are parallel with regard to maximality but in terms of the level of Ω-completeness the first is stronger. For in the first case we are not just getting Ω-completeness with regard to the Π 2 theory of H (ω 2 ) (with the additional predicates), rather we are getting Ω-completeness with regard to all of H (ω 2 ). This is arguably an argument in favour of the case for ¬CH, even granting the conjecture.

But there is a stronger point. There is evidence coming from inner model theory (which we shall discuss in the next section) to the effect that the conjecture is in fact false . Should this turn out to be the case it would break the parallel, strengthening the case for ¬CH.

However, one might counter this as follows: The higher degree of Ω-completeness in the case for ¬CH is really illusory since it is an artifact of the fact that under (∗) the theory of H (ω 2 ) is in fact mutually interpretable with that of H (ω 1 ) (by a deep result of Woodin). Moreover, this latter fact is in conflict with the spirit of the Transcendence Principles discussed in Section 4.3 . Those principles were invoked in an argument to the effect that CH does not have an answer. Thus, when all the dust settles the real import of Woodin's work on CH (so the argument goes) is not that CH is false but rather that CH very likely has an answer.

It seems fair to say that at this stage the status of the local approaches to resolving CH is somewhat unsettled. For this reason, in the remainder of this entry we shall focus on global approaches to settling CH. We shall very briefly discuss two such approaches—the approach via inner model theory and the approach via quasi-large cardinal axioms.

6. The Ultimate Inner Model

Inner model theory aims to produce “ L -like” models that contain large cardinal axioms. For each large cardinal axiom Φ that has been reached by inner model theory, one has an axiom of the form V = L Φ . This axiom has the virtue that (just as in the simplest case of V = L ) it provides an “effectively complete” solution regarding questions about L Φ (which, by assumption, is V ). Unfortunately, it turns out that the axiom V = L Φ is incompatible with stronger large cardinal axioms Φ'. For this reason, axioms of this form have never been considered as plausible candidates for new axioms.

But recent developments in inner model theory (due to Woodin) show that everything changes at the level of a supercompact cardinal. These developments show that if there is an inner model N which “inherits” a supercompact cardinal from V (in the manner in which one would expect, given the trajectory of inner model theory), then there are two remarkable consequences: First, N is close to V (in, for example, the sense that for sufficiently large singular cardinals λ, N correctly computes λ + ). Second, N inherits all known large cardinals that exist in V . Thus, in contrast to the inner models that have been developed thus far, an inner model at the level of a supercompact would provide one with an axiom that could not be refuted by stronger large cardinal assumptions.

The issue, of course, is whether one can have an “ L -like” model (one that yields an “effectively complete” axiom) at this level. There is reason to believe that one can. There is now a candidate model L Ω that yields an axiom V = L Ω with the following features: First, V = L Ω is “effectively complete.” Second, V = L Ω is compatible with all large cardinal axioms. Thus, on this scenario, the ultimate theory would be the (open-ended) theory ZFC + V = L Ω + LCA, where LCA is a schema standing for “large cardinal axioms.” The large cardinal axioms will catch instances of Gödelian independence and the axiom V = L Ω will capture the remaining instances of independence. This theory would imply CH and settle the remaining undecided statements. Independence would cease to be an issue.

It turns out, however, that there are other candidate axioms that share these features, and so the spectre of pluralism reappears. For example, there are axioms V = L Ω S and V = L Ω (∗) . These axioms would also be “effectively complete” and compatible with all large cardinal axioms. Yet they would resolve various questions differently than the axiom V = L Ω . For example, the axiom, V = L Ω (∗) would imply ¬CH. How, then, is one to adjudicate between them?

Further Reading : For an introduction to inner model theory see Mitchell (2010) and Steel (2010). For more on the recent developments at the level of one supercompact and beyond see Woodin (2010).

7. The Structure Theory of L ( V λ+1 )

This brings us to the second global approach, one that promises to select the correct axiom from among V = L Ω , V = L Ω S , V = L Ω (∗) , and their variants. This approach is based on the remarkable analogy between the structure theory of L (ℝ) under the assumption of AD L (ℝ) and the structure theory of L ( V λ+1 ) under the assumption that there is an elementary embedding from L ( V λ+1 ) into itself with critical point below λ. This embedding assumption is the strongest large cardinal axiom that appears in the literature.

The analogy between L (ℝ) and L ( V λ+1 ) is based on the observation that L (ℝ) is simply L ( V ω+1 ). Thus, λ is the analogue of ω, λ + is the analogue of ω 1 , and so on. As an example of the parallel between the structure theory of L (ℝ) under AD L (ℝ) and the structure theory of L ( V λ+1 ) under the embedding axiom, let us mention that in the first case, ω 1 is a measurable cardinal in L (ℝ) and, in the second case, the analogue of ω 1 —namely, λ + —is a measurable cardinal in L ( V λ+1 ). This result is due to Woodin and is just one instance from among many examples of the parallel that are contained in his work.

Now, we have a great deal of information about the structure theory of L (ℝ) under AD L (ℝ) . Indeed, as we noted above, this axiom is “effectively complete” with regard to questions about L (ℝ). In contrast, the embedding axiom on its own is not sufficient to imply that L ( V λ+1 ) has a structure theory that fully parallels that of L (ℝ) under AD L (ℝ) . However, the existence of an already rich parallel is evidence that the parallel extends, and we can supplement the embedding axiom by adding some key components. When one does so, something remarkable happens: the supplementary axioms become forcing fragile . This means that they have the potential to erase independence and provide non-trivial information about V λ+1 . For example, these supplementary axioms might settle CH and much more.

The difficulty in investigating the possibilities for the structure theory of L ( V λ+1 ) is that we have not had the proper lenses through which to view it. The trouble is that the model L ( V λ+1 ) contains a large piece of the universe—namely, L ( V λ+1 )—and the theory of this structure is radically underdetermined. The results discussed above provide us with the proper lenses. For one can examine the structure theory of L ( V λ+1 ) in the context of ultimate inner models like L Ω , L Ω S , L Ω (∗) , and their variants. The point is that these models can accommodate the embedding axiom and, within each, one will be able to compute the structure theory of L ( V λ+1 ).

This provides a means to select the correct axiom from among V = L Ω , V = L Ω S , V = L Ω (∗) , and their variants. One simply looks at the L ( V λ+1 ) of each model (where the embedding axiom holds) and checks to see which has the true analogue of the structure theory of L (ℝ) under the assumption of AD L (ℝ) . It is already known that certain pieces of the structure theory cannot hold in L Ω . But it is open whether they can hold in L Ω S .

Let us consider one such (very optimistic) scenario: The true analogue of the structure theory of L (ℝ) under AD L (ℝ) holds of the L ( V λ+1 ) of L Ω S but not of any of its variants. Moreover, this structure theory is “effectively complete” for the theory of V λ+1 . Assuming that there is a proper class of λ where the embedding axiom holds, this gives an “effectively complete” theory of V . And, remarkably, part of that theory is that V must be L Ω S . This (admittedly very optimistic) scenario would constitute a very strong case for axioms that resolve all of the undecided statements.

One should not place too much weight on this particular scenario. It is just one of many. The point is that we are now in a position to write down a list of definite questions with the following features: First, the questions on this list will have answers—independence is not an issue. Second, if the answers converge then one will have strong evidence for new axioms settling the undecided statements (and hence non-pluralism about the universe of sets); while if the answers oscillate, one will have evidence that these statements are “absolutely undecidable” and this will strengthen the case for pluralism. In this way the questions of “absolute undecidability” and pluralism are given mathematical traction.

Further Reading : For more on the structure theory of L ( V λ+1 ) and the parallel with determinacy see Woodin (2011b).

  • Abraham, U. and M. Magidor, 2010, “Cardinal arithmetic,” in Foreman and Kanamori 2010.
  • Bagaria, J., N. Castells, and P. Larson, 2006, “An Ω-logic primer,” in J. Bagaria and S. Todorcevic (eds), Set theory , Trends in Mathematics, Birkhäuser, Basel, pp. 1–28.
  • Cohen, P., 1963, “The independence of the continuum hypothesis I,” Proceedings of the U.S. National Academy of Sciemces , 50: 1143–48.
  • Foreman, M. and A. Kanamori, 2010, Handbook of Set Theory , Springer-Verlag.
  • Foreman, M. and M. Magidor, 1995, “Large cardinals and definable counterexamples to the continuum hypothesis,” Annals of Pure and Applied Logic 76: 47–97.
  • Foreman, M., M. Magidor, and S. Shelah, 1988, “Martin's Maximum, saturated ideals, and non-regular ultrafilters. Part i,” Annals of Mathematics 127: 1–47.
  • Gödel, K., 1938a. “The consistency of the axiom of choice and of the generalized continuum-hypothesis,” Proceedings of the U.S. National Academy of Sciences , 24: 556–7.
  • Gödel, K., 1938b. “Consistency-proof for the generalized continuum-hypothesis,” Proceedings of the U.S. National Academy of Sciemces , 25: 220–4.
  • Hallett, M., 1984, Cantorian Set Theory and Limitation of Size , Vol. 10 of Oxford Logic Guides , Oxford University Press.
  • Holz, M., K. Steffens, and E. Weitz, 1999, Introduction to Cardinal Arithmetic , Birkhäuser Advanced Texts, Birkhäuser Verlag, Basel.
  • Jech, T. J., 2003, Set Theory: Third Millennium Edition, Revised and Expanded , Springer-Verlag, Berlin.
  • Ketchersid, R., P. Larson, and J. Zapletal, 2010, “Regular embeddings of the stationary tower and Woodin's Sigma-2-2 maximality theorem.” Journal of Symbolic Logic 75(2):711–727.
  • Koellner, P., 2010, “Strong logics of first and second order,” Bulletin of Symbolic Logic 16(1): 1–36.
  • Koellner, P. and W. H. Woodin, 2009, “Incompatible Ω-complete theories,” The Journal of Symbolic Logic 74 (4).
  • Martin, D. A., 1976, “Hilbert's first problem: The Continuum Hypothesis,” in F. Browder (ed.), Mathematical Developments Arising from Hilbert's Problems , Vol. 28 of Proceedings of Symposia in Pure Mathematics , American Mathematical Society, Providence, pp. 81–92.
  • Mitchell, W., 2010, “Beginning inner model theory,” in Foreman and Kanamori 2010.
  • Steel, J. R., 2010, “An outline of inner model theory,” in Foreman and Kanamori 2010.
  • Woodin, W. H., 1999, The Axiom of Determinacy, Forcing Axioms, and the Nonstationary Ideal , Vol. 1 of de Gruyter Series in Logic and its Applications , de Gruyter, Berlin.
  • –––, 2001a, “The continuum hypothesis, part I,” Notices of the American Mathematical Society 48(6): 567–576.
  • –––, 2001b, “The continuum hypothesis, part II,” Notices of the American Mathematical Society 48(7): 681–690.
  • –––, 2005a, “The continuum hypothesis,” in R. Cori, A. Razborov, S. Todorĉević and C. Wood (eds), Logic Colloquium 2000 , Vol. 19 of Lecture Notes in Logic , Association of Symbolic Logic, pp. 143–197.
  • –––, 2005b, “Set theory after Russell: the journey back to Eden,” in G. Link (ed.), One Hundred Years Of Russell's Paradox: Mathematics, Logic, Philosophy , Vol. 6 of de Gruyter Series in Logic and Its Applications , Walter De Gruyter Inc, pp. 29–47.
  • –––, 2010, “Suitable extender models I,” Journal of Mathematical Logic 10(1–2): 101–339.
  • –––, 2011a, “The Continuum Hypothesis, the generic-multiverse of sets, and the Ω-conjecture,” in J. Kennedy and R. Kossak, (eds), Set Theory, Arithmetic, and Foundations of Mathematics: Theorems, Philosophies , Vol. 36 of Lecture Notes in Logic , Cambridge University Press.
  • –––, 2011b, “Suitable extender models II,” Journal of Mathematical Logic 11(2): 115–436.
How to cite this entry . Preview the PDF version of this entry at the Friends of the SEP Society . Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers , with links to its database.

[Please contact the author with suggestions.]

Gödel, Kurt | set theory | set theory: early development | set theory: large cardinals and determinacy

Copyright © 2013 by Peter Koellner < koellner @ fas . harvard . edu >

  • Accessibility

Support SEP

Mirror sites.

View this site from another server:

  • Info about mirror sites

The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Cambridge University Faculty of Mathematics

Or search by topic

Number and algebra

  • The Number System and Place Value
  • Calculations and Numerical Methods
  • Fractions, Decimals, Percentages, Ratio and Proportion
  • Properties of Numbers
  • Patterns, Sequences and Structure
  • Algebraic expressions, equations and formulae
  • Coordinates, Functions and Graphs

Geometry and measure

  • Angles, Polygons, and Geometrical Proof
  • 3D Geometry, Shape and Space
  • Measuring and calculating with units
  • Transformations and constructions
  • Pythagoras and Trigonometry
  • Vectors and Matrices

Probability and statistics

  • Handling, Processing and Representing Data
  • Probability

Working mathematically

  • Thinking mathematically
  • Mathematical mindsets
  • Cross-curricular contexts
  • Physical and digital manipulatives

For younger learners

  • Early Years Foundation Stage

Advanced mathematics

  • Decision Mathematics and Combinatorics
  • Advanced Probability and Statistics

Published 2008 Revised 2019

Understanding Hypotheses

what is a hypothesis and conjecture

'What happens if ... ?' to ' This will happen if'

The experimentation of children continually moves on to the exploration of new ideas and the refinement of their world view of previously understood situations. This description of the playtime patterns of young children very nicely models the concept of 'making and testing hypotheses'. It follows this pattern:

  • Make some observations. Collect some data based on the observations.
  • Draw a conclusion (called a 'hypothesis') which will explain the pattern of the observations.
  • Test out your hypothesis by making some more targeted observations.

So, we have

  • A hypothesis is a statement or idea which gives an explanation to a series of observations.

Sometimes, following observation, a hypothesis will clearly need to be refined or rejected. This happens if a single contradictory observation occurs. For example, suppose that a child is trying to understand the concept of a dog. He reads about several dogs in children's books and sees that they are always friendly and fun. He makes the natural hypothesis in his mind that dogs are friendly and fun . He then meets his first real dog: his neighbour's puppy who is great fun to play with. This reinforces his hypothesis. His cousin's dog is also very friendly and great fun. He meets some of his friends' dogs on various walks to playgroup. They are also friendly and fun. He is now confident that his hypothesis is sound. Suddenly, one day, he sees a dog, tries to stroke it and is bitten. This experience contradicts his hypothesis. He will need to amend the hypothesis. We see that

  • Gathering more evidence/data can strengthen a hypothesis if it is in agreement with the hypothesis.
  • If the data contradicts the hypothesis then the hypothesis must be rejected or amended to take into account the contradictory situation.

what is a hypothesis and conjecture

  • A contradictory observation can cause us to know for certain that a hypothesis is incorrect.
  • Accumulation of supporting experimental evidence will strengthen a hypothesis but will never let us know for certain that the hypothesis is true.

In short, it is possible to show that a hypothesis is false, but impossible to prove that it is true!

Whilst we can never prove a scientific hypothesis to be true, there will be a certain stage at which we decide that there is sufficient supporting experimental data for us to accept the hypothesis. The point at which we make the choice to accept a hypothesis depends on many factors. In practice, the key issues are

  • What are the implications of mistakenly accepting a hypothesis which is false?
  • What are the cost / time implications of gathering more data?
  • What are the implications of not accepting in a timely fashion a true hypothesis?

For example, suppose that a drug company is testing a new cancer drug. They hypothesise that the drug is safe with no side effects. If they are mistaken in this belief and release the drug then the results could have a disastrous effect on public health. However, running extended clinical trials might be very costly and time consuming. Furthermore, a delay in accepting the hypothesis and releasing the drug might also have a negative effect on the health of many people.

In short, whilst we can never achieve absolute certainty with the testing of hypotheses, in order to make progress in science or industry decisions need to be made. There is a fine balance to be made between action and inaction.

Hypotheses and mathematics So where does mathematics enter into this picture? In many ways, both obvious and subtle:

  • A good hypothesis needs to be clear, precisely stated and testable in some way. Creation of these clear hypotheses requires clear general mathematical thinking.
  • The data from experiments must be carefully analysed in relation to the original hypothesis. This requires the data to be structured, operated upon, prepared and displayed in appropriate ways. The levels of this process can range from simple to exceedingly complex.

Very often, the situation under analysis will appear to be complicated and unclear. Part of the mathematics of the task will be to impose a clear structure on the problem. The clarity of thought required will actively be developed through more abstract mathematical study. Those without sufficient general mathematical skill will be unable to perform an appropriate logical analysis.

Using deductive reasoning in hypothesis testing

There is often confusion between the ideas surrounding proof, which is mathematics, and making and testing an experimental hypothesis, which is science. The difference is rather simple:

  • Mathematics is based on deductive reasoning : a proof is a logical deduction from a set of clear inputs.
  • Science is based on inductive reasoning : hypotheses are strengthened or rejected based on an accumulation of experimental evidence.

Of course, to be good at science, you need to be good at deductive reasoning, although experts at deductive reasoning need not be mathematicians. Detectives, such as Sherlock Holmes and Hercule Poirot, are such experts: they collect evidence from a crime scene and then draw logical conclusions from the evidence to support the hypothesis that, for example, Person M. committed the crime. They use this evidence to create sufficiently compelling deductions to support their hypotheses beyond reasonable doubt . The key word here is 'reasonable'. There is always the possibility of creating an exceedingly outlandish scenario to explain away any hypothesis of a detective or prosecution lawyer, but judges and juries in courts eventually make the decision that the probability of such eventualities are 'small' and the chance of the hypothesis being correct 'high'.

what is a hypothesis and conjecture

  • If a set of data is normally distributed with mean 0 and standard deviation 0.5 then there is a 97.7% certainty that a measurement will not exceed 1.0.
  • If the mean of a sample of data is 12, how confident can we be that the true mean of the population lies between 11 and 13?

It is at this point that making and testing hypotheses becomes a true branch of mathematics. This mathematics is difficult, but fascinating and highly relevant in the information-rich world of today.

To read more about the technical side of hypothesis testing, take a look at What is a Hypothesis Test?

You might also enjoy reading the articles on statistics on the Understanding Uncertainty website

This resource is part of the collection Statistics - Maths of Real Life

  • School Guide
  • Mathematics
  • Number System and Arithmetic
  • Trigonometry
  • Probability
  • Mensuration
  • Maths Formulas
  • Class 8 Maths Notes
  • Class 9 Maths Notes
  • Class 10 Maths Notes
  • Class 11 Maths Notes
  • Class 12 Maths Notes
  • Null Hypothesis
  • Hypothesis Testing Formula
  • Difference Between Hypothesis And Theory
  • Real-life Applications of Hypothesis Testing
  • Permutation Hypothesis Test in R Programming
  • Bayes' Theorem
  • Hypothesis in Machine Learning
  • Current Best Hypothesis Search
  • Understanding Hypothesis Testing
  • Hypothesis Testing in R Programming
  • Jobathon | Stats | Question 10
  • Jobathon | Stats | Question 17
  • Testing | Question 1
  • Difference between Null and Alternate Hypothesis
  • ML | Find S Algorithm
  • Python - Pearson's Chi-Square Test

Hypothesis is a testable statement that explains what is happening or observed. It proposes the relation between the various participating variables. Hypothesis is also called Theory, Thesis, Guess, Assumption, or Suggestion. Hypothesis creates a structure that guides the search for knowledge.

In this article, we will learn what is hypothesis, its characteristics, types, and examples. We will also learn how hypothesis helps in scientific research.

Hypothesis

What is Hypothesis?

A hypothesis is a suggested idea or plan that has little proof, meant to lead to more study. It’s mainly a smart guess or suggested answer to a problem that can be checked through study and trial. In science work, we make guesses called hypotheses to try and figure out what will happen in tests or watching. These are not sure things but rather ideas that can be proved or disproved based on real-life proofs. A good theory is clear and can be tested and found wrong if the proof doesn’t support it.

Hypothesis Meaning

A hypothesis is a proposed statement that is testable and is given for something that happens or observed.
  • It is made using what we already know and have seen, and it’s the basis for scientific research.
  • A clear guess tells us what we think will happen in an experiment or study.
  • It’s a testable clue that can be proven true or wrong with real-life facts and checking it out carefully.
  • It usually looks like a “if-then” rule, showing the expected cause and effect relationship between what’s being studied.

Characteristics of Hypothesis

Here are some key characteristics of a hypothesis:

  • Testable: An idea (hypothesis) should be made so it can be tested and proven true through doing experiments or watching. It should show a clear connection between things.
  • Specific: It needs to be easy and on target, talking about a certain part or connection between things in a study.
  • Falsifiable: A good guess should be able to show it’s wrong. This means there must be a chance for proof or seeing something that goes against the guess.
  • Logical and Rational: It should be based on things we know now or have seen, giving a reasonable reason that fits with what we already know.
  • Predictive: A guess often tells what to expect from an experiment or observation. It gives a guide for what someone might see if the guess is right.
  • Concise: It should be short and clear, showing the suggested link or explanation simply without extra confusion.
  • Grounded in Research: A guess is usually made from before studies, ideas or watching things. It comes from a deep understanding of what is already known in that area.
  • Flexible: A guess helps in the research but it needs to change or fix when new information comes up.
  • Relevant: It should be related to the question or problem being studied, helping to direct what the research is about.
  • Empirical: Hypotheses come from observations and can be tested using methods based on real-world experiences.

Sources of Hypothesis

Hypotheses can come from different places based on what you’re studying and the kind of research. Here are some common sources from which hypotheses may originate:

  • Existing Theories: Often, guesses come from well-known science ideas. These ideas may show connections between things or occurrences that scientists can look into more.
  • Observation and Experience: Watching something happen or having personal experiences can lead to guesses. We notice odd things or repeat events in everyday life and experiments. This can make us think of guesses called hypotheses.
  • Previous Research: Using old studies or discoveries can help come up with new ideas. Scientists might try to expand or question current findings, making guesses that further study old results.
  • Literature Review: Looking at books and research in a subject can help make guesses. Noticing missing parts or mismatches in previous studies might make researchers think up guesses to deal with these spots.
  • Problem Statement or Research Question: Often, ideas come from questions or problems in the study. Making clear what needs to be looked into can help create ideas that tackle certain parts of the issue.
  • Analogies or Comparisons: Making comparisons between similar things or finding connections from related areas can lead to theories. Understanding from other fields could create new guesses in a different situation.
  • Hunches and Speculation: Sometimes, scientists might get a gut feeling or make guesses that help create ideas to test. Though these may not have proof at first, they can be a beginning for looking deeper.
  • Technology and Innovations: New technology or tools might make guesses by letting us look at things that were hard to study before.
  • Personal Interest and Curiosity: People’s curiosity and personal interests in a topic can help create guesses. Scientists could make guesses based on their own likes or love for a subject.

Types of Hypothesis

Here are some common types of hypotheses:

Simple Hypothesis

Complex hypothesis, directional hypothesis.

  • Non-directional Hypothesis

Null Hypothesis (H0)

Alternative hypothesis (h1 or ha), statistical hypothesis, research hypothesis, associative hypothesis, causal hypothesis.

Simple Hypothesis guesses a connection between two things. It says that there is a connection or difference between variables, but it doesn’t tell us which way the relationship goes.
Complex Hypothesis tells us what will happen when more than two things are connected. It looks at how different things interact and may be linked together.
Directional Hypothesis says how one thing is related to another. For example, it guesses that one thing will help or hurt another thing.

Non-Directional Hypothesis

Non-Directional Hypothesis are the one that don’t say how the relationship between things will be. They just say that there is a connection, without telling which way it goes.
Null hypothesis is a statement that says there’s no connection or difference between different things. It implies that any seen impacts are because of luck or random changes in the information.
Alternative Hypothesis is different from the null hypothesis and shows that there’s a big connection or gap between variables. Scientists want to say no to the null hypothesis and choose the alternative one.
Statistical Hypotheis are used in math testing and include making ideas about what groups or bits of them look like. You aim to get information or test certain things using these top-level, common words only.
Research Hypothesis comes from the research question and tells what link is expected between things or factors. It leads the study and chooses where to look more closely.
Associative Hypotheis guesses that there is a link or connection between things without really saying it caused them. It means that when one thing changes, it is connected to another thing changing.
Causal Hypothesis are different from other ideas because they say that one thing causes another. This means there’s a cause and effect relationship between variables involved in the situation. They say that when one thing changes, it directly makes another thing change.

Hypothesis Examples

Following are the examples of hypotheses based on their types:

Simple Hypothesis Example

  • Studying more can help you do better on tests.
  • Getting more sun makes people have higher amounts of vitamin D.

Complex Hypothesis Example

  • How rich you are, how easy it is to get education and healthcare greatly affects the number of years people live.
  • A new medicine’s success relies on the amount used, how old a person is who takes it and their genes.

Directional Hypothesis Example

  • Drinking more sweet drinks is linked to a higher body weight score.
  • Too much stress makes people less productive at work.

Non-directional Hypothesis Example

  • Drinking caffeine can affect how well you sleep.
  • People often like different kinds of music based on their gender.
  • The average test scores of Group A and Group B are not much different.
  • There is no connection between using a certain fertilizer and how much it helps crops grow.

Alternative Hypothesis (Ha)

  • Patients on Diet A have much different cholesterol levels than those following Diet B.
  • Exposure to a certain type of light can change how plants grow compared to normal sunlight.
  • The average smarts score of kids in a certain school area is 100.
  • The usual time it takes to finish a job using Method A is the same as with Method B.
  • Having more kids go to early learning classes helps them do better in school when they get older.
  • Using specific ways of talking affects how much customers get involved in marketing activities.
  • Regular exercise helps to lower the chances of heart disease.
  • Going to school more can help people make more money.
  • Playing violent video games makes teens more likely to act aggressively.
  • Less clean air directly impacts breathing health in city populations.

Functions of Hypothesis

Hypotheses have many important jobs in the process of scientific research. Here are the key functions of hypotheses:

  • Guiding Research: Hypotheses give a clear and exact way for research. They act like guides, showing the predicted connections or results that scientists want to study.
  • Formulating Research Questions: Research questions often create guesses. They assist in changing big questions into particular, checkable things. They guide what the study should be focused on.
  • Setting Clear Objectives: Hypotheses set the goals of a study by saying what connections between variables should be found. They set the targets that scientists try to reach with their studies.
  • Testing Predictions: Theories guess what will happen in experiments or observations. By doing tests in a planned way, scientists can check if what they see matches the guesses made by their ideas.
  • Providing Structure: Theories give structure to the study process by arranging thoughts and ideas. They aid scientists in thinking about connections between things and plan experiments to match.
  • Focusing Investigations: Hypotheses help scientists focus on certain parts of their study question by clearly saying what they expect links or results to be. This focus makes the study work better.
  • Facilitating Communication: Theories help scientists talk to each other effectively. Clearly made guesses help scientists to tell others what they plan, how they will do it and the results expected. This explains things well with colleagues in a wide range of audiences.
  • Generating Testable Statements: A good guess can be checked, which means it can be looked at carefully or tested by doing experiments. This feature makes sure that guesses add to the real information used in science knowledge.
  • Promoting Objectivity: Guesses give a clear reason for study that helps guide the process while reducing personal bias. They motivate scientists to use facts and data as proofs or disprovals for their proposed answers.
  • Driving Scientific Progress: Making, trying out and adjusting ideas is a cycle. Even if a guess is proven right or wrong, the information learned helps to grow knowledge in one specific area.

How Hypothesis help in Scientific Research?

Researchers use hypotheses to put down their thoughts directing how the experiment would take place. Following are the steps that are involved in the scientific method:

  • Initiating Investigations: Hypotheses are the beginning of science research. They come from watching, knowing what’s already known or asking questions. This makes scientists make certain explanations that need to be checked with tests.
  • Formulating Research Questions: Ideas usually come from bigger questions in study. They help scientists make these questions more exact and testable, guiding the study’s main point.
  • Setting Clear Objectives: Hypotheses set the goals of a study by stating what we think will happen between different things. They set the goals that scientists want to reach by doing their studies.
  • Designing Experiments and Studies: Assumptions help plan experiments and watchful studies. They assist scientists in knowing what factors to measure, the techniques they will use and gather data for a proposed reason.
  • Testing Predictions: Ideas guess what will happen in experiments or observations. By checking these guesses carefully, scientists can see if the seen results match up with what was predicted in each hypothesis.
  • Analysis and Interpretation of Data: Hypotheses give us a way to study and make sense of information. Researchers look at what they found and see if it matches the guesses made in their theories. They decide if the proof backs up or disagrees with these suggested reasons why things are happening as expected.
  • Encouraging Objectivity: Hypotheses help make things fair by making sure scientists use facts and information to either agree or disagree with their suggested reasons. They lessen personal preferences by needing proof from experience.
  • Iterative Process: People either agree or disagree with guesses, but they still help the ongoing process of science. Findings from testing ideas make us ask new questions, improve those ideas and do more tests. It keeps going on in the work of science to keep learning things.

People Also View:

Mathematics Maths Formulas Branches of Mathematics

Summary – Hypothesis

A hypothesis is a testable statement serving as an initial explanation for phenomena, based on observations, theories, or existing knowledge. It acts as a guiding light for scientific research, proposing potential relationships between variables that can be empirically tested through experiments and observations. The hypothesis must be specific, testable, falsifiable, and grounded in prior research or observation, laying out a predictive, if-then scenario that details a cause-and-effect relationship. It originates from various sources including existing theories, observations, previous research, and even personal curiosity, leading to different types, such as simple, complex, directional, non-directional, null, and alternative hypotheses, each serving distinct roles in research methodology. The hypothesis not only guides the research process by shaping objectives and designing experiments but also facilitates objective analysis and interpretation of data, ultimately driving scientific progress through a cycle of testing, validation, and refinement.

FAQs on Hypothesis

What is a hypothesis.

A guess is a possible explanation or forecast that can be checked by doing research and experiments.

What are Components of a Hypothesis?

The components of a Hypothesis are Independent Variable, Dependent Variable, Relationship between Variables, Directionality etc.

What makes a Good Hypothesis?

Testability, Falsifiability, Clarity and Precision, Relevance are some parameters that makes a Good Hypothesis

Can a Hypothesis be Proven True?

You cannot prove conclusively that most hypotheses are true because it’s generally impossible to examine all possible cases for exceptions that would disprove them.

How are Hypotheses Tested?

Hypothesis testing is used to assess the plausibility of a hypothesis by using sample data

Can Hypotheses change during Research?

Yes, you can change or improve your ideas based on new information discovered during the research process.

What is the Role of a Hypothesis in Scientific Research?

Hypotheses are used to support scientific research and bring about advancements in knowledge.

Please Login to comment...

Similar reads.

author

  • Geeks Premier League 2023
  • Maths-Class-12
  • Geeks Premier League
  • School Learning

Improve your Coding Skills with Practice

 alt=

What kind of Experience do you want to share?

This is the Difference Between a Hypothesis and a Theory

What to Know A hypothesis is an assumption made before any research has been done. It is formed so that it can be tested to see if it might be true. A theory is a principle formed to explain the things already shown in data. Because of the rigors of experiment and control, it is much more likely that a theory will be true than a hypothesis.

As anyone who has worked in a laboratory or out in the field can tell you, science is about process: that of observing, making inferences about those observations, and then performing tests to see if the truth value of those inferences holds up. The scientific method is designed to be a rigorous procedure for acquiring knowledge about the world around us.

hypothesis

In scientific reasoning, a hypothesis is constructed before any applicable research has been done. A theory, on the other hand, is supported by evidence: it's a principle formed as an attempt to explain things that have already been substantiated by data.

Toward that end, science employs a particular vocabulary for describing how ideas are proposed, tested, and supported or disproven. And that's where we see the difference between a hypothesis and a theory .

A hypothesis is an assumption, something proposed for the sake of argument so that it can be tested to see if it might be true.

In the scientific method, the hypothesis is constructed before any applicable research has been done, apart from a basic background review. You ask a question, read up on what has been studied before, and then form a hypothesis.

What is a Hypothesis?

A hypothesis is usually tentative, an assumption or suggestion made strictly for the objective of being tested.

When a character which has been lost in a breed, reappears after a great number of generations, the most probable hypothesis is, not that the offspring suddenly takes after an ancestor some hundred generations distant, but that in each successive generation there has been a tendency to reproduce the character in question, which at last, under unknown favourable conditions, gains an ascendancy. Charles Darwin, On the Origin of Species , 1859 According to one widely reported hypothesis , cell-phone transmissions were disrupting the bees' navigational abilities. (Few experts took the cell-phone conjecture seriously; as one scientist said to me, "If that were the case, Dave Hackenberg's hives would have been dead a long time ago.") Elizabeth Kolbert, The New Yorker , 6 Aug. 2007

What is a Theory?

A theory , in contrast, is a principle that has been formed as an attempt to explain things that have already been substantiated by data. It is used in the names of a number of principles accepted in the scientific community, such as the Big Bang Theory . Because of the rigors of experimentation and control, its likelihood as truth is much higher than that of a hypothesis.

It is evident, on our theory , that coasts merely fringed by reefs cannot have subsided to any perceptible amount; and therefore they must, since the growth of their corals, either have remained stationary or have been upheaved. Now, it is remarkable how generally it can be shown, by the presence of upraised organic remains, that the fringed islands have been elevated: and so far, this is indirect evidence in favour of our theory . Charles Darwin, The Voyage of the Beagle , 1839 An example of a fundamental principle in physics, first proposed by Galileo in 1632 and extended by Einstein in 1905, is the following: All observers traveling at constant velocity relative to one another, should witness identical laws of nature. From this principle, Einstein derived his theory of special relativity. Alan Lightman, Harper's , December 2011

Non-Scientific Use

In non-scientific use, however, hypothesis and theory are often used interchangeably to mean simply an idea, speculation, or hunch (though theory is more common in this regard):

The theory of the teacher with all these immigrant kids was that if you spoke English loudly enough they would eventually understand. E. L. Doctorow, Loon Lake , 1979 Chicago is famous for asking questions for which there can be no boilerplate answers. Example: given the probability that the federal tax code, nondairy creamer, Dennis Rodman and the art of mime all came from outer space, name something else that has extraterrestrial origins and defend your hypothesis . John McCormick, Newsweek , 5 Apr. 1999 In his mind's eye, Miller saw his case suddenly taking form: Richard Bailey had Helen Brach killed because she was threatening to sue him over the horses she had purchased. It was, he realized, only a theory , but it was one he felt certain he could, in time, prove. Full of urgency, a man with a mission now that he had a hypothesis to guide him, he issued new orders to his troops: Find out everything you can about Richard Bailey and his crowd. Howard Blum, Vanity Fair , January 1995

And sometimes one term is used as a genus, or a means for defining the other:

Laplace's popular version of his astronomy, the Système du monde , was famous for introducing what came to be known as the nebular hypothesis , the theory that the solar system was formed by the condensation, through gradual cooling, of the gaseous atmosphere (the nebulae) surrounding the sun. Louis Menand, The Metaphysical Club , 2001 Researchers use this information to support the gateway drug theory — the hypothesis that using one intoxicating substance leads to future use of another. Jordy Byrd, The Pacific Northwest Inlander , 6 May 2015 Fox, the business and economics columnist for Time magazine, tells the story of the professors who enabled those abuses under the banner of the financial theory known as the efficient market hypothesis . Paul Krugman, The New York Times Book Review , 9 Aug. 2009

Incorrect Interpretations of "Theory"

Since this casual use does away with the distinctions upheld by the scientific community, hypothesis and theory are prone to being wrongly interpreted even when they are encountered in scientific contexts—or at least, contexts that allude to scientific study without making the critical distinction that scientists employ when weighing hypotheses and theories.

The most common occurrence is when theory is interpreted—and sometimes even gleefully seized upon—to mean something having less truth value than other scientific principles. (The word law applies to principles so firmly established that they are almost never questioned, such as the law of gravity.)

This mistake is one of projection: since we use theory in general use to mean something lightly speculated, then it's implied that scientists must be talking about the same level of uncertainty when they use theory to refer to their well-tested and reasoned principles.

The distinction has come to the forefront particularly on occasions when the content of science curricula in schools has been challenged—notably, when a school board in Georgia put stickers on textbooks stating that evolution was "a theory, not a fact, regarding the origin of living things." As Kenneth R. Miller, a cell biologist at Brown University, has said , a theory "doesn’t mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments.”

While theories are never completely infallible, they form the basis of scientific reasoning because, as Miller said "to the best of our ability, we’ve tested them, and they’ve held up."

More Differences Explained

  • Epidemic vs. Pandemic
  • Diagnosis vs. Prognosis
  • Treatment vs. Cure

Word of the Day

See Definitions and Examples »

Get Word of the Day daily email!

Games & Quizzes

Play Quordle: Guess all four words in a limited number of tries.  Each of your guesses must be a real 5-letter word.

Commonly Confused

'canceled' or 'cancelled', 'virus' vs. 'bacteria', your vs. you're: how to use them correctly, is it 'jail' or 'prison', 'deduction' vs. 'induction' vs. 'abduction', grammar & usage, more words you always have to look up, 'fewer' and 'less', 7 pairs of commonly confused words, more commonly misspelled words, every letter is silent, sometimes: a-z list of examples, great big list of beautiful and useless words, vol. 4, 9 other words for beautiful, why jaywalking is called jaywalking, the words of the week - may 17, birds say the darndest things.

Hypothesis

A statement that could be true, which might then be tested.

Example: Sam has a hypothesis that "large dogs are better at catching tennis balls than small dogs". We can test that hypothesis by having hundreds of different sized dogs try to catch tennis balls.

Sometimes the hypothesis won't be tested, it is simply a good explanation (which could be wrong). Conjecture is a better word for this.

Example: you notice the temperature drops just as the sun rises. Your hypothesis is that the sun warms the air high above you, which rises up and then cooler air comes from the sides.

Note: when someone says "I have a theory" they should say "I have a hypothesis", because in mathematics a theory is actually well proven.

Ask Difference

Conjecture vs. Hypothesis — What's the Difference?

what is a hypothesis and conjecture

Difference Between Conjecture and Hypothesis

Table of contents, key differences, comparison chart, compare with definitions, common curiosities, is every hypothesis tested, is a conjecture the same as a theory, can a conjecture be used in scientific discussion, can a conjecture be proven true, what happens if a hypothesis is proven wrong, can a hypothesis turn into a theory, can conjectures be useful, can a hypothesis be based on a conjecture, why is testing important for a hypothesis, how can one differentiate between conjecture and hypothesis in a discussion, do all scientific studies begin with a hypothesis, can a proven hypothesis ever be proven wrong later on, can one hypothesis lead to another, is every conjecture worth investigating, are conjectures more common in certain fields, share your discovery.

what is a hypothesis and conjecture

Author Spotlight

what is a hypothesis and conjecture

Popular Comparisons

what is a hypothesis and conjecture

Trending Comparisons

what is a hypothesis and conjecture

New Comparisons

what is a hypothesis and conjecture

Trending Terms

what is a hypothesis and conjecture

what is a hypothesis and conjecture

Riemann Hypothesis

DOWNLOAD Mathematica Notebook

A more general statement known as the generalized Riemann hypothesis conjectures that neither the Riemann zeta function nor any Dirichlet L-series has a zero with real part larger than 1/2.

Legend holds that the copy of Riemann's collected works found in Hurwitz's library after his death would automatically fall open to the page on which the Riemann hypothesis was stated (Edwards 2001, p. ix).

Proof of the Riemann hypothesis is number 8 of Hilbert's problems and number 1 of Smale's problems .

In 2000, the Clay Mathematics Institute ( http://www.claymath.org/ ) offered a $1 million prize ( http://www.claymath.org/millennium/Rules_etc/ ) for proof of the Riemann hypothesis. Interestingly, disproof of the Riemann hypothesis (e.g., by using a computer to actually find a zero off the critical line ), does not earn the $1 million award.

The Riemann hypothesis is equivalent to the statement that all the zeros of the Dirichlet eta function (a.k.a. the alternating zeta function)

By modifying a criterion of Robin (1984), Lagarias (2000) showed that the Riemann hypothesis is equivalent to the statement that

There is also a finite analog of the Riemann hypothesis concerning the location of zeros for function fields defined by equations such as

According to Fields medalist Enrico Bombieri, "The failure of the Riemann hypothesis would create havoc in the distribution of prime numbers" (Havil 2003, p. 205).

In Ron Howard's 2001 film A Beautiful Mind , John Nash (played by Russell Crowe) is hindered in his attempts to solve the Riemann hypothesis by the medication he is taking to treat his schizophrenia.

In the Season 1 episode " Prime Suspect " (2005) of the television crime drama NUMB3RS , math genius Charlie Eppes realizes that character Ethan's daughter has been kidnapped because he is close to solving the Riemann hypothesis, which allegedly would allow the perpetrators to break essentially all internet security.

In the novel Life After Genius (Jacoby 2008), the main character Theodore "Mead" Fegley (who is only 18 and a college senior) tries to prove the Riemann Hypothesis for his senior year research project. He also uses a Cray Supercomputer to calculate several billion zeroes of the Riemann zeta function. In several dream sequences within the book, Mead has conversations with Bernhard Riemann about the problem and mathematics in general.

Portions of this entry contributed by Len Goodman

Explore with Wolfram|Alpha

WolframAlpha

More things to try:

  • riemann hypothesis
  • derangements on 12 elements

Cite this as:

Goodman, Len and Weisstein, Eric W. "Riemann Hypothesis." From MathWorld --A Wolfram Web Resource. https://mathworld.wolfram.com/RiemannHypothesis.html

Subject classifications

Maths and Physics Tuition/Tests/Notes

  • Skip to content
  • Jump to main navigation and login
  • Jump to additional information

Nav view search

  • Testimonials
  • Rates & FAQS
  • Maths/Physics Notes
  • Online Tests
  • 10 Lesson Offers

what is a hypothesis and conjecture

The Difference Between a Conjecture, Hypothesis, Thesis, Theory and Law

  • " onclick="window.open(this.href,'win2','status=no,toolbar=no,scrollbars=yes,titlebar=no,menubar=no,resizable=yes,width=640,height=480,directories=no,location=no'); return false;" rel="nofollow"> Print

A conjecture is a proposition that is unproven but appears correct and has not been unproven. For example Goldbach's conjecture, which states that every number greater than 2 can be written as the sum of two primes.There is a great deal of numerical evidence to imply that it is true, but it is still unproven.

A hypothesis is a proposed explanation for an observation, and is stronger than a conjecture. It is directly testable, and often serves as the starting point of an experiment, which seeks to show that it is either true or false.

A thesis is an unproven statement put forward as a premise in an argument. For example Church's thesis in logic proposes that every computable function is computable by a Turing machine.

A theory is an explanation of reality that has been thoroughly tested so that most scientists agree on it. It can be changed if new information is found. Theory is different from a working hypothesis, which is a theory that hasn't been fully tested; that is, a hypothesis is an unproven theory.

A physical law or scientific law is a scientific generalization based on empirical observations of physical behaviour, often over many years. They describe observable phenomena and patterns and can be used to make predictions. Empirical laws are typically conclusions based on repeated scientific experiments and simple observations, over many years, and which have become accepted universally within the scientific community. The production of a summary description of our environment in the form of such laws is a fundamental aim of science.

Comments    

Add comment.

Name (required)

E-mail (required, but will not display)

Security code

Latest Blog Posts

22 may 2024.

  • My slippers
  • How Injuries Breed

Facebook

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts
  • 14 May 2024

Why mathematics is set to be revolutionized by AI

what is a hypothesis and conjecture

  • Thomas Fink 0

Thomas Fink is the director of the London Institute for Mathematical Sciences, UK.

You can also search for this author in PubMed   Google Scholar

You have full access to this article via your institution.

Giving birth to a conjecture — a proposition that is suspected to be true, but needs definitive proof — can feel to a mathematician like a moment of divine inspiration. Mathematical conjectures are not merely educated guesses. Formulating them requires a combination of genius, intuition and experience. Even a mathematician can struggle to explain their own discovery process. Yet, counter-intuitively, I think that this is the realm in which machine intelligence will initially be most transformative.

In 2017, researchers at the London Institute for Mathematical Sciences, of which I am director, began applying machine learning to mathematical data as a hobby. During the COVID-19 pandemic, they discovered that simple artificial intelligence (AI) classifiers can predict an elliptic curve’s rank 1 — a measure of its complexity. Elliptic curves are fundamental to number theory, and understanding their underlying statistics is a crucial step towards solving one of the seven Millennium Problems, which are selected by the Clay Mathematics Institute in Providence, Rhode Island, and carry a prize of US$1 million each. Few expected AI to make a dent in this high-stakes arena.

what is a hypothesis and conjecture

AI now beats humans at basic tasks — new benchmarks are needed, says major report

AI has made inroads in other areas, too. A few years ago, a computer program called the Ramanujan Machine produced new formulae for fundamental constants 2 , such as π and e . It did so by exhaustively searching through families of continued fractions — a fraction whose denominator is a number plus a fraction whose denominator is also a number plus a fraction and so on. Some of these conjectures have since been proved, whereas others remain open problems.

Another example pertains to knot theory, a branch of topology in which a hypothetical piece of string is tangled up before the ends are glued together. Researchers at Google DeepMind, based in London, trained a neural network on data for many different knots and discovered an unexpected relationship between their algebraic and geometric structures 3 .

How has AI made a difference in areas of mathematics in which human creativity was thought to be essential?

First, there are no coincidences in maths. In real-world experiments, false negatives and false positives abound. But in maths, a single counterexample leaves a conjecture dead in the water. For example, the Pólya conjecture states that most integers below any given integer have an odd number of prime factors. But in 1960, it was found that the conjecture does not hold for the number 906,180,359. In one fell swoop, the conjecture was falsified.

Second, mathematical data — on which AI can be trained — are cheap. Primes, knots and many other types of mathematical object are abundant. The On-Line Encyclopedia of Integer Sequences (OEIS) contains almost 375,000 sequences — from the familiar Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, ...) to the formidable Busy Beaver sequence (0, 1, 4, 6, 13, …), which grows faster than any computable function. Scientists are already using machine-learning tools to search the OEIS database to find unanticipated relationships.

what is a hypothesis and conjecture

DeepMind AI outdoes human mathematicians on unsolved problem

AI can help us to spot patterns and form conjectures. But not all conjectures are created equal. They also need to advance our understanding of mathematics. In his 1940 essay A Mathematician’s Apology , G. H. Hardy explains that a good theorem “should be one which is a constituent in many mathematical constructs, which is used in the proof of theorems of many different kinds”. In other words, the best theorems increase the likelihood of discovering new theorems. Conjectures that help us to reach new mathematical frontiers are better than those that yield fewer insights. But distinguishing between them requires an intuition for how the field itself will evolve. This grasp of the broader context will remain out of AI’s reach for a long time — so the technology will struggle to spot important conjectures.

But despite the caveats, there are many upsides to wider adoption of AI tools in the maths community. AI can provide a decisive edge and open up new avenues for research.

Mainstream mathematics journals should also publish more conjectures. Some of the most significant problems in maths — such as Fermat’s Last Theorem, the Riemann hypothesis, Hilbert’s 23 problems and Ramanujan’s many identities — and countless less-famous conjectures have shaped the course of the field. Conjectures speed up research by pointing us in the right direction. Journal articles about conjectures, backed up by data or heuristic arguments, will accelerate discovery.

Last year, researchers at Google DeepMind predicted 2.2 million new crystal structures 4 . But it remains to be seen how many of these potential new materials are stable, can be synthesized and have practical applications. For now, this is largely a task for human researchers, who have a grasp of the broad context of materials science.

Similarly, the imagination and intuition of mathematicians will be required to make sense of the output of AI tools. Thus, AI will act only as a catalyst of human ingenuity, rather than a substitute for it.

Nature 629 , 505 (2024)

doi: https://doi.org/10.1038/d41586-024-01413-w

He, Y.-H., Lee, K.-H., Oliver, T. & Pozdnyakov, A. Preprint at arXiv https://doi.org/10.48550/arXiv.2204.10140 (2024).

Raayoni, G. et al. Nature 590 , 67–73 (2021).

Article   PubMed   Google Scholar  

Davies, A. et al. Nature 600 , 70–74 (2021).

Merchant, A. et al. Nature 624 , 80–85 (2023).

Download references

Reprints and permissions

Competing Interests

The author declares no competing interests.

Related Articles

what is a hypothesis and conjecture

  • Machine learning
  • Mathematics and computing

First ‘bilingual’ brain-reading device decodes Spanish and English words

First ‘bilingual’ brain-reading device decodes Spanish and English words

News 21 MAY 24

DeepLabCut: the motion-tracking tool that went viral

DeepLabCut: the motion-tracking tool that went viral

Technology Feature 20 MAY 24

How does ChatGPT ‘think’? Psychology and neuroscience crack open AI large language models

How does ChatGPT ‘think’? Psychology and neuroscience crack open AI large language models

News Feature 14 MAY 24

The dream of electronic newspapers becomes a reality — in 1974

The dream of electronic newspapers becomes a reality — in 1974

News & Views 07 MAY 24

3D genomic mapping reveals multifocality of human pancreatic precancers

3D genomic mapping reveals multifocality of human pancreatic precancers

Article 01 MAY 24

AI’s keen diagnostic eye

AI’s keen diagnostic eye

Outlook 18 APR 24

Postdoctoral Fellow

New Orleans, Louisiana

Tulane University School of Medicine

what is a hypothesis and conjecture

Postdoctoral Associate - Immunology

Houston, Texas (US)

Baylor College of Medicine (BCM)

what is a hypothesis and conjecture

Postdoctoral Associate

Vice president, nature communications portfolio.

This is an exciting opportunity to play a key leadership role in the market-leading journal Nature Portfolio and help drive its overall contribution.

New York City, New York (US), Berlin, or Heidelberg

Springer Nature Ltd

what is a hypothesis and conjecture

Senior Postdoctoral Research Fellow

Senior Postdoctoral Research Fellow required to lead exciting projects in Cancer Cell Cycle Biology and Cancer Epigenetics.

Melbourne University, Melbourne (AU)

University of Melbourne & Peter MacCallum Cancer Centre

what is a hypothesis and conjecture

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

what is a hypothesis and conjecture

The Fermi Paradox and the Berserker Hypothesis: Exploring Cosmic Silence Through Science Fiction

I n the realm of cosmic conundrums, the Fermi Paradox stands out: why, in a universe replete with billions of stars and planets, have we yet to find any signs of extraterrestrial intelligent life? The “berserker hypothesis,” a spine-chilling explanation rooted in science and popularized by science fiction, suggests a grim answer to this enduring mystery.

The concept’s moniker traces back to Fred Saberhagen’s “Berserker” series of novels, and it paints a picture of the cosmos where intelligent life forms are systematically eradicated by self-replicating probes, known as “berserkers.” These probes, initially intended to explore and report back, turn rogue and annihilate any signs of civilizations they encounter. The hypothesis emerges as a rather dark twist on the concept of von Neumann probes—machines capable of self-replication using local resources, which could theoretically colonize the galaxy rapidly.

Diving into the technicalities, the berserker hypothesis operates as a potential solution to the Hart-Tipler conjecture, which posits the lack of detectable probes as evidence that no intelligent life exists outside our solar system. Instead, this hypothesis flips the script: the absence of such probes doesn’t point to a lack of life but rather to the possibility that these probes have become cosmic predators, leaving a trail of silence in their wake.

Astronomer David Brin’s chilling summation underscores the potential severity of the hypothesis: “It need only happen once for the results of this scenario to become the equilibrium conditions in the Galaxy…because all were killed shortly after discovering radio.” If these berserker probes exist and are as efficient as theorized, then humanity’s attempts at communication with extraterrestrial beings could be akin to lighting a beacon for our own destruction.

Despite its foundation in speculative thought, the theory isn’t without its scientific evaluations. Anders Sandberg and Stuart Armstrong from the Future of Humanity Institute speculated that, given the vastness of the universe and even a slow replication rate, these berserker probes—if they existed—would likely have already found and destroyed us. It’s both a chilling and somewhat reassuring analysis that treads the line between fiction and potential reality.

Within the eclectic array of solutions to the Fermi Paradox, the berserker hypothesis stands out for its seamless blend of science fiction inspiration and scientific discourse. It connects with other notions such as the Great Filter, which suggests that life elsewhere in the universe is being systematically snuffed out before it can reach a space-faring stage, and the Dark Forest hypothesis, which posits that civilizations remain silent to avoid detection by such cosmic hunters.

Relevant articles:

– TIL about the berserker hypothesis, a proposed solution to the Fermi paradox stating the reason why we haven’t found other sentient species yet is because those species have been wiped out by self-replicating “berserker” probes.

– The Berserker Hypothesis: The Darkest Explanation Of The Fermi Paradox

– Beyond “Fermi’s Paradox” VI: What is the Berserker Hypothesis?

In the realm of cosmic conundrums, the Fermi Paradox stands out: why, in a universe replete with billions of stars and planets, have we yet to find any signs of extraterrestrial intelligent life? The “berserker hypothesis,” a spine-chilling explanation rooted in science and popularized by science fiction, suggests a grim answer to this enduring mystery. […]

IMAGES

  1. Conjecture vs Hypothesis: Deciding Between Similar Terms

    what is a hypothesis and conjecture

  2. Research Hypothesis: Definition, Types, Examples and Quick Tips

    what is a hypothesis and conjecture

  3. What is an Hypothesis

    what is a hypothesis and conjecture

  4. What is a Hypothesis

    what is a hypothesis and conjecture

  5. Difference Between Hypothesis and Theory

    what is a hypothesis and conjecture

  6. How to Write a Hypothesis: The Ultimate Guide with Examples

    what is a hypothesis and conjecture

VIDEO

  1. What Is A Hypothesis?

  2. Concept of Hypothesis

  3. hypothesis,conjecture,surmise,postulate,hunch,supposition ASL

  4. No, it's never "just a theory"

  5. May 4, 2024

  6. PRACTICAL RESEARCH 2

COMMENTS

  1. Conjecture vs Hypothesis: Deciding Between Similar Terms

    A hypothesis is more specific and testable than a conjecture, often serving as a working assumption in scientific research. When incorporating "hypothesis" into a sentence, it is crucial to emphasize its potential for verification or refutation.

  2. Conjecture and hypothesis: The importance of reality checks

    Conjecture is an idea, hypothesis is a conjecture that can be tested by experiment or observation, and consensus emerges when other interested colleagues agree that evidence supports a hypothesis that has explanatory value. This approach is clearly relevant to origins of life research which is still at a stage where multiple conjectures abound ...

  3. Conjecture

    Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann ...

  4. Conjecture vs Hypothesis

    As nouns the difference between conjecture and hypothesis. is that conjecture is a statement or an idea which is unproven, but is thought to be true; a guess while hypothesis is used loosely, a tentative conjecture explaining an observation, phenomenon or scientific problem that can be tested by further observation, investigation and/or ...

  5. Conjectures

    A conjecture is a mathematical statement that has not yet been rigorously proved. Conjectures arise when one notices a pattern that holds true for many cases. However, just because a pattern holds true for many cases does not mean that the pattern will hold true for all cases. Conjectures must be proved for the mathematical observation to be fully accepted. When a conjecture is rigorously ...

  6. Axiom, Corollary, Lemma, Postulate, Conjectures and Theorems

    A "hypothesis" is an assumption made. For example, "If xx is an even integer, ... A conjecture is a mathematical statement that has not yet been rigorously proved. Conjectures arise when one notices a pattern that holds true for many cases. However, just because a pattern holds true for many cases does not mean that the pattern will hold ...

  7. Difference between axioms, theorems, postulates, corollaries, and

    $\begingroup$ One difficulty is that, for historical reasons, various results have a specific term attached (Parallel postulate, Zorn's lemma, Riemann hypothesis, Collatz conjecture, Axiom of determinacy). These do not always agree with the the usual usage of the words. Also, some theorems have unique names, for example Hilbert's Nullstellensatz.

  8. Theory vs. Hypothesis: Basics of the Scientific Method

    Theory vs. Hypothesis: Basics of the Scientific Method. Written by MasterClass. Last updated: Jun 7, 2021 • 2 min read. Though you may hear the terms "theory" and "hypothesis" used interchangeably, these two scientific terms have drastically different meanings in the world of science.

  9. The Continuum Hypothesis

    The continuum hypothesis (CH) is one of the most central open problems in set theory, one that is important for both mathematical and philosophical reasons. The problem actually arose with the birth of set theory; indeed, in many respects it stimulated the birth of set theory. In 1874 Cantor had shown that there is a one-to-one correspondence ...

  10. Understanding Hypotheses

    A hypothesis is a statement or idea which gives an explanation to a series of observations. Sometimes, following observation, a hypothesis will clearly need to be refined or rejected. This happens if a single contradictory observation occurs. For example, suppose that a child is trying to understand the concept of a dog.

  11. Conjecture Definition & Meaning

    conjecture: [noun] inference formed without proof or sufficient evidence. a conclusion deduced by surmise or guesswork. a proposition (as in mathematics) before it has been proved or disproved.

  12. What is Hypothesis

    Hypothesis is a testable statement that explains what is happening or observed. It proposes the relation between the various participating variables. Hypothesis is also called Theory, Thesis, Guess, Assumption, or Suggestion. Hypothesis creates a structure that guides the search for knowledge. In this article, we will learn what is hypothesis ...

  13. The Subtle Art of the Mathematical Conjecture

    It included all-time favorites like the Riemann hypothesis — often considered the greatest of great conjectures, one that has remained the Everest of mathematics for over a century. When Hilbert was asked what would be the first thing he'd like to know after awakening from a 500-year slumber, he immediately picked this conjecture.

  14. Hypothesis vs. Theory: The Difference Explained

    A hypothesis is an assumption made before any research has been done. It is formed so that it can be tested to see if it might be true. A theory is a principle formed to explain the things already shown in data. Because of the rigors of experiment and control, it is much more likely that a theory will be true than a hypothesis.

  15. Hypothesis Definition (Illustrated Mathematics Dictionary)

    Conjecture is a better word for this. Example: you notice the temperature drops just as the sun rises. Your hypothesis is that the sun warms the air high above you, which rises up and then cooler air comes from the sides. Note: when someone says "I have a theory" they should say "I have a hypothesis", because in mathematics a theory is actually ...

  16. Conjecture vs. Hypothesis

    Key Differences. Conjecture, in its essence, is an educated guess made without concrete or adequate evidence. It's often used in casual or non-scientific contexts, based on incomplete information. Hypothesis, on the other hand, stands as a foundational concept in the scientific method, a proposed explanation made on the basis of limited ...

  17. Riemann hypothesis

    In mathematics, the Riemann hypothesis is the conjecture that the Riemann zeta function has its zeros only at the negative even integers and complex numbers with real part 1 / 2.Many consider it to be the most important unsolved problem in pure mathematics. It is of great interest in number theory because it implies results about the distribution of prime numbers.

  18. Hypothesis Testing Explained (How I Wish It Was Explained to Me)

    The curse of hypothesis testing is that we will never know if we are dealing with a True or a False Positive (Negative). All we can do is fill the confusion matrix with probabilities that are acceptable given our application. To be able to do that, we must start from a hypothesis. Step 1. Defining the hypothesis

  19. What is the difference between assumption and conjecture?

    You can form a conjecture that holds true for the evidence at hand, but which has limitations with regard to some unknown facts. A conjecture is closer to a hypothesis than it is to an assumption. You could also build a conjecture/hypothesis base upon previous assumptions (result,of course, may vary).

  20. Riemann Hypothesis -- from Wolfram MathWorld

    First published in Riemann's groundbreaking 1859 paper (Riemann 1859), the Riemann hypothesis is a deep mathematical conjecture which states that the nontrivial Riemann zeta function zeros, i.e., the values of s other than -2, -4, -6, ... such that zeta(s)=0 (where zeta(s) is the Riemann zeta function) all lie on the "critical line" sigma=R[s]=1/2 (where R[s] denotes the real part of s). A ...

  21. Riemann hypothesis

    Riemann hypothesis, in number theory, hypothesis by German mathematician Bernhard Riemann concerning the location of solutions to the Riemann zeta function, which is connected to the prime number theorem and has important implications for the distribution of prime numbers.Riemann included the hypothesis in a paper, "Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse" ("On the ...

  22. The Difference Between a Conjecture, Hypothesis, Thesis, Theory and Law

    A hypothesis is a proposed explanation for an observation, and is stronger than a conjecture. It is directly testable, and often serves as the starting point of an experiment, which seeks to show that it is either true or false. A thesis is an unproven statement put forward as a premise in an argument. For example Church's thesis in logic ...

  23. Why mathematics is set to be revolutionized by AI

    But in 1960, it was found that the conjecture does not hold for the number 906,180,359. In one fell swoop, the conjecture was falsified. ... the Riemann hypothesis, Hilbert's 23 problems and ...

  24. The Fermi Paradox and the Berserker Hypothesis: Exploring Cosmic ...

    The "berserker hypothesis," a spine-chilling explanation rooted in science and popularized by science fiction, suggests a grim answer to this enduring mystery. The concept's moniker traces ...

  25. Hart-Tipler conjecture

    The Hart-Tipler conjecture is the idea that an absence of detectable Von Neumann probes is contrapositive evidence that no intelligent life exists outside of the ... The firstborn hypothesis is a special case of the Hart-Tipler conjecture which states that no other intelligent life has been discovered because humanity is the first ...

  26. Mathematicians Prove 30-Year-Old André-Oort Conjecture

    In a striking proof posted in September, three mathematicians have solved a 30-year-old problem called the André-Oort conjecture and advanced the centuries-long quest to understand the solutions of polynomial equations.The work draws on ideas that span nearly the breadth of the field. "The methods used to approach it cover, I would say, the whole of mathematics," said Andrei Yafaev of ...

  27. Mertens conjecture

    In mathematics, the Mertens conjecture is the statement that the Mertens function is bounded by . Although now disproven, it had been shown to imply the Riemann hypothesis. It was conjectured by Thomas Joannes Stieltjes, in an 1885 letter to Charles Hermite (reprinted in Stieltjes ( 1905 )), and again in print by Franz Mertens ( 1897 ), and ...