P-Value in Statistical Hypothesis Tests: What is it?

P value definition.

A p value is used in hypothesis testing to help you decide whether to reject the null hypothesis. The p value measures the evidence against the null hypothesis: the smaller the p-value, the stronger the evidence that you should reject the null hypothesis.

P values are expressed as decimals, although it may be easier to understand them if you convert them to percentages. For example, a p value of 0.0254 is 2.54%. This means there is a 2.54% chance of seeing results at least as extreme as yours if the null hypothesis is true (i.e., if your results were due to chance alone). That's pretty tiny. On the other hand, a large p-value of .9 (90%) means results like yours would be quite likely under the null hypothesis, giving you no reason to attribute them to anything in your experiment. Therefore, the smaller the p-value, the more important ("significant") your results.

When you run a hypothesis test, you compare the p value from your test to the alpha level you selected when you ran the test. Alpha levels can also be written as percentages.


P Value vs Alpha level

Alpha levels are controlled by the researcher and are related to confidence levels. You get an alpha level by subtracting your confidence level from 100%. For example, if you want to be 98 percent confident in your research, the alpha level would be 2% (100% – 98%). When you run the hypothesis test, the test will give you a p value. Compare that value to your chosen alpha level. For example, let's say you chose an alpha level of 5% (0.05). If the results from the test give you:

  • A small p (≤ 0.05): reject the null hypothesis. Your data provide strong evidence against the null hypothesis.
  • A large p (> 0.05): do not reject the null hypothesis. The evidence against the null is weak.


What if I Don’t Have an Alpha Level?

In an ideal world, you'll have set an alpha level. But if you do not, you can still use the following rough guidelines in deciding whether to reject the null hypothesis:

  • If p > .10 → “not significant”
  • If p ≤ .10 → “marginally significant”
  • If p ≤ .05 → “significant”
  • If p ≤ .01 → “highly significant.”

How to Calculate a P Value on the TI 83

Example question: The average wait time to see an E.R. doctor is said to be 150 minutes. You think the wait time is actually less. You take a random sample of 30 people and find their average wait is 148 minutes with a standard deviation of 5 minutes. Assume the distribution is normal. Find the p value for this test.

  • Press STAT then arrow over to TESTS.
  • Press ENTER for Z-Test.
  • Arrow over to Stats. Press ENTER.
  • Arrow down to μ0 and type 150. This is our null hypothesis mean.
  • Arrow down to σ. Type in your std dev: 5.
  • Arrow down to xbar. Type in your sample mean: 148.
  • Arrow down to n. Type in your sample size: 30.
  • Arrow to <μ0 for a left-tailed test. Press ENTER.
  • Arrow down to Calculate. Press ENTER. P is given as .014, or about 1%.

The probability of getting a sample mean as low as 148 minutes if the true average were 150 is tiny, so you should reject the null hypothesis.
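If you'd rather check the calculator's result off-device, here is a minimal Python sketch of the same left-tailed z-test (assuming SciPy is installed):

```python
from scipy import stats
import math

mu0, xbar, sigma, n = 150, 148, 5, 30  # null mean, sample mean, std dev, sample size

# Left-tailed z-test: the p-value is the area to the left of the z statistic
z = (xbar - mu0) / (sigma / math.sqrt(n))
p_value = stats.norm.cdf(z)

print(round(z, 2), round(p_value, 3))  # z ≈ -2.19, p ≈ 0.014
```

This matches the calculator's output of .014.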

Note: If you don't want to run a test, you could also use the TI 83's normalcdf function to get the area (which is the same thing as the probability value).



P-Value: Comprehensive Guide to Understand, Apply, and Interpret

A p-value is a statistical metric used to assess a hypothesis by comparing it with observed data.

This article delves into the concept of p-value, its calculation, interpretation, and significance. It also explores the factors that influence p-value and highlights its limitations.

Table of Content

  • What is the P-value?
  • How is the P-value calculated?
  • How to interpret the P-value
  • P-value in Hypothesis Testing
  • Implementing P-value in Python
  • Applications of P-value

What is the P-value?

The p-value, or probability value, is a statistical measure used in hypothesis testing to assess the strength of evidence against a null hypothesis. It represents the probability of obtaining results as extreme as, or more extreme than, the observed results under the assumption that the null hypothesis is true.

In simpler words, it is used to reject or support the null hypothesis during hypothesis testing. In data science, it gives valuable insights into the statistical significance of an independent variable in predicting the dependent variable.

Calculating the p-value typically involves the following steps:

  • Formulate the Null Hypothesis (H0) : Clearly state the null hypothesis, which typically states that there is no significant relationship or effect between the variables.
  • Choose an Alternative Hypothesis (H1) : Define the alternative hypothesis, which proposes the existence of a significant relationship or effect between the variables.
  • Determine the Test Statistic : Calculate the test statistic, which is a measure of the discrepancy between the observed data and the expected values under the null hypothesis. The choice of test statistic depends on the type of data and the specific research question.
  • Identify the Distribution of the Test Statistic : Determine the appropriate sampling distribution for the test statistic under the null hypothesis. This distribution represents the expected values of the test statistic if the null hypothesis is true.
  • Calculate the P-value : Based on the observed test statistic and its sampling distribution, find the probability of obtaining the observed test statistic or a more extreme one, assuming the null hypothesis is true. This probability is the p-value.
  • Interpret the results : Compare the p-value with the chosen significance level (equivalently, compare the test statistic with the critical value). If the p-value is smaller than the significance level, there is evidence to reject the null hypothesis; otherwise, fail to reject it.

Its interpretation depends on the specific test and the context of the analysis. Several popular tests, summarized below, produce the test statistics used in p-value calculations.

  • Z-test. Scenario: used when dealing with large sample sizes or when the population standard deviation is known. Interpretation: a small p-value (smaller than 0.05) indicates strong evidence against the null hypothesis, leading to its rejection.
  • T-test. Scenario: appropriate for small sample sizes or when the population standard deviation is unknown. Interpretation: similar to the Z-test.
  • Chi-square test. Scenario: used for tests of independence or goodness-of-fit. Interpretation: a small p-value indicates a significant association between the categorical variables, leading to the rejection of the null hypothesis.
  • F-test (ANOVA). Scenario: commonly used in Analysis of Variance (ANOVA) to compare variances between groups. Interpretation: a small p-value suggests that at least one group mean is different from the others, leading to the rejection of the null hypothesis.
  • Correlation test. Scenario: measures the strength and direction of a linear relationship between two continuous variables. Interpretation: a small p-value indicates a significant linear relationship between the variables, leading to rejection of the null hypothesis that there is no correlation.

In general, a small p-value indicates that the observed data is unlikely to have occurred by random chance alone, which leads to the rejection of the null hypothesis. However, it’s crucial to choose the appropriate test based on the nature of the data and the research question, as well as to interpret the p-value in the context of the specific test being used.

The table below shows the role of the p-value in hypothesis testing and the kinds of errors that can occur:

  • Reject the null hypothesis when it is true → Type I error.
  • Reject the null hypothesis when it is false → correct decision.
  • Fail to reject the null hypothesis when it is true → correct decision.
  • Fail to reject the null hypothesis when it is false → Type II error.

Type I error: incorrect rejection of a true null hypothesis, with probability denoted by α (the significance level). Type II error: incorrect failure to reject a false null hypothesis, with probability denoted by β (the power of the test is 1 − β).

Let’s consider an example to illustrate the process of calculating a p-value for Two Sample T-Test:

A researcher wants to investigate whether there is a significant difference in mean height between males and females in a population of university students.

Suppose we have the following data:

  • Males: \overline{x_1} = 175 cm, s_1 = 5, n_1 = 30
  • Females: \overline{x_2} = 168 cm, s_2 = 6, n_2 = 35

Let's walk through the process of calculating the p-value.

Step 1: Formulate the Null Hypothesis (H0):

H0: There is no significant difference in mean height between males and females.

Step 2: Choose an Alternative Hypothesis (H1):

H1: There is a significant difference in mean height between males and females.

Step 3: Determine the Test Statistic:

The appropriate test statistic for this scenario is the two-sample t-test, which compares the means of two independent groups.

The t-statistic is a measure of the difference between the means of two groups relative to the variability within each group. It is calculated as the difference between the sample means divided by the standard error of the difference. It is also known as the t-value or t-score.

t = \frac{\overline{x_1} - \overline{x_2}}{ \sqrt{\frac{(s_1)^2}{n_1} + \frac{(s_2)^2}{n_2}}}

  • \overline{x_1} = First sample’s mean
  • \overline{x_2} = Second sample’s mean
  • s1 = First sample’s standard deviation
  • s2 = Second sample’s standard deviation
  • n1 = First sample’s sample size
  • n2 = Second sample’s sample size

\begin{aligned}t &= \frac{175 - 168}{\sqrt{\frac{5^2}{30} + \frac{6^2}{35}}}\\&= \frac{7}{\sqrt{0.8333 + 1.0286}}\\&= \frac{7}{\sqrt{1.8619}}\\& \approx  \frac{7}{1.364}\\& \approx 5.13\end{aligned}

So, the calculated two-sample t-test statistic (t) is approximately 5.13.
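As a quick check, this t-statistic can be reproduced in Python directly from the summary statistics above (a minimal sketch using only the standard library):

```python
import math

x1, s1, n1 = 175, 5, 30   # males: sample mean, standard deviation, size
x2, s2, n2 = 168, 6, 35   # females: sample mean, standard deviation, size

# Two-sample t-statistic with an unpooled standard error,
# matching the formula above
t = (x1 - x2) / math.sqrt(s1**2 / n1 + s2**2 / n2)
print(round(t, 2))  # ≈ 5.13
```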

Step 4: Identify the Distribution of the Test Statistic:

The t-distribution is used for the two-sample t-test . The degrees of freedom for the t-distribution are determined by the sample sizes of the two groups.

 The t-distribution is a probability distribution with tails that are thicker than those of the normal distribution.

df = (n_1+n_2)-2

  • where, n1 is total number of values for 1st category.
  • n2 is total number of values for 2nd category.

df = (30 + 35) - 2 = 63

The degrees of freedom (63) represent the variability available in the data to estimate the population parameters. In the context of the two-sample t-test, higher degrees of freedom provide a more precise estimate of the population variance, influencing the shape and characteristics of the t-distribution.

[Figure: the t-distribution of the t-statistic]

The t-distribution is symmetric and bell-shaped, similar to the normal distribution. As the degrees of freedom increase, the t-distribution approaches the shape of the standard normal distribution. Practically, it affects the critical values used to determine statistical significance and confidence intervals.

Step 5: Calculate the Critical Value.

To find the critical t-value for a two-tailed test at a significance level of 0.05 with 63 degrees of freedom, we can either consult a t-table or use statistical software.

We can use the scipy.stats module in Python to find the critical t-value, as in the sketch below.
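A minimal sketch (assuming SciPy is installed); the two-tailed setup at α = 0.05 with df = 63 matches the comparison that follows:

```python
from scipy import stats

alpha = 0.05  # significance level
df = 63       # degrees of freedom from Step 4

# Two-tailed critical value: the t beyond which alpha/2 lies in each tail
t_critical = stats.t.ppf(1 - alpha / 2, df)
print(round(t_critical, 4))  # ≈ 1.9983
```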

Comparing the critical value with the t-statistic:

1.9983 < 5.13

The larger t-statistic suggests that the observed difference between the sample means is unlikely to have occurred by random chance alone. Therefore, we reject the null hypothesis.

Interpreting the p-value relative to the significance level (\alpha):

  • p ≤ (α = 0.05) : Reject the null hypothesis. There is sufficient evidence to conclude that the observed effect or relationship is statistically significant, meaning it is unlikely to have occurred by chance alone.
  • p > (α = 0.05) : Fail to reject the null hypothesis. The observed effect or relationship does not provide enough evidence to reject the null hypothesis. This does not necessarily mean there is no effect; it simply means the sample data do not provide strong enough evidence to rule out the possibility that the effect is due to chance.

In case the significance level is not specified, consider the below general inferences while interpreting your results. 

  • If p > .10: not significant
  • If p ≤ .10: slightly significant
  • If p ≤ .05: significant
  • If p ≤ .001: highly significant

Graphically, the p-value corresponds to the area in the tail(s) of the sampling distribution beyond the observed test statistic, as shown in Fig 1.

Fig 1: Graphical representation of the p-value in the tails of the distribution.

What influences p-value?

The p-value in hypothesis testing is influenced by several factors:

  • Sample Size : Larger sample sizes tend to yield smaller p-values, increasing the likelihood of detecting significant effects.
  • Effect Size: A larger effect size results in smaller p-values, making it easier to detect a significant relationship.
  • Variability in the Data : Greater variability often leads to larger p-values, making it harder to identify significant effects.
  • Significance Level : A lower chosen significance level increases the threshold for considering p-values as significant.
  • Choice of Test: Different statistical tests may yield different p-values for the same data.
  • Assumptions of the Test : Violations of test assumptions can impact p-values.

Understanding these factors is crucial for interpreting p-values accurately and making informed decisions in hypothesis testing.

Significance of P-value

  • The p-value provides a quantitative measure of the strength of the evidence against the null hypothesis.
  • It guides decision-making in hypothesis testing: comparing the p-value to the chosen significance level determines whether to reject the null hypothesis.
  • P-value serves as a guide for interpreting the results of a statistical test. A small p-value suggests that the observed effect or relationship is statistically significant, but it does not necessarily mean that it is practically or clinically meaningful.

Limitations of P-value

  • The p-value is not a direct measure of the effect size, which represents the magnitude of the observed relationship or difference between variables. A small p-value does not necessarily mean that the effect size is large or practically meaningful.
  • Influenced by various factors: the p-value depends on sample size, effect size, variability, and choice of test, so identical effects can produce different p-values in different studies.

The p-value is a crucial concept in statistical hypothesis testing, serving as a guide for making decisions about the significance of the observed relationship or effect between variables.

Let’s consider a scenario where a tutor believes that the average exam score of their students is equal to the national average (85). The tutor collects a sample of exam scores from their students and performs a one-sample t-test to compare it to the population mean (85).

  • The code below performs a one-sample t-test to compare the mean of a sample data set to a hypothesized population mean.
  • It utilizes the scipy.stats library to calculate the t-statistic and p-value. SciPy is a Python library that provides efficient numerical routines for scientific computing.
  • The p-value is compared to a significance level (alpha) to determine whether to reject the null hypothesis.
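The original code block did not survive here, so below is a minimal sketch of such a one-sample t-test. The exam scores are hypothetical stand-ins (the article's actual sample is not shown), so the printed p-value will differ from the 0.7059 discussed next:

```python
from scipy import stats

# Hypothetical sample of exam scores from the tutor's students
scores = [88, 92, 79, 85, 90, 84, 87, 83, 86, 91]
national_average = 85  # hypothesized population mean under H0

t_stat, p_value = stats.ttest_1samp(scores, national_average)

alpha = 0.05
if p_value <= alpha:
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}: reject the null hypothesis")
else:
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}: fail to reject the null hypothesis")
```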

Since 0.7059 > 0.05, we fail to reject the null hypothesis. This means that, based on the sample data, there isn't enough evidence to claim a significant difference in the exam scores of the tutor's students compared to the national average. The tutor would therefore treat the average exam score of their students as statistically consistent with the national average.

  • Feature selection in models: When fitting a model (say, a multiple linear regression model) using forward or backward selection, we use the p-value to find the variables that contribute most significantly to predicting the output.
  • Effects of drug treatments: The p-value is heavily used in medical research to determine whether the constituents of a drug will have the desired effect on humans. It is a very strong statistical tool in hypothesis testing, informing important decisions such as business intelligence inferences or whether a drug should be used on humans.

The p-value is a crucial concept in statistical hypothesis testing, providing a quantitative measure of the strength of evidence against the null hypothesis. It guides decision-making by comparing the p-value to a chosen significance level, typically 0.05. A small p-value indicates strong evidence against the null hypothesis, suggesting a statistically significant relationship or effect. However, the p-value is influenced by various factors and should be interpreted alongside other considerations, such as effect size and context.

Frequently Asked Questions (FAQs)

Can a p-value be greater than 1?

A p-value is a probability, and probabilities must be between 0 and 1. Therefore, a p-value greater than 1 is not possible.

What does p = 0.01 mean?

It means that the observed test statistic is unlikely to occur by chance if the null hypothesis is true. It represents a 1% chance of observing the test statistic or a more extreme one under the null hypothesis.

Is 0.9 a good p-value?

No. A p-value of 0.9 provides essentially no evidence against the null hypothesis. Conventionally, a p-value is considered statistically significant when it is less than or equal to 0.05, indicating strong evidence against the null hypothesis.

What is p-value in a model?

It is a measure of the statistical significance of a parameter in the model. It represents the probability of obtaining the observed value of the parameter or a more extreme one, assuming the null hypothesis is true.

Why is p-value so low?

A low p-value means that the observed test statistic is unlikely to occur by chance if the null hypothesis is true. It suggests that the observed relationship or effect is statistically significant and not due to random sampling variation.

How Can You Use P-value to Compare Two Different Results of a Hypothesis Test?

Compare the p-values: a lower p-value indicates stronger evidence against the null hypothesis, so the result with the smaller p-value provides stronger grounds for rejecting the null.


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


Hypothesis Testing, P Values, Confidence Intervals, and Significance

Jacob Shreffler; Martin R. Huecker


Last Update: March 13, 2023.

  • Definition/Introduction

Medical providers often rely on evidence-based medicine to guide decision-making in practice. Often a research hypothesis is tested with results provided, typically with p values, confidence intervals, or both. Additionally, statistical or research significance is estimated or determined by the investigators. Unfortunately, healthcare providers may have different comfort levels in interpreting these findings, which may affect the adequate application of the data.

  • Issues of Concern

Without a foundational understanding of hypothesis testing, p values, confidence intervals, and the difference between statistical and clinical significance, healthcare providers may be limited in their ability to make clinical decisions without relying purely on the level of significance deemed by the research investigators. Therefore, an overview of these concepts is provided to allow medical professionals to use their expertise to determine if results are reported sufficiently and if the study outcomes are clinically appropriate to be applied in healthcare practice.

Hypothesis Testing

Investigators conducting studies need research questions and hypotheses to guide analyses. Starting with broad research questions (RQs), investigators then identify a gap in current clinical practice or research. Any research problem or statement is grounded in a better understanding of relationships between two or more variables. For this article, we will use the following research question example:

Research Question: Is Drug 23 an effective treatment for Disease A?

Research questions do not directly imply specific guesses or predictions; we must formulate research hypotheses. A hypothesis is a predetermined declaration regarding the research question in which the investigator(s) makes a precise, educated guess about a study outcome. This is sometimes called the alternative hypothesis and ultimately allows the researcher to take a stance based on experience or insight from medical literature. An example of a hypothesis is below.

Research Hypothesis: Drug 23 will significantly reduce symptoms associated with Disease A compared to Drug 22.

The null hypothesis states that there is no statistical difference between groups based on the stated research hypothesis.

Null Hypothesis: There is no significant difference in the reduction of symptoms for Disease A between Drug 23 and Drug 22.

The null hypothesis is deemed true until a study presents significant data to support rejecting it. Based on the results, the investigators will either reject the null hypothesis (if they find significant differences or associations) or fail to reject the null hypothesis (if they cannot provide proof of significant differences or associations). Regarding p values, as the number of individuals enrolled in a study (the sample size) increases, the likelihood of finding a statistically significant effect increases; with very large sample sizes, even small differences can be statistically significant. Researchers should also be aware of journal recommendations when considering how to report p values, and manuscripts should remain internally consistent.

To test a hypothesis, researchers obtain data on a representative sample to determine whether to reject or fail to reject a null hypothesis. In most research studies, it is not feasible to obtain data for an entire population. Using a sampling procedure allows for statistical inference, though this involves a certain possibility of error. [1]  When determining whether to reject or fail to reject the null hypothesis, mistakes can be made: Type I and Type II errors. Though it is impossible to ensure that these errors have not occurred, researchers should limit the possibilities of these faults. [2]

Significance

Significance is a term to describe the substantive importance of medical research. Statistical significance is the likelihood of results due to chance. [3]  Healthcare providers should always delineate statistical significance from clinical significance, a common error when reviewing biomedical research. [4]  When conceptualizing findings reported as either significant or not significant, healthcare providers should not simply accept researchers' results or conclusions without considering the clinical significance. Healthcare professionals should consider the clinical importance of findings and understand both p values and confidence intervals so they do not have to rely on the researchers to determine the level of significance. [5]  One criterion often used to determine statistical significance is the utilization of p values.

P values are used in research to determine whether the sample estimate is significantly different from a hypothesized value. The p-value is the probability that the observed effect within the study would have occurred by chance if, in reality, there was no true effect. Conventionally, data yielding a p<0.05 or p<0.01 is considered statistically significant. While some have debated that the 0.05 level should be lowered, it is still universally practiced. [6] Note that hypothesis testing tells us whether an effect exists, not the size of the effect.

Examples of findings reported with p values are below:

Statement: Drug 23 reduced patients' symptoms compared to Drug 22. Patients who received Drug 23 (n=100) were 2.1 times less likely than patients who received Drug 22 (n = 100) to experience symptoms of Disease A, p<0.05.

Statement: Individuals who were prescribed Drug 23 experienced fewer symptoms (M = 1.3, SD = 0.7) compared to individuals who were prescribed Drug 22 (M = 5.3, SD = 1.9). This finding was statistically significant, p = 0.02.

For either statement, if the threshold had been set at 0.05, the null hypothesis (that there was no relationship) should be rejected, and we should conclude significant differences. Noticeably, as can be seen in the two statements above, some researchers will report findings with < or >, and others will provide an exact p-value (e.g., 0.000001) but never zero. [6] When examining research, readers should understand how p values are reported. The best practice is to report all p values for all variables within a study design, rather than only providing p values for variables with significant findings. [7] The inclusion of all p values provides evidence for study validity and limits suspicion for selective reporting/data mining.

While researchers have historically used p values, experts who find p values problematic encourage the use of confidence intervals. [8] P-values alone do not allow us to understand the size or the extent of the differences or associations. [3] In March 2016, the American Statistical Association (ASA) released a statement on p values, noting that scientific decision-making and conclusions should not be based on a fixed p-value threshold (e.g., 0.05). They recommend focusing on the significance of results in the context of study design, quality of measurements, and validity of data. Ultimately, the ASA statement noted that in isolation, a p-value does not provide strong evidence. [9]

When conceptualizing clinical work, healthcare professionals should consider p values alongside a concurrent appraisal of study design validity. For example, a p-value from a double-blinded randomized clinical trial (designed to minimize bias) should be weighted more heavily than one from a retrospective observational study. [7] The p-value debate has smoldered since the 1950s [10] , and replacement with confidence intervals has been suggested since the 1980s. [11]

Confidence Intervals

A confidence interval provides a range of values, at a given confidence level (e.g., 95%), intended to contain the true value of the statistical parameter for a targeted population. [12] Most research uses a 95% CI, but investigators can set any level (e.g., 90% CI, 99% CI). [13] A CI provides a range with the lower bound and upper bound limits of a difference or association that would be plausible for a population. [14] Therefore, a CI of 95% indicates that if a study were to be carried out 100 times, the range would contain the true value in 95 of them. [15] Confidence intervals provide more evidence regarding the precision of an estimate compared to p-values. [6]

In consideration of the similar research example provided above, one could make the following statement with 95% CI:

Statement: Individuals who were prescribed Drug 23 had no symptoms after three days, which was significantly faster than those prescribed Drug 22; the mean difference between the two groups in days to recovery was 4.2 days (95% CI: 1.9 – 7.8).

It is important to note that the width of the CI is affected by the standard error and the sample size; reducing a study sample number will result in less precision of the CI (increase the width). [14]  A larger width indicates a smaller sample size or a larger variability. [16]  A researcher would want to increase the precision of the CI. For example, a 95% CI of 1.43 – 1.47 is much more precise than the one provided in the example above. In research and clinical practice, CIs provide valuable information on whether the interval includes or excludes any clinically significant values. [14]

Null values are sometimes used for differences with CI (zero for differential comparisons and 1 for ratios). However, CIs provide more information than that. [15]  Consider this example: A hospital implements a new protocol that reduced wait time for patients in the emergency department by an average of 25 minutes (95% CI: -2.5 – 41 minutes). Because the range crosses zero, implementing this protocol in different populations could result in longer wait times; however, the range is much higher on the positive side. Thus, while the p-value used to detect statistical significance for this may result in "not significant" findings, individuals should examine this range, consider the study design, and weigh whether or not it is still worth piloting in their workplace.

Similarly to p-values, 95% CIs cannot control for researchers' errors (e.g., study bias or improper data analysis). [14]  In consideration of whether to report p-values or CIs, researchers should examine journal preferences. When in doubt, reporting both may be beneficial. [13]  An example is below:

Reporting both: Individuals who were prescribed Drug 23 had no symptoms after three days, which was significantly faster than those prescribed Drug 22, p = 0.009. The mean difference between the two groups in days to recovery was 4.2 days (95% CI: 1.9 – 7.8).

  • Clinical Significance

Recall that clinical significance and statistical significance are two different concepts. Healthcare providers should remember that a study with statistically significant differences and large sample size may be of no interest to clinicians, whereas a study with smaller sample size and statistically non-significant results could impact clinical practice. [14]  Additionally, as previously mentioned, a non-significant finding may reflect the study design itself rather than relationships between variables.

Healthcare providers using evidence-based medicine to inform practice should use clinical judgment to determine the practical importance of studies through careful evaluation of the design, sample size, power, likelihood of type I and type II errors, data analysis, and reporting of statistical findings (p values, 95% CI or both). [4]  Interestingly, some experts have called for "statistically significant" or "not significant" to be excluded from work as statistical significance never has and will never be equivalent to clinical significance. [17]

The decision on what is clinically significant can be challenging, depending on the providers' experience and especially the severity of the disease. Providers should use their knowledge and experiences to determine the meaningfulness of study results and make inferences based not only on significant or insignificant results by researchers but through their understanding of study limitations and practical implications.

  • Nursing, Allied Health, and Interprofessional Team Interventions

All physicians, nurses, pharmacists, and other healthcare professionals should strive to understand the concepts in this chapter. These individuals should maintain the ability to review and incorporate new literature for evidence-based and safe care. 


Disclosure: Jacob Shreffler declares no relevant financial relationships with ineligible companies.

Disclosure: Martin Huecker declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.


P-Value And Statistical Significance: What It Is & Why It Matters

Saul McLeod, PhD, and Olivia Guy-Evans, MSc

The p-value in statistics quantifies the evidence against a null hypothesis. A low p-value suggests data is inconsistent with the null, potentially favoring an alternative hypothesis. Common significance thresholds are 0.05 or 0.01.


Hypothesis testing

When you perform a statistical test, a p-value helps you determine the significance of your results in relation to the null hypothesis.

The null hypothesis (H0) states no relationship exists between the two variables being studied (one variable does not affect the other). It states the results are due to chance and are not significant in supporting the idea being investigated. Thus, the null hypothesis assumes that whatever you try to prove did not happen.

The alternative hypothesis (Ha or H1) is the one you would believe if the null hypothesis is concluded to be untrue.

The alternative hypothesis states that the independent variable affected the dependent variable, and the results are significant in supporting the theory being investigated (i.e., the results are not due to random chance).

What a p-value tells you

A p-value, or probability value, is a number describing how likely it is that your data (or more extreme data) would have occurred by random chance alone, i.e., if the null hypothesis were true.

The level of statistical significance is often expressed as a p-value between 0 and 1.

The smaller the p-value, the less likely the results occurred by random chance, and the stronger the evidence that you should reject the null hypothesis.

Remember, a p-value doesn’t tell you if the null hypothesis is true or false. It just tells you how likely you’d see the data you observed (or more extreme data) if the null hypothesis was true. It’s a piece of evidence, not a definitive proof.

Example: Test Statistic and p-Value

Suppose you’re conducting a study to determine whether a new drug has an effect on pain relief compared to a placebo. If the new drug has no impact, your test statistic will be close to the one predicted by the null hypothesis (no difference between the drug and placebo groups), and the resulting p-value will be close to 1. It may not be precisely 1 because real-world variations may exist. Conversely, if the new drug indeed reduces pain significantly, your test statistic will diverge further from what’s expected under the null hypothesis, and the p-value will decrease. The p-value will never reach zero because there’s always a slim possibility, though highly improbable, that the observed results occurred by random chance.

P-value interpretation

The significance level (alpha) is a set probability threshold (often 0.05), while the p-value is the probability you calculate based on your study or analysis.

A p-value less than or equal to your significance level (typically ≤ 0.05) is statistically significant.

A p-value less than or equal to a predetermined significance level (often 0.05 or 0.01) indicates a statistically significant result, meaning the observed data provide strong evidence against the null hypothesis.

This suggests the effect under study likely represents a real relationship rather than just random chance.

For instance, if you set α = 0.05, you would reject the null hypothesis if your p-value ≤ 0.05.

It indicates strong evidence against the null hypothesis, as data this extreme would occur less than 5% of the time if the null hypothesis were correct.

Therefore, we reject the null hypothesis in favor of the alternative hypothesis.

Example: Statistical Significance

Upon analyzing the pain relief effects of the new drug compared to the placebo, the computed p-value is less than 0.01, which falls well below the predetermined alpha value of 0.05. Consequently, you conclude that there is a statistically significant difference in pain relief between the new drug and the placebo.

What does a p-value of 0.001 mean?

A p-value of 0.001 is highly statistically significant beyond the commonly used 0.05 threshold. It indicates strong evidence of a real effect or difference, rather than just random variation.

Specifically, a p-value of 0.001 means there is only a 0.1% chance of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is correct.

Such a small p-value provides strong evidence against the null hypothesis, leading to rejecting the null in favor of the alternative hypothesis.

A p-value greater than the significance level (typically p > 0.05) is not statistically significant and indicates that the data do not provide sufficient evidence against the null hypothesis.

This means we retain the null hypothesis and cannot support the alternative hypothesis. Note that you cannot accept the null hypothesis; we can only reject it or fail to reject it.

Note : when the p-value is above your threshold of significance,  it does not mean that there is a 95% probability that the alternative hypothesis is true.

One-Tailed Test

[Figure: probability and statistical significance in a one-tailed A/B test]

Two-Tailed Test

[Figure: statistical significance regions in a two-tailed test]

How do you calculate the p-value?

Most statistical software packages like R, SPSS, and others automatically calculate your p-value. This is the easiest and most common way.

Online resources and tables are available to estimate the p-value based on your test statistic and degrees of freedom.

These tables help you understand how often you would expect to see your test statistic under the null hypothesis.
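For instance, a test statistic and its degrees of freedom can be converted to a p-value in a couple of lines. The numbers below are made up for illustration (a sketch assuming SciPy is installed):

```python
from scipy import stats

t_stat, df = 2.3, 40  # hypothetical test statistic and degrees of freedom

# Two-tailed p-value: probability of a statistic at least this extreme
p_two_tailed = 2 * stats.t.sf(abs(t_stat), df)

# One-tailed p-value (right tail)
p_one_tailed = stats.t.sf(t_stat, df)

print(round(p_two_tailed, 4), round(p_one_tailed, 4))
```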

Understanding the Statistical Test:

Different statistical tests are designed to answer specific research questions or hypotheses. Each test has its own underlying assumptions and characteristics.

For example, you might use a t-test to compare means, a chi-squared test for categorical data, or a correlation test to measure the strength of a relationship between variables.

Be aware that the number of independent variables you include in your analysis can influence the magnitude of the test statistic needed to produce the same p-value.

This factor is particularly important to consider when comparing results across different analyses.

Example: Choosing a Statistical Test

If you’re comparing the effectiveness of just two different drugs in pain relief, a two-sample t-test is a suitable choice for comparing these two groups. However, when you’re examining the impact of three or more drugs, it’s more appropriate to employ an Analysis of Variance (ANOVA). Utilizing multiple pairwise comparisons in such cases can lead to artificially low p-values and an overestimation of the significance of differences between the drug groups.
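A sketch of the ANOVA route with SciPy, using hypothetical pain scores for three drug groups:

```python
from scipy import stats

# Hypothetical pain scores (lower = more relief) for three drug groups
drug_a = [4.1, 3.8, 4.5, 3.9, 4.2]
drug_b = [3.2, 3.6, 3.1, 3.4, 3.3]
drug_c = [4.0, 4.4, 4.1, 4.3, 3.9]

# One-way ANOVA: a single test of whether any group mean differs,
# avoiding the inflated error rate of repeated pairwise t-tests
f_stat, p_value = stats.f_oneway(drug_a, drug_b, drug_c)
print(round(f_stat, 2), round(p_value, 4))
```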

How to report

A statistically significant result cannot prove that a research hypothesis is correct (which implies 100% certainty).

Instead, we may state our results “provide support for” or “give evidence for” our research hypothesis (as there is still a slight probability that the results occurred by chance and the null hypothesis was correct – e.g., less than 5%).

Example: Reporting the results

In our comparison of the pain relief effects of the new drug and the placebo, we observed that participants in the drug group experienced a significant reduction in pain ( M = 3.5; SD = 0.8) compared to those in the placebo group ( M = 5.2; SD  = 0.7), resulting in an average difference of 1.7 points on the pain scale (t(98) = -9.36; p < 0.001).

The 6th edition of the APA style manual (American Psychological Association, 2010) states the following on the topic of reporting p-values:

“When reporting p values, report exact p values (e.g., p = .031) to two or three decimal places. However, report p values less than .001 as p < .001.

The tradition of reporting p values in the form p < .10, p < .05, p < .01, and so forth, was appropriate in a time when only limited tables of critical values were available.” (p. 114)

  • Do not use 0 before the decimal point for the statistical value p as it cannot equal 1. In other words, write p = .001 instead of p = 0.001.
  • Please pay attention to issues of italics ( p is always italicized) and spacing (either side of the = sign).
  • p = .000 (as outputted by some statistical packages such as SPSS) is impossible and should be written as p < .001.
  • The opposite of significant is “nonsignificant,” not “insignificant.”
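These rules are mechanical enough to automate. Here is a small illustrative helper (not an official APA tool) that applies the no-leading-zero and p < .001 conventions quoted above:

```python
def format_p(p: float) -> str:
    """Format a p-value per the APA conventions quoted above."""
    if p < 0.001:
        return "p < .001"
    # Three decimal places, with the leading zero dropped (p cannot exceed 1)
    return f"p = {p:.3f}".replace("0.", ".", 1)

print(format_p(0.031))      # p = .031
print(format_p(0.0000004))  # p < .001
```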

Why is the p-value not enough?

A lower p-value  is sometimes interpreted as meaning there is a stronger relationship between two variables.

However, statistical significance only means that the observed data would be unlikely (e.g., less than a 5% chance) if the null hypothesis were true; it says nothing about the strength of the relationship.

To understand the strength of the difference between the two groups (control vs. experimental) a researcher needs to calculate the effect size .

When do you reject the null hypothesis?

In statistical hypothesis testing, you reject the null hypothesis when the p-value is less than or equal to the significance level (α) you set before conducting your test. The significance level is the probability of rejecting the null hypothesis when it is true. Commonly used significance levels are 0.01, 0.05, and 0.10.

Remember, rejecting the null hypothesis doesn’t prove the alternative hypothesis; it just suggests that the alternative hypothesis may be plausible given the observed data.

The p -value is conditional upon the null hypothesis being true but is unrelated to the truth or falsity of the alternative hypothesis.

What does p-value of 0.05 mean?

If your p-value is less than or equal to 0.05 (the significance level), you would conclude that your result is statistically significant. This means the evidence is strong enough to reject the null hypothesis in favor of the alternative hypothesis.

Are all p-values below 0.05 considered statistically significant?

No, not all p-values below 0.05 are automatically meaningful. The 0.05 threshold is a convention, not a law: whether a result is declared statistically significant depends on the significance level the researchers chose, and its importance depends on factors like the study design, sample size, and the magnitude of the observed effect.

A p-value below 0.05 means there is evidence against the null hypothesis, suggesting a real effect. However, it’s essential to consider the context and other factors when interpreting results.

Researchers also look at effect size and confidence intervals to determine the practical significance and reliability of findings.

How does sample size affect the interpretation of p-values?

Sample size can impact the interpretation of p-values. A larger sample size provides more reliable and precise estimates of the population, leading to narrower confidence intervals.

With a larger sample, even small differences between groups or effects can become statistically significant, yielding lower p-values. In contrast, smaller sample sizes may not have enough statistical power to detect smaller effects, resulting in higher p-values.

Therefore, a larger sample size increases the chances of finding statistically significant results when there is a genuine effect, making the findings more trustworthy and robust.

Can a non-significant p-value indicate that there is no effect or difference in the data?

No, a non-significant p-value does not necessarily indicate that there is no effect or difference in the data. It means that the observed data do not provide strong enough evidence to reject the null hypothesis.

There could still be a real effect or difference, but it might be smaller or more variable than the study was able to detect.

Other factors like sample size, study design, and measurement precision can influence the p-value. It’s important to consider the entire body of evidence and not rely solely on p-values when interpreting research findings.

Can P values be exactly zero?

While a p-value can be extremely small, it cannot technically be absolute zero. When a p-value is reported as p = 0.000, the actual p-value is too small for the software to display. This is often interpreted as strong evidence against the null hypothesis. For p-values less than 0.001, report them as p < .001.

Further Information

  • P Value Calculator From T Score
  • P-Value Calculator For Chi-Square
  • P-values and significance tests (Khan Academy)
  • Hypothesis testing and p-values (Khan Academy)
  • Wasserstein, R. L., Schirm, A. L., & Lazar, N. A. (2019). Moving to a world beyond “p < 0.05”.
  • Criticism of using the “p < 0.05” threshold.
  • Publication manual of the American Psychological Association
  • Statistics for Psychology Book Download



Statistics By Jim

Making statistics intuitive

How Hypothesis Tests Work: Significance Levels (Alpha) and P values

By Jim Frost

Hypothesis testing is a vital process in inferential statistics where the goal is to use sample data to draw conclusions about an entire population. In the testing process, you use significance levels and p-values to determine whether the test results are statistically significant.

You hear about results being statistically significant all of the time. But, what do significance levels, P values, and statistical significance actually represent? Why do we even need to use hypothesis tests in statistics?

In this post, I answer all of these questions. I use graphs and concepts to explain how hypothesis tests function in order to provide a more intuitive explanation. This helps you move on to understanding your statistical results.

Hypothesis Test Example Scenario

To start, I’ll demonstrate why we need to use hypothesis tests using an example.

A researcher is studying fuel expenditures for families and wants to determine if the monthly cost has changed since last year when the average was $260 per month. The researcher draws a random sample of 25 families and enters their monthly costs for this year into statistical software. You can download the CSV data file: FuelsCosts . Below are the descriptive statistics for this year.

Table of descriptive statistics for our fuel cost example.

We’ll build on this example to answer the research question and show how hypothesis tests work.

Descriptive Statistics Alone Won’t Answer the Question

The researcher collected a random sample and found that this year’s sample mean (330.6) is greater than last year’s mean (260). Why perform a hypothesis test at all? We can see that this year’s mean is higher by about $70! Isn’t that different?

Regrettably, the situation isn’t as clear as you might think because we’re analyzing a sample instead of the full population. There are huge benefits when working with samples because it is usually impossible to collect data from an entire population. However, the tradeoff for working with a manageable sample is that we need to account for sampling error.

The sampling error is the gap between the sample statistic and the population parameter. For our example, the sample statistic is the sample mean, which is 330.6. The population parameter is μ, or mu, which is the average of the entire population. Unfortunately, the value of the population parameter is not only unknown but usually unknowable. Learn more about Sampling Error .

We obtained a sample mean of 330.6. However, it’s conceivable that, due to sampling error, the mean of the population might be only 260. If the researcher drew another random sample, the next sample mean might be closer to 260. It’s impossible to assess this possibility by looking at only the sample mean. Hypothesis testing is a form of inferential statistics that allows us to draw conclusions about an entire population based on a representative sample. We need to use a hypothesis test to determine the likelihood of obtaining our sample mean if the population mean is 260.

Background information : The Difference between Descriptive and Inferential Statistics and Populations, Parameters, and Samples in Inferential Statistics

A Sampling Distribution Determines Whether Our Sample Mean is Unlikely

It is very unlikely for any sample mean to equal the population mean because of sampling error. In our case, the sample mean of 330.6 is almost definitely not equal to the population mean for fuel expenditures.

If we could obtain a substantial number of random samples and calculate the sample mean for each sample, we’d observe a broad spectrum of sample means. We’d even be able to graph the distribution of sample means from this process.

This type of distribution is called a sampling distribution. You obtain a sampling distribution by drawing many random samples of the same size from the same population. Why the heck would we do this?

Because sampling distributions allow you to determine the likelihood of obtaining your sample statistic and they’re crucial for performing hypothesis tests.

Luckily, we don’t need to go to the trouble of collecting numerous random samples! We can estimate the sampling distribution using the t-distribution, our sample size, and the variability in our sample.
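To make this concrete, a short simulation (a sketch with a made-up population, assuming NumPy is installed) shows what repeated sampling would produce:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical population of monthly fuel costs centered on the null value
population = rng.normal(loc=260, scale=150, size=100_000)

# Draw many random samples of n=25 and record each sample mean
sample_means = [rng.choice(population, size=25).mean() for _ in range(10_000)]

# The spread of these means is the sampling distribution described above
print(round(np.mean(sample_means), 1), round(np.std(sample_means), 1))
```

In practice we skip the simulation and model this distribution with the t-distribution, as described next.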

We want to find out if the average fuel expenditure this year (330.6) is different from last year (260). To answer this question, we’ll graph the sampling distribution based on the assumption that the mean fuel cost for the entire population has not changed and is still 260. In statistics, we call this lack of effect, or no change, the null hypothesis . We use the null hypothesis value as the basis of comparison for our observed sample value.

Sampling distributions and t-distributions are types of probability distributions.

Related posts : Sampling Distributions and Understanding Probability Distributions

Graphing our Sample Mean in the Context of the Sampling Distribution

The graph below shows which sample means are more likely and less likely if the population mean is 260. We can place our sample mean in this distribution. This larger context helps us see how unlikely our sample mean is if the null hypothesis is true (μ = 260).

Sampling distribution of means for our fuel cost data.

The graph displays the estimated distribution of sample means. The most likely values are near 260 because the plot assumes that this is the true population mean. However, given random sampling error, it would not be surprising to observe sample means ranging from 167 to 352. If the population mean is still 260, our observed sample mean (330.6) isn’t the most likely value, but it’s not completely implausible either.

The Role of Hypothesis Tests

The sampling distribution shows us that we are relatively unlikely to obtain a sample of 330.6 if the population mean is 260. Is our sample mean so unlikely that we can reject the notion that the population mean is 260?

In statistics, we call this rejecting the null hypothesis. If we reject the null for our example, the difference between the sample mean (330.6) and 260 is statistically significant. In other words, the sample data favor the hypothesis that the population average does not equal 260.

However, look at the sampling distribution chart again. Notice that there is no special location on the curve where you can definitively draw this conclusion. There is only a consistent decrease in the likelihood of observing sample means that are farther from the null hypothesis value. Where do we decide a sample mean is far away enough?

To answer this question, we’ll need more tools—hypothesis tests! The hypothesis testing procedure quantifies the unusualness of our sample with a probability and then compares it to an evidentiary standard. This process allows you to make an objective decision about the strength of the evidence.

We’re going to add the tools we need to make this decision to the graph—significance levels and p-values!

These tools allow us to test these two hypotheses:

  • Null hypothesis: The population mean equals the null hypothesis mean (260).
  • Alternative hypothesis: The population mean does not equal the null hypothesis mean (260).

Related post : Hypothesis Testing Overview

What are Significance Levels (Alpha)?

A significance level, also known as alpha or α, is an evidentiary standard that a researcher sets before the study. It defines how strongly the sample evidence must contradict the null hypothesis before you can reject the null hypothesis for the entire population. The strength of the evidence is defined by the probability of rejecting a null hypothesis that is true. In other words, it is the probability that you say there is an effect when there is no effect.

For instance, a significance level of 0.05 signifies a 5% risk of deciding that an effect exists when it does not exist.

Lower significance levels require stronger sample evidence to be able to reject the null hypothesis. For example, to be statistically significant at the 0.01 significance level requires more substantial evidence than the 0.05 significance level. However, there is a tradeoff in hypothesis tests. Lower significance levels also reduce the power of a hypothesis test to detect a difference that does exist.

The technical nature of these types of questions can make your head spin. A picture can bring these ideas to life!

To learn a more conceptual approach to significance levels, see my post about Understanding Significance Levels .

Graphing Significance Levels as Critical Regions

On the probability distribution plot, the significance level defines how far the sample value must be from the null value before we can reject the null. The percentage of the area under the curve that is shaded equals the probability that the sample value will fall in those regions if the null hypothesis is correct.

To represent a significance level of 0.05, I’ll shade 5% of the distribution furthest from the null value.

Graph that displays a two-tailed critical region for a significance level of 0.05.

The two shaded regions in the graph are equidistant from the central value of the null hypothesis. Each region has a probability of 0.025, which sums to our desired total of 0.05. These shaded areas are called the critical region for a two-tailed hypothesis test.

The critical region defines sample values that are improbable enough to warrant rejecting the null hypothesis. If the null hypothesis is correct and the population mean is 260, random samples (n=25) from this population have means that fall in the critical region 5% of the time.

Our sample mean is statistically significant at the 0.05 level because it falls in the critical region.

Related posts : One-Tailed and Two-Tailed Tests Explained , What Are Critical Values? , and T-distribution Table of Critical Values

Comparing Significance Levels

Let’s redo this hypothesis test using the other common significance level of 0.01 to see how it compares.

Chart that shows a two-tailed critical region for a significance level of 0.01.

This time the sum of the two shaded regions equals our new significance level of 0.01. The mean of our sample does not fall within the critical region. Consequently, we fail to reject the null hypothesis. We have the same exact sample data, the same difference between the sample mean and the null hypothesis value, but a different test result.

What happened? By specifying a lower significance level, we set a higher bar for the sample evidence. As the graph shows, lower significance levels move the critical regions further away from the null value. Consequently, lower significance levels require more extreme sample means to be statistically significant.

You must set the significance level before conducting a study. You don’t want the temptation of choosing a level after the study that yields significant results. The only reason I compared the two significance levels was to illustrate the effects and explain the differing results.

The graphical version of the 1-sample t-test we created allows us to determine statistical significance without assessing the P value. Typically, you need to compare the P value to the significance level to make this determination.
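To make the comparison concrete, here is a small sketch that computes both critical regions in raw data units. It again assumes the standard error of about 31 implied by the graphs, so treat the numbers as approximate:

    # Two-tailed critical regions at alpha = 0.05 and alpha = 0.01,
    # in raw data units (SE ~ 31 is an assumption from the graphs).
    from scipy import stats

    null_mean, se_mean, df = 260, 31, 24
    sample_mean = 330.6

    for alpha in (0.05, 0.01):
        margin = stats.t.ppf(1 - alpha / 2, df) * se_mean
        lower, upper = null_mean - margin, null_mean + margin
        significant = sample_mean < lower or sample_mean > upper
        print(f"alpha={alpha}: critical region beyond ({lower:.0f}, {upper:.0f})"
              f" -> significant: {significant}")

With these numbers, 330.6 falls beyond the 0.05 boundaries (roughly 196 and 324) but inside the 0.01 boundaries (roughly 173 and 347), matching the two graphs.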

Related post : Step-by-Step Instructions for How to Do t-Tests in Excel

What Are P values?

P values are the probability that a sample will have an effect at least as extreme as the effect observed in your sample if the null hypothesis is correct.

This tortuous, technical definition for P values can make your head spin. Let’s graph it!

First, we need to calculate the effect that is present in our sample. The effect is the distance between the sample value and null value: 330.6 – 260 = 70.6. Next, I’ll shade the regions on both sides of the distribution that are at least as far away as 70.6 from the null (260 +/- 70.6). This process graphs the probability of observing a sample mean at least as extreme as our sample mean.

Probability distribution plot shows how our sample mean has a p-value of 0.031.

The total probability of the two shaded regions is 0.03112. If the null hypothesis value (260) is true and you drew many random samples, you’d expect sample means to fall in the shaded regions about 3.1% of the time. In other words, you will observe sample effects at least as large as 70.6 about 3.1% of the time if the null is true. That’s the P value!
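If you prefer to compute the P value rather than read it off the graph, the same assumed standard error reproduces it closely:

    # Two-tailed p-value for the observed effect of 70.6 (SE ~ 31 assumed).
    from scipy import stats

    t_value = (330.6 - 260) / 31                     # about 2.28
    p_value = 2 * stats.t.sf(abs(t_value), df=24)    # both tails
    print(f"t = {t_value:.2f}, p = {p_value:.4f}")   # close to 0.031

The small gap between this result and the 0.03112 in the plot comes from rounding the assumed standard error.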

Learn more about How to Find the P Value .

Using P values and Significance Levels Together

If your P value is less than or equal to your alpha level, reject the null hypothesis.

The P value results are consistent with our graphical representation. The P value of 0.03112 is significant at the alpha level of 0.05 but not 0.01. Again, in practice, you pick one significance level before the experiment and stick with it!

Using the significance level of 0.05, the sample effect is statistically significant. Our data support the alternative hypothesis, which states that the population mean doesn’t equal 260. We can conclude that mean fuel expenditures have increased since last year.

P values are very frequently misinterpreted as the probability of rejecting a null hypothesis that is actually true. This interpretation is wrong! To understand why, please read my post: How to Interpret P-values Correctly .

Discussion about Statistically Significant Results

Hypothesis tests determine whether your sample data provide sufficient evidence to reject the null hypothesis for the entire population. To perform this test, the procedure compares your sample statistic to the null value and determines whether it is sufficiently rare. “Sufficiently rare” is defined in a hypothesis test by:

  • Assuming that the null hypothesis is true—the graphs center on the null value.
  • The significance (alpha) level—how far out from the null value is the critical region?
  • The sample statistic—is it within the critical region?

There is no special significance level that correctly determines which studies have real population effects 100% of the time. The traditional significance levels of 0.05 and 0.01 are attempts to manage the tradeoff between having a low probability of rejecting a true null hypothesis and having adequate power to detect an effect if one actually exists.

The significance level is the rate at which you incorrectly reject null hypotheses that are actually true ( type I error ). For example, of all studies that use a significance level of 0.05 and for which the null hypothesis is correct, you can expect 5% to have sample statistics that fall in the critical region. When this error occurs, you aren’t aware that the null hypothesis is correct, but you’ll reject it because the p-value is less than 0.05.

This error does not indicate that the researcher made a mistake. As the graphs show, you can observe extreme sample statistics due to sample error alone. It’s the luck of the draw!
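A short simulation makes the error rate concrete. The population parameters below are assumptions chosen for illustration: a normal population with mean 260 and standard deviation 155, so samples of 25 have a standard error of about 31, consistent with the example:

    # Monte Carlo check of the type I error rate when the null is true.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_sims, n, alpha = 20_000, 25, 0.05

    rejections = 0
    for _ in range(n_sims):
        sample = rng.normal(loc=260, scale=155, size=n)   # null is true
        result = stats.ttest_1samp(sample, popmean=260)
        rejections += result.pvalue <= alpha

    print(f"Type I error rate: {rejections / n_sims:.3f}")   # close to 0.05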

Related posts : Statistical Significance: Definition & Meaning and Types of Errors in Hypothesis Testing

Hypothesis tests are crucial when you want to use sample data to make conclusions about a population because these tests account for sample error. Using significance levels and P values to determine when to reject the null hypothesis improves the probability that you will draw the correct conclusion.

Keep in mind that statistical significance doesn’t necessarily mean that the effect is important in a practical, real-world sense. For more information, read my post about Practical vs. Statistical Significance .

If you like this post, read the companion post: How Hypothesis Tests Work: Confidence Intervals and Confidence Levels .

You can also read my other posts that describe how other tests work:

  • How t-Tests Work
  • How the F-test works in ANOVA
  • How Chi-Squared Tests of Independence Work

To see an alternative approach to traditional hypothesis testing that does not use probability distributions and test statistics, learn about bootstrapping in statistics !


Reader Interactions

December 11, 2022 at 10:56 am

An easy way to grasp the level of significance and the p-value: 1. A teacher gives a student an assignment and asks, “How much error will you make doing this assignment?” The student replies that his error will be ≤ 5% (this is the level of significance). After the assignment is completed, the teacher checks his error, which turns out to be ≤ 5% (maybe 4%, 3%, 2%, or even less; this is the p-value), which means his results are significant. Otherwise, his error is > 5% (maybe 6%, 7%, 8%, or even more; again the p-value), which means his results are non-significant. 2. The teacher gives the same assignment, but this time the student promises an error ≤ 1% (the level of significance). If the checked error is ≤ 1% (maybe 0.9%, 0.8%, 0.7%, or even less; the p-value), his results are significant; otherwise, the error is > 1% (maybe 1.1%, 1.5%, 2%, or even more; the p-value) and his results are non-significant. Whether a p-value is significant depends mainly on the level of significance.

December 11, 2022 at 7:50 pm

I think that approach helps explain how to determine statistical significance: is the p-value less than or equal to the significance level? However, it doesn’t really explain what statistical significance means. I find that comparing the p-value to the significance level is the easy part. Knowing what it means and how to choose your significance level is the harder part!

December 3, 2022 at 5:54 pm

What would you say to someone who believes that a p-value higher than the level of significance (alpha) means the null hypothesis has been proven? Should you support that statement or deny it?

December 3, 2022 at 10:18 pm

Hi Emmanuel,

When the p-value is greater than the significance level, you fail to reject the null hypothesis . That is different than proving it. To learn why and what it means, click the link to read a post that I’ve written that will answer your question!

April 19, 2021 at 12:27 am

Thank you so much Sir

April 18, 2021 at 2:37 pm

Hi sir, your blogs are very helpful for clearing up the concepts of statistics; as a researcher I find them very useful. I have some queries:

1. In many research papers I have seen authors use the statement “means or values are statistically at par at p = 0.05” when they do some pairwise comparison between the treatments (a kind of post hoc test) using a value of CD (critical difference), or we can say LSD, which is calculated using alpha, not p. So, from this article, I think this should be alpha = 0.05 or 5%, not p = 0.05; earlier I thought p and alpha were the same. p itself is compared with alpha (0.05). Correct me if I am wrong.

2. When we can draw a conclusion using critical values (CV), which are based on alpha in different tests (e.g., in the F-test the CV is F(0.05, t−1, error df) when alpha is 0.05; this table value of F is compared with the calculated F to draw the conclusion), then why do we go for p-values and draw conclusions based on p-values? Many online software packages do not even give a p-value; they just mention the CD (LSD).

3. Can you please help me interpret the interaction in a two-factor analysis (Factor A × Factor B) in ANOVA?

Thank You so much!

(Commenting again as I have not seen my comment in comment list; don’t know why)

April 18, 2021 at 10:57 pm

Hi Himanshu,

I manually approve comments so there will be some time lag involved before they show up.

Regarding your first question, yes, you’re correct. Test results are significant at particular significance levels or alpha. They should not use p to define the significance level. You’re also correct in that you compare p to alpha.

Critical values are a different (but related) approach for determining significance. It was more common before computer analysis took off because it reduced the calculations. Using this approach in its simplest form, you only know whether a result is significant or not at the given alpha. You just determine whether the test statistic falls within a critical region to decide whether the result is statistically significant or not. However, it is ok to supplement this type of result with the actual p-value. Knowing the precise p-value provides additional information that significant/not significant does not provide. The critical value and p-value approaches will always agree too. For more information about why the exact p-value is useful, read my post about Five Tips for Interpreting P-values .

Finally, I’ve written about two-way ANOVA in my post, How to do Two-Way ANOVA in Excel . Additionally, I write about it in my Hypothesis Testing ebook .

January 28, 2021 at 3:12 pm

Thank you for your answer, Jim, I really appreciate it. I’m taking a Coursera stats course and online learning without being able to ask questions of a real teacher is not my forte!

You’re right, I don’t think I’m ready for that calculation! However, I think I’m struggling with something far more basic, perhaps even the interpretation of the t-table? I’m just not sure how you came up with the p-value as .03112, with the 24 degrees of freedom. When I pull up a t-table and look at the 24-degrees of freedom row, I’m not sure how any of those numbers correspond with your answer? Either the single tail of 0.01556 or the combined of 0.03112. What am I not getting? (which, frankly, could be a lot!!) Again, thank you SO much for your time.

January 28, 2021 at 11:19 pm

Ah ok, I see! First, let me point you to several posts I’ve written about t-values and the t-distribution. I don’t cover those in this post because I wanted to present a simplified version that just uses the data in its regular units. The basic idea is that the hypothesis tests actually convert all your raw data down into one value for a test statistic, such as the t-value. And then it uses that test statistic to determine whether your results are statistically significant. To be significant, the t-value must exceed a critical value, which is what you lookup in the table. Although, nowadays you’d typically let your software just tell you.

So, read the following two posts, which cover several aspects of t-values and distributions. And then, if you have more questions after that, you can post them. But you’ll have a lot more information about them, and probably some of your questions will be answered! See: T-values and T-distributions

January 27, 2021 at 3:10 pm

Jim, just found your website and really appreciate your thoughtful, thorough way of explaining things. I feel very dumb, but I’m struggling with p-values and was hoping you could help me.

Here’s the section that’s getting me confused:

“First, we need to calculate the effect that is present in our sample. The effect is the distance between the sample value and null value: 330.6 – 260 = 70.6. Next, I’ll shade the regions on both sides of the distribution that are at least as far away as 70.6 from the null (260 +/- 70.6). This process graphs the probability of observing a sample mean at least as extreme as our sample mean.

** I’m good up to this point. Draw the picture, do the subtraction, shade the regions. BUT, I’m not sure how to figure out the area of the shaded region — even with a T-table. When I look at the T-table on 24 df, I’m not sure what to do with those numbers, as none of them seem to correspond in any way to what I’m looking at in the problem. In the end, I have no idea how you calculated each shaded area being 0.01556.

I feel like there’s a (very simple) step that everyone else knows how to do, but for some reason I’m missing it.

Again, dumb question, but I’d love your help clarifying that.

thank you, Sara

January 27, 2021 at 9:51 pm

That’s not a dumb question at all. I actually don’t show or explain the calculations for figuring out the area. The reason for that is the same reason why students never calculate the critical t-values for their test, instead you look them up in tables or use statistical software. The common reason for all that is because calculating these values is extremely complicated! It’s best to let software do that for you or, when looking critical values, use the tables!

The principle, though, is that the percentage of the area under the curve equals the probability that values will fall within that range.

The density of the t-distribution with \(\nu\) degrees of freedom is:

\(f(t)=\dfrac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\left(\frac{\nu}{2}\right)}\left(1+\frac{t^2}{\nu}\right)^{-\frac{\nu+1}{2}}\)

And then, for this example, you’d need to figure out the area under the curve for particular ranges!
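For the curious, here is what “letting the software do it” looks like. This sketch numerically integrates the t-distribution’s density over the tail regions; the t-value of roughly 2.28 is an assumption backed out from the effect of 70.6 and the standard error implied by the graphs:

    # Tail areas of the t-distribution (24 df) by numerical integration.
    # t_obs ~ 2.28 is assumed: (330.6 - 260) divided by an SE of ~31.
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    df, t_obs = 24, 2.28
    one_tail, _ = quad(stats.t(df).pdf, t_obs, np.inf)
    print(f"one tail: {one_tail:.5f}, both tails: {2 * one_tail:.5f}")
    # roughly 0.016 and 0.031 -- close to the 0.01556 and 0.03112 above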

January 15, 2021 at 10:57 am

Hi Jim, I have a question related to hypothesis tests. In medical imaging, there are different ways to measure signal intensity (from a tumor lesion, for example). For the same 100 patients, I tested 4 different ways to measure tumor captation relative to the injected dose. So for the 100 patients, I got 4 linear regressions (relating injected dose to the quantity measured at tumor sites), giving an output of 4 equations:

  • Condition A: output = −0.034308 + 0.0006602 × input
  • Condition B: output = 0.0117631 + 0.0005425 × input
  • Condition C: output = 0.0087871 + 0.0005563 × input
  • Condition D: output = 0.001911 + 0.0006255 × input

My question: I want to compare the 4 methods to find the best one (compared to the others). Is a hypothesis test appropriate here? If yes, I haven’t found a test to perform it. Can you suggest software? I usually use JMP for my stats, but I’m open to other software.

Thanks for your time, G

November 16, 2020 at 5:42 am

Thank you very much for writing about this topic!

Your explanation helped me make sense of why we reject the null hypothesis when the p-value < significance level.

Kind greetings, Jalal

September 25, 2020 at 1:04 pm

Hi Jim, Your explanations are so helpful! Thank you. I wondered about your first graph. I see that the mean of the graph is 260 from the null hypothesis, and it looks like the standard deviation of the graph is about 31. Where did you get 31 from? Thank you

September 25, 2020 at 4:08 pm

Hi Michelle,

That is a great question. Very observant. And it gets to how these tests work. The hypothesis test that I’m illustrating here is the one-sample t-test. And this graph illustrates the sampling distribution for the t-test. T-tests use the t-distribution to determine the sampling distribution. For the t-distribution, you need to specify the degrees of freedom, which entirely defines the distribution (i.e., it’s the only parameter). For 1-sample t-tests, the degrees of freedom equal the number of observations minus 1. This dataset has 25 observations. Hence, the 24 DF you see in the graph.

Unlike the normal distribution, there is no standard deviation parameter. Instead, the degrees of freedom determines the spread of the curve. Typically, with t-tests, you’ll see results discussed in terms of t-values, both for your sample and for defining the critical regions. However, for this introductory example, I’ve converted the t-values into the raw data units (t-value * SE mean).

So, the standard deviation you’re seeing in the graph is a result of the spread of the underlying t-distribution that has 24 degrees of freedom and then applying the conversion from t-values to raw values.

September 10, 2020 at 8:19 am

Your blog is incredible.

I am having difficulty understanding why the phrase ‘as extreme as’ is required in the definition of p-value (“P values are the probability that a sample will have an effect at least as extreme as the effect observed in your sample if the null hypothesis is correct.”)

Why can’t P-Values simply be defined as “The probability of sample observation if the null hypothesis is correct?”

In your other blog titled ‘Interpreting P values’ you have explained p-values as “P-values indicate the believability of the devil’s advocate case that the null hypothesis is correct given the sample data”. I understand (or accept) this explanation. How does one move from this definition to one that contains the phrase ‘as extreme as’?

September 11, 2020 at 5:05 pm

Thanks so much for your kind words! I’m glad that my website has been helpful!

The key to understanding the “at least as extreme” wording lies in the probability plots for p-values. Using probability plots for continuous data, you can calculate probabilities, but only for ranges of values. I discuss this in my post about understanding probability distributions . In a nutshell, we need a range of values for these probabilities because the probabilities are derived from the area under a distribution curve. A single value just produces a line on these graphs rather than an area. Those ranges are the shaded regions in the probability plots. For p-values, the range corresponds to the “at least as extreme” wording. That’s where it comes from. We need a range to calculate a probability. We can’t use the single value of the observed effect because it doesn’t produce an area under the curve.

I hope that helps! I think this is a particularly confusing part of understanding p-values that most people don’t understand.

August 7, 2020 at 5:45 pm

Hi Jim, thanks for the post.

Could you please clarify the following excerpt from ‘Graphing Significance Levels as Critical Regions’:

“The percentage of the area under the curve that is shaded equals the probability that the sample value will fall in those regions if the null hypothesis is correct.”

I’m not sure if I understood this correctly. If the sample value fall in one of the shaded regions, doesn’t mean that the null hypothesis can be rejected, hence that is not correct?

August 7, 2020 at 10:23 pm

Think of it this way. There are two basic reasons for why a sample value could fall in a critical region:

  • The null hypothesis is correct and random chance caused the sample value to be unusual.
  • The null hypothesis is not correct.

You don’t know which one is true. Remember, just because you reject the null hypothesis it doesn’t mean the null is false. However, by using hypothesis tests to determine statistical significance, you control the chances of #1 occurring. The rate at which #1 occurs equals your significance level. On the hand, you don’t know the probability of the sample value falling in a critical region if the alternative hypothesis is correct (#2). It depends on the precise distribution for the alternative hypothesis and you usually don’t know that, which is why you’re testing the hypotheses in the first place!

I hope I answered the question you were asking. If not, feel free to ask follow up questions. Also, this ties into how to interpret p-values . It’s not exactly straightforward. Click the link to learn more.

June 4, 2020 at 6:17 am

Hi Jim, thank you very much for your answer. You helped me a lot!

June 3, 2020 at 5:23 pm

Hi, thanks for this post. I’ve been learning a lot from you. My question is regarding lack of fit. The p-value of my lack-of-fit test is really low, making my lack of fit significant, meaning my model does not fit well. Is my case a “false negative,” given that my pure error is really low, which drives the lack-of-fit computation? So maybe it means my model is good. Below I show some information that I hope helps clarify my question.

                    SumSq       DF   MeanSq     F        pValue
    Total           1246.5      18   69.25
    Model           1241.7       6   206.94     514.43   9.3841e-14
      Linear        1196.6       3   398.87     991.53   1.2318e-14
      Nonlinear     45.046       3   15.015     37.326   2.3092e-06
    Residual        4.8274      12   0.40228
      Lack of fit   4.7388       7   0.67698    38.238   0.0004787
      Pure error    0.088521     5   0.017704

June 3, 2020 at 7:53 pm

As you say, a low p-value for a lack of fit test indicates that the model doesn’t fit your data adequately. This is a positive result for the test, which means it can’t be a “false negative.” At best, it could be a false positive, meaning that your data actually fit the model well despite the low p-value.

I’d recommend graphing the residuals and looking for patterns . There is probably a relationship between variables that you’re not modeling correctly, such as curvature or interaction effects. There’s no way to diagnose the specific nature of the lack-of-fit problem by using the statistical output. You’ll need the graphs.

If there are no patterns in the residual plots, then your lack-of-fit results might be a false positive.

I hope this helps!

May 30, 2020 at 6:23 am

First of all, I have to say there are not many resources that explain a complicated topic in such an easy manner.

My question is, how do we arrive at “if the p-value is less than alpha, we reject the null hypothesis”?

Is this covered in a separate article I could read?

Thanks Shekhar

May 25, 2020 at 12:21 pm

Hi Jim, terrific website, blog, and after this I’m ordering your book. One of my biggest challenges is nomenclature, definitions, context, and formulating the hypotheses. Here’s one I want to double-be-sure I understand: From above you write: ” These tools allow us to test these two hypotheses:

Null hypothesis: The population mean equals the null hypothesis mean (260). Alternative hypothesis: The population mean does not equal the null hypothesis mean (260). ” I keep thinking that 260 is the population mean mu, the underlying population (that we never really know exactly) and that the Null Hypothesis is comparing mu to x-bar (the sample mean of the 25 families randomly sampled w mean = sample mean = x-bar = 330.6).

So is the following incorrect, and if so, why? Null hypothesis: The population mean mu = 260 equals the null hypothesis mean x-bar (330.6). Alternative hypothesis: The population mean mu = 260 does not equal the null hypothesis mean x-bar (330.6).

And my thinking is that usually the formulation of null and alternative hypotheses is “test value” = “mu current of underlying population”, whereas I read the formulation on the webpage above to be the reverse.

Any comments appreciated. Many Thanks,

May 26, 2020 at 8:56 pm

The null hypothesis states that the population value equals the null value. Now, I know that’s not particularly helpful! But the null value varies based on test and context. So, in this example, we’re setting the null value at $260, which was the mean from the previous year. So, our null hypothesis states:

Null: the population mean (mu) = 260. Alternative: the population mean ≠ 260.

These hypothesis statements are about the population parameter. For this type of one-sample analysis, the target or reference value you specify is the null hypothesis value. Additionally, you don’t include the sample estimate in these statements, which is the X-bar portion you tacked on at the end. It’s strictly about the value of the population parameter you’re testing. You don’t know the value of the underlying distribution. However, given the mutually exclusive nature of the null and alternative hypothesis, you know one or the other is correct. The null states that mu equals 260 while the alternative states that it doesn’t equal 260. The data help you decide, which brings us to . . .

However, the procedure does compare our sample data to the null hypothesis value, which is how it determines how strong our evidence is against the null hypothesis.

I hope I answered your question. If not, please let me know!

May 8, 2020 at 6:00 pm

Really, using the interpretation “In other words, you will observe sample effects at least as large as 70.6 about 3.1% of the time if the null is true,” our heads seem to tie themselves in a knot. However, the reverse interpretation is much more intuitive and easier. That is, we will observe a sample effect of at least 70.6 about 96.9% of the time if the null is false (that is, if our hypothesis is true).

May 8, 2020 at 7:25 pm

Your phrasing really isn’t any simpler. And it has the additional misfortune of being incorrect.

What you’re essentially doing is creating a one-sided confidence interval by using the p-value from a two-sided test. That’s incorrect in two ways.

  • Don’t mix and match one-sided and two-sided test results.
  • Confidence levels are determined by the significance level, not p-values.

So, what you need is a two-sided 95% CI (1-alpha). You could then state that the results are statistically significant and that you have 95% confidence that the population effect is between X and Y. If you want a lower bound as you propose, then you’ll need to use a one-sided hypothesis test with a 95% lower bound. That’ll give you a different value for the lower bound than the one you used.

I like confidence intervals. As I write elsewhere, I think they’re easier to understand and provide more information than a binary test result. But, you need to use them correctly!

One other point. When you are talking about p-values, it’s always under the assumption that the null hypothesis is correct. You *never* state anything about the p-value in relation to the null being false (i.e. alternative is true). But, if you want to use the type of phrasing you suggest, use it in the context of CIs and incorporate the points I cover above.

February 10, 2020 at 11:13 am

Thank you very much, professor, for sharing your knowledge. Special greetings from Colombia.

August 6, 2019 at 11:46 pm

I found this really helpful. Also, can you help me out?

I’m a little confused. Can you tell me whether the level of significance and the p-value are comparable or not, and if they are, what does it mean if the p-value < LS? Do we reject the null hypothesis or do we accept it?

August 7, 2019 at 12:49 am

Hi Divyanshu,

Yes, you compare the p-value to the significance level. When the p-value is less than the significance level (alpha), your results are statistically significant and you reject the null hypothesis.

I’d suggest re-reading the “Using P values and Significance Levels Together” section near the end of this post more closely. That describes the process. The next section describes what it all means.

July 1, 2019 at 4:19 am

Sure. I will use it only in my classrooms, and offline, with due credit to your original page. I will encourage my students to visit your blog. I have purchased your eBook on regression; it’s immensely useful.

July 1, 2019 at 9:52 am

Hi Narasimha, that sounds perfect. Thanks for buying my ebook as well. I’m thrilled to hear that you’ve found it to be helpful!

June 28, 2019 at 6:22 am

I have benefited a lot from your writings. Can I share them with my students in the classroom?

June 30, 2019 at 8:44 pm

Hi Narasimha,

Yes, you can certainly share with your students. Please attribute my original page. And please don’t copy whole sections of my posts onto another webpage as that can be bad with Google! Thanks!

February 11, 2019 at 7:46 pm

Hello, great site and my apologies if the answer to the following question exists already.

I’ve always wondered why we put the sampling distribution about the null hypothesis rather than simply leave it about the observed mean. I can see mathematically we are measuring the same distance from the null and basically can draw the same conclusions.

For example we take a sample (say 50 people) we gather an observation (mean wage) estimate the standard error in that observation and so can build a sampling distribution about the observed mean. That sampling distribution contains a confidence interval, where say, i am 95% confident the true mean lies (i.e. in repeated sampling the true mean would reside within this interval 95% of the time).

When i use this for a hyp-test, am i right in saying that we place the sampling dist over the reference level simply because it’s mathematically equivalent and it just seems easier to gauge how far the observation is from 0 via t-stats or its likelihood via p-values?

It seems more natural to me to look at it the other way around. leave the sampling distribution on the observed value, and then look where the null sits…if it’s too far left or right then it is unlikely the true population parameter is what we believed it to be, because if the null were true it would only occur ~ 5% of the time in repeated samples…so perhaps we need to change our opinion.

Can i interpret a hyp-test that way? Or do i have a misconception?

February 12, 2019 at 8:25 pm

The short answer is that, yes, you can draw the interval around the sample mean instead. And, that is, in fact, how you construct confidence intervals. The distance around the null hypothesis for hypothesis tests and the distance around the sample for confidence intervals are the same distance, which is why the results will always agree as long as you use corresponding alpha levels and confidence levels (e.g., alpha 0.05 with a 95% confidence level). I write about how this works in a post about confidence intervals .

I prefer confidence intervals for a number of reasons. They’ll indicate whether you have significant results if they exclude the null value and they indicate the precision of the effect size estimate. Corresponding with what you’re saying, it’s easier to gauge how far a confidence interval is from the null value (often zero) whereas a p-value doesn’t provide that information. See Practical versus Statistical Significance .

So, you don’t have any misconception at all! Just refer to it as a confidence interval rather than a hypothesis test, but, of course, they are very closely related.

January 9, 2019 at 10:37 pm

Hi Jim, nice article. I have a question. I read the central limit theorem article before this one.

Coming to this article: during almost every hypothesis test, we draw a normal distribution curve assuming there is a sampling distribution (and then we go for the test statistic, p-value, etc.). Do we draw a normal distribution curve for hypothesis tests because of the central limit theorem?

Thanks in advance, Surya

January 10, 2019 at 1:57 am

These distributions are actually t-distributions, which are different from the normal distribution. T-distributions have only one parameter: the degrees of freedom. As the DF increases, the t-distribution tightens up. Around 25 degrees of freedom, the t-distribution approximates the normal distribution. Depending on the type of t-test, this corresponds to a sample size of 26 or 27. Similarly, the sampling distribution of the means also approximates the normal distribution at around these sample sizes. With a large enough sample size, both the t-distribution and the sampling distribution converge to a normal distribution regardless (largely) of the underlying population distribution. So, yes, the central limit theorem plays a strong role in this.

It’s more accurate to say that central limit theorem causes the sampling distribution of the means to converge on the same distribution that the t-test uses, which allows you to assume that the test produces valid results. But, technically, the t-test is based on the t-distribution.

Problems can occur if the underlying distribution is non-normal and you have a small sample size. In that case, the sampling distribution of the means won’t approximate the t-distribution that the t-test uses. However, the test results will assume that it does and produce results based on that–which is why it causes problems!
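A quick sketch shows the convergence Jim describes: the two-tailed 5% critical value of the t-distribution approaches the normal distribution’s 1.96 as the degrees of freedom grow:

    # t-distribution critical values converge to the normal's as df grows.
    from scipy import stats

    print(f"normal: {stats.norm.ppf(0.975):.3f}")            # 1.960
    for df in (5, 10, 25, 100, 1000):
        print(f"t, df = {df:4d}: {stats.t.ppf(0.975, df):.3f}")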

November 19, 2018 at 9:15 am

Dear Jim! Thank you very much for your explanation. I need your help to understand my data. I have two samples (about 300 observations) with skewed distributions. I did the t-test and obtained the p-value, which is quite small. Can I draw the conclusion that the effect size is small even when the distribution of my data is not normal? Thank you

November 19, 2018 at 9:34 am

Hi Tetyana,

First, when you say that your p-value is small and that you want to “draw the conclusion that the effect size is small,” I assume that you mean statistically significant. When the p-value is low, the null hypothesis must go! In other words, you reject the null and conclude that there is a statistically significant effect–not a small effect.

Now, back to the question at hand! Yes, When you have a sufficiently large sample-size, t-tests are robust to departures from normality. For a 2-sample t-test, you should have at least 15 samples per group, which you exceed by quite a bit. So, yes, you can reliably conclude that your results are statistically significant!

You can thank the central limit theorem! 🙂

September 10, 2018 at 12:18 am

Hello Jim, I am very sorry; I have only elementary knowledge of stats. So, would you please explain how you got the p-value of 0.03112 in the above calculation/t-test? By looking at a chart? Would you also explain how you got the information that “you will observe sample effects at least as large as 70.6 about 3.1% of the time if the null is true”?

July 6, 2018 at 7:02 am

A quick question regarding your use of two-tailed critical regions in the article above: why? I mean, what is a real-world scenario that would warrant a two-tailed test of any kind (z, t, etc.)? And if there are none, why keep using the two-tailed scenario as an example, instead of the one-tailed which is both more intuitive and applicable to most if not all practical situations. Just curious, as one person attempting to educate people on stats to another (my take on the one vs. two-tailed tests can be seen here: http://blog.analytics-toolkit.com/2017/one-tailed-two-tailed-tests-significance-ab-testing/ )

Thanks, Georgi

July 6, 2018 at 12:05 pm

There’s the appropriate time and place for both one-tailed and two-tailed tests. I plan to write a post on this issue specifically, so I’ll keep my comments here brief.

So much of statistics is context sensitive. People often want concrete rules for how to do things in statistics but that’s often hard to provide because the answer depends on the context, goals, etc. The question of whether to use a one-tailed or two-tailed test falls firmly in this category of it depends.

I did read the article you wrote. I’ll say that I can see how in the context of A/B testing specifically there might be a propensity to use one-tailed tests. You only care about improvements. There’s probably not too much downside in only caring about one direction. In fact, in a post where I compare different tests and different options , I suggest using a one-tailed test for a similar type of case involving defects. So, I’m onboard with the idea of using one-tailed tests when they’re appropriate. However, I do think that two-tailed tests should be considered the default choice and that you need good reasons to move to a one-tailed test. Again, your A/B testing area might supply those reasons on a regular basis, but I can’t make that a blanket statement for all research areas.

I think your article mischaracterizes some of the pros and cons of both types of tests. Just a couple of for instances. In a two-tailed test, you don’t have to take the same action regardless of which direction the results are significant (example below). And, yes, you can determine the direction of the effect in a two-tailed test. You simply look at the estimated effect. Is it positive or negative?

On the other hand, I do agree that one-tailed tests don’t increase the overall Type I error. However, there is a big caveat for that. In a two-tailed test, the Type I error rate is evenly split in both tails. For a one-tailed test, the overall Type I error rate does not change, but the Type I errors are redistributed so they all occur in the direction that you are interested in rather than being split between the positive and negative directions. In other words, you’ll have twice as many Type I errors in the specific direction that you’re interested in. That’s not good.

My big concern with one-tailed tests is that they make it easier to obtain the results that you want to obtain. And all of the Type I errors (false positives) are in that direction too. It’s just not a good combination.

To answer your question about when you might want to use two-tailed tests, there are plenty of reasons. For one, you might want to avoid the situation I describe above. Additionally, in a lot of scientific research, the researchers truly are interested in detecting effects in either direction for the sake of science. Even in cases with a practical application, you might want to learn about effects in either direction.

For example, I was involved in a research study that looked at the effects of an exercise intervention on bone density. The idea was that it might be a good way to prevent osteoporosis. I used a two-tailed test. Obviously, we’re hoping that there was a positive effect. However, we’d be very interested in knowing whether there was a negative effect too. And, this illustrates how you can have different actions based on both directions. If there was a positive effect, you can recommend that as a good approach and try to promote its use. If there’s a negative effect, you’d issue a warning to not do that intervention. You have the potential for learning both what is good and what is bad. The extra false positives would’ve caused problems because we’d think that there’d be health benefits for participants when those benefits don’t actually exist. Also, if we had performed only a one-tailed test and didn’t obtain significant results, we’d learn that it wasn’t a positive effect, but we would not know whether it was actually detrimental or not.

Here’s when I’d say it’s OK to use a one-tailed test. Consider a one-tailed test when you’re in situation where you truly only need to know whether an effect exists in one direction, and the extra Type I errors in that direction are an acceptable risk (false positives don’t cause problems), and there’s no benefit in determining whether an effect exists in the other direction. Those conditions really restrict when one-tailed tests are the best choice. Again, those restrictions might not be relevant for your specific field, but as for the usage of statistics as a whole, they’re absolutely crucial to consider.

On the other hand, according to this article, two-tailed tests might be important in A/B testing !

March 30, 2018 at 5:29 am

Dear Sir, please confirm whether there is an inadvertent mistake in the interpretation, “We can conclude that mean fuel expenditures have increased since last year.” Our null hypothesis is μ = 260. If found significant, it implies two possibilities: either an increase or a decrease. Please let us know if we are mistaken here. Many thanks!

March 30, 2018 at 9:59 am

Hi Khalid, the null hypothesis as it is defined for this test represents the mean monthly expenditure for the previous year (260). The mean expenditure for the current year is 330.6, whereas it was 260 for the previous year. Consequently, the mean has increased from 260 to 330.6 over the course of a year. The p-value indicates that this increase is statistically significant. This finding does not suggest both an increase and a decrease–just an increase. Keep in mind that a significant result prompts us to reject the null hypothesis. So, we reject the null that the mean equals 260.

Let’s explore the other possible findings to be sure that this makes sense. Suppose the sample mean had been closer to 260 and the p-value was greater than the significance level, those results would indicate that the results were not statistically significant. The conclusion that we’d draw is that we have insufficient evidence to conclude that mean fuel expenditures have changed since the previous year.

If the sample mean were less than the null hypothesis value (260) and the p-value were statistically significant, we’d conclude that mean fuel expenditures have decreased and that this decrease is statistically significant.

When you interpret the results, you have to be sure to understand what the null hypothesis represents. In this case, it represents the mean monthly expenditure for the previous year and we’re comparing this year’s mean to it–hence our sample suggests an increase.


9.3 - The P-Value Approach

Example 9-4

Up until now, we have used the critical region approach in conducting our hypothesis tests. Now, let's take a look at an example in which we use what is called the P -value approach .

Among patients with lung cancer, usually, 90% or more die within three years. As a result of new forms of treatment, it is felt that this rate has been reduced. In a recent study of n = 150 lung cancer patients, y = 128 died within three years. Is there sufficient evidence at the \(\alpha = 0.05\) level, say, to conclude that the death rate due to lung cancer has been reduced?

The sample proportion is:

\(\hat{p}=\dfrac{128}{150}=0.853\)

The null and alternative hypotheses are:

\(H_0 \colon p = 0.90\) and \(H_A \colon p < 0.90\)

The test statistic is, therefore:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}=\dfrac{0.853-0.90}{\sqrt{\dfrac{0.90(0.10)}{150}}}=-1.92\)

And, for \(\alpha = 0.05\), the rejection region is Z ≤ −1.645.

Since the test statistic Z = −1.92 < −1.645, we reject the null hypothesis. There is sufficient evidence at the \(\alpha = 0.05\) level to conclude that the rate has been reduced.
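As a sanity check, the whole calculation fits in a few lines. This is a sketch in Python with scipy (my choice of tool, not something the text prescribes):

    # One-proportion z-test for Example 9-4 (left-tailed, alpha = 0.05).
    import math
    from scipy import stats

    p0, n, y = 0.90, 150, 128
    p_hat = y / n                                      # 0.8533...
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)    # about -1.91
    # (the text rounds p-hat to 0.853 first, which gives -1.92)

    critical = stats.norm.ppf(0.05)                    # -1.645
    print(f"z = {z:.2f}, reject H0: {z <= critical}")  # reject H0: True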

Example 9-4 (continued)

What if we set the significance level \(\alpha\) = P (Type I Error) to 0.01? Is there still sufficient evidence to conclude that the death rate due to lung cancer has been reduced?

In this case, with \(\alpha = 0.01\), the rejection region is Z ≤ −2.33. That is, we reject if the test statistic falls in the rejection region defined by Z ≤ −2.33:

Because the test statistic Z = −1.92 > −2.33, we do not reject the null hypothesis. There is insufficient evidence at the \(\alpha = 0.01\) level to conclude that the rate has been reduced.

In the first part of this example, we rejected the null hypothesis when \(\alpha = 0.05\). And, in the second part of this example, we failed to reject the null hypothesis when \(\alpha = 0.01\). There must be some level of \(\alpha\), then, in which we cross the threshold from rejecting to not rejecting the null hypothesis. What is the smallest \(\alpha \text{ -level}\) that would still cause us to reject the null hypothesis?

We would, of course, reject any time our test statistic −1.92 fell at or below the critical value.

That is, we would reject if the critical value were −1.645, −1.83, or −1.92. But we wouldn't reject if the critical value were −1.93. The \(\alpha \text{ -level}\) associated with the test statistic −1.92 is called the P -value . It is the smallest \(\alpha \text{ -level}\) that would lead to rejection. In this case, the P -value is:

P ( Z < −1.92) = 0.0274
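In code, this is a single call to the standard normal CDF (a scipy sketch):

    # P-value for the left-tailed test: P(Z < -1.92).
    from scipy import stats

    p_value = stats.norm.cdf(-1.92)
    print(f"p = {p_value:.4f}")                # 0.0274

    # It is the smallest alpha that still leads to rejection:
    print(p_value <= 0.05, p_value <= 0.01)    # True, False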

So far, all of the examples we've considered have involved a one-tailed hypothesis test in which the alternative hypothesis involved either a less than (<) or a greater than (>) sign. What happens if we weren't sure of the direction in which the proportion could deviate from the hypothesized null value? That is, what if the alternative hypothesis involved a not-equal sign (≠)? Let's take a look at an example.

What if we wanted to perform a " two-tailed " test? That is, what if we wanted to test:

\(H_0 \colon p = 0.90\) versus \(H_A \colon p \ne 0.90\)

at the \(\alpha = 0.05\) level?

Let's first consider the critical value approach . If we allow for the possibility that the sample proportion could either prove to be too large or too small, then we need to specify a threshold value, that is, a critical value, in each tail of the distribution. In this case, we divide the " significance level " \(\alpha\) by 2 to get \(\alpha/2\):

That is, our rejection rule is that we should reject the null hypothesis \(H_0 \text{ if } Z ≥ 1.96\) or we should reject the null hypothesis \(H_0 \text{ if } Z ≤ −1.96\). Alternatively, we can write that we should reject the null hypothesis \(H_0 \text{ if } |Z| ≥ 1.96\). Because our test statistic is −1.92, we just barely fail to reject the null hypothesis, because 1.92 < 1.96. In this case, we would say that there is insufficient evidence at the \(\alpha = 0.05\) level to conclude that the sample proportion differs significantly from 0.90.

Now for the P -value approach . Again, needing to allow for the possibility that the sample proportion is either too large or too small, we multiply the P -value we obtain for the one-tailed test by 2:

That is, the P -value is:

\(P=P(|Z|\geq 1.92)=P(Z>1.92 \text{ or } Z<-1.92)=2 \times 0.0274=0.055\)

Because the P -value 0.055 is (just barely) greater than the significance level \(\alpha = 0.05\), we barely fail to reject the null hypothesis. Again, we would say that there is insufficient evidence at the \(\alpha = 0.05\) level to conclude that the sample proportion differs significantly from 0.90.
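The doubling step is equally mechanical; a brief scipy sketch:

    # Two-tailed p-value: P(|Z| >= 1.92) = 2 * P(Z > 1.92).
    from scipy import stats

    p_two_tailed = 2 * stats.norm.sf(1.92)
    print(f"p = {p_two_tailed:.4f}")   # about 0.0549, reported as 0.055 above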

Let's close this example by formalizing the definition of a P -value, as well as summarizing the P -value approach to conducting a hypothesis test.

The P -value is the smallest significance level \(\alpha\) that leads us to reject the null hypothesis.

Alternatively (and the way I prefer to think of P -values), the P -value is the probability that we'd observe a more extreme statistic than we did if the null hypothesis were true.

If the P -value is small, that is, if \(P ≤ \alpha\), then we reject the null hypothesis \(H_0\).

Note!

By the way, to test \(H_0 \colon p = p_0\), some statisticians will use the test statistic:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}}\)

rather than the one we've been using:

\(Z=\dfrac{\hat{p}-p_0}{\sqrt{\dfrac{p_0(1-p_0)}{n}}}\)

One advantage of doing so is that the interpretation of the confidence interval — does it contain \(p_0\)? — is always consistent with the hypothesis test decision, as illustrated here:

For the sake of ease, let:

\(se(\hat{p})=\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}\)

Two-tailed test. In this case, the critical region approach tells us to reject the null hypothesis \(H_0 \colon p = p_0\) against the alternative hypothesis \(H_A \colon p \ne p_0\):

if \(Z=\dfrac{\hat{p}-p_0}{se(\hat{p})} \geq z_{\alpha/2}\) or if \(Z=\dfrac{\hat{p}-p_0}{se(\hat{p})} \leq -z_{\alpha/2}\)

which is equivalent to rejecting the null hypothesis:

if \(\hat{p}-p_0 \geq z_{\alpha/2}se(\hat{p})\) or if \(\hat{p}-p_0 \leq -z_{\alpha/2}se(\hat{p})\)

or, solving for \(p_0\):

if \(p_0 \geq \hat{p}+z_{\alpha/2}se(\hat{p})\) or if \(p_0 \leq \hat{p}-z_{\alpha/2}se(\hat{p})\)

That's the same as saying that we should reject the null hypothesis \(H_0 \text{ if } p_0\) is not in the \(\left(1-\alpha\right)100\%\) confidence interval!

Left-tailed test. In this case, the critical region approach tells us to reject the null hypothesis \(H_0 \colon p = p_0\) against the alternative hypothesis \(H_A \colon p < p_0\):

if \(Z=\dfrac{\hat{p}-p_0}{se(\hat{p})} \leq -z_{\alpha}\)

which is equivalent to rejecting the null hypothesis:

if \(\hat{p}-p_0 \leq -z_{\alpha}se(\hat{p})\)

or, solving for \(p_0\):

if \(p_0 \geq \hat{p}+z_{\alpha}se(\hat{p})\)

That's the same as saying that we should reject the null hypothesis \(H_0 \text{ if } p_0\) is not in the upper \(\left(1-\alpha\right)100\%\) confidence interval:

\((0,\hat{p}+z_{\alpha}se(\hat{p}))\)
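Here is a short check of that consistency on Example 9-4's data, using the \(\hat{p}\)-based standard error as the note suggests. This is my own illustration, not part of the original text:

    # With the p-hat-based standard error, the left-tailed test decision
    # always agrees with the one-sided confidence interval (Example 9-4).
    import math
    from scipy import stats

    p0, n, y, alpha = 0.90, 150, 128, 0.05
    p_hat = y / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)

    z = (p_hat - p0) / se                         # about -1.62
    reject = z <= stats.norm.ppf(alpha)           # False: -1.62 > -1.645

    upper = p_hat + stats.norm.ppf(1 - alpha) * se
    print(f"reject H0: {reject}, p0 in (0, {upper:.4f}): {p0 <= upper}")

With this standard error the test just barely fails to reject, and, consistently, \(p_0 = 0.90\) just barely falls inside the one-sided interval.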

P-Value: What It Is, How to Calculate It, and Why It Matters


In statistics, a p-value indicates the likelihood of obtaining a result at least as extreme as the observed result if the null hypothesis is true.

The p-value serves as an alternative to rejection points to provide the smallest level of significance at which the null hypothesis would be rejected. A smaller p-value means stronger evidence in favor of the alternative hypothesis.

P-value is often used to promote credibility for studies or reports by government agencies. For example, the U.S. Census Bureau stipulates that any analysis with a p-value greater than 0.10 must be accompanied by a statement that the difference is not statistically different from zero. The Census Bureau also has standards in place stipulating which p-values are acceptable for various publications.

Key Takeaways

  • A p-value is a statistical measurement used to validate a hypothesis against observed data.
  • A p-value measures the probability of obtaining the observed results, assuming that the null hypothesis is true.
  • The lower the p-value, the greater the statistical significance of the observed difference.
  • A p-value of 0.05 or lower is generally considered statistically significant.
  • P-value can serve as an alternative to—or in addition to—preselected confidence levels for hypothesis testing.

P-values are usually calculated using statistical software or p-value tables based on the assumed or known probability distribution of the specific statistic tested. While the sample size influences the reliability of the observed data, the p-value approach to hypothesis testing specifically involves calculating the p-value based on the deviation between the observed value and a chosen reference value, given the probability distribution of the statistic. A greater difference between the two values corresponds to a lower p-value.

Mathematically, the p-value is calculated using integral calculus from the area under the probability distribution curve for all values of statistics that are at least as far from the reference value as the observed value is, relative to the total area under the probability distribution curve. Standard deviations, which quantify the dispersion of data points from the mean, are instrumental in this calculation.

The calculation for a p-value varies based on the type of test performed. The three test types describe the location on the probability distribution curve: lower-tailed test, upper-tailed test, or two-tailed test . In each case, the degrees of freedom play a crucial role in determining the shape of the distribution and thus, the calculation of the p-value.

In a nutshell, the greater the difference between two observed values, the less likely it is that the difference is due to simple random chance, and this is reflected by a lower p-value.
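For a test statistic with an approximately standard normal distribution, the three cases reduce to simple tail-area calls. A minimal sketch, with an arbitrary example value for the statistic:

    # P-values for the three test types, for a hypothetical z statistic.
    from scipy import stats

    z = -1.92   # example observed test statistic

    lower_tailed = stats.norm.cdf(z)            # P(Z <= z)
    upper_tailed = stats.norm.sf(z)             # P(Z >= z)
    two_tailed   = 2 * stats.norm.sf(abs(z))    # P(|Z| >= |z|)
    print(lower_tailed, upper_tailed, two_tailed)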

The P-Value Approach to Hypothesis Testing

The p-value approach to hypothesis testing uses the calculated probability to determine whether there is evidence to reject the null hypothesis. This determination relies heavily on the test statistic, which summarizes the information from the sample relevant to the hypothesis being tested. The null hypothesis, also known as the conjecture, is the initial claim about a population (or data-generating process). The alternative hypothesis states whether the population parameter differs from the value of the population parameter stated in the conjecture.

In practice, the significance level is stated in advance to determine how small the p-value must be to reject the null hypothesis. Because different researchers use different levels of significance when examining a question, a reader may sometimes have difficulty comparing results from two different tests. P-values provide a solution to this problem.

Even a low p-value is not necessarily proof of statistical significance, since there is still a possibility that the observed data are the result of chance. Only repeated experiments or studies can confirm if a relationship is statistically significant.

For example, suppose a study comparing returns from two particular assets was undertaken by different researchers who used the same data but different significance levels. The researchers might come to opposite conclusions regarding whether the assets differ.

If one researcher used a confidence level of 90% and the other required a confidence level of 95% to reject the null hypothesis, and if the p-value of the observed difference between the two returns was 0.08 (corresponding to a confidence level of 92%), then the first researcher would find that the two assets have a difference that is statistically significant , while the second would find no statistically significant difference between the returns.

To avoid this problem, the researchers could report the p-value of the hypothesis test and allow readers to interpret the statistical significance themselves. This is called a p-value approach to hypothesis testing. Independent observers could note the p-value and decide for themselves whether that represents a statistically significant difference or not.

Example of P-Value

An investor claims that their investment portfolio’s performance is equivalent to that of the Standard & Poor’s (S&P) 500 Index . To determine this, the investor conducts a two-tailed test.

The null hypothesis states that the portfolio’s returns are equivalent to the S&P 500’s returns over a specified period, while the alternative hypothesis states that the portfolio’s returns and the S&P 500’s returns are not equivalent—if the investor conducted a one-tailed test , the alternative hypothesis would state that the portfolio’s returns are either less than or greater than the S&P 500’s returns.

The p-value hypothesis test does not necessarily make use of a preselected confidence level at which the investor should reject the null hypothesis that the returns are equivalent. Instead, it provides a measure of how much evidence there is to reject the null hypothesis. The smaller the p-value, the greater the evidence against the null hypothesis.

Thus, if the investor finds that the p-value is 0.001, there is strong evidence against the null hypothesis, and the investor can confidently conclude that the portfolio’s returns and the S&P 500’s returns are not equivalent.

Although this does not provide an exact threshold as to when the investor should accept or reject the null hypothesis, it does have another very practical advantage. P-value hypothesis testing offers a direct way to compare the relative confidence that the investor can have when choosing among multiple different types of investments or portfolios relative to a benchmark such as the S&P 500.

For example, for two portfolios, A and B, whose performance differs from the S&P 500 with p-values of 0.10 and 0.01, respectively, the investor can be much more confident that portfolio B, with a lower p-value, will actually show consistently different results.

Is a 0.05 P-Value Significant?

A p-value less than 0.05 is typically considered to be statistically significant, in which case the null hypothesis should be rejected. A p-value greater than 0.05 means that deviation from the null hypothesis is not statistically significant, and the null hypothesis is not rejected.

What Does a P-Value of 0.001 Mean?

A p-value of 0.001 indicates that if the null hypothesis tested were indeed true, then there would be a one-in-1,000 chance of observing results at least as extreme. This leads the observer to reject the null hypothesis because either a highly rare data result has been observed or the null hypothesis is incorrect.

How Can You Use P-Value to Compare 2 Different Results of a Hypothesis Test?

If you have two different results, one with a p-value of 0.04 and one with a p-value of 0.06, the result with a p-value of 0.04 will be considered more statistically significant than the p-value of 0.06. Beyond this simplified example, you could compare a 0.04 p-value to a 0.001 p-value. Both are statistically significant, but the 0.001 example provides an even stronger case against the null hypothesis than the 0.04.

The p-value is used to measure the significance of observational data. When researchers identify an apparent relationship between two variables, there is always a possibility that this correlation might be a coincidence. A p-value calculation helps determine if the observed relationship could arise as a result of chance.


What is P-Value? – Understanding the meaning, math and methods

  • October 12, 2019
  • Selva Prabhakaran

P Value is a probability score that is used in statistical tests to establish the statistical significance of an observed effect. Though p-values are commonly used, their definition and meaning are often not very clear even to experienced statisticians and data scientists. In this post I will attempt to explain the intuition behind p-value as clearly as possible.


Introduction

In Data Science interviews, one of the frequently asked questions is 'What is P-Value?'.

Believe it or not, even experienced Data Scientists often fail to answer this question. This is partly because of the way statistics is taught and the definitions available in textbooks and online sources.

According to the American Statistical Association, "a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value."

That’s hard to grasp, yes?

Alright, let's understand what the p-value really is, in small meaningful pieces, so that ultimately it all makes sense.

When and how is p-value used?

To understand p-value, you need to understand some background and context behind it. So, let’s start with the basics.

p-values are often reported whenever you perform a statistical significance test (like a t-test, chi-square test, etc.). These tests typically return a computed test statistic and the associated p-value. This reported value is used to establish the statistical significance of the relationships being tested.

So, whenever you see a p-value, there is an associated statistical test.

That means a hypothesis test is being conducted, with a defined Null Hypothesis (H0) and a corresponding Alternate Hypothesis (HA).

The p-value reported is used to make a decision on whether the null hypothesis being tested can be rejected or not.

Let’s understand a little bit more about the null and alternate hypothesis.

Now, how to frame a Null hypothesis in general?


While the null hypothesis itself changes with every statistical test, there is a general principle to frame it:

The null hypothesis assumes there is ‘no effect’ or ‘relationship’ by default .

For example: if you are testing whether a drug treatment is effective or not, then the null hypothesis will assume there is no difference in outcome between the treated and untreated groups. Likewise, if you are testing whether one variable influences another (say, car weight influences the mileage), then the null hypothesis will postulate that there is no relationship between the two.

It simply implies the absence of an effect.

Examples of Statistical Tests reporting out p-value

Here are some examples of Null hypothesis (H0) for popular statistical tests:

  • Welch Two Sample t-Test: The true difference in means of two samples is equal to 0
  • Linear Regression: The beta coefficient (slope) of the X variable is zero
  • Chi Square test: There is no difference between expected frequencies and observed frequencies.

Get the feel?

But what would the alternate hypothesis look like?

The alternate hypothesis (HA) is always framed to negate the null hypothesis. The corresponding HA for the above tests are as follows (a runnable sketch follows the list):

  • Welch Two Sample t-Test: The true difference in means of two samples is NOT equal to 0
  • Linear Regression: The beta coefficient (slope) of the X variable is NOT zero
  • Chi Square test: The difference between expected frequencies and observed frequencies is NOT zero.
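As a minimal sketch of how such tests report p-values in practice (not from the original post; all sample data below is made up for illustration), here is how scipy exposes the three tests above:

```python
# Each test returns a test statistic together with its p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=40)
b = rng.normal(loc=0.5, scale=1.5, size=35)

# Welch two-sample t-test: H0 says the true difference in means is 0.
t_stat, p_welch = stats.ttest_ind(a, b, equal_var=False)

# Chi-square goodness-of-fit: H0 says observed frequencies match expected.
chi2_stat, p_chi2 = stats.chisquare(f_obs=[18, 22, 20], f_exp=[20, 20, 20])

# Simple linear regression: H0 says the slope (beta) of x is zero.
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)
result = stats.linregress(x, y)

print(p_welch, p_chi2, result.pvalue)
```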

What p-value really is

Now, back to the discussion on p-value.

Along with every statistical test, you will get a corresponding p-value in the results output.

What is this meant for?

It is used to determine if the data is statistically incompatible with the null hypothesis.

Not clear eh?

Let me put it in another way.

The P Value basically helps to answer the question: ‘Does the data really represent the observed effect?’.

This leads us to a more mathematical definition of P-Value.

The P Value is the probability of seeing the effect (E) when the null hypothesis is true.

p-value = Pr(E | H0 is true)

If you think about it, we want this probability to be very low.

Having said that, it is important to remember that the p-value refers not only to what we observed but also to observations more extreme than what was observed. That is why the formal definition of p-value contains the statement 'would be equal to or more extreme than its observed value.'

How is p-value used to establish statistical significance

Now you know that the p-value measures the probability of seeing the effect when the null hypothesis is true.

A sufficiently low value is required to reject the null hypothesis.

Notice how I have used the term ‘Reject the Null Hypothesis’ instead of stating the ‘Alternate Hypothesis is True’.

That’s because, we have tested the effect against the null hypothesis only.

So, when the p-value is low enough, we reject the null hypothesis and conclude the observed effect holds.

But how low is ‘low enough’ for rejecting the null hypothesis?

This level of ‘low enough’ cutoff is called the alpha level, and you need to decide it before conducting a statistical test.


Practical Guidelines to set the cutoff of Statistical Significance (alpha level)

Let’s first understand what is Alpha level.

It is the cutoff probability for the p-value to establish statistical significance for a given hypothesis test. For an observed effect to be considered statistically significant, the p-value of the test should be lower than the pre-decided alpha value.

Typically, for most statistical tests (but not always), alpha is set at 0.05.

In that case, the p-value has to be less than 0.05 for the result to be considered statistically significant.

What happens if it is, say, 0.051?

It is still considered not significant. We do NOT call it 'weakly significant'. It is either black or white: there is no gray with respect to statistical significance.

Now, how to set the alpha level?

Well, the usual practice is to set it to 0.05.

But when the occurrence of the event is rare, you may want to set a very low alpha. The rarer it is, the lower the alpha.

For example in the CERN’s Hadron collider experiment to detect Higgs-Boson particles(which was very rare), the alpha level was set so low to 5 Sigma levels , which means a p value of less than 3 * 10^-7 is required reject the null hypothesis.

Whereas for a more likely event, it can go up to 0.1.

Secondly, the more samples (observations) you have, the lower the alpha level should be, because even a small effect can be made to produce a low p-value just by increasing the number of observations. The opposite is also true: a large effect can be made to produce a high p-value by reducing the sample size.
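Here is a small simulation sketching that point (illustrative numbers only): the same tiny true effect tends to produce ever smaller p-values as the sample size grows.

```python
# One-sample t-test against a true mean shift of 0.1: as n grows,
# the p-value for this fixed, tiny effect tends to shrink.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
for n in (20, 200, 2000):
    sample = rng.normal(loc=0.1, scale=1.0, size=n)
    t_stat, p = stats.ttest_1samp(sample, popmean=0.0)
    print(f"n = {n:5d}  p = {p:.4f}")
```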

In case you don’t know how likely the event can occur, its a common practice to set it as 0.05. But, as a thumb rule, never set the alpha greater than 0.1.

Having said that, alpha = 0.05 is mostly an arbitrary choice. Then why do most people still use 0.05? Because that's what is taught in college courses and has traditionally been used by the scientific community and publishers.

What P Value is Not

Given the uncertainty around the meaning of the p-value, it is very commonly misinterpreted and used incorrectly.

Some of the common misconceptions are as follows:

  • P-Value is the probability of making a mistake. Wrong!
  • P-Value measures the importance of a variable. Wrong!
  • P-Value measures the strength of an effect. Wrong!

A smaller p-value does not signify that the variable is more important, or even that the effect is stronger.

Because, as mentioned earlier, any effect, no matter how small, can be made to produce a smaller p-value just by increasing the number of observations (sample size).

Likewise, a larger p-value does not imply that a variable is unimportant.

For sound communication, it is necessary to report not just the p-value but also the sample size along with it. This is especially necessary if the experiments involve different sample sizes.

Secondly, inferences and business decisions should not be based solely on the p-value being lower than the alpha level.

Analysts should understand the business context, look at the larger picture, and bring out the reasoning before making an inference, rather than relying on the p-value alone.

Does this mean the p-value is not useful anymore?

Not really. It is a useful tool because it provides an objective standard for everyone to assess. It's just that you need to use it the right way.

Example: How to find p-value for linear regression

Linear regression is a traditional statistical modeling algorithm that is used to predict a continuous variable (a.k.a. the dependent variable) using one or more explanatory variables.

Let's see an example of extracting the p-value with linear regression using the mtcars dataset. In this dataset, the specifications of the vehicles and their mileage performance are recorded.

We want to use linear regression to test whether one of the specs, the weight (wt) of the vehicle, has a significant linear relationship with the mileage (mpg).

This can be conveniently done using Python's statsmodels library. But first, let's load the data.

With statsmodels library
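The loading code itself is not shown here, so the following is a minimal sketch that reproduces the printed head below. It assumes the data comes from R's built-in mtcars dataset (fetched via statsmodels) and that the mpg column was square-root transformed, which matches the values shown:

```python
# Load mtcars and prepare the X (wt) and Y (mpg) columns.
import numpy as np
import pandas as pd
import statsmodels.api as sm

mtcars = sm.datasets.get_rdataset("mtcars", "datasets").data
df = pd.DataFrame({"mpg": np.sqrt(mtcars["mpg"]),  # sqrt transform (assumed)
                   "wt": mtcars["wt"]}).reset_index(drop=True)
print(df.head())
```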

  mpg wt
0 4.582576 2.620
1 4.582576 2.875
2 4.774935 2.320
3 4.626013 3.215
4 4.324350 3.440

The X (wt) and Y (mpg) variables are ready.

Null Hypothesis (H0): The slope of the line of best fit (a.k.a. the beta coefficient) is zero.
Alternate Hypothesis (H1): The beta coefficient is not zero.

To implement the test, use the smf.ols() function available in the formula.api of statsmodels. You can pass in the formula itself as the first argument and call fit() to train the linear model.

Once the model is trained, call model.summary() to get a comprehensive view of the statistics.

The p-value is located under the P>|t| column, in the wt row. If you want to extract that value into a variable, use model.pvalues.
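Putting those steps together (a sketch, reusing the df built in the loading step above):

```python
# Fit mpg ~ wt by ordinary least squares and pull out the slope's p-value.
import statsmodels.formula.api as smf

model = smf.ols("mpg ~ wt", data=df).fit()
print(model.summary())        # full table; see P>|t| in the wt row
print(model.pvalues["wt"])    # just the p-value for the wt slope
```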

Since the p-value is much lower than the significance level (0.01), we reject the null hypothesis that the slope is zero and conclude that the data really does reflect the effect.

Well, that was just one example of computing a p-value.

P-values can be associated with numerous statistical tests. If you are interested in finding out more about how they are used, see more examples of statistical tests with p-values.

In this post we covered what exactly a p-value is, and how and how not to use it. We also saw a Python example of computing the p-value associated with linear regression.

Now, with this understanding, let's conclude: what is the difference between a statistical model and a machine learning model?

Well, while both statistical and machine learning models are used to make predictions, there can be many differences between the two. But most simply put, any predictive model that has p-values associated with it is considered a statistical model.

Happy learning!

To understand how exactly the P-value is computed, check out the example using the T-Test .



Understanding Hypothesis Tests: Significance Levels (Alpha) and P values in Statistics

Topics: Hypothesis Testing, Statistics

What do significance levels and P values mean in hypothesis tests? What is statistical significance anyway? In this post, I’ll continue to focus on concepts and graphs to help you gain a more intuitive understanding of how hypothesis tests work in statistics.

To bring it to life, I’ll add the significance level and P value to the graph in my previous post in order to perform a graphical version of the 1 sample t-test. It’s easier to understand when you can see what statistical significance truly means!

Here’s where we left off in my last post . We want to determine whether our sample mean (330.6) indicates that this year's average energy cost is significantly different from last year’s average energy cost of $260.

Descriptive statistics for the example

The probability distribution plot above shows the distribution of sample means we’d obtain under the assumption that the null hypothesis is true (population mean = 260) and we repeatedly drew a large number of random samples.

I left you with a question: where do we draw the line for statistical significance on the graph? Now we'll add in the significance level and the P value, which are the decision-making tools we'll need.

We'll use these tools to test the following hypotheses:

  • Null hypothesis: The population mean equals the hypothesized mean (260).
  • Alternative hypothesis: The population mean differs from the hypothesized mean (260).

What Is the Significance Level (Alpha)?

The significance level, also denoted as alpha or α, is the probability of rejecting the null hypothesis when it is true. For example, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference.

These types of definitions can be hard to understand because of their technical nature. A picture makes the concepts much easier to comprehend!

The significance level determines how far out from the null hypothesis value we'll draw that line on the graph. To graph a significance level of 0.05, we need to shade the 5% of the distribution that is furthest away from the null hypothesis.

Probability plot that shows the critical regions for a significance level of 0.05

In the graph above, the two shaded areas are equidistant from the null hypothesis value and each area has a probability of 0.025, for a total of 0.05. In statistics, we call these shaded areas the critical region for a two-tailed test. If the population mean is 260, we’d expect to obtain a sample mean that falls in the critical region 5% of the time. The critical region defines how far away our sample statistic must be from the null hypothesis value before we can say it is unusual enough to reject the null hypothesis.

Our sample mean (330.6) falls within the critical region, which indicates it is statistically significant at the 0.05 level.

We can also see if it is statistically significant using the other common significance level of 0.01.

Probability plot that shows the critical regions for a significance level of 0.01

The two shaded areas each have a probability of 0.005, which adds up to a total probability of 0.01. This time our sample mean does not fall within the critical region and we fail to reject the null hypothesis. This comparison shows why you need to choose your significance level before you begin your study. It protects you from choosing a significance level because it conveniently gives you significant results!

Thanks to the graph, we were able to determine that our results are statistically significant at the 0.05 level without using a P value. However, when you use the numeric output produced by statistical software , you’ll need to compare the P value to your significance level to make this determination.


What Are P values?

P-values are the probability of obtaining an effect at least as extreme as the one in your sample data, assuming the truth of the null hypothesis.

This definition of P values, while technically correct, is a bit convoluted. It’s easier to understand with a graph!

To graph the P value for our example data set, we need to determine the distance between the sample mean and the null hypothesis value (330.6 - 260 = 70.6). Next, we can graph the probability of obtaining a sample mean that is at least as extreme in both tails of the distribution (260 +/- 70.6).

Probability plot that shows the p-value for our sample mean

In the graph above, the two shaded areas each have a probability of 0.01556, for a total probability of 0.03112. This probability represents the likelihood of obtaining a sample mean that is at least as extreme as our sample mean in both tails of the distribution if the population mean is 260. That's our P value!

When a P value is less than or equal to the significance level, you reject the null hypothesis. If we take the P value for our example and compare it to the common significance levels, it matches the previous graphical results. The P value of 0.03112 is statistically significant at an alpha level of 0.05, but not at the 0.01 level.

If we stick to a significance level of 0.05, we can conclude that the average energy cost for the population differs from 260 (with the sample suggesting it is higher).

A common mistake is to interpret the P-value as the probability that the null hypothesis is true. To understand why this interpretation is incorrect, please read my blog post  How to Correctly Interpret P Values .

Discussion about Statistically Significant Results

A hypothesis test evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data. A test result is statistically significant when the sample statistic is unusual enough relative to the null hypothesis that we can reject the null hypothesis for the entire population. “Unusual enough” in a hypothesis test is defined by:

  • The assumption that the null hypothesis is true—the graphs are centered on the null hypothesis value.
  • The significance level—how far out do we draw the line for the critical region?
  • Our sample statistic—does it fall in the critical region?

Keep in mind that there is no magic significance level that distinguishes between the studies that have a true effect and those that don’t with 100% accuracy. The common alpha values of 0.05 and 0.01 are simply based on tradition. For a significance level of 0.05, expect to obtain sample means in the critical region 5% of the time when the null hypothesis is true . In these cases, you won’t know that the null hypothesis is true but you’ll reject it because the sample mean falls in the critical region. That’s why the significance level is also referred to as an error rate!

This type of error doesn’t imply that the experimenter did anything wrong or require any other unusual explanation. The graphs show that when the null hypothesis is true, it is possible to obtain these unusual sample means for no reason other than random sampling error. It’s just luck of the draw.

Significance levels and P values are important tools that help you quantify and control this type of error in a hypothesis test. Using these tools to decide when to reject the null hypothesis increases your chance of making the correct decision.

If you like this post, you might want to read the other posts in this series that use the same graphical framework:

  • Previous: Why We Need to Use Hypothesis Tests
  • Next: Confidence Intervals and Confidence Levels

If you'd like to see how I made these graphs, please read: How to Create a Graphical Version of the 1-sample t-Test .


p-value Calculator


Welcome to our p-value calculator! You will never again have to wonder how to find the p-value, as here you can determine the one-sided and two-sided p-values from test statistics, following all the most popular distributions: normal, t-Student, chi-squared, and Snedecor's F.

P-values appear all over science, yet many people find the concept a bit intimidating. Don't worry – in this article, we will explain not only what the p-value is but also how to interpret p-values correctly . Have you ever been curious about how to calculate the p-value by hand? We provide you with all the necessary formulae as well!

🙋 If you want to revise some basics from statistics, our normal distribution calculator is an excellent place to start.

What is p-value?

Formally, the p-value is the probability that the test statistic will produce values at least as extreme as the value it produced for your sample. It is crucial to remember that this probability is calculated under the assumption that the null hypothesis H0 is true!

More intuitively, p-value answers the question:

Assuming that I live in a world where the null hypothesis holds, how probable is it that, for another sample, the test I'm performing will generate a value at least as extreme as the one I observed for the sample I already have?

It is the alternative hypothesis that determines what "extreme" actually means, so the p-value depends on the alternative hypothesis that you state: left-tailed, right-tailed, or two-tailed. In the formulas below, S stands for a test statistic, x for the value it produced for a given sample, and Pr(event | H0) is the probability of an event, calculated under the assumption that H0 is true:

Left-tailed test: p-value = Pr(S ≤ x | H0)

Right-tailed test: p-value = Pr(S ≥ x | H0)

Two-tailed test:

p-value = 2 × min{Pr(S ≤ x | H0), Pr(S ≥ x | H0)}

(By min{a, b}, we denote the smaller number out of a and b.)

If the distribution of the test statistic under H0 is symmetric about 0, then: p-value = 2 × Pr(S ≥ |x| | H0)

or, equivalently: p-value = 2 × Pr(S ≤ -|x| | H0)

As a picture is worth a thousand words, let us illustrate these definitions. Here, we use the fact that the probability can be neatly depicted as the area under the density curve for a given distribution. We give two sets of pictures: one for a symmetric distribution and the other for a skewed (non-symmetric) distribution.

  • Symmetric case: normal distribution:

p-values for symmetric distribution — left-tailed, right-tailed, and two-tailed tests.

  • Non-symmetric case: chi-squared distribution:

p-values for non-symmetric distribution — left-tailed, right-tailed, and two-tailed tests.

In the last picture (two-tailed p-value for skewed distribution), the area of the left-hand side is equal to the area of the right-hand side.

How do I calculate p-value from test statistic?

To determine the p-value, you need to know the distribution of your test statistic under the assumption that the null hypothesis is true. Then, with the help of the cumulative distribution function (cdf) of this distribution, we can express the probability of the test statistic being at least as extreme as its value x for the sample:

Left-tailed test:

p-value = cdf(x)

Right-tailed test:

p-value = 1 - cdf(x)

Two-tailed test:

p-value = 2 × min{cdf(x), 1 - cdf(x)}

If the distribution of the test statistic under H0 is symmetric about 0, then a two-sided p-value can be simplified to p-value = 2 × cdf(-|x|), or, equivalently, p-value = 2 - 2 × cdf(|x|).
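In code, these three cases look as follows (a sketch using scipy; the distribution and observed value are hypothetical):

```python
# p-values from the cdf of the test statistic's distribution under H0
# (here: t-Student with 12 degrees of freedom, observed value x = -1.8).
from scipy.stats import t

dist = t(df=12)
x = -1.8

left  = dist.cdf(x)                        # left-tailed: cdf(x)
right = dist.sf(x)                         # right-tailed: 1 - cdf(x)
two   = 2 * min(dist.cdf(x), dist.sf(x))   # two-tailed
print(left, right, two)
```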

The probability distributions that are most widespread in hypothesis testing tend to have complicated cdf formulae, and finding the p-value by hand may not be possible. You'll likely need to resort to a computer or to a statistical table, where people have gathered approximate cdf values.

Well, you now know how to calculate the p-value, but… why do you need to calculate this number in the first place? In hypothesis testing, the p-value approach is an alternative to the critical value approach. Recall that the latter requires researchers to pre-set the significance level, α, which is the probability of rejecting the null hypothesis when it is true (i.e., of making a type I error). Once you have your p-value, you just need to compare it with any given α to quickly decide whether or not to reject the null hypothesis at that significance level, α. For details, check the next section, where we explain how to interpret p-values.

How to interpret p-value

As we have mentioned above, the p-value is the answer to the following question: assuming that the null hypothesis is true, how probable is it that, for another sample, the test you're performing will generate a value at least as extreme as the one you observed?

What does that mean for you? Well, you've got two options:

  • A high p-value means that your data is highly compatible with the null hypothesis; and
  • A small p-value provides evidence against the null hypothesis , as it means that your result would be very improbable if the null hypothesis were true.

However, it may happen that the null hypothesis is true, but your sample is highly unusual! For example, imagine we studied the effect of a new drug and got a p-value of 0.03 . This means that in 3% of similar studies, random chance alone would still be able to produce the value of the test statistic that we obtained, or a value even more extreme, even if the drug had no effect at all!

The question "what is p-value" can also be answered as follows: p-value is the smallest level of significance at which the null hypothesis would be rejected. So, if you now want to make a decision on the null hypothesis at some significance level α , just compare your p-value with α :

  • If p-value ≤ α, then you reject the null hypothesis in favor of the alternative hypothesis; and
  • If p-value > α, then you don't have enough evidence to reject the null hypothesis.

Obviously, the fate of the null hypothesis depends on α . For instance, if the p-value was 0.03 , we would reject the null hypothesis at a significance level of 0.05 , but not at a level of 0.01 . That's why the significance level should be stated in advance and not adapted conveniently after the p-value has been established! A significance level of 0.05 is the most common value, but there's nothing magical about it. Here, you can see what too strong a faith in the 0.05 threshold can lead to. It's always best to report the p-value, and allow the reader to make their own conclusions.

Also, bear in mind that subject-area expertise (and common sense) is crucial. Otherwise, by mindlessly applying statistical principles, you can easily arrive at statistically significant results despite the conclusion being 100% untrue.

How to use the p-value calculator to find p-value from test statistic

As our p-value calculator is here at your service, you no longer need to wonder how to find p-value from all those complicated test statistics! Here are the steps you need to follow:

  • Pick the alternative hypothesis: two-tailed, right-tailed, or left-tailed.
  • Tell us the distribution of your test statistic under the null hypothesis: is it N(0,1), t-Student, chi-squared, or Snedecor's F? If you are unsure, check the sections below, as they are devoted to these distributions.
  • If needed, specify the degrees of freedom of the test statistic's distribution.
  • Enter the value of the test statistic computed for your data sample.
  • By default, the calculator uses a significance level of 0.05.

Our calculator determines the p-value from the test statistic and provides the decision to be made about the null hypothesis.

How do I find p-value from z-score?

In terms of the cumulative distribution function (cdf) of the standard normal distribution, which is traditionally denoted by Φ, the p-value is given by the following formulae:

Left-tailed z-test:

p-value = Φ(Z_score)

Right-tailed z-test:

p-value = 1 - Φ(Z_score)

Two-tailed z-test:

p-value = 2 × Φ(-|Z_score|)

or, equivalently: p-value = 2 - 2 × Φ(|Z_score|)

🙋 To learn more about Z-tests, head to Omni's Z-test calculator .

We use the Z-score if the test statistic approximately follows the standard normal distribution N(0,1) . Thanks to the central limit theorem, you can count on the approximation if you have a large sample (say at least 50 data points) and treat your distribution as normal.

A Z-test most often refers to testing the population mean, or the difference between two population means (in particular, between two proportions). You can also find Z-tests in maximum likelihood estimations.
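For example (a sketch with scipy, where norm.cdf plays the role of Φ):

```python
# Two-tailed p-value from a z-score; z = 1.96 gives the familiar ≈ 0.05.
from scipy.stats import norm

z = 1.96
print(2 * norm.cdf(-abs(z)))  # ≈ 0.0500
```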

How do I find p-value from t?

The p-value from the t-score is given by the following formulae, in which cdf_t,d stands for the cumulative distribution function of the t-Student distribution with d degrees of freedom:

Left-tailed t-test:

p-value = cdf_t,d(t_score)

Right-tailed t-test:

p-value = 1 - cdf_t,d(t_score)

Two-tailed t-test:

p-value = 2 × cdf_t,d(-|t_score|)

or, equivalently: p-value = 2 - 2 × cdf_t,d(|t_score|)

Use the t-score option if your test statistic follows the t-Student distribution. This distribution has a shape similar to N(0,1) (bell-shaped and symmetric) but has heavier tails; the exact shape depends on a parameter called the degrees of freedom. If the number of degrees of freedom is large (> 30), which typically happens for large samples, the t-Student distribution is practically indistinguishable from the normal distribution N(0,1).

The most common t-tests are those for population means with an unknown population standard deviation, or for the difference between means of two populations , with either equal or unequal yet unknown population standard deviations. There's also a t-test for paired (dependent) samples .
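A corresponding sketch for the t-distribution (the score and degrees of freedom are hypothetical):

```python
# p-values from a t-score with d degrees of freedom.
from scipy.stats import t

t_score, d = 2.1, 15
print(t.cdf(t_score, d))             # left-tailed
print(t.sf(t_score, d))              # right-tailed: 1 - cdf
print(2 * t.cdf(-abs(t_score), d))   # two-tailed
```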

🙋 To get more insights into t-statistics, we recommend using our t-test calculator .

p-value from chi-square score (χ² score)

Use the χ²-score option when performing a test in which the test statistic follows the χ²-distribution .

This distribution arises if, for example, you take the sum of squared variables, each following the normal distribution N(0,1). Remember to check the number of degrees of freedom of the χ²-distribution of your test statistic!

How do you find the p-value from the χ²-score? You can do it with the help of the following formulae, in which cdf_χ²,d denotes the cumulative distribution function of the χ²-distribution with d degrees of freedom:

Left-tailed χ²-test:

p-value = cdf_χ²,d(χ²_score)

Right-tailed χ²-test:

p-value = 1 - cdf_χ²,d(χ²_score)

Remember that χ²-tests for goodness-of-fit and independence are right-tailed tests! (see below)

Two-tailed χ²-test:

p-value = 2 × min{cdf_χ²,d(χ²_score), 1 - cdf_χ²,d(χ²_score)}

(By min{a, b}, we denote the smaller of the numbers a and b.)

The most popular tests which lead to a χ²-score are the following:

Testing whether the variance of normally distributed data has some pre-determined value. In this case, the test statistic has the χ²-distribution with n - 1 degrees of freedom, where n is the sample size. This can be a one-tailed or two-tailed test .

Goodness-of-fit test checks whether the empirical (sample) distribution agrees with some expected probability distribution. In this case, the test statistic follows the χ²-distribution with k - 1 degrees of freedom, where k is the number of classes into which the sample is divided. This is a right-tailed test .

Independence test is used to determine if there is a statistically significant relationship between two variables. In this case, its test statistic is based on the contingency table and follows the χ²-distribution with (r - 1)(c - 1) degrees of freedom, where r is the number of rows, and c is the number of columns in this contingency table. This also is a right-tailed test .
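A short sketch covering the common right-tailed case (hypothetical score and degrees of freedom):

```python
# Right-tailed p-value from a chi-square score, as used by the
# goodness-of-fit and independence tests described above.
from scipy.stats import chi2

chi2_score, d = 9.4, 4
print(chi2.sf(chi2_score, d))  # 1 - cdf: the right tail
```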

p-value from F-score

Finally, the F-score option should be used when you perform a test in which the test statistic follows the F-distribution , also known as the Fisher–Snedecor distribution. The exact shape of an F-distribution depends on two degrees of freedom .

To see where those degrees of freedom come from, consider the independent random variables X and Y, which follow the χ²-distributions with d1 and d2 degrees of freedom, respectively. In that case, the ratio (X/d1)/(Y/d2) follows the F-distribution with (d1, d2) degrees of freedom. For this reason, the two parameters d1 and d2 are also called the numerator and denominator degrees of freedom.

The p-value from the F-score is given by the following formulae, where we let cdf_F,d1,d2 denote the cumulative distribution function of the F-distribution with (d1, d2) degrees of freedom:

Left-tailed F-test:

p-value = cdf_F,d1,d2(F_score)

Right-tailed F-test:

p-value = 1 - cdf_F,d1,d2(F_score)

Two-tailed F-test:

p-value = 2 × min{cdf_F,d1,d2(F_score), 1 - cdf_F,d1,d2(F_score)}

Below we list the most important tests that produce F-scores. All of them are right-tailed tests .

A test for the equality of variances in two normally distributed populations. Its test statistic follows the F-distribution with (n - 1, m - 1) degrees of freedom, where n and m are the respective sample sizes.

ANOVA is used to test the equality of means in three or more groups that come from normally distributed populations with equal variances. We arrive at the F-distribution with (k - 1, n - k) degrees of freedom, where k is the number of groups, and n is the total sample size (across all groups).

A test for the overall significance of a regression analysis. The test statistic has an F-distribution with (k - 1, n - k) degrees of freedom, where n is the sample size, and k is the number of variables (including the intercept).

With the presence of a linear relationship having been established in your data sample with the above test, you can calculate the coefficient of determination, R², which indicates the strength of this relationship. You can do it by hand or use our coefficient of determination calculator.

A test to compare two nested regression models. The test statistic follows the F-distribution with (k2 - k1, n - k2) degrees of freedom, where k1 and k2 are the numbers of variables in the smaller and bigger models, respectively, and n is the sample size.

You may notice that the F-test of overall significance is a particular case of the F-test for comparing two nested models: it tests whether our model does significantly better than the model with no predictors (i.e., the intercept-only model).
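And a matching sketch for the F-distribution (hypothetical score and degrees of freedom):

```python
# Right-tailed p-value from an F-score with (d1, d2) degrees of freedom,
# e.g. for ANOVA or the overall significance of a regression.
from scipy.stats import f

f_score, d1, d2 = 4.7, 3, 26
print(f.sf(f_score, d1, d2))  # 1 - cdf: the right tail
```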

Can p-value be negative?

No, the p-value cannot be negative. This is because probabilities cannot be negative, and the p-value is the probability of the test statistic satisfying certain conditions.

What does a high p-value mean?

A high p-value means that under the null hypothesis, there's a high probability that for another sample, the test statistic will generate a value at least as extreme as the one observed in the sample you already have. A high p-value doesn't allow you to reject the null hypothesis.

What does a low p-value mean?

A low p-value means that under the null hypothesis, there's little probability that for another sample, the test statistic will generate a value at least as extreme as the one observed for the sample you already have. A low p-value is evidence in favor of the alternative hypothesis – it allows you to reject the null hypothesis.

.delimsizing.size4{font-family:KaTeX_Size4;}.css-63uqft .katex .delimsizing.mult .delim-size1>span{font-family:KaTeX_Size1;}.css-63uqft .katex .delimsizing.mult .delim-size4>span{font-family:KaTeX_Size4;}.css-63uqft .katex .nulldelimiter{display:inline-block;width:0.12em;}.css-63uqft .katex .delimcenter{position:relative;}.css-63uqft .katex .op-symbol{position:relative;}.css-63uqft .katex .op-symbol.small-op{font-family:KaTeX_Size1;}.css-63uqft .katex .op-symbol.large-op{font-family:KaTeX_Size2;}.css-63uqft .katex .op-limits>.vlist-t{text-align:center;}.css-63uqft .katex .accent>.vlist-t{text-align:center;}.css-63uqft .katex .accent .accent-body{position:relative;}.css-63uqft .katex .accent .accent-body:not(.accent-full){width:0;}.css-63uqft .katex .overlay{display:block;}.css-63uqft .katex .mtable .vertical-separator{display:inline-block;min-width:1px;}.css-63uqft .katex .mtable .arraycolsep{display:inline-block;}.css-63uqft .katex .mtable .col-align-c>.vlist-t{text-align:center;}.css-63uqft .katex .mtable .col-align-l>.vlist-t{text-align:left;}.css-63uqft .katex .mtable .col-align-r>.vlist-t{text-align:right;}.css-63uqft .katex .svg-align{text-align:left;}.css-63uqft .katex svg{display:block;position:absolute;width:100%;height:inherit;fill:currentColor;stroke:currentColor;fill-rule:nonzero;fill-opacity:1;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;}.css-63uqft .katex svg path{stroke:none;}.css-63uqft .katex img{border-style:none;min-width:0;min-height:0;max-width:none;max-height:none;}.css-63uqft .katex .stretchy{width:100%;display:block;position:relative;overflow:hidden;}.css-63uqft .katex .stretchy::before,.css-63uqft .katex .stretchy::after{content:'';}.css-63uqft .katex .hide-tail{width:100%;position:relative;overflow:hidden;}.css-63uqft .katex .halfarrow-left{position:absolute;left:0;width:50.2%;overflow:hidden;}.css-63uqft .katex .halfarrow-right{position:absolute;right:0;width:50.2%;overflow:hidden;}.css-63uqft .katex .brace-left{position:absolute;left:0;width:25.1%;overflow:hidden;}.css-63uqft .katex .brace-center{position:absolute;left:25%;width:50%;overflow:hidden;}.css-63uqft .katex .brace-right{position:absolute;right:0;width:25.1%;overflow:hidden;}.css-63uqft .katex .x-arrow-pad{padding:0 0.5em;}.css-63uqft .katex .cd-arrow-pad{padding:0 0.55556em 0 0.27778em;}.css-63uqft .katex .x-arrow,.css-63uqft .katex .mover,.css-63uqft .katex .munder{text-align:center;}.css-63uqft .katex .boxpad{padding:0 0.3em 0 0.3em;}.css-63uqft .katex .fbox,.css-63uqft .katex .fcolorbox{box-sizing:border-box;border:0.04em solid;}.css-63uqft .katex .cancel-pad{padding:0 0.2em 0 0.2em;}.css-63uqft .katex .cancel-lap{margin-left:-0.2em;margin-right:-0.2em;}.css-63uqft .katex .sout{border-bottom-style:solid;border-bottom-width:0.08em;}.css-63uqft .katex .angl{box-sizing:border-box;border-top:0.049em solid;border-right:0.049em solid;margin-right:0.03889em;}.css-63uqft .katex .anglpad{padding:0 0.03889em 0 0.03889em;}.css-63uqft .katex .eqn-num::before{counter-increment:katexEqnNo;content:'(' counter(katexEqnNo) ')';}.css-63uqft .katex .mml-eqn-num::before{counter-increment:mmlEqnNo;content:'(' counter(mmlEqnNo) ')';}.css-63uqft .katex .mtr-glue{width:50%;}.css-63uqft .katex .cd-vert-arrow{display:inline-block;position:relative;}.css-63uqft .katex .cd-label-left{display:inline-block;position:absolute;right:calc(50% + 0.3em);text-align:left;}.css-63uqft .katex 
.cd-label-right{display:inline-block;position:absolute;left:calc(50% + 0.3em);text-align:right;}.css-63uqft .katex-display{display:block;margin:1em 0;text-align:center;}.css-63uqft .katex-display>.katex{display:block;white-space:nowrap;}.css-63uqft .katex-display>.katex>.katex-html{display:block;position:relative;}.css-63uqft .katex-display>.katex>.katex-html>.tag{position:absolute;right:0;}.css-63uqft .katex-display.leqno>.katex>.katex-html>.tag{left:0;right:auto;}.css-63uqft .katex-display.fleqn>.katex{text-align:left;padding-left:2em;}.css-63uqft body{counter-reset:katexEqnNo mmlEqnNo;}.css-63uqft table{width:-webkit-max-content;width:-moz-max-content;width:max-content;}.css-63uqft .tableBlock{max-width:100%;margin-bottom:1rem;overflow-y:scroll;}.css-63uqft .tableBlock thead,.css-63uqft .tableBlock thead th{border-bottom:1px solid #333!important;}.css-63uqft .tableBlock th,.css-63uqft .tableBlock td{padding:10px;text-align:left;}.css-63uqft .tableBlock th{font-weight:bold!important;}.css-63uqft .tableBlock caption{caption-side:bottom;color:#555;font-size:12px;font-style:italic;text-align:center;}.css-63uqft .tableBlock caption>p{margin:0;}.css-63uqft .tableBlock th>p,.css-63uqft .tableBlock td>p{margin:0;}.css-63uqft .tableBlock [data-background-color='aliceblue']{background-color:#f0f8ff;color:#000;}.css-63uqft .tableBlock [data-background-color='black']{background-color:#000;color:#fff;}.css-63uqft .tableBlock [data-background-color='chocolate']{background-color:#d2691e;color:#fff;}.css-63uqft .tableBlock [data-background-color='cornflowerblue']{background-color:#6495ed;color:#fff;}.css-63uqft .tableBlock [data-background-color='crimson']{background-color:#dc143c;color:#fff;}.css-63uqft .tableBlock [data-background-color='darkblue']{background-color:#00008b;color:#fff;}.css-63uqft .tableBlock [data-background-color='darkseagreen']{background-color:#8fbc8f;color:#000;}.css-63uqft .tableBlock [data-background-color='deepskyblue']{background-color:#00bfff;color:#000;}.css-63uqft .tableBlock [data-background-color='gainsboro']{background-color:#dcdcdc;color:#000;}.css-63uqft .tableBlock [data-background-color='grey']{background-color:#808080;color:#fff;}.css-63uqft .tableBlock [data-background-color='lemonchiffon']{background-color:#fffacd;color:#000;}.css-63uqft .tableBlock [data-background-color='lightpink']{background-color:#ffb6c1;color:#000;}.css-63uqft .tableBlock [data-background-color='lightsalmon']{background-color:#ffa07a;color:#000;}.css-63uqft .tableBlock [data-background-color='lightskyblue']{background-color:#87cefa;color:#000;}.css-63uqft .tableBlock [data-background-color='mediumblue']{background-color:#0000cd;color:#fff;}.css-63uqft .tableBlock [data-background-color='omnigrey']{background-color:#f0f0f0;color:#000;}.css-63uqft .tableBlock [data-background-color='white']{background-color:#fff;color:#000;}.css-63uqft .tableBlock [data-text-align='center']{text-align:center;}.css-63uqft .tableBlock [data-text-align='left']{text-align:left;}.css-63uqft .tableBlock [data-text-align='right']{text-align:right;}.css-63uqft .tableBlock [data-vertical-align='bottom']{vertical-align:bottom;}.css-63uqft .tableBlock [data-vertical-align='middle']{vertical-align:middle;}.css-63uqft .tableBlock [data-vertical-align='top']{vertical-align:top;}.css-63uqft .tableBlock__font-size--xxsmall{font-size:10px;}.css-63uqft .tableBlock__font-size--xsmall{font-size:12px;}.css-63uqft .tableBlock__font-size--small{font-size:14px;}.css-63uqft .tableBlock__font-size--large{font-size:18px;}.css-63uqft 
.tableBlock__border--some tbody tr:not(:last-child){border-bottom:1px solid #e2e5e7;}.css-63uqft .tableBlock__border--bordered td,.css-63uqft .tableBlock__border--bordered th{border:1px solid #e2e5e7;}.css-63uqft .tableBlock__border--borderless tbody+tbody,.css-63uqft .tableBlock__border--borderless td,.css-63uqft .tableBlock__border--borderless th,.css-63uqft .tableBlock__border--borderless tr,.css-63uqft .tableBlock__border--borderless thead,.css-63uqft .tableBlock__border--borderless thead th{border:0!important;}.css-63uqft .tableBlock:not(.tableBlock__table-striped) tbody tr{background-color:unset!important;}.css-63uqft .tableBlock__table-striped tbody tr:nth-of-type(odd){background-color:#f9fafc!important;}.css-63uqft .tableBlock__table-compactl th,.css-63uqft .tableBlock__table-compact td{padding:3px!important;}.css-63uqft .tableBlock__full-size{width:100%;}.css-63uqft .textBlock{margin-bottom:16px;}.css-63uqft .textBlock__text-formatting--finePrint{font-size:12px;}.css-63uqft .textBlock__text-infoBox{padding:0.75rem 1.25rem;margin-bottom:1rem;border:1px solid transparent;border-radius:0.25rem;}.css-63uqft .textBlock__text-infoBox p{margin:0;}.css-63uqft .textBlock__text-infoBox--primary{background-color:#cce5ff;border-color:#b8daff;color:#004085;}.css-63uqft .textBlock__text-infoBox--secondary{background-color:#e2e3e5;border-color:#d6d8db;color:#383d41;}.css-63uqft .textBlock__text-infoBox--success{background-color:#d4edda;border-color:#c3e6cb;color:#155724;}.css-63uqft .textBlock__text-infoBox--danger{background-color:#f8d7da;border-color:#f5c6cb;color:#721c24;}.css-63uqft .textBlock__text-infoBox--warning{background-color:#fff3cd;border-color:#ffeeba;color:#856404;}.css-63uqft .textBlock__text-infoBox--info{background-color:#d1ecf1;border-color:#bee5eb;color:#0c5460;}.css-63uqft .textBlock__text-infoBox--dark{background-color:#d6d8d9;border-color:#c6c8ca;color:#1b1e21;}.css-63uqft .text-overline{-webkit-text-decoration:overline;text-decoration:overline;}.css-63uqft.css-63uqft{color:#2B3148;background-color:transparent;font-family:var(--calculator-ui-font-family),Verdana,sans-serif;font-size:20px;line-height:24px;overflow:visible;padding-top:0px;position:relative;}.css-63uqft.css-63uqft:after{content:'';-webkit-transform:scale(0);-moz-transform:scale(0);-ms-transform:scale(0);transform:scale(0);position:absolute;border:2px solid #EA9430;border-radius:2px;inset:-8px;z-index:1;}.css-63uqft .js-external-link-button.link-like,.css-63uqft .js-external-link-anchor{color:inherit;border-radius:1px;-webkit-text-decoration:underline;text-decoration:underline;}.css-63uqft .js-external-link-button.link-like:hover,.css-63uqft .js-external-link-anchor:hover,.css-63uqft .js-external-link-button.link-like:active,.css-63uqft .js-external-link-anchor:active{text-decoration-thickness:2px;text-shadow:1px 0 0;}.css-63uqft .js-external-link-button.link-like:focus-visible,.css-63uqft .js-external-link-anchor:focus-visible{outline:transparent 2px dotted;box-shadow:0 0 0 2px #6314E6;}.css-63uqft p,.css-63uqft div{margin:0;display:block;}.css-63uqft pre{margin:0;display:block;}.css-63uqft pre code{display:block;width:-webkit-fit-content;width:-moz-fit-content;width:fit-content;}.css-63uqft pre:not(:first-child){padding-top:8px;}.css-63uqft ul,.css-63uqft ol{display:block margin:0;padding-left:20px;}.css-63uqft ul li,.css-63uqft ol li{padding-top:8px;}.css-63uqft ul ul,.css-63uqft ol ul,.css-63uqft ul ol,.css-63uqft ol ol{padding-top:0;}.css-63uqft ul:not(:first-child),.css-63uqft 
ol:not(:first-child){padding-top:4px;} Interpretation

Significance level α

Table of Contents

What is p-value , p value vs alpha level, p values and critical values, how is p-value calculated, p-value in hypothesis testing, p-values and statistical significance, reporting p-values, our learners also ask, what is p-value in statistical hypothesis.

What Is the P-Value in Statistical Hypothesis Testing?

Few statistical estimates are as important as the p-value. The p-value, or probability value, is a number, calculated from a statistical test, that describes how likely your results would have been if the null hypothesis were true. By convention, a p-value below 0.05 is considered statistically significant, while a larger value means the data do not provide strong evidence against the null hypothesis, so the result is not statistically significant. So, what is the p-value exactly, and why is it so important?

In statistical hypothesis testing, the p-value, or probability value, can be defined as the probability that a real-valued test statistic is at least as extreme as the value actually obtained. The p-value shows how likely it is that your set of observations could have occurred under the null hypothesis, and it is used to decide whether to reject the null hypothesis. The smaller the p-value, the stronger the evidence that you should reject the null hypothesis.


P-values are expressed as decimals and can be converted into percentage. For example, a p-value of 0.0237 is 2.37%, which means there's a 2.37% chance of your results being random or having happened by chance. The smaller the P-value, the more significant your results are. 

In a hypothesis test, you compare the p-value from your test with the alpha level selected when running the test. Now, let's look at how the p-value and the alpha level differ.

A p-value indicates the probability, computed assuming the null hypothesis is true, of observing an effect at least as extreme as the one actually observed in the sample data.

An alpha level will tell you the probability of wrongly rejecting a true null hypothesis. The level is selected by the researcher and obtained by subtracting your confidence level from 100%. For instance, if you are 95% confident in your research, the alpha level will be 5% (0.05).

When you run the hypothesis test, if you get:

  • A small p value (≤ 0.05): reject the null hypothesis
  • A large p value (> 0.05): do not reject the null hypothesis

In addition to the P-value, you can use other values given by your test to determine if your null hypothesis is true. 

For example, if you run an F-test to compare two variances in Excel, you will obtain a p-value, an f-critical value, and an f-value. Compare the f-value with the f-critical value: if the f-value is larger than the f-critical value, you should reject the null hypothesis.
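Here is what that comparison might look like in Python rather than Excel. This is a minimal sketch; the two samples below are invented purely for illustration:

    # A minimal sketch of an F-test comparing two variances, analogous to
    # Excel's "F-Test Two-Sample for Variances". Sample data are made up.
    import numpy as np
    from scipy import stats

    a = np.array([4.1, 5.2, 6.3, 4.8, 5.9, 5.1, 6.0, 4.5])
    b = np.array([5.0, 5.1, 4.9, 5.2, 5.0, 4.8, 5.1, 5.0])

    f_value = np.var(a, ddof=1) / np.var(b, ddof=1)   # ratio of sample variances
    dfn, dfd = len(a) - 1, len(b) - 1                 # degrees of freedom

    alpha = 0.05
    f_critical = stats.f.ppf(1 - alpha, dfn, dfd)     # upper critical value
    p_value = stats.f.sf(f_value, dfn, dfd)           # right-tail p-value

    print(f"f = {f_value:.3f}, f critical = {f_critical:.3f}, p = {p_value:.4f}")
    # Reject the null hypothesis of equal variances when f > f critical
    # (equivalently, when p < alpha).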

P-Values are usually calculated using p-value tables or spreadsheets, or calculated automatically using statistical software like R, SPSS, etc. 

Depending on the test statistic and degrees of freedom (roughly, the number of observations minus the number of parameters estimated) of your test, you can find out from the tables how frequently you can expect the test statistic to occur under the null hypothesis.
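As a sketch of how this lookup works, the snippet below uses scipy's t distribution in place of a printed table. The test statistic and degrees of freedom are made-up illustration values:

    # Reading a p-value off the null distribution, as one would from a t-table.
    # The t statistic and degrees of freedom below are made-up values.
    from scipy import stats

    t_stat = 2.10   # test statistic from some hypothetical t-test
    df = 28         # degrees of freedom of that test

    p_one_sided = stats.t.sf(abs(t_stat), df)   # area in one tail beyond t_stat
    p_two_sided = 2 * p_one_sided               # both tails, for a two-sided test

    print(f"one-sided p = {p_one_sided:.4f}, two-sided p = {p_two_sided:.4f}")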

How to calculate P-value depends on which statistical test you’re using to test your hypothesis.  

  • Every statistical test uses different assumptions and generates different statistics. Select the test method that best suits your data and matches the effect or relationship being tested.
  • The number of independent variables included in your test determines how big or small the test statistic should be in order to generate the same p-value. 

Regardless of what statistical test you are using, the p-value will always denote the same thing – how frequently you can expect to get a test statistic as extreme or even more extreme than the one given by your test. 

In the p-value approach to hypothesis testing, a calculated probability is used to decide whether there is evidence to reject the null hypothesis, also known as the conjecture. The conjecture is the initial claim about a data population, while the alternative hypothesis states that the observed population parameter differs from the value specified by the conjecture.

Effectively, the significance level is declared in advance to determine how small the p-value needs to be for the null hypothesis to be rejected. Levels of significance vary from one researcher to another, so it can be difficult for readers to compare results from two different tests. That is where the p-value makes things easier.

Readers could interpret the statistical significance by referring to the reported P-value of the hypothesis test. This is known as the P-value approach to hypothesis testing. Using this, readers could decide for themselves whether the p value represents a statistically significant difference.  

The level of statistical significance is usually represented as a P-value between 0 and 1. The smaller the p-value, the more likely it is that you would reject the null hypothesis. 

  • A p-value ≤ 0.05 is considered statistically significant. It denotes strong evidence against the null hypothesis, since, if the null hypothesis were true, there would be less than a 5% chance of seeing results at least this extreme. So, we reject the null hypothesis in favor of the alternative hypothesis.
  • Even when the p-value is below your threshold of significance, so the null hypothesis can be rejected, this does not mean there is a 95% probability of the alternative hypothesis being true.
  • A p-value > 0.05 is not statistically significant. It means there is not enough evidence against the null hypothesis, so we fail to reject it. Note that we never accept the null hypothesis; we can only reject it or fail to reject it.

A statistically significant result does not prove a research hypothesis to be correct. Instead, it provides support or evidence for the hypothesis.

  • Report exact p-values to two or three decimal places.
  • For p-values less than .001, report them as p < .001.
  • Do not use a 0 before the decimal point, since p cannot be greater than 1: write p = .001, not p = 0.001.
  • Make sure p is always italicized and that there is a space on either side of the = sign.
  • It is impossible to obtain p = .000; report such values as p < .001. (A small formatting helper follows this list.)
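Here is one way such a helper might look in Python. The function name and the three-decimal rounding are our own choices for illustration, not a standard:

    # A small helper that applies the reporting rules above. The function name
    # and rounding choice (three decimals) are our own.
    def format_p(p: float) -> str:
        if p < 0.001:
            return "p < .001"
        # Drop the leading zero, since p cannot be greater than 1.
        return f"p = {p:.3f}".replace("0.", ".", 1)

    print(format_p(0.0237))   # p = .024
    print(format_p(0.0002))   # p < .001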

Suppose an investor claims that the performance of their investment portfolio is equivalent to that of the Standard & Poor's (S&P) 500 Index, and performs a two-tailed test to check this claim.

The null hypothesis here says that the portfolio’s returns are equivalent to the returns of S&P 500, while the alternative hypothesis says that the returns of the portfolio and the returns of the S&P 500 are not equivalent.  

The p-value hypothesis test gives a measure of how much evidence there is to reject the null hypothesis. The smaller the p-value, the stronger the evidence against the null hypothesis.

Therefore, if the investor gets a p-value of .001, it indicates strong evidence against the null hypothesis, and the investor can confidently conclude that the portfolio's returns and the S&P 500's returns are not equivalent.
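A sketch of how the investor's two-tailed test might be run in Python, using invented monthly returns; a paired t-test is chosen here because both series cover the same months:

    # A sketch of the investor's two-tailed test using made-up monthly returns.
    # A paired t-test is used because both series cover the same months.
    import numpy as np
    from scipy import stats

    portfolio = np.array([0.021, -0.013, 0.034, 0.008, -0.002, 0.027,
                          0.015, -0.009, 0.030, 0.011, 0.004, 0.019])
    sp500     = np.array([0.012, -0.010, 0.022, 0.005, -0.004, 0.018,
                          0.009, -0.012, 0.021, 0.007, 0.001, 0.013])

    # H0: the mean difference in returns is zero (performance is equivalent)
    t_stat, p_value = stats.ttest_rel(portfolio, sp500)
    print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")

    if p_value < 0.05:
        print("Reject H0: the returns are not equivalent.")
    else:
        print("Fail to reject H0: no evidence the returns differ.")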

1. What does P-value mean?

P-Value or probability value is a number that denotes the likelihood of your data having occurred under the null hypothesis of your statistical test. 

2. What does p < 0.05 mean?

A P-value less than 0.05 is deemed to be statistically significant, meaning the null hypothesis should be rejected in such a case. A P-Value greater than 0.05 is not considered to be statistically significant, meaning the null hypothesis should not be rejected. 

3. What is P-value and how is it calculated?

The p-value or probability value is a number, calculated from a statistical test, that tells how likely it is that your results would have occurred under the null hypothesis of the test.  

P-values are usually calculated automatically by statistical software. They can also be calculated using p-value tables for the relevant statistical test. P-values are calculated based on the null distribution of the test statistic. If the test statistic falls far from the mean of the null distribution, the p-value is small, indicating that the test statistic is unlikely to have occurred under the null hypothesis.

4. What is p-value in research?

P values are used in hypothesis testing to help determine whether the null hypothesis should be rejected. They play a major role when the results of research are discussed. Hypothesis testing is a statistical methodology frequently used in medical and clinical research studies.

5. Why is the p-value significant?

Statistical significance is a term that researchers use to say that it is not likely that their observations could have occurred if the null hypothesis were true. The level of statistical significance is usually represented as a P-value or probability value between 0 and 1. The smaller the p-value, the more likely it is that you would reject the null hypothesis. 

6. What is null hypothesis and what is p-value?

A null hypothesis is a kind of statistical hypothesis that suggests that there is no statistical significance in a set of given observations. It says there is no relationship between your variables.   

P-value or probability value is a number, calculated from a statistical test, that tells how likely it is that your results would have occurred under the null hypothesis of the test.   

The p-value is used to determine the significance of observational data. Whenever researchers notice an apparent relation between two variables, a p-value calculation helps ascertain whether the observed relationship arose by chance.


A probability measure of finding the observed, or more extreme results, when the null hypothesis of a given statistical test is true

What is the P-value?

In statistical hypothesis testing, the p-value (probability value) is a probability measure of finding the observed, or more extreme, results, when the null hypothesis of a given statistical test is true. The p-value is a primary value used to quantify the statistical significance of the results of a hypothesis test .


The main interpretation of the p-value is whether there’s enough evidence to reject the null hypothesis. If the p-value is reasonably low (less than the level of significance ), we can state that there is enough evidence to reject the null hypothesis. Otherwise, we should not reject the null hypothesis.

The conclusions about the hypothesis test are drawn when the p-value of a test is compared against the level of significance, which plays the role of a benchmark. The most typical levels of significance are 0.10, 0.05, and 0.01. The level of significance of 0.05 is considered conventional and the most commonly used.

How to Use P-value in Hypothesis Testing?

In order to use the p-value in hypothesis testing, follow the steps below:

  • Determine your level of significance (α). The level of significance generally should be chosen during the first steps of the design of a hypothesis test. The most common levels of significance include 0.10, 0.05, and 0.01.
  • Calculate the p-value. There are numerous software applications that offer the calculation. For instance, Microsoft Excel allows the calculation of the p-value using the Data Analysis ToolPak.
  • Compare the obtained p-value with the level of significance (α) and draw the relevant conclusions. The general rule here is that if the figure is less than the level of significance, there is sufficient evidence to reject the null hypothesis of an experiment. (A minimal code sketch of these steps follows this list.)
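The sketch below walks through the three steps end to end, using a one-sample t-test in Python in place of Excel's Data Analysis ToolPak. The sample data and hypothesized mean are invented:

    # The three steps above, end to end, with a one-sample t-test in Python
    # standing in for Excel's Data Analysis ToolPak. The data are made up.
    from scipy import stats

    # Step 1: choose the level of significance before looking at results.
    alpha = 0.05

    # Step 2: calculate the p-value. H0: the population mean is 50.
    sample = [52.1, 49.8, 53.4, 51.2, 50.9, 52.8, 48.7, 51.5]
    t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

    # Step 3: compare p to alpha and draw a conclusion.
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
    print("Reject H0" if p_value < alpha else "Fail to reject H0")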

The degree of statistical significance reported generally depends on how small the p-value is. For example, a p-value that is less than 0.05 is considered statistically significant, while a figure that is less than 0.01 is viewed as highly statistically significant.

Misinterpretations of the P-value

In statistics , the p-value can be truly considered as one of the most commonly misinterpreted concepts. The biggest misconception about the concept is that it is a probability that the null hypothesis is true (or it is a probability that the alternative hypothesis is false).

In reality, the p-value does not determine the probability of the null hypothesis being true but merely indicates the probability of encountering the results of a study at least as extreme as the actually observed results if the null hypothesis is true. In other words, it measures how surprising the observed data would be under the null hypothesis, which is then used to decide whether to reject it.
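One way to see this distinction is by simulation: when the null hypothesis really is true, p-values come out uniformly distributed, and about 5% of tests are still "significant" at the 0.05 level, so a small p is clearly not the probability that the null is true. The sketch below assumes a two-sample t-test; all numbers are arbitrary:

    # Simulation: when the null hypothesis is TRUE, p-values are uniformly
    # distributed, and about 5% of tests still come out "significant".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    p_values = []
    for _ in range(10_000):
        # Two samples drawn from the SAME distribution, so H0 really holds.
        a = rng.normal(loc=0, scale=1, size=30)
        b = rng.normal(loc=0, scale=1, size=30)
        p_values.append(stats.ttest_ind(a, b).pvalue)

    p_values = np.array(p_values)
    print(f"fraction with p < 0.05: {np.mean(p_values < 0.05):.3f}")  # ~0.05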



An Easy Introduction to Statistical Significance (With Examples)

Published on January 7, 2021 by Pritha Bhandari . Revised on June 22, 2023.

If a result is statistically significant , that means it’s unlikely to be explained solely by chance or random factors. In other words, a statistically significant result has a very low chance of occurring if there were no true effect in a research study.

The p value , or probability value, tells you the statistical significance of a finding. In most studies, a p value of 0.05 or less is considered statistically significant, but this threshold can also be set higher or lower.


In quantitative research , data are analyzed through null hypothesis significance testing, or hypothesis testing. This is a formal procedure for assessing whether a relationship between variables or a difference between groups is statistically significant.

Null and alternative hypotheses

To begin, research predictions are rephrased into two main hypotheses: the null and alternative hypothesis .

  • A null hypothesis (H0) always predicts no true effect, no relationship between variables, or no difference between groups.
  • An alternative hypothesis (Ha or H1) states your main prediction of a true effect, a relationship between variables, or a difference between groups.

Hypothesis testing always starts with the assumption that the null hypothesis is true. Using this procedure, you can assess the likelihood (probability) of obtaining your results under this assumption. Based on the outcome of the test, you can reject or retain the null hypothesis.

  • H0: There is no difference in happiness between actively smiling and not smiling.
  • Ha: Actively smiling leads to more happiness than not smiling.

Test statistics and p values

Every statistical test produces:

  • A test statistic that indicates how closely your data match the null hypothesis.
  • A corresponding p value that tells you the probability of obtaining this result if the null hypothesis is true.

The p value determines statistical significance. An extremely low p value indicates high statistical significance, while a high p value means low or no statistical significance.

Suppose you collect happiness ratings from a group told to smile actively and from a control group told not to smile. You then perform a t test to see whether actively smiling leads to more happiness. Using the difference in average happiness between the two groups, you calculate the following (a code sketch appears after this list):

  • a t value (the test statistic) that tells you how much the sample data differs from the null hypothesis,
  • a p value showing the likelihood of finding this result if the null hypothesis is true.
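Here is a minimal sketch of that t test; the happiness ratings are invented, since the example provides no data:

    # A sketch of the smiling experiment's t test with made-up happiness
    # ratings for the two groups.
    from scipy import stats

    smiling     = [7.2, 6.8, 7.9, 8.1, 6.5, 7.4, 7.7, 6.9]   # happiness ratings
    not_smiling = [6.1, 6.7, 5.9, 7.0, 6.2, 6.4, 5.8, 6.6]

    t_value, p_value = stats.ttest_ind(smiling, not_smiling)
    print(f"t = {t_value:.3f}, p = {p_value:.4f}")
    # A small p would be evidence against H0 (no difference in happiness).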


The significance level, or alpha (α), is a value that the researcher sets in advance as the threshold for statistical significance. It is the maximum risk of making a false positive conclusion (Type I error) that you are willing to accept.

In a hypothesis test, the  p value is compared to the significance level to decide whether to reject the null hypothesis.

  • If the p value is  higher than the significance level, the null hypothesis is not refuted, and the results are not statistically significant .
  • If the p value is lower than the significance level, the results are interpreted as refuting the null hypothesis and reported as statistically significant .

Usually, the significance level is set to 0.05 or 5%. That means your results must have a 5% or lower chance of occurring under the null hypothesis to be considered statistically significant.

The significance level can be lowered for a more conservative test. That means an effect has to be larger to be considered statistically significant.

The significance level may also be set higher for significance testing in non-academic marketing or business contexts. This makes the study less rigorous and increases the probability of finding a statistically significant result.

As best practice, you should set a significance level before you begin your study. Otherwise, you can easily manipulate your results to match your research predictions.

It’s important to note that hypothesis testing can only show you whether or not to reject the null hypothesis in favor of the alternative hypothesis. It can never “prove” the null hypothesis, because the lack of a statistically significant effect doesn’t mean that absolutely no effect exists.

When reporting statistical significance, include relevant descriptive statistics about your data (e.g., means and standard deviations ) as well as the test statistic and p value.

There are various critiques of the concept of statistical significance and how it is used in research.

Researchers classify results as statistically significant or non-significant using a conventional threshold that lacks any theoretical or practical basis. This means that even a tiny 0.001 decrease in a p value can convert a research finding from statistically non-significant to significant with almost no real change in the effect.

On its own, statistical significance may also be misleading because it's affected by sample size. In extremely large samples, you're more likely to obtain statistically significant results, even if the effect is actually small or negligible in the real world. This means that small effects are often exaggerated if they meet the significance threshold, while interesting results are ignored when they fall short of meeting the threshold.
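A quick illustration of this sample-size effect: the calculation below holds a tiny effect (0.02 standard deviations) fixed and lets only n grow; the numbers are arbitrary:

    # How sample size alone can drive statistical significance: the same tiny
    # effect (0.02 SD) goes from "non-significant" to "significant" as n grows.
    from math import sqrt
    from scipy import stats

    effect = 0.02  # difference between true mean and null value, in SD units
    for n in [100, 10_000, 1_000_000]:
        z = effect * sqrt(n)              # z statistic for a one-sample z-test
        p = 2 * stats.norm.sf(abs(z))     # two-sided p-value
        print(f"n = {n:>9,}: z = {z:5.2f}, p = {p:.4f}")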

The strong emphasis on statistical significance has led to a serious publication bias and replication crisis in the social sciences and medicine over the last few decades. Results are usually only published in academic journals if they show statistically significant results—but statistically significant results often can’t be reproduced in high quality replication studies.

As a result, many scientists call for retiring statistical significance as a decision-making tool in favor of more nuanced approaches to interpreting results.

That’s why APA guidelines advise reporting not only p values but also  effect sizes and confidence intervals wherever possible to show the real world implications of a research outcome.

Aside from statistical significance, clinical significance and practical significance are also important research outcomes.

Practical significance shows you whether the research outcome is important enough to be meaningful in the real world. It’s indicated by the effect size of the study.

Clinical significance is relevant for intervention and treatment studies. A treatment is considered clinically significant when it tangibly or substantially improves the lives of patients.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Descriptive statistics
  • Measures of central tendency
  • Correlation coefficient

Methodology

  • Cluster sampling
  • Stratified sampling
  • Types of interviews
  • Cohort study
  • Thematic analysis

Research bias

  • Implicit bias
  • Cognitive bias
  • Survivorship bias
  • Availability heuristic
  • Nonresponse bias
  • Regression to the mean

Statistical significance is a term used by researchers to state that it is unlikely their observations could have occurred under the null hypothesis of a statistical test . Significance is usually denoted by a p -value , or probability value.

Statistical significance is arbitrary – it depends on the threshold, or alpha value, chosen by the researcher. The most common threshold is p < 0.05, which means that the data is likely to occur less than 5% of the time under the null hypothesis .

When the p -value falls below the chosen alpha value, then we say the result of the test is statistically significant.

A p -value , or probability value, is a number describing how likely it is that your data would have occurred under the null hypothesis of your statistical test .

P -values are usually automatically calculated by the program you use to perform your statistical test. They can also be estimated using p -value tables for the relevant test statistic .

P -values are calculated from the null distribution of the test statistic. They tell you how often a test statistic is expected to occur under the null hypothesis of the statistical test, based on where it falls in the null distribution.

If the test statistic is far from the mean of the null distribution, then the p -value will be small, showing that the test statistic is not likely to have occurred under the null hypothesis.

No. The p -value only tells you how likely the data you have observed is to have occurred under the null hypothesis .

If the p -value is below your threshold of significance (typically p < 0.05), then you can reject the null hypothesis, but this does not necessarily mean that your alternative hypothesis is true.



5.6 Hypothesis Tests in Depth

Establishing the parameter of interest, type of distribution to use, the test statistic, and p -value can help you figure out how to go about a hypothesis test. However, there are several other factors you should consider when interpreting the results.

Rare Events

Suppose you make an assumption about a property of the population (this assumption is the null hypothesis). Then you gather sample data randomly. If the sample has properties that would be very unlikely to occur if the assumption is true, then you would conclude that your assumption about the population is probably incorrect. Remember that your assumption is just an assumption; it is not a fact, and it may or may not be true. But your sample data are real and are showing you a fact that seems to contradict your assumption.


Errors in Hypothesis Tests

When you perform a hypothesis test, there are four possible outcomes depending on the actual truth (or falseness) of the null hypothesis H0 and the decision to reject or not. The outcomes are summarized in the following table:

Figure 5.14: Type I and type II errors

  Action                 H0 is actually true     H0 is actually false
  Do not reject H0       Correct outcome         Type II error
  Reject H0              Type I error            Correct outcome

The four possible outcomes in the table are:

  • The decision is not to reject H0 when H0 is true (correct decision).
  • The decision is to reject H0 when H0 is true (incorrect decision known as a type I error).
  • The decision is not to reject H0 when, in fact, H0 is false (incorrect decision known as a type II error).
  • The decision is to reject H0 when H0 is false (correct decision whose probability is called the power of the test).

Each of the errors occurs with a particular probability. The Greek letters α and β represent the probabilities.

α = probability of a type I error = P (type I error) = probability of rejecting the null hypothesis when the null hypothesis is true. These are also known as false positives. α is usually determined in advance, and α = 0.05 is widely accepted. In that case, you are saying, "We are OK making this type of error in 5% of samples." The p-value, in turn, is the smallest value of α at which the observed data would lead you to reject the null hypothesis.

β = probability of a type II error = P (type II error) = probability of not rejecting the null hypothesis when the null hypothesis is false. These are also known as false negatives.

The power of a test is 1 – β .

Ideally, α and β should be as small as possible because they are probabilities of errors, but they are rarely zero. We also want the power to be as close to one as possible. For a fixed α, increasing the sample size reduces β and therefore increases the power of the test.
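A simulation can make this concrete: holding α fixed and increasing the sample size raises the estimated power (1 − β). The effect size below is an arbitrary illustration value:

    # Simulation: for a fixed alpha, a larger sample reduces beta (the type II
    # error rate) and so raises power. The true effect size (0.4 SD) is made up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha, true_effect, trials = 0.05, 0.4, 2_000

    for n in [20, 50, 100]:
        rejections = 0
        for _ in range(trials):
            a = rng.normal(0.0, 1.0, size=n)          # control group
            b = rng.normal(true_effect, 1.0, size=n)  # treatment group
            if stats.ttest_ind(a, b).pvalue < alpha:
                rejections += 1
        power = rejections / trials                   # estimate of 1 - beta
        print(f"n = {n:3d} per group: estimated power = {power:.2f}")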

Suppose the null hypothesis, H0, is that Frank's rock climbing equipment is safe.

Type I error: Frank thinks that his rock climbing equipment may not be safe when, in fact, it really is safe. Type II error: Frank thinks that his rock climbing equipment may be safe when, in fact, it is not safe.

α = probability that Frank thinks his rock climbing equipment may not be safe when, in fact, it really is safe. β = probability that Frank thinks his rock climbing equipment may be safe when, in fact, it is not safe.

Notice that, in this case, the error with the greater consequence is the type II error, in which Frank thinks his rock climbing equipment is safe, so he goes ahead and uses it.

Suppose the null hypothesis, H0, is that the blood cultures contain no traces of pathogen X. State the type I and type II errors.

Statistical Significance vs. Practical Significance

When the sample size becomes larger, point estimates become more precise and any real differences in the mean and null value become easier to detect and recognize. Even a very small difference would likely be detected if we took a large enough sample. Sometimes, researchers will take such large samples that even the slightest difference is detected, even differences where there is no practical value. In such cases, we still say the difference is statistically significant , but it is not practically significant.

For example, an online experiment might identify that placing additional ads on a movie review website statistically significantly increases viewership of a TV show by 0.001%, but this increase might not have any practical value.

One role of a data scientist in conducting a study often includes planning the size of the study. The data scientist might first consult experts or scientific literature to learn what would be the smallest meaningful difference from the null value. She also would obtain other information, such as a very rough estimate of the true proportion p, so that she could roughly estimate the standard error. From here, she could suggest a sample size that is sufficiently large to detect the real difference if it is meaningful. While larger sample sizes may still be used, these calculations are especially helpful when considering costs or potential risks, such as possible health impacts to volunteers in a medical study.


Key terms:

  • Type I error: the decision to reject the null hypothesis when, in fact, the null hypothesis is true.
  • Error (in hypothesis testing): erroneously rejecting a true null hypothesis or erroneously failing to reject a false null hypothesis.
  • β: the probability of failing to reject a false null hypothesis.
  • Statistical significance: finding sufficient evidence that the observed effect is not just due to variability, often from rejecting the null hypothesis.

Significant Statistics Copyright © 2024 by John Morgan Russell, OpenStaxCollege, OpenIntro is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.



What is the meaning of p values and t values in statistical tests?

After taking a statistics course and then trying to help fellow students, I noticed one subject that inspires much head-desk banging is interpreting the results of statistical hypothesis tests. It seems that students easily learn how to perform the calculations required by a given test but get hung up on interpreting the results. Many computerized tools report test results in terms of "p values" or "t values".

How would you explain the following points to college students taking their first course in statistics:

What does a "p-value" mean in relation to the hypothesis being tested? Are there cases when one should be looking for a high p-value or a low p-value?

What is the relationship between a p-value and a t-value?

  • hypothesis-testing
  • interpretation


  • "A fair bit of this is basically covered by the first sentence of the wikipedia article on p values, which correctly defines a p-value. If that's understood, much is made clear." –  Glen_b (May 16, 2013)
  • "Just get the book: Statistics without Tears. It might save your sanity!!" –  user48700 (Jun 20, 2014)
  • "@user48700 Could you summarize how Statistics Without Tears explains this?" –  Matt Krause (Jun 20, 2014)
  • "Someone should draw a graph of p-value related questions over time and I bet we'll see the seasonality and correlation to academic calendars in colleges or Coursera data science classes" –  Aksakal (Dec 29, 2014)
  • "In addition to other nice and relevant book recommendations in the answers and comments, I would like to suggest another book, appropriately called 'What is a p-value anyway?'" –  Aleksandr Blekh (Dec 29, 2014)

15 Answers

Understanding the $p$-value

Suppose that you want to test the hypothesis that the average height of male students at your University is $5$ ft $7$ inches. You collect heights of $100$ students selected at random and compute the sample mean (say it turns out to be $5$ ft $9$ inches). Using an appropriate formula/statistical routine you compute the $p$-value for your hypothesis and say it turns out to be $0.06$.

In order to interpret $p=0.06$ appropriately, we should keep several things in mind:

The first step under classical hypothesis testing is the assumption that the hypothesis under consideration is true. (In our context, we assume that the true average height is $5$ ft $7$ inches.)

Imagine doing the following calculation: Compute the probability that the sample mean is greater than $5$ ft $9$ inches assuming that our hypothesis is in fact correct (see point 1).

In other words, we want to know $$\mathrm{P}(\mathrm{Sample\: mean} \ge 5 \:\mathrm{ft} \:9 \:\mathrm{inches} \:|\: \mathrm{True\: value} = 5 \:\mathrm{ft}\: 7\: \mathrm{inches}).$$

The calculation in step 2 is what is called the $p$-value. Therefore, a $p$-value of $0.06$ would mean that if we were to repeat our experiment many, many times (each time we select $100$ students at random and compute the sample mean) then $6$ times out of $100$ we can expect to see a sample mean greater than or equal to $5$ ft $9$ inches.
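In code, the calculation in step 2 might look like the sketch below. Note that the answer never states the population standard deviation, so the value used here is invented purely for illustration, and the printed p-value will not match the $0.06$ in the story:

    # The calculation in step 2: P(sample mean >= 5'9" | mu = 5'7").
    # The population SD below is ASSUMED for illustration only -- the answer
    # never gives it, so the printed p will not equal the 0.06 above.
    from math import sqrt
    from scipy import stats

    mu0   = 67.0   # hypothesized mean height, inches (5 ft 7 in)
    xbar  = 69.0   # observed sample mean, inches (5 ft 9 in)
    sigma = 8.0    # assumed population SD, inches (not from the answer)
    n     = 100    # sample size

    se = sigma / sqrt(n)                        # standard error of the mean
    p = stats.norm.sf(xbar, loc=mu0, scale=se)  # P(sample mean >= 69 under H0)
    print(f"p = {p:.4f}")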

Given the above understanding, should we still retain our assumption that our hypothesis is true (see step 1)? Well, a $p$-value of $0.06$ indicates that one of two things has happened:

  • (A) Either our hypothesis is correct and an extremely unlikely event has occurred (e.g., all $100$ students are student athletes)
  • (B) Our assumption is incorrect and the sample we have obtained is not that unusual.

The traditional way to choose between (A) and (B) is to choose an arbitrary cut-off for $p$ . We choose (A) if $p > 0.05$ and (B) if $p < 0.05$ .


  • "Take your time! I won't be thinking about selecting a 'Best Answer' for a week or so." –  Sharpie (Jul 19, 2010)
  • "Now that I've had a chance to come back and read the whole answer - a big +1 for the student height example. Very clear and well laid out." –  Sharpie (Jul 20, 2010)
  • "Nice work ... but we need to add (C) our model (embodied in the formula/statistical routine) is wrong." –  Andrew Robinson (Jul 20, 2010)
  • "A t-value (or any other test statistic) is mostly an intermediate step. It's basically some statistic that was proven, under some assumptions, to have a well-known distribution. Since we know the distribution of the test statistic under the null, we can then use standard tables (today mostly software) to derive a p-value." –  Gala (May 16, 2013)
  • "The dichotomy at the end is just wrong. See stats.stackexchange.com/a/510941/103407" –  innisfree (Feb 24, 2021)

A Dialog Between a Teacher and a Thoughtful Student

Humbly submitted in the belief that not enough crayons have been used so far in this thread. A brief illustrated synopsis appears at the end. A practical, real-world example is worked out (with code) at https://stats.stackexchange.com/a/131489/919 .

Student : What does a p-value mean? A lot of people seem to agree it's the chance we will "see a sample mean greater than or equal to" a statistic or it's "the probability of observing this outcome ... given the null hypothesis is true" or where "my sample's statistic fell on [a simulated] distribution" and even "the probability of observing a test statistic at least as large as the one calculated assuming the null hypothesis is true" .

Teacher : Properly understood, all those statements are correct in many circumstances.

Student : I don't see how most of them are relevant. Didn't you teach us that we have to state a null hypothesis $H_0$ and an alternative hypothesis $H_A$ ? How are they involved in these ideas of "greater than or equal to" or "at least as large" or the very popular "more extreme"?

Teacher : Because it can seem complicated in general, would it help for us to explore a concrete example?

Student : Sure. But please make it a realistic but simple one if you can.

Teacher : This theory of hypothesis testing historically began with the need of astronomers to analyze observational errors, so how about starting there. I was going through some old documents one day where a scientist described his efforts to reduce the measurement error in his apparatus. He had taken a lot of measurements of a star in a known position and recorded their displacements ahead of or behind that position. To visualize those displacements, he drew a histogram that--when smoothed a little--looked like this one.

Figure 1: Histogram of displacements

Student : I remember how histograms work: the vertical axis is labeled "Density" to remind me that the relative frequencies of the measurements are represented by area rather than height.

Teacher : That's right. An "unusual" or "extreme" value would be located in a region with pretty small area. Here's a crayon. Do you think you could color in a region whose area is just one-tenth the total?

Student : Sure; that's easy. [Colors in the figure.]

Figure 2: The student's first coloring.

Teacher : Very good! That looks like about 10% of the area to me. Remember, though, that the only areas in the histogram that matter are those between vertical lines: they represent the chance or probability that the displacement would be located between those lines on the horizontal axis. That means you needed to color all the way down to the bottom and that would be over half the area, wouldn't it?

Student : Oh, I see. Let me try again. I'm going to want to color in where the curve is really low, won't I? It's lowest at the two ends. Do I have to color in just one area or would it be ok to break it into several parts?

Teacher : Using several parts is a smart idea. Where would they be?

Student (pointing): Here and here. Because this crayon isn't very sharp, I used a pen to show you the lines I'm using.

Figure 3: The student's second coloring

Teacher : Very nice! Let me tell you the rest of the story. The scientist made some improvements to his device and then he took additional measurements. He wrote that the displacement of the first one was only $0.1$ , which he thought was a good sign, but being a careful scientist he proceeded to take more measurements as a check. Unfortunately, those other measurements are lost--the manuscript breaks off at this point--and all we have is that single number, $0.1$ .

Student : That's too bad. But isn't that much better than the wide spread of displacements in your figure?

Teacher : That's the question I would like you to answer. To start with, what should we posit as $H_0$ ?

Student : Well, a sceptic would wonder whether the improvements made to the device had any effect at all. The burden of proof is on the scientist: he would want to show that the sceptic is wrong. That makes me think the null hypothesis is kind of bad for the scientist: it says that all the new measurements--including the value of $0.1$ we know about--ought to behave as described by the first histogram. Or maybe even worse than that: they might be even more spread out.

Teacher : Go on, you're doing well.

Student : And so the alternative is that the new measurements would be less spread out, right?

Teacher : Very good! Could you draw me a picture of what a histogram with less spread would look like? Here's another copy of the first histogram; you can draw on top of it as a reference.

Student (drawing): I'm using a pen to outline the new histogram and I'm coloring in the area beneath it. I have made it so most of the curve is close to zero on the horizontal axis and so most of its area is near a (horizontal) value of zero: that's what it means to be less spread out or more precise.

Figure 4: The student's new histogram

Teacher : That's a good start. But remember that a histogram showing chances should have a total area of $1$ . The total area of the first histogram therefore is $1$ . How much area is inside your new histogram?

Student : Less than half, I think. I see that's a problem, but I don't know how to fix it. What should I do?

Teacher : The trick is to make the new histogram higher than the old so that its total area is $1$ . Here, I'll show you a computer-generated version to illustrate.

Figure 5: The teacher's new histogram

Student : I see: you stretched it out vertically so its shape didn't really change but now the red area and gray area (including the part under the red) are the same amounts.

Teacher : Right. You are looking at a picture of the null hypothesis (in blue, spread out) and part of the alternative hypothesis (in red, with less spread).

Student : What do you mean by "part" of the alternative? Isn't it just the alternative hypothesis?

Teacher : Statisticians and grammar don't seem to mix. :-) Seriously, what they mean by a "hypothesis" usually is a whole big set of possibilities. Here, the alternative (as you stated so well before) is that the measurements are "less spread out" than before. But how much less ? There are many possibilities. Here, let me show you another. I drew it with yellow dashes. It's in between the previous two.

Figure 6: The null along with two elements of the alternative

Student : I see: you can have different amounts of spread but you don't know in advance how much the spread will really be. But why did you make the funny shading in this picture?

Teacher : I wanted to highlight where and how the histograms differ. I shaded them in gray where the alternative histograms are lower than the null and in red where the alternatives are higher .

Student : Why would that matter?

Teacher : Do you remember how you colored the first histogram in both the tails? [Looking through the papers.] Ah, here it is. Let's color this picture in the same way.

Figure 7: The null and alternative, colored.

Student : I remember: those are the extreme values. I found the places where the null density was as small as possible and colored in 10% of the area there.

Teacher : Tell me about the alternatives in those extreme areas.

Student : It's hard to see, because the crayon covered it up, but it looks like there's almost no chance for any alternative to be in the areas I colored. Their histograms are right down against the value axis and there's no room for any area beneath them.

Teacher : Let's continue that thought. If I told you, hypothetically, that a measurement had a displacement of $-2$ , and asked you to pick which of these three histograms was the one it most likely came from, which would it be?

Student : The first one--the blue one. It's the most spread out and it's the only one where $-2$ seems to have any chance of occurring.

Teacher : And what about the value of $0.1$ in the manuscript?

Student : Hmmm... that's a different story. All three histograms are pretty high above the ground at $0.1$ .

Teacher : OK, fair enough. But suppose I told you the value was somewhere near $0.1$ , like between $0$ and $0.2$ . Does that help you read some probabilities off of these graphs?

Student : Sure, because I can use areas. I just have to estimate the areas underneath each curve between $0$ and $0.2$ . But that looks pretty hard.

Teacher : You don't need to go that far. Can you just tell which area is the largest?

Student : The one beneath the tallest curve, of course. All three areas have the same base, so the taller the curve, the more area there is beneath it and the base. That means the tallest histogram--the one I drew, with the red dashes--is the likeliest one for a displacement of $0.1$ . I think I see where you're going with this, but I'm a little concerned: don't I have to look at all the histograms for all the alternatives, not just the one or two shown here? How could I possibly do that?

Teacher : You're good at picking up patterns, so tell me: as the measurement apparatus is made more and more precise, what happens to its histogram?

Student : It gets narrower--oh, and it has to get taller, too, so its total area stays the same. That makes it pretty hard to compare the histograms. The alternative ones are all higher than the null right at $0$ , that's obvious. But at other values sometimes the alternatives are higher and sometimes they are lower! For example, [pointing at a value near $3/4$ ], right here my red histogram is the lowest, the yellow histogram is the highest, and the original null histogram is between them. But over on the right the null is the highest.

Teacher : In general, comparing histograms is a complicated business. To help us do it, I have asked the computer to make another plot: it has divided each of the alternative histogram heights (or "densities") by the null histogram height, creating values known as "likelihood ratios." As a result, a value greater than $1$ means the alternative is more likely, while a value less than $1$ means the alternative is less likely. It has drawn yet one more alternative: it's more spread out than the other two, but still less spread out than the original apparatus was.

Figure 8: Likelihood ratios

Teacher (continuing): Could you show me where the alternatives tend to be more likely than the null?

Student (coloring): Here in the middle, obviously. And because these are not histograms anymore, I guess we should be looking at heights rather than areas, so I'm just marking a range of values on the horizontal axis. But how do I know how much of the middle to color in? Where do I stop coloring?

Figure 9: Marked-up likelihood ratio plots

Teacher : There's no firm rule. It all depends on how we plan to use our conclusions and how fierce the sceptics are. But sit back and think about what you have accomplished: you now realize that outcomes with large likelihood ratios are evidence for the alternative and outcomes with small likelihood ratios are evidence against the alternative. What I will ask you to do is to color in an area that, insofar as is possible, has a small chance of occurring under the null hypothesis and a relatively large chance of occurring under the alternatives. Going back to the first diagram you colored, way back at the start of our conversation, you colored in the two tails of the null because they were "extreme." Would they still do a good job?

Student : I don't think so. Even though they were pretty extreme and rare under the null hypothesis, they are practically impossible for any of the alternatives. If my new measurement were, say $3.0$ , I think I would side with the sceptic and deny that any improvement had occurred, even though $3.0$ was an unusual outcome in any case. I want to change that coloring. Here--let me have another crayon.

Figure 10: Improved markup

Teacher : What does that represent?

Student : We started out with you asking me to draw in just 10% of the area under the original histogram--the one describing the null. So now I drew in 10% of the area where the alternatives seem more likely to be occurring. I think that when a new measurement is in that area, it's telling us we ought to believe the alternative.

Teacher : And how should the sceptic react to that?

Student : A sceptic never has to admit he's wrong, does he? But I think his faith should be a little shaken. After all, we arranged it so that although a measurement could be inside the area I just drew, it only has a 10% chance of being there when the null is true. And it has a larger chance of being there when the alternative is true. I just can't tell you how much larger that chance is, because it would depend on how much the scientist improved the apparatus. I just know it's larger. So the evidence would be against the sceptic.

Teacher : All right. Would you mind summarizing your understanding so that we're perfectly clear about what you have learned?

Student : I learned that to compare alternative hypotheses to null hypotheses, we should compare their histograms. We divide the densities of the alternatives by the density of the null: that's what you called the "likelihood ratio." To make a good test, I should pick a small number like 10% or whatever might be enough to shake a sceptic. Then I should find values where the likelihood ratio is as high as possible and color them in until 10% (or whatever) has been colored.

Teacher : And how would you use that coloring?

Student : As you reminded me earlier, the coloring has to be between vertical lines. Values (on the horizontal axis) that lie under the coloring are evidence against the null hypothesis. Other values--well, it's hard to say what they might mean without taking a more detailed look at all the histograms involved.

Teacher : Going back to the value of $0.1$ in the manuscript, what would you conclude?

Student : That's within the area I last colored, so I think the scientist probably was right and the apparatus really was improved.

Teacher : One last thing. Your conclusion was based on picking 10% as the criterion, or "size" of the test. Many people like to use 5% instead. Some prefer 1%. What could you tell them?

Student : I couldn't do all those tests at once! Well, maybe I could in a way. I can see that no matter what size the test should be, I ought to start coloring from $0$ , which is in this sense the "most extreme" value, and work outwards in both directions from there. If I were to stop right at $0.1$ --the value actually observed--I think I would have colored in an area somewhere between $0.05$ and $0.1$ , say $0.08$ . The 5% and 1% people could tell right away that I colored too much: if they wanted to color just 5% or 1%, they could, but they wouldn't get as far out as $0.1$ . They wouldn't come to the same conclusion I did: they would say there's not enough evidence that a change actually occurred.

Teacher : You have just told me what all those quotations at the beginning really mean. It should be obvious from this example that they cannot possibly intend "more extreme" or "greater than or equal" or "at least as large" in the sense of having a bigger value or even having a value where the null density is small. They really mean these things in the sense of large likelihood ratios that you have described. By the way, the number around $0.08$ that you computed is called the "p-value." It can only properly be understood in the way you have described: with respect to an analysis of relative histogram heights--the likelihood ratios.

Student : Thank you. I'm not confident I fully understand all of this yet, but you have given me a lot to think about.

Teacher : If you would like to go further, take a look at the Neyman-Pearson Lemma . You are probably ready to understand it now.

Many tests that are based on a single statistic like the one in the dialog will call it "$z$" or "$t$". These are ways of hinting what the null histogram looks like, but they are only hints: what we name this number doesn't really matter. The construction summarized by the student, as illustrated here, shows how it is related to the p-value. The p-value is the smallest test size that would cause an observation of $t=0.1$ to lead to a rejection of the null hypothesis.

Figure 11: p-value as an area.

In this figure, which is zoomed to show detail, the null hypothesis is plotted in solid blue and two typical alternatives are plotted with dashed lines. The region where those alternatives tend to be much larger than the null is shaded in. The shading starts where the relative likelihoods of the alternatives are greatest (at $0$ ). The shading stops when the observation $t=0.1$ is reached. The p-value is the area of the shaded region under the null histogram: it is the chance, assuming the null is true, of observing an outcome whose likelihood ratios tend to be large regardless of which alternative happens to be true. In particular, this construction depends intimately on the alternative hypothesis. It cannot be carried out without specifying the possible alternatives.
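To see the arithmetic behind that shaded area, here is a one-line sketch in R. It assumes a standard normal null, which the dialog never states explicitly, so the exact number is only illustrative; shading from $0$ outward to the observation $t=0.1$ colors the interval from $-0.1$ to $0.1$.

```r
# Area under an assumed N(0,1) null between -0.1 and 0.1: roughly the
# "somewhere between 0.05 and 0.1, say 0.08" from the dialog
pnorm(0.1) - pnorm(-0.1)   # approximately 0.0797
```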

For two practical examples of the test described here -- one published, the other hypothetical -- see https://stats.stackexchange.com/a/5408/919 . A detailed application of these ideas to testing a median is presented in my post at https://stats.stackexchange.com/a/131489/919 .

  • 5 $\begingroup$ This has excellently dealt with my comment on another answer, that none of the prior answers to this question had tackled, in generality, the commonly-heard "or more extreme" aspect of a p -value. (Though the "tea-testing" answer included a good specific example.) I particularly admire the way this example has been deliberately constructed to highlight that "more extreme" can mean quite the contrary of "bigger" or "further from zero". $\endgroup$ –  Silverfish Commented Jan 5, 2015 at 20:47
  • 5 $\begingroup$ I wish teachers and textbooks didn't use the phrase "or more extreme", really. Two variants I have heard might be paraphrased as "more favourable towards $H_1$" or "more persuasive of $H_1$". In this instance, values nearer zero would indeed be more persuasive that the telescope has become more reliable, but it requires some linguistic acrobatics (plausibly argued, but potentially confusing) to describe them as "more extreme". $\endgroup$ –  Silverfish Commented Jan 5, 2015 at 20:51
  • 3 $\begingroup$ Uniquely insightful as always, thank you for taking the time to write out those incredibly helpful answers. I really wonder why textbooks are never written in a way that offers anywhere near these levels of clarity and intuition. $\endgroup$ –  jeremy radcliff Commented Sep 20, 2016 at 15:47
  • 2 $\begingroup$ I think a link to a definition of likelihood wrt this example could be beneficial $\endgroup$ –  baxx Commented Apr 27, 2017 at 18:10
  • 2 $\begingroup$ I'm not sure if you're being sarcastic, I'm aware of the search function. And I searched for definitions but found lots of literature about tests and stuff and didn't know what to use. I think perhaps if you were to link to an appropriate source for this material level it would be easier for future users too $\endgroup$ –  baxx Commented Apr 27, 2017 at 18:53

Before touching this topic, I always make sure that students are happy moving between percentages, decimals, odds and fractions. If they are not completely happy with this then they can get confused very quickly.

I like to explain hypothesis testing for the first time (and therefore p-values and test statistics) through Fisher's classic tea experiment. I have several reasons for this:

(i) I think working through an experiment and defining the terms as we go along makes more sense than just defining all of these terms to begin with. (ii) You don't need to rely explicitly on probability distributions, areas under the curve, etc. to get across the key points of hypothesis testing. (iii) It explains this ridiculous notion of "as or more extreme than those observed" in a fairly sensible manner. (iv) I find students like to understand the history, origins and back story of what they are studying, as it makes it more real than some abstract theories. (v) It doesn't matter what discipline or subject the students come from, they can relate to the example of tea (N.B. some international students have difficulty with this peculiarly British institution of tea with milk).

[Note: I originally got this idea from Dennis Lindley's wonderful article "The Analysis of Experimental Data: The Appreciation of Tea & Wine" in which he demonstrates why Bayesian methods are superior to classical methods.]

The back story is that Muriel Bristol visited Fisher one afternoon in the 1920s at Rothamsted Experimental Station for a cup of tea. When Fisher put the milk in last, she complained, saying that she could tell whether the milk had been poured first or last, and that she preferred the former. To put this to the test he designed his classic tea experiment, in which Muriel is presented with a pair of tea cups and must identify which one had the milk added first. This is repeated with six pairs of tea cups. Her choices are either Right (R) or Wrong (W), and her results are: RRRRRW.

Suppose that Muriel is actually just guessing and has no ability to discriminate whatsoever. This is called the Null Hypothesis . According to Fisher the purpose of the experiment is to discredit this null hypothesis. If Muriel is guessing she will identify the tea cup correctly with probability 0.5 on each turn and, as the turns are independent, the observed result has probability $0.5^6 = 0.016$ (or 1/64). Fisher then argues that either:

(a) the null hypothesis (Muriel is guessing) is true and an event of small probability has occurred or,

(b) the null hypothesis is false and Muriel has discriminatory powers.

The p-value (or probability value) is the probability of observing this outcome (RRRRRW) given that the null hypothesis is true - it's the small probability referred to in (a), above. In this instance it's 0.016. Since events with small probabilities occur only rarely (by definition), situation (b) may be preferable to situation (a) as an explanation of what occurred. When we reject the null hypothesis we are in fact accepting the opposite hypothesis, which we call the alternative hypothesis. In this example, "Muriel has discriminatory powers" is the alternative hypothesis.

An important consideration is what do we class as a "small" probability? What's the cutoff point at which we're willing to say that an event is unlikely? The standard benchmark is 5% (0.05) and this is called the significance level. When the p-value is smaller than the significance level we reject the null hypothesis as being false and accept our alternative hypothesis. It is common parlance to claim a result is "significant" when the p-value is smaller than the significance level i.e. when the probability of what we observed occurring given the null hypothesis is true is smaller than our cutoff point. It is important to be clear that using 5% is completely subjective (as is using the other common significance levels of 1% and 10%).

Fisher realised that this doesn't work: every possible outcome with one wrong pair was equally suggestive of discriminatory powers. The relevant probability for situation (a), above, is therefore $6(0.5)^6 = 0.094$ (or 6/64), which now is not significant at a significance level of 5%. To overcome this, Fisher argued that if 1 error in 6 is considered evidence of discriminatory powers then so are no errors; i.e., outcomes that indicate discriminatory powers more strongly than the one observed should be included when calculating the p-value. This resulted in the following amendment to the reasoning, either:

(a) the null hypothesis (Muriel is guessing) is true and the probability of events as, or more, extreme than that observed is small, or

(b) the null hypothesis is false and Muriel has discriminatory powers.

Back to our tea experiment: we find that the p-value under this set-up is $7(0.5)^6 = 0.109$, which is still not significant at the 5% threshold.
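These binomial computations are easy to verify; a minimal sketch in R:

```r
# Probability of the exact sequence RRRRRW under pure guessing
0.5^6                                    # 0.015625, about 1/64

# Probability of exactly one wrong pair, in any of the 6 positions
choose(6, 5) * 0.5^6                     # 0.09375, or 6/64

# Fisher's p-value: one wrong pair, or the more extreme all-correct outcome
sum(dbinom(5:6, size = 6, prob = 0.5))   # 0.109375, or 7/64
```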

I then get students to work with some other examples such as coin tossing to work out whether or not a coin is fair. This drills home the concepts of the null/alternative hypothesis, p-values and significance levels. We then move onto the case of a continuous variable and introduce the notion of a test-statistic. As we have already covered the normal distribution, standard normal distribution and the z-transformation in depth it's merely a matter of bolting together several concepts.

As well as calculating test statistics and p-values and making a decision (significant/not significant), I get students to work through published papers in a fill-in-the-missing-blanks game.


  • 2 $\begingroup$ I know I'm somewhat reviving a very old thread, but here it goes... I was really enjoying your answer, but I miss the t-value part in it :( Could you please use your given examples to talk about it? No one answered about the t-test part $\endgroup$ –  Sos Commented Dec 4, 2014 at 12:01
  • $\begingroup$ @sosi It's probably because p-values are much more general than t-values. It's like asking a question about cars and then about the brakes on a Ford Fiesta. $\endgroup$ –  conjectures Commented Dec 29, 2014 at 19:55
  • 2 $\begingroup$ The answer is very interesting (+1), but a few things are confused together at the end. 1. What does it mean for a $p$-value to be "significant at the 5% level"? Either the $p$-value is below 5%, or it is not. I don't see the point in using such an obscure sentence, leaving "significance" undefined. 2. What does it mean to "decide" wether or not a $p$-value is significant? It does not seem justified to bring in decision theory into the mix in this way (especially since Fisher was a strong opponent of the application of the Neyman-Pearson testing framework in the sciences). $\endgroup$ –  Olivier Commented Apr 2, 2017 at 19:22

No amount of verbal explanation or calculations really helped me to understand at a gut level what p-values were, but it really snapped into focus for me once I took a course that involved simulation. That gave me the ability to actually see data generated by the null hypothesis and to plot the means/etc. of simulated samples, then look at where my sample's statistic fell on that distribution.
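A minimal sketch of that kind of simulation in R; the null distribution, sample size, and observed mean below are all made-up values for illustration:

```r
set.seed(1)
obs_mean <- 0.45   # hypothetical observed sample mean

# Means of many samples generated under a hypothetical null: N(0, 1), n = 25
null_means <- replicate(10000, mean(rnorm(25, mean = 0, sd = 1)))

# Plot the null sampling distribution and mark where our statistic fell
hist(null_means, breaks = 50, main = "Sample means under the null")
abline(v = obs_mean, col = "red", lwd = 2)

# Fraction of simulated means at least as far from 0 as the observed one
mean(abs(null_means) >= abs(obs_mean))   # simulation-based two-sided p-value
```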

I think the key advantage to this is that it lets students forget about the math and the test statistic distributions for a minute and focus on the concepts at hand. Granted, it required that I learn how to simulate that stuff, which will cause problems for an entirely different set of students. But it worked for me, and I've used simulation countless times to help explain statistics to others with great success (e.g., "This is what your data looks like; this is what a Poisson distribution looks like overlaid. Are you SURE you want to do a Poisson regression?").

This doesn't exactly answer the questions you posed, but for me, at least, it made them trivial.


  • 11 $\begingroup$ I agree wholeheartedly about the use of simulation for explaining this. But just a small note on the example at the end: I find that people (not just students) do find it difficult to distinguish for any particular distributional assumption, e.g. the poisson, between being marginally poisson distributed and being conditionally poisson distributed. Since only the latter matters for a regression model, a bunch of dependent variable values that aren't poisson need not necessarily be any cause for concern. $\endgroup$ –  conjugateprior Commented Oct 30, 2010 at 9:19
  • 1 $\begingroup$ I have to confess that I didn't know that. I've really appreciated your comments around this site over the past few days of your membership - I hope you'll stick around. $\endgroup$ –  Matt Parker Commented Oct 30, 2010 at 18:29
  • $\begingroup$ @MattParker do you know of any learning resources focussed towards the use of simulation to develop understanding? Or is it just a case of putting some python / R scripts together and running a bunch of tests? $\endgroup$ –  baxx Commented Apr 27, 2017 at 13:39
  • 1 $\begingroup$ @baxx The [Seeing Theory website by Daniel Kunin](students.brown.edu/seeing-theory/) has some interesting tools for this, but it's still under construction. Otherwise, yeah, I've largely just experimented with R's built-in tools for simulation - using them to prove to myself how some method works, or to see what would happen if a predictor was replaced with a random variable, etc. Sorry, I wish I knew of better resources for this! $\endgroup$ –  Matt Parker Commented Apr 27, 2017 at 19:48
  • $\begingroup$ @MattParker cool thanks. Yeah - bit of a chicken and egg in that, to construct the experiments you (I assume?) need to at least get enough to write them. No worries though..... Just checked that site you linked, it's nice, thanks $\endgroup$ –  baxx Commented Apr 27, 2017 at 20:13

A nice definition of p-value is "the probability of observing a test statistic at least as large as the one calculated assuming the null hypothesis is true".

The problem with that is that it requires an understanding of "test statistic" and "null hypothesis". But, that's easy to get across. If the null hypothesis is true, usually something like "parameter from population A is equal to parameter from population B", and you calculate statistics to estimate those parameters, what is the probability of seeing a test statistic that says, "they're this different"?

E.g., If the coin is fair, what is the probability I'd see 60 heads out of 100 tosses? That's testing the null hypothesis, "the coin is fair", or "p = .5" where p is the probability of heads.

The test statistic in that case would be the number of heads.
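A quick sketch of that computation in R; whether you count only the 60-or-more tail or both tails is the one-sided/two-sided issue noted at the end of this answer:

```r
# P(60 or more heads in 100 tosses of a fair coin): the one-sided tail
sum(dbinom(60:100, size = 100, prob = 0.5))   # about 0.028

# Exact two-sided binomial test of H0: p = 0.5
binom.test(60, 100, p = 0.5)$p.value          # about 0.057
```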

Now, I assume that what you're calling "t-value" is a generic "test statistic", not a value from a "t distribution". They're not the same thing, and the term "t-value" isn't (necessarily) widely used and could be confusing.

What you're calling "t-value" is probably what I'm calling "test statistic". In order to calculate a p-value (remember, it's just a probability) you need a distribution, and a value to plug into that distribution which will return a probability. Once you do that, the probability you return is your p-value. You can see that they are related because under the same distribution, different test-statistics are going to return different p-values. More extreme test-statistics will return lower p-values giving greater indication that the null hypothesis is false.

I've ignored the issue of one-sided and two-sided p-values here.


What the p-value doesn't tell you is how likely it is that the null hypothesis is true. Under the conventional (Fisher) significance testing framework we first compute the likelihood of observing the data assuming the null hypothesis is true: this is the p-value. It seems intuitively reasonable then to assume the null hypothesis is probably false if the data are sufficiently unlikely to be observed under the null hypothesis. This is entirely reasonable. Statisticians traditionally use a threshold and "reject the null hypothesis at the 95% significance level" if (1 - p) > 0.95; however, this is just a convention that has proven reasonable in practice. It doesn't mean that there is less than a 5% probability that the null hypothesis is true (and therefore a 95% probability that the alternative hypothesis is true). One reason that we can't say this is that we have not looked at the alternative hypothesis yet.

Imagine a function f() that maps the p-value onto the probability that the alternative hypothesis is true. It would be reasonable to assert that this function is strictly decreasing (such that the more likely the observations are under the null hypothesis, the less likely the alternative hypothesis is to be true), and that it gives values between 0 and 1 (as it gives an estimate of a probability). However, that is all we know about f(), so while there is a relationship between p and the probability that the alternative hypothesis is true, it is uncalibrated. This means we cannot use the p-value to make quantitative statements about the plausibility of the null and alternative hypotheses.

Caveat lector: it isn't really within the frequentist framework to speak of the probability that a hypothesis is true, as a hypothesis isn't a random variable - it is either true or it isn't. So where I have talked of the probability of the truth of a hypothesis, I have implicitly moved to a Bayesian interpretation. It is incorrect to mix Bayesian and frequentist reasoning; however, there is always a temptation to do so, as what we really want is a quantitative indication of the relative plausibility/probability of the hypotheses. But this is not what the p-value provides.


Imagine you have a bag containing 900 black marbles and 100 white, i.e. 10% of the marbles are white. Now imagine you take 1 marble out, look at it and record its colour, take out another, record its colour, etc., and do this 100 times. At the end of this process you will have a count of white marbles which, ideally, we would expect to be 10, i.e. 10% of 100, but which in actual fact may be 8, or 13, or whatever, simply due to randomness. If you repeat this 100-marble withdrawal experiment many, many times and then plot a histogram of the number of white marbles drawn per experiment, you'll find you will have a Bell Curve centred about 10.

This represents your 10% hypothesis: with any bag containing 1000 marbles of which 10% are white, if you randomly take out 100 marbles you will find 10 white marbles in the selection, give or take 4 or so. The p-value is all about this "give or take 4 or so." Let's say that, by referring to the Bell Curve created earlier, you can determine that less than 5% of the time you would get 5 or fewer white marbles, and that another < 5% of the time accounts for 15 or more white marbles; i.e., > 90% of the time your 100-marble selection will contain between 6 and 14 white marbles inclusive.

Now assuming someone plonks down a bag of 1000 marbles with an unknown number of white marbles in it, we have the tools to answer these questions

i) Are there fewer than 100 white marbles?

ii) Are there more than 100 white marbles?

iii) Does the bag contain 100 white marbles?

Simply take out 100 marbles from the bag and count how many of this sample are white.

a) If there are 6 to 14 whites in the sample you cannot reject the hypothesis that there are 100 white marbles in the bag and the corresponding p-values for 6 through 14 will be > 0.05.

b) If there are 5 or fewer whites in the sample you can reject the hypothesis that there are 100 white marbles in the bag and the corresponding p-values for 5 or fewer will be < 0.05. You would expect the bag to contain < 10% white marbles.

c) If there are 15 or more whites in the sample you can reject the hypothesis that there are 100 white marbles in the bag and the corresponding p-values for 15 or more will be < 0.05. You would expect the bag to contain > 10% white marbles.

In response to Baltimark's comment

Given the example above, there is an approximately:-

4.8% chance of getting 5 white balls or fewer

1.85% chance of 4 or fewer

0.55% chance of 3 or fewer

0.1% chance of 2 or fewer

6.25% chance of 15 or more

3.25% chance of 16 or more

1.5% chance of 17 or more

0.65% chance of 18 or more

0.25% chance of 19 or more

0.1% chance of 20 or more

0.05% chance of 21 or more

These numbers were estimated from an empirical distribution generated by a simple Monte Carlo routine run in R and the resultant quantiles of the sampling distribution.
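A sketch of that kind of Monte Carlo routine, for anyone who wants to reproduce rough versions of these numbers (the seed and replication count are arbitrary, so results will wobble slightly):

```r
set.seed(123)
bag <- rep(c("white", "black"), times = c(100, 900))   # 10% white marbles

# Draw 100 marbles without replacement, many times; count whites per draw
whites <- replicate(100000, sum(sample(bag, 100) == "white"))

mean(whites <= 5)    # compare with the ~4.8% reported above
mean(whites >= 15)   # compare with the 6.25% reported above
```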

For the purposes of answering the original question, suppose you draw 5 white balls, there is only an approximate 4.8% chance that if the 1000 marble bag really does contain 10% white balls you would pull out only 5 whites in a sample of 100. This equates to a p value < 0.05. You now have to choose between

i) There really are 10% white balls in the bag and I have just been "unlucky" to draw so few

ii) I have drawn so few white balls that there can't really be 10% white balls (reject the hypothesis of 10% white balls)


  • $\begingroup$ First of all, this is just a big example and doesn't really explain the concept of p-value and test-statistic. Second, you're just claiming that if you get fewer than 5 or more than 15 white marbles, you reject the null hypothesis. What's your distribution that you're calculating those probabilities from? This can be approximated with a normal dist. centered at 10, with a standard deviation of 3. Your rejection criteria is not nearly strict enough. $\endgroup$ –  Baltimark Commented Jul 20, 2010 at 15:21
  • $\begingroup$ I would agree that this is just an example, and I it is true I just picked the numbers 5 and 15 out of the air for illustrative purposes. When I have time I will post a second answer, which I hope will be more complete. $\endgroup$ –  babelproofreader Commented Jul 20, 2010 at 22:00

In statistics you can never say something is absolutely certain, so statisticians use another approach to gauge whether a hypothesis is true or not. They try to reject all the other hypotheses that are not supported by the data.

To do this, statistical tests have a null hypothesis and an alternate hypothesis. The p-value reported from a statistical test is the likelihood of the result given that the null hypothesis was correct. That's why we want small p-values. The smaller they are, the less likely the result would be if the null hypothesis was correct. If the p-value is small enough (i.e., it is very unlikely for the result to have occurred if the null hypothesis was correct), then the null hypothesis is rejected.

In this fashion, null hypotheses can be formulated and subsequently rejected. If the null hypothesis is rejected, you accept the alternate hypothesis as the best explanation. Just remember though that the alternate hypothesis is never certain, since the null hypothesis could have, by chance, generated the results.


  • $\begingroup$ a p-value is the likelihood of a result as or more "extreme" than the result given, not of the actual result. p-value is $Pr(T\geq t|H_0)$ and not $Pr(T=t|H_0)$ (T is test statistic, and t is its observed value). $\endgroup$ –  probabilityislogic Commented Jul 3, 2011 at 1:05

I am a bit diffident to revive the old topic, but I jumped from here, so I post this as a response to the question in the link.

The p-value is a concretely defined term; there should be no room for misunderstanding. But it is somehow mystifying that colloquial translations of the definition of the p-value lead to so many different misinterpretations. I think the root of the problem is in the use of phrases like "at least as adverse to the null hypothesis" or "at least as extreme as the one in your sample data", etc.

For instance, Wikipedia says

...the p-value is the probability of obtaining the observed sample results (or a more extreme result) when the null hypothesis is actually true.

The meaning of the $p$-value gets blurred when people first stumble upon "(or a more extreme result)" and start thinking " more extreeeme ?".

I think it is better to treat the "more extreme result" as something like an indirect speech act . So, my take is:

The p-value is the probability of seeing what you see in an "imaginary world" where the null hypothesis is true.

To make the idea concrete, suppose you have a sample x consisting of 10 observations and you hypothesize that the population mean is $\mu_0=20$. So, in your hypothesized world, the population distribution is $N(20,1)$.

You compute the t-stat as $t_0=\sqrt{n}\,\frac{\bar{X}-\mu_0}{s}$ and find out that $t_0 = 2.97$.

So, what is the probability of observing a $|t_0|$ as large as 2.97 (this is where "more extreme" comes in) in the imaginary world? In the imaginary world $t_0\sim t(9)$; thus, the p-value must be $$\text{p-value}=\Pr(|t_0|\geq 2.97)= 0.01559054.$$
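In R, that tail probability is one line, using the $t(9)$ distribution just derived:

```r
# Two-sided tail probability of |t0| >= 2.97 under a t distribution with 9 df
2 * pt(2.97, df = 9, lower.tail = FALSE)   # 0.01559054
```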

Since the p-value is small, it is very unlikely that the sample x would have been drawn in the hypothesized world. Therefore, we conclude that it is very unlikely that the hypothesized world was in fact the actual world.


  • 4 $\begingroup$ +1, but when you write "probability of seeing what you see" and omit the "more extreme" part, this sentence becomes strictly speaking false (and potentially misleading, even if perhaps less confusing). It is not the probability of seeing what you see (this is usually zero). It is the probability of seeing what you see "or more extreme". Even though this might be a confusing bit for many, it is still crucial (and one can argue endlessly about the degree of subjectivity that hides behind this "more extreme" wording). $\endgroup$ –  amoeba Commented Dec 30, 2014 at 9:30
  • $\begingroup$ @amoeba I thought, when adequate example supplied, it could serve as a proxy for "obtaining the observed sample results (or a more extreme result)". Maybe, better wording is needed. $\endgroup$ –  Khashaa Commented Dec 30, 2014 at 10:09
  • 1 $\begingroup$ I was going to make the same observation as @amoeba; the "or more extreme" part is handled well by example in the student heights and tea party answers, but I don't think any answers in this thread have hit upon a clear general explanation of it, particularly one which covers different alternative hypotheses. I do agree with this answer suggesting that the "or more extreme" part is a conceptual sticking point for many students. $\endgroup$ –  Silverfish Commented Dec 30, 2014 at 13:43
  • $\begingroup$ @Silverfish: and not only students. How many Bayesian-vs-frequentists rants have I read that discuss the subjectivity/objectivity issue of this "more extreme" bit! $\endgroup$ –  amoeba Commented Dec 30, 2014 at 13:46
  • 1 $\begingroup$ @Silver I agree with your criticism and have posted an answer attempting to address it. "Or more extreme" is the very crux of the matter. $\endgroup$ –  whuber ♦ Commented Dec 30, 2014 at 21:53

I have also found simulations to be useful in teaching.

Here is a simulation for the arguably most basic case in which we sample $n$ times from $N(\mu,1)$ (hence, $\sigma^2=1$ is known for simplicity) and test $H_0:\mu=\mu_0$ against a left-sided alternative.

Then, the $t$-statistic $\text{tstat}:=\sqrt{n}(\bar{X}-\mu_0)$ is $N(0,1)$ under $H_0$, such that the $p$-value is simply $\Phi(\text{tstat})$ or pnorm(tstat) in R.

In the simulation, the p-value is the fraction of times that data generated under the null $N(\mu_0,1)$ (here, $\mu_0=2$) yield sample means, stored in nullMeans, that are less than (i.e., "more extreme" than, in this left-sided test) the mean calculated from the observed data.
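Since the code itself is not shown above, here is a minimal reconstruction sketch; the sample size, the mean used to generate the "observed" data, and the replication count are all assumed values:

```r
set.seed(1)
n   <- 100
mu0 <- 2                                # null value, as above

x     <- rnorm(n, mean = 1.9, sd = 1)   # hypothetical observed data
tstat <- sqrt(n) * (mean(x) - mu0)      # N(0,1) under H0, since sigma = 1
pnorm(tstat)                            # exact left-sided p-value

# Simulation analogue: fraction of null-generated sample means that are
# less than (i.e. "more extreme" than) the observed sample mean
nullMeans <- replicate(10000, mean(rnorm(n, mean = mu0, sd = 1)))
mean(nullMeans <= mean(x))              # approximates pnorm(tstat)
```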


I find it helpful to follow a sequence in which you explain concepts in the following order:

(1) The z score and proportions above and below the z score, assuming a normal curve.

(2) The notion of a sampling distribution, and the z score for a given sample mean when the population standard deviation is known (and thence the one-sample z test).

(3) The one-sample t-test and the likelihood of a sample mean when the population standard deviation is unknown (replete with stories about the secret identity of a certain industrial statistician and why Guinness is Good For Statistics).

(4) The two-sample t-test and the sampling distribution of mean differences.

The ease with which introductory students grasp the t-test has much to do with the groundwork that is laid in preparation for this topic.

/* instructor of terrified students mode off */


I have yet to prove the following argument, so it might contain errors, but I really want to throw in my two cents (hopefully, I'll update this with a rigorous proof soon). Another way of looking at the $p$-value is:

$p$-value: a statistic $X$ such that $$\forall\, 0 \le c \le 1, \quad F_{X|H_0}\left(\inf\{x: F_{X|H_0}(x) \ge c\}\right) = c,$$ where $F_{X|H_0}$ is the distribution function of $X$ under $H_0$.

Specifically, if $X$ has a continuous distribution and you're not using an approximation, then

  • Every $p$-value is a statistic with a uniform distribution on $[0, 1]$, and
  • Every statistic with a uniform distribution on $[0, 1]$ is a $p$-value.

You may consider this a generalized description of $p$-values.
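The first bullet is easy to check empirically; a quick sketch in R (the choice of test, sample size, and replication count are arbitrary):

```r
# p-values of a valid test, applied to data generated under H0, should be
# uniform on [0, 1] when the test statistic has a continuous distribution
set.seed(7)
pvals <- replicate(10000, t.test(rnorm(20, mean = 0), mu = 0)$p.value)
hist(pvals, breaks = 20)   # approximately flat histogram
```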


  • $\begingroup$ This definition makes sense only for discrete distributions (and then is not correct), because the second appearance of "$P$" makes it clear it refers to probabilities, not probability densities. Moreover, there are extremely few distributions (if any) which have the stated property, suggesting that there must be typographical errors in the statement. As far as your subsequent claims go, (1) is ideally true but (2) is not, unless you allow the null hypothesis to depend on the statistic! $\endgroup$ –  whuber ♦ Commented Jun 5, 2019 at 12:09
  • $\begingroup$ @whuber Thanks for the input. I have edited the definition, and it should make more sense now! $\endgroup$ –  nalzok Commented Jun 5, 2019 at 15:47
  • 1 $\begingroup$ It does make sense, thank you: if I'm reading it correctly, it asserts the null distribution of $X$ is uniform on $[0,1].$ However, that captures only part of the properties of p-values; it does not characterize p-values; and it says nothing about what they mean or how to interpret them. Consider studying some of the other answers in this thread for information on what is missing. $\endgroup$ –  whuber ♦ Commented Jun 5, 2019 at 15:50
  • $\begingroup$ Here is an example that you might find interesting. The distribution family is Uniform$(\theta,\theta+1)$ for $\theta\in\mathbb{R},$ the null hypothesis is $\theta=0,$ and the alternative is its complement. Consider a random sample $\mathbf{X}=(X_1,\ldots,X_n).$ Define the statistic $X(\mathbf{X}) = X_1.$ Obviously this has a uniform distribution on $[0,1]$ under $H_0:$ but in what sense is it a p-value? What is the corresponding hypothesis test? Suppose we take a sample of size $n=1$ and observe the value $X_1=-2:$ are you claiming the p-value is $-2$?? $\endgroup$ –  whuber ♦ Commented Jun 5, 2019 at 15:59
What does a "p-value" mean in relation to the hypothesis being tested?

In an ontological sense (what is truth?), it means nothing . Any hypothesis test is based on untested assumptions . These are normally part of the test itself, but are also part of whatever model you are using (e.g. in a regression model). Since we are merely assuming these, we cannot know whether the reason the p-value is below our threshold is that the null is false. It is a non sequitur to deduce unconditionally that because of a low p-value we must reject the null. For instance, something in the model could be wrong.

In an epistemological sense (what can we learn?), it means something . You gain knowledge conditional on the untested premises being true. Since (at least until now) we cannot prove every edifice of reality, all our knowledge is necessarily conditional. We will never get to the "truth".


I think that examples involving marbles or coins or height-measuring can be fine for practicing the math, but they aren't good for building intuition. College students like to question society, right? How about using a political example?

Say a political candidate ran a campaign promising that some policy will help the economy. She was elected, she got the policy enacted, and 2 years later, the economy is booming. She's up for re-election, and claims that her policy is the reason for everyone's prosperity. Should you re-elect her?

The thoughtful citizen should say "well, it's true that the economy is doing well, but can we really attribute that to your policy?" To truly answer this, we must consider the question "would the economy have done well in the last 2 years without it?" If the answer is yes (e.g. the economy is booming because of some new unrelated technological development) then we reject the politician's explanation of the data.

That is, to examine one hypothesis (policy helped the economy), we must build a model of the world where that hypothesis is null (the policy was never enacted). We then make a prediction under that model. We call the probability of observing this data in that alternate world the p-value . If the p-value is too high, then we aren't convinced by the hypothesis--the policy made no difference. If the p-value is low then we trust the hypothesis--the policy was essential.


  • 1 $\begingroup$ I disagree with the p being defined as "We call the probability of observing this data in that alternate world the p-value" and also the strength of the conclusion being drawn (especially failure to reject the null). $\endgroup$ –  Silverfish Commented Dec 30, 2014 at 13:46
  • $\begingroup$ @Silverfish Could you elaborate? Probably it would be more correct to call the p-value the probability of making that observation OR a more extreme observation. But it sounds like you have a deeper criticism. $\endgroup$ –  cgreen Commented Jan 3, 2015 at 19:45
  • 1 $\begingroup$ Since the original question is asking what a p-value is, I thought that getting that definition across clearly was important. Just saying "more extreme" isn't in itself very helpful without explaining what "more extreme" might mean - that's a weakness of most answers in this thread I think. Only whuber's answer and the "tea test" one seem to really explain why the "more extreme" matters too. $\endgroup$ –  Silverfish Commented Jan 3, 2015 at 23:22
  • $\begingroup$ I also felt your conclusions are phrased too strongly. If we reject the null, we have significant evidence against it, but don't know that it's false. When we fail to reject the null, that certainly doesn't mean the null is true (though it may well be). As a more general comment I have the feeling the test you're describing, in quite abstract terms, is not likely to be clear to a learner who is just learning how to perform a test. The lack of a clearly defined test statistic doesn't sit well with the original question asking how to interpret t -statistic too. $\endgroup$ –  Silverfish Commented Jan 3, 2015 at 23:33
  • $\begingroup$ A feature of this answer I like a lot is the clear explanation that p-values are calculated using a null model, even if we don't (subjectively) believe the null model is actually true. I think the fact test statistics are calculated under a model is a key point that many students struggle with. $\endgroup$ –  Silverfish Commented Jan 3, 2015 at 23:37

The p-value isn't as mysterious as most analysts make it out to be. It is a way of not having to calculate the confidence interval for a t-test, but simply determining the confidence level with which the null hypothesis can be rejected.

ILLUSTRATION. You run a test. The p-value comes up as 0.1866 for the Q-variable and 0.0023 for the R-variable. (As percentages, these are 18.66% and 0.23%.)

If you are testing at a 95% confidence level to reject the null hypothesis:

for Q: 100-18.66= 81.34%

for R: 100-0.23= 99.77%.

At a 95% confidence level, Q gives an 81.34% confidence to reject. This falls below 95% and is unacceptable. ACCEPT NULL.

R gives a 99.77% confidence to reject null. Clearly above the desired 95%. We thus reject the null.

I have just illustrated reading the p-value through a 'reverse' way of measuring it up against the confidence level at which we reject the null hypothesis.


  • 8 $\begingroup$ Welcome to the site. What do you mean by $Q$-variable and $R$-variable? Please clarify. Also, use of the phrase "accept null" is usually considered quite undesirable, even misleading. $\endgroup$ –  cardinal Commented Jan 8, 2012 at 3:46
  • $\begingroup$ @cardinal points out an important point. You're not going to accept the null. $\endgroup$ –  Patrick Coulombe Commented Oct 13, 2013 at 22:22


What is Statistical Significance? P Value Defined and How to Calculate It


Statistical significance and p-values are central concepts in data analysis and statistical inference. But what do they actually mean, and why are they so widely used? This comprehensive guide aims to demystify these key ideas for both statistics novices and experienced practitioners.

We'll cover:

  • Definitions of statistical significance, p-values and related concepts
  • Step-by-step explanation of how to calculate a p-value across common tests
  • Examples of hypothesis testing scenarios and interpreting p-values
  • Common misconceptions and pitfalls to avoid
  • Criticisms of over-reliance on p-values
  • Alternative approaches to quantify evidence in data
  • Relevance to the replication crisis and p-hacking

So let's dive into unlocking the meaning of this ubiquitous, yet widely misunderstood idea.

What is a p-value?

In statistics, a p-value represents the probability of obtaining results at least as extreme as your observed data, assuming the null hypothesis is true.

The null hypothesis essentially states there is no statistical significance or relationship between the variables you are analyzing. Often denoted $H_0$, the null serves as the "baseline" or "no effect" scenario.

So in plain terms, a p-value lets you quantify how rare or common your results would be if there was no real effect in your data. The lower the p-value, the more unlikely or "surprising" the results, if the null hypothesis of no effect was actually true.

Figure: depiction of a low and high p-value on example data distributions.

As shown above, a lower p-value (e.g. p=0.01) falls further into the tails of the distribution. This indicates that data showing such a difference between groups would be less probable to occur by chance alone than data yielding a higher p-value (e.g. p=0.30).

Now, what constitutes a "low enough" p-value is based on an arbitrary statistical significance threshold, commonly set at 0.05. If your calculated p-value falls at or below this threshold, the result is deemed statistically significant.

Interpreting Statistical Significance

A statistically significant result simply suggests there is likely some underlying effect between the variables or groups in your analysis, rather than pure noise: the evidence in the data leans towards $H_A$, the alternative hypothesis.

But here is a common misconception and trap. Statistical significance does not automatically imply the differences uncovered are large, important or meaningful in a practical sense.

Significance testing using p-values has a narrow focus on rejecting the null hypothesis ($H_0$) of strictly zero effect. But real world differences often have multiple contributing factors, so statistical significance cannot differentiate between small, moderate or large effects.

Example Scenario

  • A pharmaceutical trial finds a new drug has a statistically significant impact on reducing blood pressure
  • But when examining the confidence intervals, the actual reduction is minor – less than 5 mmHg systolic on average
  • So while the effect is "statistically significant", it likely won't offer much tangible patient benefit

Therefore, the substantive relevance of a statistically significant finding requires much deeper investigation. You need to critically examine things like effect sizes, confidence intervals and experimental contexts to determine if the results have meaningful implications.

With that crucial distinction in mind regarding what statistical significance does and does not imply, let's walk through an illustrative example.

Hypothesis Testing Example

Let's say a pharmaceutical company developed a new drug to treat anxiety. They conducted a randomized control trial with 100 participants to compare the effectiveness of the new drug (the treatment group) against an existing drug already on the market (the control group).

The metric they tracked was anxiety levels using a standardized validated questionnaire, where higher scores indicate greater anxiety. The research question they want to evaluate is:

Is there evidence suggesting the new drug outperforms the old drug in reducing anxiety?

1. State the Hypotheses

First, we define the null and alternative hypotheses:

  • $H_0$ (null): there is no difference in mean anxiety reduction between the new drug and the existing drug.
  • $H_A$ (alternative): the new drug reduces anxiety more than the existing drug.

2. Set Significance Level

We need to set our threshold p-value required to claim statistical significance. Common levels are 0.05, 0.01 or 0.001 – but this is an arbitrary choice. Let's use the standard $\alpha = 0.05$ level.

3. Collect Data and Compute Test Statistic

The researchers gather data and compare anxiety score means. Since we are comparing the means of two independent groups, we can use an independent (unpaired) t-test:

$$t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$$

We obtain a t-test statistic of 3.15 based on the sample data.

4. Find the P-value

Using the t-statistic, degrees of freedom and a probability calculator, we compute a p-value = 0.012.
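As a rough sketch of this step in R: the post reports $t = 3.15$ but not the degrees of freedom, so the df below is a hypothetical stand-in and will not reproduce the exact 0.012 quoted above.

```r
# Two-sided p-value from a t statistic and (assumed) degrees of freedom
t_stat <- 3.15
df     <- 98   # hypothetical: e.g. two groups of 50 from the 100 participants
2 * pt(t_stat, df = df, lower.tail = FALSE)
```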

5. Compare to Significance Level

Since our p-value of 0.012 is below the 0.05 significance level, we have obtained a statistically significant result.

6. Interpret Results

We can reject the null hypothesis at the 5% level. So we have reasonable evidence to suggest there is likely some true difference in anxiety reduction effectiveness favoring the new drug. But additional investigation is required to determine if the magnitude of improvement is clinically meaningful.

This framework offers a useful starting point for peeking inside the data to identify potential signals or stories. But properly interpreting what is statistically significant versus practically meaningful is critical.

Now that you understand the intuition, let's shift gears to cover how to calculate p-values.

How to Calculate a P-Value

The exact process for computing a p-value depends on the type of statistical test you are running. Common options include:

  • Chi-squared tests
  • Nonparametric tests
  • Regression models
  • Machine learning algorithms

And many more. Each test carries its own assumptions and methodology. But generally, calculating a p-value has this overarching workflow:

General Process for Finding p-values

  • Compute the relevant test statistic (t score, F ratio, Z score, etc.)
  • Determine appropriate probability distribution based on test
  • Find cumulative probability of obtaining test statistic (or more extreme version)
  • That cumulative probability = p-value!

Let's break this down in more detail by revisiting our familiar t-test example.

Calculating a T-test P-value in R

For a t-test, you compute a t-statistic that summarizes the separation between two group means relative to the variability or spread of scores within groups.

If the null hypothesis of no true difference is correct, you expect the t-statistic to follow a central t-distribution with degrees of freedom based on the sample sizes.

You can then leverage this probability distribution to find the p-value based on the observed data, following these steps:

Computing a T-test P-value

  • Compute t-statistic from data
  • Determine degrees of freedom
  • Find cumulative probability of obtaining the t-statistic or more extreme value from t-distribution

Let's walk through a full example in R:
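Below is a minimal sketch of that workflow using hypothetical simulated data (because the data are random, your exact t-statistic and p-value will differ somewhat from the 0.0089 quoted below):

```r
# Simulate anxiety scores for two independent groups (hypothetical data)
set.seed(42)
control   <- rnorm(50, mean = 50, sd = 10)  # existing drug
treatment <- rnorm(50, mean = 44, sd = 10)  # new drug

# Run an independent two-sample t-test (H_A: treatment mean is lower)
result <- t.test(treatment, control, alternative = "less")
t_stat <- unname(result$statistic)   # t-statistic
df     <- unname(result$parameter)   # degrees of freedom

# Recover the p-value directly from the t-distribution: the cumulative
# probability of a t value this extreme or more extreme (left tail)
p_value <- pt(t_stat, df)

c(t = t_stat, df = df, p = p_value)  # p matches result$p.value
```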

Here are the key steps:

  • Simulate data for two groups
  • Run a t-test to obtain t-statistic and degrees of freedom
  • Use R's pt() function to find the cumulative probability from the t-distribution
  • Output p-value = 0.0089

This small p-value indicates our observed data would be unlikely under the null hypothesis, allowing us to reject it.

While t-tests are popular, the exact p-value computation varies slightly across statistical techniques:

P-value Computations by Test

  • t-test: t statistic -> t distribution
  • Z-test: Z statistic -> normal distribution
  • F-test: F statistic -> F distribution
  • Chi-squared: chi-square statistic -> chi-square distribution
  • Nonparametrics: permutation testing / Monte Carlo simulation
  • Regression: coefficient t statistics -> t distribution

But the general framework sticks to:

  • Compute relevant test statistic
  • Map to probability distribution
  • Find cumulative probability

Understanding how to move from a test statistic to a p-value unlocks a wide array of statistical analyses.
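The same three-step recipe carries over to other tests. Here is a brief R sketch (the statistic values and degrees of freedom are made up purely for illustration), mapping each statistic to its reference distribution:

```r
# Z-test: Z statistic -> standard normal (two-sided p-value)
2 * pnorm(-abs(2.1))                       # ~0.036

# F-test: F statistic -> F distribution (upper tail), df1 = 3, df2 = 96
pf(4.2, df1 = 3, df2 = 96, lower.tail = FALSE)

# Chi-squared test: statistic -> chi-square distribution (upper tail), df = 4
pchisq(11.3, df = 4, lower.tail = FALSE)   # ~0.023
```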

Common Misconceptions

Like any concept in data science, statistical significance testing carries several pitfalls. Here are important misconceptions to avoid:

Statistical vs Practical Significance

❌ "It‘s statistically significant, so the finding is important"

✅ Statistical significance just means the result is unlikely to be pure chance. Real-world meaning needs deeper investigation through effect sizes, costs/benefits, reproducibility, etc.

Chance and the Null Hypothesis

❌ "A statistically significant result occurred because of random chance"

✅ By definition, significance suggests real evidence beyond pure randomness. But multiple factors still usually play a role.

❌ "High p-value means you should accept the null hypothesis"

✅ A p-value is either significant or not based on your threshold. But failing significance just means lack of current evidence to reject the null.

Absence of Significance

❌ "If a finding is not statistically significant, it‘s not important"

✅ Underpowered studies may miss real effects. Focus on effect magnitudes and uncertainty rather than statistical significance alone.
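To see why underpowered studies miss real effects, base R's power.t.test can estimate power under assumed conditions (the numbers below are purely illustrative):

```r
# Power of a two-sample t-test to detect a modest effect:
# n = 20 per group, true mean difference = 3 points, sd = 10
power.t.test(n = 20, delta = 3, sd = 10, sig.level = 0.05)
# Low power here means a real effect of this size would usually
# fail to reach p < 0.05, even though it exists
```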

Proper interpretation requires nuance – but avoiding these pitfalls will put you ahead of the curve. Next we'll tackle criticisms of over-reliance on p-values.

Criticisms of P-values

While p-values and significance testing offer useful guides, various limitations have prompted debate. Most center around leaning too heavily on p-values alone to draw conclusions.

Arbitrary Thresholds

P-values hinge on an arbitrary threshold for declaring significance, commonly 0.05. But the distinction between 0.049 and 0.051 is slight – encouraging dichotomous black & white thinking.

Preferred Alternatives: Confidence intervals give a range of plausible effect sizes, avoiding strict cutoffs. Alternatively, use stricter significance levels like 0.001 or 0.0001 when stronger evidence is required.
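For example, the t.test call from the earlier sketch already reports a confidence interval alongside its p-value (assuming the hypothetical treatment and control vectors simulated above):

```r
# 95% confidence interval for the difference in group means:
# a range of plausible effect sizes rather than a binary verdict
t.test(treatment, control)$conf.int
```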

No Effect Size

Statistical significance testing focuses solely on the existence of an effect – but not the magnitude. Massive versus tiny differences both get a "significant" label.

Preferred Alternatives: Analyzing effect sizes and variance explained provides more context on real-world impacts.
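One common effect-size measure is Cohen's d, the standardized difference between group means. Here is a minimal sketch reusing the hypothetical vectors from the earlier simulation:

```r
# Cohen's d: mean difference scaled by the pooled standard deviation
cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  pooled_sd <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / pooled_sd
}
cohens_d(treatment, control)  # |d| near 0.2 is small, 0.5 medium, 0.8 large
```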

No Prior Information

Frequentist p-values don't account for outside knowledge and constraints. But science builds incrementally on prior work.

Preferred Alternatives: Bayesian statistical methods formally incorporate previous evidence and beliefs when quantifying whether results shift understanding.

There are no perfect solutions. But pairing p-value hypothesis testing with other quantitative approaches helps extract deeper insights from data.

Now let's discuss an area that confuses many…

Multiple Testing and Corrections

When analyzing more than one relationship, conducting multiple statistical tests introduces another wrinkle. The more comparisons you make, the more likely random chance produces low p-values by coincidence.

Think of rolling a die. On any given roll, there is a 1 in 6 chance of getting a 6. But keep rolling and the chance of seeing at least one 6 climbs quickly: after six rolls it is already about 67% ($1 - (5/6)^6 \approx 0.67$).

This issue is called the multiple testing or multiple comparisons problem. The more statistical tests, the higher probability of mistakenly declaring false positives due to randomness.

Several adjustment procedures seek to correct for this effect when evaluating many hypotheses:

Common Corrections for Multiple Tests

  • Bonferroni: controls the familywise error rate (very high stringency)
  • Holm: controls the familywise error rate (high stringency)
  • FDR (Benjamini-Hochberg): controls the false discovery rate (moderate stringency)
  • Familywise error rate : Probability of at least one type I error across all tests
  • False discovery rate : Expected proportion of false positives among tests declared significant

More stringent corrections lower the risk of any false positives, but they also increase the risk of false negatives. There are always tradeoffs when adjusting significance levels.
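To make the inflation concrete, and to show how corrections counteract it, here is a short sketch using base R's p.adjust (the raw p-values are hypothetical):

```r
# Chance of at least one false positive across m independent tests at alpha = 0.05
m <- 20
1 - (1 - 0.05)^m                          # ~0.64

# Adjusting a set of raw p-values (hypothetical values)
p_raw <- c(0.001, 0.008, 0.02, 0.04, 0.11)
p.adjust(p_raw, method = "bonferroni")    # familywise, very stringent
p.adjust(p_raw, method = "holm")          # familywise, stepwise
p.adjust(p_raw, method = "BH")            # false discovery rate
```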

Overall, be aware of multiplicity issues when interpreting collections of hypothesis tests – and consider reasonable corrections to better separate signal from noise.

Now let's connect this all to a profound issue threatening research reproducibility…

The Replication Crisis

In recent years the scientific community has grown increasingly concerned by failures to replicate previously published research. Across fields like psychology, medicine, economics and more, researchers have voiced doubts about research findings that initially showed strong statistical significance.

This has become known as the "replication crisis" or "reproducibility crisis" – and it has prompted much heated debate on relying so heavily on p-values alone.

Why Does This Happen?

There are a few key reasons findings fail to replicate:

  • Low statistical power : Original studies used small samples unable to detect small/moderate effects
  • P-hacking : Continuously analyzing data different ways until a low p-value emerges
  • Publication bias : Only reporting positive significant effects, allowing false positives to flourish
  • Multiple testing : Running many comparisons increases false positives by chance

Many have blamed the obsession with statistical significance for encouraging these questionable practices. Obtaining that vaunted low p-value often becomes the goal itself rather than carefully understanding the data's story.

This pressures researchers into chasing statistically significant results at all costs – even if the underlying effects are marginal or spurious.

Solutions and Preventative Measures

In response, various recommendations advocate improving research practices:

  • Increase transparency and data sharing
  • Publish negative/null results rather than only positive
  • Pre-register analysis plans to prevent manipulating findings
  • Emphasize confidence intervals and effect sizes
  • Use stricter p-value thresholds where appropriate
  • Support replication-focused journals and replication studies

Significance thresholds still offer useful guides. But focusing solely on clearing that statistical significance bar often takes science down the wrong path. Keeping proper perspective is crucial when leveraging these tools to guide understanding.

Wrapping Up

I hope this guide provided an intuitive yet comprehensive reference for statistical significance testing and the pivotal p-value concept. Properly understanding the meaning, calculation, and practical interpretation of p-values lets you squeeze reliable insights from your data.

Key takeaways:

  • P-values help assess evidence against a null hypothesis of no effect/relationship
  • Statistically significant ≠ practically meaningful – examine effect sizes
  • Calculation maps a test statistic onto its reference probability distribution
  • Avoid misconceptions about chance, non-significant results, and practical importance
  • Multiple testing adjustments help control false positives
  • Replication crisis prompts rethinking strict p-value focus

While p-values can provide critical pieces of the puzzle, resist putting all your trust in statistical significance alone. Consistent, meaningful effects across independent lines of evidence matter more than low p-values themselves.

Combining thoughtfully calculated p-values with critical thinking around experimental design, effect sizes, and principles of analytics lays the foundation for extracting true value from data and advancing understanding.



