Critical Value Calculator


Welcome to the critical value calculator! Here you can quickly determine the critical value(s) for two-tailed tests, as well as for one-tailed tests. It works for the most common distributions in statistical testing: the standard normal distribution N(0,1) (that is, when you have a Z-score), t-Student, chi-square, and F-distribution.

What is a critical value? And what is the critical value formula? Scroll down – we provide you with the critical value definition and explain how to calculate critical values in order to use them to construct rejection regions (also known as critical regions).

How to use critical value calculator

The critical value calculator is your go-to tool for swiftly determining critical values in statistical tests, be it one-tailed or two-tailed. To effectively use the calculator, follow these steps:

In the first field, input the distribution of your test statistic under the null hypothesis: is it a standard normal N(0,1), t-Student, chi-squared, or Snedecor's F? If you are not sure, check the sections below devoted to those distributions, and try to locate the test you need to perform.

In the field What type of test?, choose the alternative hypothesis: two-tailed, right-tailed, or left-tailed.

If needed, specify the degrees of freedom of the test statistic's distribution. If you need more clarification, check the description of the test you are performing. You can learn more about the meaning of this quantity in statistics from the degrees of freedom calculator.

Set the significance level, α. By default, we pre-set it to the most common value, 0.05, but you can adjust it to your needs.

The critical value calculator will display your critical value(s) and the rejection region(s).

For example, let's envision a scenario where you are conducting a one-tailed hypothesis test using a t-Student distribution with 15 degrees of freedom. You have opted for a right-tailed test and set a significance level (α) of 0.05. The results indicate that the critical value is 1.7531, and the critical region is (1.7531, ∞). This implies that if your test statistic exceeds 1.7531, you will reject the null hypothesis at the 0.05 significance level.
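If you have SciPy at hand, you can reproduce this example yourself: the quantile function of the t-distribution is exposed as `scipy.stats.t.ppf`. A minimal sketch:

```python
from scipy.stats import t

alpha, df = 0.05, 15
crit = t.ppf(1 - alpha, df)  # right-tailed critical value for t with 15 df
rejection_region = (crit, float("inf"))
# crit is approximately 1.7531: reject H0 whenever the test statistic exceeds it
```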

👩‍🏫 Want to learn more about critical values? Keep reading!

What is a critical value?

In hypothesis testing, critical values are one of two approaches that allow you to decide whether to retain or reject the null hypothesis. The other approach is to calculate the p-value (for example, using the p-value calculator).

The critical value approach consists of checking whether the value of the test statistic generated by your sample belongs to the so-called rejection region, or critical region, which is the region where the test statistic is highly improbable to lie. A critical value is a cut-off value (or two cut-off values in the case of a two-tailed test) that constitutes the boundary of the rejection region(s). In other words, critical values divide the scale of your test statistic into the rejection region and the non-rejection region.

Once you have found the rejection region, check if the value of the test statistic generated by your sample belongs to it:

  • If so, it means that you can reject the null hypothesis and accept the alternative hypothesis; and
  • If not, then there is not enough evidence to reject H₀.

But how to calculate critical values? First of all, you need to set a significance level, α, which quantifies the probability of rejecting the null hypothesis when it is actually correct. The choice of α is arbitrary; in practice, we most often use a value of 0.05 or 0.01. Critical values also depend on the alternative hypothesis you choose for your test, as elucidated in the next section.

Critical value definition

To determine critical values, you need to know the distribution of your test statistic under the assumption that the null hypothesis holds. Critical values are then points with the property that the probability of your test statistic assuming values at least as extreme as those critical values is equal to the significance level α. Wow, quite a definition, isn't it? Don't worry, we'll explain what it all means.

First, let us point out it is the alternative hypothesis that determines what "extreme" means. In particular, if the test is one-sided, then there will be just one critical value; if it is two-sided, then there will be two of them: one to the left and the other to the right of the median value of the distribution.

Critical values can be conveniently depicted as the points with the property that the area under the density curve of the test statistic from those points to the tails is equal to α:

Left-tailed test: the area under the density curve from the critical value to the left is equal to α;

Right-tailed test: the area under the density curve from the critical value to the right is equal to α; and

Two-tailed test: the area under the density curve from the left critical value to the left is equal to α/2, and the area under the curve from the right critical value to the right is equal to α/2 as well; thus, the total area equals α.

Critical values for symmetric distribution

As you can see, finding the critical values for a two-tailed test with significance α boils down to finding both one-tailed critical values with a significance level of α/2.

How to calculate critical values?

The formulae for the critical values involve the quantile function, Q, which is the inverse of the cumulative distribution function (cdf) of the test statistic's distribution (calculated under the assumption that H₀ holds!): Q = cdf⁻¹.

Once we have agreed upon the value of α, the critical value formulae are the following:

  • Left-tailed test: Q(α);
  • Right-tailed test: Q(1 − α); and
  • Two-tailed test: Q(α/2) and Q(1 − α/2).

In the case of a distribution symmetric about 0, the critical values for the two-tailed test are symmetric as well: they equal ±Q(1 − α/2).

Unfortunately, the probability distributions that are the most widespread in hypothesis testing have somewhat complicated cdf formulae, so finding critical values by hand requires statistical tables or specialized software. In these cases, the best option is, of course, our critical value calculator! 😁
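In software, the quantile function Q is typically available directly; in SciPy, for instance, it is called `ppf` (percent-point function), and it is the literal inverse of `cdf`. A quick sketch of this relationship for the four distributions discussed here:

```python
from scipy.stats import norm, t, chi2, f

# ppf is the quantile function Q = cdf^(-1); cdf(ppf(p)) returns p
p = 0.95
q_norm = norm.ppf(p)      # N(0,1)
q_t    = t.ppf(p, 10)     # t-Student with 10 degrees of freedom
q_chi2 = chi2.ppf(p, 10)  # chi-square with 10 degrees of freedom
q_f    = f.ppf(p, 5, 10)  # F with (5, 10) degrees of freedom
```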

Z critical values

Use the Z (standard normal) option if your test statistic follows (at least approximately) the standard normal distribution N(0,1).

In the formulae below, u denotes the quantile function of the standard normal distribution N(0,1):

Left-tailed Z critical value: u(α)

Right-tailed Z critical value: u(1 − α)

Two-tailed Z critical values: ±u(1 − α/2)
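These three formulas translate directly into code; a sketch using SciPy, where `norm.ppf` plays the role of u:

```python
from scipy.stats import norm

alpha = 0.05
z_left  = norm.ppf(alpha)          # u(alpha):         about -1.6449
z_right = norm.ppf(1 - alpha)      # u(1 - alpha):     about  1.6449
z_two   = norm.ppf(1 - alpha / 2)  # ±u(1 - alpha/2):  about ±1.9600
```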

Check out the Z-test calculator to learn more about the most common Z-test, used on the population mean. There are also Z-tests for the difference between two population means, in particular between two proportions.

t critical values

Use the t-Student option if your test statistic follows the t-Student distribution. This distribution is similar to N(0,1), but its tails are fatter; the exact shape depends on the number of degrees of freedom. If this number is large (>30), which typically happens for large samples, the t-Student distribution is practically indistinguishable from N(0,1). Check our t-statistic calculator to compute the related test statistic.

t-Student distribution densities

In the formulae below, Q_{t,d} is the quantile function of the t-Student distribution with d degrees of freedom:

Left-tailed t critical value: Q_{t,d}(α)

Right-tailed t critical value: Q_{t,d}(1 − α)

Two-tailed t critical values: ±Q_{t,d}(1 − α/2)
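A sketch with SciPy, where `t.ppf(p, d)` serves as Q_{t,d}(p):

```python
from scipy.stats import t

alpha, d = 0.05, 10
t_left  = t.ppf(alpha, d)          # Q_{t,d}(alpha):         about -1.812
t_right = t.ppf(1 - alpha, d)      # Q_{t,d}(1 - alpha):     about  1.812
t_two   = t.ppf(1 - alpha / 2, d)  # ±Q_{t,d}(1 - alpha/2):  about ±2.228
```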

Visit the t-test calculator to learn more about various t-tests: the one for a population mean with an unknown population standard deviation, those for the difference between the means of two populations (with either equal or unequal population standard deviations), as well as the t-test for paired samples.

chi-square critical values (χ²)

Use the χ² (chi-square) option when performing a test in which the test statistic follows the χ²-distribution.

You need to determine the number of degrees of freedom of the χ²-distribution of your test statistic – below, we list them for the most commonly used χ²-tests.

Here we give the formulae for chi-square critical values; Q_{χ²,d} is the quantile function of the χ²-distribution with d degrees of freedom:

Left-tailed χ² critical value: Q_{χ²,d}(α)

Right-tailed χ² critical value: Q_{χ²,d}(1 − α)

Two-tailed χ² critical values: Q_{χ²,d}(α/2) and Q_{χ²,d}(1 − α/2)
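A sketch with SciPy, where `chi2.ppf(p, d)` serves as Q_{χ²,d}(p). Note that, unlike the symmetric Z and t cases, the two χ² cut-offs are not mirror images of each other:

```python
from scipy.stats import chi2

alpha, d = 0.05, 9
left_tail  = chi2.ppf(alpha, d)          # Q(alpha):        about  3.325
right_tail = chi2.ppf(1 - alpha, d)      # Q(1 - alpha):    about 16.919
two_lower  = chi2.ppf(alpha / 2, d)      # Q(alpha/2):      about  2.700
two_upper  = chi2.ppf(1 - alpha / 2, d)  # Q(1 - alpha/2):  about 19.023
```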

Several different tests lead to a χ²-score:

Goodness-of-fit test : does the empirical distribution agree with the expected distribution?

This test is right-tailed. Its test statistic follows the χ²-distribution with k − 1 degrees of freedom, where k is the number of classes into which the sample is divided.

Independence test : is there a statistically significant relationship between two variables?

This test is also right-tailed, and its test statistic is computed from the contingency table. There are (r − 1)(c − 1) degrees of freedom, where r is the number of rows and c is the number of columns in the contingency table.

Test for the variance of normally distributed data : does this variance have some pre-determined value?

This test can be one- or two-tailed! Its test statistic has the χ²-distribution with n − 1 degrees of freedom, where n is the sample size.

F critical values

Finally, choose F (Fisher-Snedecor) if your test statistic follows the F-distribution. This distribution has a pair of degrees of freedom.

Let us see how those degrees of freedom arise. Assume that you have two independent random variables, X and Y, that follow χ²-distributions with d₁ and d₂ degrees of freedom, respectively. If you now consider the ratio (X/d₁) : (Y/d₂), it turns out that it follows the F-distribution with (d₁, d₂) degrees of freedom. That's the reason we call d₁ and d₂ the numerator and denominator degrees of freedom, respectively.

In the formulae below, Q_{F,d₁,d₂} stands for the quantile function of the F-distribution with (d₁, d₂) degrees of freedom:

Left-tailed F critical value: Q_{F,d₁,d₂}(α)

Right-tailed F critical value: Q_{F,d₁,d₂}(1 − α)

Two-tailed F critical values: Q_{F,d₁,d₂}(α/2) and Q_{F,d₁,d₂}(1 − α/2)
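A sketch with SciPy, where `f.ppf(p, d1, d2)` serves as Q_{F,d₁,d₂}(p). It also illustrates a handy identity (useful when your printed F-table only lists upper-tail values): the left-tail critical value equals the reciprocal of the right-tail critical value with the degrees of freedom swapped:

```python
from scipy.stats import f

alpha, d1, d2 = 0.05, 5, 10
f_right = f.ppf(1 - alpha, d1, d2)  # Q_{F,d1,d2}(1 - alpha): about 3.326
f_left  = f.ppf(alpha, d1, d2)      # Q_{F,d1,d2}(alpha)
# Reciprocal identity: F_alpha(d1, d2) = 1 / F_{1-alpha}(d2, d1)
recip = 1 / f.ppf(1 - alpha, d2, d1)
```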

Here we list the most important tests that produce F-scores; each of them is right-tailed.

ANOVA: tests the equality of means in three or more groups that come from normally distributed populations with equal variances. There are (k − 1, n − k) degrees of freedom, where k is the number of groups and n is the total sample size (across every group).

Overall significance in regression analysis. The test statistic has (k − 1, n − k) degrees of freedom, where n is the sample size and k is the number of variables (including the intercept).

Comparison of two nested regression models. The test statistic follows the F-distribution with (k₂ − k₁, n − k₂) degrees of freedom, where k₁ and k₂ are the numbers of variables in the smaller and bigger models, respectively, and n is the sample size.

The equality of variances in two normally distributed populations. There are (n − 1, m − 1) degrees of freedom, where n and m are the respective sample sizes.

Behind the scenes of the critical value calculator

I'm Anna, the mastermind behind the critical value calculator, and I hold a PhD in mathematics from Jagiellonian University.

The idea for creating the tool originated from my experiences in teaching and research. Recognizing the need for a tool that simplifies the critical value determination process across various statistical distributions, I built a user-friendly calculator accessible to both students and professionals. After publishing the tool, I soon found myself using the calculator in my research and as a teaching aid.

Trust in this calculator is paramount to me. Each tool undergoes a rigorous review process , with peer-reviewed insights from experts and meticulous proofreading by native speakers. This commitment to accuracy and reliability ensures that users can be confident in the content. Please check the Editorial Policies page for more details on our standards.

What is a Z critical value?

A Z critical value is the value that defines the critical region in hypothesis testing when the test statistic follows the standard normal distribution. If the value of the test statistic falls into the critical region, you should reject the null hypothesis and accept the alternative hypothesis.

How do I calculate Z critical value?

To find a Z critical value for a given significance level α:

Check if you are performing a one- or two-tailed test.

For a one-tailed test:

Left-tailed: the critical value is the α-th quantile of the standard normal distribution N(0,1).

Right-tailed: the critical value is the (1 − α)-th quantile.

Two-tailed test: the critical values equal ± the (1 − α/2)-th quantile of N(0,1).

No quantile tables? Use CDF tables! (The quantile function is the inverse of the CDF.)

Verify your answer with an online critical value calculator.
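If all you have is a CDF (or a CDF table), you can recover the quantile by searching for the probability you need. A minimal sketch using bisection against SciPy's `norm.cdf`; the helper name `quantile_from_cdf` is just an illustrative choice:

```python
from scipy.stats import norm

def quantile_from_cdf(p, lo=-10.0, hi=10.0, tol=1e-8):
    """Invert the N(0,1) CDF by bisection: the programmatic analogue
    of scanning a CDF table for the area closest to p."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if norm.cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z_right = quantile_from_cdf(1 - 0.05)  # right-tailed critical value, alpha = 0.05
```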

Is a t critical value the same as Z critical value?

In theory, no. In practice, very often, yes. The t-Student distribution is similar to the standard normal distribution, but it is not the same. However, if the number of degrees of freedom (which is, roughly speaking, the size of your sample) is large enough (>30), then the two distributions are practically indistinguishable, and so the t critical value has practically the same value as the Z critical value.

What is the Z critical value for 95% confidence?

The Z critical value for a 95% confidence interval is:

  • 1.96 for a two-tailed test;
  • 1.645 for a right-tailed test; and
  • -1.645 for a left-tailed test.


Critical Value Approach in Hypothesis Testing


After calculating the test statistic using the sample data, you compare it to the critical value(s) corresponding to the chosen significance level ( α ).


Finding the Critical Value

As you can see, the specific formula to find critical values depends on the distribution and the parameters associated with the problem at hand.


Critical Value

Critical value is a cut-off value that marks the start of a region where the test statistic, obtained in hypothesis testing, is unlikely to fall. In hypothesis testing, the critical value is compared with the obtained test statistic to determine whether the null hypothesis has to be rejected or not.

Graphically, the critical value splits the graph into the acceptance region and the rejection region for hypothesis testing. It helps to check the statistical significance of a test statistic. In this article, we will learn more about the critical value, its formula, types, and how to calculate its value.


What is Critical Value?

A critical value can be calculated for different types of hypothesis tests. The critical value of a particular test can be interpreted from the distribution of the test statistic and the significance level. A one-tailed hypothesis test will have one critical value while a two-tailed test will have two critical values.

Critical Value Definition

Critical value can be defined as a value that is compared to a test statistic in hypothesis testing to determine whether the null hypothesis is to be rejected or not. If the value of the test statistic is less extreme than the critical value, then the null hypothesis cannot be rejected. However, if the test statistic is more extreme than the critical value, the null hypothesis is rejected and the alternative hypothesis is accepted. In other words, the critical value divides the distribution graph into the acceptance and the rejection region. If the value of the test statistic falls in the rejection region, then the null hypothesis is rejected; otherwise, it cannot be rejected.

Critical Value Formula

Depending upon the type of distribution the test statistic belongs to, there are different formulas to compute the critical value. The confidence interval or the significance level can be used to determine a critical value. Given below are the different critical value formulas.

Critical Value Confidence Interval

The critical value for a one-tailed or two-tailed test can be computed using the confidence level. Suppose a confidence level of 95% has been specified for conducting a hypothesis test. The critical value can be determined as follows:

  • Step 1: Subtract the confidence level from 100%. 100% - 95% = 5%.
  • Step 2: Convert this value to a decimal to get \(\alpha\). Thus, \(\alpha\) = 5% = 0.05.
  • Step 3: If it is a one-tailed test then the alpha level will be the same value in step 2. However, if it is a two-tailed test, the alpha level will be divided by 2.
  • Step 4: Depending on the type of test conducted the critical value can be looked up from the corresponding distribution table using the alpha value.

The process used in step 4 will be elaborated in the upcoming sections.
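Steps 1-3 are plain arithmetic; a minimal sketch (the helper name is hypothetical):

```python
def alpha_for_lookup(confidence_pct, two_tailed):
    """Steps 1-3: turn a confidence level (in percent) into the alpha
    level used for the table lookup in step 4."""
    alpha = (100.0 - confidence_pct) / 100.0   # steps 1-2: 100% - 95% -> 0.05
    return alpha / 2 if two_tailed else alpha  # step 3: halve it if two-tailed

alpha_one = alpha_for_lookup(95, two_tailed=False)  # 0.05
alpha_two = alpha_for_lookup(95, two_tailed=True)   # 0.025
```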

T Critical Value

A t-test is used when the population standard deviation is not known and the sample size is less than 30. A t-test is conducted when the population data follows a Student t distribution. The t critical value can be calculated as follows:

  • Determine the alpha level.
  • Subtract 1 from the sample size. This gives the degrees of freedom (df).
  • If the hypothesis test is one-tailed then use the one-tailed t distribution table. Otherwise, use the two-tailed t distribution table for a two-tailed test.
  • Match the corresponding df value (left side) and the alpha value (top row) of the table. Find the intersection of this row and column to give the t critical value.

Test Statistic for one sample t test: t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\). \(\overline{x}\) is the sample mean, \(\mu\) is the population mean, s is the sample standard deviation and n is the size of the sample.

Test Statistic for two samples t test: \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}\).

Decision Criteria:

  • Reject the null hypothesis if test statistic > t critical value (right-tailed hypothesis test).
  • Reject the null hypothesis if test statistic < t critical value (left-tailed hypothesis test).
  • Reject the null hypothesis if the test statistic does not lie in the acceptance region (two-tailed hypothesis test).


This decision criterion is used for all tests. Only the test statistic and critical value change.
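The decision criteria above can be captured in a few lines; a sketch (the function name is illustrative):

```python
def reject_null(test_stat, critical_value, tail):
    """Decide whether to reject H0 given the critical value.
    tail is 'right', 'left', or 'two' (pass a positive critical value for 'two')."""
    if tail == "right":
        return test_stat > critical_value
    if tail == "left":
        return test_stat < critical_value
    return abs(test_stat) > critical_value  # two-tailed: outside ±critical_value

in_rejection = reject_null(2.5, 1.753, "right")   # True: in the rejection region
not_extreme  = reject_null(-1.2, -1.753, "left")  # False: not extreme enough
```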

Z Critical Value

A z test is conducted on a normal distribution when the population standard deviation is known and the sample size is greater than or equal to 30. The z critical value can be calculated as follows:

  • Find the alpha level.
  • For a two-tailed test, subtract half of the alpha level from 1. For a one-tailed test, subtract the alpha level from 0.5.
  • Look up the area from the z distribution table to obtain the z critical value. For a left-tailed test, a negative sign needs to be added to the critical value at the end of the calculation.

Test statistic for one sample z test: z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\). \(\sigma\) is the population standard deviation.

Test statistic for two samples z test: z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).

F Critical Value

The F test is largely used to compare the variances of two samples. The test statistic so obtained is also used for regression analysis. The f critical value is given as follows:

  • Subtract 1 from the size of the first sample. This gives the first degrees of freedom. Say, x.
  • Similarly, subtract 1 from the second sample size to get the second df. Say, y.
  • Using the f distribution table, the intersection of the x column and y row will give the f critical value.

Test Statistic for large samples: f = \(\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}\), where \(\sigma_{1}^{2}\) is the variance of the first sample and \(\sigma_{2}^{2}\) is the variance of the second sample.

Test Statistic for small samples: f = \(\frac{s_{1}^{2}}{s_{2}^{2}}\), where \(s_{1}^{2}\) is the variance of the first sample and \(s_{2}^{2}\) is the variance of the second sample.

Chi-Square Critical Value

The chi-square test is used to check if the sample data matches the population data. It can also be used to compare two variables to see if they are related. The chi-square critical value is given as follows:

  • Identify the alpha level.
  • Subtract 1 from the sample size to determine the degrees of freedom (df).
  • Using the chi-square distribution table, the intersection of the row of the df and the column of the alpha value yields the chi-square critical value.

Test statistic for chi-squared test statistic: \(\chi ^{2} = \sum \frac{(O_{i}-E_{i})^{2}}{E_{i}}\).

Critical Value Calculation

Suppose a right-tailed z test is being conducted. The critical value needs to be calculated for a 0.0079 alpha level. Then the steps are as follows:

  • Subtract the alpha level from 0.5. Thus, 0.5 - 0.0079 = 0.4921
  • Using the z distribution table find the area closest to 0.4921. The closest area is 0.4922. As this value is at the intersection of 2.4 and 0.02 thus, the z critical value = 2.42.
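The same steps can be checked against the exact quantile function. A sketch with SciPy, which also shows why the table answer lands slightly above the exact value:

```python
from scipy.stats import norm

alpha = 0.0079
z_exact = norm.ppf(1 - alpha)  # exact quantile, roughly 2.414
# A printed z-table lists areas to four decimal places, so the lookup
# snaps to the nearest tabulated entry (0.4922, at z = 2.42), which is
# why the worked example reports 2.42.
```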


Related Articles:

  • Probability and Statistics
  • Data Handling

Important Notes on Critical Value

  • Critical value can be defined as a value that is useful in checking whether the null hypothesis can be rejected or not by comparing it with the test statistic.
  • It is the point that divides the distribution graph into the acceptance and the rejection region.
  • There are 4 types of critical values: z, t, chi-square, and f.

Examples on Critical Value

Example 1: Find the critical value for a left tailed z test where \(\alpha\) = 0.012.

Solution: First subtract \(\alpha\) from 0.5. Thus, 0.5 - 0.012 = 0.488.

Using the z distribution table, z = 2.26.

However, as this is a left-tailed z test thus, z = -2.26

Answer: Critical value = -2.26

Example 2: Find the critical value for a two-tailed f test conducted on the following samples at a \(\alpha\) = 0.025

Variance = 110, Sample size = 41

Variance = 70, Sample size = 21

Solution: \(n_{1}\) = 41, \(n_{2}\) = 21,

\(n_{1}\) - 1= 40, \(n_{2}\) - 1 = 20,

Sample 1 df = 40, Sample 2 df = 20

Using the F distribution table for \(\alpha\) = 0.025, the value at the intersection of the 40th column and 20th row is

F(40, 20) = 2.287

Answer: Critical Value = 2.287

Example 3: Suppose a one-tailed t-test is being conducted on data with a sample size of 8 at \(\alpha\) = 0.05. Then find the critical value.

Solution: n = 8

df = 8 - 1 = 7

Using the one tailed t distribution table t(7, 0.05) = 1.895.

Answer: Critical Value = 1.895
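All three examples can be verified with SciPy's quantile functions; a sketch (small differences from the answers above come from table rounding):

```python
from scipy.stats import norm, f, t

z1 = norm.ppf(0.012)           # Example 1: about -2.257 (table rounds to -2.26)
f2 = f.ppf(1 - 0.025, 40, 20)  # Example 2: about 2.287
t3 = t.ppf(1 - 0.05, 7)        # Example 3: about 1.895
```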


FAQs on Critical Value

What is the Critical Value in Statistics?

Critical value in statistics is a cut-off value that is compared with a test statistic in hypothesis testing to check whether the null hypothesis should be rejected or not.

What are the Different Types of Critical Value?

There are 4 types of critical values depending upon the type of distributions they are obtained from. These distributions are given as follows:

  • Normal distribution (z critical value).
  • Student t distribution (t).
  • Chi-squared distribution (chi-squared).
  • F distribution (f).

What is the Critical Value Formula for an F test?

To find the critical value for an f test the steps are as follows:

  • Determine the degrees of freedom for both samples by subtracting 1 from each sample size.
  • Find the corresponding value from a one-tailed or two-tailed f distribution at the given alpha level.
  • This will give the critical value.

What is the T Critical Value?

The t critical value is obtained when the population follows a t distribution. The steps to find the t critical value are as follows:

  • Subtract 1 from the sample size to get the df.
  • Use the t distribution table for the alpha value to get the required critical value.

How to Find the Critical Value Using a Confidence Interval for a Two-Tailed Z Test?

The steps to find the critical value using a confidence interval are as follows:

  • Subtract the confidence level from 100% and convert the result into a decimal value to get the alpha level.
  • Divide the alpha level by 2 and subtract the result from 1.
  • Find the z value for the corresponding area using the normal distribution table to get the critical value.

Can a Critical Value be Negative?

If a left-tailed test is being conducted then the critical value will be negative. This is because the critical value will be to the left of the mean thus, making it negative.

How to Reject Null Hypothesis Based on Critical Value?

The rejection criteria for the null hypothesis are given as follows:

  • Right-tailed test: Test statistic > critical value.
  • Left-tailed test: Test statistic < critical value.
  • Two-tailed test: Reject if the test statistic does not lie in the acceptance region.

S.3.1 Hypothesis Testing (Critical Value Approach)

The critical value approach involves determining "likely" or "unlikely" by determining whether or not the observed test statistic is more extreme than would be expected if the null hypothesis were true. That is, it entails comparing the observed test statistic to some cutoff value, called the " critical value ." If the test statistic is more extreme than the critical value, then the null hypothesis is rejected in favor of the alternative hypothesis. If the test statistic is not as extreme as the critical value, then the null hypothesis is not rejected.

Specifically, the four steps involved in using the critical value approach to conducting any hypothesis test are:

  • Specify the null and alternative hypotheses.
  • Using the sample data and assuming the null hypothesis is true, calculate the value of the test statistic. To conduct the hypothesis test for the population mean μ , we use the t -statistic \(t^*=\frac{\bar{x}-\mu}{s/\sqrt{n}}\) which follows a t -distribution with n - 1 degrees of freedom.
  • Determine the critical value by finding the value of the known distribution of the test statistic such that the probability of making a Type I error — which is denoted \(\alpha\) (greek letter "alpha") and is called the " significance level of the test " — is small (typically 0.01, 0.05, or 0.10).
  • Compare the test statistic to the critical value. If the test statistic is more extreme in the direction of the alternative than the critical value, reject the null hypothesis in favor of the alternative hypothesis. If the test statistic is less extreme than the critical value, do not reject the null hypothesis.

Example S.3.1.1

In our example concerning the mean grade point average, suppose we take a random sample of n = 15 students majoring in mathematics. Since n = 15, our test statistic t * has n - 1 = 14 degrees of freedom. Also, suppose we set our significance level α at 0.05 so that we have only a 5% chance of making a Type I error.

Right-Tailed

The critical value for conducting the right-tailed test H₀: μ = 3 versus Hₐ: μ > 3 is the t-value, denoted t(\(\alpha\), n - 1), such that the probability to the right of it is \(\alpha\). It can be shown using either statistical software or a t-table that the critical value t(0.05, 14) is 1.7613. That is, we would reject the null hypothesis H₀: μ = 3 in favor of the alternative hypothesis Hₐ: μ > 3 if the test statistic t* is greater than 1.7613. Visually, the rejection region is shaded red in the graph.

t distribution graph for a t value of 1.76131

Left-Tailed

The critical value for conducting the left-tailed test H₀: μ = 3 versus Hₐ: μ < 3 is the t-value, denoted -t(\(\alpha\), n - 1), such that the probability to the left of it is \(\alpha\). It can be shown using either statistical software or a t-table that the critical value -t(0.05, 14) is -1.7613. That is, we would reject the null hypothesis H₀: μ = 3 in favor of the alternative hypothesis Hₐ: μ < 3 if the test statistic t* is less than -1.7613. Visually, the rejection region is shaded red in the graph.

t-distribution graph for a t value of -1.76131

Two-Tailed

There are two critical values for the two-tailed test H₀: μ = 3 versus Hₐ: μ ≠ 3: one for the left tail, denoted -t(\(\alpha\)/2, n - 1), and one for the right tail, denoted t(\(\alpha\)/2, n - 1). The value -t(\(\alpha\)/2, n - 1) is the t-value such that the probability to the left of it is \(\alpha\)/2, and the value t(\(\alpha\)/2, n - 1) is the t-value such that the probability to the right of it is \(\alpha\)/2. It can be shown using either statistical software or a t-table that the critical value -t(0.025, 14) is -2.1448 and the critical value t(0.025, 14) is 2.1448. That is, we would reject the null hypothesis H₀: μ = 3 in favor of the alternative hypothesis Hₐ: μ ≠ 3 if the test statistic t* is less than -2.1448 or greater than 2.1448. Visually, the rejection region is shaded red in the graph.

t distribution graph for a two tailed test of 0.05 level of significance
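The three critical values from this example can be reproduced with SciPy's `t.ppf`; a sketch:

```python
from scipy.stats import t

df = 14  # n - 1 with n = 15
t_right = t.ppf(1 - 0.05, df)      # right-tailed:  about  1.7613
t_left  = t.ppf(0.05, df)          # left-tailed:   about -1.7613
t_two   = t.ppf(1 - 0.05 / 2, df)  # two-tailed:    about ±2.1448
```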

Hypothesis Testing Calculator


The first step in hypothesis testing is to calculate the test statistic. The formula for the test statistic depends on whether the population standard deviation (σ) is known or unknown. If σ is known, our hypothesis test is known as a z test and we use the z distribution. If σ is unknown, our hypothesis test is known as a t test and we use the t distribution. Use of the t distribution relies on the degrees of freedom, which is equal to the sample size minus one. Furthermore, if the population standard deviation σ is unknown, the sample standard deviation s is used instead. To switch from σ known to σ unknown, click on $\boxed{\sigma}$ and select $\boxed{s}$ in the Hypothesis Testing Calculator.

$\sigma$ Known: $ z = \dfrac{\bar{x}-\mu_0}{\sigma/\sqrt{n}} $

$\sigma$ Unknown: $ t = \dfrac{\bar{x}-\mu_0}{s/\sqrt{n}} $
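The two formulas above can be sketched in Python. The functions mirror the formulas exactly; the sample numbers at the bottom are hypothetical, chosen only for illustration:

```python
import math

def z_statistic(x_bar, mu_0, sigma, n):
    """z test statistic when the population standard deviation sigma is known."""
    return (x_bar - mu_0) / (sigma / math.sqrt(n))

def t_statistic(x_bar, mu_0, s, n):
    """t test statistic when sigma is unknown; s is the sample standard deviation.
    The matching t distribution has n - 1 degrees of freedom."""
    return (x_bar - mu_0) / (s / math.sqrt(n))

# Hypothetical sample: x_bar = 26, testing mu_0 = 25, sigma = 4, n = 64
print(z_statistic(26, 25, 4, 64))  # 2.0
```

The only difference between the two cases is which standard deviation goes in the denominator.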

Next, the test statistic is used to conduct the test using either the p-value approach or critical value approach. The particular steps taken in each approach largely depend on the form of the hypothesis test: lower tail, upper tail or two-tailed. The form can easily be identified by looking at the alternative hypothesis (H a ). If there is a less-than sign in the alternative hypothesis, it is a lower tail test; a greater-than sign indicates an upper tail test; and a not-equal sign indicates a two-tailed test. To switch from a lower tail test to an upper tail or two-tailed test, click on $\boxed{\geq}$ and select $\boxed{\leq}$ or $\boxed{=}$, respectively.

Lower Tail Test: $H_0 \colon \mu \geq \mu_0$, $H_a \colon \mu < \mu_0$
Upper Tail Test: $H_0 \colon \mu \leq \mu_0$, $H_a \colon \mu > \mu_0$
Two-Tailed Test: $H_0 \colon \mu = \mu_0$, $H_a \colon \mu \neq \mu_0$

In the p-value approach, the test statistic is used to calculate a p-value. If the test is a lower tail test, the p-value is the probability of getting a value for the test statistic at least as small as the value from the sample. If the test is an upper tail test, the p-value is the probability of getting a value for the test statistic at least as large as the value from the sample. In a two-tailed test, the p-value is the probability of getting a value for the test statistic at least as unlikely as the value from the sample.

To test the hypothesis in the p-value approach, compare the p-value to the level of significance. If the p-value is less than or equal to the level of significance, reject the null hypothesis. If the p-value is greater than the level of significance, do not reject the null hypothesis. This method remains unchanged regardless of whether it's a lower tail, upper tail or two-tailed test. To change the level of significance, click on $\boxed{.05}$. Note that if the test statistic is given, you can calculate the p-value from the test statistic by clicking on the switch symbol twice.
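For a z test, the three p-value rules can be sketched with Python's standard library (`statistics.NormalDist` provides the standard normal CDF); the test statistic value in the example is hypothetical:

```python
from statistics import NormalDist

def p_value(z, tail):
    """p-value for a z test statistic; tail is 'lower', 'upper', or 'two'."""
    phi = NormalDist().cdf  # standard normal CDF
    if tail == "lower":
        return phi(z)                 # P(Z <= z): at least as small
    if tail == "upper":
        return 1 - phi(z)             # P(Z >= z): at least as large
    return 2 * (1 - phi(abs(z)))      # two-tailed: at least as unlikely, either direction

# Hypothetical test statistic z = 1.96 in a two-tailed test:
print(round(p_value(1.96, "two"), 3))  # 0.05
```

A p-value at or below the chosen significance level leads to rejecting the null hypothesis.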

In the critical value approach, the level of significance ($\alpha$) is used to calculate the critical value. In a lower tail test, the critical value is the value of the test statistic providing an area of $\alpha$ in the lower tail of the sampling distribution of the test statistic. In an upper tail test, the critical value is the value of the test statistic providing an area of $\alpha$ in the upper tail of the sampling distribution of the test statistic. In a two-tailed test, the critical values are the values of the test statistic providing areas of $\alpha / 2$ in the lower and upper tail of the sampling distribution of the test statistic.

To test the hypothesis in the critical value approach, compare the critical value to the test statistic. Unlike the p-value approach, the method we use to decide whether to reject the null hypothesis depends on the form of the hypothesis test. In a lower tail test, if the test statistic is less than or equal to the critical value, reject the null hypothesis. In an upper tail test, if the test statistic is greater than or equal to the critical value, reject the null hypothesis. In a two-tailed test, if the test statistic is less than or equal to the lower critical value or greater than or equal to the upper critical value, reject the null hypothesis.

Lower Tail Test: If $z \leq -z_\alpha$ (or $t \leq -t_\alpha$), reject $H_0$.
Upper Tail Test: If $z \geq z_\alpha$ (or $t \geq t_\alpha$), reject $H_0$.
Two-Tailed Test: If $z \leq -z_{\alpha/2}$ or $z \geq z_{\alpha/2}$ (similarly for $t$), reject $H_0$.
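The decision rules for the σ-known (z) case can be sketched in Python, with the critical value obtained from the standard library's inverse normal CDF; the example inputs are hypothetical:

```python
from statistics import NormalDist

def reject_h0(z, alpha, tail):
    """Critical value approach for a z test; tail is 'lower', 'upper', or 'two'."""
    inv = NormalDist().inv_cdf  # inverse of the standard normal CDF
    if tail == "lower":
        return z <= inv(alpha)            # z <= -z_alpha
    if tail == "upper":
        return z >= inv(1 - alpha)        # z >= z_alpha
    z_half = inv(1 - alpha / 2)           # z_{alpha/2}
    return z <= -z_half or z >= z_half    # either tail rejects

# Hypothetical: z = 2.5 in an upper tail test at alpha = 0.05
print(reject_h0(2.5, 0.05, "upper"))  # True
```

The t-based rules have the same shape, with $t_\alpha$ taken from a t distribution with n - 1 degrees of freedom instead.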

When conducting a hypothesis test, there is always a chance that you come to the wrong conclusion. There are two types of errors you can make: Type I Error and Type II Error. A Type I Error is committed if you reject the null hypothesis when the null hypothesis is true. Ideally, we'd like to accept the null hypothesis when the null hypothesis is true. A Type II Error is committed if you accept the null hypothesis when the alternative hypothesis is true. Ideally, we'd like to reject the null hypothesis when the alternative hypothesis is true.

Condition

                 $H_0$ True      $H_a$ True
Accept $H_0$     Correct         Type II Error
Reject $H_0$     Type I Error    Correct

Hypothesis testing is closely related to the statistical area of confidence intervals. If the hypothesized value of the population mean is outside of the confidence interval, we can reject the null hypothesis. Confidence intervals can be found using the Confidence Interval Calculator. The calculator on this page does hypothesis tests for one population mean. Sometimes we're interested in hypothesis tests about two population means. These can be solved using the Two Population Calculator. The probability of a Type II Error can be calculated by clicking on the link at the bottom of the page.


How To Find Critical Value In Statistics

10.28.2022 • 13 min read

Sarah Thomas

Subject Matter Expert

Learn how to find critical value, its importance, the different systems, and the steps to follow when calculating it.

In This Article

What Is a Critical Value?

The Role of Critical Values in Hypothesis Tests

Factors That Influence Critical Values

Critical Values for One-Tailed Tests & Two-Tailed Tests

Commonly Used Critical Values

How To Find the Critical Value in Statistics

How To Find a Critical Value in R


In baseball, an ump cries “foul ball” any time a batter hits the ball into foul territory. In statistics, we have something similar to a foul zone. It’s called a rejection region. While foul lines, poles, and the stadium fence mark off the foul territory in baseball, in statistics numbers called critical values mark off rejection regions.

A critical value is a number that defines the rejection region of a hypothesis test. Critical values vary depending on the type of hypothesis test you run and the type of data you are working with.

In a hypothesis test called a two-tailed Z-test with a 95% confidence level, the critical values are 1.96 and -1.96. In this test, if the statistician's results are greater than 1.96 or less than -1.96, we reject the null hypothesis in favor of the alternative hypothesis.


The figure below shows how the critical values mark the boundaries of two rejection regions (shaded in pink). Any test result greater than 1.96 falls into the rejection region in the distribution’s right tail, and any test result below -1.96 falls into the rejection region in the left tail of the distribution.

A two-tailed Z-test with a 95% confidence level

A two-tailed Z-test with a 95% confidence level (or a significance level of ɑ = 0.05) has two critical values 1.96 and -1.96.

Before we dive deeper, let’s do a quick refresher on hypothesis testing. In statistics, a hypothesis test is a statistical test where you test an “alternative” hypothesis against a “null” hypothesis. The null hypothesis represents the default hypothesis or the status quo. It typically represents what the academic community or the general public believes to be true. The alternative hypothesis represents what you suspect could be true in place of the null hypothesis.

For example, I may hypothesize that as times have changed, the average age of first-time mothers in the U.S. has increased and that first-time mothers, on average, are now older than 25. Meanwhile, conventional wisdom or existing research may say that the average age of first-time mothers in the U.S. is 25 years old.

In this example, my hypothesis is the alternative hypothesis, and the conventional wisdom is the null hypothesis.

Alternative Hypothesis $H_a$: Average age of first-time mothers in the U.S. > 25

Null Hypothesis $H_0$: Average age of first-time mothers in the U.S. = 25

In a hypothesis test, the goal is to draw inferences about a population parameter (such as the population mean of first-time mothers in the U.S.) from sample data randomly drawn from the population.

The basic intuition behind hypothesis testing is this. If we assume that the null hypothesis is true, data collected from a random sample of first-time mothers should have a sample average that’s close to 25 years old. We don’t expect the sample to have the same average as the population, but we expect it to be pretty close. If we find this to be the case, we have evidence favoring the null hypothesis. If our sample average is far enough above 25, we have evidence that favors the alternative hypothesis.

A major conundrum in hypothesis testing is deciding what counts as "close to 25" and what counts as "far enough above 25." If you randomly sample a thousand first-time mothers and the sample mean is 26 or 27 years old, should you favor the null hypothesis or the alternative?

To make this determination, you need to do the following:

1. First, you convert your sample statistic into a test statistic.

In our first-time mother example, the sample statistic we have is the average age of the first-time mothers in our sample. Depending on the data we have, we might map this average to a Z-test statistic or a T-test statistic.

A test statistic is just a number that maps a sample statistic to a value on a standardized distribution such as a normal distribution or a T-distribution. By converting our sample statistic to a test statistic, we can easily see how likely or unlikely it is to get our sample statistic under the assumption that the null hypothesis is true.

2. Next, you select a significance level (also known as an alpha (ɑ) level) for your test.

The significance level is a measure of how confident you want to be in your decision to reject the null hypothesis in favor of the alternative. A commonly used significance level in hypothesis testing is 5% (or ɑ=0.05). An alpha-level of 0.05 means that you’ll only reject the null hypothesis if there is less than a 5% chance of wrongly favoring the alternative over the null.

3. Third, you find the critical values that correspond to your test statistic and significance level.

The critical value(s) tell you how small or large your test statistic has to be in order to reject the null hypothesis at your chosen significance level.

4. You check to see if your test statistic falls into the rejection region.

Check the value of the test statistic. Any test statistic that falls above a critical value in the right tail of the distribution is in the rejection region. Any test statistic located below a critical value in the left tail of the distribution is also in the rejection region. If your test statistic falls into the rejection region, you reject the null hypothesis in favor of the alternative hypothesis. If your test statistic does not fall into the rejection region, you fail to reject the null hypothesis.
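The four steps above can be sketched end to end for the first-time-mothers example. Every number below (sample mean, population standard deviation, sample size) is hypothetical, chosen only to make the steps concrete:

```python
import math
from statistics import NormalDist

# Hypothetical data for H0: mu = 25 vs Ha: mu > 25 (upper tail z test)
mu_0, x_bar, sigma, n, alpha = 25, 25.8, 4.0, 100, 0.05

# Step 1: convert the sample mean into a z test statistic
z = (x_bar - mu_0) / (sigma / math.sqrt(n))   # 2.0 for these inputs

# Steps 2-3: the significance level alpha determines the critical value z_alpha
critical = NormalDist().inv_cdf(1 - alpha)    # about 1.645

# Step 4: check whether the test statistic falls in the right-tail rejection region
if z >= critical:
    print("Reject H0 in favor of Ha")
else:
    print("Fail to reject H0")
```

Here z = 2.0 exceeds the critical value of about 1.645, so the sketch rejects the null hypothesis at the 0.05 level.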

Notice that critical values play a crucial role in hypothesis testing. Without knowing what your critical values are, you cannot make the final determination of whether or not to reject the null hypothesis.

Critical values vary with the following traits of a hypothesis test.

What test statistic are you using?

This will depend on the type of research question you have and the type of data you are working with. In a first-year statistics course, you will often conduct hypothesis tests using Z-statistics (these correspond to a standard normal distribution ), T-statistics (these correspond to a T-distribution), or chi-squared test statistics (these correspond to a chi-square distribution).

What significance level have you selected?

This is up to the person conducting the test. A significance level (or alpha level) is the probability of mistakenly rejecting the null hypothesis when it is actually true. By choosing a significance level, you are deciding how careful you want to be in avoiding such a mistake.

You might also hear a hypothesis test being described by a confidence level. Confidence levels are closely related to statistical significance. The confidence level of a test is equal to one minus the significance level or 1-ɑ.

Is it a one-tailed test or a two-tailed test?

Hypothesis tests can be one-tailed or two-tailed, depending on the alternative hypothesis. Null and alternative hypotheses are always mutually exclusive statements, but they can take different forms. If your alternative hypothesis is only concerned with positive effects or the right tail of the distribution, you will likely use a one-tailed upper-tail test.

If your alternative hypothesis is only concerned with negative effects or the left tail of the distribution, you will likely use a one-tailed lower-tail test. Finally, if your alternative hypothesis proposes a deviation in either direction from what the null hypothesis proposes, you’ll use a two-tailed test.

The number of critical values in a hypothesis test depends on whether the test is a one-tailed test or a two-tailed test.

Critical Values for Two-Tailed Tests

In a two-tailed test, we divide the rejection region into two equal parts: one in the right tail of the distribution and one in the left tail of the distribution. Each of these rejection regions will contain an area of the distribution equal to ɑ/2. For example, in a two-tailed test with a significance level of 0.05, each rejection region will contain 0.05/2 = 0.025 = 2.5% of the area under the distribution. Because we split the rejection region, a two-tailed test has two critical values.

Critical Values for One-Tailed Tests

A one-tailed test has one rejection region (either in the right tail or the left tail of the distribution) and one critical value. In a lower tail (or left-tailed) test, the critical value and rejection region will be in the left tail of the distribution. In an upper tail (or right-tailed) test, the critical value and rejection region will be in the right tail of the distribution.

Graph showing two-tailed test

Two-tailed test

Graph showing one-tailed lower tail test

One-tailed lower tail test

Graph showing one-tailed upper tail test

One-tailed upper tail test

The tables below provide a list of critical values that are commonly used in hypothesis testing.

Z-Test Statistics (Using a Normal Distribution)

Confidence Level | Tails        | Alpha (𝛂) | Critical Value(s)
90%              | Two-tailed   | 0.10      | -1.64 and 1.64
90%              | Right-tailed | 0.10      | 1.28
90%              | Left-tailed  | 0.10      | -1.28
95%              | Two-tailed   | 0.05      | -1.96 and 1.96
95%              | Right-tailed | 0.05      | 1.65
95%              | Left-tailed  | 0.05      | -1.65
99%              | Two-tailed   | 0.01      | -2.58 and 2.58
99%              | Right-tailed | 0.01      | 2.33
99%              | Left-tailed  | 0.01      | -2.33
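These table entries can be reproduced with Python's standard library; `NormalDist().inv_cdf` is the inverse of the standard normal CDF:

```python
from statistics import NormalDist

inv = NormalDist().inv_cdf  # inverse standard normal CDF

for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf
    two_tailed = inv(1 - alpha / 2)  # upper critical value; the lower is its negative
    one_tailed = inv(1 - alpha)      # right-tailed value; left-tailed is its negative
    print(f"{conf:.0%}: two-tailed ±{two_tailed:.2f}, one-tailed {one_tailed:.2f}")
```

Note that the exact one-tailed value for 95% confidence is 1.645; tables round it to either 1.64 or 1.65.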

T-Test Statistics (Using a T Distribution)

One-tailed test 𝛂:   0.10    0.05    0.025   0.01    0.005
Two-tailed test 𝛂:   0.20    0.10    0.05    0.02    0.01

Degrees of Freedom (df)

 1   3.078   6.314   12.71   31.82   63.66
 2   1.886   2.920   4.303   6.965   9.925
 3   1.638   2.353   3.182   4.541   5.841
 4   1.533   2.132   2.776   3.747   4.604
 5   1.476   2.015   2.571   3.365   4.032
 6   1.440   1.943   2.447   3.143   3.707
 7   1.415   1.895   2.365   2.998   3.499
 8   1.397   1.860   2.306   2.896   3.355
 9   1.383   1.833   2.262   2.821   3.250
10   1.372   1.812   2.228   2.764   3.169
11   1.363   1.796   2.201   2.718   3.106
12   1.356   1.782   2.179   2.681   3.055
13   1.350   1.771   2.160   2.650   3.012
14   1.345   1.761   2.145   2.624   2.977
15   1.341   1.753   2.131   2.602   2.947
16   1.337   1.746   2.120   2.583   2.921
17   1.333   1.740   2.110   2.567   2.898
18   1.330   1.734   2.101   2.552   2.878
19   1.328   1.729   2.093   2.539   2.861
20   1.325   1.725   2.086   2.528   2.845
21   1.323   1.721   2.080   2.518   2.831
22   1.321   1.717   2.074   2.508   2.819
23   1.319   1.714   2.069   2.500   2.807
24   1.318   1.711   2.064   2.492   2.797
25   1.316   1.708   2.060   2.485   2.787
26   1.315   1.706   2.056   2.479   2.779
27   1.314   1.703   2.052   2.473   2.771
28   1.313   1.701   2.048   2.467   2.763
29   1.311   1.699   2.045   2.462   2.756
30   1.310   1.697   2.042   2.457   2.750
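The t-table entries above can be reproduced without any third-party library by numerically inverting the t distribution's CDF. This is a from-scratch sketch (Simpson integration of the t density plus bisection), not how statistical packages actually compute it:

```python
import math

def t_pdf(x, df):
    # Density of Student's t distribution with df degrees of freedom
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, n=2000):
    # By symmetry, CDF(x) = 0.5 + integral of the density from 0 to x.
    if x == 0:
        return 0.5
    sign = 1 if x > 0 else -1
    b = abs(x)
    h = b / n  # n must be even for Simpson's rule
    total = t_pdf(0.0, df) + t_pdf(b, df)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 0.5 + sign * total * h / 3

def t_critical(alpha, df):
    # One-tailed upper critical value: the x with CDF(x) = 1 - alpha, by bisection
    lo, hi = 0.0, 200.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < 1 - alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# df = 15, one-tailed alpha = 0.05 -> about 1.753, matching the table row for df 15
print(round(t_critical(0.05, 15), 3))
```

For a two-tailed test, pass alpha/2; for a lower tail test, negate the result.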

Finding a Critical Value for a Two-Tailed Z-Test

Suppose you don’t remember what the critical values for a two-sided Z-test are. How would you go about finding them?

To find the critical value, you start with the significance level of your hypothesis test. Your significance level is equal to the total area of the rejection region. For example, with a 0.05 significance level, the entire rejection region will be equal to 5% of the area under the normal distribution.

In a two-tailed Z-test, we split the rejection region equally into two parts. One rejection region is in the distribution's right tail, and the other is in the left tail. Each part contains half of the total area of the rejection region. For a two-tailed Z-test with a significance level of ɑ=0.05, each rejection region will contain ɑ/2 = 0.025 or 2.5% of the distribution. This leaves a central area of 0.95 (or 95%) between the two rejection regions.

Graph showing a confidence interval of 0.95 (or 95%) between the two rejection regions.

To find the critical values, you need to find the corresponding values (or Z-scores ) in the Z-distribution. Make sure the percentage lying to the left of the first critical value is equal to ɑ/2. Also, check that the percentage of the distribution lying to the right of the second critical value is equal to ɑ/2. You can use a Z-table to look up these figures.

Solved Example: Two-Tailed Z-Test

For a two-tailed Z-test with a significance level of ɑ=0.05, we are looking for two critical values such that ɑ/2 (2.5%) of the normal distribution lies to the left of the first critical value and ɑ/2 (2.5%) lies to the right of the second critical value.

Z-tables will either show you probabilities to the LEFT or to the RIGHT of a particular value. We’ll stick to Z-tables showing probabilities to the LEFT.

For the first critical value, if the area to the left of the critical value is 0.025, we use the Z-table to find the number 0.025 in the table (we’ve shown this figure highlighted in an orange box). We then trace that value to the left to find the first two digits of the critical value (-1.9) and then up to the top to find the last digit (0.06). If we put these together, we have the critical value -1.96. Z-tables provide Z-scores that are usually rounded to two decimal places.

z-table

For the second critical value, 2.5% of the distribution will lie to the right, meaning 97.5% of the distribution will lie to the left of the critical value (1 - 0.025 = 0.975). To find this critical value, we look for the number 0.975 in the Z-table (we’ve shown this figure highlighted in a green box). We trace that value to the left to find the first two digits of the critical value (1.9) and then up to the top to find the last digit (0.06). Our second critical value is 1.96.

z-table

Following similar steps, see if you can find the critical values for a Z-test with a significance level of ɑ=0.10. The critical values you find should be equal to -1.64 and 1.64.

Finding a Critical Value for a One-Tailed Z-Test

In a one-tailed test, there is just one rejection region, and the area of the rejection region is equal to the significance level.

For a one-tailed lower tail test, use the z-table to find a critical value where the total area to the left of the critical value is equal to alpha.

For a one-tailed upper tail test, use the z-table to find a critical value where the total area to the left of the critical value is equal to 1- alpha.

Solved Example: One-Tailed Z-Test

Let’s see if we can use the Z-table to find the critical value for a lower tail Z-test with a significance level of 0.01.

Since alpha equals 0.01, we are looking for this number in the Z-table. If you can’t find the exact number, you look for the closest number, which in this case is 0.0099. Once we’ve found this number, we trace the value to the first column to find the first two digits of the critical value and then up to the first row to find the last digit. The critical value is -2.33.

z-table values represent area to the left of the z-score

Now let’s see if we can use the Z-table to find the critical value for an upper tail Z-test with a significance level of 0.10.

Since this is an upper tail test, we need to use the Z-table to look for a critical value corresponding to 0.90 (1-ɑ = 1-0.10 = 0.90). The closest number to 0.90 we can find in the table is 0.89973. We trace this number to the left and then up to the top of the table to find a critical value of 1.28.

z-table showing z-scores

To find a critical value in R, you can use the qnorm() function for a Z-test or the qt() function for a T-test.

Here are some examples of how you could use these functions in your critical value approach.

Z-Critical Values Using R

For a two-tailed Z-test with a 0.05 significance level, you would type:

qnorm(p=0.05/2, lower.tail=FALSE)

This will give you one of your critical values. The second critical value is just the negative value of the first.

For a one-tailed lower tail Z-test with a 0.01 significance level, you would type:

qnorm(p=0.01, lower.tail=TRUE)

For a one-tailed upper tail Z-test with a 0.01 significance level, you would type:

qnorm(p=0.01, lower.tail=FALSE)

T-Critical Values Using R

For a two-tailed T-test with 15 degrees of freedom and a 0.1 significance level, you would type:

qt(p=0.1/2, df=15, lower.tail=FALSE)

For a one-tailed lower tail T-test with 10 degrees of freedom and a 0.05 significance level, you would type:

qt(p=0.05, df=10, lower.tail=TRUE)

For a one-tailed upper tail T-test with 20 degrees of freedom and a 0.01 significance level, you would type:

qt(p=0.01, df=20, lower.tail=FALSE)

Now that you know the ins and outs of critical values, you’re one step closer to conducting hypothesis tests with ease!



What is a critical value?

A critical value is a point on the distribution of the test statistic under the null hypothesis that defines a set of values calling for rejection of the null hypothesis. This set is called the critical (or rejection) region. Usually, one-sided tests have one critical value and two-sided tests have two critical values. The critical values are determined so that the probability that the test statistic falls in the rejection region when the null hypothesis is true equals the significance level (denoted α, or alpha).


Critical values on the standard normal distribution for α = 0.05

Figure A shows that results of a one-tailed Z-test are significant if the value of the test statistic is equal to or greater than 1.64, the critical value in this case. The shaded area represents the probability of a type I error: α = 5% of the area under the curve in this example. Figure B shows that results of a two-tailed Z-test are significant if the absolute value of the test statistic is equal to or greater than 1.96, the critical value in this case. The two shaded areas sum to 5% (α) of the area under the curve.

Examples of calculating critical values

In hypothesis testing, there are two ways to determine whether there is enough evidence from the sample to reject H0 or to fail to reject H0. The most common way is to compare the p-value with a pre-specified value of α, where α is the probability of rejecting H0 when H0 is true. However, an equivalent approach is to compare the calculated value of the test statistic based on your data with the critical value. The following are examples of how to calculate the critical value for a 1-sample t-test and a One-Way ANOVA.

Calculating a critical value for a 1-sample t-test

  • Select Calc > Probability Distributions > t .
  • Select Inverse cumulative probability .
  • In Degrees of freedom , enter 9 (the number of observations minus one).
  • In Input constant , enter 0.95 (one minus one-half alpha).

This gives you an inverse cumulative probability, which equals the critical value, of 1.83311. If the absolute value of the t-statistic is greater than this critical value, then you can reject the null hypothesis, H0, at the 0.10 level of significance.

Calculating a critical value for an analysis of variance (ANOVA)

  • Choose Calc > Probability Distributions > F .
  • In Numerator degrees of freedom , enter 2 (the number of factor levels minus one).
  • In Denominator degrees of freedom , enter 9 (the degrees of freedom for error).
  • In Input constant , enter 0.95 (one minus alpha).

This gives you an inverse cumulative probability (critical value) of 4.25649. If the F-statistic is greater than this critical value, then you can reject the null hypothesis, H0, at the 0.05 level of significance.
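This Minitab result can be cross-checked from scratch in Python. The sketch below numerically integrates the F density with Simpson's rule and inverts the CDF by bisection; it is valid only for numerator degrees of freedom ≥ 2, where the density is finite at zero, and is not how Minitab itself computes the value:

```python
import math

def f_pdf(x, d1, d2):
    # Density of the F distribution with (d1, d2) degrees of freedom; finite at 0 only if d1 >= 2
    c = (math.gamma((d1 + d2) / 2)
         / (math.gamma(d1 / 2) * math.gamma(d2 / 2)) * (d1 / d2) ** (d1 / 2))
    return c * x ** (d1 / 2 - 1) * (1 + d1 * x / d2) ** (-(d1 + d2) / 2)

def f_cdf(x, d1, d2, n=4000):
    # Simpson's rule on [0, x]; n must be even
    h = x / n
    total = f_pdf(0.0, d1, d2) + f_pdf(x, d1, d2)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f_pdf(i * h, d1, d2)
    return total * h / 3

def f_critical(alpha, d1, d2):
    # Upper critical value: the x with CDF(x) = 1 - alpha, found by bisection
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if f_cdf(mid, d1, d2) < 1 - alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 2 and 9 degrees of freedom, alpha = 0.05 -> about 4.2565, matching Minitab's 4.25649
print(round(f_critical(0.05, 2, 9), 4))
```

A similar routine built on the t density reproduces the 1.83311 from the t-test example above.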


Critical Values: Find a Critical Value in Any Tail

Critical Values: Contents

  • What are Critical Values?
  • Critical Value of Z
  • Find a critical value for a confidence level.
  • Common confidence levels and their critical values.
  • Find a Critical Value: Two-Tailed Test.
  • Find a Critical Value: Right-Tailed Test.
  • Find a Critical Value: Left-Tailed Test.
  • Critical Values and Working with Samples
  • Other Types of Critical Values.
  • What does Significance Testing Tell Us?
  • More Critical Value Articles.

1. What are Critical Values?


A critical value is a line on a graph that splits the graph into sections. One or two of the sections is the “rejection region”; if your test value falls into that region, then you reject the null hypothesis.

A one tailed test with the rejection in one tail

Critical values come in all shapes and sizes, but the one you’ll come across first in statistics is the critical value of Z.

Note : A critical number , used in calculus , is not the same thing as a critical value. Critical numbers are used in calculus to find points where a graph changes from increasing to decreasing, or vice-versa.

2. Critical Values of Z

critical values of z

  • Tail region : The area of the tails (the red areas) is 1 minus the central region. In this example, 1 - 0.8 = 0.20, or 20 percent. Tail regions are calculated when you want to know what proportion of values falls below or above a certain figure.

A critical value of z is sometimes written as z_α, where the alpha level, α, is the area in the tail. For example, z_0.10 = 1.28.

When are Critical values of z used?

A critical value of z (Z-score) is used when the sampling distribution is normal, or close to normal. Z-scores are used when the population standard deviation is known or when you have larger sample sizes. While the z-score can also be used to calculate probability for unknown standard deviations and small samples, in real life you’ll probably use the t distribution to calculate these probabilities. That’s because you often don’t know the population variance (which is a requirement for using the z test).

See also: T Critical Value.

Other uses of z-scores

Every statistic has a probability, and every probability calculated for a sample has a margin of error . The critical value of z can also be used to calculate the margin of error.

  • Margin of error = Critical value * Standard deviation of the statistic.
  • Margin of error = Critical value * Standard error of the sample .
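The margin-of-error formula for a sample mean can be sketched as follows; the critical value comes from the standard library's inverse normal CDF, and the sigma and n in the example are hypothetical:

```python
import math
from statistics import NormalDist

def margin_of_error(confidence, sigma, n):
    """Margin of error for a sample mean: critical value * standard error."""
    z_crit = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed critical value
    return z_crit * sigma / math.sqrt(n)

# Hypothetical: 95% confidence, population sigma = 10, sample size n = 100
print(round(margin_of_error(0.95, 10, 100), 2))  # 1.96
```

With a standard error of 10/√100 = 1, the margin of error is just the 95% critical value itself.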

3. Find Critical Values in Any Tail

Need help? Check out our tutoring page!

Looking up a critical value is straightforward as long as you know whether you have a left-tailed test or a right-tailed test (or, potentially, both).

A. Find a critical value for a confidence level

Example question: Find a critical value for a 90% confidence level (Two-Tailed Test).

Step 1: Subtract the confidence level from 100% to find the α level: 100% – 90% = 10%.

Step 2: Convert Step 1 to a decimal: 10% = 0.10.

Step 3: Divide Step 2 by 2 (this is called “α/2”): 0.10 / 2 = 0.05. This is the area in each tail.

Step 4: Subtract Step 3 from 1 (because we want the area in the middle, not the area in the tail): 1 – 0.05 = .95.

Step 5: Look up the area from Step 4 in the z-table. The area is at z = 1.645. This is your critical value for a confidence level of 90%.

Back to Top

B. Common confidence levels and their critical values

You don’t have to perform the above calculations every time. This list of z-critical values and their associated confidence levels was calculated using the above steps:

Confidence Level | Two-Tailed | One-Tailed
90%              | 1.64       | 1.28
95%              | 1.96       | 1.65
99%              | 2.58       | 2.33

C. Find Critical Values: Two-Tailed Test


Example question : Find the critical value for alpha of .05.

Step 1: Subtract alpha from 1.

1 – .05 = .95

Step 2: Divide Step 1 by 2 (because we are looking for a two-tailed test).

.95 / 2 = .475

Step 3: Look at your z-table and locate the answer from Step 2 in the middle section of the z-table. The fastest way to do this is to use the find function of your browser (usually CTRL+F). In this example we’re going to look for .475, so go ahead and press CTRL+F, then type in .475.

Step 4: In this example, you should have found the number .4750. Look to the far left of the row and you’ll see the number 1.9; look to the top of the column and you’ll see .06. Add them together to get 1.96. That’s the critical value!

Tip : The critical value appears twice in the z table because you’re looking for both a left hand and a right hand tail, so don’t forget to add the plus or minus sign! In our example you’d get ±1.96 .

D. Find a Critical Value: Right-Tailed Test

find critical values right tail

Example question : Find a critical value in the z-table for an alpha level of 0.0079.

Step 1: Draw a diagram, like the one above. Shade in the area in the right tail. This area represents alpha, α. A diagram helps you to visualize what area you are looking for (i.e. if you want an area to the right of the mean or the left of the mean).

Step 2: Subtract alpha (α) from 0.5: 0.5 – 0.0079 = 0.4921.

Step 3: Find the result from step 2 in the center part of the z-table . The closest area to 0.4921 is 0.4922 at z = 2.42.


That’s it!

E. Find a Critical Value: Left-Tailed Test

Example question : find the critical value in the z-table for α = .012 (left-tailed test).


Step 1: Draw a diagram, like the one above. Shade in the area in the left tail (because you’re looking for a critical value for a left-tailed test). This area represents alpha, α .

Step 2: Subtract alpha (α) from 0.5: 0.5 – 0.012 = 0.488.

Step 3: Find the result from step 2 in the center part of the z-table. The closest area to 0.488 is 0.4881, at z = 2.26.


Step 4: Add a negative sign to the result from Step 3 (left-tail critical values are always negative): -2.26.
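Both one-tailed lookups above can be checked in code. A minimal sketch using Python's `statistics.NormalDist` (the helper function is ours; the z-table gave 2.42 because it picks the closest printed area, while the exact inverse CDF lands slightly lower):

```python
from statistics import NormalDist

def one_tailed_z(alpha, tail):
    """Critical z for a one-tailed test; tail is 'right' or 'left'."""
    z = NormalDist().inv_cdf(1 - alpha)  # z with area 1 - alpha to its left
    return z if tail == "right" else -z

print(round(one_tailed_z(0.0079, "right"), 2))  # ~2.41 (table's closest entry: 2.42)
print(round(one_tailed_z(0.012, "left"), 2))    # -2.26
```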

4. Critical Values and Working with Samples


Critical values are used in statistics for hypothesis testing . When you work with statistics, you’re working with a small percentage (a sample ) of a population . For example, you might have statistics for voting habits from two percent of democratic voters, or five percent of students and their test results. Because you’re working with a fraction of the population and not the entire population, you can never be one hundred percent certain that your results reflect the actual population’s results. You might be 90 percent certain, or even 99 percent certain, but you can never be 100 percent certain. How accurate are your results? You can tell with hypothesis testing .

5. Types of Critical Values

Various types of critical values are used to calculate significance , including: t-scores from Student’s t-tests , chi-square values, and z-scores from z-tests. In each of these tests, you’ll have a region where you are able to reject the null hypothesis , and a region where you cannot. The lines that separate these regions are your critical values.

For example, with critical values at 1.28 and -1.28, the region between them (often shaded blue in such plots) is where you fail to reject the null hypothesis , and the tail regions beyond them (often shaded red) are where you can reject it. How large these regions actually are (and what test you use) depends on many factors, including your chosen confidence level and your sample size .

Significance Testing Example

Significance testing is used to figure out if your results differ from the null hypothesis . The null hypothesis is just an accepted fact about the population.

For example, your school may make a statistics course mandatory for nursing students because research has shown that patient outcomes improve when nurses have a statistics background. You might think that there’s no difference. Instead of trying to prove that there’s no difference, proper research techniques dictate that you’ll try to disprove the opposite — the null hypothesis, which in this case is that “patient outcomes improve when nurses have a statistics background.” In order to disprove, or reject, the null hypothesis, your research must pass a test of significance .

6. What does Significance Testing Tell Us?

Significance testing is used to calculate the probability that a relationship between two variables (like “taking a statistics class” and “improved patient outcomes”) is just due to chance. It helps to answer the question of whether you could duplicate your test results accurately in further research. By using probability and the normal curve, you can figure out what the chance is that your research is wrong.

Steps in Testing for Statistical Significance

  • State the Alternate Hypothesis .
  • State the Null Hypothesis .
  • Select a probability of error level ( alpha level ).
  • Select and compute the test for statistical significance (i.e. calculate a z-score .)
  • Interpret the results.
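The steps above can be sketched in code as a one-sample z-test. Every number below (the population parameters, sample mean, and sample size) is hypothetical, chosen only for illustration:

```python
from statistics import NormalDist

# Hypothetical setup. Alternate hypothesis H1: population mean > 100.
# Null hypothesis H0: population mean = 100.
pop_mean, pop_sd = 100, 15           # assumed known population parameters
sample_mean, n = 106, 36             # hypothetical sample results
alpha = 0.05                         # chosen probability of error (alpha level)

# Compute the test for statistical significance: a z-score.
z = (sample_mean - pop_mean) / (pop_sd / n ** 0.5)   # = 6 / 2.5 = 2.4

# Interpret the results: compare to the right-tailed critical value.
z_crit = NormalDist().inv_cdf(1 - alpha)             # ~1.645
print(f"z = {z:.2f}, critical value = {z_crit:.3f}, reject H0: {z > z_crit}")
```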


7. More Critical Values Articles

  • Chi square test .
  • How to Find a Critical Chi-Square Value
  • How do I Find the Area Under a Normal Distribution Curve?
  • Hypothesis Testing Examples
  • One tailed Distribution: How to Find the Area.
  • What is z alpha/2?
  • What is a Z Test?
  • One Sample Z Test
  • Z Score to Percentile Calculator and Manual methods


The Data Scientist


Understanding Critical Value vs. P-Value in Hypothesis Testing

In the realm of statistical analysis , critical values and p-values serve as essential tools for hypothesis testing and decision making. These concepts, rooted in the work of statisticians like Ronald Fisher and the Neyman-Pearson approach, play a crucial role in determining statistical significance. Understanding the distinction between critical values and p-values is vital for researchers and data analysts to interpret their findings accurately and avoid misinterpretations that can lead to false positives or false negatives.

This article aims to shed light on the key differences between critical values and p-values in hypothesis testing. It will explore the definition and calculation of critical values, including how to find critical values using tables or calculators. The discussion will also cover p-values, their interpretation, and their relationship to significance levels. Additionally, the article will address common pitfalls in result interpretation and provide guidance on when to use critical values versus p-values in various statistical scenarios, such as t-tests and confidence intervals.


What is a Critical Value?

Definition and concept

A critical value in statistics serves as a crucial cut-off point in hypothesis testing and decision making. It defines the boundary between accepting and rejecting the null hypothesis, playing a vital role in determining statistical significance. The critical value is intrinsically linked to the significance level (α) chosen for the test, which represents the probability of making a Type I error.

Critical values are essential for accurately representing a range of characteristics within a dataset. They help statisticians calculate the margin of error and provide insights into the validity and accuracy of their findings. In hypothesis testing, the critical value is compared to the obtained test statistic to determine whether the null hypothesis should be rejected or not.

How to calculate critical values

Calculating critical values involves several steps and depends on the type of test being conducted. The general formula for finding the critical value is:

Critical probability (p*) = 1 – (Alpha / 2)

Where Alpha = 1 – (confidence level / 100)

For example, using a confidence level of 95%, the alpha value would be:

Alpha value = 1 – (95/100) = 0.05

Then, the critical probability would be:

Critical probability (p*) = 1 – (0.05 / 2) = 0.975
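In code, the calculation above looks like this (a sketch using Python's standard-library `statistics.NormalDist` to turn the critical probability into a z-score):

```python
from statistics import NormalDist

confidence_level = 95                      # percent
alpha = 1 - confidence_level / 100         # 0.05
critical_probability = 1 - alpha / 2       # 0.975
z_star = NormalDist().inv_cdf(critical_probability)

print(round(alpha, 2), round(critical_probability, 3), round(z_star, 2))
# 0.05 0.975 1.96
```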

The critical value can be expressed in two ways:

  • As a Z-score related to cumulative probability
  • As a critical t statistic, obtained from the critical probability together with the degrees of freedom

For larger sample sizes (typically n ≥ 30), the Z-score is used, while for smaller samples or when the population standard deviation is unknown, the t statistic is more appropriate.

Examples in hypothesis testing

Critical values play a crucial role in various types of hypothesis tests. Here are some examples:

  • One-tailed test: For a right-tailed test with H₀: μ = 3 vs. H₁: μ > 3, the critical value would be the t-value such that the probability to the right of it is α. For instance, with α = 0.05 and 14 degrees of freedom, the critical value t₀.₀₅,₁₄ is 1.7613 . The null hypothesis would be rejected if the test statistic t is greater than 1.7613.
  • Two-tailed test: For a two-tailed test with H₀: μ = 3 vs. H₁: μ ≠ 3, there are two critical values – one for each tail. Using α = 0.05 and 14 degrees of freedom, the critical values would be -2.1448 and 2.1448 . The null hypothesis would be rejected if the test statistic t is less than -2.1448 or greater than 2.1448.
  • Z-test example: In a tire manufacturing plant producing 15.2 tires per hour with a variance of 2.5, new machines were tested. The critical region for a one-tailed test with α = 0.10 was z > 1.282. The calculated z-statistic of 3.51 exceeded this critical value , leading to the rejection of the null hypothesis.
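The t-values in the first two bullets can be verified with SciPy's inverse CDF for the t-distribution, assuming SciPy is available:

```python
from scipy.stats import t

df, alpha = 14, 0.05

# Right-tailed: all of alpha in the upper tail.
t_right = t.ppf(1 - alpha, df)
# Two-tailed: split alpha across both tails; use +/- the result.
t_two = t.ppf(1 - alpha / 2, df)

print(round(t_right, 4))  # 1.7613
print(round(t_two, 4))    # 2.1448
```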

Understanding critical values is essential for making informed decisions in hypothesis testing and statistical analysis. They provide a standardized approach to evaluating the significance of research findings and help researchers avoid misinterpretations that could lead to false positives or false negatives.

Understanding P-Values


Definition of p-value

In statistical hypothesis testing, a p-value is a crucial concept that helps researchers quantify the strength of evidence against the null hypothesis. The p-value is defined as the probability of obtaining test results at least as extreme as the observed results, assuming that the null hypothesis is true. This definition highlights the relationship between the p-value and the null hypothesis, which is fundamental to understanding its interpretation.

The p-value serves as an alternative to rejection points, providing the smallest level of significance at which the null hypothesis would be rejected. It is important to note that the p-value is not the probability that the null hypothesis is true or that the alternative hypothesis is false. Rather, it indicates how compatible the observed data are with a specified statistical model, typically the null hypothesis.

Interpreting p-values

Interpreting p-values correctly is essential for making sound statistical inferences. A smaller p-value suggests stronger evidence against the null hypothesis and in favor of the alternative hypothesis. Conventionally, a p-value of 0.05 or lower is considered statistically significant, leading to the rejection of the null hypothesis. However, it is crucial to understand that this threshold is arbitrary and should not be treated as a definitive cutoff point for decision-making.

When interpreting p-values, it is important to consider the following:

  • The p-value does not indicate the size or importance of the observed effect. A small p-value can be observed for an effect that is not meaningful or important, especially with large sample sizes.
  • The p-value is not the probability that the observed effects were produced by random chance alone. It is calculated under the assumption that the null hypothesis is true.
  • A p-value greater than 0.05 does not necessarily mean that the null hypothesis is true or that there is no effect. It simply indicates that the evidence against the null hypothesis is not strong enough to reject it at the chosen significance level.
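As a concrete sketch of the definition, the two-tailed p-value for a z-statistic is the probability of a result at least as extreme in either tail. A minimal example using Python's `statistics.NormalDist` (the helper function is ours):

```python
from statistics import NormalDist

def two_tailed_p(z):
    """P-value for a two-tailed z-test: probability of a result at least as extreme."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

print(round(two_tailed_p(1.96), 3))  # 0.05  -- right at the conventional threshold
print(round(two_tailed_p(0.50), 3))  # 0.617 -- weak evidence against H0
```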

Common misconceptions about p-values

Despite their widespread use, p-values are often misinterpreted in scientific research and education. Some common misconceptions include:

  • Interpreting the p-value as the probability that the null hypothesis is true or the probability that the alternative hypothesis is false. This interpretation is incorrect, as p-values do not provide direct probabilities for hypotheses.
  • Believing that a p-value less than 0.05 proves that a finding is true or that the probability of making a mistake is less than 5%. In reality, the p-value is a statement about the relation of the data to the null hypothesis, not a measure of truth or error rates.
  • Treating p-values on opposite sides of the 0.05 threshold as qualitatively different. This dichotomous thinking can lead to overemphasis on statistical significance and neglect of practical significance.
  • Using p-values to determine the size or importance of an effect. P-values do not provide information about effect sizes or clinical relevance.

To address these misconceptions, it is important to consider p-values as continuous measures of evidence rather than binary indicators of significance. Additionally, researchers should focus on reporting effect sizes, confidence intervals, and practical significance alongside p-values to provide a more comprehensive understanding of their findings.

Key Differences Between Critical Values and P-Values


Approach to hypothesis testing

Critical values and p-values represent two distinct approaches to hypothesis testing, each offering unique insights into the decision-making process. The critical value approach, rooted in traditional hypothesis testing, establishes a clear boundary for accepting or rejecting the null hypothesis. This method is closely tied to significance levels and provides a straightforward framework for statistical inference.

In contrast, p-values offer a continuous measure of evidence against the null hypothesis. This approach allows for a more nuanced evaluation of the data’s compatibility with the null hypothesis. While both methods aim to support or reject the null hypothesis, they differ in how they lead to that decision.

Decision-making process

The decision-making process for critical values and p-values follows different paths. Critical values provide a binary framework, simplifying the decision to either reject or fail to reject the null hypothesis. This approach streamlines the process by classifying results as significant or not significant based on predetermined thresholds.

For instance, in a hypothesis test with a significance level (α) of 0.05 , the critical value serves as the dividing line between the rejection and non-rejection regions. If the test statistic exceeds the critical value, the null hypothesis is rejected.

P-values, on the other hand, offer a more flexible approach to decision-making. Instead of a simple yes or no answer, p-values present a range of evidence levels against the null hypothesis. This continuous scale allows researchers to interpret the strength of evidence and choose an appropriate significance level for their specific context.

Interpretation of results

The interpretation of results differs significantly between critical values and p-values. Critical values provide a clear-cut interpretation: if the test statistic falls within the rejection region defined by the critical value, the null hypothesis is rejected. This approach offers a straightforward way to communicate results, especially when a binary decision is required.

P-values, however, offer a more nuanced interpretation of results. A smaller p-value indicates stronger evidence against the null hypothesis. For example, a p-value of 0.03 suggests more compelling evidence against the null hypothesis than a p-value of 0.07. This continuous scale allows for a more detailed assessment of the data’s compatibility with the null hypothesis.

It’s important to note that while a p-value of 0.05 is often used as a threshold for statistical significance, this is an arbitrary cutoff. The interpretation of p-values should consider the context of the study and the potential for practical significance.

Both approaches have their strengths and limitations. Critical values simplify decision-making but may not accurately reflect the increasing precision of estimates as sample sizes grow. P-values provide a more comprehensive understanding of outcomes, especially when combined with effect size measures. However, they are frequently misunderstood and can be affected by sample size in large datasets, potentially leading to misleading significance.

In conclusion, while critical values and p-values are both essential tools in hypothesis testing, they offer different perspectives on statistical inference. Critical values provide a clear, binary decision framework, while p-values allow for a more nuanced evaluation of evidence against the null hypothesis. Understanding these differences is crucial for researchers to choose the most appropriate method for their specific research questions and to interpret results accurately.


When to Use Critical Values vs. P-Values

Advantages of critical value approach

The critical value approach offers several advantages in hypothesis testing. It provides a simple, binary framework for decision-making, allowing researchers to either reject or fail to reject the null hypothesis. This method is particularly useful when a clear explanation of the significance of results is required. Critical values are especially beneficial in sectors where decision-making is influenced by predetermined thresholds, such as the commonly used 0.05 significance level.

One of the key strengths of the critical value approach is its consistency with accepted significance levels, which simplifies interpretation. This method is particularly valuable in non-parametric tests where distributional assumptions may be violated. The critical value approach involves comparing the observed test statistic to a predetermined cutoff value. If the test statistic is more extreme than the critical value, the null hypothesis is rejected in favor of the alternative hypothesis.

Benefits of p-value method

The p-value method offers a more nuanced approach to hypothesis testing. It provides a continuous scale for evaluating the strength of evidence against the null hypothesis, allowing researchers to interpret data with greater flexibility. This approach is particularly useful when conducting unique or exploratory research, as it enables scientists to choose an appropriate level of significance based on their specific context.

P-values quantify the probability of observing a test statistic as extreme as, or more extreme than, the one observed, assuming the null hypothesis is true. This method provides a more comprehensive understanding of outcomes, especially when combined with effect size measures. For instance, a p-value of 0.0127 indicates that it is unlikely to observe such an extreme test statistic if the null hypothesis were true, leading to its rejection.

Choosing the right approach for your study

The choice between critical values and p-values depends on various factors, including the nature of the data , study design, and research objectives. Critical values are best suited for situations requiring a simple, binary choice about the null hypothesis. They streamline the decision-making process by classifying results as significant or not significant.

On the other hand, p-values are more appropriate when evaluating the strength of evidence against the null hypothesis on a continuous scale. They offer a more subtle understanding of the data’s significance and allow for flexibility in interpretation. However, it’s crucial to note that p-values have been subject to debate and controversy, particularly in the context of analyzing complex data associated with plant and animal breeding programs.

When choosing between these approaches, consider the following:

  • If you need a clear-cut decision based on predetermined thresholds, the critical value approach may be more suitable.
  • For a more nuanced interpretation of results, especially in exploratory research, the p-value method might be preferable.
  • Consider the potential for misinterpretation and misuse associated with p-values, such as p-value hacking , which can lead to inflated significance and misleading conclusions.

Ultimately, the choice between critical values and p-values should be guided by the specific requirements of your study and the need for accurate statistical inferences to make informed decisions in your field of research.

Common Pitfalls in Interpreting Results

Overreliance on arbitrary thresholds

One of the most prevalent issues in statistical analysis is the overreliance on arbitrary thresholds, particularly the p-value of 0.05 . This threshold has been widely used for decades to determine statistical significance , but its arbitrary nature has come under scrutiny. Many researchers argue that setting a single threshold for all sciences is too extreme and can lead to misleading conclusions.

The use of p-values as the sole measure of significance can result in the publication of potentially false or misleading results. It’s crucial to understand that statistical significance does not necessarily equate to practical significance or real-world importance. A study with a large sample size can produce statistically significant results even when the effect size is trivial.

To address this issue, some researchers propose selecting and justifying p-value thresholds for experiments before collecting any data. These levels would be based on factors such as the potential impact of a discovery or how surprising it would be. However, this approach also has its critics, who argue that researchers may not have the incentive to use more stringent thresholds of evidence.

Ignoring effect sizes

Another common pitfall in interpreting results is the tendency to focus solely on statistical significance while ignoring effect sizes. Effect size is a crucial measure that indicates the magnitude of the relationship between variables or the difference between groups. It provides information about the practical significance of research findings, which is often more valuable than mere statistical significance.

Unlike p-values, effect sizes are independent of sample size . This means they offer a more reliable measure of the practical importance of a result, especially when dealing with large datasets. Researchers should report effect sizes alongside p-values to provide a comprehensive understanding of their findings.

It’s important to note that the criteria for small or large effect sizes may vary depending on the research field. Therefore, it’s essential to consider the context and norms within a particular area of study when interpreting effect sizes.

Misinterpreting statistical vs. practical significance

The distinction between statistical and practical significance is often misunderstood or overlooked in research. Statistical significance, typically determined by p-values, indicates the probability that the observed results occurred by chance. However, it does not provide information about the magnitude or practical importance of the effect.

Practical significance, on the other hand, refers to the real-world relevance or importance of the research findings. A result can be statistically significant but practically insignificant, or vice versa. For instance, a study with a large sample size might find a statistically significant difference between two groups, but the actual difference may be too small to have any meaningful impact in practice.

To avoid this pitfall, researchers should focus on both statistical and practical significance when interpreting their results. This involves considering not only p-values but also effect sizes, confidence intervals, and the potential real-world implications of the findings. Additionally, it’s crucial to interpret results in the context of the specific research question and field of study.

By addressing these common pitfalls, researchers can improve the quality and relevance of their statistical analyses. This approach will lead to more meaningful interpretations of results and better-informed decision-making in various fields of study.

Critical values and p-values are key tools in statistical analysis , each offering unique benefits to researchers. These concepts help in making informed decisions about hypotheses and understanding the significance of findings. While critical values provide a clear-cut approach for decision-making, p-values offer a more nuanced evaluation of evidence against the null hypothesis. Understanding their differences and proper use is crucial to avoid common pitfalls in result interpretation.

Ultimately, the choice between critical values and p-values depends on the specific needs of a study and the context of the research. It’s essential to consider both statistical and practical significance when interpreting results, and to avoid overreliance on arbitrary thresholds. By using these tools wisely, researchers can enhance the quality and relevance of their statistical analyses, leading to more meaningful insights and better-informed decisions.

1. When should you use a critical value as opposed to a p-value in hypothesis testing?

When testing a hypothesis, compare the p-value directly with the significance level (α). If the p-value is less than α, reject the null hypothesis (H0); if it’s greater, do not reject H0. The critical value approach reaches the same decision by comparing the test statistic to the critical value instead, which is useful when you have a table of critical values but no easy way to compute an exact p-value.

2. What does it mean if the p-value is less than the critical value?

P-values are compared with the significance level (α), not with the critical value itself. If the p-value is lower than α, you should reject the null hypothesis; if it is equal to or greater than α, you should not reject it. The two approaches agree: the p-value falls below α exactly when the test statistic is more extreme than the critical value. Remember, a smaller p-value generally indicates stronger evidence against the null hypothesis.

3. What is the purpose of a critical value in statistical testing?

The critical value is a point on the distribution of the test statistic that defines the boundary between the acceptance and rejection regions for a statistical test. It sets the threshold for what constitutes statistically significant results.

4. When should you reject the null hypothesis based on the critical value?

In the critical value approach, if the test statistic is more extreme than the critical value, reject the null hypothesis. If it is less extreme, do not reject the null hypothesis. This method helps in deciding the statistical significance of the test results.



Statistics By Jim

Making statistics intuitive

T-Distribution Table of Critical Values

By Jim Frost

This t-distribution table provides the critical t-values for both one-tailed and two-tailed t-tests, and confidence intervals. Learn how to use this t-table with the information, examples, and illustrations below the table.

df   α (one-tailed): 0.10  0.05  0.025  0.01  0.005  0.0005
     α (two-tailed): 0.20  0.10  0.05   0.02  0.01   0.001
1 3.078 6.314 12.71 31.82 63.66 636.62
2 1.886 2.920 4.303 6.965 9.925 31.599
3 1.638 2.353 3.182 4.541 5.841 12.924
4 1.533 2.132 2.776 3.747 4.604 8.610
5 1.476 2.015 2.571 3.365 4.032 6.869
6 1.440 1.943 2.447 3.143 3.707 5.959
7 1.415 1.895 2.365 2.998 3.499 5.408
8 1.397 1.860 2.306 2.896 3.355 5.041
9 1.383 1.833 2.262 2.821 3.250 4.781
10 1.372 1.812 2.228 2.764 3.169 4.587
11 1.363 1.796 2.201 2.718 3.106 4.437
12 1.356 1.782 2.179 2.681 3.055 4.318
13 1.350 1.771 2.160 2.650 3.012 4.221
14 1.345 1.761 2.145 2.624 2.977 4.140
15 1.341 1.753 2.131 2.602 2.947 4.073
16 1.337 1.746 2.120 2.583 2.921 4.015
17 1.333 1.740 2.110 2.567 2.898 3.965
18 1.330 1.734 2.101 2.552 2.878 3.922
19 1.328 1.729 2.093 2.539 2.861 3.883
20 1.325 1.725 2.086 2.528 2.845 3.850
21 1.323 1.721 2.080 2.518 2.831 3.819
22 1.321 1.717 2.074 2.508 2.819 3.792
23 1.319 1.714 2.069 2.500 2.807 3.768
24 1.318 1.711 2.064 2.492 2.797 3.745
25 1.316 1.708 2.060 2.485 2.787 3.725
26 1.315 1.706 2.056 2.479 2.779 3.707
27 1.314 1.703 2.052 2.473 2.771 3.690
28 1.313 1.701 2.048 2.467 2.763 3.674
29 1.311 1.699 2.045 2.462 2.756 3.659
30 1.310 1.697 2.042 2.457 2.750 3.646
40 1.303 1.684 2.021 2.423 2.704 3.551
60 1.296 1.671 2.000 2.390 2.660 3.460
80 1.292 1.664 1.990 2.374 2.639 3.416
100 1.290 1.660 1.984 2.364 2.626 3.390
1000 1.282 1.646 1.962 2.330 2.581 3.300
∞ (z) 1.282 1.645 1.960 2.326 2.576 3.291

How to Use the T-Distribution Table

Use the t-distribution table by finding the intersection of your significance level and degrees of freedom. The t-distribution is the sampling distribution of t-values when the null hypothesis is true. Learn more about the T Distribution: Definition and Uses .

Significance Level (Alpha α) : Choose the column in the t-distribution table that contains the significance level for your test. Be sure to choose the alpha for a one- or two-tailed t-test based on your t-test’s methodology. Learn more about the Significance Level and One- and Two-Tailed Tests .

Degrees of freedom (df) : Choose the row of the t-table that corresponds to the degrees of freedom in your t-test. The final row in the table lists the z-distribution’s critical values for comparison. Learn more about Degrees of Freedom .

Critical Values : In the t-distribution table, find the cell at the column and row intersection. When you are performing a:

  • Two-tailed t-test : Use the positive critical value AND the negative form to cover both tails of the distribution.
  • One-tailed t-test : Use the positive critical value OR the negative value depending on whether you’re using an upper (+) or lower (-) sided test.

Learn more about : How T-tests Work , test statistics , critical values , and How to do T-Tests in Excel

Tables for other statistics include the z-table , chi-square table , and F-table .

Examples of Using the T-Distribution Table of Critical Values

Two-sided t-test

Suppose you perform a two-tailed t-test with a significance level of 0.05 and 20 degrees of freedom, and you need to find the critical values.

In the t-distribution table, find the column which contains alpha = 0.05 for the two-tailed test. Then, find the row corresponding to 20 degrees of freedom. The truncated t-table below shows the critical t-value.

T-distribution table showing the critical t-value for a two-sided t-test.

The t-table indicates that the critical values for our test are -2.086 and +2.086. Use both the positive and negative values for a two-sided test. Your results are statistically significant if your t-value is less than the negative value or greater than the positive value. The graph below illustrates these results.

Plot that displays the critical regions in the two tails of the distribution for our t-table results.

One-sided t-test

Now, suppose you perform a one-sided t-test with a significance level of 0.05 and 20 df.

In the t-distribution table, find the column which contains alpha = 0.05 for the one-tailed test. Then, find the row corresponding to 20 degrees of freedom. The truncated t-table below shows the critical t-value.

T-distribution table that shows the critical t-value for a one-sided t-test.

The row and column intersection in the t-distribution table indicates that the critical t-value is 1.725. Use either the positive or negative critical value depending on the direction of your t-test. The graphs below illustrate both one-sided tests. Your results are statistically significant if your t-value falls in the red critical region.

Plot that displays a single critical region for a one-tailed test for our t-table results.
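Both table lookups can be checked against SciPy's inverse CDF for the t-distribution, assuming SciPy is available:

```python
from scipy.stats import t

df = 20
t_two = t.ppf(1 - 0.05 / 2, df)  # two-tailed: split alpha across both tails
t_one = t.ppf(1 - 0.05, df)      # one-tailed: all of alpha in one tail

print(round(t_two, 3))  # 2.086 -- matches the table row for 20 df
print(round(t_one, 3))  # 1.725
```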

Using Critical T-values to Calculate Confidence Intervals

To calculate a two-sided confidence interval for a t-test, take the positive critical value from the t-distribution table and multiply it by your sample’s standard error of the mean . Then take the sample mean and add and subtract the product from it to calculate the upper and lower interval limits, respectively.

For a one-sided confidence interval, either add or subtract the product from the mean to calculate the upper or lower bound, respectively.

The confidence level is 1 – α.
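Putting those steps together in code, with entirely hypothetical sample statistics chosen for illustration (SciPy assumed available):

```python
from scipy.stats import t

# Hypothetical sample: mean 50, standard error of the mean 2, n = 21.
sample_mean, sem, n = 50.0, 2.0, 21
alpha = 0.05                                   # confidence level = 1 - alpha = 95%

t_crit = t.ppf(1 - alpha / 2, df=n - 1)        # positive two-sided critical value
margin = t_crit * sem                          # critical value times standard error
lower, upper = sample_mean - margin, sample_mean + margin

print(f"95% CI: ({lower:.3f}, {upper:.3f})")   # (45.828, 54.172)
```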



Reader Interactions


July 7, 2024 at 12:51 pm

Sir, in “2.021 + 0.0065 = 2.0315”, 0.0065 should be replaced by 0.0105.


June 1, 2024 at 5:22 am

Hello. I am testing a hypothesis using the p-value method with a t-test. My test statistic equals -0.12. In the table, I cannot find any nearby numbers to compare it to the significance level. Thank you!


June 1, 2024 at 11:43 pm

To find the critical value, you need to know the DF for your test, whether you’re using a one- or two-tailed test (usually two-tailed), and the significance level (usually 0.05). That gives you the critical value(s). If you’re performing a two-tailed test, you’ll use the positive and negative values of the displayed number as the two critical values.

However, your test statistic is so close to zero that it’s definitely not significant. That’s why you’re not seeing any values close to your test statistic.


November 11, 2022 at 12:11 am

Thanks very much. It’s helpful to me.


September 30, 2022 at 11:44 am

In my problem the sample size is 36 so the DF is 35. I am using a two-tailed test with an alpha of .05. DF 35 is not on the table – what do I do?

October 2, 2022 at 9:27 pm

Hi Chelsea,

There are two standard approaches.

One is to use interpolation, which figures out the in-between value. In this case, it’s simple to interpolate because it’s exactly halfway between the critical values of 2.042 and 2.021 for 30 and 40 DF, respectively. The difference is 2.042 – 2.021 = 0.021. Divide that by two for half the distance (0.0105) and then add that to the lower value of 2.021 + 0.0105 = 2.0315. Of course, we’d use the positive and negative values for a two-sided test. That’s an approximation by interpolating the table values.

Another alternative is to go with the more conservative value, which is the smaller DF. In that case, you’d use the critical value for 30 DF, which is ±2.042. If it is significant with the lower DF, then you know it’s significant for the actual, somewhat higher DF.

And, of course, in this day and age, you could use a t-value calculator, which produces +/- 2.030108. That’s the most exact value.
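Both approaches in this reply can be checked in code; a sketch assuming SciPy for the exact value:

```python
from scipy.stats import t

# Two-tailed critical values from the t-table at alpha = 0.05.
cv_30, cv_40 = 2.042, 2.021  # for 30 and 40 DF, respectively

# Linear interpolation at 35 DF, halfway between the tabled DF values.
cv_interp = cv_30 + (35 - 30) / (40 - 30) * (cv_40 - cv_30)
print(round(cv_interp, 4))   # 2.0315

# The exact value from the inverse CDF, as a t-value calculator would give.
cv_exact = t.ppf(1 - 0.05 / 2, 35)
print(round(cv_exact, 4))    # 2.0301
```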


May 12, 2022 at 6:55 pm

Hi Jim, I think the DF has to be used in this case. This is a t distribution and DF = n-1. So, DF for the critical values would be t (0.05 or 0.025), 19. Not df:20. Thanks. Merve

May 12, 2022 at 7:46 pm

My examples use 20 DF, and you can see that DF is the first column in the table. In a 1-sample t-test, 20 DF corresponds to a sample size of 21 because for this test DF = n – 1. However, for a 2-sample t-test, the 20 degrees of freedom corresponds to a sample size of 22 because for that test DF = N₁ + N₂ – 2. Regardless of the test, the critical values that I show are accurate for 20 DF.


A Beginner's Guide to Hypothesis Testing: Key Concepts and Applications

  • September 27, 2024

Hypothesis Testing

In our everyday lives, we often encounter statements and claims that we can't instantly verify. 

Have you ever questioned how to determine which statements are factual or validate them with certainty? 

Fortunately, there's a systematic way to find answers: Hypothesis Testing.

Hypothesis Testing is a fundamental concept in analytics and statistics, yet it remains a mystery to many. This method helps us understand and validate data and supports decision-making in various fields. 

Are you curious about how it works and why it's so crucial? 

Let's understand the hypothesis testing basics and explore its applications together.

What is hypothesis testing in statistics?

Hypothesis testing is a statistical method used to determine whether there is enough evidence in a sample of data to support a particular assumption. 

A statistical hypothesis test generally involves calculating a test statistic. The decision is then made by either comparing the test statistic to a critical value or assessing the p-value derived from the test statistic.

The P-value in Hypothesis Testing

The p-value helps determine whether to reject or fail to reject the null hypothesis (H₀) during hypothesis testing.

Two types of errors in this process are:

  • Type I error (α):

This happens when the null hypothesis is incorrectly rejected, meaning we think there's an effect or difference when there isn't.

It is denoted by α (significance level).

  • Type II error (β):

This occurs when the null hypothesis gets incorrectly accepted, meaning we fail to detect an effect or difference that exists.

It is denoted by β; the power of the test is 1 − β.

  • Type I error: Rejecting something that's true.
  • Type II error: Accepting something that's false.

Here's a simplified breakdown of the key components of hypothesis testing :

  • Null Hypothesis (H₀): The default assumption that there's no significant effect or difference
  • Alternative Hypothesis (H₁): The statement that challenges the null hypothesis, suggesting a significant effect
  • P-Value : This tells you how likely it is that your results happened by chance. 
  • Significance Level (α): Typically set at 0.05, this is the threshold used to conclude whether to reject the null hypothesis.
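The decision rule implied by the last two bullets can be written as a tiny function (a sketch; the function and variable names are my own, not from this guide):

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    """Compare a p-value to the significance level and return the decision."""
    if p_value <= alpha:
        return "reject H0"
    return "fail to reject H0"

print(decide(0.01))  # reject H0
print(decide(0.30))  # fail to reject H0
```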

This process is often used in financial analysis to test the effectiveness of trading strategies, assess portfolio performance, or predict market trends.

Statistical Hypothesis Testing for Beginners: A Step-by-Step Guide

Applying hypothesis testing in finance requires a clear understanding of the steps involved. 

Here's a practical approach for beginners:

STEP 1: Define the Hypothesis

Start by formulating your null and alternative hypotheses. For example, you might hypothesise that a certain stock's returns outperform the market average.

STEP 2: Collect Data

Gather relevant financial data from reliable sources, ensuring that your sample size is appropriate to draw meaningful conclusions.

STEP 3: Choose the Right Test

Select a one-tailed or two-tailed test depending on the data type and your hypothesis. Two-tailed tests are commonly used for financial analysis to assess whether a parameter differs in either direction.

STEP 4: Calculate the Test Statistic

Use statistical software or a financial calculator to compute your test statistic and compare it to the critical value.

STEP 5: Interpret the Results

Based on the p-value, decide whether to reject or fail to reject the null hypothesis. If the p-value is below the significance level, the data are unlikely under the null hypothesis, which gives you evidence in favor of the alternative hypothesis.
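The five steps can be sketched end to end with SciPy. Everything below is illustrative: the returns are randomly generated stand-ins for real financial data, and the function names come from SciPy, not from this guide:

```python
import numpy as np
from scipy import stats

# Step 2: collect data -- here, simulated daily excess returns of a stock.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.001, scale=0.01, size=60)

# Step 1: H0: mean excess return = 0; H1: mean excess return > 0.
# Step 3: a right-tailed (one-tailed) one-sample t-test fits this claim.
# Step 4: compute the test statistic and p-value.
t_stat, p_value = stats.ttest_1samp(returns, popmean=0.0, alternative="greater")

# Step 5: interpret against the significance level.
alpha = 0.05
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(t_stat, p_value, decision)
```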

Here's a quick reference table to help with your decisions:

Test Type | Null Hypothesis | Alternative Hypothesis | Use Case in Finance
One-tailed | No effect or no gain | A positive or negative impact | Testing a specific directional claim about stock returns
Two-tailed | No difference | Any significant difference | Comparing performance between two portfolios

Real-Life Applications of Hypothesis Testing in Finance

The concept of hypothesis testing basics might sound theoretical, but its real-world applications are vast in the financial sector. 

Here's how professionals use it:

  • Investment Portfolio Performance : Analysts often use statistical hypothesis testing for beginners to determine whether one investment portfolio performs better than another.
  • Risk Assessment: Statistical testing helps evaluate market risk by testing assumptions about asset price movements and volatility.
  • Forecasting Market Trends : Predicting future market trends using past data can be tricky, but research testing allows professionals to make more informed predictions by validating their assumptions.

Common Pitfalls to Avoid in Hypothesis Testing

Even seasoned professionals sometimes make mistakes in their hypothesis-testing analyses.

Here are some common mistakes you'll want to avoid:

Misinterpreting P-Values

A common misunderstanding is that a low p-value proves that the alternative hypothesis is correct. It just means there's strong evidence against the null hypothesis.

Ignoring Sample Size

Small sample sizes can also lead to misleading results, so ensuring that your data set is large enough to provide reliable insights is crucial.

Overfitting the Model

This happens when you tailor your hypothesis too closely to the sample data, resulting in a model that fails to hold up under different conditions.

By being aware of these pitfalls, you'll be better positioned to conduct accurate hypothesis tests in any financial scenario.

Lead The World of Finance with Imarticus Learning

Mastering hypothesis testing is crucial for making informed financial decisions and validating assumptions. Consider the exceptional CFA course at Imarticus Learning as you enhance your analytical skills.

Achieve a prestigious qualification in investment management and thrive in a competitive industry. Imarticus, a leading learning partner approved by the CFA Institute, offers the best CFA course . Benefit from Comprehensive Learning with top-tier materials from Kaplan Schweser, including books, study notes, and mock exams. 

Ready to elevate your finance career? 

Enrol now and unlock your potential with Imarticus Learning!

Q: What is hypothesis testing in finance?

A: This is a statistical method used in finance to validate assumptions or hypotheses about financial data, such as testing the performance of investment strategies.

Q: What are the types of hypothesis testing?

A: The two primary types are one-tailed and two-tailed tests. You can use one-tailed tests to assess a specific direction of effect, while you can use two-tailed tests to determine if there is any significant difference, regardless of the direction.
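This distinction maps directly onto the `alternative` argument of SciPy's t-test functions. A sketch with made-up data (the sample values and the hypothesized mean of 0 are illustrative):

```python
import numpy as np
from scipy import stats

# Hypothetical sample; the true mean used here (0.5) is arbitrary.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=1.0, size=50)

# Two-tailed: any significant difference from 0, in either direction.
_, p_two = stats.ttest_1samp(sample, 0.0, alternative="two-sided")

# One-tailed: a specific direction of effect (here, greater than 0).
_, p_right = stats.ttest_1samp(sample, 0.0, alternative="greater")

# When the test statistic points in the hypothesized direction,
# the one-tailed p-value is half the two-tailed one.
print(p_two, p_right)
```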

Q: What is a p-value in hypothesis testing?

A: A p-value indicates the probability that your observed results occurred by chance. A lower p-value suggests stronger evidence against the null hypothesis.

Q: Why is sample size important in hypothesis testing?

A: A larger sample size increases the reliability of results, reducing the risk of errors and providing more accurate conclusions in hypothesis testing.



What is The Null Hypothesis & When Do You Reject The Null Hypothesis

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She began studying for a Master's Degree in Counseling for Mental Health and Wellness in September 2023. Julia's research has been published in peer-reviewed journals.


Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A null hypothesis is a statistical concept suggesting no significant difference or relationship between measured variables. It’s the default assumption unless empirical evidence proves otherwise.

The null hypothesis states no relationship exists between the two variables being studied (i.e., one variable does not affect the other).

The null hypothesis is the statement that a researcher or an investigator wants to disprove.

Testing the null hypothesis can tell you whether your results are due to the effects of manipulating the independent variable or due to random chance. 

How to Write a Null Hypothesis

Null hypotheses (H0) start as research questions that the investigator rephrases as statements indicating no effect or relationship between the independent and dependent variables.

It is a default position that your research aims to challenge or confirm.

For example, if studying the impact of exercise on weight loss, your null hypothesis might be:

There is no significant difference in weight loss between individuals who exercise daily and those who do not.

Examples of Null Hypotheses

Research Question | Null Hypothesis
Do teenagers use cell phones more than adults? | Teenagers and adults use cell phones the same amount.
Do tomato plants exhibit a higher rate of growth when planted in compost rather than in soil? | Tomato plants show no difference in growth rates when planted in compost rather than soil.
Does daily meditation decrease the incidence of depression? | Daily meditation does not decrease the incidence of depression.
Does daily exercise increase test performance? | There is no relationship between daily exercise time and test performance.
Does the new vaccine prevent infections? | The vaccine does not affect the infection rate.
Does flossing your teeth affect the number of cavities? | Flossing your teeth has no effect on the number of cavities.

When Do We Reject The Null Hypothesis? 

We reject the null hypothesis when the data provide strong enough evidence to conclude that it is likely incorrect. This often occurs when the p-value (probability of observing the data given the null hypothesis is true) is below a predetermined significance level.

If the collected data does not meet the expectation of the null hypothesis, a researcher can conclude that the data lacks sufficient evidence to back up the null hypothesis, and thus the null hypothesis is rejected. 

Rejecting the null hypothesis means that a relationship does exist between a set of variables and the effect is statistically significant (p < 0.05).

If the data collected from the random sample is not statistically significant, then the null hypothesis will not be rejected, and the researchers can conclude that there is no evidence of a relationship between the variables. 

You need to perform a statistical test on your data in order to evaluate how consistent it is with the null hypothesis. A p-value is one statistical measurement used to validate a hypothesis against observed data.

Calculating the p-value is a critical part of null-hypothesis significance testing because it quantifies how strongly the sample data contradicts the null hypothesis.

The level of statistical significance is often expressed as a p-value between 0 and 1. The smaller the p-value, the stronger the evidence that you should reject the null hypothesis.


Usually, a researcher uses a significance level of 0.05 or 0.01 (corresponding to a confidence level of 95% or 99%) as a general guideline for deciding whether to reject or retain the null hypothesis.

When your p-value is less than or equal to your significance level, you reject the null hypothesis.

In other words, smaller p-values are taken as stronger evidence against the null hypothesis. Conversely, when the p-value is greater than your significance level, you fail to reject the null hypothesis.

In this case, the sample data provides insufficient evidence to conclude that the effect exists in the population.

Because you can never know with complete certainty whether there is an effect in the population, your inferences about a population will sometimes be incorrect.

When you incorrectly reject the null hypothesis, it’s called a type I error. When you incorrectly fail to reject it, it’s called a type II error.
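The meaning of the significance level as the Type I error rate can be checked by simulation. A sketch (SciPy assumed; all numbers are illustrative): when the null hypothesis is true, a test at α = 0.05 should incorrectly reject it about 5% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims, rejections = 0.05, 2000, 0

for _ in range(n_sims):
    # H0 is true by construction: the population mean really is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p <= alpha:
        rejections += 1  # a Type I error

print(rejections / n_sims)  # close to 0.05
```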

Why Do We Never Accept The Null Hypothesis?

The reason we do not say “accept the null” is because we are always assuming the null hypothesis is true and then conducting a study to see if there is evidence against it. And, even if we don’t find evidence against it, a null hypothesis is not accepted.

A lack of evidence only means that you haven’t proven that something exists. It does not prove that something doesn’t exist. 

It is risky to conclude that the null hypothesis is true merely because we did not find evidence to reject it. It is always possible that researchers elsewhere have disproved the null hypothesis, so we cannot accept it as true, but instead, we state that we failed to reject the null. 

One can either reject the null hypothesis, or fail to reject it, but can never accept it.

Why Do We Use The Null Hypothesis?

We can never prove with 100% certainty that a hypothesis is true; we can only collect evidence that supports a theory. However, testing a hypothesis can set the stage for rejecting or failing to reject this hypothesis within a certain confidence level.

The null hypothesis is useful because it can tell us whether the results of our study are due to random chance or the manipulation of a variable (with a certain level of confidence).

A null hypothesis is rejected if the measured data is significantly unlikely to have occurred under it, and we fail to reject the null hypothesis if the observed outcome is consistent with the position it holds.

Rejecting the null hypothesis sets the stage for further experimentation to see if a relationship between two variables exists. 

Hypothesis testing is a critical part of the scientific method as it helps decide whether the results of a research study support a particular theory about a given population. Hypothesis testing is a systematic way of backing up researchers’ predictions with statistical analysis.

It helps provide sufficient statistical evidence that either favors or rejects a certain hypothesis about the population parameter. 

Purpose of a Null Hypothesis 

  • The primary purpose of the null hypothesis is to disprove an assumption. 
  • Whether it is rejected or not, the null hypothesis can help further progress a theory in many scientific cases.
  • A null hypothesis can be used to ascertain how consistent the outcomes of multiple studies are.

Do you always need both a Null Hypothesis and an Alternative Hypothesis?

The null (H0) and alternative (Ha or H1) hypotheses are two competing claims that describe the effect of the independent variable on the dependent variable. They are mutually exclusive, which means that only one of the two hypotheses can be true. 

While the null hypothesis states that there is no effect in the population, an alternative hypothesis states that there is statistical significance between two variables. 

The goal of hypothesis testing is to make inferences about a population based on a sample. In order to undertake hypothesis testing, you must express your research hypothesis as a null and alternative hypothesis. Both hypotheses are required to cover every possible outcome of the study. 

What is the difference between a null hypothesis and an alternative hypothesis?

The alternative hypothesis is the complement to the null hypothesis. The null hypothesis states that there is no effect or no relationship between variables, while the alternative hypothesis claims that there is an effect or relationship in the population.

It is the claim that you expect or hope will be true. The null hypothesis and the alternative hypothesis are always mutually exclusive, meaning that only one can be true at a time.

What are some problems with the null hypothesis?

One major problem with the null hypothesis is that researchers typically will assume that accepting the null is a failure of the experiment. However, accepting or rejecting any hypothesis is a positive result. Even if the null is not refuted, the researchers will still learn something new.

Why can a null hypothesis not be accepted?

We can either reject or fail to reject a null hypothesis, but never accept it. If your test fails to detect an effect, this is not proof that the effect doesn’t exist. It just means that your sample did not have enough evidence to conclude that it exists.

We can’t accept a null hypothesis because a lack of evidence does not prove something that does not exist. Instead, we fail to reject it.

Failing to reject the null indicates that the sample did not provide sufficient evidence to conclude that an effect exists.

If the p-value is greater than the significance level, then you fail to reject the null hypothesis.

Is a null hypothesis directional or non-directional?

A hypothesis test can contain either a directional alternative hypothesis or a non-directional alternative hypothesis. A directional hypothesis is one that contains the less-than ("<") or greater-than (">") sign.

A non-directional hypothesis contains the not-equal sign ("≠"). However, a null hypothesis is neither directional nor non-directional.

A null hypothesis is a prediction that there will be no change, relationship, or difference between two variables.

The directional hypothesis or nondirectional hypothesis would then be considered alternative hypotheses to the null hypothesis.



Stats and R

Hypothesis test by hand


Descriptive versus inferential statistics


Remember that descriptive statistics is the branch of statistics aiming at describing and summarizing a set of data in the best possible manner, that is, by reducing it down to a few meaningful key measures and visualizations—with as little loss of information as possible. In other words, the branch of descriptive statistics helps to have a better understanding and a clear image about a set of observations thanks to summary statistics and graphics. With descriptive statistics, there is no uncertainty because we describe only the group of observations that we decided to work on and no attempt is made to generalize the observed characteristics to another or to a larger group of observations.

Inferential statistics, on the other hand, is the branch of statistics that uses a random sample of data taken from a population to make inferences, i.e., to draw conclusions about the population of interest (see the difference between population and sample if you need a refresher on the two concepts). In other words, information from the sample is used to make generalizations about the parameter of interest in the population.

The two most important tools used in the domain of inferential statistics are:

  • hypothesis test (which is the main subject of the present article), and
  • confidence interval (which is briefly discussed in this section )

Via my teaching tasks, I realized that many students (especially in introductory statistics classes) struggle to perform hypothesis tests and interpret the results. It seems to me that these students often encounter difficulties mainly because hypothesis testing is rather unclear and abstract to them.

One of the reasons it looks abstract to them is that they do not understand the final goal of hypothesis testing—the “why” behind this tool. They often do inferential statistics without understanding the reasoning behind it, as if they were following a cooking recipe which does not require any thinking. However, as soon as they understand the principle underlying hypothesis testing, it is much easier for them to apply the concepts and solve the exercises.

For this reason, I thought it would be useful to write an article on the goal of hypothesis tests (the “why?”), the context in which they should be used (the “when?”), how they work (the “how?”), and how to interpret the results (the “so what?”). Like anything else in statistics, it becomes much easier to apply a concept in practice when we understand what we are testing or what we are trying to demonstrate beforehand.

In this article, I present—as comprehensibly as possible—the different steps required to perform and conclude a hypothesis test by hand .

These steps are illustrated with a basic example. This will build the theoretical foundations of hypothesis testing, which will in turn be of great help for the understanding of most statistical tests .

Hypothesis tests come in many forms and can be used for many parameters or research questions. The steps I present in this article are not applicable to all hypothesis tests, unfortunately.

They are, however, appropriate for at least the most common hypothesis tests—the tests on:

  • One mean: \(\mu\)
  • Two means, independent samples: \(\mu_1\) and \(\mu_2\)
  • Two means, paired samples: \(\mu_D\)
  • One proportion: \(p\)
  • Two proportions: \(p_1\) and \(p_2\)
  • One variance: \(\sigma^2\)
  • Two variances: \(\sigma^2_1\) and \(\sigma^2_2\)

The good news is that the principles behind these 6 statistical tests (and many more) are exactly the same. So if you understand the intuition and the process for one of them, all others pretty much follow.

Unlike descriptive statistics where we only describe the data at hand, hypothesis tests use a subset of observations , referred as a sample , to draw conclusions about a population .

One may wonder why we would try to “guess” or make inference about a parameter of a population based on a sample, instead of simply collecting data for the entire population, compute statistics we are interested in and take decisions based upon that.

The main reason we actually use a sample instead of the entire population is because, most of the time, collecting data on the entire population is practically impossible, too complex, too expensive, it would take too long, or a combination of any of these. 1

So the overall objective of a hypothesis test is to draw conclusions in order to confirm or refute a belief about a population , based on a smaller group of observations.

In practice, we take some measurements of the variable of interest—representing the sample(s)—and we check whether our measurements are likely or not given our assumption (our belief). Based on the probability of observing the sample(s) we have, we decide whether we can trust our belief or not.

Hypothesis tests have many practical applications.

Here are different situations illustrating when the 6 tests mentioned above would be appropriate:

  • One mean: suppose that a health professional would like to test whether the mean weight of Belgian adults is different than 80 kg (176.4 lbs).
  • Independent samples: suppose that a physiotherapist would like to test the effectiveness of a new treatment by measuring the mean response time (in seconds) for patients in a control group and patients in a treatment group, where patients in the two groups are different.
  • Paired samples: suppose that a physiotherapist would like to test the effectiveness of a new treatment by measuring the mean response time (in seconds) before and after a treatment, where patients are measured twice—before and after treatment, so patients are the same in the 2 samples.
  • One proportion: suppose that a political pundit would like to test whether the proportion of citizens who are going to vote for a specific candidate is smaller than 30%.
  • Two proportions: suppose that a doctor would like to test whether the proportion of smokers is different between professional and amateur athletes.
  • One variance: suppose that an engineer would like to test whether a voltmeter has a lower variability than what is imposed by the safety standards.
  • Two variances: suppose that, in a factory, two production lines work independently from each other. The financial manager would like to test whether the costs of the weekly maintenance of these two machines have the same variance. Note that a test on two variances is also often performed to verify the assumption of equal variances, which is required for several other statistical tests, such as the Student’s t-test for instance.

Of course, this is a non-exhaustive list of potential applications and many research questions can be answered thanks to a hypothesis test.

One important point to remember is that in hypothesis testing we are always interested in the population and not in the sample. The sample is used for the aim of drawing conclusions about the population, so we always test in terms of the population.

Usually, hypothesis tests are used to answer research questions in confirmatory analyses . Confirmatory analyses refer to statistical analyses where hypotheses—deduced from theory—are defined beforehand (preferably before data collection). In this approach, the researcher has a specific idea about the variables under consideration, and she is trying to see if her idea, specified as hypotheses, is supported by data.

On the other hand, hypothesis tests are rarely used in exploratory analyses. 2 Exploratory analyses aim to uncover possible relationships between the variables under investigation. In this approach, the researcher does not have any clear theory-driven assumptions or ideas in mind before data collection. This is the reason exploratory analyses are sometimes referred to as hypothesis-generating analyses—they are used to create some hypotheses, which in turn may be tested via confirmatory analyses at a later stage.

There are, to my knowledge, 3 different methods to perform a hypothesis test:

  • Method A: comparing the test statistic with the critical value
  • Method B: comparing the p-value with the significance level \(\alpha\)
  • Method C: comparing the target parameter with the confidence interval

Although the process for these 3 approaches may slightly differ, they all lead to the exact same conclusions. Using one method or another is, therefore, more often than not a matter of personal choice or a matter of context. See this section to know which method I use depending on the context.

I present the 3 methods in the following sections, starting with, in my opinion, the most comprehensive one when it comes to doing it by hand: comparing the test statistic with the critical value.

For the three methods, I will explain the required steps to perform a hypothesis test from a general point of view and illustrate them with the following situation: 3

Suppose a health professional would like to test whether the mean weight of Belgian adults is different from 80 kg.

Note that, as for most hypothesis tests, the test we are going to use as example below requires some assumptions. Since the aim of the present article is to explain a hypothesis test, we assume that all assumptions are met. For the interested reader, see the assumptions (and how to verify them) for this type of hypothesis test in the article presenting the one-sample t-test .

Method A, which consists in comparing the test statistic with the critical value, boils down to the following 4 steps:

  • Stating the null and alternative hypothesis
  • Computing the test statistic
  • Finding the critical value
  • Concluding and interpreting the results

Each step is detailed below.

As discussed before, a hypothesis test first requires an idea, that is, an assumption about a phenomenon. This assumption, referred to as a hypothesis, is derived from the theory and/or the research question.

Since a hypothesis test is used to confirm or refute a prior belief, we need to formulate our belief so that there is a null and an alternative hypothesis . Those hypotheses must be mutually exclusive , which means that they cannot be true at the same time. This is step #1.

In the context of our scenario, the null and alternative hypotheses are thus:

  • Null hypothesis \(H_0: \mu = 80\)
  • Alternative hypothesis \(H_1: \mu \ne 80\)

When stating the null and alternative hypotheses, bear in mind the following three points:

  • We are always interested in the population and not in the sample. This is the reason \(H_0\) and \(H_1\) will always be written in terms of the population and not in terms of the sample (in this case, \(\mu\) and not \(\bar{x}\) ).
  • The assumption we would like to test is often the alternative hypothesis. If the researcher wanted to test whether the mean weight of Belgian adults was less than 80 kg, she would have stated \(H_0: \mu = 80\) (or equivalently, \(H_0: \mu \ge 80\) ) and \(H_1: \mu < 80\) . 4 Do not mix the null with the alternative hypothesis, or the conclusions will be diametrically opposed!
  • The null hypothesis is often the status quo. For instance, suppose that a doctor wants to test whether the new treatment A is more efficient than the old treatment B. The status quo is that the new and old treatments are equally efficient. Assuming a larger value is better, she will then write \(H_0: \mu_A = \mu_B\) (or equivalently, \(H_0: \mu_A - \mu_B = 0\) ) and \(H_1: \mu_A > \mu_B\) (or equivalently, \(H_1: \mu_A - \mu_B > 0\) ). Conversely, if a lower value is better, she would have written \(H_0: \mu_A = \mu_B\) (or equivalently, \(H_0: \mu_A - \mu_B = 0\) ) and \(H_1: \mu_A < \mu_B\) (or equivalently, \(H_1: \mu_A - \mu_B < 0\) ).

The test statistic (often called t-stat ) is, in some sense, a metric indicating how extreme the observations are compared to the null hypothesis . The higher the t-stat (in absolute value), the more extreme the observations are.

There are several formulas to compute the t-stat, with one formula for each type of hypothesis test—one or two means, one or two proportions, one or two variances. This means that there is a formula to compute the t-stat for a hypothesis test on one mean, another formula for a test on two means, another for a test on one proportion, etc. 5

The only difficulty in this second step is to choose the appropriate formula. As soon as you know which formula to use based on the type of test, you simply have to apply it to the data. For the interested reader, see the different formulas to compute the t-stat for the most common tests in this Shiny app .

Luckily, formulas for hypothesis tests on one and two means, and one and two proportions follow the same structure.

Computing the test statistic for these tests is similar to scaling a random variable (a process also known as “standardization” or “normalization”), which consists in subtracting the mean from that random variable and dividing the result by the standard deviation:

\[Z = \frac{X - \mu}{\sigma}\]

For these 4 hypothesis tests (one/two means and one/two proportions), computing the test statistic is like scaling the estimator (computed from the sample) corresponding to the parameter of interest (in the population). So we basically subtract the target parameter from the point estimator and then divide the result by the standard error (which is equivalent to the standard deviation but for an estimator).

If this is unclear, here is how the test statistic (denoted \(t_{obs}\) ) is computed in our scenario (assuming that the variance of the population is unknown):

\[t_{obs} = \frac{\bar{x} - \mu}{\frac{s}{\sqrt{n}}}\]

  • \(\bar{x}\) is the sample mean (i.e., the estimator)
  • \(\mu\) is the mean under the null hypothesis (i.e., the target parameter)
  • \(s\) is the sample standard deviation
  • \(n\) is the sample size
  • \(\frac{s}{\sqrt{n}}\) is the standard error

Notice the similarity between the formula of this test statistic and the formula used to standardize a random variable. This structure is the same for a test on two means, one proportion and two proportions, except that the estimator, the parameter and the standard error are, of course, slightly different for each type of test.

Suppose that in our case we have a sample mean of 71 kg ( \(\bar{x}\) = 71), a sample standard deviation of 13 kg ( \(s\) = 13) and a sample size of 10 adults ( \(n\) = 10). Remember that the population mean (the mean under the null hypothesis) is 80 kg ( \(\mu\) = 80).

The t-stat is thus:

\[t_{obs} = \frac{\bar{x} - \mu}{\frac{s}{\sqrt{n}}} = \frac{71 - 80}{\frac{13}{\sqrt{10}}} = -2.189\]
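As a quick sanity check, this computation can be reproduced in a few lines of Python (a minimal sketch; the variable names are ours):

```python
import math

# Sample data from the example: weight of Belgian adults
x_bar = 71   # sample mean (kg)
mu = 80      # mean under the null hypothesis (kg)
s = 13       # sample standard deviation (kg)
n = 10       # sample size

standard_error = s / math.sqrt(n)
t_obs = (x_bar - mu) / standard_error
print(round(t_obs, 3))  # -2.189
```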

Although formulas differ depending on which parameter you are testing, the value found for the test statistic gives us an indication of how extreme our observations are.

We keep this value of -2.189 in mind because it will be used again in step #4.

Although the t-stat gives us an indication of how extreme our observations are, we cannot tell whether this “score of extremity” is too extreme or not based on its value only.

So, at this point, we cannot yet tell whether our data are too extreme or not. For this, we need to compare our t-stat with a threshold—referred to as the critical value —given by the probability distribution tables (and which can, of course, also be found with R).

In the same way that the formula to compute the t-stat is different for each parameter of interest, the underlying probability distribution—and thus the statistical table—on which the critical value is based is also different for each target parameter. This means that, in addition to choosing the appropriate formula to compute the t-stat, we also need to select the appropriate probability distribution depending on the parameter we are testing.

Luckily, there are only 4 different probability distributions for the 6 hypothesis tests covered in this article (one/two means, one/two proportions and one/two variances):

  • the standard Normal distribution, for: a test on one or two means with known population variance(s); a test on two paired samples where the variance of the difference between the 2 samples \(\sigma^2_D\) is known; and a test on one or two proportions (given that some assumptions are met)
  • the Student distribution, for: a test on one or two means with unknown population variance(s); and a test on two paired samples where the variance of the difference between the 2 samples \(\sigma^2_D\) is unknown
  • the Chi-square distribution, for a test on one variance
  • the Fisher distribution, for a test on two variances

Each probability distribution also has its own parameters (up to two parameters for the 4 distributions considered here), defining its shape and/or location. The parameter(s) of a probability distribution can be seen as its DNA: the distribution is entirely defined by its parameter(s).

Take our initial scenario—a health professional who would like to test whether the mean weight of Belgian adults is different from 80 kg—as an example.

The underlying probability distribution of a test on one mean is either the standard Normal or the Student distribution, depending on whether the variance of the population (not sample variance!) is known or unknown: 6

  • If the population variance is known \(\rightarrow\) the standard Normal distribution is used
  • If the population variance is unknown \(\rightarrow\) the Student distribution is used

If no population variance is explicitly given, you can assume that it is unknown since you cannot compute it based on a sample. If you could compute it, that would mean you have access to the entire population and there is, in this case, no point in performing a hypothesis test (you could simply use some descriptive statistics to confirm or refute your belief).

In our example, no population variance is specified so it is assumed to be unknown. We therefore use the Student distribution.

The Student distribution has one parameter which defines it: the number of degrees of freedom. The number of degrees of freedom depends on the type of hypothesis test. For instance, the number of degrees of freedom for a test on one mean is equal to the number of observations minus one ( \(n\) - 1). Without going too far into the details, the - 1 comes from the fact that one quantity is estimated (i.e., the mean). 7 The sample size being equal to 10 in our example, the number of degrees of freedom is \(n\) - 1 = 10 - 1 = 9.

There is only one last element missing to find the critical value: the significance level . The significance level , denoted \(\alpha\) , is the probability of wrongly rejecting the null hypothesis, so the probability of rejecting the null hypothesis although it is in reality true . In this sense, it is an error (type I error, as opposed to the type II error 8 ) that we accept to deal with, in order to be able to draw conclusions about a population based on a subset of it.

As you may have read in many statistical textbooks, the significance level is very often set to 5%. 9 In some fields (such as medicine or engineering, among others), the significance level is also sometimes set to 1% to decrease the error rate.

It is best to specify the significance level before performing a hypothesis test to avoid the temptation to set it in accordance with the results (the temptation is even bigger when the results are on the edge of being significant). As I always tell my students, you cannot “guess” nor compute the significance level; it must be chosen beforehand. If it is not explicitly specified, you can safely assume it is 5%. In our case, we did not indicate it, so we take \(\alpha\) = 5% = 0.05.

Furthermore, in our example, we want to test whether the mean weight of Belgian adults is different than 80 kg. Since we do not specify the direction of the test, it is a two-sided test . If we wanted to test that the mean weight was less than 80 kg ( \(H_1: \mu <\) 80) or greater than 80 kg ( \(H_1: \mu >\) 80), we would have done a one-sided test.

Make sure that you perform the correct test (two-sided or one-sided) because it has an impact on how to find the critical value (see more in the following paragraphs).

So now that we know the appropriate distribution (Student distribution), its parameter (degrees of freedom (df) = 9), the significance level ( \(\alpha\) = 0.05) and the direction (two-sided), we have all we need to find the critical value in the statistical tables :

[Figure: Student’s t-distribution table]

By looking at the row df = 9 and the column \(t_{.025}\) in the Student’s distribution table, we find a critical value of:

\[t_{n-1; \alpha / 2} = t_{9; 0.025} = 2.262\]

One may wonder why we take \(t_{\alpha/2} = t_{.025}\) and not \(t_\alpha = t_{.05}\) since the significance level is 0.05. The reason is that we are doing a two-sided test ( \(H_1: \mu \ne\) 80), so the error rate of 0.05 must be divided by 2 to find the critical value to the right of the distribution. Since the Student’s distribution is symmetric, the critical value to the left of the distribution is simply -2.262.

Visually, the error rate of 0.05 is partitioned into two parts:

  • 0.025 to the left of -2.262 and
  • 0.025 to the right of 2.262

[Figure: Student distribution with the rejection regions (0.025 to the left of -2.262 and 0.025 to the right of 2.262) shaded in red]

We keep in mind these critical values of -2.262 and 2.262 for the fourth and last step.

Note that the red shaded areas in the previous plot are also known as the rejection regions. More on that in the following section.

These critical values can also be found in R, thanks to the qt() function: qt(p = 0.025, df = 9) and qt(p = 0.975, df = 9), which give -2.262 and 2.262, respectively (after rounding).

The qt() function is used for the Student’s distribution ( q stands for quantile and t for Student). There are other functions accompanying the different distributions:

  • qnorm() for the Normal distribution
  • qchisq() for the Chi-square distribution
  • qf() for the Fisher distribution
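As an aside, if you only have Python's standard library at hand, quantiles of the standard Normal distribution can be obtained with statistics.NormalDist (note that the stdlib has no equivalent of qt(), qchisq(), or qf(); this is just a sketch for the Normal case):

```python
from statistics import NormalDist

alpha = 0.05
# Two-sided critical values for the standard Normal N(0, 1)
z_right = NormalDist().inv_cdf(1 - alpha / 2)  # analogous to qnorm(0.975) in R
z_left = NormalDist().inv_cdf(alpha / 2)       # analogous to qnorm(0.025) in R
print(round(z_left, 2), round(z_right, 2))  # -1.96 1.96
```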

In this fourth and last step, all we have to do is to compare the test statistic (computed in step #2) with the critical values (found in step #3) in order to conclude the hypothesis test .

The only two possibilities when concluding a hypothesis test are:

  • Rejection of the null hypothesis
  • Non-rejection of the null hypothesis

In our example of adult weight, remember that:

  • the t-stat is -2.189
  • the critical values are -2.262 and 2.262

Also remember that:

  • the t-stat gives an indication of how extreme our sample is compared to the null hypothesis
  • the critical values are the thresholds beyond which the t-stat is considered too extreme

To compare the t-stat with the critical values, I always recommend plotting them:

[Figure: t-stat of -2.189 plotted against the critical values -2.262 and 2.262]

These two critical values form the rejection regions (the red shaded areas):

  • from \(- \infty\) to -2.262, and
  • from 2.262 to \(\infty\)

If the t-stat lies within one of the rejection regions, we reject the null hypothesis . On the contrary, if the t-stat does not lie within any of the rejection regions, we do not reject the null hypothesis .

As we can see from the above plot, the t-stat is less extreme than the critical value and therefore does not lie within any of the rejection regions. In conclusion, we do not reject the null hypothesis that \(\mu = 80\) .
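The decision rule of method A can be written as a one-line check (a sketch with the values of our example hard-coded):

```python
t_obs = -2.189   # test statistic from step #2
t_crit = 2.262   # critical value from step #3 (two-sided, df = 9, alpha = 0.05)

# Two-sided test: reject H0 if the t-stat falls in either rejection region
reject_h0 = abs(t_obs) > t_crit
print(reject_h0)  # False -> we do not reject H0
```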

This is the conclusion in statistical terms, but it is meaningless without proper interpretation. So it is a good practice to also interpret the result in the context of the problem:

At the 5% significance level, we do not reject the hypothesis that the mean weight of Belgian adults is 80 kg.

From a more philosophical (but still very important) perspective, note that we wrote “we do not reject the null hypothesis” and “we do not reject the hypothesis that the mean weight of Belgian adults is equal to 80 kg”. We did not write “we accept the null hypothesis” nor “the mean weight of Belgian adults is 80 kg”.

The reason is due to the fact that, in hypothesis testing, we conclude something about the population based on a sample. There is, therefore, always some uncertainty and we cannot be 100% sure that our conclusion is correct.

Perhaps it is the case that the mean weight of Belgian adults is in reality different than 80 kg, but we failed to prove it based on the data at hand. It may be the case that if we had more observations, we would have rejected the null hypothesis (since all else being equal, a larger sample size implies a more extreme t-stat). Or, it may be the case that even with more observations, we would not have rejected the null hypothesis because the mean weight of Belgian adults is in reality close to 80 kg. We cannot distinguish between the two.
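To illustrate the effect of the sample size, here is what would happen, all else being equal and purely hypothetically, with 40 observations instead of 10:

```python
import math

x_bar, mu, s = 71, 80, 13  # same sample mean and standard deviation as before
for n in (10, 40):
    t_obs = (x_bar - mu) / (s / math.sqrt(n))
    print(n, round(t_obs, 3))
# With n = 40, the t-stat (about -4.38) is far more extreme than the
# critical value, so the same sample mean would lead to rejecting H0.
```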

So we can just say that we did not find enough evidence against the hypothesis that the mean weight of Belgian adults is 80 kg, but we do not conclude that the mean is equal to 80 kg.

If the difference is still not clear to you, the following example may help. Suppose a person is suspected of having committed a crime. This person is either innocent—the null hypothesis—or guilty—the alternative hypothesis. In the attempt to know if the suspect committed the crime, the police collect as much information and proof as possible. This is similar to the researcher collecting data to form a sample. And then the judge, based on the collected evidence, decides whether the suspect is considered innocent or guilty. If there is enough evidence that the suspect committed the crime, the judge will conclude that the suspect is guilty. In other words, she will reject the null hypothesis of the suspect being innocent because there is enough evidence that the suspect committed the crime.

This is similar to the t-stat being more extreme than the critical value: we have enough information (based on the sample) to say that the null hypothesis is unlikely because our data would be too extreme if the null hypothesis were true. Since the sample cannot be “wrong” (it corresponds to the collected data), the only remaining possibility is that the null hypothesis is in fact wrong. This is the reason we write “we reject the null hypothesis”.

On the other hand, if there is not enough evidence that the suspect committed the crime (or no evidence at all), the judge will conclude that the suspect is considered as not guilty. In other words, she will not reject the null hypothesis of the suspect being innocent. But even if she concludes that the suspect is considered as not guilty, she will never be 100% sure that he is really innocent.

It may be the case that:

  • the suspect did not commit the crime, or
  • the suspect committed the crime but the police was not able to collect enough information against the suspect.

In the former case the suspect is really innocent, whereas in the latter case the suspect is guilty but the police and the judge failed to prove it because they failed to find enough evidence against him. Similar to hypothesis testing, the judge has to conclude the case by considering the suspect not guilty, without being able to distinguish between the two.

This is the main reason we write “we do not reject the null hypothesis” or “we fail to reject the null hypothesis” (you may even read in some textbooks conclusions such as “there is not sufficient evidence in the data to reject the null hypothesis”), and we do not write “we accept the null hypothesis”.

I hope this metaphor helped you to understand the reason why we reject the null hypothesis instead of accepting it.

In the following sections, we present two other methods used in hypothesis testing.

These methods will result in the exact same conclusion: non-rejection of the null hypothesis, that is, we do not reject the hypothesis that the mean weight of Belgian adults is 80 kg. They are thus presented only in case you prefer to use these methods over the first one.

Method B, which consists in computing the p -value and comparing this p -value with the significance level \(\alpha\) , boils down to the following 4 steps:

  • Stating the null and alternative hypothesis
  • Computing the test statistic
  • Computing the p -value
  • Concluding and interpreting the results

In this second method, which uses the p -value, the first and second steps are the same as in the first method.

The null and alternative hypotheses remain the same:

  • \(H_0: \mu = 80\)
  • \(H_1: \mu \ne 80\)

Remember that the formula for the t-stat is different depending on the type of hypothesis test (one or two means, one or two proportions, one or two variances). In our case of one mean with unknown variance, we have:

\[t_{obs} = \frac{\bar{x} - \mu}{\frac{s}{\sqrt{n}}} = \frac{71 - 80}{\frac{13}{\sqrt{10}}} = -2.189\]

The p -value is the probability (so it goes from 0 to 1) of observing a sample at least as extreme as the one we observed if the null hypothesis were true. In some sense, it gives you an indication of how likely your null hypothesis is . It is also defined as the smallest level of significance for which the data indicate rejection of the null hypothesis.

For more information about the p -value, I recommend reading this note about the p -value and the significance level \(\alpha\) .

Formally, the p -value is the area beyond the test statistic. Since we are doing a two-sided test, the p -value is thus the sum of the area above 2.189 and below -2.189.

Visually, the p -value is the sum of the two blue shaded areas in the following plot:

[Figure: two-sided p -value shown as the two blue shaded areas, below -2.189 and above 2.189]

The p -value can be computed precisely in R with the pt() function: for our two-sided test, it is 2 * pt(q = -2.189, df = 9).

The p -value is 0.0563, which indicates that there is a 5.63% chance of observing a sample at least as extreme as the one observed if the null hypothesis were true. This already gives us a hint on whether our t-stat is too extreme or not (and thus whether our null hypothesis is likely or not), but we formally conclude in step #4.
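For the curious, the same p-value can be reproduced without R, using the standard identity linking the Student t distribution to the regularized incomplete beta function. The sketch below follows the classic continued-fraction recipe for that function; the helper names are ours and this is illustrative code, not a production implementation:

```python
import math

def _betacf(a, b, x):
    """Continued fraction for the incomplete beta function (Lentz's method)."""
    FPMIN, EPS = 1e-300, 3e-12
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    if abs(d) < FPMIN:
        d = FPMIN
    d = 1.0 / d
    h = d
    for m in range(1, 200):
        m2 = 2 * m
        # odd step, then even step of the continued fraction
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            if abs(d) < FPMIN:
                d = FPMIN
            c = 1.0 + aa / c
            if abs(c) < FPMIN:
                c = FPMIN
            d = 1.0 / d
            h *= d * c
        if abs(d * c - 1.0) < EPS:
            break
    return h

def reg_inc_beta(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_bt = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
             + a * math.log(x) + b * math.log(1.0 - x))
    bt = math.exp(ln_bt)
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def two_sided_p_value(t, df):
    """P(|T| > |t|) for T ~ Student(df), via I_x(df/2, 1/2)."""
    x = df / (df + t * t)
    return reg_inc_beta(df / 2.0, 0.5, x)

print(round(two_sided_p_value(-2.189, 9), 4))  # 0.0563
```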

Like the qt() function to find the critical value, we use pt() to find the p -value because the underlying distribution is the Student’s distribution.

Use pnorm() , pchisq() and pf() for the Normal, Chi-square and Fisher distribution, respectively. See also this Shiny app to compute the p -value given a certain t-stat for most probability distributions.

If you do not have access to a computer (during exams for example) you will not be able to compute the p -value precisely, but you can bound it using the statistical table referring to your test.

In our case, we use the Student distribution and we look at the row df = 9 (since df = n - 1):

[Figure: Student’s t-distribution table, row df = 9, with the columns \(t_{.05}\) and \(t_{.025}\) highlighted in blue]

  • The test statistic is -2.189
  • We take the absolute value, which gives 2.189
  • The value 2.189 is between 1.833 and 2.262 (highlighted in blue in the above table)
  • the area to the right of 1.833 is 0.05
  • the area to the right of 2.262 is 0.025
  • So we know that the area to the right of 2.189 must be between 0.025 and 0.05
  • Since the Student distribution is symmetric, we know that the area to the left of -2.189 must also be between 0.025 and 0.05
  • Therefore, the sum of the two areas must be between 0.05 and 0.10
  • In other words, the p -value is between 0.05 and 0.10 (i.e., 0.05 < p -value < 0.10)
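The bounding logic above can be sketched in a few lines of Python (the dictionary stands in for the df = 9 row of the table):

```python
# Excerpt of the df = 9 row of a Student table: t value -> area to its right
row_df9 = {1.833: 0.05, 2.262: 0.025}

t_abs = abs(-2.189)
area_upper = max(a for t, a in row_df9.items() if t > t_abs)  # right of 2.262
area_lower = min(a for t, a in row_df9.items() if t < t_abs)  # right of 1.833
# The two-sided p-value lies between twice these one-sided areas
p_low, p_high = 2 * area_upper, 2 * area_lower
print(p_low, "< p-value <", p_high)  # 0.05 < p-value < 0.1
```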

Although we could not compute it precisely, it is enough to conclude our hypothesis test in the last step.

The final step is now to simply compare the p -value (computed in step #3) with the significance level \(\alpha\) . As for all statistical tests :

  • If the p -value is smaller than \(\alpha\) ( p -value < 0.05) \(\rightarrow H_0\) is unlikely \(\rightarrow\) we reject the null hypothesis
  • If the p -value is greater than or equal to \(\alpha\) ( p -value \(\ge\) 0.05) \(\rightarrow H_0\) is likely \(\rightarrow\) we do not reject the null hypothesis

No matter if we take into consideration the exact p -value (i.e., 0.0563) or the bounded one (0.05 < p -value < 0.10), it is larger than 0.05, so we do not reject the null hypothesis. 10 In the context of the problem, we do not reject the null hypothesis that the mean weight of Belgian adults is 80 kg.
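In code form, the decision rule of method B (with our example's values hard-coded) is simply:

```python
p_value = 0.0563   # exact p-value from step #3
alpha = 0.05       # significance level

reject_h0 = p_value < alpha
print(reject_h0)  # False -> we do not reject H0
```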

Remember that rejecting (or not rejecting) a null hypothesis at the significance level \(\alpha\) using the critical value method (method A) is equivalent to rejecting (or not rejecting) the null hypothesis when the p -value is lower than (respectively, greater than or equal to) \(\alpha\) (method B).

This is the reason we find the exact same conclusion as with method A, and why you should too if you use both methods on the same data and with the same significance level.

Method C, which consists in computing the confidence interval and comparing this confidence interval with the target parameter (the parameter under the null hypothesis), boils down to the following 3 steps:

  • Stating the null and alternative hypothesis
  • Computing the confidence interval
  • Concluding and interpreting the results

In this last method, which uses the confidence interval, the first step is the same as in the first two methods.

Like hypothesis testing, confidence intervals are a well-known tool in inferential statistics.

A confidence interval is an estimation procedure which produces an interval (i.e., a range of values) containing the true parameter with a certain —usually high— probability .

In the same way that there is a formula for each type of hypothesis test when computing the test statistics, there exists a formula for each type of confidence interval. Formulas for the different types of confidence intervals can be found in this Shiny app .

Here is the formula for a confidence interval on one mean \(\mu\) (with unknown population variance):

\[ (1-\alpha)\text{% CI for } \mu = \bar{x} \pm t_{\alpha/2, n - 1} \frac{s}{\sqrt{n}} \]

where \(t_{\alpha/2, n - 1}\) is found in the Student distribution table (and is similar to the critical value found in step #3 of method A).

Given our data and with \(\alpha\) = 0.05, we have:

\[ \begin{aligned} 95\text{% CI for } \mu &= \bar{x} \pm t_{\alpha/2, n - 1} \frac{s}{\sqrt{n}} \\ &= 71 \pm 2.262 \frac{13}{\sqrt{10}} \\ &= [61.70; 80.30] \end{aligned} \]
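The interval (and the step #3 check that follows) can be reproduced in Python, reusing the critical value 2.262 found earlier (a sketch with our example's values hard-coded):

```python
import math

x_bar, s, n = 71, 13, 10
t_crit = 2.262  # t_{alpha/2, n-1} for alpha = 0.05 and df = 9

margin = t_crit * s / math.sqrt(n)
lower, upper = x_bar - margin, x_bar + margin
print(round(lower, 2), round(upper, 2))  # 61.7 80.3

# The hypothesized value 80 falls inside the interval,
# so we will not reject H0 in the final step
print(lower <= 80 <= upper)  # True
```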

The 95% confidence interval for \(\mu\) is [61.70; 80.30] kg. But what does a 95% confidence interval mean?

We know that this estimation procedure has a 95% probability of producing an interval containing the true mean \(\mu\) . In other words, if we construct many confidence intervals (with different samples of the same size), 95% of them will , on average, include the mean of the population (the true parameter). So on average, 5% of these confidence intervals will not cover the true mean.

If you wish to decrease this last percentage, you can decrease the significance level (set \(\alpha\) = 0.01 or 0.02 for instance). All else being equal, this will increase the range of the confidence interval and thus increase the probability that it includes the true parameter.

The final step is simply to compare the confidence interval (constructed in step #2) with the value of the target parameter (the value under the null hypothesis, mentioned in step #1):

  • If the confidence interval does not include the hypothesized value \(\rightarrow H_0\) is unlikely \(\rightarrow\) we reject the null hypothesis
  • If the confidence interval includes the hypothesized value \(\rightarrow H_0\) is likely \(\rightarrow\) we do not reject the null hypothesis

In our example:

  • the hypothesized value is 80 (since \(H_0: \mu\) = 80)
  • 80 is included in the 95% confidence interval since it goes from 61.70 to 80.30 kg
  • So we do not reject the null hypothesis

In the terms of the problem, we do not reject the hypothesis that the mean weight of Belgian adults is 80 kg.

As you can see, the conclusion is the same as with the critical value method (method A) and the p -value method (method B). Again, this must be the case since we use the same data and the same significance level \(\alpha\) for all three methods.

All three methods give the same conclusion. However, each method has its own advantage so I usually select the most convenient one depending on the situation:

  • Method A (critical value): it is, in my opinion, the easiest and most straightforward method of the three when I do not have access to R.
  • Method B ( p -value): in addition to knowing whether the null hypothesis is rejected or not, computing the exact p -value can be very convenient, so I tend to use this method when I have access to R.
  • Method C (confidence interval): if I need to test several hypothesized values , I tend to choose this method because I can construct one single confidence interval and compare it to as many values as I want. For example, with our 95% confidence interval [61.70; 80.30], I know that any value below 61.70 kg or above 80.30 kg will be rejected, without testing each value separately.

In this article, we reviewed the goals and when hypothesis testing is used. We then showed how to do a hypothesis test by hand through three different methods (A. critical value , B. p -value and C. confidence interval ). We also showed how to interpret the results in the context of the initial problem.

Although all three methods give the exact same conclusion when using the same data and the same significance level (otherwise there is a mistake somewhere), I also presented my personal preferences when it comes to choosing one method over the other two.

Thanks for reading.

I hope this article helped you to understand how to perform a hypothesis test by hand. I remind you that, at least for the 6 hypothesis tests covered in this article, the formulas are different, but the structure and the reasoning behind them remain the same. So you basically have to know which formulas to use, and simply follow the steps mentioned in this article.

For the interested reader, I created two accompanying Shiny apps:

  • Hypothesis testing and confidence intervals : after entering your data, the app illustrates all the steps in order to conclude the test and compute a confidence interval. See more information in this article .
  • How to read statistical tables : the app helps you to compute the p -value given a t-stat for most probability distributions. See more information in this article .

As always, if you have a question or a suggestion related to the topic covered in this article, please add it as a comment so other readers can benefit from the discussion.

Suppose a researcher wants to test whether Belgian women are taller than French women. Suppose a health professional would like to know whether the proportion of smokers is different among athletes and non-athletes. It would take way too long to measure the height of all Belgian and French women and to ask all athletes and non-athletes their smoking habits. So most of the time, decisions are based on a representative sample of the population and not on the whole population. If we could measure the entire population in a reasonable time frame, we would not do any inferential statistics. ↩︎

Don’t get me wrong, this does not mean that hypothesis tests are never used in exploratory analyses. It is just much less frequent in exploratory research than in confirmatory research. ↩︎

You may see more or fewer steps in other articles or textbooks, depending on whether these steps are detailed or concise. Hypothesis testing should, however, follow the same process regardless of the number of steps. ↩︎

For one-sided tests, writing \(H_0: \mu = 80\) or \(H_0: \mu \ge 80\) are both correct. The point is that the null and alternative hypothesis must be mutually exclusive since you are testing one hypothesis against the other, so both cannot be true at the same time. ↩︎

To be complete, there are even different formulas within each type of test, depending on whether some assumptions are met or not. For the interested reader, see all the different scenarios and thus the different formulas for a test on one mean and on two means . ↩︎

There is more uncertainty if the population variance is unknown than if it is known, and this greater uncertainty is taken into account by using the Student distribution instead of the standard Normal distribution. Also note that as the sample size increases, the degrees of freedom of the Student distribution increase and the two distributions become more and more similar. For large sample sizes (usually from \(n >\) 30), the Student distribution becomes so close to the standard Normal distribution that, even if the population variance is unknown, the standard Normal distribution can be used. ↩︎

For a test on two independent samples, the degrees of freedom is \(n_1 + n_2 - 2\) , where \(n_1\) and \(n_2\) are the size of the first and second sample, respectively. Note the - 2 due to the fact that in this case, two quantities are estimated. ↩︎

The type II error is the probability of not rejecting the null hypothesis although it is in reality false. ↩︎

Whether this is a good or a bad standard is a question that comes up often and is debatable. This is, however, beyond the scope of the article. ↩︎

Again, p -values found via a statistical table or via R must be coherent. ↩︎


Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base
  • Chi-Square (Χ²) Tests | Types, Formula & Examples

Chi-Square (Χ²) Tests | Types, Formula & Examples

Published on May 23, 2022 by Shaun Turney . Revised on June 22, 2023.

A Pearson’s chi-square test is a statistical test for categorical data. It is used to determine whether your data are significantly different from what you expected. There are two types of Pearson’s chi-square tests:

  • The chi-square goodness of fit test is used to test whether the frequency distribution of a categorical variable is different from your expectations.
  • The chi-square test of independence is used to test whether two categorical variables are related to each other.

Table of contents

  • What is a chi-square test?
  • The chi-square formula
  • When to use a chi-square test
  • Types of chi-square tests
  • How to perform a chi-square test
  • How to report a chi-square test
  • Practice questions
  • Other interesting articles
  • Frequently asked questions about chi-square tests

Pearson’s chi-square (Χ²) tests, often referred to simply as chi-square tests, are among the most common nonparametric tests. Nonparametric tests are used for data that don’t follow the assumptions of parametric tests, especially the assumption of a normal distribution.

If you want to test a hypothesis about the distribution of a categorical variable you’ll need to use a chi-square test or another nonparametric test. Categorical variables can be nominal or ordinal and represent groupings such as species or nationalities. Because they can only have a few specific values, they can’t have a normal distribution.

Test hypotheses about frequency distributions

There are two types of Pearson’s chi-square tests, but they both test whether the observed frequency distribution of a categorical variable is significantly different from its expected frequency distribution. A frequency distribution describes how observations are distributed between different groups.

Frequency distributions are often displayed using frequency distribution tables . A frequency distribution table shows the number of observations in each group. When there are two categorical variables, you can use a specific type of frequency distribution table called a contingency table to show the number of observations in each combination of groups.

Frequency of visits by bird species at a bird feeder during a 24-hour period
Bird species Frequency
House sparrow 15
House finch 12
Black-capped chickadee 9
Common grackle 8
European starling 8
Mourning dove 6
Contingency table of the handedness of a sample of Americans and Canadians
Right-handed Left-handed
American 236 19
Canadian 157 16


Both of Pearson’s chi-square tests use the same formula to calculate the test statistic, chi-square (Χ²):

\begin{equation*} X^2=\sum{\frac{(O-E)^2}{E}} \end{equation*}

  • Χ² is the chi-square test statistic
  • Σ is the summation operator (it means “take the sum of”)
  • O is the observed frequency
  • E is the expected frequency

The larger the difference between the observations and the expectations ( O − E in the equation), the bigger the chi-square will be. To decide whether the difference is big enough to be statistically significant , you compare the chi-square value to a critical value.
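As an illustration, the chi-square formula can be applied by hand to the bird feeder frequencies from the table above (a minimal sketch in R; the equal expected proportions are the null hypothesis of the goodness of fit example discussed later):

```r
# Bird feeder visits from the table above; equal expected counts under H0
observed <- c(15, 12, 9, 8, 8, 6)
expected <- rep(sum(observed) / length(observed), length(observed))

chi_sq <- sum((observed - expected)^2 / expected)
chi_sq  # about 5.52
```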

A Pearson’s chi-square test may be an appropriate option for your data if all of the following are true:

  • You want to test a hypothesis about one or more categorical variables . If one or more of your variables is quantitative, you should use a different statistical test . Alternatively, you could convert the quantitative variable into a categorical variable by separating the observations into intervals.
  • The sample was randomly selected from the population .
  • The expected frequency is at least five in each group or combination of groups.

The two types of Pearson’s chi-square tests are:

  • Chi-square goodness of fit test
  • Chi-square test of independence

Mathematically, these are actually the same test. However, we often think of them as different tests because they’re used for different purposes.

You can use a chi-square goodness of fit test when you have one categorical variable. It allows you to test whether the frequency distribution of the categorical variable is significantly different from your expectations. Often, but not always, the expectation is that the categories will have equal proportions.

Expectation of equal proportions

  • Null hypothesis ( H 0 ): The bird species visit the bird feeder in equal proportions.
  • Alternative hypothesis ( H A ): The bird species visit the bird feeder in different proportions.

Expectation of different proportions

  • Null hypothesis ( H 0 ): The bird species visit the bird feeder in the same proportions as the average over the past five years.
  • Alternative hypothesis ( H A ): The bird species visit the bird feeder in different proportions from the average over the past five years.

You can use a chi-square test of independence when you have two categorical variables. It allows you to test whether the two variables are related to each other. If two variables are independent (unrelated), the probability of belonging to a certain group of one variable isn’t affected by the other variable .

  • Null hypothesis ( H 0 ): The proportion of people who are left-handed is the same for Americans and Canadians.
  • Alternative hypothesis ( H A ): The proportion of people who are left-handed differs between nationalities.
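
This test can be run directly on the contingency table shown earlier (a sketch in R; the object name is illustrative):

```r
# Contingency table of handedness by nationality (from the table above)
handedness <- matrix(c(236, 19,
                       157, 16),
                     nrow = 2, byrow = TRUE,
                     dimnames = list(Nationality = c("American", "Canadian"),
                                     Handedness = c("Right-handed", "Left-handed")))

chisq.test(handedness)  # Yates' continuity correction is applied by default for 2x2 tables
```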

Other types of chi-square tests

Some consider the chi-square test of homogeneity to be another variety of Pearson’s chi-square test. It tests whether two populations come from the same distribution by determining whether the two populations have the same proportions as each other. You can consider it simply a different way of thinking about the chi-square test of independence.

McNemar’s test is a test that uses the chi-square test statistic. It isn’t a variety of Pearson’s chi-square test, but it’s closely related. You can conduct this test when you have a related pair of categorical variables that each have two groups. It allows you to determine whether the proportions of the variables are equal.

Contingency table of ice cream flavor preference
Like chocolate Dislike chocolate
Like vanilla 47 32
Dislike vanilla 8 13
  • Null hypothesis ( H 0 ): The proportion of people who like chocolate is the same as the proportion of people who like vanilla.
  • Alternative hypothesis ( H A ): The proportion of people who like chocolate is different from the proportion of people who like vanilla.
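
In R, McNemar's test can be applied to the ice cream table above (a sketch; the object name is illustrative):

```r
# Paired preferences from the ice cream table above
flavors <- matrix(c(47, 32,
                     8, 13),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(Vanilla = c("Like", "Dislike"),
                                  Chocolate = c("Like", "Dislike")))

mcnemar.test(flavors)  # based on the discordant cells (32 and 8)
```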

There are several other types of chi-square tests that are not Pearson’s chi-square tests, including the test of a single variance and the likelihood ratio chi-square test .


The exact procedure for performing a Pearson’s chi-square test depends on which test you’re using, but it generally follows these steps:

  • Create a table of the observed and expected frequencies. This can sometimes be the most difficult step because you will need to carefully consider which expected values are most appropriate for your null hypothesis.
  • Calculate the chi-square value from your observed and expected frequencies using the chi-square formula.
  • Find the critical chi-square value in a chi-square critical value table or using statistical software.
  • Compare the chi-square value to the critical value to determine which is larger.
  • Decide whether to reject the null hypothesis. You should reject the null hypothesis if the chi-square value is greater than the critical value. If you reject the null hypothesis, you can conclude that your data are significantly different from what you expected.
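
Continuing the bird feeder example, steps 3 to 5 can be sketched in R (the chi-square value of 5.52 comes from the earlier hand calculation; the variable names are illustrative):

```r
alpha <- 0.05
df <- 5                        # 6 bird species - 1
critical <- qchisq(1 - alpha, df)
critical                       # about 11.07

chi_sq <- 5.52                 # chi-square value from the bird feeder data
chi_sq > critical              # FALSE, so we fail to reject H0
```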

If you decide to include a Pearson’s chi-square test in your research paper , dissertation or thesis , you should report it in your results section . You can follow these rules if you want to report statistics in APA Style :

  • You don’t need to provide a reference or formula since the chi-square test is a commonly used statistic.
  • Refer to chi-square using its Greek symbol, Χ². Although the symbol looks very similar to an “X” from the Latin alphabet, it’s actually a different symbol. Greek symbols should not be italicized.
  • Include a space on either side of the equal sign.
  • If your chi-square is less than one, you should include a leading zero (a zero before the decimal point) since the chi-square can be greater than one.
  • Provide two significant digits after the decimal point.
  • Report the chi-square alongside its degrees of freedom, sample size, and p value, following this format: Χ²(degrees of freedom, N = sample size) = chi-square value, p = p value.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis

Methodology

  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

The two main chi-square tests are the chi-square goodness of fit test and the chi-square test of independence .

Both chi-square tests and t tests can test for differences between two groups. However, a t test is used when you have a dependent quantitative variable and an independent categorical variable (with two groups). A chi-square test of independence is used when you have two categorical variables.

Both correlations and chi-square tests can test for relationships between two variables. However, a correlation is used when you have two quantitative variables and a chi-square test of independence is used when you have two categorical variables.

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .


Turney, S. (2023, June 22). Chi-Square (Χ²) Tests | Types, Formula & Examples. Scribbr. Retrieved September 23, 2024, from https://www.scribbr.com/statistics/chi-square-tests/



How to Find Critical Value for One-Sided and Two-Sided t-Test in R

When conducting hypothesis tests, such as the t-test, critical values are essential for determining whether to reject or fail to reject the null hypothesis. The t-distribution is used to find these critical values, especially when sample sizes are small. In this article, we will explore how to find the critical value for one-sided and two-sided t-tests in R, based on the significance level (α) and degrees of freedom.

Parameters Required for Finding Critical Value

  • Significance Level (α) : The probability of making a Type I error (rejecting the null hypothesis when it is true). Typical values are 0.05 or 0.01.
  • Degrees of Freedom (df) : The number of independent values that can vary in an analysis. For t-tests, it is calculated as df = n – 1, where n is the sample size.

Finding Critical Value for One-Sided t-Test in R

In a one-sided t-test, the critical value marks the point at which the test statistic is considered extreme in one direction (either greater than or less than a hypothesized value). The critical value for a one-sided t-test can be found using the qt() function in R .

qt(p, df, lower.tail = TRUE)

  • p: Probability value (1 – significance level for a one-sided test).
  • df: Degrees of freedom.
  • lower.tail: Logical; if TRUE, the lower tail of the distribution is returned, otherwise the upper tail.

Example 1: Right-Tailed One-Sided t-Test

Suppose we have a sample size of 20, and we want to find the critical value for a right-tailed one-sided t-test with a significance level of 0.05. The degrees of freedom are df=20−1=19.
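
This can be computed with qt() as follows (a minimal sketch; the variable names are illustrative):

```r
alpha <- 0.05
df <- 19                # n - 1 = 20 - 1

qt(1 - alpha, df)       # about 1.729
```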

This means the critical value for a right-tailed one-sided t-test is approximately 1.729 when α=0.05 and df=19. If your t-statistic exceeds this value, you reject the null hypothesis.

Example 2: Left-Tailed One-Sided t-Test

For a left-tailed one-sided t-test, we adjust the qt() function to return the lower tail critical value.
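
A sketch of the corresponding qt() call (variable names are illustrative):

```r
alpha <- 0.05
df <- 19

qt(alpha, df)                       # about -1.729
qt(alpha, df, lower.tail = TRUE)    # equivalent: lower.tail = TRUE is the default
```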

The critical value for a left-tailed one-sided t-test is approximately -1.729, meaning if your t-statistic is smaller than this value, you reject the null hypothesis.

Finding Critical Value for Two-Sided t-Test in R

In a two-sided t-test, the critical region is split between two tails of the distribution. The critical value for a two-sided test can be found similarly using the qt() function, but we divide the significance level by 2 (since the critical value is in both tails).
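
For example, with the same sample size of 20 (df = 19) and α = 0.05, a sketch of the two-sided computation is:

```r
alpha <- 0.05
df <- 19

qt(1 - alpha / 2, df)   # about  2.093, upper critical value
qt(alpha / 2, df)       # about -2.093, lower critical value
```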

For a two-sided t-test, if the t-statistic falls below -2.093 or above 2.093, we reject the null hypothesis.

Visualizing the Critical Value on a t-Distribution

Visualizing the critical value on a t-distribution helps in understanding the critical region where the null hypothesis would be rejected.

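One way to produce such a plot in base R (a sketch, assuming df = 19, α = 0.05, and a right-tailed test; the variable names are illustrative):

```r
df <- 19
alpha <- 0.05
crit <- qt(1 - alpha, df)

# Draw the t-density, then shade the rejection region in the right tail
curve(dt(x, df), from = -4, to = 4,
      xlab = "t", ylab = "Density",
      main = "t-distribution (df = 19) with rejection region")

x_tail <- seq(crit, 4, length.out = 200)
polygon(c(crit, x_tail, 4), c(0, dt(x_tail, df), 0),
        col = "lightcoral", border = NA)
abline(v = crit, lty = 2)
```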

This plot will show the t-distribution with shaded areas indicating the critical region for rejecting the null hypothesis.

Finding the critical value for a one-sided or two-sided t-test in R is straightforward using the qt() function. The critical value depends on the significance level and degrees of freedom, and it marks the boundary where the test statistic would lead to rejecting the null hypothesis. For one-sided tests, we focus on one tail (either upper or lower), while for two-sided tests, the critical region is split between both tails of the t-distribution.


