p-Value Calculator

Find the p-value for any hypothesis test - Z, t, F, or chi-square - in one click.


📖 What is a p-Value?

The p-value is the probability of obtaining a test statistic as extreme as (or more extreme than) the one observed, assuming the null hypothesis (H₀) is true. It quantifies the evidence against the null hypothesis: a small p-value means the observed data would be unlikely if H₀ were true, providing evidence to reject it.

The p-value was introduced by Karl Pearson and popularised by Ronald Fisher in the 1920s as a measure of evidence against the null hypothesis. It has become the most widely used - and most often misunderstood - concept in statistics. Crucially, the p-value is not the probability that the null hypothesis is true. It is a conditional probability: P(data as extreme as observed | H₀ true).

In practice, p-values are used in medicine (clinical trials), psychology (experimental studies), economics (econometric analysis), quality control (process testing), and machine learning (feature selection). The conventional threshold of 0.05 was proposed by Fisher as a rule of thumb, but many fields now require stricter thresholds (0.01 or even 0.001) to reduce false positives.

This calculator computes p-values from four types of test statistics - Z, t, F, and chi-square - the four most common in introductory and intermediate statistics.

📐 Formulas

Two-tailed Z: p = 2 × (1 − Φ(|Z|))

One-tailed right Z: p = 1 − Φ(Z)

One-tailed left Z: p = Φ(Z)

t-test (df), two-tailed: p = 2 × P(T > |t|) using t-distribution with df degrees of freedom; one-tailed: p = P(T > t) or p = P(T < t)

F-test (df₁, df₂): p = P(F > f) using F-distribution (always right-tailed)

Chi-square (df): p = P(χ² > χ²_obs) using chi-square distribution (always right-tailed)

Φ = standard normal CDF (area to the left of Z)

α = significance level. If p ≤ α: reject H₀. If p > α: fail to reject H₀.
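These formulas map one-to-one onto the survival functions (`sf`, the right-tail area) of SciPy's distribution objects. A minimal sketch of the computation, assuming SciPy is installed (the `p_value` helper is illustrative, not this calculator's actual code):

```python
from scipy.stats import norm, t, f, chi2

def p_value(stat, test="z", tail="two", df=None, df1=None, df2=None):
    """p-value for a Z, t, F, or chi-square test statistic.

    test: "z", "t", "f", or "chi2"; tail: "two", "right", "left"
    (F and chi-square are always right-tailed).
    """
    if test == "z":
        if tail == "two":
            return 2 * norm.sf(abs(stat))   # 2 * (1 - Phi(|Z|))
        return norm.sf(stat) if tail == "right" else norm.cdf(stat)
    if test == "t":
        if tail == "two":
            return 2 * t.sf(abs(stat), df)  # 2 * P(T > |t|)
        return t.sf(stat, df) if tail == "right" else t.cdf(stat, df)
    if test == "f":
        return f.sf(stat, df1, df2)         # P(F > f), right-tailed
    if test == "chi2":
        return chi2.sf(stat, df)            # P(chi2 > obs), right-tailed
    raise ValueError(f"unknown test: {test}")
```

Compare p_value(...) against α to reach the verdict: reject H₀ when p ≤ α, otherwise fail to reject.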

📖 How to Use This Calculator

1. Select the type of test statistic: Z, t, F, or Chi-square. The required input fields update automatically.
2. Enter the test statistic value computed from your data. For t and F, also enter the degrees of freedom.
3. Select the tail type based on your alternative hypothesis: two-tailed (≠), right-tailed (>), or left-tailed (<).
4. Choose the significance level α (default 0.05) and click Calculate p-Value.
5. The p-value, significance verdict, and plain-English conclusion appear instantly.

📝 Example Calculations

Example 1 - Z-test, Two-tailed

A Z-statistic of 2.31 is calculated from a large sample test. Tail: two-tailed. α = 0.05.

p = 2 × (1 − Φ(2.31)) = 2 × (1 − 0.9896) = 2 × 0.0104 = 0.0209

p (0.021) < α (0.05): Reject H₀ - statistically significant at 5% level.

Result = p = 0.0209 (Significant)
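This arithmetic can be checked with Python's standard library alone, since `statistics.NormalDist` supplies Φ (a quick sketch):

```python
from statistics import NormalDist

z = 2.31
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed Z-test p-value
print(round(p, 4))  # → 0.0209
```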

Example 2 - t-test, Two-tailed, df = 15

t-statistic = 2.10, df = 15, two-tailed, α = 0.05.

From t-distribution with 15 df, p ≈ 0.053

p (0.053) > α (0.05): Fail to reject H₀ - not significant at 5% level (borderline).

Result = p ≈ 0.053 (Not Significant)
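The t-distribution tail area is not in the Python standard library; a quick check with SciPy (assuming it is installed):

```python
from scipy.stats import t

p = 2 * t.sf(2.10, df=15)  # two-tailed: 2 * P(T > |t|)
print(round(p, 3))         # just above the 0.05 threshold
```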

Example 3 - Chi-square, df = 3

χ² = 9.21, df = 3 (always right-tailed), α = 0.05.

p ≈ 0.027 - less than 0.05: Reject H₀. The categorical distribution differs from expected.

Result = p ≈ 0.027 (Significant)
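A quick SciPy check of the chi-square right-tail area (assuming SciPy is installed):

```python
from scipy.stats import chi2

p = chi2.sf(9.21, df=3)  # P(chi-square > 9.21), always right-tailed
print(round(p, 3))
```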

Example 4 - F-test ANOVA, df₁ = 3, df₂ = 36

F = 4.28, df₁ = 3, df₂ = 36, α = 0.05.

p ≈ 0.010 - less than 0.05: Reject H₀. At least one group mean differs significantly.

Result = p ≈ 0.010 (Significant)
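Verifying with SciPy's F-distribution (assuming SciPy is installed; `scipy.stats.f` names its numerator and denominator degrees of freedom `dfn` and `dfd`):

```python
from scipy.stats import f

p = f.sf(4.28, dfn=3, dfd=36)  # P(F > 4.28), always right-tailed
print(round(p, 3))
```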

Example 5 - One-tailed t-test (right), df = 29

t = 1.70, df = 29, right-tailed (H₁: μ > μ₀), α = 0.05.

p ≈ 0.050 - exactly at the threshold. Borderline significance - report both the p-value and effect size.

Result = p ≈ 0.050 (Borderline)
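The one-tailed t probability can likewise be checked with SciPy (assuming it is installed); note there is no factor of 2 for a directional test:

```python
from scipy.stats import t

p = t.sf(1.70, df=29)  # right-tailed: P(T > 1.70)
print(round(p, 3))     # sits right at the 0.05 threshold
```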

❓ Frequently Asked Questions

What is a p-value?
A p-value is the probability of observing a test statistic as extreme as - or more extreme than - the one calculated from your sample data, assuming the null hypothesis is true. A small p-value (typically < 0.05) suggests the observed result would be rare if the null hypothesis were true, providing evidence against it.
What does p < 0.05 mean?
If p < 0.05, there is less than a 5% probability of observing your result (or something more extreme) by chance alone, under the null hypothesis. Conventionally, this is the threshold for rejecting the null hypothesis at the 5% significance level (α = 0.05). The result is described as 'statistically significant'.
What is the difference between one-tailed and two-tailed p-values?
A one-tailed test tests directional hypotheses (e.g., μ > μ₀ or μ < μ₀). The p-value is the area in one tail of the distribution. A two-tailed test tests non-directional hypotheses (μ ≠ μ₀). The p-value is the area in both tails combined. For the same test statistic, the two-tailed p-value is exactly double the one-tailed p-value.
What is a statistically significant p-value?
Statistical significance depends on the chosen significance level (α). Common thresholds: α = 0.05 (social sciences), α = 0.01 (stricter, medicine), α = 0.001 (physics/large-scale trials). If p < α, the result is statistically significant and the null hypothesis is rejected.
Can a low p-value prove causation?
No. A low p-value indicates that the observed data is unlikely under the null hypothesis, but it does not prove the alternative hypothesis is true, nor does it establish causation. Causation requires experimental design (randomised controlled trials), not just statistical significance.
What is the difference between p-value and confidence interval?
A p-value summarises the evidence against the null hypothesis in a single number. A confidence interval gives a range of plausible values for the true parameter. They are related: if the p-value for H₀: μ = μ₀ is less than α, then μ₀ falls outside the (1−α)×100% confidence interval for μ.
How do I calculate a p-value from a Z-score?
For a right-tailed test: p = 1 − Φ(z). For a left-tailed test: p = Φ(z). For a two-tailed test: p = 2 × (1 − Φ(|z|)). Φ is the standard normal CDF. For example, Z = 2.0 gives a two-tailed p-value of 2 × (1 − 0.97725) = 0.0455.
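All three tail formulas can be evaluated without any third-party packages, since Python's standard library ships a normal CDF (a quick sketch):

```python
from statistics import NormalDist

phi = NormalDist().cdf  # standard normal CDF, no third-party packages needed
z = 2.0
p_right = 1 - phi(z)           # right-tailed
p_left = phi(z)                # left-tailed
p_two = 2 * (1 - phi(abs(z)))  # two-tailed
print(round(p_two, 4))  # → 0.0455
```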
How do I calculate a p-value from a t-statistic?
For a t-test with df degrees of freedom: the p-value is calculated from the t-distribution CDF. This calculator handles the computation automatically - just enter the t-statistic and degrees of freedom. For large df (> 30), the t-distribution approaches the standard normal.
Why is my p-value larger than expected?
Common reasons: small sample size (low statistical power), large variability in your data, the true effect size is small, or the null hypothesis is actually true. A non-significant p-value does not prove the null - it only means the data didn't provide enough evidence to reject it.
What is the p-value for an F-statistic?
The p-value for an F-statistic is the right-tail probability P(F > f_obs) from the F-distribution with numerator df₁ and denominator df₂ degrees of freedom. F-tests are used in ANOVA and regression to test whether group variances or regression coefficients are significantly different from zero.