Degrees of Freedom Calculator

Find the correct degrees of freedom for any hypothesis test - instantly.


📖 What are Degrees of Freedom?

Degrees of freedom (df) is one of the most fundamental - and most confusing - concepts in statistics. At its core, df represents the number of independent pieces of information available to estimate a parameter. Every time you estimate something from your data (a mean, a variance, a regression coefficient), you use up one degree of freedom.

The simplest example: if you have n observations and compute the sample mean x̄, you have used 1 degree of freedom. The deviations (x₁ − x̄), (x₂ − x̄), ..., (xₙ − x̄) must sum to zero - so knowing n − 1 of them determines the last. Only n − 1 deviations are free to vary. This is why sample variance divides by n − 1, not n.
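This constraint is easy to verify numerically. The sketch below uses NumPy with made-up data; the values themselves are just an illustration:

```python
import numpy as np

# Made-up sample: mean is 8, so deviations are -4, -1, 1, 4.
x = np.array([4.0, 7.0, 9.0, 12.0])

# Deviations from the sample mean always sum to zero,
# so only n - 1 of them are free to vary.
deviations = x - x.mean()
print(deviations.sum())   # 0.0

# Sample variance therefore divides by n - 1 (ddof=1 in NumPy):
print(x.var(ddof=1))      # sum of squared deviations / (n - 1) ≈ 11.33
```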

Degrees of freedom matter in practice because they determine which probability distribution to use for computing p-values and critical values. The t-distribution with 5 df has much heavier tails than the t-distribution with 50 df - reflecting that small samples have much more uncertainty. A t-statistic of 2.0 has p ≈ 0.10 with 5 df, but p ≈ 0.05 with 50 df.
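A quick check with SciPy (assumed available) reproduces these p-values:

```python
from scipy import stats

# Two-tailed p-value for the same t-statistic under different df:
for df in (5, 50):
    p = 2 * stats.t.sf(2.0, df)
    print(f"df={df}: p = {p:.3f}")
# df=5 gives p ≈ 0.102; df=50 gives p ≈ 0.051
```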

This calculator handles the most common df formulas: t-tests (one-sample, two-sample, Welch's, paired), chi-square (goodness-of-fit and independence), one-way ANOVA, and multiple regression.

📐 Formulas

One-sample / Paired t-test: df = n − 1

Two-sample (equal variance): df = n₁ + n₂ − 2

Welch's (Satterthwaite): df = (s₁²/n₁ + s₂²/n₂)² / [(s₁²/n₁)²/(n₁−1) + (s₂²/n₂)²/(n₂−1)]

Chi-square goodness-of-fit: df = k − 1 (k = number of categories)

Chi-square independence (r×c table): df = (r − 1)(c − 1)

One-way ANOVA: df_between = k − 1, df_within = N − k, df_total = N − 1

Multiple regression: df_model = k, df_residual = n − k − 1, df_total = n − 1
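The formulas above can be collected into a single helper. This is an illustrative plain-Python sketch of what such a calculator computes, not this page's actual implementation:

```python
def degrees_of_freedom(test, **kw):
    """Return the df (or df pair) for the common tests listed above."""
    if test == "one_sample_t":          # also paired t-test
        return kw["n"] - 1
    if test == "two_sample_t":          # pooled / equal-variance
        return kw["n1"] + kw["n2"] - 2
    if test == "welch_t":               # Welch-Satterthwaite (non-integer)
        a = kw["s1"] ** 2 / kw["n1"]
        b = kw["s2"] ** 2 / kw["n2"]
        return (a + b) ** 2 / (a ** 2 / (kw["n1"] - 1) + b ** 2 / (kw["n2"] - 1))
    if test == "chi2_gof":
        return kw["k"] - 1
    if test == "chi2_independence":
        return (kw["r"] - 1) * (kw["c"] - 1)
    if test == "anova":                 # (df_between, df_within)
        return kw["k"] - 1, kw["N"] - kw["k"]
    if test == "regression":            # (df_model, df_residual)
        return kw["k"], kw["n"] - kw["k"] - 1
    raise ValueError(f"unknown test: {test}")

print(degrees_of_freedom("two_sample_t", n1=20, n2=18))   # 36
print(degrees_of_freedom("chi2_independence", r=3, c=4))  # 6
```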

📖 How to Use This Calculator

1. Select the test type from the dropdown. The input fields update to show exactly what is needed.
2. Enter the required values: sample sizes, standard deviations (for Welch's), number of groups/categories, or predictors.
3. Click Calculate Degrees of Freedom. The df value, formula used, and any secondary df (e.g., for ANOVA or F-tests) appear instantly.
4. Use the df value with the Critical Value Calculator or p-Value Calculator to complete your hypothesis test.

📝 Example Calculations

Example 1 - One-Sample t-Test

n = 25 observations. df = 25 − 1 = 24. Critical value at α = 0.05, two-tailed: t₀.₀₂₅,₂₄ = 2.064.

df = 24
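Assuming SciPy is available, the critical value in this example can be verified directly:

```python
from scipy import stats

df = 25 - 1                        # one-sample t-test: df = n - 1
t_crit = stats.t.ppf(0.975, df)    # two-tailed, alpha = 0.05
print(df, round(t_crit, 3))        # 24 2.064
```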

Example 2 - Two-Sample t-Test

Group 1: n₁ = 20. Group 2: n₂ = 18. df = 20 + 18 − 2 = 36. Critical value at α = 0.05, two-tailed: t₀.₀₂₅,₃₆ ≈ 2.028.

df = 36

Example 3 - Welch's t-Test

Group 1: s₁ = 10, n₁ = 20. Group 2: s₂ = 25, n₂ = 12. Numerator = (100/20 + 625/12)² = (5 + 52.08)² ≈ 3258.5. Denominator = 25/19 + 2712.7/11 = 1.316 + 246.6 = 247.9. df = 3258.5/247.9 ≈ 13.1 → round down to 13.

df ≈ 13 (Satterthwaite)
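The same arithmetic in plain Python:

```python
# Welch-Satterthwaite df for Example 3's values:
s1, n1 = 10.0, 20
s2, n2 = 25.0, 12

a = s1 ** 2 / n1                  # 100/20 = 5.0
b = s2 ** 2 / n2                  # 625/12 ≈ 52.08
df = (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
print(round(df, 1), int(df))      # 13.1 13 - round down for table lookups
```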

Example 4 - Chi-Square Independence (3×4 Table)

r = 3 rows, c = 4 columns. df = (3−1)(4−1) = 2×3 = 6. Use chi-square distribution with 6 df to find the critical value.

df = 6
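With SciPy (assumed available), the df and the α = 0.05 critical value:

```python
from scipy import stats

r, c = 3, 4
df = (r - 1) * (c - 1)                # 6
chi2_crit = stats.chi2.ppf(0.95, df)  # upper-tail critical value
print(df, round(chi2_crit, 3))        # 6 12.592
```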

Example 5 - One-Way ANOVA

k = 4 groups, N = 40 total observations. df_between = 3, df_within = 36. F ~ F(3, 36). Critical F at α = 0.05: F_crit ≈ 2.866.

df_between = 3, df_within = 36
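Verifying with SciPy (assumed available):

```python
from scipy import stats

k, N = 4, 40
df_between, df_within = k - 1, N - k      # 3, 36
f_crit = stats.f.ppf(0.95, df_between, df_within)
print(df_between, df_within, round(f_crit, 3))   # 3 36 2.866
```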

❓ Frequently Asked Questions

What are degrees of freedom in statistics?
Degrees of freedom (df) represent the number of independent values that can vary when estimating a statistical parameter. After estimating k parameters from n observations, only n − k pieces of information remain 'free'. Degrees of freedom are used to select the correct t, chi-square, or F distribution for hypothesis testing - distributions with fewer df have heavier tails, reflecting greater uncertainty.
Why do we use n − 1 instead of n for sample variance?
When we estimate the population mean from the sample (x̄), we 'use up' one degree of freedom - the deviations (xᵢ − x̄) sum to zero, so knowing n − 1 of them determines the last. Dividing by n − 1 (Bessel's correction) gives an unbiased estimate of the population variance. Dividing by n would systematically underestimate the true variance, especially for small samples.
What is the Welch-Satterthwaite equation for df?
For Welch's t-test (unequal variances): df = (s₁²/n₁ + s₂²/n₂)² / [(s₁²/n₁)²/(n₁−1) + (s₂²/n₂)²/(n₂−1)]. This gives a non-integer result - always round down to be conservative. The Welch df is always between min(n₁,n₂)−1 and n₁+n₂−2.
What are degrees of freedom for a chi-square test?
Goodness-of-fit: df = k − 1, where k is the number of categories. Independence (r × c table): df = (r − 1)(c − 1). A 2×2 table has df = 1. A 3×4 table has df = 6. The reason is that once the marginal totals are fixed, only (r−1)(c−1) cells can vary freely.
What are degrees of freedom in ANOVA?
One-way ANOVA with k groups and N total observations: df_between = k − 1 (explained), df_within = N − k (residual), df_total = N − 1. These sum: df_total = df_between + df_within. The F-statistic uses both df for its distribution: F ~ F(k−1, N−k).
What are degrees of freedom in regression?
Multiple regression with k predictors and n observations: df_model = k, df_residual = n − k − 1, df_total = n − 1. Simple linear regression (k=1): df_model = 1, df_residual = n − 2. Adjusted R² uses df_residual: Adj R² = 1 − [(1−R²)(n−1)/df_residual].
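A quick numeric illustration of the adjusted R² formula; n, k, and R² here are made-up values:

```python
# Adjusted R-squared uses df_residual = n - k - 1 (illustrative numbers):
n, k, r2 = 50, 3, 0.60
df_residual = n - k - 1                     # 46
adj_r2 = 1 - (1 - r2) * (n - 1) / df_residual
print(df_residual, round(adj_r2, 4))        # 46 0.5739
```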
What happens when degrees of freedom is very small?
Small df means less precision and heavier-tailed distributions. At df = 1, the t-distribution is equivalent to the Cauchy distribution (no defined mean or variance). At df = 2, the t-distribution still has very heavy tails. Critical values are substantially larger than the normal distribution values, requiring stronger evidence to reject H₀.
Can degrees of freedom be non-integer?
Yes - Welch's t-test produces non-integer df from the Satterthwaite equation. The result is used directly to look up the critical value from the t-distribution (which is defined for real-valued df), then rounded down for table lookups or calculated precisely with software.