F-Statistic Calculator
Calculate the F-statistic and p-value for variance comparison, ANOVA, or regression significance testing.
📖 What is the F-Statistic?
The F-statistic is a ratio of two variance estimates, named after the statistician Ronald A. Fisher. It is the foundation of three major statistical tests: the variance equality test (Snedecor's F), analysis of variance (ANOVA), and the regression model F-test. In all three contexts, the F-statistic answers a similar question: is the variance explained by a factor or model significantly larger than the unexplained (error) variance?
The F-distribution is right-skewed and bounded below by zero, defined by two parameters: the numerator degrees of freedom (df₁) and the denominator degrees of freedom (df₂). Large F values correspond to small p-values, indicating evidence against the null hypothesis. Unlike the t or Z distributions, the critical region for most F-tests lies in the upper tail - you reject H₀ when F is sufficiently large.
In one-way ANOVA, the F-statistic compares variance between group means (MS_between) to variance within groups (MS_within). If the groups truly have different means, between-group variance will be large relative to within-group variance, producing a large F. In regression, F compares the explained variance per predictor to the residual variance per degree of freedom - a significant F indicates the model outperforms a null model with no predictors.
This calculator handles all three F-test modes with full p-value calculation using the regularised incomplete beta function - the same mathematical approach used in professional statistical software.
📐 Formulas
Two-Variance F-Test: F = s₁² / s₂² - df₁ = n₁ − 1, df₂ = n₂ − 1 (always put larger s² in numerator; use two-sided p-value)
One-Way ANOVA: F = MS_between / MS_within - df₁ = k − 1, df₂ = n − k
- MS_between = SS_between / (k − 1) - mean square between groups
- MS_within = SS_within / (n − k) - mean square within groups (error)
- k = number of groups, n = total observations
Regression F-Test: F = (R² / k) / ((1 − R²) / (n − k − 1)) - df₁ = k, df₂ = n − k − 1
p-value: P(F(df₁, df₂) > F_observed) - computed via regularised incomplete beta function
Critical value F_crit: The value such that P(F > F_crit) = α. Reject H₀ if F > F_crit.
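The p-value formula above can be sketched in pure Python. This is a minimal implementation of the regularised incomplete beta function using a continued-fraction evaluation (Lentz's method, in the style of Numerical Recipes); the function names are illustrative, not part of any particular library:

```python
import math

def _betacf(a, b, x, max_iter=200, eps=3e-12):
    # Continued fraction for the incomplete beta function (modified Lentz's method).
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < 1e-30:
        d = 1e-30
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        # Even step of the continued fraction
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-30: d = 1e-30
        c = 1.0 + aa / c
        if abs(c) < 1e-30: c = 1e-30
        d = 1.0 / d
        h *= d * c
        # Odd step of the continued fraction
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-30: d = 1e-30
        c = 1.0 + aa / c
        if abs(c) < 1e-30: c = 1e-30
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def betainc(a, b, x):
    """Regularised incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_front = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                + a * math.log(x) + b * math.log(1.0 - x))
    front = math.exp(ln_front)
    # Use the continued fraction directly where it converges fastest,
    # otherwise apply the symmetry I_x(a,b) = 1 - I_{1-x}(b,a).
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b

def f_p_value(F, df1, df2):
    """Upper-tail p-value P(F(df1, df2) > F) = I_x(df2/2, df1/2), x = df2/(df2 + df1*F)."""
    x = df2 / (df2 + df1 * F)
    return betainc(df2 / 2.0, df1 / 2.0, x)
```

For example, `f_p_value(5.782, 2, 42)` reproduces the p ≈ 0.006 reported in the ANOVA example below.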
📝 Example Calculations
Example 1 - Testing Equality of Process Variances
A manufacturer tests two production lines. Line 1: s₁ = 12.4 mm, n₁ = 21. Line 2: s₂ = 9.8 mm, n₂ = 18. Test H₀: σ₁² = σ₂² at α = 0.05.
F = 12.4² / 9.8² = 153.76 / 96.04 = 1.601, df₁ = 20, df₂ = 17
Two-sided p ≈ 0.38 > 0.05 - Fail to Reject H₀. No significant difference in process variability.
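The F-ratio in this example can be checked with a small helper that follows the convention from the formula section (larger sample variance in the numerator); the function name is illustrative:

```python
def variance_f(s1, n1, s2, n2):
    """Two-variance F-test statistic, larger sample variance in the numerator.

    Returns (F, df1, df2) where df are for numerator and denominator samples.
    """
    v1, v2 = s1 * s1, s2 * s2
    if v1 >= v2:
        return v1 / v2, n1 - 1, n2 - 1
    return v2 / v1, n2 - 1, n1 - 1

# Line 1: s = 12.4 mm, n = 21; Line 2: s = 9.8 mm, n = 18
F, df1, df2 = variance_f(12.4, 21, 9.8, 18)   # F ≈ 1.601, df = (20, 17)
```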
Example 2 - One-Way ANOVA: Three Drug Treatments
Three drug treatments are tested on 45 patients (15 per group). From the ANOVA table: SS_between = 245.6, SS_within = 892.3.
MS_B = 245.6 / (3−1) = 122.8; MS_W = 892.3 / (45−3) = 21.24
F = 122.8 / 21.24 = 5.782, df₁ = 2, df₂ = 42; F_crit(α=0.05) ≈ 3.22
p ≈ 0.006 < 0.05 - Reject H₀. At least one drug treatment has a different mean outcome.
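The steps above (sums of squares to mean squares to F) can be sketched as a short function; the name is illustrative:

```python
def anova_f(ss_between, ss_within, k, n):
    """One-way ANOVA F-statistic from summary sums of squares.

    k = number of groups, n = total observations.
    Returns (F, df1, df2).
    """
    ms_between = ss_between / (k - 1)   # MS_between
    ms_within = ss_within / (n - k)     # MS_within (error)
    return ms_between / ms_within, k - 1, n - k

# Example 2: SS_between = 245.6, SS_within = 892.3, k = 3, n = 45
F, df1, df2 = anova_f(245.6, 892.3, 3, 45)   # F ≈ 5.78, df = (2, 42)
```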
Example 3 - Regression Model Significance
A regression model with k = 3 predictors and n = 50 observations achieves R² = 0.72. Is the model significant at α = 0.05?
F = (0.72/3) / ((1−0.72)/(50−3−1)) = 0.24 / (0.28/46) = 0.24 / 0.006087 = 39.43
df₁ = 3, df₂ = 46; F_crit ≈ 2.81; p < 0.0001 - Reject H₀. The model is highly significant.
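The regression F-formula is simple enough to express directly; the function name is illustrative:

```python
def regression_f(r2, k, n):
    """Regression overall F-test: F = (R²/k) / ((1 - R²)/(n - k - 1)).

    k = number of predictors, n = number of observations.
    Returns (F, df1, df2).
    """
    return (r2 / k) / ((1.0 - r2) / (n - k - 1)), k, n - k - 1

# Example 3: R² = 0.72, k = 3 predictors, n = 50 observations
F, df1, df2 = regression_f(0.72, 3, 50)   # F ≈ 39.43, df = (3, 46)
```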
Example 4 - ANOVA: Student Performance Across 4 Teaching Methods
Four teaching methods are compared across 60 students (15 per group). SS_between = 480.0, SS_within = 1120.0.
MS_B = 480 / 3 = 160.0; MS_W = 1120 / 56 = 20.0; F = 160 / 20 = 8.000
df₁ = 3, df₂ = 56; p < 0.001 - Reject H₀. Teaching methods differ significantly in student performance.
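The examples above can be cross-checked against professional statistical software. Assuming SciPy is installed, `scipy.stats.f` provides both the upper-tail p-value (`sf`) and the critical value (`ppf`) used throughout this page:

```python
from scipy.stats import f  # assumes SciPy is available

# Example 4: F = 8.000 with df1 = 3, df2 = 56
p = f.sf(8.0, 3, 56)          # upper-tail p-value P(F > 8.0)
fcrit = f.ppf(0.95, 3, 56)    # critical value F_crit at alpha = 0.05

# p < 0.001 and F = 8.0 > F_crit, so H0 is rejected, matching the result above.
```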