Statistics Calculators
Free statistics calculators: hypothesis testing, regression, Z-score, p-value, t-test, confidence intervals, ANOVA, non-parametric tests, and more. Instant results.
Statistics Calculators - From Basics to Advanced Inference
Statistics is the science of collecting, analysing, and interpreting data to make decisions under uncertainty. Whether you are running a clinical trial, analysing A/B test results, checking if two groups differ significantly, or fitting a curve to experimental data, the tools below give you precise, reproducible answers instantly.
Descriptive & Summary Statistics
Mean, Median & Mode Calculator - Central tendency for any dataset: mean, median, all modes, range, min, max.
Standard Deviation Calculator - Population and sample SD, variance, mean, and standard error.
Descriptive Statistics Calculator - 25+ measures: mean, SD, quartiles, IQR, skewness, kurtosis, outliers, frequency table.
Probability Calculator - Single events, compound (AND/OR), conditional, binomial probability.
Standardisation & Z-Scores
Z-Score Calculator - Convert raw scores to standard scores and find percentile rankings.
Raw Score Calculator - Convert between raw scores, Z-scores, T-scores (mean 50, SD 10), and percentiles.
Normal Approximation Calculator - Approximate binomial and Poisson probabilities using the normal distribution with continuity correction.
Hypothesis Testing
Hypothesis Testing Calculator - Full guided 6-step hypothesis test (Z, t, proportion) with conclusion and effect size.
p-Value Calculator - Find p-values from Z, t, F, or chi-square statistics. One-tailed and two-tailed.
Critical Value Calculator - Find critical values for Z, t, F, and chi-square distributions at any α level.
Z-Test Calculator - One-sample and two-sample Z-tests for means and proportions.
t-Test Calculator - One-sample, two-sample (Student’s and Welch’s), and paired t-tests with full output.
t-Statistic Calculator - Compute the t-value from sample data (one-sample, two-sample, paired).
F-Statistic Calculator - F-test for two variances, ANOVA F-test, and regression F-test.
Power Analysis Calculator - Calculate statistical power, required sample size, and Type I/II error rates.
Confidence Intervals & Estimation
Margin of Error Calculator - Survey MOE and required sample size for any confidence level.
Point Estimate Calculator - MLE, Wilson score, and Laplace estimates for proportions and means.
Sampling Error Calculator - Standard error of the mean, proportion SE, and finite population correction.
Degrees of Freedom Calculator - df for t-tests, chi-square, ANOVA, Welch’s, and regression.
Regression & Curve Fitting
Linear Regression Calculator - Least squares line (y = mx + b), slope, intercept, R², correlation, residuals.
Quadratic Regression Calculator - Fit y = ax² + bx + c to data using least squares.
Cubic Regression Calculator - Fit y = ax³ + bx² + cx + d to data.
Polynomial Regression Calculator - Fit polynomials up to degree 6 to your data.
Exponential Regression Calculator - Fit y = ae^(bx) to data (growth, decay, compound processes).
Coefficient of Determination (R²) Calculator - R², adjusted R², and SS decomposition from data, SS values, or correlation r.
Residual Calculator - Compute residuals, SSR, RMSE, and standardised residuals from regression output.
Error Analysis
Absolute Uncertainty Calculator - Propagate uncertainty through addition, multiplication, and power operations.
Relative Error Calculator - Absolute error, relative error, percentage error, MAE, and RMSE.
Specialised Tests
AB Test Calculator - Statistical significance for A/B experiments: conversion rates, lift, confidence intervals.
Bonferroni Correction Calculator - Adjust α for multiple comparisons (Bonferroni, Holm, Šidák methods).
Fisher’s Exact Test Calculator - Exact p-value for 2×2 contingency tables. Odds ratio and relative risk.
McNemar’s Test Calculator - Test for paired binary data (before/after, matched pairs).
Mann-Whitney U Test Calculator - Non-parametric test for two independent groups (alternative to t-test).
Wilcoxon Rank-Sum Test Calculator - Rank-sum test for two groups; also includes Wilcoxon signed-rank for paired data.
Youden Index Calculator - Diagnostic test performance: sensitivity, specificity, PPV, NPV, LR+, LR−, MCC.
Frequently Asked Questions
What is the difference between a Z-test and a t-test?
Use a Z-test when the population standard deviation σ is known. Use a t-test when σ is unknown and must be estimated from the sample - which is almost always the case in practice. The familiar "n > 30" rule of thumb exists because for large samples the t distribution is nearly normal (and the Central Limit Theorem makes the sampling distribution of the mean approximately normal), so the two tests give virtually identical results.
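A quick sketch with SciPy illustrates the convergence described above. The data here are simulated (a hypothetical sample of 200 values); the t-test and a Z-test built from the same estimated standard error produce nearly identical p-values at this sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical sample: true mean 103, testing H0: mean = 100
sample = rng.normal(loc=103, scale=15, size=200)

# t-test: sigma estimated from the sample (the usual situation)
t_stat, t_p = stats.ttest_1samp(sample, popmean=100)

# Z-test using the same estimated standard error, but normal tails
se = sample.std(ddof=1) / np.sqrt(len(sample))
z_stat = (sample.mean() - 100) / se
z_p = 2 * stats.norm.sf(abs(z_stat))

# with n = 200, t(199) tails and normal tails barely differ,
# so t_p and z_p agree to several decimal places
```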
How do I interpret a p-value?
A p-value is the probability of observing your test result (or something more extreme) if the null hypothesis were true. A small p-value (typically p < 0.05) means your result is unlikely under H₀, providing evidence to reject it. Crucially, the p-value is NOT the probability that the null hypothesis is true - it is a conditional probability: P(data | H₀ true).
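The tail-area calculation behind this definition is a one-liner with SciPy. Using z = 1.96 as an example test statistic:

```python
from scipy import stats

z = 1.96  # example test statistic

# two-tailed: probability of a result at least this extreme in either direction
p_two = 2 * stats.norm.sf(abs(z))   # sf = survival function = 1 - CDF

# one-tailed: probability of a result at least this large
p_one = stats.norm.sf(z)

# z = 1.96 sits exactly at the conventional two-tailed 5% threshold
```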
What is statistical power and why does it matter?
Statistical power (1−β) is the probability of correctly detecting a true effect. Low power means even real effects often go undetected (Type II errors). The Power Analysis Calculator helps you determine the sample size needed to achieve adequate power (typically 80% or 90%) before running your study - this is essential for clinical trials, psychology experiments, and quality control studies.
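The standard sample-size calculation can be sketched with the normal-approximation formula for a two-sample t-test. This is an approximation: an exact t-based computation gives a slightly larger answer (about 64 per group for the parameters below), which is why the textbook figure for detecting a medium effect (Cohen's d = 0.5) at 80% power is usually quoted as 64:

```python
import math
from scipy.stats import norm

alpha, power, d = 0.05, 0.80, 0.5   # two-sided alpha, target power, Cohen's d

z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
z_beta = norm.ppf(power)            # quantile corresponding to target power

# normal-approximation sample size per group for a two-sample comparison
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
n = math.ceil(n_per_group)          # round up to whole participants
```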
When should I use non-parametric tests?
Use non-parametric tests (Mann-Whitney U, Wilcoxon) when: your data is ordinal (e.g., ratings 1–5), the normality assumption is seriously violated, you have small samples with heavy-tailed distributions, or you are measuring something like pain scores or satisfaction ratings where the mean is not meaningful. They are based on ranks rather than raw values and are more robust to outliers.
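A minimal example with SciPy, using made-up ordinal satisfaction ratings (1–5) for two independent groups - exactly the situation where a t-test's mean-based assumptions are questionable:

```python
from scipy.stats import mannwhitneyu

# hypothetical 1-5 satisfaction ratings for two independent groups
group_a = [4, 5, 3, 4, 5, 4, 5, 3]
group_b = [2, 3, 1, 2, 3, 2, 4, 2]

# rank-based test: no normality assumption, robust to outliers
u_stat, p_value = mannwhitneyu(group_a, group_b, alternative='two-sided')

# group_a's ratings are systematically higher, so p comes out well below 0.05
```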
What is R² and how do I interpret it?
R² measures the proportion of variance in Y explained by the regression model. R² = 0.85 means 85% of Y's variance is explained by X. Use adjusted R² when comparing models with different numbers of predictors - it penalises adding variables that don't improve the model. R² alone does not tell you whether the model is appropriate - always check residual plots too.
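Both quantities follow directly from the sum-of-squares decomposition. A sketch with NumPy, using a small made-up dataset with a near-perfect linear trend:

```python
import numpy as np

# hypothetical data with a strong linear relationship
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

slope, intercept = np.polyfit(x, y, 1)  # least squares fit
y_hat = slope * x + intercept

ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)    # total sum of squares
r2 = 1 - ss_res / ss_tot

n, k = len(x), 1                        # k = number of predictors
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

Adjusted R² is always at most R², and the gap widens as predictors are added without explanatory payoff.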
What is the Bonferroni correction and when is it needed?
When you run multiple hypothesis tests simultaneously, the probability of at least one false positive increases. The Bonferroni correction divides the significance threshold α by the number of comparisons k, requiring each test to meet the stricter threshold α/k. For example, with 5 comparisons at α = 0.05, each test must reach p < 0.01. Use it for genome-wide association studies (GWAS), pairwise ANOVA comparisons, or any analysis with multiple outcomes.
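The correction itself is a one-line division. A sketch with five hypothetical p-values at α = 0.05:

```python
alpha, k = 0.05, 5
p_values = [0.003, 0.012, 0.019, 0.034, 0.20]  # hypothetical results

threshold = alpha / k                          # Bonferroni-adjusted threshold: 0.01
significant = [p < threshold for p in p_values]

# four of the five would pass at the unadjusted 0.05 level,
# but only p = 0.003 survives the correction
```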