A/B Test Calculator
Enter your experiment data to check if the conversion rate difference is statistically significant.
📖 What is an A/B Test?
An A/B test (also called a split test or controlled experiment) is a method of comparing two versions of something - a web page, email subject line, button colour, pricing page, or product feature - by randomly assigning users to one version (control) or the other (variant) and measuring which version produces better outcomes. It is the gold standard for evidence-based product and marketing decisions because it isolates the effect of the change being tested from all other variables.
The core question of any A/B test is: is the observed difference in conversion rates real, or could it be explained by random chance? Statistical significance testing answers this question by computing a p-value - the probability of observing a difference this large (or larger) if the two versions truly performed identically. A p-value below your significance threshold (typically 0.05) means the result is statistically significant: you can reject the null hypothesis that there is no difference.
A/B testing uses the two-proportion Z-test. Both groups have binary outcomes (converted or not converted), so each conversion follows a Bernoulli distribution. With sufficient sample sizes, the sampling distribution of the difference in proportions is approximately normal by the Central Limit Theorem, allowing Z-test inference. The pooled standard error uses a combined estimate of the conversion probability under the null hypothesis that both groups share the same rate.
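The two-proportion Z-test described above can be sketched in a few lines of Python. This is an illustrative sketch, not the calculator's actual implementation; the function name and example counts are made up for demonstration.

```python
from statistics import NormalDist

def two_proportion_z(c_c, n_c, c_v, n_v):
    """Z statistic for a two-proportion test with a pooled standard error.
    c_* are conversion counts, n_* are visitor counts (illustrative names)."""
    p_c = c_c / n_c                      # control conversion rate
    p_v = c_v / n_v                      # variant conversion rate
    p_pool = (c_c + c_v) / (n_c + n_v)   # pooled rate under the null hypothesis
    se = (p_pool * (1 - p_pool) * (1 / n_c + 1 / n_v)) ** 0.5
    return (p_v - p_c) / se

# Example: 1,000 visitors per arm, 100 vs 130 conversions
z = two_proportion_z(100, 1000, 130, 1000)  # roughly 2.1
```

The pooled rate appears only in the standard error, because under the null hypothesis both groups share one conversion probability.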
Beyond the p-value, a complete A/B test analysis includes: the confidence interval for the true difference (which tells you the range of plausible lift values), the effect size (absolute and relative lift), statistical power (the probability of detecting a real effect given your sample sizes), and the minimum detectable effect (the smallest lift your test is powered to find). This calculator computes all of these from your experiment data.
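Of the quantities listed above, statistical power is the least intuitive to compute. A minimal sketch under the usual normal approximation (dropping the negligible opposite-tail term of the two-sided test); the function name and example inputs are illustrative:

```python
from statistics import NormalDist

def approx_power(p_c, p_v, n_c, n_v, alpha=0.05):
    """Approximate power of a two-sided two-proportion Z-test: the probability
    of a significant result if the true rates really are p_c and p_v."""
    nd = NormalDist()
    se = (p_c * (1 - p_c) / n_c + p_v * (1 - p_v) / n_v) ** 0.5
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value, 1.960 at alpha = 0.05
    return nd.cdf(abs(p_v - p_c) / se - z_alpha)

# A 10% -> 13% lift with 1,000 visitors per arm is detected a bit over
# half the time, i.e. the test is underpowered for this effect size.
```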
📐 Formulas
Z-Statistic (pooled standard error):
Z = (p̂_V − p̂_C) / √(p̂(1−p̂)(1/n_C + 1/n_V))
Where:
p̂_C = c_C / n_C - control conversion rate (conversions / visitors)
p̂_V = c_V / n_V - variant conversion rate
p̂ = (c_C + c_V) / (n_C + n_V) - pooled conversion rate (used only for the null hypothesis)
p-Value (two-tailed): p = 2 × Φ(−|Z|), where Φ is the standard normal CDF.
Confidence interval for difference (unpooled SE):
CI = (p̂_V − p̂_C) ± z_α/2 × √(p̂_C(1−p̂_C)/n_C + p̂_V(1−p̂_V)/n_V)
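The confidence interval formula above, as a sketch (function name and example counts are illustrative). Note the unpooled standard error: each group contributes its own variance estimate, unlike the pooled SE used for the Z statistic.

```python
from statistics import NormalDist

def diff_ci(c_c, n_c, c_v, n_v, alpha=0.05):
    """Confidence interval for p_v - p_c using the unpooled standard error."""
    p_c, p_v = c_c / n_c, c_v / n_v
    se = (p_c * (1 - p_c) / n_c + p_v * (1 - p_v) / n_v) ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)  # z_(alpha/2), 1.960 at alpha = 0.05
    diff = p_v - p_c
    return diff - z * se, diff + z * se

# Example: 100/1000 vs 130/1000 gives an interval excluding zero,
# consistent with a significant result at the 5% level.
lo, hi = diff_ci(100, 1000, 130, 1000)
```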
Minimum Detectable Effect (at 80% power):
MDE = (z_α/2 + z_β) × √(p̂_C(1−p̂_C)/n_C + p̂_C(1−p̂_C)/n_V), where z_β = 0.842 for 80% power. Note that the baseline rate p̂_C is used for both groups, since the MDE is computed before any effect is assumed.
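A sketch of the MDE computation (illustrative function name; both terms under the square root use the baseline rate, as in the formula):

```python
from statistics import NormalDist

def mde(c_c, n_c, n_v, alpha=0.05, power=0.80):
    """Minimum detectable absolute lift at the given significance and power."""
    nd = NormalDist()
    p_c = c_c / n_c
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # 1.960 for alpha = 0.05, two-tailed
    z_beta = nd.inv_cdf(power)           # 0.842 for 80% power
    se = (p_c * (1 - p_c) / n_c + p_c * (1 - p_c) / n_v) ** 0.5
    return (z_alpha + z_beta) * se

# Example: a 10% baseline with 1,000 visitors per arm can reliably
# detect only absolute lifts of roughly 3.8 percentage points or more.
m = mde(100, 1000, 1000)
```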
Absolute Lift: p̂_V − p̂_C
Relative Lift: (p̂_V − p̂_C) / p̂_C × 100%
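Both lift measures together, as a sketch (illustrative names; absolute lift is a difference in rates, relative lift expresses that difference as a percentage of the baseline):

```python
def lifts(c_c, n_c, c_v, n_v):
    """Absolute lift (difference in rates) and relative lift (% of baseline)."""
    p_c, p_v = c_c / n_c, c_v / n_v
    absolute = p_v - p_c
    relative = absolute / p_c * 100
    return absolute, relative

# 10% -> 13% is an absolute lift of 3 percentage points,
# but a relative lift of 30%: always report which one you mean.
abs_lift, rel_lift = lifts(100, 1000, 130, 1000)
```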