Relative Error Calculator

Find how far a measurement or prediction deviates from the true or expected value.


📖 What is Relative Error?

Relative error measures how large a measurement error is in proportion to the true or reference value. Unlike absolute error - which simply gives the difference in original units - relative error is dimensionless, making it possible to compare precision across measurements of completely different quantities. A scientist measuring a 1 mm crack and a 1 km road both need context: an error of 0.01 mm on the crack is far more significant than an error of 0.01 mm on the road. Relative error captures this context.

The most common form is percentage error - relative error multiplied by 100. It is universal in science education, quality control, and engineering acceptance testing. A percentage error of 2% means the measured value deviates from the true value by 2% of that true value. Industries specify tolerance limits in percentage terms: a component within ±0.5% tolerance, a sensor with ±2% full-scale accuracy.

When evaluating predictive models - in machine learning, weather forecasting, or financial modelling - the equivalent metrics are Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). MAE averages the absolute prediction errors, giving an intuitive sense of typical error magnitude. RMSE gives greater weight to large errors by squaring them before averaging, which is more appropriate when large mistakes are especially costly. Both are widely used alongside R² to characterise model performance.

A third application is relative deviation from the mean, useful when there is no external true value and you want to assess the internal consistency of a set of measurements. This measures how much each value deviates from the group mean as a proportion of that mean - useful in laboratory quality control and data validation.

📐 Formulas

Relative Error = |measured − true| / |true|

Absolute Error: |x_measured − x_true|

Relative Error: |x_measured − x_true| / |x_true| (dimensionless)

Percentage Error: Relative Error × 100%

Relative Deviation from Mean: |x − x̄| / x̄, where x̄ = mean of all values

MAE (Mean Absolute Error): (1/n) × Σ|predicted_i − actual_i|

RMSE (Root Mean Square Error): √[(1/n) × Σ(predicted_i − actual_i)²]

All variables: x_measured = experimentally obtained value; x_true = known correct value; x̄ = sample mean; n = number of data points or prediction pairs.
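As a quick sanity check, the formulas above can be sketched in plain Python (a minimal illustration; the function names are our own, not part of the calculator):

```python
import math

def relative_error(measured, true):
    """Relative error: |measured - true| / |true| (dimensionless)."""
    return abs(measured - true) / abs(true)

def percentage_error(measured, true):
    """Relative error expressed as a percentage."""
    return relative_error(measured, true) * 100

def relative_deviation_from_mean(values):
    """Relative deviation of each value from the group mean."""
    mean = sum(values) / len(values)
    return [abs(x - mean) / mean for x in values]

def mae(predicted, actual):
    """Mean Absolute Error over paired predictions and actuals."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def rmse(predicted, actual):
    """Root Mean Square Error: penalises large errors more than MAE."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted))

# The 10.3 g reading against a 10.0 g true mass (see the FAQ below):
print(round(percentage_error(10.3, 10.0), 2))  # 3.0
```

Note that `relative_error` raises a `ZeroDivisionError` when the true value is zero, mirroring the fact that relative error is undefined there.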

📖 How to Use This Calculator

1. Choose your Calculation Mode: Single Measurement Error for comparing one value to a known true value, Relative Deviation from Mean for a group of measurements without a known true value, or MAE & RMSE for evaluating a series of model predictions.
2. Enter the required values. For batch mode, ensure your predicted and actual value lists have the same number of entries, separated by commas.
3. Click Calculate Error to see all relevant error metrics. Batch mode also shows a table of individual errors for each prediction pair.

💡 Example Calculations

Example 1 - Physics measurement: acceleration due to gravity

1. A student measures g = 9.78 m/s². The accepted value is 9.81 m/s².
2. Absolute error: |9.78 − 9.81| = 0.03 m/s². Relative error: 0.03 / 9.81 = 0.00306. Percentage error: 0.31%.
3. A 0.31% error is excellent for a simple pendulum experiment and well within the ±2% tolerance typical for introductory physics labs.
Result: Percentage error = 0.31%

Example 2 - Model predictions: house price estimates

1. Predicted: 300,000 / 450,000 / 520,000 / 380,000. Actual: 310,000 / 440,000 / 500,000 / 400,000.
2. Absolute errors: 10,000 / 10,000 / 20,000 / 20,000. MAE = (10,000 + 10,000 + 20,000 + 20,000) / 4 = 15,000.
3. RMSE = √((100M + 100M + 400M + 400M) / 4) = √(250M) ≈ 15,811, where M denotes million (10,000² = 100M; 20,000² = 400M). RMSE > MAE because the two larger errors (20,000) contribute disproportionately once squared.
Result: MAE = 15,000; RMSE ≈ 15,811
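The arithmetic in this example can be verified with a short Python snippet (a sketch; the variable names are ours):

```python
import math

predicted = [300_000, 450_000, 520_000, 380_000]
actual = [310_000, 440_000, 500_000, 400_000]

# Per-pair absolute errors
errors = [abs(p - a) for p, a in zip(predicted, actual)]
print(errors)  # [10000, 10000, 20000, 20000]

# MAE: plain average of the absolute errors
mae = sum(errors) / len(errors)
print(mae)  # 15000.0

# RMSE: square, average, then square-root
rmse = math.sqrt(sum(e ** 2 for e in errors) / len(errors))
print(round(rmse))  # 15811
```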

Example 3 - Survey estimation error

1. A survey estimates 42% support for a policy. The true population figure is 40%.
2. Absolute error: |42 − 40| = 2 percentage points. Relative error: 2/40 = 0.05. Percentage error: 5%.
3. Note the distinction: the absolute error is 2 percentage points (a difference in the original units), while the percentage error is 5% (a percentage of the reference value). These are different quantities, a common source of confusion in polling analysis.
Result: Absolute error = 2 percentage points; Percentage error = 5%
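The percentage-point vs. percentage distinction in this example is easy to demonstrate in Python (a sketch; the numbers are from the example above):

```python
measured, true = 42.0, 40.0  # survey estimate vs. true support, both in %

absolute_error = abs(measured - true)            # in percentage points
percentage_error = absolute_error / true * 100   # as a percentage of the true value

print(absolute_error)              # 2.0 (percentage points)
print(round(percentage_error, 2))  # 5.0 (percent of the reference value)
```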

❓ Frequently Asked Questions

What is the difference between absolute error and relative error?
Absolute error is the raw difference between the measured value and the true value: |measured − true|. It is expressed in the same units as the measurement. Relative error is this difference divided by the true value: |measured − true| / |true|, giving a unitless fraction. Percentage error multiplies relative error by 100. For example, if a scale reads 10.3 g when the true mass is 10.0 g, the absolute error is 0.3 g, the relative error is 0.03, and the percentage error is 3%.
What is an acceptable level of percentage error?
Acceptable percentage error depends entirely on the context. In analytical chemistry, errors below 1% are expected. In physics lab experiments, 5% is often acceptable. Engineering measurements may require errors below 0.1%. Survey and social science data may accept 10% or more. There is no universal threshold - the key question is whether the error is small enough for the decision or conclusion that depends on the measurement.
What is the difference between RMSE and MAE?
Both RMSE and MAE measure average prediction error across a set of forecasts or model outputs. MAE (Mean Absolute Error) gives equal weight to all errors and is easier to interpret: an MAE of 5 means predictions are off by 5 on average. RMSE (Root Mean Square Error) squares the errors before averaging, which gives extra weight to large errors. RMSE ≥ MAE always. Use MAE when all errors are equally important; use RMSE when large errors are disproportionately costly (e.g. in safety-critical forecasting).
Why divide by the true value and not the measured value?
The relative error formula divides by the true value because we are measuring how far the measured value deviates from what is correct. The true value is the reference standard. If you divided by the measured value instead (which is the relative difference from the other direction), you would get a different number - and crucially, if the measured value is much smaller than the true value, dividing by it would artificially inflate the error. Always divide by the true or accepted reference value.
Can relative error be greater than 100%?
Yes. If the measured value is more than double the true value, or if it has the opposite sign, the percentage error exceeds 100%. For example, if the true value is 2 and the measured value is 10, the percentage error is |10−2|/2 × 100% = 400%. This most commonly occurs with very small true values or when there is a systematic instrument error or incorrect calculation.
What is relative deviation from the mean and when is it used?
Relative deviation from the mean is |x − x̄| / x̄, where x̄ is the mean of a set of measurements. Unlike the standard relative error, it does not require a known true value - instead, the mean serves as the reference. It is used when you want to assess the consistency of repeated measurements or values within a group, for example comparing how far each measurement deviates from the average in a laboratory experiment where the true value is unknown.
How is RMSE calculated?
RMSE is calculated as: √(Σ(predicted_i − actual_i)² / n). First, compute the squared error for each prediction: (predicted − actual)². Sum all squared errors, divide by the number of observations n, then take the square root. RMSE has the same units as the original values, making it more interpretable than MSE (mean squared error). A lower RMSE indicates better model accuracy.
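The step-by-step recipe above can be written directly in Python (a sketch with made-up numbers):

```python
import math

predicted = [3.1, 4.8, 5.2]  # hypothetical predictions
actual    = [3.0, 5.0, 5.0]

# Step 1: squared error for each prediction pair
squared_errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]

# Step 2: mean of the squared errors (MSE)
mse = sum(squared_errors) / len(squared_errors)

# Step 3: the square root brings the result back to the original units
rmse = math.sqrt(mse)
print(round(rmse, 4))  # 0.1732
```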
What does it mean when RMSE is much larger than MAE?
When RMSE >> MAE, it signals that a few predictions with very large errors are pulling the RMSE up, while most predictions are reasonably accurate. This gap between RMSE and MAE is diagnostic: a large difference indicates the presence of outlier predictions or systematic errors in specific cases. When errors are all of similar magnitude, RMSE is only modestly larger than MAE; for normally distributed errors the ratio is √(π/2) ≈ 1.25.