How to Calculate Error Bars

Error bars quantify the uncertainty in graphs of statistical metrics. When an estimator (typically a mean, or average) is based on a small sample of a much larger population, error bars show how far the estimator is likely to be from the true value, which cannot be measured directly because the size of the full population makes doing so impossible or impractical. A graph with error bars contains values for several estimators, each corresponding to a different experimental condition. Each estimator is derived from its own sample and has its own error bar. You can calculate the size of an error bar from the sample itself.


Step

Compute the average (i.e., the estimator) of your measurements by evaluating the following formula:

average = (sample1 + sample2 + ... + sampleN) / N

Replace "sample1," sample2," ... "sampleN" by the measurements, and "N" by the total number of measurements in the experiment.

Step

Compute the standard deviation by evaluating the following formula:

stdDev = sqrt(((sample1 - average)^2 + ... + (sampleN - average)^2)/N)

Function "sqrt()" denotes the non-negative square root of its argument. The standard deviation is the measure of dispersion used for error bars.

Step

Compute the beginning and end points of the error bar by evaluating the following formulas:

barBegin = average - stdDev

barEnd = average + stdDev

The bar begins at "barBegin," is centered at "average," and ends at "barEnd."
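
In the same Python sketch, the end points follow directly from "average" and "std_dev":

bar_begin = average - std_dev  # where the error bar begins
bar_end = average + std_dev    # where the error bar ends
print(bar_begin, bar_end)      # approximately 4.0 and 4.4 for this example data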
