Standard Error (SE) Definition: Standard Deviation in Statistics Explained

What Is Standard Error (SE)?

Standard error (SE) is a statistic that indicates how accurately sample data represent the whole population; in effect, it measures the standard deviation of a sample statistic. In statistics, a sample mean deviates from the actual mean of a population, and the typical size of that deviation is the standard error of the mean.

The standard error is considered part of inferential statistics, that is, the conclusions drawn from the study. It shrinks in inverse proportion to the square root of the sample size: the larger the sample, the smaller the standard error, because the sample statistic approaches the actual population value.

Key Takeaways

  • Standard error is the approximate standard deviation of a sample statistic, such as the sample mean.
  • The standard error describes how far a mean calculated from a sample is likely to fall from the true population mean.
  • The more data points involved in the calculations of the mean, the smaller the standard error tends to be.

Understanding Standard Error

The term "standard error," or SE for short, is used to refer to the standard deviation of various sample statistics, such as the mean or median.

When a population is sampled, the mean, or average, is generally calculated. The standard error describes how much that calculated mean is likely to vary from the true population mean, which helps account for any incidental inaccuracies in how the sample was gathered.

The "standard error of the mean" refers to the standard deviation of the distribution of sample means taken from a population. The relationship between the standard error and the standard deviation is such that, for a given sample size, the standard error equals the standard deviation divided by the square root of the sample size.

The standard error is expressed in the same units as the data. Sometimes it is more useful to show it as a percentage of the sample mean; in that form it is known as the relative standard error.
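
The relative standard error is conventionally computed as the standard error divided by the sample mean, times 100. A quick sketch using made-up numbers:

```python
import numpy as np

# Made-up sample data for illustration.
data = np.array([98.0, 102.0, 95.0, 107.0, 101.0, 99.0])

se = data.std(ddof=1) / np.sqrt(len(data))   # standard error of the mean
rse = se / data.mean() * 100                 # relative standard error, in percent

print(f"SE  = {se:.3f}")
print(f"RSE = {rse:.2f}%")
```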

Standard error and standard deviation are measures of variability, while central tendency measures include mean, median, etc.

The smaller the standard error, the more representative the sample will be of the overall population. And the more data points involved in the calculations of the mean, the smaller the standard error tends to be. In cases where the standard error is large, the data may have some notable irregularities.

In cases where multiple samples are collected, the mean of each sample may vary slightly from the others, creating a spread among the sample means. This spread is most often measured as the standard error, which accounts for the differences between the means across the datasets.

Formula and Calculation of Standard Error

The standard error of an estimate, used in applications such as algorithmic trading, is calculated as the standard deviation divided by the square root of the sample size:

\[
\text{SE} = \frac{\sigma}{\sqrt{n}}
\]

where \(\sigma\) is the population standard deviation and \(n\) is the sample size.

If the population standard deviation is not known, you can substitute the sample standard deviation, s, in the numerator to approximate the standard error.
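
A minimal sketch of both cases in Python (the function names here are illustrative, not from any particular library):

```python
import math

def se_known_sigma(sigma: float, n: int) -> float:
    """Standard error when the population standard deviation is known."""
    return sigma / math.sqrt(n)

def se_from_sample(sample: list[float]) -> float:
    """Approximate standard error using the sample standard deviation s."""
    n = len(sample)
    mean = sum(sample) / n
    # Sample standard deviation with Bessel's correction (n - 1 denominator).
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return s / math.sqrt(n)

print(se_known_sigma(15.0, 25))                                  # 3.0
print(se_from_sample([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
```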

Standard Error vs. Standard Deviation

The standard deviation represents the spread of the individual data points. It helps determine the validity of the data based on how many data points fall within each level of standard deviation. Standard errors, by contrast, gauge the accuracy of a sample, or of multiple samples, by analyzing the deviation among the sample means.

The standard error normalizes the standard deviation relative to the sample size used in an analysis. Standard deviation measures the amount of variability, or dispersion, of the data around the mean, while the standard error can be thought of as the dispersion of the sample-mean estimates around the true population mean.
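
The distinction is easy to see in a short simulation: as the sample grows, the sample standard deviation settles near the population value while the standard error of the mean keeps shrinking. A rough sketch, with arbitrary population parameters:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical population: mean 0, standard deviation 10 (arbitrary values).
for n in (10, 100, 1_000, 10_000):
    sample = rng.normal(0.0, 10.0, size=n)
    sd = sample.std(ddof=1)   # stabilizes near 10 as n grows
    se = sd / np.sqrt(n)      # shrinks toward 0 as n grows
    print(f"n={n:>6}  SD={sd:6.2f}  SE={se:6.3f}")
```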

Example of Standard Error

Say that an analyst has looked at a random sample of 50 companies in the S&P 500 to understand the association between a stock's P/E ratio and its subsequent 12-month performance in the market. Assume that the resulting estimate is -0.20, indicating that for every 1.0-point increase in the P/E ratio, stocks show 0.2% poorer relative performance. In the sample of 50, the standard deviation was found to be 1.0.

The standard error is thus:

\[
\text{SE} = \frac{1.0}{\sqrt{50}} = \frac{1.0}{7.07} = 0.141
\]

Therefore, we would report the estimate as -0.20 ± 0.14, giving a confidence interval of (-0.34, -0.06); a ±1 standard error band corresponds to roughly 68% confidence if the estimates are normally distributed. The true value of the association between the P/E ratio and the returns of the S&P 500 could therefore be expected to fall within that range.

Say now that we increase the sample of stocks to 100 and find that the estimate changes slightly from -0.20 to -0.25, and the standard deviation falls to 0.90. The new standard error would thus be:

\[
\text{SE} = \frac{0.90}{\sqrt{100}} = \frac{0.90}{10} = 0.09
\]

The resulting confidence interval becomes -0.25 ± 0.09 = (-0.34, -0.16), which is a tighter range of values.
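
Both calculations can be reproduced in a few lines of Python (the helper function is just for illustration):

```python
import math

def se_and_interval(estimate: float, sd: float, n: int) -> tuple[float, float, float]:
    """Standard error and a +/- 1 SE interval around the estimate."""
    se = sd / math.sqrt(n)
    return se, estimate - se, estimate + se

# Sample of 50: estimate -0.20, standard deviation 1.0.
print(se_and_interval(-0.20, 1.0, 50))    # SE ~ 0.141 -> about (-0.34, -0.06)

# Sample of 100: estimate -0.25, standard deviation 0.90.
print(se_and_interval(-0.25, 0.90, 100))  # SE = 0.09 -> (-0.34, -0.16)
```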

What Is Meant by Standard Error?

Standard error is intuitively the standard deviation of the sampling distribution. In other words, it depicts how much a point estimate obtained from a sample is likely to differ from the true population value.

What Is a Good Standard Error?

Standard error measures the amount of discrepancy that can be expected between a sample estimate and the true value in the population. Therefore, the smaller the standard error, the better. A standard error of zero (or close to it) would indicate that the estimated value matches, or very nearly matches, the true population value.

How Do You Find the Standard Error?

The standard error takes the standard deviation and divides it by the square root of the sample size. Many statistical software packages automatically compute standard errors.
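
For example, SciPy's scipy.stats.sem computes the standard error of the mean directly; it defaults to the sample standard deviation (ddof=1), so it matches the manual calculation:

```python
import numpy as np
from scipy import stats

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up sample

# Manual calculation: sample standard deviation divided by sqrt(n).
manual = data.std(ddof=1) / np.sqrt(len(data))

print(manual)           # ~0.756
print(stats.sem(data))  # same result
```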

The Bottom Line

The standard error (SE) measures the dispersion of estimated values obtained from a sample around the true value found in the population. Statistical analysis and inference often involve drawing samples and running statistical tests to determine associations and correlations between variables. The standard error thus tells us the degree of confidence with which we can expect an estimated value to approximate the population value.
