Standard Deviation and Standard Error

In statistics, one distinguishes the standard deviation from the standard error. Their respective estimators are

$$\begin{aligned} \sigma_X &= \sqrt{\frac{1}{N - 1} \sum_{i = 1}^N (x_i - \bar x)^2} \,, \\ \mathrm{se}_X &= \frac{1}{\sqrt{N}} \, \sigma_X \,. \end{aligned}$$

So the standard error is smaller than the standard deviation by a factor of $\sqrt{N}$. The two have similar but distinct interpretations:
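As a quick sanity check of these formulas, here is a minimal sketch, assuming NumPy (the original program's language is not stated), that computes both estimators by hand and compares the standard deviation against `np.std` with Bessel's correction:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100)
n = len(x)

# Standard deviation with Bessel's correction, 1/(N - 1).
sigma = np.sqrt(np.sum((x - x.mean()) ** 2) / (n - 1))

# Standard error of the mean.
se = sigma / np.sqrt(n)

print(sigma, np.std(x, ddof=1))  # the two values agree
print(se)
```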

Standard Deviation

This is the width of the population distribution. If you take a single measurement out of a normally distributed population, it will be within one standard deviation of the mean in 68% of the cases (this is checked in the simulation below).

Standard Error

If you take another $N$ measurements and compute their mean, that mean will be within one standard error of the true mean in 68% of the cases. The standard error is therefore the uncertainty of the mean.
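Both 68% statements can be verified with a short simulation. The sketch below (again assuming NumPy) draws single measurements to check the standard-deviation claim and repeated batches of $N$ measurements to check the standard-error claim:

```python
import numpy as np

rng = np.random.default_rng(0)

# Claim 1: a single measurement lies within one standard deviation
# of the mean in about 68% of the cases.
x = rng.normal(size=100_000)
within_sd = np.abs(x - x.mean()) < x.std(ddof=1)
print(within_sd.mean())  # roughly 0.68

# Claim 2: the mean of N new measurements lies within one standard
# error of the true mean in about 68% of the cases.
n, trials = 50, 10_000
hits = 0
for _ in range(trials):
    sample = rng.normal(size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    hits += abs(sample.mean()) < se  # true mean is 0 here
print(hits / trials)  # roughly 0.68
```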

Taking more samples from the same distribution will not change the standard deviation, assuming you already have enough measurements that the shape of the normal distribution is apparent. The standard error, however, keeps getting smaller.
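This behavior is easy to see numerically. A small sketch that estimates both quantities for growing sample sizes might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# The standard deviation stays near 1 (the true value), while the
# standard error shrinks like 1 / sqrt(N).
for n in (10, 100, 1_000, 10_000):
    x = rng.normal(size=n)
    sigma = x.std(ddof=1)
    se = sigma / np.sqrt(n)
    print(f"N = {n:6d}  sigma = {sigma:.3f}  se = {se:.4f}")
```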

I have written a small program that samples a standard normal distribution and displays the results in a histogram. You can see that the shape of the histogram comes closer and closer to the normal distribution as more samples are drawn. The green dots and line show the standard deviation; those points will always stay close to $-1$ and $+1$, as this is the standard deviation of the underlying distribution.

The standard error is marked by the two red points; there you can see how it shrinks as more and more measurements are drawn from the underlying distribution.
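I do not reproduce the original program here, but a minimal sketch of such a visualization, assuming NumPy and Matplotlib, could look like the following:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=500)

mean = samples.mean()
sigma = samples.std(ddof=1)
se = sigma / np.sqrt(len(samples))

fig, ax = plt.subplots()
ax.hist(samples, bins=30, density=True, alpha=0.6)

# Green markers at one standard deviation, red markers at one
# standard error around the sample mean.
ax.plot([mean - sigma, mean + sigma], [0, 0], "go", label="standard deviation")
ax.plot([mean - se, mean + se], [0, 0], "ro", label="standard error")
ax.legend()
plt.show()
```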