Tuesday, March 24, 2009

Difference between the Standard Deviation, Standard Error and Confidence Intervals

In the simplest terms:
  • the standard deviation represents the variability of the input values,
  • the standard error (of the mean) represents the variability of the computed mean,
  • the confidence intervals represent where the 'true' mean value might lie.
And computationally:
  • the standard deviation is computed from the variance of your data (the input values),
  • the standard error is computed from the standard deviation,
  • the confidence intervals are computed from the standard error.
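The chain above (variance → SD → SE → CI) can be sketched in a few lines of plain Python; the data values here are made up for illustration:

```python
import math

data = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0]  # hypothetical input values
n = len(data)
mean = sum(data) / n

# Standard deviation: square root of the sample variance of the input values.
variance = sum((x - mean) ** 2 for x in data) / (n - 1)
sd = math.sqrt(variance)

# Standard error of the mean: the SD scaled down by sqrt(n).
se = sd / math.sqrt(n)

# Approximate 95% confidence interval: mean +/- 2 SE
# (2 is a rough stand-in for 1.96; exact for large samples only).
ci_low, ci_high = mean - 2 * se, mean + 2 * se
print(f"mean={mean:.3f} sd={sd:.3f} se={se:.3f} CI=({ci_low:.3f}, {ci_high:.3f})")
```

Note how the SE shrinks as the sample grows: more data does not make the inputs less variable (SD), but it does pin down the mean more precisely (SE), which in turn narrows the confidence interval.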
Read this:

Many people confuse the standard deviation (SD) and the standard error of the mean (SE) and are unsure which, if either, to use in presenting data in graphical or tabular form. The SD is an index of the variability of the original data points and should be reported in all studies. The SE reflects the variability of the mean values, as if the study were repeated a large number of times. By itself, the SE is not particularly useful; however, it is used in constructing 95% and 99% confidence intervals (CIs), which indicate a range of values within which the “true” value lies. The CI shows the reader how accurate the estimates of the population values actually are. If graphs are used, error bars equal to plus and minus 2 SEs (which show the 95% CI) should be drawn around mean values. Both statistical significance testing and CIs are useful because they assist the reader in determining the meaning of the findings.

(Can J Psychiatry 1996;41:498–502)