# S.D. or S.E.M.? (standard deviation or standard error of the mean) - survival curve of C. elegans (Oct/29/2009)

Hi all. I would love to hear different points of view on the question in the title.

Currently I am working on a survival curve for C. elegans.

However, I am not sure whether I should use the standard deviation (S.D.) or the standard error of the mean (S.E.M.) when plotting the error bars in my graph.

Some researchers have used S.D., some S.E.M.

Does anyone have an idea on this? Thank you.

tyrael on Oct 30 2009, 08:48 AM said:

I am not sure whether I should use the standard deviation or the standard error of the mean when plotting the error bars in my graph.

In my opinion, error is best represented by the standard error.

From the BMJ:

The terms "standard error" and "standard deviation" are often confused.1 The contrast between these two terms reflects the important distinction between data description and inference, one that all researchers should appreciate.

The standard deviation (often SD) is a measure of variability. When we calculate the standard deviation of a sample, we are using it as an estimate of the variability of the population from which the sample was drawn. For data with a normal distribution,2 about 95% of individuals will have values within 2 standard deviations of the mean, the other 5% being equally scattered above and below these limits. Contrary to popular misconception, the standard deviation is a valid measure of variability regardless of the distribution. About 95% of observations of any distribution usually fall within the 2 standard deviation limits, though those outside may all be at one end. We may choose a different summary statistic, however, when data have a skewed distribution.3

When we calculate the sample mean we are usually interested not in the mean of this particular sample, but in the mean for individuals of this type—in statistical terms, of the population from which the sample comes. We usually collect data in order to generalise from them and so use the sample mean as an estimate of the mean for the whole population. Now the sample mean will vary from sample to sample; the way this variation occurs is described by the "sampling distribution" of the mean. We can estimate how much sample means will vary from the standard deviation of this sampling distribution, which we call the standard error (SE) of the estimate of the mean. As the standard error is a type of standard deviation, confusion is understandable. Another way of considering the standard error is as a measure of the precision of the sample mean.

The standard error of the sample mean depends on both the standard deviation and the sample size, by the simple relation SE = SD/√n, where n is the sample size.
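As a minimal sketch of that relation in Python (the lifespan values below are invented purely for illustration, not real C. elegans data):

```python
import math
import statistics

# Hypothetical lifespans in days (illustrative values only)
lifespans = [14, 16, 15, 18, 13, 17, 16, 15, 14, 19]

n = len(lifespans)
sd = statistics.stdev(lifespans)   # sample standard deviation
se = sd / math.sqrt(n)             # standard error of the mean: SE = SD/sqrt(n)

print(f"mean = {statistics.mean(lifespans):.2f}, SD = {sd:.2f}, SE = {se:.2f}")
```

Note that the SE is always smaller than the SD for n > 1, which is part of why plots with S.E.M. bars look "tighter".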

So, if we want to say how widely scattered some measurements are, we use the standard deviation. If we want to indicate the uncertainty around the estimate of the mean measurement, we quote the standard error of the mean. The standard error is most useful as a means of calculating a confidence interval. For a large sample, a 95% confidence interval is obtained as the values 1.96xSE either side of the mean. We will discuss confidence intervals in more detail in a subsequent Statistics Note. The standard error is also used to calculate P values in many circumstances.
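Continuing the same sketch, the large-sample 95% confidence interval described above (mean ± 1.96 × SE) can be computed directly; again the data are invented, and for a sample this small you would in practice use a t-value rather than 1.96:

```python
import math
import statistics

# Same hypothetical lifespans as before (illustrative values only)
lifespans = [14, 16, 15, 18, 13, 17, 16, 15, 14, 19]

mean = statistics.mean(lifespans)
se = statistics.stdev(lifespans) / math.sqrt(len(lifespans))

# Large-sample 95% confidence interval: mean +/- 1.96 * SE
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```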

The principle of a sampling distribution applies to other quantities that we may estimate from a sample, such as a proportion or regression coefficient, and to contrasts between two samples, such as a risk ratio or the difference between two means or proportions. All such quantities have uncertainty due to sampling variation, and for all such estimates a standard error can be calculated to indicate the degree of uncertainty.

In many publications a ± sign is used to join the standard deviation (SD) or standard error (SE) to an observed mean—for example, 69.4±9.3 kg. That notation gives no indication whether the second figure is the standard deviation or the standard error (or indeed something else). A review of 88 articles published in 2002 found that 12 (14%) failed to identify which measure of dispersion was reported (and three failed to report any measure of variability).4 The policy of the BMJ and many other journals is to remove ± signs and request authors to indicate clearly whether the standard deviation or standard error is being quoted. All journals should follow this practice.

A lot of times it's a matter of personal preference. I prefer the standard error because it takes sample size into account: the larger the sample, the smaller the calculated error. The standard deviation also seems messy to me, because S.D. error bars can overlap quite a bit even when the difference is significant. With the standard error (unless you have a large enough sample size), if the error bars overlap or are close to each other, the difference is usually not significant. Not always, but often. To me, the data usually looks "cleaner" with standard error.
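That sample-size effect is easy to see in a quick simulation. The sketch below (values drawn from a made-up distribution, mean 15 days, SD 2 days) shows the SD staying roughly constant while the SE shrinks roughly as 1/√n:

```python
import math
import random
import statistics

random.seed(0)  # reproducible illustration

# Simulated lifespans from the same underlying (invented) distribution at
# increasing sample sizes: SD stays roughly constant, SE shrinks as 1/sqrt(n).
standard_errors = []
for n in (10, 100, 1000):
    sample = [random.gauss(15, 2) for _ in range(n)]
    sd = statistics.stdev(sample)
    se = sd / math.sqrt(n)
    standard_errors.append(se)
    print(f"n={n:4d}  SD={sd:.2f}  SE={se:.2f}")
```

So the same underlying biological variability produces much shorter error bars at n = 1000 than at n = 10 when S.E.M. is plotted, which is exactly why SD and SEM bars answer different questions.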

And really, SE is not that hard to calculate anyway. Once you have the SD, you divide the SD by the square root of the sample size, and that's your SE.