# When to use standard deviation/standard error

Started by science noob, Mar 07 2012 11:17 PM

3 replies to this topic

### #1

Posted 07 March 2012 - 11:17 PM

Any statisticians here who can clarify which statistical test suits which circumstances?

From what I understand, the standard deviation gives you the variability within a group, while the standard error shows the error of a particular mean. But what does n have to be for you to use each?

### #2

Posted 10 March 2012 - 01:19 PM

Theoretically you can use either at any sample size; it is just that both increase dramatically with a smaller n. For standard parametric statistics such as t-tests, a commonly cited rule of thumb is a minimum sample size of 30.
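To see how both quantities behave at a given n, here is a minimal Python sketch (the measurement values are invented for illustration). It computes the sample standard deviation and the standard error of the mean, which is the SD divided by the square root of n:

```python
import math
import statistics

# Hypothetical measurements; the values are made up for illustration.
data = [4.8, 5.1, 5.0, 4.9, 5.3, 4.7, 5.2, 5.0]

n = len(data)
sd = statistics.stdev(data)   # sample standard deviation (n - 1 denominator)
sem = sd / math.sqrt(n)       # standard error of the mean

# The SD describes the spread of the individual points; the SEM shrinks
# as n grows, because it divides the SD by sqrt(n).
print(f"n = {n}, SD = {sd:.3f}, SEM = {sem:.3f}")
```

With a small n both numbers are unstable, which is the point made above: neither is forbidden at small sample sizes, but both are noisy there.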

### #3

Posted 11 March 2012 - 07:57 AM

Standard deviation shows the average difference between your data points and their mean. Standard error shows how variable the mean itself would be if you repeated the experiment several times.

Because you have to repeat your experiment several times independently anyway, standard errors are fine in many cases. If you are unsure, you can also use confidence intervals (typically 95%).

But these are not statistical tests; they are descriptive statistics. They won't help you decide which test to use. For that, you have to check whether the assumptions of the particular test are fulfilled (type of data, type of distribution, variance homogeneity, etc.).
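The three descriptive statistics mentioned above can be computed side by side in a short Python sketch (the replicate values are invented, and the t critical value 2.571 is the two-sided 95% value for n - 1 = 5 degrees of freedom):

```python
import math
import statistics

# Hypothetical replicate measurements (values invented for illustration).
replicates = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]

n = len(replicates)
mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)   # spread of individual points around the mean
sem = sd / math.sqrt(n)             # expected variability of the mean itself

# Approximate 95% confidence interval: mean +/- t * SEM,
# with t = 2.571 for n - 1 = 5 degrees of freedom.
t_crit = 2.571
ci_low, ci_high = mean - t_crit * sem, mean + t_crit * sem

print(f"mean = {mean:.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
print(f"95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

As noted above, none of these is a statistical test; they only describe the data and the precision of its mean.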

**Edited by hobglobin, 11 March 2012 - 08:26 AM.**

One must presume that long and short arguments contribute to the same end. - Epicurus

...except casandra's that belong to the funniest, most interesting and imaginative (or over-imaginative?) ones, I suppose.

### #4

Posted 12 March 2012 - 10:02 AM

I'm not a statistician, but here is how I understand it. Standard error is to be used when the value you obtain is a MEAN of a random sampling process, hence "standard error of the mean". Many people use it incorrectly because the error gets smaller as the sample size gets bigger. However, if you have several data points with finite values (such as an OD on a western blot, or the % inhibition of a compound measured in triplicate, n = 3), you should use the standard deviation, which does not vary with sample size. However, if you are counting the percentage of whatever in a population that you are randomly sampling (such as the percentage of affected cells in a microscopy field, n = total number of cells counted), you use the standard error.
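The cell-counting case above has a standard textbook formula: the standard error of a proportion, sqrt(p(1 - p)/n). A minimal Python sketch with invented counts:

```python
import math

# Hypothetical count data: affected cells in a microscopy field
# (the numbers are invented for illustration).
affected = 42
total = 300              # n = total number of cells counted
p = affected / total     # observed proportion of affected cells

# Standard error of a proportion: sqrt(p * (1 - p) / n).
# It shrinks as more cells are counted, which is why standard error
# (not standard deviation) describes a sampled percentage.
se_percent = math.sqrt(p * (1 - p) / total) * 100

print(f"{p * 100:.1f}% affected, SE = {se_percent:.2f} percentage points")
```

Counting more cells in the same field tightens the estimate, exactly the sample-size dependence described in the post.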