
When to use standard deviation/standard error


3 replies to this topic

#1 science noob
Posted 07 March 2012 - 11:17 PM

Any statisticians who can clarify which statistic suits which circumstances?

From what I understand, standard deviation gives you the variability within a group, while standard error shows the uncertainty of a particular mean. But what does n need to be for you to use each one?

#2 bob1

Posted 10 March 2012 - 01:19 PM

Theoretically you can use either at any sample size; it is just that the standard error increases dramatically with a smaller "n" (it equals SD/√n), and the SD estimate itself becomes unreliable. For standard parametric statistics such as t-tests etc., a common rule of thumb is a minimum sample size of about 30.
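The point above can be seen numerically: a minimal Python sketch (not from the thread, population values made up for illustration) drawing increasingly large samples from one population. The SD estimate stabilizes around the true value, while the SEM keeps shrinking as n grows.

```python
import math
import random
import statistics

random.seed(0)

def sem(sample):
    """Standard error of the mean: sample SD divided by sqrt(n)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

# Hypothetical population with mean 100 and SD 15.
population_mean, population_sd = 100.0, 15.0
for n in (10, 100, 1000):
    sample = [random.gauss(population_mean, population_sd) for _ in range(n)]
    # SD hovers near 15 regardless of n; SEM drops roughly as 1/sqrt(n).
    print(n, round(statistics.stdev(sample), 2), round(sem(sample), 2))
```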

#3 hobglobin

Posted 11 March 2012 - 07:57 AM

Standard deviation shows the average difference between your data points and their mean. Standard error shows how variable the mean would be if you repeated the experiment several times.
Because you have to repeat your experiment several times independently anyway, standard errors are fine in many cases. If you are unsure, you can also use confidence intervals (typically 95%).
But these are not statistical tests; they are descriptive statistics, so they won't help you decide which test to use. For that you have to check whether the assumptions of the particular test are fulfilled (type of data, type of distribution, variance homogeneity, etc.).
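The three descriptive statistics mentioned can be sketched in a few lines of Python (stdlib only, not from the thread). The 95% CI here uses the normal-approximation multiplier 1.96 as an assumption; for very small n, a t-distribution multiplier would be more accurate.

```python
import math
import statistics

def describe(sample):
    """Return (mean, SD, SEM, approximate 95% CI for the mean)."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)      # variability within the group
    se = sd / math.sqrt(n)             # variability of the mean itself
    ci = (mean - 1.96 * se, mean + 1.96 * se)  # normal approximation
    return mean, sd, se, ci

# e.g. three independent repeats of a measurement (made-up values)
print(describe([10.1, 9.8, 10.4]))
```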

Edited by hobglobin, 11 March 2012 - 08:26 AM.



#4 knuf

Posted 12 March 2012 - 10:02 AM

I'm not a statistician, but here is how I understand it. Standard error is to be used when the value you obtain is a MEAN of a random sampling process, hence "standard error of the mean". Many people use it incorrectly because the error gets smaller as the sample size gets bigger. However, if you have several data points of finite values (such as the OD of a western blot, or the % inhibition of a compound done in triplicate, n=3), you should use standard deviation, which will not vary with sample size. However, if you are counting the percentage of a population that you are randomly sampling (such as the percent of affected cells in a microscopy field, n = total number of cells counted), you use the standard error.
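For the cell-counting case above, the standard error of a sample proportion has a closed form, sqrt(p(1-p)/n). A minimal sketch (not from the thread; the cell counts are made-up numbers):

```python
import math

def proportion_sem(affected, total):
    """Standard error of a sample proportion: sqrt(p*(1-p)/n)."""
    p = affected / total
    return math.sqrt(p * (1 - p) / total)

# e.g. 120 affected cells out of 400 counted in one microscopy field
p_hat = 120 / 400
print(p_hat, proportion_sem(120, 400))
```

Note that, as knuf says, this shrinks as the total number of cells counted grows, which is exactly why the standard error (not the SD) is the right error bar here.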





©1999-2013 Protocol Online, All rights reserved.