
Statistics for multiple strains of mice on different diets - (May/23/2013 )

We have two strains of mice. Each strain is split into two groups, and each group gets one of two diets from 4 to 20 weeks of age. We measure body weight (and other factors like blood glucose) weekly over that time, so we have strain1+diet1, strain1+diet2, strain2+diet1, and strain2+diet2. What type of statistical test would be the proper one to analyze differences in body mass (and in blood glucose, etc.) between these groups? We want to know whether the second diet causes the first strain to gain a lot of weight while the second strain is resistant to that weight gain (and likewise for increases in blood glucose, etc.). I can think of several ways to approach this:

1. First I tried a one-way ANOVA. I realized this just averages all the values over the time period and compares those averages between groups. I don't think we want that.
2. I tried a two-way ANOVA. This compares the values (averaged per 'group', where 'group' is strain+diet) week by week; we get a new comparison and p-value for each week for each group versus each other group.
3. We could also calculate the area under the curve for each animal over that time and then do a one-way ANOVA on the AUCs (AUC-ANOVA). I don't think it would be fair to just plot the average for each group and then do AUC-ANOVA, right?
4. We could calculate the weight gain (or increase in blood glucose, etc.) for each animal over the time period and do a one-way ANOVA on that.
5. Maybe something I can't even think of...
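To make option 4 concrete, here is a minimal sketch in Python. All numbers here are invented (simulated gains, group sizes, effect sizes): compute one weight-gain value per animal, then run a one-way ANOVA across the four strain+diet groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated final-minus-initial weight gain (g) per mouse, n = 8 per group.
# Hypothetical effect sizes: only strain 1 responds strongly to diet 2.
gains = {
    "strain1+diet1": rng.normal(5, 1, 8),
    "strain1+diet2": rng.normal(12, 1, 8),  # strain 1 gains a lot on diet 2
    "strain2+diet1": rng.normal(5, 1, 8),
    "strain2+diet2": rng.normal(6, 1, 8),   # strain 2 is resistant
}

# One-way ANOVA: are the four group means different?
f, p = stats.f_oneway(*gains.values())
print(f"F = {f:.2f}, p = {p:.2g}")
```

Note that a one-way ANOVA on the four groups only tells you that *some* group differs; it does not by itself separate the strain effect, the diet effect, and their interaction, which is what the two-way layout is for.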

I'm thinking now we might want to do AUC-ANOVA for each mouse over that time. I could be wrong, though.

Thank you.


You are certainly looking at a two-way ANOVA, with the treatments being strain × diet.

I don’t think using AUC(weight) makes a lot of sense, because the final weight is already an accumulation (an AUC) of the gain per day.
Other parameters could include weight gain per amount of feed consumed, etc., but one thing to be wary of is that as you test more and more response parameters, the likelihood that one of them is a false positive increases. Usually this is corrected by lowering the p-value threshold used for significance.
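The simplest version of that threshold correction is the Bonferroni correction: divide the significance level by the number of tests. A toy example with hypothetical p-values:

```python
# Bonferroni correction: with m tests at overall significance level alpha,
# require each individual p-value to fall below alpha / m.
p_values = [0.003, 0.020, 0.045, 0.300]  # hypothetical per-parameter p-values
alpha = 0.05
threshold = alpha / len(p_values)        # 0.05 / 4 = 0.0125
significant = [p for p in p_values if p < threshold]
print(significant)  # → [0.003]
```

Bonferroni is conservative; Holm's step-down procedure is uniformly more powerful and just as easy to apply (both are available in `statsmodels.stats.multitest.multipletests`).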