
processing data from triplicate runs - (Apr/24/2008 )


I am doing QPCR using the Light Cycler and SYBR Green.
I am using several housekeeping genes and their geometric mean as a normalization factor for my GOI.
I run my samples in triplicate.

The question is: should I use all 3 triplicate values when they differ? How much can they differ and still be included? I mean, if two values are very similar and one deviates, do I remove the third one? And what if all three deviate from each other, how much deviation is acceptable?
Also, do you calculate the mean or the median of these values?
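For reference, the normalization step described above can be sketched in a few lines of Python. This is a minimal sketch, assuming relative quantities have already been computed per gene; the housekeeping-gene values and the gene-of-interest quantity below are made up for illustration, not real data:

```python
import math

# Hypothetical relative quantities of three housekeeping genes in one sample
# (illustrative numbers only).
hk_quantities = [1.8, 2.4, 2.1]

# Normalization factor = geometric mean of the housekeeping-gene quantities
norm_factor = math.prod(hk_quantities) ** (1 / len(hk_quantities))

# Hypothetical non-normalized quantity of the gene of interest (GOI)
goi_quantity = 5.0

# Normalized expression of the GOI in this sample
normalized_goi = goi_quantity / norm_factor
```

The same calculation would be repeated for every sample, with the GOI quantity divided by that sample's own normalization factor.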




If I see a curve in one replicate that differs too much from the other two, I exclude it from the calculation, because something definitely went wrong there.
It's more a question of what you are used to. I usually take a Ct deviation up to 0.1 as very good, up to 0.5 as usable, and higher ones as problematic, but this could be different in different assays.

Usually, higher Cts differ more than lower ones. In principle, the important thing is to compare your Ct deviation with your relative quantification results, since it can render them insignificant.

And AFAIK there is no big difference between using the mean or the median with triplicates; the median discards the extremes so you don't have to do it manually, but you get a higher deviation.
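The exclusion rule described above can be sketched as follows. This is a minimal sketch, not a standard algorithm: the 0.5-cycle cutoff follows the rough "usable" threshold mentioned in the post, and `filter_triplicate` is a hypothetical helper name:

```python
# Flag any triplicate Ct value whose distance from the median of the three
# exceeds a chosen cutoff (0.5 cycles here, an assumption based on the
# thresholds discussed above).
CUTOFF = 0.5

def filter_triplicate(cts):
    """Return (kept, dropped) Ct values for a triplicate measurement."""
    med = sorted(cts)[1]  # median of three values
    kept = [ct for ct in cts if abs(ct - med) <= CUTOFF]
    dropped = [ct for ct in cts if abs(ct - med) > CUTOFF]
    return kept, dropped

# Example: the third replicate deviates by 1.4 cycles and would be excluded.
kept, dropped = filter_triplicate([24.1, 24.2, 25.6])
```

Using the median as the reference (rather than the mean) keeps a single outlier from pulling the reference point toward itself.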


Hi Trof, and thanks for your answer, it helps, but I still have some questions:

I have found this article:

Larionov A, et al. A standard curve based method for relative real time PCR data processing. BMC Bioinformatics. 2005 Mar 21;6:62.

The authors describe a method for relative quantification based on a standard curve, which is the one I am using. They say: "Two different approaches may be utilized for initial statistical handling of intra-assay PCR replicates. Either CP values are first averaged and then transformed to nonnormalized values or vice versa. Both approaches may yield similar results, as long as the arithmetic mean is used for the CP values and geometric mean for the non-normalized quantities. We prefer to start statistical assessment using unmodified source data i.e. we average crossing points before transformation to the non-normalized values."

Can anyone explain to me why you should use the geometric mean for averaging non-normalized copy number values, and not the arithmetic mean?
I am using the Light Cycler with an old version of the data analysis software, so I cannot use the same approach they do, as my software does not average the Cp values before transforming them to copy numbers.
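The reason is that a quantity derived from a crossing point behaves (up to a constant) like Q = E ** (-Cp), where E is the amplification efficiency. Because the transformation is exponential, averaging Cp values arithmetically and then transforming gives exactly the same result as transforming each Cp first and taking the geometric mean of the quantities, while the arithmetic mean of the quantities would differ. A quick numeric sketch, assuming 100% efficiency (E = 2) and made-up Cp values:

```python
import math

E = 2.0  # assumed perfect efficiency; a real assay estimates E from the standard curve
cps = [24.1, 24.3, 24.2]  # illustrative triplicate crossing points

# Route 1: average the Cp values first, then transform to a quantity
q_from_mean_cp = E ** (-sum(cps) / len(cps))

# Route 2: transform each Cp to a quantity, then take the geometric mean
qs = [E ** (-cp) for cp in cps]
geo_mean_q = math.prod(qs) ** (1 / len(qs))

# The two routes agree; the arithmetic mean of qs would be biased upward.
assert math.isclose(q_from_mean_cp, geo_mean_q)
```

So if your software transforms each replicate to a copy number before you can average, taking the geometric mean of those copy numbers recovers the same answer as averaging the Cp values first, which is why the paper allows either route.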