
qPCR Analysis - Standard Error of Individual dCt vs Group Average Expression (Aug/22/2006)

Hi All

When using the Pfaffl (REST) or Ramakers (DART) qPCR analysis methods you have to take the average expression of the target gene and compare it to the average expression of the reference gene.
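(For context, my understanding of the efficiency-corrected ratio these methods compute is something like the few lines of Python below; the efficiencies and Ct values are made up, and the key point is that every Ct going in is a group mean.)

    # Rough sketch of a Pfaffl-style efficiency-corrected ratio, as I understand it
    # (not the exact REST implementation). E is the amplification factor (2.0 = 100%).
    # Note that every Ct used here is a GROUP MEAN, which is where the big SDs creep in.
    E_target, E_ref = 1.95, 1.90                                  # assumed efficiencies
    mean_ct_target_control, mean_ct_target_treated = 24.0, 22.5   # made-up mean Cts
    mean_ct_ref_control, mean_ct_ref_treated = 18.0, 18.4

    ratio = (E_target ** (mean_ct_target_control - mean_ct_target_treated)) / \
            (E_ref ** (mean_ct_ref_control - mean_ct_ref_treated))
    print(ratio)   # fold change of the target gene, normalised to the reference gene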

My problem with this is that it produces a very high standard deviation if the samples are not all at exactly the same concentration. This is a problem if you are looking at subtle changes in expression.

The purpose of the reference gene is to provide a normalising factor, so why can't this be applied to each individual sample, with the SD of the dCt then taken within each group?

I guess what I'm wanting to do is a sort of ddCt analysis, but as my primers are not 100% efficient I'm not allowed to.

DART is almost there: it calculates the efficiency for each individual well, but then just averages all the expression values and efficiencies.

Key question: is there a method by which I can look at the individual dCt values whilst factoring in the efficiency?




Background info
I'm verifying some microarray expression data. The fluctuations in the reference-gene expression are probably due to RNA concentration errors or inaccurate pipetting, but the ratios should hold true over small changes in input cDNA concentration.

I didn't do a DNase step before my RT, which I'm guessing could be a source of error in my RNA concentration measurements, and that is the root of my problems.
I could go back and redo everything so that my reference genes match perfectly, and then the changes in target expression would be significant, but that seems to defeat the purpose of the reference.

My RNA stocks are, however, finite and very limited, and every manipulation costs me a bit of extra RNA.

Thanks in advance.

-weeiain-

Hi All,

Never got a reply to this one. Is it a crap question or does no-one know? I shall rephrase.....


1. Why can't you use ddCt when the two primer sets have different efficiencies? Can't the efficiency be incorporated somewhere?

or

2. Can the reaction efficiencies be incorporated into the analysis without taking the means of the Ct values? (A sketch of the kind of thing I mean is below.)
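(To make these questions concrete, this is the kind of substitution I have in mind; it is only my own sketch, with invented Ct values, of how each gene's own efficiency might replace the assumed factor of 2 in 2^-ddCt.)

    # Sketch only: classic ddCt assumes both assays double each cycle (factor 2).
    # If each assay has its own amplification factor E, I imagine writing the
    # fold change for a single sample with each gene raised to its own E:
    E_target, E_ref = 1.92, 2.00                             # assumed per-assay efficiencies
    ct = {"target_treated": 22.8, "target_control": 24.1,    # invented Ct values
          "ref_treated": 18.3, "ref_control": 18.0}

    classic_ddct = 2 ** -((ct["target_treated"] - ct["ref_treated"])
                          - (ct["target_control"] - ct["ref_control"]))
    eff_corrected = (E_target ** (ct["target_control"] - ct["target_treated"])) / \
                    (E_ref ** (ct["ref_control"] - ct["ref_treated"]))
    print(classic_ddct, eff_corrected)   # identical when both efficiencies are exactly 2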

Cheers

Iain

-weeiain-

I did not answer because I don't know, at least for the second question.

But I can tell you that ddCt requires approximately equal efficiencies because, for the method to have any accuracy, it assumes that you get double the product for every round of PCR. If your efficiencies are very different, then one product will form more copies per round than the other and you can't compare changes in Ct. And if your efficiencies are very far off from 1 (or 100%), then the equation used to calculate the change in expression no longer holds, because it is based on the assumption that your product doubles each round.
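(A toy illustration of how that compounds: assuming one assay doubles perfectly and another runs at only 80% efficiency, the gap after 25 cycles is already enormous.)

    # Toy numbers only: how a modest efficiency difference compounds over cycles.
    cycles = 25
    perfect = 2.0 ** cycles   # 100% efficient: ~3.4e7 copies per starting template
    slower = 1.8 ** cycles    # 80% efficient:  ~2.4e6 copies
    print(perfect / slower)   # roughly a 14-fold difference from efficiency alone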

At least, this is how I understand it; someone may be able to add more...

-aimikins-

Cheers for the reply, aimikins; I'm amazed you can get any work done with the number of posts you put up.

I get the principle of the ddCt analysis and the assumption that the control and target reactions are working at the same efficiency. But it doesn't seem like a huge leap to be able to incorporate differing efficiencies.

The only published methods I have found which incorporate different efficiencies take the mean Ct of the group first and then apply the efficiency, but this is where you get a big standard deviation if the samples have variable RNA concentrations.

I want to take the control:target ratios, which incorporate the efficiencies, and then take the mean of those.
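(Something like the sketch below, with invented efficiencies and Ct values, is the order of operations I'm proposing; it isn't taken from any published method.)

    # Sketch of the proposal: compute an efficiency-corrected target:reference
    # ratio for EACH sample first, then average the ratios within the group.
    E_target, E_ref = 1.92, 1.97                            # assumed per-assay efficiencies
    samples = [(23.9, 18.1), (24.3, 18.6), (23.6, 17.8)]    # (Ct_target, Ct_ref), made up

    # Relative target quantity per sample, normalised to that sample's own
    # reference-gene signal, so loading differences should cancel sample by sample:
    per_sample = [(E_ref ** ct_ref) / (E_target ** ct_target)
                  for ct_target, ct_ref in samples]
    group_mean = sum(per_sample) / len(per_sample)
    print(per_sample, group_mean)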

Is there anything fundamentally wrong with that?

-weeiain-

I spend too much time on here when I have down time and no papers to catch up on. When you don't see me around much, it usually means I've taken several days to do some literature mining.

So... I am not an expert. However, I think the large SDs are real, because the method incorporates more room for error. I understand that large SDs suck and make the data less usable; you have to do many, many repeats to bring down the SD so that your results can be considered 'real and true'.

I cannot counsel you here, you will have to do what you decide is right, but if I were you I would just straighten out my efficiencies so that it's not an issue and you can better trust your data. It may be as easy as a new set of primers; it may require more extensive optimisation work. However, piling up a bunch of very expensive data that might not be 'real' is an awful lot of work if you run into a brick wall when you try to take the work further.

-aimikins-